(1)

Quantum Computing – Two Applications

Which two?

1. In Communication Complexity: [2].

2. In Cryptography: [1].

(2)

References

[1] Mark Adcock and Richard Cleve, “A quantum Goldreich–Levin theorem with cryptographic applications,” STACS 2002, pp. 323–334, 2002.

[2] Harry Buhrman, Richard Cleve, John Watrous and Ronald de Wolf, “Quantum fingerprinting,” Physical Review Letters, 87(16), 2001.

(3)

Communication Complexity

(4)

Communication Complexity – Model Description

Figure 1: A protocol P for computing f(x, y): Alice has x and sends E(x); Bob has y and sends E(y).

Model Description:

• |x| = |y| = n; E(v): the encoding of v (v = x or y).

• f(x, y): a Boolean predicate of x and y, i.e., f : {0,1}^n × {0,1}^n → {0,1}.

(5)

Communication Complexity – Goal

Goal:

• Design a protocol P such that

  – Pr[P(x, y) = f(x, y)] ≥ 1 − ε (for ε ∈ [0, 1/2)), and

  – the length of E(v) is as small as possible.

(6)

Communication Complexity – Definition

Definition:

• Communication complexity of the protocol P:

  C_P = max_{(x,y)} {|E(x)|, |E(y)|}.

• Communication complexity of f:

  C(f) = min_P C_P.

(7)

SMM (Simultaneous Message Model)

Figure 2: A protocol P for computing f(x, y) in the SMM: Alice has x and sends E(x), Bob has y and sends E(y), both to the Referee R.

• Alice and Bob cannot interact with each other.

• E(x) and E(y) can be sent only to the Referee R.

(8)

The EQ_ε(x, y) Problem

• (We only consider protocols in the SMM hereafter.)

• (We only consider f(x, y) = EQ_ε(x, y) hereafter.)

• Definition of EQ_ε(x, y):

  Pr[EQ_ε(x, y) = 1] = 1, when x = y;
  Pr[EQ_ε(x, y) = 0] ≥ 1 − ε, when x ≠ y.   (1)

• Amazingly, C^SMM(EQ_ε) = Θ(√n)!

(9)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Warmup!

A good code E(v) (Justesen code):

• E : {0,1}^n → {0,1}^{cn} for some constant c > 1.

• d(x, y): the Hamming distance between x and y.

For 0 ≤ ε ≤ 1/2, we have:

  d(E(x), E(y)) = 0, if x = y;
  d(E(x), E(y)) ≥ (1 − ε)cn, if x ≠ y.   (2)

(Compare with (1).)

(10)

Justesen code – construction (1)

Figure 3: Divide v (|v| = n = mℓ) into m pieces v_0, v_1, …, v_{m−1} of equal length ℓ (m ≤ 2^{ℓ−1} is suggested); each field element r is mapped to an ℓ-bit block g(r), where

  g(r) = Σ_{i=0}^{m−1} v_i r^i   (arithmetic in GF(2^ℓ)).   (3)

(11)

Justesen code – construction (2)

Figure: the codeword is laid out as N = 2^ℓ blocks h(0), h(1), …, h(2^ℓ − 1), where h(r) = (g(r), r·g(r)) and each block is 2ℓ bits long.

(12)

Justesen code – construction (3)

• Let h(r) = (g(r), r·g(r)); then

  E(v) ← {h(r)}_{r ∈ GF(2^ℓ)} ← {((3), r·(3))}_{r ∈ GF(2^ℓ)}   (4)

  is a Justesen code of v with |E(v)| = 2ℓ·2^ℓ.

• Analysis of the case m ≤ 2^{ℓ−1}:

  – c = |E(v)|/|v| = 2ℓ·2^ℓ/(mℓ) = 4 (taking m = 2^{ℓ−1}).

  – Hamming distance: at least δ(2^ℓ − m)·2ℓ.

  – Comparing with (2), we get ε ≥ 1 − δ/2, because δ(2^ℓ − m)·2ℓ ≥ 2δmℓ ≥ (1 − ε)cn = 4(1 − ε)mℓ.
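As a concrete illustration of this construction (a sketch of my own, not code from the slides), the following Python snippet works over GF(2^ℓ) for ℓ = 3 with the irreducible polynomial x³ + x + 1, evaluates g(r) = Σ v_i r^i, and emits the blocks h(r) = (g(r), r·g(r)); all parameter choices and helper names here are illustrative assumptions.

L = 3                      # block length l
IRRED = 0b1011             # x^3 + x + 1, used to reduce products

def gf_mul(a, b):
    """Multiply two elements of GF(2^L) represented as integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << L):
            a ^= IRRED
        b >>= 1
    return r

def g(pieces, r):
    """Evaluate g(r) = sum_i v_i * r^i in GF(2^L)."""
    acc, rpow = 0, 1
    for vi in pieces:
        acc ^= gf_mul(vi, rpow)      # addition in GF(2^L) is XOR
        rpow = gf_mul(rpow, r)
    return acc

def encode(v_bits):
    """Encode an n-bit string (n = m*L) into the 2L*2^L-bit codeword E(v)."""
    m = len(v_bits) // L
    pieces = [int(v_bits[i * L:(i + 1) * L], 2) for i in range(m)]
    blocks = []
    for r in range(2 ** L):                      # r ranges over GF(2^L)
        gr = g(pieces, r)
        blocks.append((gr, gf_mul(r, gr)))       # h(r) = (g(r), r*g(r))
    return blocks

print(encode("101100"))    # n = 6, i.e. m = 2 pieces of length 3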

(13)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Step 1

Step 1:

Figure 4: Encode v (|v| = n) by the Justesen code into E(v) (|E(v)| = cn).

(14)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Step 2

Step 2: Rearrange E(x) (|E(x)| = cn) into a √(cn) × √(cn) square.

(15)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Step 3

Step 3 (the figure shows the row E_{i,*}(x) on Alice's side and the column E_{*,j}(y) on Bob's side):

• Alice chooses i ∈ {1, 2, …, √(cn)} and sends the row E_{i,*}(x) to the Referee R.

• Bob likewise chooses j ∈ {1, 2, …, √(cn)} and sends the column E_{*,j}(y) to R.

(16)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Step 4

Step 4: The Referee R checks whether E_{i,j}(x) = E_{i,j}(y), i.e., compares the entry where Alice's row E_{i,*}(x) and Bob's column E_{*,j}(y) intersect.

(17)

Protocol s.t. C^SMM(EQ_ε) = O(√n) – Analysis

Analysis:

• x = y: E_{i,j}(x) = E_{i,j}(y) always.

• x ≠ y: Pr[E_{i,j}(x) ≠ E_{i,j}(y)] ≥ 1 − ε, because d(E(x), E(y)) ≥ (1 − ε)cn and (i, j) is a uniformly random position of the codeword.
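A minimal Python simulation of this SMM protocol (my own sketch, not from the slides): it uses a simple repetition code in place of a real Justesen code, so the distance guarantee is only illustrative, and the referee checks the single entry where Alice's random row meets Bob's random column.

import random

def encode(v_bits, c=4):
    """Toy stand-in for a good code: repeat each bit c times (illustrative only)."""
    return [b for b in v_bits for _ in range(c)]

def smm_equality(x_bits, y_bits, c=4):
    """One round of the SMM protocol: referee compares E_{i,j}(x) with E_{i,j}(y)."""
    ex, ey = encode(x_bits, c), encode(y_bits, c)
    side = int(len(ex) ** 0.5)              # arrange the cn bits into a side x side square
    i = random.randrange(side)              # Alice picks a random row index
    j = random.randrange(side)              # Bob picks a random column index
    row = ex[i * side:(i + 1) * side]       # Alice sends this row (sqrt(cn) bits)
    col = [ey[k * side + j] for k in range(side)]   # Bob sends this column
    return row[j] == col[i]                 # referee checks the crossing entry

x = [1, 0, 1, 1] * 4
y = [1, 0, 1, 0] * 4
print(sum(smm_equality(x, x) for _ in range(1000)))  # always agrees when x == y
print(sum(smm_equality(x, y) for _ in range(1000)))  # flags a mismatch with prob. d(E(x),E(y))/(cn) when x != y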

(18)

The EQ_ε(x, y) Problem in the Quantum World M

Idea. Recall the encoding of v by the Justesen code:

Figure: v (|v| = n) is encoded into E(v) with positions 1, 2, …, cn, and we form a superposition over these cn positions.

(19)

Encode v in M (1)

Idea. Let x be encoded as |x⟩, and y as |y⟩ (in M); Alice sends |x⟩ and Bob sends |y⟩ to the Referee R.

Find a way of encoding such that

  |⟨x|y⟩| = 1, when x = y;
  |⟨x|y⟩| ≤ ε, when x ≠ y.

(20)

Encode v in M (2)

Let m = cn = |E(v)|. Encode x into

  |x⟩ = Σ_{i=0}^{m−1} (1/√m) |i⟩ ⊗ |E_i(x)⟩,

and y into

  |y⟩ = Σ_{i=0}^{m−1} (1/√m) |i⟩ ⊗ |E_i(y)⟩.

Then

  ⟨x|y⟩ = (1/m) Σ_{i=0}^{m−1} ⟨E_i(x)|E_i(y)⟩.

(21)

Encode v in M (3)

• Here, dim(|i⟩) = m and dim(|E_i(v)⟩) = 2.

• It is easy to verify that when x ≠ y,

  ⟨x|y⟩ = (1/m) Σ_{i=0}^{m−1} ⟨E_i(x)|E_i(y)⟩ ≤ (1/m)·εm = ε,

  because d(E(x), E(y)) ≥ (1 − ε)m, so at most εm positions agree.

• What should the Referee R do then?
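A small numpy sketch of these fingerprint states (my own illustration, again with a toy repetition code standing in for Justesen): it builds |x⟩ and |y⟩ as explicit vectors and checks that ⟨x|y⟩ equals the fraction of positions where E(x) and E(y) agree.

import numpy as np

def fingerprint(codeword):
    """Build |v> = (1/sqrt(m)) sum_i |i> (x) |E_i(v)> as a length-2m state vector."""
    m = len(codeword)
    state = np.zeros(2 * m)
    for i, bit in enumerate(codeword):
        state[2 * i + bit] = 1.0 / np.sqrt(m)   # amplitude on |i>|E_i(v)>
    return state

def encode(bits, c=4):
    """Toy stand-in for the Justesen code: c-fold repetition (illustrative only)."""
    return [b for b in bits for _ in range(c)]

ex, ey = encode([1, 0, 1, 1]), encode([1, 0, 0, 1])
fx, fy = fingerprint(ex), fingerprint(ey)

agree = sum(a == b for a, b in zip(ex, ey)) / len(ex)
print(np.dot(fx, fy), agree)   # <x|y> equals the fraction of agreeing positions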

(22)

Referee’s Circuit

Figure 5: A circuit for testing whether |x⟩ = |y⟩ or |⟨x|y⟩| ≤ ε: a control qubit |0⟩ goes through H, controls a c-SWAP of |x⟩ and |y⟩, and goes through H again before being measured.

(23)

What is H? (1)

  H|0⟩ = (1/√2)|0⟩ + (1/√2)|1⟩.

(24)

What is H? (2)

  H|1⟩ = (1/√2)|0⟩ − (1/√2)|1⟩.

(25)

What is c-SWAP? (1)

  When the control is c = |0⟩, c-SWAP leaves the registers unchanged: |x⟩|y⟩ → |x⟩|y⟩.

(26)

What is c-SWAP? (2)

  When the control is |1⟩, c-SWAP swaps the two registers: |x⟩|y⟩ → |y⟩|x⟩.

(27)

Stage 1

After the first H on the control qubit and the c-SWAP of Figure 5,

  |0⟩ ⊗ |x⟩ ⊗ |y⟩ → (1/√2)|0⟩ ⊗ |x⟩ ⊗ |y⟩ + (1/√2)|1⟩ ⊗ |y⟩ ⊗ |x⟩.   (5)

(28)

Stage 2

After the second H on the control qubit,

  (5) → (1/2)(|0⟩ + |1⟩) ⊗ |x⟩ ⊗ |y⟩ + (1/2)(|0⟩ − |1⟩) ⊗ |y⟩ ⊗ |x⟩
      = (1/2)|0⟩ ⊗ (|x⟩ ⊗ |y⟩ + |y⟩ ⊗ |x⟩) + (1/2)|1⟩ ⊗ (|x⟩ ⊗ |y⟩ − |y⟩ ⊗ |x⟩)
      =: (2)

(29)

Stage 3

Referee R interprets outcome |0⟩ as x = y and |1⟩ as x ≠ y.

Apply the projection P_{|0⟩} to (2) = (1/2)|0⟩ ⊗ (|x⟩ ⊗ |y⟩ + |y⟩ ⊗ |x⟩) + (1/2)|1⟩ ⊗ (|x⟩ ⊗ |y⟩ − |y⟩ ⊗ |x⟩); then

  P_{|0⟩}(2) = |0⟩ ⊗ (1/2)(|x⟩ ⊗ |y⟩ + |y⟩ ⊗ |x⟩),

whose squared norm, i.e., the probability of measuring |0⟩, is

  (1/2)(⟨x| ⊗ ⟨y| + ⟨y| ⊗ ⟨x|) · (1/2)(|x⟩ ⊗ |y⟩ + |y⟩ ⊗ |x⟩) = (1/2)(1 + |⟨x|y⟩|²).

(30)

Stage 3 (Cont.)

Thus,

  (1/2)(1 + |⟨x|y⟩|²) = 1, if x = y;
  (1/2)(1 + |⟨x|y⟩|²) ≤ (1/2)(1 + ε²), if x ≠ y.   (6)
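A short numpy check of (6) (an illustrative sketch, not the slides' code): it applies the Hadamard–c-SWAP–Hadamard circuit of Figure 5 to explicit state vectors and reads off the probability of measuring |0⟩ on the control qubit.

import numpy as np

def swap_test_prob0(x, y):
    """Return Pr[control measures 0] for the circuit H, c-SWAP, H of Figure 5."""
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    d = len(x)
    state = np.kron(np.array([1.0, 0.0]), np.kron(x, y))        # |0>|x>|y>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H_ctrl = np.kron(H, np.eye(d * d))
    # controlled-SWAP: swap the two d-dimensional registers when the control is |1>
    swap = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            swap[i * d + j, j * d + i] = 1
    cswap = np.block([[np.eye(d * d), np.zeros((d * d, d * d))],
                      [np.zeros((d * d, d * d)), swap]])
    out = H_ctrl @ cswap @ H_ctrl @ state
    return np.sum(out[:d * d] ** 2)          # weight of the |0> branch of the control

x = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([0.8, 0.6, 0.0, 0.0])
print(swap_test_prob0(x, y), 0.5 * (1 + np.dot(x, y) ** 2))     # the two numbers should match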

(31)

EQ_ε(x, y) Protocol in M – Analysis

Figure 6: What is sent by Bob – classical vs. quantum.

(32)

EQ_ε(x, y) Protocol in M – Analysis

Comparison:

• Classically, Bob sends j and the column E_{*,j}(y): about lg n + √(cn) bits (Θ(√n) de facto).

• Quantumly, Bob sends |y⟩: O(lg n) qubits.

(33)

Reduce error

• Can we reduce the one-sided error (1/2)(1 + ε²)?

  – Naively, repeating the protocol k times gives an error bound ((1 + ε²)/2)^k.

• Moreover, it can be reduced to √(πk)·((1 + ε)/2)^{2k}.

• But it cannot be made less than (1/4)·((1 + ε)/2)^{2k}.

(34)

Reduce to √(πk)·((1+ε)/2)^{2k} (0)

Idea:

• Known fact (when x ≠ y):

  ⟨x|y⟩ ≤ ε.   (7)

• Duplicate |x⟩ and |y⟩ k times each, so that

  |X⟩ = |x⟩^{⊗k} and |Y⟩ = |y⟩^{⊗k}.

(35)

Reduce to √(πk)·((1+ε)/2)^{2k} (1)

Prepare two kinds of quantum registers:

• Permutation register |P⟩.

• Data register |D⟩ = |XY⟩.

(36)

Reduce to √(πk)·((1+ε)/2)^{2k} (2)

Permutation register |s⟩:

• Indexed by the permutation group S_{2k}: each s labels a permutation σ_s ∈ S_{2k}. (Note: s = 0 is the index of the identity permutation.)

• Define C = |S_{2k}|.

• Initially, we prepare |s⟩ = |0⟩^{(C)} (a C-dimensional register in its first basis state).

(37)

Reduce to √(πk)·((1+ε)/2)^{2k} (3)

Figure 7: The algorithm for reducing the error to √(πk)·((1+ε)/2)^{2k}: the permutation register |P⟩ = |0⟩^{(C)} goes through H, controls a PERM operation on the data register |D⟩ = |XY⟩ = |x⟩^{⊗k}|y⟩^{⊗k}, and goes through H again.

(38)

Reduce to √(πk)·((1+ε)/2)^{2k} (4)

Figure 8: After the first H, |P⟩ = (1/√C) Σ_{s=0}^{C−1} |s⟩: all possible permutations are generated uniformly.

(39)

Reduce to √(πk)·((1+ε)/2)^{2k} (5)

(The same circuit as Figure 7; now the controlled PERM is applied to |D⟩.)

(40)

After the controlled PERM,

  |P⟩ ⊗ |D⟩ = (1/√C) Σ_{s=0}^{C−1} |s⟩ ⊗ σ_s(|D⟩)
            = (1/√C) Σ_{s=0}^{C−1} |s⟩ ⊗ |σ_s(D)⟩.   (8)

(41)

Reduce to √(πk)·((1+ε)/2)^{2k} (6)

Figure 9: We only care whether |P⟩ = |0⟩^{(C)}, so after the second H we measure the permutation register |P⟩.

(42)

The |P⟩ = |0⟩^{(C)} component after the final Hadamard:

  (H^{(C)} ⊗ I)(8) = (1/√C) Σ_{s=0}^{C−1} (H^{(C)}|s⟩) ⊗ |σ_s(D)⟩   (9)

  ⟨0|^{(C)}(9) = (1/√C) Σ_{s=0}^{C−1} ⟨0|^{(C)} H^{(C)} |s⟩ · |σ_s(D)⟩
              = (1/√C) Σ_{s=0}^{C−1} ((1/√C) Σ_{t=0}^{C−1} ⟨t|) |s⟩ · |σ_s(D)⟩
              = (1/C) Σ_{s=0}^{C−1} |σ_s(D)⟩.   (10)

(43)

Reduce to √(πk)·((1+ε)/2)^{2k} (7)

The probability that we measure |P⟩ = |0⟩^{(C)} is

  (10)†(10) = ((1/C) Σ_{t=0}^{C−1} ⟨σ_t(D)|) · ((1/C) Σ_{s=0}^{C−1} |σ_s(D)⟩)
            = (1/C²) Σ_{t=0}^{C−1} Σ_{s=0}^{C−1} ⟨σ_t(D)|σ_s(D)⟩
            = (1/C²) Σ_{t=0}^{C−1} Σ_{s=0}^{C−1} ⟨D| σ_t^{−1}σ_s |D⟩
            = (1/C²) · C · Σ_{s=0}^{C−1} ⟨D| σ_s(|D⟩)      (for each fixed t, σ_t^{−1}σ_s runs over all of S_{2k})
            = (1/C) Σ_{s=0}^{C−1} ⟨D| σ_s(|D⟩)
            = (1/C) Σ_{s=0}^{C−1} ⟨x|^{⊗k} ⟨y|^{⊗k} σ_s(|x⟩^{⊗k} |y⟩^{⊗k}).   (11)

(44)

Reduce to √(πk)·((1+ε)/2)^{2k} (8)

Because ⟨x|y⟩ ≤ ε and C = |S_{2k}| = (2k)!, we have

  (11) = (1/C) Σ_{s=0}^{C−1} ⟨x|^{⊗k} ⟨y|^{⊗k} σ_s(|x⟩^{⊗k} |y⟩^{⊗k})
       ≤ ((k!)²/(2k)!) · Σ_{i=0}^{k} (C(k,i)·ε^i)²
       ≤ ((k!)²/(2k)!) · (1 + ε)^{2k}
       ≤ √(πk)·((1 + ε)/2)^{2k}.   (12)
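A quick numeric comparison (my own sketch) of the naive k-fold repetition bound ((1+ε²)/2)^k, the permutation-test bound ((k!)²/(2k)!)·Σ_i (C(k,i)ε^i)² from (12), and the closed form √(πk)·((1+ε)/2)^{2k}, which approximates the previous expression:

from math import factorial, comb, pi, sqrt

def naive_bound(eps, k):
    """Error bound from simply repeating the swap test k times."""
    return ((1 + eps ** 2) / 2) ** k

def perm_test_bound(eps, k):
    """Bound (12): ((k!)^2 / (2k)!) * sum_i (C(k,i) eps^i)^2."""
    s = sum((comb(k, i) * eps ** i) ** 2 for i in range(k + 1))
    return factorial(k) ** 2 / factorial(2 * k) * s

def approx_bound(eps, k):
    """The closed-form approximation sqrt(pi k) * ((1+eps)/2)^(2k)."""
    return sqrt(pi * k) * ((1 + eps) / 2) ** (2 * k)

for k in (2, 5, 10, 20):
    print(k, naive_bound(0.1, k), perm_test_bound(0.1, k), approx_bound(0.1, k))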

(45)

Cannot be smaller than (1/4)·((1+ε)/2)^{2k} (1)

Extremal case:

• |φ⟩ = |x₁⟩^{⊗k} |y₁⟩^{⊗k} and |ψ⟩ = |x₂⟩^{⊗k} |y₂⟩^{⊗k}.

• Set cos θ = ⟨x₂|y₂⟩ = ε, with |x₁⟩ = |0⟩, |y₁⟩ = |0⟩;
  |x₂⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩,
  |y₂⟩ = cos(θ/2)|0⟩ − sin(θ/2)|1⟩.

• ⟨φ|ψ⟩ = cos^{2k}(θ/2) = ((1 + cos θ)/2)^k = ((1 + ε)/2)^k =: cos β.

(46)

Cannot be smaller than (1/4)·((1+ε)/2)^{2k} (2)

Figure 10: The indistinguishable case for |φ⟩ and |ψ⟩: they are separated by angle β (β/2 on each side of the bisector), while the |yes⟩ and |no⟩ measurement directions lie at angle π/4 from the bisector.

(47)

Cannot be smaller than (1/4)·((1+ε)/2)^{2k} (3)

• |yes⟩: |φ⟩ and |ψ⟩ are the same.
  |no⟩: |φ⟩ and |ψ⟩ are different.

  Pr[answer yes when different] + Pr[answer no when the same]
  = (1/2) sin²(π/4 − β/2) + (1/2) sin²(π/4 − β/2)
  = (1 − sin β)/2
  ≥ (1/4) cos² β = (1/4)·((1 + ε)/2)^{2k}.   (13)

(48)

Cryptography

(49)

Goldreich–Levin Theorem

• OWF: one-way function f : {0,1}^n → {0,1}^n.

• HCP: hard-core predicate h : {0,1}^n → {0,1}.

• Predicting an HCP is as hard as inverting an OWF.

• We only care about the efficiency of the reduction from OWF to HCP.

(50)

Main Results

The efficiency of the reduction:

• Classical world: Ω(δn/ε²) queries.

• Quantum world: O(1/ε) queries.

Modified Reduction/Problem:

• An EQ query corresponds to computing (b, x) = (f(a), x).

• An IP query corresponds to computing h(a, x) = a · x.

(51)

The Problem

• Input: a ∈ {0,1}^n (given, but kept confidential inside a black box).

• Output: a (retrieve it from the black box!).

• Allowed operations: black-box queries only.

• Goal: determine a with a minimum number of black-box queries.

(52)

Classical black boxes

1. IP, for a set S (⊆ {0,1}^n) which satisfies |S| ≥ (0.5 + ε)2^n:

   IP(x) = a · x,        if x ∈ S;
   IP(x) = 1 ⊕ (a · x),  if x ∉ S.

   In other words, Pr_{x ∈ {0,1}^n}[IP(x) = a · x] ≥ 0.5 + ε.

2. EQ:

   EQ(x) = 1, if x = a;
   EQ(x) = 0, if x ≠ a.
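A minimal Python sketch of these two classical black boxes (my own illustration): the IP oracle answers a·x correctly on a random set S of density at least 0.5 + ε and flips the answer elsewhere, and the EQ oracle simply tests equality with the hidden a.

import random

def inner_product(a, x):
    """a . x mod 2 for two equal-length bit tuples."""
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def make_oracles(a, eps, rng=random.Random(0)):
    """Return (IP, EQ) black boxes hiding the string a (a sketch, not the slides' code)."""
    n = len(a)
    universe = [tuple((v >> i) & 1 for i in range(n)) for v in range(2 ** n)]
    # S: the inputs on which IP answers correctly, of size >= (0.5 + eps) * 2^n.
    S = set(rng.sample(universe, int((0.5 + eps) * len(universe)) + 1))

    def IP(x):
        b = inner_product(a, x)
        return b if x in S else 1 - b      # correct on S, flipped outside S

    def EQ(x):
        return 1 if x == a else 0

    return IP, EQ

a = (1, 0, 1, 1)
IP, EQ = make_oracles(a, eps=0.1)
trials = [tuple(random.randrange(2) for _ in range(4)) for _ in range(1000)]
print(sum(IP(x) == inner_product(a, x) for x in trials) / 1000)   # agreement rate >= ~0.5 + eps
print(EQ(a), EQ((0, 0, 0, 0)))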

(53)

Classical Theorem

Given

• a success probability δ (> 0), and

• ε ≥ √n · 2^{−n/3},

determining a requires

• at least 2^{n/2} EQ queries, or

• Ω(δn/ε²) IP queries.

(54)

From randomized to deterministic

• Let

  – I: the set of all possible inputs; p: a chosen distribution over all possible algorithms; R_ε: a randomized algorithm with error probability ε.

  – A: the set of all possible algorithms; q: a chosen distribution over all possible inputs; D: a deterministic algorithm with error probability 2ε.

Then we have

  2 · max_{i ∈ I} E_p[R_ε] ≥ min_{D ∈ A} E_q[D].   (14)

(55)

From randomized to deterministic

• A lower bound for deterministic algorithms on inputs that may contain errors yields a lower bound for the corresponding randomized algorithms.

• That is why IP is defined so that its answer may be wrong on some strings (those outside S).

(56)

Classical black box algorithm

• First, make m IP queries.

• Then make 2^{n/2} EQ queries.

• Analyze the conditional entropy H(A | Y¹, …, Y^m) of a:

  – Lower bound: determined by the IP queries.

  – Upper bound: determined by the EQ queries.

• Estimate m from these two bounds on the conditional entropy of a.

(57)

H(A | Y¹, …, Y^{m−1}, Y^m)

H(A | Y¹, …, Y^{m−1}, Y^m):

• measures how much uncertainty about the input a ∈ {0,1}^n (which corresponds to the random variable A) remains after applying the m queries.

• Y^i: the {0,1}-valued random variable corresponding to the output of the i-th IP query.

(58)

Conditional and Joint Entropy

• Let X and Y be two random variables.

• Conditional entropy:

  H(X|Y) = − Σ_{y∈Y} Pr[y] Σ_{x∈X} Pr[x|y] lg(Pr[x|y])   (15)
         = H(X, Y) − H(Y).   (16)

• Joint entropy:

  H(X, Y) = − Σ_{y∈Y} Σ_{x∈X} Pr[x, y] lg(Pr[x, y])   (17)
          = H(X) + H(Y|X).   (18)
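A small Python check of identity (16) on an arbitrary joint distribution (my own sketch): it computes H(X|Y) directly from definition (15) and compares it with H(X, Y) − H(Y).

from math import log2

# An arbitrary joint distribution Pr[x, y] over X = {0,1}, Y = {0,1}.
P = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.40}

def H(dist):
    """Shannon entropy (in bits) of a distribution {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(P, axis):
    out = {}
    for xy, p in P.items():
        out[xy[axis]] = out.get(xy[axis], 0.0) + p
    return out

def H_cond_def(P):
    """H(X|Y) computed directly from definition (15)."""
    Py = marginal(P, 1)
    total = 0.0
    for (x, y), p in P.items():
        if p > 0:
            total -= Py[y] * (p / Py[y]) * log2(p / Py[y])
    return total

print(H_cond_def(P), H(P) - H(marginal(P, 1)))   # the two values agree, verifying (16)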

(59)

Compute H(A | Y¹, …, Y^{m−1}, Y^m)

Let Y^{m−1} denote {Y¹, …, Y^{m−1}}. Then

  H(A | Y¹, …, Y^{m−1}, Y^m)
  = H(A | Y^{m−1}, Y^m)
  = H(A, Y^{m−1}, Y^m) − H(Y^{m−1}, Y^m)
  = [H(Y^m | A, Y^{m−1}) + H(A, Y^{m−1})] − [H(Y^m | Y^{m−1}) + H(Y^{m−1})]
  = H(A | Y^{m−1}) + H(Y^m | A, Y^{m−1}) − H(Y^m | Y^{m−1}).   (19)

(60)

Compute H(A | Y¹, …, Y^{m−1}, Y^m)

Thus (19) can be unrolled as follows:

  H(A | Y¹, …, Y^m) = H(A | Y¹, …, Y^{m−1}) + H(Y^m | A, Y¹, …, Y^{m−1}) − H(Y^m | Y¹, …, Y^{m−1})
  H(A | Y¹, …, Y^{m−1}) = H(A | Y¹, …, Y^{m−2}) + H(Y^{m−1} | A, Y¹, …, Y^{m−2}) − H(Y^{m−1} | Y¹, …, Y^{m−2})
  ⋮
  H(A | Y¹, Y²) = H(A | Y¹) + H(Y² | A, Y¹) − H(Y² | Y¹)
  H(A | Y¹) = H(A) + H(Y¹ | A) − H(Y¹)

(61)

Compute H(A | Y¹, …, Y^{m−1}, Y^m)

Recursively plugging the above equations into (19), we have

  H(A | Y¹, …, Y^m) = H(A) + Σ_{i=1}^{m} H(Y^i | A, Y¹, …, Y^{i−1}) − Σ_{i=1}^{m} H(Y^i | Y¹, …, Y^{i−1})
                    = (X) + (Y) − (Z).   (20)

We will analyze the above terms.

(62)

Analyze (X)

Because A is a random variable (corresponding to the input a of our algorithm) chosen uniformly from {0,1}^n, it is immediate that

  (X) = H(A) = − Σ_{a∈{0,1}^n} Pr[a] lg(Pr[a]) = −2^n · (1/2^n) · lg(1/2^n) = n.   (21)

(63)

Analyze (Y): algorithm IPQuery

IPQuery(m)
  U ← {0,1}^n
  S ← ∅, S̄ ← ∅
  j ← 0
  for i ← 1 to m
    do pick x ∈_R U
       with probability ((0.5 + ε)2^n − j) / (2^n − (i − 1))
         do S ← S ∪ {x}
            j ← j + 1
         else S̄ ← S̄ ∪ {x}
       U ← U \ {x}
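A runnable Python version of this sampling process (my own sketch of the pseudocode above): it draws query points and marks each as a "success" with the stated probability, so the success set ends up with density about 0.5 + ε.

import random

def ip_query_process(n, eps, m, rng=random.Random(1)):
    """Simulate IPQuery(m): classify m random query points as success/fail."""
    N = 2 ** n
    universe = list(range(N))
    rng.shuffle(universe)
    S, S_bar, j = [], [], 0
    for i in range(1, m + 1):
        x = universe.pop()                          # x chosen at random from U, then removed
        p = ((0.5 + eps) * N - j) / (N - (i - 1))   # probability of landing in the success set
        if rng.random() < p:
            S.append(x)
            j += 1
        else:
            S_bar.append(x)
    return S, S_bar

S, S_bar = ip_query_process(n=10, eps=0.05, m=500)
print(len(S) / (len(S) + len(S_bar)))   # roughly 0.5 + eps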

(64)

Analyze (Y)

• S can be regarded as the success set {x | IP(x) = a · x} and S̄ as the fail set {x | IP(x) = 1 ⊕ (a · x)}.

• Let p_i be the probability that x is put into the success set at the i-th query; then

  0.5 − 2ε ≤ ((0.5 + ε)2^n − (i − 1)) / (2^n − (i − 1)) ≤ p_i ≤ (0.5 + ε)2^n / (2^n − (i − 1)) ≤ 0.5 + 2ε.   (22)

(65)

Analyze (Y)

Thus, the entropy of the output of the i-th query (when a and the outputs of the previous queries are known) has a lower bound determined by (22), because the binary entropy H(p) is concave on p ∈ [0, 1] with its maximum at p = 0.5.

(66)

Analyze (Y)

Figure 11: The binary entropy H(p) is concave on p ∈ [0, 1]; on the interval [0.5 − 2ε, 0.5 + 2ε] it is minimized at the endpoints.

(67)

Analyze (Y)

That is,

  H(Y^i | A, Y¹, …, Y^{i−1})
  ≥ H(0.5 − 2ε) (≡ H(0.5 + 2ε))
  = −(0.5 − 2ε) lg(0.5 − 2ε) − (0.5 + 2ε) lg(0.5 + 2ε)
  ≥ 1 − (16/ln 2)·ε²   (Taylor expansion),

and therefore

  (Y) = Σ_{i=1}^{m} H(Y^i | A, Y¹, …, Y^{i−1}) ≥ (1 − (16/ln 2)·ε²)·m.   (23)
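A quick numeric sanity check of that Taylor-expansion bound (my own sketch): it compares H(0.5 + 2ε) with 1 − (16/ln 2)·ε² over a range of small ε.

from math import log, log2

def H(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

for eps in (0.01, 0.02, 0.05, 0.1):
    lower = 1 - (16 / log(2)) * eps ** 2
    print(eps, H(0.5 + 2 * eps), lower, H(0.5 + 2 * eps) >= lower)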

(68)

Analyze (Z)

Because Y^i is a {0,1}-valued random variable (corresponding to the output y^i of the i-th query) and the entropy of a single bit is at most 1, we have

  H(Y^i | Y¹, …, Y^{i−1}) ≤ 1
  ⟹ (Z) = Σ_{i=1}^{m} H(Y^i | Y¹, …, Y^{i−1}) ≤ m.   (24)

(69)

Lower bound of H(A | Y¹, …, Y^{m−1}, Y^m)

Substituting (21), (23) and (24) into (20), we have

  H(A | Y¹, …, Y^m) = (X) + (Y) − (Z)
                    ≥ n + (1 − (16/ln 2)·ε²)·m − m
                    = n − (16/ln 2)·ε²·m.   (25)

(70)

Two tuned parameters

• The number of EQ queries: 2^{n/2}.

• The upper bound of ε: δ√n · 2^{−n/3}.

(71)

Upper bound of H(A | Y¹, …, Y^{m−1}, Y^m)

The maximum entropy, for a fixed success probability δ (> 0), is achieved when:

• 2^{n/2} elements each have EQUAL probability δ/2^{n/2};

• the remaining 2^n − 2^{n/2} elements each have EQUAL probability (1 − δ)/(2^n − 2^{n/2}).

(72)

Therefore,

  H(A | Y¹, …, Y^{m−1}, Y^m)
  ≤ H( δ/2^{n/2}, …, δ/2^{n/2} [2^{n/2} copies], (1−δ)/(2^n − 2^{n/2}), …, (1−δ)/(2^n − 2^{n/2}) [2^n − 2^{n/2} copies] )
  = δ·lg(2^{n/2}) + H(δ) + (1 − δ)·lg(2^n − 2^{n/2})
  < δn/2 + 1 + (1 − δ)n = n − δn/2 + 1.   (26)
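A small numeric check of (26) for modest (even) n and a few values of δ (my own sketch):

from math import log2

def max_entropy(n, delta):
    """Entropy of the extremal distribution used in (26); n assumed even."""
    heavy, light = 2 ** (n // 2), 2 ** n - 2 ** (n // 2)
    return (heavy * (delta / heavy) * log2(heavy / delta)
            + light * ((1 - delta) / light) * log2(light / (1 - delta)))

for n, delta in [(10, 0.3), (20, 0.1), (30, 0.5)]:
    print(n, delta, max_entropy(n, delta), n - delta * n / 2 + 1)   # left value < right value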

(73)

Estimate m: the number of queries to IP

Combining (25) with (26), we have

  n − (16/ln 2)·ε²·m ≤ H(A | Y¹, …, Y^{m−1}, Y^m) < n − δn/2 + 1.

Finally,

  m > (δn − 2)·ln 2 / (32·ε²) ∈ Ω(δn/ε²).   (27)

(74)

The Problem in the quantum model

• Input: a ∈ {0,1}^n (given, but kept confidential inside a black box).

• Output: a (retrieve it from the black box!).

• Allowed operations: quantum black-box queries only.

• Goal: determine a with a minimum number of quantum black-box queries.

(75)

Quantum black boxes

• U_IP:

  U_IP |x⟩ |0^m⟩ |o⟩ = |x⟩ (α_x |v_x⟩ |a·x⟩ + β_x |w_x⟩ |1 ⊕ a·x⟩) |o⟩,

  where |x⟩ consists of n qubits, |0^m⟩ of m qubits, |o⟩ of 1 qubit, and

  (1/2^n) Σ_{x∈{0,1}^n} α_x² ≥ 1/2 + ε,   (1/2^n) Σ_{x∈{0,1}^n} β_x² ≤ 1/2 − ε.

(76)

• U_EQ:

  U_EQ |x⟩ |0^{m−1}⟩ |b⟩ |o⟩ = |x⟩ |0^{m−1}⟩ |1 ⊕ b⟩ |o⟩, if x = a;
  U_EQ |x⟩ |0^{m−1}⟩ |b⟩ |o⟩ = |x⟩ |0^{m−1}⟩ |b⟩ |o⟩,     if x ≠ a,

  where |b⟩ is a single qubit.

(77)

What is U_EQ?

For x, a ∈ {0,1}^n and b ∈ {0,1}:

• If |a⟩|0⟩ is written as the 2^{n+1}-dimensional standard column vector e⃗_K, then U_EQ (restricted to the registers |x⟩|b⟩) can be represented as the following 2^{n+1} × 2^{n+1} matrix, where the first 0 of the framed 2×2 block is located at position (K, K):

(78)

  the identity matrix, except that rows/columns K and K+1 carry the block
  [ 0 1 ]
  [ 1 0 ].
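As an illustration (my own sketch, ignoring the workspace register and the |o⟩ qubit), the following numpy snippet builds this matrix for a small n and checks that it is self-inverse and swaps exactly the basis states |a⟩|0⟩ and |a⟩|1⟩:

import numpy as np

def U_EQ(a_bits):
    """Matrix of U_EQ on n+1 qubits: flip the last qubit b exactly when x = a."""
    n = len(a_bits)
    U = np.eye(2 ** (n + 1))
    K = int("".join(map(str, a_bits)) + "0", 2)   # index of the basis state |a>|0>
    # Insert the 2x2 block [[0,1],[1,0]] at rows/columns K, K+1.
    U[K, K] = U[K + 1, K + 1] = 0
    U[K, K + 1] = U[K + 1, K] = 1
    return U

U = U_EQ([1, 0, 1])
print(np.allclose(U @ U, np.eye(U.shape[0])))   # U_EQ is its own inverse
print(U[int("1010", 2), int("1011", 2)])        # |a>|0> <-> |a>|1> are swapped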

(79)

The circuit C

Figure 12: The circuit C, built from Hadamard gates, two U_IP boxes, and X and Z gates on the answer qubit, acting on n + m + 1 qubits (an n-qubit register, an m-qubit register, and one extra qubit).

(80)

goal

• Circuit input: |0^n, 0^m, 0⟩.

• Ideal output: |a, 0^m, 1⟩; actual output: C|0^n, 0^m, 0⟩.

• Prove that

  ⟨a, 0^m, 1| · C |0^n, 0^m, 0⟩ ≥ 2ε, or equivalently |⟨a, 0^m, 1| · C |0^n, 0^m, 0⟩|² ≥ 4ε².

• Thus, repeating the quantum algorithm (i.e., feeding |0^n, 0^m, 0⟩ into the circuit C) O(1/ε²) times, the input a can be found w.h.p.

(81)

Decompose C

Figure: The circuit C of Figure 12 decomposed into five stages C1, C2, C3, C4, C5 (applied in that order), so that C = C5 C4 C3 C2 C1.

(82)

goal in detail

  ⟨a, 0^m, 1| · C |0^n, 0^m, 0⟩
  = ⟨a, 0^m, 1| · C5 C4 C3 C2 C1 |0^n, 0^m, 0⟩
  = (C4^{−1} C5^{−1} |a, 0^m, 1⟩)^† · (C3 C2 C1 |0^n, 0^m, 0⟩)
  = (A) · (B) ≥ 2ε,   (28)

where (A) := C4^{−1} C5^{−1} |a, 0^m, 1⟩, (B) := C3 C2 C1 |0^n, 0^m, 0⟩, and (A) · (B) denotes their inner product.

(83)

Compute (B): stage C1

Figure: the circuit C with stage C1 highlighted, applied to the input |0^n⟩|0^m⟩|0⟩.

(84)

  C1 |0^n⟩ |0^m⟩ |0⟩ = (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ |0^m⟩ |1⟩.   (29)

(85)

Compute (B): stage C2

Figure: the circuit C with stage C2 highlighted.

(86)

  C2(29) = C2 (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ |0^m⟩ |1⟩
         = (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ (α_x |v_x⟩ |a·x⟩ + β_x |w_x⟩ |1 ⊕ a·x⟩) |1⟩.   (30)

(87)

Compute (B): stage C3

Figure: the circuit C with stage C3 highlighted.

(88)

  C3(30) = C3 (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ (α_x |v_x⟩ |a·x⟩ + β_x |w_x⟩ |1 ⊕ a·x⟩) |1⟩
         = (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ (α_x (−1)^{a·x} |v_x⟩ |a·x⟩) |1⟩
           + (1/√2^n) Σ_{x∈{0,1}^n} |x⟩ (β_x (−1)^{1 ⊕ a·x} |w_x⟩ |1 ⊕ a·x⟩) |1⟩
         = (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ (α_x |v_x⟩ |a·x⟩) |1⟩
           − (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ (β_x |w_x⟩ |1 ⊕ a·x⟩) |1⟩
         = (B).   (31)

(89)

Compute (A): stage C5

Figure: the circuit C with stage C5 highlighted; here the state |a⟩|0^m⟩|1⟩ is fed backwards through C5^{−1}.

(90)

  C5^{−1} |a⟩ |0^m⟩ |1⟩ = (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ |0^m⟩ |1⟩.   (32)

(91)

Compute (A): stage C4

Figure: the circuit C with stage C4 highlighted.

(92)

  C4^{−1}(32) = C4^{−1} (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ |0^m⟩ |1⟩
              = (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ (α_x |v_x⟩ |a·x⟩) |1⟩
                + (1/√2^n) Σ_{x∈{0,1}^n} (−1)^{a·x} |x⟩ (β_x |w_x⟩ |1 ⊕ a·x⟩) |1⟩
              = (A).   (33)

(93)

Compute (A) · (B): warmup!

  (A) = (1/√2^n) Σ_{x∈{0,1}^n} α_x ((−1)^{a·x} |x⟩ |v_x⟩ |a·x⟩ |1⟩)
        + (1/√2^n) Σ_{x∈{0,1}^n} β_x ((−1)^{a·x} |x⟩ |w_x⟩ |1 ⊕ a·x⟩ |1⟩)

  (B) = (1/√2^n) Σ_{x∈{0,1}^n} α_x ((−1)^{a·x} |x⟩ |v_x⟩ |a·x⟩ |1⟩)
        − (1/√2^n) Σ_{x∈{0,1}^n} β_x ((−1)^{a·x} |x⟩ |w_x⟩ |1 ⊕ a·x⟩ |1⟩)

(94)

Compute (A) · (B)

  (A) · (B) = (1/2^n) Σ_{x∈{0,1}^n} (α_x² − β_x²)
            = ((1/2^n) Σ_{x∈{0,1}^n} α_x²) − ((1/2^n) Σ_{x∈{0,1}^n} β_x²)
            ≥ (1/2 + ε) − (1/2 − ε) = 2ε.   (34)

(95)

Boosting: achieve the goal in another way

• Previously known: repeat the quantum algorithm O(ε^{−2}) times.

• More efficiently: run the quantum algorithm once, then apply the boosting operator

  Q = −C(U_0 ⊗ I)C^{−1}(U_a ⊗ I)

  O(ε^{−1}) times. That is, compute Q^{(t)} · (C|0^n, 0^m, 0⟩) for t = O(ε^{−1}).

(96)

Q = −C(U_0 ⊗ I)C^{−1}(U_a ⊗ I)

• Revise C so that (⟨a, 0^m, 1|) · (C|0^n, 0^m, 0⟩) ≡ 2ε.

• U_a or U_0: applied to the first n qubits.

• I: applied to the last m + 1 qubits.

• U_a:

  U_a |x⟩ = |x⟩,  if x ≠ a;
  U_a |x⟩ = −|x⟩, if x = a.

  In other words, U_a = I − 2|a⟩⟨a|.

• U_0: the special case of U_a with a = 0^n.
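A small numpy sketch of this boosting idea (my own illustration, with C|0^n, 0^m, 0⟩ replaced by a generic real unit vector |φ⟩ whose overlap with the target basis state is exactly 2ε): iterating Q rotates the state toward the target, so about O(1/ε) iterations suffice.

import numpy as np

rng = np.random.default_rng(0)
dim, eps = 64, 0.05
a = 3                                        # index of the marked basis state |a>

# Build phi, a stand-in for C|0>, with <a|phi> = 2*eps.
phi = rng.normal(size=dim)
phi[a] = 0.0
phi = np.sqrt(1 - (2 * eps) ** 2) * phi / np.linalg.norm(phi)
phi[a] = 2 * eps

e_a = np.eye(dim)[a]
U_a = np.eye(dim) - 2 * np.outer(e_a, e_a)   # U_a = I - 2|a><a|
U_0C = np.eye(dim) - 2 * np.outer(phi, phi)  # C U_0 C^{-1} = I - 2|phi><phi|
Q = -U_0C @ U_a                              # Q = -C(U_0 (x) I)C^{-1}(U_a (x) I)

state = phi.copy()
for t in range(1, 16):
    state = Q @ state
    if t in (1, 5, 10, 15):
        # success probability follows sin^2((2t+1)*theta) with theta = arcsin(2*eps)
        print(t, abs(state[a]) ** 2)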

(97)

(U_a ⊗ I) — dropping the I, consider U_a.

Figure 19: U_a = I − 2|a⟩⟨a| is the reflection in the hyperplane sp{|a⟩}^⊥: it sends |a⟩ to −|a⟩ = U_a|a⟩ and reflects a general |φ⟩ across that hyperplane.

(98)

C(U_0 ⊗ I)C^{−1} = U_{C|0^n, z⟩}

• For z ∈ {0,1}^{m+1}:

  C(U_0 ⊗ I)C^{−1} · C|0^n, z⟩ = C(U_0 ⊗ I)|0^n, z⟩ = C(−|0^n, z⟩) = −C|0^n, z⟩.   (35)

• For y ∈ {0,1}^n with y ≠ 0^n:

  C(U_0 ⊗ I)C^{−1} · C|y, z⟩ = C(U_0 ⊗ I)|y, z⟩ = C|y, z⟩.   (36)

• Thus, C(U_0 ⊗ I)C^{−1} = U_{C|0^n, z⟩}.

(99)

−C(U_0 ⊗ I)C^{−1} = −U_{C|0^n, z⟩}

Figure 20: −U_{C|0^n⟩} rotates the state in the other direction: with |φ⟩ ≡ C|0^n⟩ at angle θ above sp{|a⟩}^⊥ (∋ |0^n⟩), reflecting to |ψ⟩ = U_a|φ⟩ and then applying −U_{C|0^n⟩} yields a state at angle 3θ from sp{|a⟩}^⊥, on the other side of sp{C|0^n⟩}.

• Recall that Q = −C(U_0 ⊗ I)C^{−1}(U_a ⊗ I).

(100)

Rotate towards |a⟩

Figure 21: |φ⟩ ≡ C|0^n⟩ makes angle θ ≡ sin^{−1}(⟨a| · C|0^n⟩) = sin^{−1}(2ε) with the hyperplane sp{|a⟩}^⊥; U_a reflects |φ⟩ across that hyperplane, and U_a|a⟩ = −|a⟩.

(101)

Boost the probability that |a⟩ is observed

• When sin((2k + 1)θ) = 1, Q^{(k)} C|0^n, 0^m, 0⟩ = |a, 0^m, 1⟩.

• The minimum k satisfying

  sin((2k + 1)θ) = 1 ⟺ (2k + 1)θ = π/2   (37)

  is k = (π − 2 sin^{−1}(2ε)) / (4 sin^{−1}(2ε)).

• Because sin^{−1}(2ε) ≥ 2ε for small ε, we can estimate

  k = (π − 2 sin^{−1}(2ε)) / (4 sin^{−1}(2ε)) ≤ π / (4 sin^{−1}(2ε)) ≤ π / (8ε) ∈ O(1/ε).
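A last numeric sketch of this iteration count (my own): it computes θ = sin^{−1}(2ε), the integer k making (2k + 1)θ closest to π/2, and checks that the resulting success probability sin²((2k + 1)θ) is near 1 while k scales like 1/ε.

from math import asin, sin, pi

for eps in (0.1, 0.01, 0.001):
    theta = asin(2 * eps)
    k = round((pi / (2 * theta) - 1) / 2)          # integer k making (2k+1)*theta closest to pi/2
    print(eps, k, sin((2 * k + 1) * theta) ** 2)   # k ~ O(1/eps), success probability ~ 1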
