
Monte Carlo Algorithms

(1)

Randomization vs. Nondeterminisma

• What are the differences between randomized algorithms and nondeterministic algorithms?

• Think of a randomized algorithm as a nondeterministic one but with a probability associated with every

guess/branch.

• So each computation path of a randomized algorithm has a probability associated with it.

aContributed by Mr. Olivier Valery (D01922033) and Mr. Hasan Al-hasan (D01922034) on November 27, 2012.

(2)

Monte Carlo Algorithmsa

• The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

– If the algorithm finds that a matching exists, it is always correct (no false positives; no type 1 errors).

– If the algorithm answers in the negative, then it may make an error (false negatives; type 2 errors).

aMetropolis & Ulam (1949).

(3)

Monte Carlo Algorithms (continued)

• The algorithm makes a false negative with probability

≤ 0.5.a

• Again, this probability refers tob

prob[ algorithm answers “no” | G has a perfect matching ],

not

prob[ G has a perfect matching | algorithm answers “no” ].

aEquivalently, among the coin flip sequences, at most half of them lead to the wrong answer.

bIn general, prob[ algorithm answers “no”| input is a yes instance ].

(4)

Monte Carlo Algorithms (concluded)

• This probability 0.5 is not over the space of all graphs or determinants, but over the algorithm’s own coin flips.

– It holds for any bipartite graph.

• In contrast, to calculate

prob[G has a perfect matching | algorithm answers “no” ], we will need the distribution of G.

• But it is an empirical statement that is very hard to verify.

(5)

The Markov Inequalitya

Lemma 67 Let x be a random variable taking nonnegative integer values. Then for any k > 0,

prob[x ≥ kE[ x ] ] ≤ 1/k.

• Let pᵢ denote the probability that x = i. Then

E[ x ] = Σᵢ i pᵢ = Σ_{i < kE[ x ]} i pᵢ + Σ_{i ≥ kE[ x ]} i pᵢ

≥ Σ_{i ≥ kE[ x ]} i pᵢ

≥ kE[ x ] Σ_{i ≥ kE[ x ]} pᵢ

= kE[ x ] × prob[ x ≥ kE[ x ] ].

aAndrei Andreyevich Markov (1856–1922).
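As an informal illustration of Lemma 67 (our addition, not part of the original slides), the Python sketch below estimates prob[ x ≥ kE[ x ] ] for a toy nonnegative random variable and compares it with the 1/k bound; the variable and parameters are our own choices.

import random

# Empirical check of the Markov inequality (Lemma 67) on a toy example:
# x = number of heads in 10 fair coin flips, so E[x] = 5 and x is nonnegative.
def sample_x():
    return sum(random.randint(0, 1) for _ in range(10))

trials = 100_000
k = 2
expectation = 5.0
hits = sum(1 for _ in range(trials) if sample_x() >= k * expectation)
print(hits / trials, "<=", 1 / k)   # observed frequency of x >= kE[x] vs. the bound 1/k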

(6)

Andrei Andreyevich Markov (1856–1922)

(7)

fsat for k-sat Formulas (p. 500)

• Let φ(x1, x2, . . . , xn) be a k-sat formula.

• If φ is satisfiable, then return a satisfying truth assignment.

• Otherwise, return “no.”

• We next propose a randomized algorithm for this problem.

(8)

A Random Walk Algorithm for φ in CNF Form

1: Start with an arbitrary truth assignment T ;

2: for i = 1, 2, . . . , r do

3: if T |= φ then

4: return “φ is satisfiable with T ”;

5: else

6: Let c be an unsatisfied clause in φ under T ; {All of its literals are false under T .}

7: Pick any x of these literals at random;

8: Modify T to make x true;

9: end if

10: end for

11: return “φ is unsatisfiable”;
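The following Python sketch mirrors the pseudocode above; the clause encoding (a clause is a list of nonzero integers, where literal ℓ > 0 stands for xℓ and ℓ < 0 for ¬xℓ) and the function name are our own choices, not the slides'.

import random

def random_walk_sat(clauses, n, r):
    """Random walk algorithm for a CNF formula over variables x1, ..., xn with r flips."""
    T = [random.choice([False, True]) for _ in range(n + 1)]   # arbitrary T; index 0 unused

    def satisfied(clause):
        return any(T[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(r):
        unsatisfied = [c for c in clauses if not satisfied(c)]
        if not unsatisfied:
            return T[1:]                   # "phi is satisfiable with T"
        c = random.choice(unsatisfied)     # an unsatisfied clause; all its literals are false under T
        lit = random.choice(c)             # pick one of its literals at random
        T[abs(lit)] = (lit > 0)            # modify T to make that literal true
    return None                            # "phi is unsatisfiable" (possibly a false negative)

# A 2sat example: (x1 or x2) and (not x1 or x2) and (x1 or not x2); its only model is x1 = x2 = true.
print(random_walk_sat([[1, 2], [-1, 2], [1, -2]], n=2, r=2 * 2 * 2))
# [True, True] with probability at least 0.5 by Theorem 68 below; otherwise None.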

(9)

3sat vs. 2sat Again

• Note that if φ is unsatisfiable, the algorithm will answer

“unsatisfiable.”

• The random walk algorithm needs expected exponential time for 3sat.

– In fact, it runs in expected O((1.333 · · · + ε)^n) time with r = 3n,a much better than O(2^n).b

• We will show immediately that it works well for 2sat.

• The state of the art as of 2014 is expected O(1.30704^n) time for 3sat and expected O(1.46899^n) time for 4sat.c

aUse this setting per run of the algorithm.

bSchöning (1999). Makino, Tamaki, & Yamamoto (2011) improve the bound to deterministic O(1.3303^n).

cHertli (2014).

(10)

Random Walk Works for 2sata

Theorem 68 Suppose the random walk algorithm with r = 2n² is applied to any satisfiable 2sat problem with n variables. Then a satisfying truth assignment will be

discovered with probability at least 0.5.

• Let T̂ be a truth assignment such that T̂ |= φ.

• Assume our starting T differs from T̂ in i values.

– Their Hamming distance is i.

– Recall T is arbitrary.

aPapadimitriou (1991).

(11)

The Proof

• Let t(i) denote the expected number of repetitions of the flipping stepa until a satisfying truth assignment is

found.

• It can be shown that t(i) is finite.

• t(0) = 0 because it means that T = T̂ and hence T |= φ.

• If T is neither T̂ nor any other satisfying truth assignment, then we need to flip the coin at least once.

• We flip a coin to pick among the 2 literals of a clause not satisfied by the present T.

• At least one of the 2 literals is true under T̂ because T̂ satisfies all clauses.

aThat is, Statement 7.

(12)

The Proof (continued)

• So we have at least a 50% chance of moving closer to T̂.

• Thus

t(i) ≤ [ t(i − 1) + t(i + 1) ]/2 + 1

for 0 < i < n.

– Inequality is used because, for example, T may differ from T̂ in both literals.

• It must also hold that

t(n) ≤ t(n − 1) + 1 because at i = n, we can only decrease i.

(13)

The Proof (continued)

• Now, put the necessary relations together:

t(0) = 0, (10)

t(i) ≤ [ t(i − 1) + t(i + 1) ]/2 + 1, 0 < i < n, (11)

t(n) ≤ t(n − 1) + 1. (12)

• Technically, this is a one-dimensional random walk with an absorbing barrier at i = 0 and a reflecting barrier at i = n (if we replace “≤” with “=”).a

aThe proof in the textbook does exactly that. But a student pointed out difficulties with this proof technique on December 8, 2004. So our proof here uses the original inequalities.

(14)

The Proof (continued)

• Add up the relations for 2t(1), 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtaina

2t(1) + 2t(2) + · · · + 2t(n − 1) + t(n)

≤ t(0) + t(1) + 2t(2) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 1) + 1.

• Simplify it to yield

t(1) ≤ 2n − 1. (13)

aAdding up the relations for t(1), t(2), t(3), . . . , t(n−1) will also work, thanks to Mr. Yen-Wu Ti (D91922010).

(15)

The Proof (continued)

• Add up the relations for 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain

2t(2) + · · · + 2t(n − 1) + t(n)

≤ t(1) + t(2) + 2t(3) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 2) + 1.

• Simplify it to yield

t(2) ≤ t(1) + 2n − 3 ≤ 2n − 1 + 2n − 3 = 4n − 4 by Eq. (13) on p. 544.

(16)

The Proof (continued)

• Continuing the process, we shall obtain t(i) ≤ 2in − i².

• The worst upper bound happens when i = n, in which case

t(n) ≤ n².

• We conclude that

t(i) ≤ t(n) ≤ n² for 0 ≤ i ≤ n.

(17)

The Proof (concluded)

• So the expected number of steps is at most n².

• The algorithm picks r = 2n².

• Apply the Markov inequality (p. 535) with k = 2 to yield the desired probability of 0.5.

• The proof does not yield a polynomial bound for 3sat.a

aContributed by Mr. Cheng-Yu Lee (R95922035) on November 8, 2006.

(18)

Boosting the Performance

• We can pick r = 2mn² to have an error probability of ≤ 1/(2m) by Markov’s inequality.

• Alternatively, with the same running time, we can run the “r = 2n²” algorithm m times.

• The error probability is now reduced to ≤ 2^{−m}.
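A small sketch of the second boosting strategy (our illustration): monte_carlo_run stands in, hypothetically, for one run of the “r = 2n²” algorithm, or for any routine that never errs when it answers “yes.”

def boost(monte_carlo_run, m):
    """Run a one-sided-error Monte Carlo routine m times and answer "yes" if any run says "yes".

    monte_carlo_run: a hypothetical zero-argument callable returning True ("yes", always correct)
    or False ("no", wrong with probability <= 0.5 on yes instances).
    """
    for _ in range(m):
        if monte_carlo_run():
            return True    # a single "yes" is conclusive: there are no false positives
    return False           # the false-negative probability is now <= 2**(-m)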

(19)

Primality Tests

• primes asks if a number N is a prime.

• The classic algorithm tests if k | N for k = 2, 3, . . . , √N.

• But it runs in Ω(2^{(log₂ N)/2}) steps.
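For concreteness, here is the classic test in Python (our sketch). The loop makes about √N trial divisions, i.e., 2^{(log₂ N)/2} steps, which is exponential in the input length log₂ N.

import math

def is_prime_trial_division(N):
    """Test whether k | N for k = 2, 3, ..., floor(sqrt(N))."""
    if N < 2:
        return False
    for k in range(2, math.isqrt(N) + 1):
        if N % k == 0:
            return False
    return True

print(is_prime_trial_division(997))   # True
print(is_prime_trial_division(999))   # False (999 = 3^3 * 37)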

(20)

The Fermat Test for Primality

Fermat’s “little” theorem (p. 486) suggests the following primality test for any given number N:

1: Pick a number a randomly from { 1, 2, . . . , N − 1 };

2: if a^{N−1} ≢ 1 mod N then

3: return “N is composite”;

4: else

5: return “N is (probably) a prime”;

6: end if
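A direct Python transcription of the test above (ours), using modular exponentiation for a^{N−1} mod N; it assumes N ≥ 3.

import random

def fermat_test(N):
    """One round of the Fermat test; "probably a prime" can be wrong (see Carmichael numbers below)."""
    a = random.randint(1, N - 1)
    if pow(a, N - 1, N) != 1:          # a^(N-1) is not 1 mod N
        return "N is composite"         # always correct
    return "N is (probably) a prime"

print(fermat_test(221))   # 221 = 13 * 17; most choices of a expose it as composite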

(21)

The Fermat Test for Primality (concluded)

• Carmichael numbers are composite numbers that will pass the Fermat test for all a ∈ { 1, 2, . . . , N − 1 }.a

– The Fermat test will return “N is a prime” for all Carmichael numbers N.

• Unfortunately, there are infinitely many Carmichael numbers.b

• In fact, the number of Carmichael numbers less than N exceeds N^{2/7} for N large enough.

• So the Fermat test is an incorrect algorithm for primes.

aCarmichael (1910). Lo (1994) mentions an investment strategy based on such numbers!

bAlford, Granville, & Pomerance (1992).
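To see the phenomenon concretely (our check): 561 = 3 × 11 × 17 is the smallest Carmichael number, and the snippet below confirms that every a relatively prime to 561 passes the Fermat test even though 561 is composite.

import math

N = 561   # = 3 * 11 * 17, the smallest Carmichael number
fooled = all(pow(a, N - 1, N) == 1
             for a in range(1, N) if math.gcd(a, N) == 1)
print(fooled)   # True: the Fermat test cannot separate 561 from a prime unless gcd(a, N) > 1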

(22)

Square Roots Modulo a Prime

• Equation x² ≡ a mod p has at most two (distinct) roots by Lemma 64 (p. 491).

– The roots are called square roots.

– Numbers a with square roots and gcd(a, p) = 1 are called quadratic residues.

∗ They are

1² mod p, 2² mod p, . . . , (p − 1)² mod p.

• We shall show that a number either has two roots or has none, and testing which is the case is trivial.a

aBut no efficient deterministic general-purpose square-root-extracting algorithm is known.

(23)

Euler’s Test

Lemma 69 (Euler) Let p be an odd prime and a ≢ 0 mod p.

1. If a^{(p−1)/2} ≡ 1 mod p, then x² ≡ a mod p has two roots.

2. If a^{(p−1)/2} ≢ 1 mod p, then a^{(p−1)/2} ≡ −1 mod p and x² ≡ a mod p has no roots.
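A brute-force check of Lemma 69 for one small prime (our sketch): for every a ≢ 0 mod p, the value of a^{(p−1)/2} mod p agrees with the number of square roots of a.

p = 13   # any small odd prime will do

for a in range(1, p):
    criterion = pow(a, (p - 1) // 2, p)                 # either 1 or p - 1 (i.e., -1 mod p)
    roots = [x for x in range(1, p) if x * x % p == a]
    # Lemma 69: the criterion is 1 exactly when a has two square roots, and -1 when it has none.
    assert (criterion == 1) == (len(roots) == 2)
    assert (criterion == p - 1) == (len(roots) == 0)
print("Euler's test agrees with the square-root counts modulo", p)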

(24)

The Proof (continued)

• Let r be a primitive root of p.

• Fermat’s “little” theorem says r^{p−1} ≡ 1 mod p, so r^{(p−1)/2} is a square root of 1.

• In particular,

r^{(p−1)/2} ≡ 1 or −1 mod p.

• But as r is a primitive root, r^{(p−1)/2} ≢ 1 mod p.

• Hence r^{(p−1)/2} ≡ −1 mod p.

(25)

The Proof (continued)

• Let a = r^k mod p for some k.

• Suppose a^{(p−1)/2} ≡ 1 mod p.

• Then

1 ≡ a^{(p−1)/2} ≡ r^{k(p−1)/2} ≡ ( r^{(p−1)/2} )^k ≡ (−1)^k mod p.

• So k must be even.

(26)

The Proof (continued)

• Suppose a = r^{2j} mod p for some 1 ≤ j ≤ (p − 1)/2.

• Then

a^{(p−1)/2} ≡ r^{j(p−1)} ≡ 1 mod p.

• The two distinct roots of a are

r^j, −r^j ( ≡ r^{j+(p−1)/2} mod p ).

– If r^j ≡ −r^j mod p, then 2r^j ≡ 0 mod p, which implies r^j ≡ 0 mod p, a contradiction as r is a primitive root.

(27)

The Proof (continued)

• As 1 ≤ j ≤ (p − 1)/2, there are (p − 1)/2 such a’s.

• Each such a ≡ r^{2j} mod p has 2 distinct square roots.

• The square roots of all these a’s are distinct.

– The square roots of different a’s must be different.

• Hence the set of square roots is { 1, 2, . . . , p − 1 }.

• As a result,

a = r^{2j} mod p, 1 ≤ j ≤ (p − 1)/2, exhaust all the quadratic residues.

(28)

The Proof (concluded)

• Suppose a = r^{2j+1} mod p now.

• Then it has no square roots because all the square roots have been taken.

• Finally,

a^{(p−1)/2} ≡ ( r^{(p−1)/2} )^{2j+1} ≡ (−1)^{2j+1} ≡ −1 mod p.

(29)

The Legendre Symbola and Quadratic Residuacity Test

• By Lemma 69 (p. 553),

a^{(p−1)/2} mod p = ±1 for a ≢ 0 mod p.

• For odd prime p, define the Legendre symbol (a | p) as

(a | p) ≝
  0, if p | a,
  1, if a is a quadratic residue modulo p,
  −1, if a is a quadratic nonresidue modulo p.

• It is sometimes pronounced “a over p.”

aAdrien-Marie Legendre (1752–1833).

(30)

The Legendre Symbol and Quadratic Residuacity Test (concluded)

• Euler’s test (p. 553) implies

a^{(p−1)/2} ≡ (a | p) mod p for any odd prime p and any integer a.

• Note that (ab | p) = (a | p)(b | p).
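A short sketch (ours) that evaluates the Legendre symbol via Euler's test and spot-checks the multiplicativity property (ab | p) = (a | p)(b | p) for one prime; the prime chosen is arbitrary.

def legendre(a, p):
    """Legendre symbol (a | p) for an odd prime p, computed as a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1   # the power is 1 or p - 1 when p is prime

p = 31
assert all(legendre(a * b, p) == legendre(a, p) * legendre(b, p)
           for a in range(1, p) for b in range(1, p))
print("(ab | p) = (a | p)(b | p) holds for p =", p)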

(31)

Gauss’s Lemma

Lemma 70 (Gauss) Let p and q be two distinct odd primes. Then (q | p) = (−1)^m, where m is the number of residues in R ≝ { iq mod p : 1 ≤ i ≤ (p − 1)/2 } that are greater than (p − 1)/2.

• All residues in R are distinct.

– If iq = jq mod p, then p | (j − i) or p | q.

– But neither is possible.

• No two elements of R add up to p.

– If iq + jq ≡ 0 mod p, then p | (i + j) or p | q.

– But neither is possible.

(32)

The Proof (continued)

• Replace each of the m elements a ∈ R such that a > (p − 1)/2 by p − a.

– This is equivalent to performing −a mod p.

• Call the resulting set of residues R′.

• All numbers in R′ are at most (p − 1)/2.

• In fact, R′ = { 1, 2, . . . , (p − 1)/2 } (see the illustration on the next page).

– Otherwise, two elements of R would add up to p,a which has been shown to be impossible.

aBecause then iq ≡ −jq mod p for some i ≠ j.

(33)

[Illustration: the residues iq mod p, 1 ≤ i ≤ (p − 1)/2, plotted on 1, 2, . . . , p − 1 before and after replacing each a > (p − 1)/2 by p − a, for p = 7 and q = 5.]
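In this picture (our check of the numbers): R = { 5, 3, 1 }, exactly m = 1 element of R exceeds (p − 1)/2 = 3, and Gauss's lemma gives (5 | 7) = (−1)¹ = −1, in agreement with Euler's test.

p, q = 7, 5
R = [(i * q) % p for i in range(1, (p - 1) // 2 + 1)]
m = sum(1 for a in R if a > (p - 1) // 2)
print(R, m, (-1) ** m)            # [5, 3, 1] 1 -1
print(pow(q, (p - 1) // 2, p))    # 6, i.e., -1 mod 7, so (5 | 7) = -1 indeed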

(34)

The Proof (concluded)

• Alternatively, R′ = { ±iq mod p : 1 ≤ i ≤ (p − 1)/2 }, where exactly m of the elements have the minus sign.

• Take the product of all elements in the two representations of R′.

• So

[(p − 1)/2]! ≡ (−1)^m q^{(p−1)/2} [(p − 1)/2]! mod p.

• Because gcd([(p − 1)/2]!, p) = 1, the above implies

1 ≡ (−1)^m q^{(p−1)/2} mod p,

so q^{(p−1)/2} ≡ (−1)^m mod p and hence (q | p) = (−1)^m by Euler’s test.

(35)

Legendre’s Law of Quadratic Reciprocitya

• Let p and q be two distinct odd primes.

• The next result says (p | q) and (q | p) are distinct if and only if both p and q are 3 mod 4.

Lemma 71 (Legendre, 1785; Gauss)

(p | q)(q | p) = (−1)^{((p−1)/2)((q−1)/2)}.

aFirst stated by Euler in 1751. Legendre (1785) did not give a correct proof. Gauss proved the theorem when he was 19. He gave at least 8 different proofs during his life. The 152nd proof appeared in 1963. A computer-generated formal proof was given in Russinoff (1990). As of 2008, there had been 4 such proofs. According to Wiedijk (2008), “the Law of Quadratic Reciprocity is the first nontrivial theorem that a student encounters in the mathematics curriculum.”

(36)

The Proof (continued)

• Sum the elements of R′ in the previous proof modulo 2.

• On one hand, this is just Σ_{i=1}^{(p−1)/2} i mod 2.

• On the other hand, the sum equals

mp + Σ_{i=1}^{(p−1)/2} ( iq − p ⌊ iq/p ⌋ ) mod 2

= mp + ( q Σ_{i=1}^{(p−1)/2} i − p Σ_{i=1}^{(p−1)/2} ⌊ iq/p ⌋ ) mod 2.

– m of the iq mod p are replaced by p − (iq mod p).

– But signs are irrelevant under mod 2.

(37)

The Proof (continued)

• Ignore odd multipliers to make the sum equal

m + ( Σ_{i=1}^{(p−1)/2} i − Σ_{i=1}^{(p−1)/2} ⌊ iq/p ⌋ ) mod 2.

• Equate the above with Σ_{i=1}^{(p−1)/2} i modulo 2.

• Now simplify to obtain

m ≡ Σ_{i=1}^{(p−1)/2} ⌊ iq/p ⌋ mod 2.

(38)

The Proof (continued)

• Σ_{i=1}^{(p−1)/2} ⌊ iq/p ⌋ is the number of integral points below the line y = (q/p) x for 1 ≤ x ≤ (p − 1)/2.

• Gauss’s lemma (p. 561) says (q | p) = (−1)^m.

• Repeat the proof with p and q reversed.

• Then (p | q) = (−1)^{m′}, where m′ is the number of integral points above the line y = (q/p) x for 1 ≤ y ≤ (q − 1)/2.

(39)

The Proof (concluded)

• As a result,

(p | q)(q | p) = (−1)^{m+m′}.

• But m + m′ is the total number of integral points in the [ 1, (p − 1)/2 ] × [ 1, (q − 1)/2 ] rectangle, which is

((p − 1)/2) × ((q − 1)/2).

(40)

Eisenstein’s Rectangle

[Figure: Eisenstein’s rectangle of integral points in [ 1, (p − 1)/2 ] × [ 1, (q − 1)/2 ], split by the line from the origin to (p, q). Above, p = 11, q = 7, m = 7, m′ = 8.]
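The caption's numbers can be reproduced by a direct count (our sketch) of the lattice points strictly below and strictly above the line y = (q/p) x inside the rectangle.

p, q = 11, 7
below = sum(1 for x in range(1, (p - 1) // 2 + 1)
              for y in range(1, (q - 1) // 2 + 1) if p * y < q * x)   # m
above = sum(1 for x in range(1, (p - 1) // 2 + 1)
              for y in range(1, (q - 1) // 2 + 1) if p * y > q * x)   # m'
print(below, above, below + above, ((p - 1) // 2) * ((q - 1) // 2))   # 7 8 15 15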

(41)

The Jacobi Symbola

• The Legendre symbol only works for odd prime moduli.

• The Jacobi symbol (a | m) extends it to cases where m is not prime.

– a is sometimes called the numerator and m the denominator.

• Trivially, (1 | m) = 1.

• Define (a | 1) = 1.

aCarl Jacobi (1804–1851).

(42)

The Jacobi Symbol (concluded)

• Let m = p₁p₂ · · · pₖ be the prime factorization of m.

• When m > 1 is odd and gcd(a, m) = 1, then

(a | m) ≝ ∏_{i=1}^{k} (a | pᵢ).

– Note that the Jacobi symbol equals ±1.

– It reduces to the Legendre symbol when m is a prime.

(43)

Properties of the Jacobi Symbol

The Jacobi symbol has the following properties when it is defined.

1. (ab | m) = (a | m)(b | m).

2. (a | m1m2) = (a | m1)(a | m2).

3. If a ≡ b mod m, then (a | m) = (b | m).

4. (−1 | m) = (−1)^{(m−1)/2} (by Lemma 70 on p. 561).

5. (2 | m) = (−1)^{(m²−1)/8}.a

6. If a and m are both odd, then (a | m)(m | a) = (−1)^{(a−1)(m−1)/4}.

aBy Lemma 70 (p. 561) and some parity arguments.

(44)

Properties of the Jacobi Symbol (concluded)

• Properties 3–6 allow us to calculate the Jacobi symbol without factorization.

– It will also yield the same result as Euler’s testa when m is an odd prime.

• This situation is similar to the Euclidean algorithm.

• Note also that (a | m) = 1/(a | m) because (a | m) = ±1.b

aRecall p. 553.

bContributed by Mr. Huang, Kuan-Lin (B96902079, R00922018) on December 6, 2011.

(45)

Calculation of (2200 | 999)

(2200 | 999) = (202 | 999)

= (2 | 999)(101 | 999)

= (−1)^{(999²−1)/8} (101 | 999)

= (−1)^{124750} (101 | 999) = (101 | 999)

= (−1)^{(100)(998)/4} (999 | 101) = (−1)^{24950} (999 | 101)

= (999 | 101) = (90 | 101) = (−1)^{(101²−1)/8} (45 | 101)

= (−1)^{1275} (45 | 101) = −(45 | 101)

= −(−1)^{(44)(100)/4} (101 | 45) = −(101 | 45) = −(11 | 45)

= −(−1)^{(10)(44)/4} (45 | 11) = −(45 | 11)

= −(1 | 11) = −1.
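The same value can be obtained mechanically. The sketch below (ours) applies Properties 3, 5, and 6 exactly as in the hand calculation, with no factorization beyond removing powers of 2.

def jacobi(a, m):
    """Jacobi symbol (a | m) for odd m > 0, via Properties 3-6; returns 0 when gcd(a, m) > 1."""
    assert m > 0 and m % 2 == 1
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:                  # Property 5: (2 | m) = (-1)^((m^2 - 1)/8)
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a                        # Property 6: reciprocity for odd a and m
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m                             # Property 3: reduce the "numerator" modulo the "denominator"
    return result if m == 1 else 0

print(jacobi(2200, 999))   # -1, matching the calculation above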

(46)

A Result Generalizing Proposition 10.3 in the Textbook

Theorem 72 The group Φ(n) under multiplication mod n has a primitive root if and only if n is either 1, 2, 4, p^k, or 2p^k for some nonnegative integer k and an odd prime p.

This result is essential in the proof of the next lemma.

(47)

The Jacobi Symbol and Primality Testa

Lemma 73 If (M | N) ≡ M^{(N−1)/2} mod N for all M ∈ Φ(N), then N is a prime. (Assume N is odd.)

• Assume N = mp, where p is an odd prime, gcd(m, p) = 1, and m > 1 (not necessarily prime).

• Let r ∈ Φ(p) such that (r | p) = −1.

• The Chinese remainder theorem says that there is an M ∈ Φ(N) such that

M = r mod p, M = 1 mod m.

aMr. Clement Hsiao (B4506061, R88526067) pointed out in January 1999, while he was a senior, that the textbook’s proof for Lemma 11.8 is incorrect.

(48)

The Proof (continued)

• By the hypothesis,

M^{(N−1)/2} = (M | N) = (M | p)(M | m) = −1 mod N.

• Hence

M^{(N−1)/2} = −1 mod m.

• But because M = 1 mod m,

M^{(N−1)/2} = 1 mod m, a contradiction.

(49)

The Proof (continued)

• Second, assume that N = p^a, where p is an odd prime and a ≥ 2.

• By Theorem 72 (p. 576), there exists a primitive root r modulo p^a.

• From the assumption,

M^{N−1} = ( M^{(N−1)/2} )² = (M | N)² = 1 mod N

for all M ∈ Φ(N).

(50)

The Proof (continued)

• As r ∈ Φ(N) (prove it), we have

r^{N−1} = 1 mod N.

• As r’s exponent modulo N = p^a is φ(N) = p^{a−1}(p − 1),

p^{a−1}(p − 1) | (N − 1),

which implies that p | (N − 1).

• But this is impossible given that p | N.

(51)

The Proof (continued)

• Third, assume that N = mp^a, where p is an odd prime, gcd(m, p) = 1, m > 1 (not necessarily prime), and a is even.

• The proof mimics that of the second case.

• By Theorem 72 (p. 576), there exists a primitive root r modulo p^a.

• From the assumption,

M^{N−1} = ( M^{(N−1)/2} )² = (M | N)² = 1 mod N

for all M ∈ Φ(N).

(52)

The Proof (continued)

• In particular,

M^{N−1} = 1 mod p^a (14)

for all M ∈ Φ(N).

• The Chinese remainder theorem says that there is an M ∈ Φ(N) such that

M = r mod p^a, M = 1 mod m.

• Because M = r mod p^a and Eq. (14),

r^{N−1} = 1 mod p^a.

(53)

The Proof (concluded)

• As r’s exponent modulo p^a is φ(p^a) = p^{a−1}(p − 1),

p^{a−1}(p − 1) | (N − 1),

which implies that p | (N − 1).

• But this is impossible given that p | N.

(54)

The Number of Witnesses to Compositeness

Theorem 74 (Solovay & Strassen, 1977) If N is an odd composite, then (M | N) ≡ M^{(N−1)/2} mod N for at most half of M ∈ Φ(N).

• By Lemma 73 (p. 577) there is at least one a ∈ Φ(N) such that (a | N) ≢ a^{(N−1)/2} mod N.

• Let B ≝ { b₁, b₂, . . . , bₖ } ⊆ Φ(N) be the set of all distinct residues such that (bᵢ | N) ≡ bᵢ^{(N−1)/2} mod N.

• Let aB ≝ { abᵢ mod N : i = 1, 2, . . . , k }.

• Clearly, aB ⊆ Φ(N), too.

(55)

The Proof (concluded)

• | aB | = k.

– abᵢ ≡ abⱼ mod N implies N | a(bᵢ − bⱼ), which is impossible because gcd(a, N) = 1 and N > | bᵢ − bⱼ |.

• aB ∩ B = ∅ because

(abᵢ)^{(N−1)/2} ≡ a^{(N−1)/2} bᵢ^{(N−1)/2} ≢ (a | N)(bᵢ | N) ≡ (abᵢ | N).

• Combining the above two results, we know

| B |/φ(N) ≤ | B |/| B ∪ aB | = 0.5.

(56)

1: if N is even but N ≠ 2 then

2: return “N is composite”;

3: else if N = 2 then

4: return “N is a prime”;

5: end if

6: Pick M ∈ { 2, 3, . . . , N − 1 } randomly;

7: if gcd(M, N ) > 1 then

8: return “N is composite”;

9: else

10: if (M | N ) ≡ M^{(N−1)/2} mod N then

11: return “N is (probably) a prime”;

12: else

13: return “N is composite”;

14: end if

15: end if
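A Python sketch of the algorithm above (ours), for odd N ≥ 3; it repeats the same Jacobi-symbol routine as the sketch after the (2200 | 999) calculation so that the block stays self-contained.

import math, random

def jacobi(a, m):
    """Jacobi symbol (a | m) for odd m > 0, via Properties 3-6."""
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m
    return result if m == 1 else 0

def solovay_strassen(N):
    """One round of the test above for odd N >= 3; "probably a prime" errs with probability <= 0.5."""
    M = random.randint(2, N - 1)
    if math.gcd(M, N) > 1:
        return "N is composite"
    if jacobi(M, N) % N == pow(M, (N - 1) // 2, N):    # compare (M | N) with M^((N-1)/2) mod N
        return "N is (probably) a prime"
    return "N is composite"

print(solovay_strassen(101))   # "N is (probably) a prime"
print(solovay_strassen(561))   # usually "N is composite", even though 561 fools the Fermat test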

(57)

Analysis

• The algorithm certainly runs in polynomial time.

• There are no false positives (for compositeness).

– When the algorithm says the number is composite, it is always correct.

(58)

Analysis (concluded)

• The probability of a false negative (again, for compositeness) is at most one half.

– Suppose the input is composite.

– By Theorem 74 (p. 584),

prob[ algorithm answers “no”| N is composite ] ≤ 0.5.

– Note that we are not referring to the probability that N is composite when the algorithm says “no.”

• So it is a Monte Carlo algorithm for compositeness.a

aNot primes.

(59)

The Improved Density Attack for compositeness

[Figure: among all numbers < N, the witnesses to the compositeness of N via the Jacobi symbol together with the witnesses to the compositeness of N via a common factor.]

(60)

Randomized Complexity Classes; RP

• Let N be a polynomial-time precise NTM that runs in time p(n) and has 2 nondeterministic choices at each step.

• N is a polynomial Monte Carlo Turing machine for a language L if the following conditions hold:

– If x ∈ L, then at least half of the 2^{p(n)} computation paths of N on x halt with “yes” where n = | x |.

– If x ∉ L, then all computation paths halt with “no.”

• The class of all languages with polynomial Monte Carlo TMs is denoted RP (randomized polynomial time).a

(61)

Comments on RP

• In analogy to Proposition 41 (p. 331), a “yes” instance of an RP problem has many certificates (witnesses).

• There are no false positives.

• If we associate nondeterministic steps with flipping fair coins, then we can phrase RP in the language of

probability.

– If x ∈ L, then N(x) halts with “yes” with probability at least 0.5.

– If x ∉ L, then N(x) halts with “no.”

(62)

Comments on RP (concluded)

• The probability of false negatives is ≤ 0.5.

• But any constant ε between 0 and 1 can replace 0.5.

– Repeat the algorithm k ≝ ⌈ −log₂ ε ⌉ times and answer “no” only if all the runs answer “no.”

– The probability of false negatives becomes 0.5^k ≤ ε.

(63)

Where RP Fits

• P ⊆ RP ⊆ NP.

– A deterministic TM is like a Monte Carlo TM except that all the coin flips are ignored.

– A Monte Carlo TM is an NTM with more demands on the number of accepting paths.

• compositeness ∈ RP;a primes ∈ coRP;

primes ∈ RP.b

– In fact, primes ∈ P.c

• RP ∪ coRP is an alternative “plausible” notion of efficient computation.

aRabin (1976); Solovay & Strassen (1977).

bAdleman & Huang (1987).

cAgrawal, Kayal, & Saxena (2002).

(64)

ZPP (Zero Probabilistic Polynomial)

• The class ZPP is defined as RP ∩ coRP.

• A language in ZPP has two Monte Carlo algorithms, one with no false positives (RP) and the other with no false negatives (coRP).

• If we repeatedly run both Monte Carlo algorithms, eventually one definite answer will come (unlike RP).

– A positive answer from the one without false positives.

– A negative answer from the one without false negatives.

(65)

The ZPP Algorithm (Las Vegas)

1: {Suppose L ∈ ZPP.}

2: {N1 has no false positives, and N2 has no false negatives.}

3: while true do

4: if N1(x) = “yes” then

5: return “yes”;

6: end if

7: if N2(x) = “no” then

8: return “no”;

9: end if

10: end while
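A sketch of the loop above (ours); N1 and N2 are hypothetical callables standing in for the two Monte Carlo algorithms, the first with no false positives and the second with no false negatives.

def zpp_las_vegas(x, N1, N2):
    """Run both Monte Carlo algorithms on x until one of them is conclusive."""
    while True:
        if N1(x) == "yes":
            return "yes"   # conclusive: N1 never answers "yes" incorrectly
        if N2(x) == "no":
            return "no"    # conclusive: N2 never answers "no" incorrectly
        # otherwise neither run was conclusive; repeat (a constant expected number of times)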

(66)

ZPP (concluded)

• The expected running time for the correct answer to emerge is polynomial.

– The probability that a run of the 2 algorithms does not generate a definite answer is 0.5 (why?).

– Let p(n) be the running time of each run of the while-loop.

– The expected running time for a definite answer is

Σ_{i=1}^{∞} 0.5^i · i · p(n) = 2p(n).

• Essentially, ZPP is the class of problems that can be solved, without errors, in expected polynomial time.
