(1)

Perfect Matching for General Graphs

(Bipartite perfect matching is covered on p. 438.)

Now we are given a graph G = (V, E).

V = {v_1, v_2, . . . , v_{2n}}.

We are asked if there is a perfect matching: a permutation π of {1, 2, . . . , 2n} such that (v_i, v_{π(i)}) ∈ E for all v_i ∈ V.

(2)

The Tutte Matrix


Given a graph G = (V, E), construct the 2n × 2n Tutte matrix T_G such that

T^G_{ij} = x_{ij} if (v_i, v_j) ∈ E and i < j,
         = −x_{ij} if (v_i, v_j) ∈ E and i > j,
         = 0 otherwise.

The Tutte matrix is a skew-symmetric symbolic matrix.

Similar to Proposition 58 (p. 442):

Proposition 60 G has a perfect matching if and only if det(T_G) is not identically zero.
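Proposition 60 is the basis of a simple randomized test (in the spirit of the bipartite case on p. 438): substitute independent random values for the x_{ij}'s and check whether the resulting numeric determinant is nonzero modulo a large prime. The Python sketch below is only my own illustration of that idea; names such as tutte_matrix and det_mod are made up, not code from the text.

```python
import random

def tutte_matrix(n_vertices, edges, rng, p=10**9 + 7):
    """Tutte matrix with a random value substituted for each symbolic x_ij."""
    T = [[0] * n_vertices for _ in range(n_vertices)]
    for i, j in edges:                    # 0-based vertices; each edge listed once
        x = rng.randrange(1, p)
        T[i][j] = x                       # x_ij
        T[j][i] = -x                      # -x_ij (skew-symmetry)
    return T

def det_mod(matrix, p=10**9 + 7):
    """Determinant modulo the prime p by Gaussian elimination."""
    a = [row[:] for row in matrix]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] % p != 0), None)
        if pivot is None:
            return 0                      # zero column: the determinant vanishes
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det = det * a[col][col] % p
        inv = pow(a[col][col], p - 2, p)  # modular inverse of the pivot
        for r in range(col + 1, n):
            factor = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - factor * a[col][c]) % p
    return det % p

def probably_has_perfect_matching(n_vertices, edges, trials=5, seed=0):
    """True certifies a perfect matching; False may be a false negative."""
    rng = random.Random(seed)
    return any(det_mod(tutte_matrix(n_vertices, edges, rng)) != 0
               for _ in range(trials))

# A 4-cycle has a perfect matching; a path on 3 vertices cannot (odd order).
print(probably_has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(probably_has_perfect_matching(3, [(0, 1), (1, 2)]))                  # False
```

A nonzero numeric determinant proves det(T_G) is not identically zero, so "yes" answers are always correct; only "no" answers can err, matching the Monte Carlo discussion that follows.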

(3)

William Thomas Tutte (1917–2002)

(4)

Monte Carlo Algorithms^a

The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

– If the algorithm finds that a matching exists, it is always correct (no false positives).

– If the algorithm answers in the negative, then it may make an error (false negatives).

^a Metropolis and Ulam (1949).

(5)

Monte Carlo Algorithms (concluded)

The algorithm makes a false negative with probability ≤ 0.5.^a

– Note this probability refers to

prob[ algorithm answers “no” | G has a perfect matching ] not

prob[ G has a perfect matching | algorithm answers “no” ].

This probability is not over the space of all graphs or determinants, but over the algorithm’s own coin flips.

It holds for any bipartite graph.

^a Equivalently, among the coin flips, at most half of them lead to the wrong answer.

(6)

The Markov Inequality^a

Lemma 61 Let x be a random variable taking nonnegative integer values. Then for any k > 0,

prob[ x ≥ kE[ x ] ] ≤ 1/k.

Let p_i denote the probability that x = i. Then

E[x] = Σ_i i·p_i = Σ_{i < kE[x]} i·p_i + Σ_{i ≥ kE[x]} i·p_i ≥ kE[x] × prob[x ≥ kE[x]],

where the last step drops the first (nonnegative) sum and bounds each i in the second sum from below by kE[x].

^a Andrei Andreyevich Markov (1856–1922).

(7)

Andrei Andreyevich Markov (1856–1922)

(8)

An Application of Markov’s Inequality

Algorithm C runs in expected time T (n) and always gives the right answer.

Consider an algorithm that runs C for time kT (n) and rejects the input if C does not stop within the time bound.

By Markov’s inequality, this new algorithm runs in time kT (n) and gives the wrong answer with probability

≤ 1/k.

By running this algorithm m times, we reduce the error probability to ≤ k^{−m}.

(9)

An Application of Markov’s Inequality (concluded)

Suppose, instead, we run the algorithm once for the same total running time mkT(n) and reject the input if it does not stop within the time bound.

By Markov’s inequality, this new algorithm gives the wrong answer with probability ≤ 1/(mk).

This is much worse than the previous algorithm's error probability of ≤ k^{−m}.
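A quick numeric comparison of the two strategies (my own toy numbers, k = 2 and m = 10):

```python
k, m = 2, 10
print(k ** (-m))    # 0.0009765625: m independent runs, each of length kT(n)
print(1 / (m * k))  # 0.05: one run of length mkT(n)
```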

(10)

fsat for k-sat Formulas (p. 425)

Let φ(x1, x2, . . . , xn) be a k-sat formula.

If φ is satisfiable, then return a satisfying truth assignment.

Otherwise, return “no.”

We next propose a randomized algorithm for this problem.

(11)

A Random Walk Algorithm for φ in CNF Form

1: Start with an arbitrary truth assignment T ;

2: for i = 1, 2, . . . , r do

3: if T |= φ then

4: return “φ is satisfiable with T ”;

5: else

6: Let c be an unsatisfied clause in φ under T ; {All of its literals are false under T .}

7: Pick one of these literals, x, at random;

8: Modify T to make x true;

9: end if

10: end for

11: return “φ is unsatisfiable”;
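The pseudocode translates almost line for line into Python. The sketch below is my own rendering; the clause encoding, with +i for x_i and −i for its negation, is an assumption of mine, not the text's.

```python
import random

def random_walk_sat(clauses, n_vars, r, rng=random):
    """Random walk algorithm; a clause is a list of nonzero ints, +i for x_i, -i for not x_i."""
    T = {i: rng.random() < 0.5 for i in range(1, n_vars + 1)}   # arbitrary start

    def satisfied(clause):
        return any(T[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(r):
        falsified = [c for c in clauses if not satisfied(c)]
        if not falsified:
            return T                     # T |= phi
        lit = rng.choice(falsified[0])   # a literal of some unsatisfied clause, at random
        T[abs(lit)] = (lit > 0)          # flip its variable so that literal becomes true
    return None                          # report "phi is unsatisfiable" (may be wrong)

# (x1 or x2), (not x1 or x2), (x1 or not x2): the only satisfying assignment is x1 = x2 = true.
phi = [[1, 2], [-1, 2], [1, -2]]
print(random_walk_sat(phi, n_vars=2, r=2 * 2**2))   # succeeds with probability >= 0.5 per run
```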

(12)

3sat vs. 2sat Again

Note that if φ is unsatisfiable, the algorithm always answers "φ is unsatisfiable"; errors are possible only when φ is satisfiable.

The random walk algorithm needs expected exponential time for 3sat.

In fact, it runs in expected O((1.333 · · · + ε)^n) time with r = 3n,^a much better than O(2^n).^b

We will show immediately that it works well for 2sat.

The state of the art as of 2006 is expected O(1.322^n) time for 3sat and expected O(1.474^n) time for 4sat.

^a Use this setting per run of the algorithm.

^b Schöning (1999).

(13)

Random Walk Works for 2sat^a

Theorem 62 Suppose the random walk algorithm with r = 2n² is applied to any satisfiable 2sat problem with n variables. Then a satisfying truth assignment will be discovered with probability at least 0.5.

Let T̂ be a truth assignment such that T̂ |= φ.

Assume our starting T differs from T̂ in i values.

Their Hamming distance is i.

Recall T is arbitrary.

Let t(i) denote the expected number of repetitions of the flipping step until a satisfying truth assignment is found.

^a Papadimitriou (1991).

(14)

The Proof

It can be shown that t(i) is finite.

t(0) = 0 because it means that T = T̂ and hence T |= φ.

If T ≠ T̂ and T is not equal to any other satisfying truth assignment, then we need to flip at least once.

We flip a coin to pick among the 2 literals of a clause not satisfied by the present T.

At least one of the 2 literals is true under T̂ because T̂ satisfies all clauses.

So we have at least 0.5 chance of moving closer to T̂.

(15)

The Proof (continued)

Thus

t(i) ≤ [t(i − 1) + t(i + 1)]/2 + 1

for 0 < i < n.

Inequality is used because, for example, T may differ from T̂ in both literals.

It must also hold that

t(n) ≤ t(n − 1) + 1

because at i = n, we can only decrease i.

(16)

The Proof (continued)

As we are only interested in upper bounds, we solve

x(0) = 0,
x(n) = x(n − 1) + 1,
x(i) = [x(i − 1) + x(i + 1)]/2 + 1 for 0 < i < n.

This is a one-dimensional random walk with a reflecting and an absorbing barrier.

(17)

The Proof (continued)

Add the equations up to obtain

x(1) + x(2) + · · · + x(n) = [x(0) + x(1) + 2x(2) + · · · + 2x(n − 2) + x(n − 1) + x(n)]/2 + n + x(n − 1).

Simplify to yield

[x(1) + x(n) − x(n − 1)]/2 = n.

As x(n) − x(n − 1) = 1, we have x(1) = 2n − 1.

(18)

The Proof (continued)

Iteratively, we obtain

x(2) = 4n − 4,
...
x(i) = 2in − i².

The worst case happens when i = n, in which case x(n) = n².
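A throwaway check (n = 10 is an arbitrary choice of mine) that x(i) = 2in − i² really satisfies the boundary conditions and the recurrence:

```python
n = 10
x = lambda i: 2 * i * n - i * i
assert x(0) == 0
assert x(n) == x(n - 1) + 1
assert all(x(i) == (x(i - 1) + x(i + 1)) // 2 + 1 for i in range(1, n))  # the sum is even
print(x(n))   # n**2 = 100, the worst case
```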

(19)

The Proof (concluded)

We therefore reach the conclusion that t(i) ≤ x(i) ≤ x(n) = n².

So the expected number of steps is at most n².

The algorithm picks a running time of 2n².

This amounts to invoking the Markov inequality (p. 460) with k = 2, with the consequence of a failure probability of at most 0.5.

The proof does not yield a polynomial bound for 3sat.^a

^a Contributed by Mr. Cheng-Yu Lee (R95922035) on November 8, 2006.

(20)

Boosting the Performance

We can pick r = 2mn² to have an error probability of ≤ 1/(2m) by Markov's inequality.

Alternatively, with the same running time, we can run the “r = 2n²” algorithm m times.

The error probability is now reduced to ≤ 2^{−m}.

(21)

Primality Tests

primes asks if a number N is a prime.

The classic algorithm tests if k | N for k = 2, 3, . . . , √N.

But it runs in Ω(2^{n/2}) steps, where n = log₂ N.

(22)

The Density Attack for primes

1: Pick k ∈ {2, . . . , N − 1} randomly; {Assume N > 2.}

2: if k | N then

3: return “N is composite”;

4: else

5: return “N is a prime”;

6: end if
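A direct transcription of the attack (the function name is mine). Running it repeatedly on a small semiprime shows how rarely the random k happens to divide N, in line with the analysis on the next page:

```python
import random

def density_attack(N, rng=random):
    """One round of the density attack; assumes N > 2."""
    k = rng.randrange(2, N)                          # k in {2, ..., N - 1}
    return "composite" if N % k == 0 else "prime"    # "prime" may well be wrong

hits = sum(density_attack(91) == "composite" for _ in range(10_000))
print(hits / 10_000)   # about 0.2 for 91 = 7 * 13: only multiples of 7 or 13 are caught
```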

(23)

Analysis^a

Suppose N = PQ, a product of 2 distinct primes.

The probability of success is

< 1 − φ(N)/N = 1 − (P − 1)(Q − 1)/(PQ) = (P + Q − 1)/(PQ).

In the case where P ≈ Q, this probability becomes

< 1/P + 1/Q ≈ 2/√N.

This probability is exponentially small.

^a See also p. 407.

(24)

The Fermat Test for Primality

Fermat's “little” theorem on p. 409 suggests the following primality test for any given number N:

1: Pick a number a randomly from {1, 2, . . . , N − 1};

2: if a^{N−1} ≠ 1 mod N then

3: return “N is composite”;

4: else

5: return “N is a prime”;

6: end if
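With Python's three-argument pow, the test is a one-liner per candidate a. The sketch below draws several independent a's rather than the single one in the pseudocode; that repetition is my own choice.

```python
import random

def fermat_test(N, trials=20, rng=random):
    """Report "prime" unless some random a satisfies a^(N-1) != 1 mod N."""
    if N < 4:
        return N in (2, 3)
    for _ in range(trials):
        a = rng.randrange(1, N)            # a in {1, ..., N - 1}
        if pow(a, N - 1, N) != 1:
            return False                   # a is a Fermat witness: N is composite
    return True                            # no witness found; N might still be composite

print(fermat_test(97))   # True: 97 is prime
print(fermat_test(91))   # almost surely False: 91 = 7 * 13 has many Fermat witnesses
```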

(25)

The Fermat Test for Primality (concluded)

Unfortunately, there are composite numbers called Carmichael numbers that will pass the Fermat test for all a ∈ {1, 2, . . . , N − 1} relatively prime to N.^a

The Fermat test will return “N is a prime” for all Carmichael numbers N .

There are infinitely many Carmichael numbers.^b

In fact, the number of Carmichael numbers less than n exceeds n^{2/7} for n large enough.

^a Carmichael (1910).

^b Alford, Granville, and Pomerance (1992).
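To see the phenomenon concretely: the smallest Carmichael number is 561 = 3 · 11 · 17, and every a relatively prime to 561 satisfies a^{560} = 1 mod 561. A quick check (illustrative only):

```python
from math import gcd

N = 561                                     # = 3 * 11 * 17, the smallest Carmichael number
coprime = [a for a in range(1, N) if gcd(a, N) == 1]
print(all(pow(a, N - 1, N) == 1 for a in coprime))                # True: every coprime a passes
print(len(coprime), "of", N - 1, "candidates are coprime to N")   # 320 of 560
```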

(26)

Square Roots Modulo a Prime

Equation x² = a mod p has at most two (distinct) roots by Lemma 56 (p. 414).

– The roots are called square roots.

Numbers a with square roots and gcd(a, p) = 1 are called quadratic residues.

They are

1² mod p, 2² mod p, . . . , (p − 1)² mod p.

We shall show that a number either has two roots or has none, and testing which one is true is trivial.^a

^a No efficient deterministic root-finding algorithms are known yet.

(27)

Euler’s Test

Lemma 63 (Euler) Let p be an odd prime and a ≠ 0 mod p.

1. If a^{(p−1)/2} = 1 mod p, then x² = a mod p has two roots.

2. If a^{(p−1)/2} ≠ 1 mod p, then a^{(p−1)/2} = −1 mod p and x² = a mod p has no roots.
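A small numeric illustration of Lemma 63 (p = 11 is my arbitrary choice): for every a, the value of a^{(p−1)/2} mod p predicts whether x² = a mod p has two roots or none.

```python
p = 11                                            # any odd prime will do
for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)               # 1 or p - 1 (i.e., -1 mod p)
    roots = [x for x in range(1, p) if x * x % p == a]
    print(a, euler, roots)
# Lines with euler == 1 show exactly two roots; lines with euler == p - 1 show none.
```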

(28)

The Proof (continued)

Let r be a primitive root of p.

By Fermat's “little” theorem, r^{(p−1)/2} is a square root of 1, so r^{(p−1)/2} = 1 mod p or r^{(p−1)/2} = −1 mod p.

But as r is a primitive root, r^{(p−1)/2} ≠ 1 mod p.

Hence

r^{(p−1)/2} = −1 mod p.

(29)

The Proof (continued)

Let a = r^k mod p for some k.

Then

1 = a^{(p−1)/2} = r^{k(p−1)/2} = [r^{(p−1)/2}]^k = (−1)^k mod p.

So k must be even.

Suppose a = r^{2j} for some 1 ≤ j ≤ (p − 1)/2.

Then a^{(p−1)/2} = r^{j(p−1)} = 1 mod p, and a's two distinct roots are r^j and −r^j (= r^{j+(p−1)/2} mod p).

If r^j = −r^j mod p, then 2r^j = 0 mod p, which implies r^j = 0 mod p, a contradiction.

(30)

The Proof (continued)

As 1 ≤ j ≤ (p − 1)/2, there are (p − 1)/2 such a's.

Each such a has 2 distinct square roots.

The square roots of all these a's are distinct.

– The square roots of different a's must be different.

Hence the set of square roots is {1, 2, . . . , p − 1}.

– This is because there are (p − 1)/2 such a's and each a has two distinct square roots.

As a result, a = r^{2j}, 1 ≤ j ≤ (p − 1)/2, exhaust all the quadratic residues.

(31)

The Proof (concluded)

If a = r^{2j+1}, then it has no roots because all the square roots have been taken.

Now,

a^{(p−1)/2} = [r^{(p−1)/2}]^{2j+1} = (−1)^{2j+1} = −1 mod p.

(32)

The Legendre Symbol and Quadratic Residuacity Test

By Lemma 63 (p. 481), a^{(p−1)/2} mod p = ±1 for a ≠ 0 mod p.

For odd prime p, define the Legendre symbol (a | p) as

(a | p) = 0 if p | a,
        = 1 if a is a quadratic residue modulo p,
        = −1 if a is a quadratic nonresidue modulo p.

Euler's test implies a^{(p−1)/2} = (a | p) mod p for any odd prime p and any integer a.

Note that (ab|p) = (a|p)(b|p).
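Euler's test turns the Legendre symbol into one modular exponentiation; a minimal sketch (the function name is mine):

```python
def legendre(a, p):
    """Legendre symbol (a | p) for an odd prime p, by Euler's test."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1   # the other value is p - 1, i.e., -1

print(legendre(2, 7), legendre(3, 7), legendre(5, 7))          # 1 -1 -1
print(legendre(2 * 3, 7) == legendre(2, 7) * legendre(3, 7))   # True: multiplicativity
```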

(33)

Gauss’s Lemma

Lemma 64 (Gauss) Let p and q be two odd primes. Then (q | p) = (−1)^m, where m is the number of residues in R = { iq mod p : 1 ≤ i ≤ (p − 1)/2 } that are greater than (p − 1)/2.

All residues in R are distinct.

– If iq = jq mod p, then p | (j − i) or p | q. But neither is possible.

No two elements of R add up to p.

– If iq + jq = 0 mod p, then p | (i + j) or p | q.

– But neither is possible.
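A direct check of Gauss's lemma for the p = 7, q = 5 example pictured two pages below (a throwaway sketch):

```python
p, q = 7, 5
R = [(i * q) % p for i in range(1, (p - 1) // 2 + 1)]   # { iq mod p : 1 <= i <= (p-1)/2 }
m = sum(1 for a in R if a > (p - 1) // 2)               # residues exceeding (p - 1)/2
print(R, m, (-1) ** m)             # [5, 3, 1], m = 1, so (q | p) = -1
print(pow(q, (p - 1) // 2, p))     # 5^3 mod 7 = 6, i.e., -1 mod 7: consistent with Euler's test
```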

(34)

The Proof (continued)

Consider the set R′ of residues that result from R if we replace each of the m elements a ∈ R such that a > (p − 1)/2 by p − a.

This is equivalent to performing −a mod p.

All residues in R′ are now at most (p − 1)/2.

In fact, R′ = {1, 2, . . . , (p − 1)/2} (see the illustration on the next page).

Otherwise, two elements of R would add up to p, which has been shown to be impossible.

(35)

[Figure: the residues R = {5, 3, 1} and, after replacement, R′ = {1, 2, 3}, shown on the number line 1, . . . , 6 for p = 7 and q = 5.]

(36)

The Proof (concluded)

Alternatively, R′ = { ±iq mod p : 1 ≤ i ≤ (p − 1)/2 }, where exactly m of the elements have the minus sign.

Take the product of all elements in the two representations of R′.

So

[(p − 1)/2]! = (−1)^m q^{(p−1)/2} [(p − 1)/2]! mod p.

Because gcd([(p − 1)/2]!, p) = 1, the above implies

1 = (−1)^m q^{(p−1)/2} mod p,

that is, q^{(p−1)/2} = (−1)^m mod p and hence (q | p) = (−1)^m by Euler's test (Lemma 63).

(37)

Legendre's Law of Quadratic Reciprocity^a

Let p and q be two odd primes.

The next result says their Legendre symbols are distinct if and only if both numbers are 3 mod 4.

Lemma 65 (Legendre (1785), Gauss)

(p | q)(q | p) = (−1)^{((p−1)/2)·((q−1)/2)}.

^a First stated by Euler in 1751. Legendre (1785) did not give a correct proof. Gauss proved the theorem when he was 19. He gave at least 6 different proofs during his life. The 152nd proof appeared in 1963.

(38)

The Proof (continued)

Sum the elements of R′ in the previous proof modulo 2.

On one hand, this is just Σ_{i=1}^{(p−1)/2} i mod 2.

On the other hand, the sum equals

Σ_{i=1}^{(p−1)/2} ( qi − p ⌊qi/p⌋ ) + mp mod 2
= ( q Σ_{i=1}^{(p−1)/2} i − p Σ_{i=1}^{(p−1)/2} ⌊qi/p⌋ ) + mp mod 2.

– Signs are irrelevant under mod 2.

m is as in Lemma 64 (p. 487).

(39)

The Proof (continued)

Ignore odd multipliers to make the sum equal

Σ_{i=1}^{(p−1)/2} i − Σ_{i=1}^{(p−1)/2} ⌊qi/p⌋ + m mod 2.

Equate the above with Σ_{i=1}^{(p−1)/2} i mod 2 to obtain

m = Σ_{i=1}^{(p−1)/2} ⌊qi/p⌋ mod 2.

(40)

The Proof (concluded)

Σ_{i=1}^{(p−1)/2} ⌊qi/p⌋ is the number of integral points under the line y = (q/p) x for 1 ≤ x ≤ (p − 1)/2.

Gauss's lemma (p. 487) says (q | p) = (−1)^m.

Repeat the proof with p and q reversed.

So (p | q) = (−1)^{m′}, where m′ is the number of integral points above the line y = (q/p) x for 1 ≤ y ≤ (q − 1)/2.

As a result, (p | q)(q | p) = (−1)^{m+m′}.

But m + m′ is the total number of integral points in the [(p − 1)/2] × [(q − 1)/2] rectangle, which is ((p − 1)/2) · ((q − 1)/2).

(41)

Eisenstein’s Rectangle

[Figure: Eisenstein's rectangle, with the line from the origin to (p, q) and the lattice points inside the (p − 1)/2 by (q − 1)/2 box; here p = 11 and q = 7.]
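The lattice-point bookkeeping can be checked numerically for the rectangle above (p = 11, q = 7, as in the figure):

```python
p, q = 11, 7
m = sum(q * x // p for x in range(1, (p - 1) // 2 + 1))        # points under y = (q/p)x
m_prime = sum(p * y // q for y in range(1, (q - 1) // 2 + 1))  # points above it (axes swapped)
total = ((p - 1) // 2) * ((q - 1) // 2)
print(m, m_prime, m + m_prime == total)      # 7 8 True: together they fill the rectangle
print((-1) ** (m + m_prime))                 # -1, so (p|q)(q|p) = -1; indeed 15 is odd
```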

(42)

The Jacobi Symbol


The Legendre symbol only works for odd prime moduli.

The Jacobi symbol (a | m) extends it to cases where m is not prime.

Let m = p_1 p_2 · · · p_k be the prime factorization of m.

When m > 1 is odd and gcd(a, m) = 1, then

(a | m) = Π_{i=1}^{k} (a | p_i).

Note that the Jacobi symbol equals ±1.

It reduces to the Legendre symbol when m is a prime.

Define (a | 1) = 1.

(43)

Properties of the Jacobi Symbol

The Jacobi symbol has the following properties, for arguments for which it is defined.

1. (ab | m) = (a | m)(b | m).

2. (a | m_1 m_2) = (a | m_1)(a | m_2).

3. If a = b mod m, then (a | m) = (b | m).

4. (−1 | m) = (−1)^{(m−1)/2} (by Lemma 64 on p. 487).

5. (2 | m) = (−1)^{(m²−1)/8}.^a

6. If a and m are both odd, then (a | m)(m | a) = (−1)^{(a−1)(m−1)/4}.

^a By Lemma 64 (p. 487) and some parity arguments.

(44)

Calculation of (2200|999)

The calculation is similar to the Euclidean algorithm and does not require factorization.

(2200|999) = (202|999)  [because 2200 = 202 mod 999]
= (2|999)(101|999) = (−1)^{(999²−1)/8}(101|999)
= (−1)^{124750}(101|999) = (101|999)
= (−1)^{(100)(998)/4}(999|101) = (−1)^{24950}(999|101)
= (999|101) = (90|101) = (2|101)(45|101)
= (−1)^{(101²−1)/8}(45|101) = (−1)^{1275}(45|101) = −(45|101)
= −(−1)^{(44)(100)/4}(101|45) = −(101|45) = −(11|45)
= −(−1)^{(10)(44)/4}(45|11) = −(45|11)
= −(1|11) = −1.
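The same reduction can be coded directly from properties 1–6, again without any factorization. This is a standard way to compute the Jacobi symbol; the function name and structure are my own, not the text's.

```python
def jacobi(a, m):
    """Jacobi symbol (a | m) for odd m > 0, via properties 3, 5, and 6."""
    assert m > 0 and m % 2 == 1
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:                 # strip factors of 2 with property 5
            a //= 2
            if m % 8 in (3, 5):           # (2 | m) = -1 exactly when m = 3 or 5 mod 8
                result = -result
        a, m = m, a                       # reciprocity (property 6)
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m                            # property 3
    return result if m == 1 else 0        # 0 signals gcd(a, m) > 1

print(jacobi(2200, 999))   # -1, matching the hand calculation above
```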

(45)

A Result Generalizing Proposition 10.3 in the Textbook

Theorem 66 The group Φ(n) under multiplication mod n has a primitive root if and only if n is either 1, 2, 4, p^k, or 2p^k for some nonnegative integer k and odd prime p.

This result is essential in the proof of the next lemma.

(46)

The Jacobi Symbol and Primality Test^a

Lemma 67 If (M | N) = M^{(N−1)/2} mod N for all M ∈ Φ(N), then N is prime. (Assume N is odd.)

First, assume N = mp, where p is an odd prime, gcd(m, p) = 1, and m > 1 (not necessarily prime).

Let r ∈ Φ(p) be such that (r | p) = −1.

The Chinese remainder theorem says that there is an M ∈ Φ(N ) such that

M = r mod p, M = 1 mod m.

^a Mr. Clement Hsiao (R88526067) pointed out that the textbook's

(47)

The Proof (continued)

By the hypothesis,

M^{(N−1)/2} = (M | N) = (M | p)(M | m) = −1 mod N.

– Note that (M | p) = (r | p) = −1 and (M | m) = (1 | m) = 1.

Hence

M^{(N−1)/2} = −1 mod m.

But because M = 1 mod m,

M^{(N−1)/2} = 1 mod m,

a contradiction.

(48)

The Proof (continued)

Second, assume that N = p^a, where p is an odd prime and a ≥ 2.

By Theorem 66 (p. 499), there exists a primitive root r modulo p^a.

From the assumption,

M^{N−1} = [M^{(N−1)/2}]² = (M | N)² = 1 mod N

for all M ∈ Φ(N).

(49)

The Proof (continued)

As r ∈ Φ(N) (prove it), we have

r^{N−1} = 1 mod N.

As r's exponent modulo N = p^a is φ(N) = p^{a−1}(p − 1),

p^{a−1}(p − 1) | N − 1,

which implies that p | N − 1.

But this is impossible given that p | N.

(50)

The Proof (continued)

Third, assume that N = mp^a, where p is an odd prime, gcd(m, p) = 1, m > 1 (not necessarily prime), and a ≥ 2.

The proof mimics that of the second case.

By Theorem 66 (p. 499), there exists a primitive root r modulo p^a.

From the assumption,

M^{N−1} = [M^{(N−1)/2}]² = (M | N)² = 1 mod N

for all M ∈ Φ(N).

(51)

The Proof (continued)

In particular,

M^{N−1} = 1 mod p^a    (7)

for all M ∈ Φ(N).

The Chinese remainder theorem says that there is an M ∈ Φ(N) such that

M = r mod p^a, M = 1 mod m.

Because M = r mod p^a and Eq. (7), r^{N−1} = 1 mod p^a.

(52)

The Proof (concluded)

As r's exponent modulo p^a is φ(p^a) = p^{a−1}(p − 1),

p^{a−1}(p − 1) | N − 1,

which implies that p | N − 1.

But this is impossible given that p | N.

(53)

The Number of Witnesses to Compositeness

Theorem 68 (Solovay and Strassen (1977)) If N is an odd composite, then (M | N) ≠ M^{(N−1)/2} mod N for at least half of M ∈ Φ(N).

By Lemma 67 (p. 500) there is at least one a ∈ Φ(N) such that (a | N) ≠ a^{(N−1)/2} mod N.

Let B = {b_1, b_2, . . . , b_k} ⊆ Φ(N) be the set of all distinct residues such that (b_i | N) = b_i^{(N−1)/2} mod N.

Let aB = { ab_i mod N : i = 1, 2, . . . , k }.

(54)

The Proof (concluded)

|aB| = k.

– ab_i = ab_j mod N implies N | a(b_i − b_j), which is impossible because gcd(a, N) = 1 and N > |b_i − b_j|.

aB ∩ B = ∅ because

(ab_i)^{(N−1)/2} = a^{(N−1)/2} b_i^{(N−1)/2} ≠ (a | N)(b_i | N) = (ab_i | N) mod N.

Combining the above two results, we know

|B|/φ(N) ≤ |B|/|B ∪ aB| = 0.5.

(55)

1: if N is even but N 6= 2 then

2: return “N is composite”;

3: else if N = 2 then

4: return “N is a prime”;

5: end if

6: Pick M ∈ {2, 3, . . . , N − 1} randomly;

7: if gcd(M, N ) > 1 then

8: return “N is composite”;

9: else

10: if (M | N) ≠ M^{(N−1)/2} mod N then

11: return “N is composite”;

12: else

13: return “N is a prime”;

14: end if

15: end if
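Putting the pieces together, here is an illustrative Python version of the algorithm above. It repeats the jacobi sketch from the (2200|999) example so the snippet is self-contained, and it runs several independent rounds, which is my addition.

```python
import random
from math import gcd

def jacobi(a, m):
    """Jacobi symbol (a | m) for odd m > 0 (same sketch as before)."""
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m
    return result if m == 1 else 0

def solovay_strassen(N, rounds=20, rng=random):
    """Each round wrongly calls a composite "prime" with probability at most 0.5."""
    if N % 2 == 0:
        return N == 2                         # even: prime only if N = 2
    for _ in range(rounds):
        M = rng.randrange(2, N)               # M in {2, ..., N - 1}
        if gcd(M, N) > 1:
            return False                      # common factor: N is composite
        if pow(M, (N - 1) // 2, N) != jacobi(M, N) % N:
            return False                      # Jacobi witness: N is composite
    return True                               # "N is a prime" (error prob. <= 2**-rounds)

print([n for n in range(2, 60) if solovay_strassen(n)])   # the primes below 60
print(solovay_strassen(561))   # almost surely False: even Carmichael numbers are caught
```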

(56)

Analysis

The algorithm certainly runs in polynomial time.

There are no false positives (for compositeness).

– When the algorithm says the number is composite, it is always correct.

The probability of a false negative is at most one half.

– If the input is composite, then the probability that the algorithm says the number is a prime is ≤ 0.5.

So it is a Monte Carlo algorithm for compositeness.

(57)

The Improved Density Attack for compositeness

[Figure: among all numbers < N, the witnesses to the compositeness of N via the Jacobi condition and the witnesses via a common factor with N.]
