
A Few Calculations

• Let p = 13.

• From p. 362, we know φ(p − 1) = 4.

• Hence R(12) = 4.

• And there are 4 primitive roots of p.

• As Φ(p − 1) = {1, 5, 7, 11}, the primitive roots are g^1, g^5, g^7, g^11 for any primitive root g.
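These claims are easy to check by brute force. Below is a small Python sketch (not from the original slides) that computes φ(12), lists the primitive roots of 13, and confirms they are exactly g^1, g^5, g^7, g^11 for a primitive root g.

    # A quick brute-force check of the claims above for p = 13 (a sketch, not
    # from the slides).
    from math import gcd

    p = 13

    def order(a, p):
        # multiplicative order of a modulo p
        k, x = 1, a % p
        while x != 1:
            x, k = x * a % p, k + 1
        return k

    phi = sum(1 for k in range(1, p - 1) if gcd(k, p - 1) == 1)   # φ(p − 1)
    roots = [a for a in range(1, p) if order(a, p) == p - 1]      # primitive roots
    g = roots[0]
    print(phi, roots, sorted(pow(g, e, p) for e in (1, 5, 7, 11)))
    # Prints 4, [2, 6, 7, 11], and the same four numbers as powers of g.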

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 368

The Other Direction of Theorem 47 (p. 346)

• We must show that p is a prime if there is a number r (called a primitive root) such that

1. r^{p−1} = 1 mod p, and

2. r^{(p−1)/q} ≠ 1 mod p for all prime divisors q of p − 1.

• Suppose p is not a prime.

• We proceed to show that no primitive roots exist.

• Suppose r^{p−1} = 1 mod p (note gcd(r, p) = 1).

• We will show that the 2nd condition must be violated.

The Proof (concluded)

• r^{φ(p)} = 1 mod p by the Fermat-Euler theorem (p. 362).

• Because p is not a prime, φ(p) < p − 1.

• Let k be the smallest positive integer such that r^k = 1 mod p.

• As k ≤ φ(p), k < p − 1.

• Note that k | p − 1 because r^{p−1} = 1 mod p. Let q be a prime divisor of (p − 1)/k > 1.

• Then k|(p − 1)/q.

• Therefore, by virtue of the definition of k, r^{(p−1)/q} = 1 mod p.

• But this violates the 2nd condition.
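As a sanity check, here is a short Python sketch (not from the slides) of the two conditions in Theorem 47; the brute-force factoring helper is our own assumption. For the prime 13 a witness r exists, while for a composite number such as 15 no r passes both conditions.

    # Sketch of the primality criterion of Theorem 47 (helper functions are ours).
    def prime_divisors(n):
        divs, d = set(), 2
        while d * d <= n:
            while n % d == 0:
                divs.add(d)
                n //= d
            d += 1
        if n > 1:
            divs.add(n)
        return divs

    def is_witness(r, p):
        # conditions 1 and 2 of Theorem 47
        return (pow(r, p - 1, p) == 1 and
                all(pow(r, (p - 1) // q, p) != 1 for q in prime_divisors(p - 1)))

    print(is_witness(2, 13))                               # True: 13 is prime
    print(any(is_witness(r, 15) for r in range(2, 15)))    # False: 15 is composite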

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 370

Function Problems

• Decision problems are yes/no problems (sat, tsp (d), etc.).

• Function problems require a solution (a satisfying truth assignment, a best tsp tour, etc.).

• Optimization problems are clearly function problems.

• What is the relation between function and decision problems?

• Which one is harder?


Function Problems Cannot Be Easier than Decision Problems

• If we know how to generate a solution, we can solve the corresponding decision problem.

– If you can find a satisfying truth assignment efficiently, then sat is in P.

– If you can find the best tsp tour efficiently, then tsp (d) is in P.

• But decision problems can be as hard as the corresponding function problems.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 372

fsat

• fsat is this function problem:

– Let φ(x1, x2, . . . , xn) be a boolean expression.

– If φ is satisfiable, then return a satisfying truth assignment.

– Otherwise, return “no.”

• We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.

An Algorithm for fsat Using sat

1: t := ∅;
2: if φ ∈ sat then
3:   for i = 1, 2, . . . , n do
4:     if φ[ xi = true ] ∈ sat then
5:       t := t ∪ { xi = true };
6:       φ := φ[ xi = true ];
7:     else
8:       t := t ∪ { xi = false };
9:       φ := φ[ xi = false ];
10:    end if
11:  end for
12:  return t;
13: else
14:  return “no”;
15: end if
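The pseudocode above translates almost line for line into Python. In the sketch below (not from the slides), is_satisfiable stands in for a hypothetical polynomial-time sat oracle and substitute(phi, x, v) fixes variable x to value v in the formula; both helpers are assumptions for illustration.

    # Sketch of the self-reduction from fsat to sat.  `is_satisfiable` and
    # `substitute` are hypothetical helpers, not defined in the slides.
    def fsat(phi, variables, is_satisfiable, substitute):
        if not is_satisfiable(phi):
            return "no"
        t = {}
        for x in variables:
            if is_satisfiable(substitute(phi, x, True)):
                t[x] = True
                phi = substitute(phi, x, True)
            else:
                # phi is satisfiable but not with x = true, so x = false works
                t[x] = False
                phi = substitute(phi, x, False)
        return t   # a satisfying truth assignment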

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 374

Analysis

• There are ≤ n + 1 calls to the algorithm for sat.^a

• Each subsequent call to the algorithm for sat uses a boolean expression shorter than φ.

• So if sat can be solved in polynomial time, so can fsat.

• Hence sat and fsat are equally hard (or easy).

^a Contributed by Ms. Eva Ou (R93922132) on November 24, 2004.


tsp and tsp (d) Revisited

• We are given n cities 1, 2, . . . , n and integer distances dij = dji between any two cities i and j.

• The tsp asks for a tour with the shortest total distance (not just the value of the shortest total distance, as earlier).

– The shortest total distance must be at most 2^| x |, where x is the input.

• tsp (d) asks if there is a tour with a total distance at most B.

• We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 376

An Algorithm for tsp Using tsp (d)

1: Perform a binary search over the interval [ 0, 2^| x | ] by calling tsp (d) to obtain the shortest distance C;

2: for i, j = 1, 2, . . . , n do

3: Call tsp (d) with B = C and dij = C + 1;

4: if “no” then

5: Restore dij to old value; {Edge [ i, j ] is critical.}

6: end if

7: end for

8: return the tour with edges whose dij ≤ C;
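A Python sketch of the same reduction (not from the slides): tsp_d(d, B) is a hypothetical yes/no oracle for tsp (d), d is the symmetric distance matrix, and hi plays the role of the upper bound 2^| x | on the optimal tour length.

    # Sketch of recovering an optimal tour from a tsp (d) oracle.
    def tsp_from_decision(d, n, hi, tsp_d):
        lo = 0                                   # binary search for the optimum C
        while lo < hi:
            mid = (lo + hi) // 2
            if tsp_d(d, mid):
                hi = mid
            else:
                lo = mid + 1
        C = lo
        for i in range(n):                       # try to eliminate every edge
            for j in range(i + 1, n):
                old = d[i][j]
                d[i][j] = d[j][i] = C + 1
                if not tsp_d(d, C):              # edge [i, j] is critical: restore it
                    d[i][j] = d[j][i] = old
        tour_edges = [(i, j) for i in range(n)
                      for j in range(i + 1, n) if d[i][j] <= C]
        return C, tour_edges                     # the n surviving edges form a tour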

Analysis

• An edge that is not on any optimal tour will be eliminated, with its dij set to C + 1.

• An edge which is not on all remaining optimal tours will also be eliminated.

• So the algorithm ends with n edges which are not eliminated (why?).

• There are O(| x | + n^2) calls to the algorithm for tsp (d).

• So if tsp (d) can be solved in polynomial time, so can tsp.

• Hence tsp (d) and tsp are equally hard (or easy).

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 378

Randomized Computation


I know that half my advertising works, I just don’t know which half.

— John Wanamaker

I know that half my advertising is a waste of money, I just don’t know which half!

— McGraw-Hill ad.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 380

Randomized Algorithms^a

• Randomized algorithms flip unbiased coins.

• There are important problems for which there are no known efficient deterministic algorithms but for which very efficient randomized algorithms exist.

– Extraction of square roots, for instance.

• There are problems where randomization is necessary.

– Secure protocols.

• A randomized version can be more efficient.

– Parallel algorithm for maximal independent set.

• Are randomized algorithms algorithms?

^a Rabin (1976); Solovay and Strassen (1977).

Bipartite Perfect Matching

• We are given a bipartite graph G = (U, V, E).

– U = {u1, u2, . . . , un}.

– V = {v1, v2, . . . , vn}.

– E ⊆ U × V .

• We are asked if there is a perfect matching.

– A permutation π of {1, 2, . . . , n} such that (ui, vπ(i)) ∈ E for all ui ∈ U.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 382

A Perfect Matching

[Figure omitted: a perfect matching in a bipartite graph.]


Symbolic Determinants

• Given a bipartite graph G, construct the n × n matrix A^G whose (i, j)th entry A^G_ij is a variable xij if (ui, vj) ∈ E and zero otherwise.

• The determinant of A^G is

  det(A^G) = Σ_π sgn(π) ∏_{i=1}^{n} A^G_{i,π(i)}.    (5)

– π ranges over all permutations of n elements.

– sgn(π) is 1 if π is the product of an even number of transpositions and −1 otherwise.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 384

Determinant and Bipartite Perfect Matching

• In Σ_π sgn(π) ∏_{i=1}^{n} A^G_{i,π(i)}, note the following:

– Each summand corresponds to a possible perfect matching π.

– As all variables appear only once, all of these summands are different monomials and will not cancel.

• It is essentially an exhaustive enumeration.

Proposition 56 (Edmonds (1967)) G has a perfect matching if and only if det(A^G) is not identically zero.

A Perfect Matching in a Bipartite Graph

[Figure omitted.]

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 386

The Perfect Matching in the Determinant

• The matrix is

  A^G =
  [ 0    0    x13  x14  0   ]
  [ 0    x22  0    0    0   ]
  [ x31  0    0    0    x35 ]
  [ x41  0    x43  x44  0   ]
  [ x51  0    0    0    x55 ]

• det(A^G) = −x14 x22 x35 x43 x51 + x13 x22 x35 x44 x51 + x14 x22 x31 x43 x55 − x13 x22 x31 x44 x55, each monomial denoting a perfect matching.
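This expansion can be reproduced with a computer algebra system; the following sympy sketch (our illustration, not part of the slides) builds the matrix above and expands its determinant.

    # Sketch: the symbolic determinant of the example matrix, via sympy.
    from sympy import Matrix, symbols

    x13, x14, x22, x31, x35, x41, x43, x44, x51, x55 = symbols(
        "x13 x14 x22 x31 x35 x41 x43 x44 x51 x55")

    A = Matrix([[0,   0,   x13, x14, 0  ],
                [0,   x22, 0,   0,   0  ],
                [x31, 0,   0,   0,   x35],
                [x41, 0,   x43, x44, 0  ],
                [x51, 0,   0,   0,   x55]])

    print(A.det().expand())   # the four monomials above, one per perfect matching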


How To Test If a Polynomial Is Identically Zero?

• det(A^G) is a polynomial in n^2 variables.

• There are exponentially many terms in det(A^G).

• Expanding the determinant polynomial is not feasible.

– Too many terms.

• Observation: If det(A^G) is identically zero, then it remains zero if we substitute arbitrary integers for the variables x11, . . . , xnn.

• What is the likelihood of obtaining a zero when det(A^G) is not identically zero?

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 388

Number of Roots of a Polynomial

Lemma 57 (Schwartz (1980)) Let p(x1, x2, . . . , xm) ≢ 0 be a polynomial in m variables, each of degree at most d. Let M ∈ Z+. Then the number of m-tuples (x1, x2, . . . , xm) ∈ {0, 1, . . . , M − 1}^m such that p(x1, x2, . . . , xm) = 0 is at most mdM^{m−1}.

• By induction on m (consult the textbook).

Density Attack

• The density of roots in the domain is at most

  mdM^{m−1} / M^m = md/M.

• So suppose p(x1, x2, . . . , xm) 6≡ 0.

• Then a random (x1, x2, . . . , xm) ∈ { 0, 1, . . . , M − 1 }^m has a probability of ≤ md/M of being a root of p.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 390

Density Attack (concluded)

Here is a sampling algorithm to test if p(x1, x2, . . . , xm) ≢ 0.

1: Choose i1, . . . , im from {0, 1, . . . , M − 1} randomly;

2: if p(i1, i2, . . . , im) 6= 0 then

3: return “p is not identically zero”;

4: else

5: return “p is identically zero”;

6: end if
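A minimal Python sketch of this test (not from the slides): p is given as an ordinary function of m variables, and M is chosen by the caller so that md/M is small.

    # Sketch of the sampling test for polynomial identity.
    import random

    def probably_nonzero(p, m, M):
        point = [random.randrange(M) for _ in range(m)]
        return p(*point) != 0    # True means "p is not identically zero"

    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero: always False.
    print(probably_nonzero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 100))
    # x*y - 1 is not identically zero: True except with probability <= md/M = 0.02.
    print(probably_nonzero(lambda x, y: x*y - 1, 2, 100))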


A Randomized Bipartite Perfect Matching Algorithm^a

We now return to the original problem of bipartite perfect matching.

1: Choose n^2 integers i11, . . . , inn from {0, 1, . . . , b − 1} randomly;
2: Calculate det(A^G(i11, . . . , inn)) by Gaussian elimination;
3: if det(A^G(i11, . . . , inn)) ≠ 0 then
4:   return “G has a perfect matching”;
5: else
6:   return “G has no perfect matchings”;
7: end if

^a Lovász (1979).
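Below is a Python sketch of this algorithm (our illustration). It uses sympy's exact determinant rather than floating-point Gaussian elimination, and it already folds in the k independent repetitions discussed in the analysis that follows.

    # Sketch of the randomized test.  E is a set of pairs (i, j), 0 <= i, j < n.
    import random
    from sympy import Matrix

    def has_perfect_matching(n, E, trials=20):
        b = 2 * n * n
        for _ in range(trials):
            A = Matrix(n, n, lambda i, j: random.randrange(b) if (i, j) in E else 0)
            if A.det() != 0:
                return True       # never wrong when it answers "yes"
        return False              # wrong with probability <= 2**(-trials)

    # The bipartite graph behind the 5 x 5 example above:
    E = {(0, 2), (0, 3), (1, 1), (2, 0), (2, 4),
         (3, 0), (3, 2), (3, 3), (4, 0), (4, 4)}
    print(has_perfect_matching(5, E))    # True (with overwhelming probability)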

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 392

Analysis

• Pick b = 2n^2.

• If G has no perfect matchings, the algorithm will always be correct.

• Suppose G has a perfect matching.

– The algorithm will answer incorrectly with probability at most n^2 d/b = 0.5 because d = 1.

– Run the algorithm independently k times and output “G has no perfect matchings” if they all say no.

– The error probability is now reduced to at most 2−k.

• Is there an (i11, . . . , inn) that will always give correct answers for all bipartite graphs of 2n nodes?^a

^a Thanks to a lively class discussion on November 24, 2004.

Perfect Matching for General Graphs

• Page 382 is about bipartite perfect matching.

• Now we are given a graph G = (V, E).

– V = {v1, v2, . . . , v2n}.

• We are asked if there is a perfect matching.

– A permutation π of {1, 2, . . . , 2n} such that (vi, vπ(i)) ∈ E for all vi ∈ V.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 394

The Tutte Matrix^a

• Given a graph G = (V, E), construct the 2n × 2n Tutte matrix T^G such that

  T^G_ij =   xij   if (vi, vj) ∈ E and i < j,
            −xij   if (vi, vj) ∈ E and i > j,
              0    otherwise.

• The Tutte matrix is a skew-symmetric symbolic matrix.

• Similar to Proposition 56 (p. 385):

Proposition 58 G has a perfect matching if and only if det(T^G) is not identically zero.

^a William Thomas Tutte (1917–2002).
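The randomized test carries over to general graphs by replacing A^G with the Tutte matrix. A Python sketch (our illustration, again using sympy's exact determinant):

    # Sketch: perfect-matching test for a general graph via the Tutte matrix.
    import random
    from sympy import Matrix

    def tutte_has_perfect_matching(num_vertices, edges, trials=20):
        E = {frozenset(e) for e in edges}
        b = 2 * num_vertices * num_vertices
        for _ in range(trials):
            x = {e: random.randrange(b) for e in E}
            def entry(i, j):
                e = frozenset((i, j))
                if i == j or e not in E:
                    return 0
                return x[e] if i < j else -x[e]   # skew-symmetric
            T = Matrix(num_vertices, num_vertices, entry)
            if T.det() != 0:
                return True
        return False

    # A 4-cycle has a perfect matching; a star on 4 vertices does not.
    print(tutte_has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # almost surely True
    print(tutte_has_perfect_matching(4, [(0, 1), (0, 2), (0, 3)]))          # always False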


Monte Carlo Algorithms^a

• The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

– If the algorithm finds that a matching exists, it is always correct (no false positives).

– If the algorithm answers in the negative, then it may make an error (false negative).

• The algorithm makes a false negative with probability ≤ 0.5.

• This probability is not over the space of all graphs or determinants, but over the algorithm’s own coin flips.

– It holds for any bipartite graph.

^a Metropolis and Ulam (1949).

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 396

The Markov Inequality^a

Lemma 59 Let x be a random variable taking nonnegative integer values. Then for any k > 0,

prob[ x ≥ kE[ x ] ] ≤ 1/k.

• Let pi denote the probability that x = i.

  E[ x ] = Σ_i i·pi
         = Σ_{i < kE[ x ]} i·pi + Σ_{i ≥ kE[ x ]} i·pi
         ≥ kE[ x ] × prob[ x ≥ kE[ x ] ].

^a Andrei Andreyevich Markov (1856–1922).

An Application of Markov’s Inequality

• Algorithm C runs in expected time T (n) and always gives the right answer.

• Consider an algorithm that runs C for time kT (n) and rejects the input if C does not stop within the time bound.

• By Markov’s inequality, this new algorithm runs in time kT (n) and gives the wrong answer with probability ≤ 1/k.

• By running this algorithm m times, we reduce the error probability to ≤ k^−m.
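This repetition scheme is easy to express as a wrapper; the sketch below is ours, with run_C_with_budget standing in for a hypothetical time-limited run of C.

    # Sketch of the wrapper described above.  `run_C_with_budget(x, budget)` is a
    # hypothetical helper: it runs algorithm C on input x for at most `budget`
    # steps and returns C's (always correct) answer, or None on a timeout.
    def amplified(x, T, k, m, run_C_with_budget):
        for _ in range(m):
            answer = run_C_with_budget(x, k * T)
            if answer is not None:
                return answer        # correct whenever C finishes in time
        return "reject"              # wrong with probability <= k ** (-m)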

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 398

An Application of Markov’s Inequality (concluded)

• Suppose, instead, we run the algorithm once for the same total running time mkT (n) and reject the input if it does not stop within the time bound.

• By Markov’s inequality, this new algorithm gives the wrong answer with probability ≤ 1/(mk).

• This is a far cry from the previous algorithm’s error probability of ≤ k^−m.

• The loss comes from the fact that Markov’s inequality does not take advantage of any specific feature of the random variable.


fsat for k-sat Formulas (p. 373)

• Let φ(x1, x2, . . . , xn) be a k-sat formula.

• If φ is satisfiable, then return a satisfying truth assignment.

• Otherwise, return “no.”

• We next propose a randomized algorithm for this problem.

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 400

A Random Walk Algorithm for φ in CNF Form

1: Start with an arbitrary truth assignment T ;

2: for i = 1, 2, . . . , r do

3: if T |= φ then

4: return “φ is satisfiable with T ”;

5: else

6: Let c be an unsatisfied clause in φ under T ; {All of its literals are false under T .}

7: Pick one of these literals, x, at random;

8: Modify T to make x true;

9: end if

10: end for

11: return “φ is unsatisfiable”;
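A compact Python sketch of this random walk (not from the slides); a formula is encoded as a list of clauses, each clause a list of nonzero integers where +i stands for xi and −i for its negation.

    # Sketch of the random-walk algorithm for a CNF formula.
    import random

    def random_walk(phi, n, r):
        T = {i: random.choice([True, False]) for i in range(1, n + 1)}   # arbitrary T
        for _ in range(r):
            unsatisfied = [c for c in phi
                           if not any(T[abs(l)] == (l > 0) for l in c)]
            if not unsatisfied:
                return T                          # "phi is satisfiable with T"
            lit = random.choice(random.choice(unsatisfied))   # a literal of some
            T[abs(lit)] = (lit > 0)                           # unsatisfied clause
        return None                               # "phi is unsatisfiable" (may err)

    # (x1 or x2) and (not x1 or x2) is satisfied by x2 = true.
    print(random_walk([[1, 2], [-1, 2]], n=2, r=6))   # almost surely an assignment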

3sat vs. 2sat Again

• Note that if φ is unsatisfiable, the algorithm will not refute it.

• The random walk algorithm needs expected exponential time for 3sat.

– In fact, it runs in expected O((1.333 · · · + ε)^n) time with r = 3n, much better than O(2^n).^a

• We will show immediately that it works well for 2sat.

• The state of the art is expected O(1.324^n) time for 3sat and expected O(1.474^n) time for 4sat.^b

^a Schöning (1999).

^b Iwama and Tamaki (2004).

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University. Page 402
