fsat
• fsat is this function problem:
– Let φ(x1, x2, . . . , xn) be a boolean expression.
– If φ is satisfiable, then return a satisfying truth assignment.
– Otherwise, return “no.”
• We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.
An Algorithm for fsat Using sat
1: t := ε; {Truth assignment.}
2: if φ ∈ sat then
3: for i = 1, 2, . . . , n do
4: if φ[ xi = true ] ∈ sat then
5: t := t ∪ { xi = true };
6: φ := φ[ xi = true ];
7: else
8: t := t ∪ { xi = false };
9: φ := φ[ xi = false ];
10: end if
11: end for
12: return t;
13: else
14: return “no”;
15: end if
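Below is a minimal Python sketch of this self-reducibility argument. The sat oracle here is a brute-force stand-in (so the sketch is exponential overall); under the slide's hypothesis that sat ∈ P one would plug in a polynomial-time decider instead. The clause encoding (lists of signed integers) and the helper names sat_oracle, restrict, and fsat are my own choices, not from the text.

from itertools import product

def sat_oracle(clauses, n):
    """Stand-in for a (hypothetically polynomial-time) sat decider.
    Brute force over all assignments of variables 1..n (exponential!).
    clauses: list of lists of nonzero ints; literal k means x_k, -k means NOT x_k."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

def restrict(clauses, var, value):
    """Return the clauses with x_var fixed to value (drop satisfied clauses,
    delete falsified literals)."""
    out = []
    for clause in clauses:
        if (var in clause and value) or (-var in clause and not value):
            continue                                  # clause already satisfied
        out.append([l for l in clause if abs(l) != var])
    return out

def fsat(clauses, n):
    """fsat via repeated calls to the sat decider, as in the pseudocode above."""
    if not sat_oracle(clauses, n):
        return "no"
    t = {}
    for i in range(1, n + 1):
        if sat_oracle(restrict(clauses, i, True), n):
            t[i] = True
            clauses = restrict(clauses, i, True)
        else:
            t[i] = False
            clauses = restrict(clauses, i, False)
    return t

For example, fsat([[1, -2], [2, 3], [-1, -3]], 3) returns the satisfying assignment {1: True, 2: True, 3: False}.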
Analysis
• If sat can be solved in polynomial time, so can fsat.
– There are ≤ n + 1 calls to the algorithm for sat.a
– Shorter boolean expressions than φ are used in each call to the algorithm for sat.
• Hence sat and fsat are equally hard (or easy).
• Note that this reduction from fsat to sat is not a Karp reduction (recall p. 217).
• Instead, it calls sat multiple times as a subroutine.
aContributed by Ms. Eva Ou (R93922132) on November 24, 2004.
tsp and tsp (d) Revisited
• We are given n cities 1, 2, . . . , n and integer distances dij = dji between any two cities i and j.
• tsp (d) asks if there is a tour with a total distance at most B.
• tsp asks for a tour with the shortest total distance.
– The shortest total distance is at most Σi,j dij.
∗ Recall that the input string contains d11, . . . , dnn.
∗ Thus the shortest total distance is less than 2^| x | in magnitude, where x is the input (why?).
• We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.
An Algorithm for tsp Using tsp (d)
1: Perform a binary search over interval [ 0, 2^| x | ] by calling tsp (d) to obtain the shortest distance, C;
2: for i, j = 1, 2, . . . , n do
3: Call tsp (d) with B = C and dij = C + 1;
4: if “no” then
5: Restore dij to old value; {Edge [ i, j ] is critical.}
6: end if
7: end for
8: return the tour with edges whose dij ≤ C;
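A hedged Python sketch of the same idea follows. The decision oracle tsp_d is a brute-force stand-in (exponential), and the binary search uses the sum of all distances as its upper bound rather than 2^| x |; both simplifications, and the function names, are mine.

from itertools import permutations

def tsp_d(d, B):
    """Stand-in decision oracle: is there a tour of total length <= B?
    Brute force over all tours; d is an n x n symmetric distance matrix."""
    n = len(d)
    best = min(sum(d[t[i]][t[(i + 1) % n]] for i in range(n))
               for t in permutations(range(n)))
    return best <= B

def tsp(d):
    """Recover an optimal tour using only the decision oracle."""
    d = [row[:] for row in d]                # work on a copy
    n = len(d)
    lo, hi = 0, sum(map(sum, d))             # binary search for the optimum C
    while lo < hi:                           # (assumes nonnegative distances)
        mid = (lo + hi) // 2
        if tsp_d(d, mid):
            hi = mid
        else:
            lo = mid + 1
    C = lo
    for i in range(n):                       # tentatively eliminate each edge
        for j in range(i + 1, n):
            old = d[i][j]
            d[i][j] = d[j][i] = C + 1
            if not tsp_d(d, C):              # edge [i, j] is critical; restore it
                d[i][j] = d[j][i] = old
    return C, [(i, j) for i in range(n) for j in range(i + 1, n) if d[i][j] <= C]

On a 4-city example such as d = [[0,1,4,2],[1,0,2,4],[4,2,0,1],[2,4,1,0]], this returns the optimal length 6 together with the n surviving edges (0,1), (0,3), (1,2), (2,3).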
Analysis
• An edge that is not on any optimal tour will be eliminated, with its dij set to C + 1.
• An edge which is not on all remaining optimal tours will also be eliminated.
• So the algorithm ends with n edges which are not eliminated (why?).
• There are O(| x | + n²) calls to the algorithm for tsp (d).
• So if tsp (d) can be solved in polynomial time, so can tsp.
• Hence tsp (d) and tsp are equally hard (or easy).
Function Problems Are Not Harder than Decision Problems If P = NP
Theorem 57 Suppose that P = NP. Then, for every NP language L there exists a polynomial-time TM B that on input x ∈ L outputs a certificate for x.
• We are looking for a certificate in the sense of Proposition 34 (p. 273).
• That is, a certificate y for every x ∈ L such that (x, y) ∈ R,
where R is a polynomially decidable and polynomially balanced relation.
The Proof (concluded)
• Recall the algorithm for fsat on p. 426.
• The reduction in Cook’s Theorem from L to sat is a Levin reduction (p. 277).
• So there is a polynomial-time computable function R such that x ∈ L iff R(x) ∈ sat.
• In fact, the proof gives an efficient algorithm to
transform a satisfying assignment of R(x) to a certificate for x, too.
• Therefore, we can use the algorithm for fsat to come up with an assignment for R(x) and then map it back into a certificate for x.
What If NP = coNP?a
• Can you say similar things?
aContributed by Mr. Ren-Shuo Liu (D98922016) on October 27, 2009.
Randomized Computation
I know that half my advertising works, I just don’t know which half.
— John Wanamaker
I know that half my advertising is a waste of money, I just don’t know which half!
— McGraw-Hill ad.
Randomized Algorithmsa
• Randomized algorithms flip unbiased coins.
• There are important problems for which there are no known efficient deterministic algorithms but for which very efficient randomized algorithms exist.
– Extraction of square roots, for instance.
• There are problems where randomization is necessary.
– Secure protocols.
• A randomized version can be more efficient.
– Parallel algorithm for maximal independent set.
aRabin (1976); Solovay and Strassen (1977).
“Four Most Important Randomized Algorithms”a
1. Primality testing.b
2. Graph connectivity using random walks.c
3. Polynomial identity testing.d
4. Algorithms for approximate counting.e
aTrevisan (2006).
bRabin (1976); Solovay and Strassen (1977).
cAleliunas, Karp, Lipton, Lovász, and Rackoff (1979).
dSchwartz (1980); Zippel (1979).
eSinclair and Jerrum (1989).
Bipartite Perfect Matching
• We are given a bipartite graph G = (U, V, E).
– U = {u1, u2, . . . , un}.
– V = {v1, v2, . . . , vn}.
– E ⊆ U × V .
• We are asked if there is a perfect matching.
– A permutation π of {1, 2, . . . , n} such that (ui, vπ(i)) ∈ E
for all i ∈ {1, 2, . . . , n}.
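In code, the definition amounts to the following check; this is a small sketch of my own, using 0-based indices rather than the slides’ 1-based ones.

def is_perfect_matching(edges, perm):
    """edges: set of pairs (i, j) meaning (u_i, v_j) is in E; perm[i] = pi(i).
    True iff pi is a permutation and (u_i, v_{pi(i)}) is an edge for every i."""
    n = len(perm)
    return sorted(perm) == list(range(n)) and all((i, perm[i]) in edges for i in range(n))

For example, is_perfect_matching({(0, 1), (1, 0)}, [1, 0]) is True.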
A Perfect Matching
[Figure: a bipartite graph with five nodes on each side; a perfect matching is shown.]
Symbolic Determinants
• We are given a bipartite graph G.
• Construct the n × n matrix AG whose (i, j)th entry AGij is a variable xij if (ui, vj) ∈ E and zero otherwise.
Symbolic Determinants (concluded)
• The determinant of AG is
det(AG) = Σ_π sgn(π) ∏_{i=1}^{n} AG_{i,π(i)}. (5)
– π ranges over all permutations of n elements.
– sgn(π) is 1 if π is the product of an even number of transpositions and −1 otherwise.
– Equivalently, sgn(π) = 1 if the number of (i, j)s such that i < j and π(i) > π(j) is even.a
aContributed by Mr. Hwan-Jeu Yu (D95922028) on May 1, 2008.
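The inversion-count characterization of sgn(π) in the last bullet can be coded directly; this short helper (0-based) is my own sketch, not from the slides.

from itertools import combinations

def sgn(perm):
    """Sign of a permutation: +1 if the number of inversions (pairs i < j with
    perm[i] > perm[j]) is even, -1 otherwise."""
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

For instance, sgn((1, 0, 2)) = −1 for a single transposition, and sgn((2, 0, 1)) = +1 for a 3-cycle (two transpositions).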
Determinant and Bipartite Perfect Matching
• In Σ_π sgn(π) ∏_{i=1}^{n} AG_{i,π(i)}, note the following:
– Each summand corresponds to a possible perfect matching π.
– As all variables appear only once, all of these summands are different monomials and will not cancel.
• It is essentially an exhaustive enumeration.
Proposition 58 (Edmonds (1967)) G has a perfect matching if and only if det(AG) is not identically zero.
A Perfect Matching in a Bipartite Graph
[Figure: a bipartite graph with five nodes on each side; one perfect matching is highlighted.]
The Perfect Matching in the Determinant
• The matrix is
AG = ⎡ 0    0    x13  x14  0   ⎤
     ⎢ 0    x22  0    0    0   ⎥
     ⎢ x31  0    0    0    x35 ⎥
     ⎢ x41  0    x43  x44  0   ⎥
     ⎣ x51  0    0    0    x55 ⎦ .
• det(AG) = −x14x22x35x43x51 + x13x22x35x44x51 + x14x22x31x43x55 − x13x22x31x44x55, each monomial denoting a perfect matching.
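This expansion is easy to double-check mechanically. The snippet below (assuming the third-party sympy package is available) builds the symbolic matrix above and expands its determinant; it reproduces exactly the four perfect-matching monomials, up to term ordering.

import sympy as sp

x13, x14, x22, x31, x35, x41, x43, x44, x51, x55 = sp.symbols(
    'x13 x14 x22 x31 x35 x41 x43 x44 x51 x55')

AG = sp.Matrix([
    [0,   0,   x13, x14, 0  ],
    [0,   x22, 0,   0,   0  ],
    [x31, 0,   0,   0,   x35],
    [x41, 0,   x43, x44, 0  ],
    [x51, 0,   0,   0,   x55],
])

print(sp.expand(AG.det()))   # the four matching monomials, with the signs shown above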
How To Test If a Polynomial Is Identically Zero?
• det(AG) is a polynomial in n² variables.
• There are exponentially many terms in det(AG).
• Expanding the determinant polynomial is not feasible.
– Too many terms.
• Observation: If det(AG) is identically zero, then it
remains zero if we substitute arbitrary integers for the variables x11, . . . , xnn.
• What is the likelihood of obtaining a zero when det(AG) is not identically zero?
Number of Roots of a Polynomial
Lemma 59 (Schwartz (1980)) Let p(x1, x2, . . . , xm) ≢ 0 be a polynomial in m variables, each of degree at most d. Let M ∈ Z+. Then the number of m-tuples (x1, x2, . . . , xm) ∈ {0, 1, . . . , M − 1}^m such that p(x1, x2, . . . , xm) = 0 is
≤ mdM^{m−1}.
• By induction on m (consult the textbook).
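As a quick numerical sanity check (not a proof), the following sketch samples a concrete polynomial of my own choosing and compares the observed root density against mdM^{m−1}/M^m = md/M.

import random

# p(x1, x2, x3, x4) = (x1 - x2)(x3 - x4): m = 4 variables, each of degree d = 1.
def p(x1, x2, x3, x4):
    return (x1 - x2) * (x3 - x4)

M, trials = 64, 200_000
zeros = sum(p(*(random.randrange(M) for _ in range(4))) == 0 for _ in range(trials))
print(zeros / trials, "vs. the bound m*d/M =", 4 * 1 / 64)
# The observed density should land near 2/M - 1/M**2 (about 0.031), below 4/64.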
Density Attack
• The density of roots in the domain is at most
mdM^{m−1}/M^m = md/M. (6)
• So suppose p(x1, x2, . . . , xm) ≢ 0.
• Then a random (x1, x2, . . . , xm) ∈ { 0, 1, . . . , M − 1 }^m has a probability of ≤ md/M of being a root of p.
• Note that M is under our control.
Density Attack (concluded)
Here is a sampling algorithm to test if p(x1, x2, . . . , xm) ≢ 0.
1: Choose i1, . . . , im from {0, 1, . . . , M − 1} randomly;
2: if p(i1, i2, . . . , im) ≠ 0 then
3: return “p is not identically zero”;
4: else
5: return “p is (probably) identically zero”;
6: end if
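A Python rendering of this sampling test, treating p as a black-box callable; the function name and the default choice M = 2md (which makes the per-trial error at most 1/2) are my own, and repeating the test drives the error down geometrically.

import random

def probably_nonzero(p, m, d, M=None, trials=20):
    """Monte Carlo identity test for a polynomial p in m variables,
    each of degree at most d, given as a black-box callable."""
    if M is None:
        M = 2 * m * d                      # per-trial error <= m*d/M = 1/2
    for _ in range(trials):
        point = [random.randrange(M) for _ in range(m)]
        if p(*point) != 0:
            return True                    # a nonzero value certifies p is not identically zero
    return False                           # "probably identically zero"; error prob <= 2**-trials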
A Randomized Bipartite Perfect Matching Algorithma
We now return to the original problem of bipartite perfect matching.
1: Choose n² integers i11, . . . , inn from {0, 1, . . . , 2n² − 1}
randomly;
2: Calculate det(AG(i11, . . . , inn)) by Gaussian elimination;
3: if det(AG(i11, . . . , inn)) ≠ 0 then
4: return “G has a perfect matching”;
5: else
6: return “G has no perfect matchings”;
7: end if
aLovász (1979). According to Paul Erdős, Lovász wrote his first significant paper “at the ripe old age of 17.”
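Here is a hedged Python sketch of this test, using exact Gaussian elimination over rationals so that intermediate values cause no precision trouble; the trials parameter folds in the k-fold repetition discussed in the Analysis below. The function names and the Fraction-based elimination are my own choices.

import random
from fractions import Fraction

def det_exact(a):
    """Determinant by Gaussian elimination over exact rationals."""
    n = len(a)
    a = [[Fraction(v) for v in row] for row in a]
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

def has_perfect_matching(edges, n, trials=10):
    """edges: set of 0-based pairs (i, j) with (u_i, v_j) in E.
    Answers False incorrectly with probability at most 2**-trials."""
    M = 2 * n * n                          # draw entries from {0, ..., 2n^2 - 1}
    for _ in range(trials):
        a = [[random.randrange(M) if (i, j) in edges else 0 for j in range(n)]
             for i in range(n)]
        if det_exact(a) != 0:
            return True                    # a nonzero determinant certifies det(AG) != 0
    return False

On the example graph of the earlier slides (edges {(0,2),(0,3),(1,1),(2,0),(2,4),(3,0),(3,2),(3,3),(4,0),(4,4)}), it reports a perfect matching except with probability at most 2^{−10}.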
Analysis
• If G has no perfect matchings, the algorithm will always be correct.
• Suppose G has a perfect matching.
– The algorithm will answer incorrectly with probability at most n²·1/(2n²) = 0.5, taking m = n², d = 1, and M = 2n² in Eq. (6) on p. 447.
– Run the algorithm independently k times and output
“G has no perfect matchings” if and only if they all say no.
– The error probability is now reduced to at most 2−k.
Analysis (concluded)a
• Note that we are calculating
prob[ algorithm answers “no” | G has no perfect matchings ],
prob[ algorithm answers “yes” | G has a perfect matching ].
• We are not calculating
prob[ G has no perfect matchings | algorithm answers “no” ],
prob[ G has a perfect matching | algorithm answers “yes” ].
aThanks to a lively class discussion on May 1, 2008.
But How Large Can det(AG(i11, . . . , inn)) Be?
• It is at most n!·(2n²)^n.
• Stirling’s formula says n! ∼ √(2πn)·(n/e)^n.
• Hence log2 det(AG(i11, . . . , inn)) ≤ log2 n! + n·log2(2n²) = O(n log2 n), so O(n log2 n) bits are sufficient for representing the determinant.
• We skip the details about how to make sure that all intermediate results are of polynomial sizes.
László Lovász (1948–)
An Intriguing Questiona
• Is there an (i11, . . . , inn) that will always give correct answers for all bipartite graphs of 2n nodes?
• A theorem on p. 543 shows that such a witness exists!
• Whether it can be found efficiently is another question.
aThanks to a lively class discussion on November 24, 2004.