# An Algorithm for fsat Using sat
### Function Problems

• Decision problems are yes/no problems (sat, tsp (d), etc.).

• Function problems require a solution (a satisfying truth assignment, a best tsp tour, etc.).

• Optimization problems are clearly function problems.

• What is the relation between function and decision problems?

• Which one is harder?

### Function Problems Cannot Be Easier than Decision Problems

• If we know how to generate a solution, we can solve the corresponding decision problem.

– If you can ﬁnd a satisfying truth assignment eﬃciently, then sat is in P.

– If you can ﬁnd the best tsp tour eﬃciently, then tsp (d) is in P.

• But we shall see immediately that decision problems can be as hard as the corresponding function problems.

### fsat

• fsat is this function problem:

– Let φ(x1, x2, . . . , xn) be a boolean expression.

– If φ is satisﬁable, then return a satisfying truth assignment.

– Otherwise, return “no.”

• We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.

• sat is a subroutine (black box) that returns “yes” or “no” on the satisfiability of the input.

### An Algorithm for fsat Using sat

1: t := ∅; {Truth assignment.}
2: if φ ∈ sat then
3:   for i = 1, 2, . . . , n do
4:     if φ[ xi = true ] ∈ sat then
5:       t := t ∪ { xi = true };
6:       φ := φ[ xi = true ];
7:     else
8:       t := t ∪ { xi = false };
9:       φ := φ[ xi = false ];
10:    end if
11:  end for
12:  return t;
13: else
14:  return “no”;
15: end if
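The self-reduction above can be sketched in Python. This is a toy illustration: the brute-force `sat` function stands in for the assumed polynomial-time black box, and a formula is encoded (an assumption of this sketch) as a list of CNF clauses of signed integers, where literal k means xk = true and −k means xk = false.

```python
from itertools import product

def sat(clauses, n):
    """Black-box SAT oracle (brute force here; stands in for a
    hypothetical polynomial-time solver)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def substitute(clauses, var, value):
    """Return the clause list after fixing x_var := value."""
    out = []
    for c in clauses:
        if (var in c and value) or (-var in c and not value):
            continue  # clause already satisfied; drop it
        out.append([l for l in c if abs(l) != var])
    return out

def fsat(clauses, n):
    """Self-reduction: one initial sat call, then one per variable."""
    if not sat(clauses, n):
        return "no"
    t = {}
    for i in range(1, n + 1):
        fixed_true = substitute(clauses, i, True)
        if sat(fixed_true, n):
            t[i] = True
            clauses = fixed_true
        else:
            t[i] = False
            clauses = substitute(clauses, i, False)
    return t
```

Besides the initial check, the loop makes exactly n oracle calls, matching the ≤ n + 1 bound in the analysis that follows.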

### Analysis

• If sat can be solved in polynomial time, so can fsat.

– There are ≤ n + 1 calls to the algorithm for sat.a

– Boolean expressions shorter than φ are used in each call to the algorithm for sat.

• Hence sat and fsat are equally hard (or easy).

aContributed by Ms. Eva Ou (R93922132) on November 24, 2004.

### Analysis (concluded)

• Note that this reduction from fsat to sat is not a Karp reduction.a

– Will the set of NP-complete problems diﬀer under diﬀerent reductions?b

• Instead, it calls sat multiple times as a subroutine, and its answers guide the search on the computation tree.

aRecall p. 275 and p. 280.

bContributed by Mr. Yu-Ming Lu (R06723032, D08922008) and Mr. Han-Ting Chen (R10922073) on December 9, 2021.

### tsp and tsp (d) Revisited

• We are given n cities 1, 2, . . . , n and integer distances dij = dji between any two cities i and j.

• tsp (d) asks if there is a tour with a total distance at most B.

• tsp asks for a tour with the shortest total distance.

– The shortest total distance is at most $\sum_{i,j} d_{ij}$.

∗ Recall that the input string contains d11, . . . , dnn.

• Thus the shortest total distance is less than $2^{|x|}$ in magnitude, where x is the input (why?).

• We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.

### An Algorithm for tsp Using tsp (d)

1: Perform a binary search over interval [ 0, 2^{| x |} ] by calling tsp (d) to obtain the shortest distance, C;

2: for i, j = 1, 2, . . . , n do

3: Call tsp (d) with B = C and dij = C + 1;

4: if “no” then

5: Restore dij to its old value; {Edge [ i, j ] is critical.}

6: end if

7: end for

8: return the tour with edges whose dij ≤ C;
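Here is a Python sketch of this reduction (a toy illustration, not the slides’ exact procedure): `tsp_d` is a brute-force stand-in for the assumed polynomial-time decision oracle, and the binary search uses the bound Σi,j dij from the previous slide as its upper endpoint.

```python
from itertools import permutations

def tsp_d(d, B):
    """Decision oracle: is there a tour of total distance <= B?
    Brute force here; stands in for a hypothetical poly-time solver."""
    n = len(d)
    return any(
        sum(d[t[k]][t[(k + 1) % n]] for k in range(n)) <= B
        for t in permutations(range(n))
    )

def tsp(d):
    """Recover the optimal cost and an optimal tour's edges using
    only the decision oracle."""
    n = len(d)
    # Binary search for the optimal cost C; sum of all d[i][j] is an
    # upper bound on the shortest total distance.
    lo, hi = 0, sum(map(sum, d))
    while lo < hi:
        mid = (lo + hi) // 2
        if tsp_d(d, mid):
            hi = mid
        else:
            lo = mid + 1
    C = lo
    # Try to eliminate each edge by pricing it out at C + 1.
    d = [row[:] for row in d]
    for i in range(n):
        for j in range(i + 1, n):
            old = d[i][j]
            d[i][j] = d[j][i] = C + 1
            if not tsp_d(d, C):          # edge [i, j] is critical
                d[i][j] = d[j][i] = old  # restore it
    # Surviving tour edges are exactly those with d[i][j] <= C.
    return C, [(i, j) for i in range(n) for j in range(i + 1, n)
               if d[i][j] <= C]
```

After elimination, only the n critical edges retain dij ≤ C, so reading off the tour is immediate.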

### Analysis

• An edge which is not on any remaining optimal tours will be eliminated, with its dij set to C + 1.

• So the algorithm ends with n edges which are not eliminated (why?).

• This is true even if there are multiple optimal tours!a

aThanks to a lively class discussion on November 12, 2013 and December 9, 2021.

### Analysis (concluded)

• There are O(| x | + n²) calls to the algorithm for tsp (d).

• Each call has an input length of O(| x |).

• So if tsp (d) can be solved in polynomial time, so can tsp.

• Hence tsp (d) and tsp are equally hard (or easy).a

aHow about counting the number of optimal tsp tours? This is related to #P-completeness (p. 874). Contributed by Mr. Vincent Hwang (R10922138) on December 9, 2021.

## Randomized Computation

I know that half my advertising works,
I just don’t know which half.
— John Wanamaker

### Randomized Algorithms

• Randomized algorithms ﬂip unbiased coins.

• There are important problems for which there are no known eﬃcient deterministic algorithms but for which very eﬃcient randomized algorithms exist.

– Extraction of square roots, for instance.

• There are problems where randomization is necessary.

– Secure protocols.

• Randomized version can be more eﬃcient.

– Parallel algorithms for maximal independent set.b

### Randomized Algorithms (concluded)

• Are randomized algorithms algorithms?a

• Coin ﬂips are occasionally used in politics.b

aPascal, “Truth is so delicate that one has only to depart the least bit from it to fall into error.”

bIn the 2016 Iowa Democratic caucuses, e.g. (see http://edition.cnn.com/2016/02/02/politics/hillary-clinton-coin-flip-iowa-bernie-sanders/index.html).

### “Four Most Important Randomized Algorithms”a

1. Primality testing.b

2. Graph connectivity using random walks.c

3. Polynomial identity testing.d

4. Algorithms for approximate counting.e

aTrevisan (2006).

bRabin (1976); Solovay & Strassen (1977).

cAleliunas, Karp, Lipton, Lovász, & Rackoff (1979).

dSchwartz (1980); Zippel (1979).

eSinclair & Jerrum (1989).

### Bipartite Perfect Matching

• We are given a bipartite graph G = (U, V, E).

– U = { u1, u2, . . . , un }.

– V = { v1, v2, . . . , vn }.

– E ⊆ U × V .

• We are asked if there is a perfect matching.

– A permutation π of { 1, 2, . . . , n } such that (ui, vπ(i)) ∈ E for all i ∈ { 1, 2, . . . , n }.

• A perfect matching contains n edges.

: : : : :

;

;

;

;

;

(18)

### Symbolic Determinants

• We are given a bipartite graph G.

• Construct the n × n matrix AG whose (i, j)th entry AGij is a symbolic variable xij if (ui, vj) ∈ E and 0 otherwise:

$$
A^G_{ij} =
\begin{cases}
x_{ij}, & \text{if } (u_i, v_j) \in E,\\
0, & \text{otherwise.}
\end{cases}
$$

### Symbolic Determinants (continued)

• The matrix for the bipartite graph G on p. 533 isa

$$
A^G =
\begin{bmatrix}
0 & 0 & x_{13} & x_{14} & 0\\
0 & x_{22} & 0 & 0 & 0\\
x_{31} & 0 & 0 & 0 & x_{35}\\
x_{41} & 0 & x_{43} & x_{44} & 0\\
x_{51} & 0 & 0 & 0 & x_{55}
\end{bmatrix}. \tag{8}
$$

aThe idea is similar to the Tanner (1981) graph in coding theory.

### Symbolic Determinants (concluded)

• The determinant of AG is

$$
\det(A^G) = \sum_{\pi} \operatorname{sgn}(\pi) \prod_{i=1}^{n} A^G_{i,\pi(i)}. \tag{9}
$$

– π ranges over all permutations of n elements.

– sgn(π) is 1 if π is the product of an even number of transpositions and −1 otherwise.a

• det(AG) contains n! terms, many of which may be 0s.

aEquivalently, sgn(π) = 1 if the number of (i, j)s such that i < j and π(i) > π(j) is even. Contributed by Mr. Hwan-Jeu Yu (D95922028) on May 1, 2008.

### Determinant and Bipartite Perfect Matching

• In $\sum_{\pi} \operatorname{sgn}(\pi) \prod_{i=1}^{n} A^G_{i,\pi(i)}$, note the following:

– Each summand corresponds to a possible perfect matching π.

– Nonzero summands $\prod_{i=1}^{n} A^G_{i,\pi(i)}$ are distinct monomials and will not cancel.

• det(AG) is essentially an exhaustive enumeration.

Proposition 65 (Edmonds, 1967) G has a perfect matching if and only if det(AG) is not identically zero.

[Figure: the bipartite graph G of p. 533, with a perfect matching.]
### Perfect Matching and Determinant (concluded)

• The matrix is (p. 535)

$$
A^G =
\begin{bmatrix}
0 & 0 & x_{13} & x_{14} & 0\\
0 & x_{22} & 0 & 0 & 0\\
x_{31} & 0 & 0 & 0 & x_{35}\\
x_{41} & 0 & x_{43} & x_{44} & 0\\
x_{51} & 0 & 0 & 0 & x_{55}
\end{bmatrix}.
$$

• $\det(A^G) = -x_{14}x_{22}x_{35}x_{43}x_{51} + x_{13}x_{22}x_{35}x_{44}x_{51} + x_{14}x_{22}x_{31}x_{43}x_{55} - x_{13}x_{22}x_{31}x_{44}x_{55}$.

### How To Test If a Polynomial Is Identically Zero?

• det(AG) is a polynomial in n² variables.

• It has, potentially, exponentially many terms.

• Expanding the determinant polynomial is thus infeasible.

• If det(AG) ≡ 0, then it remains zero if we substitute arbitrary integers for the variables x11, . . . , xnn.

• When det(AG) ≡ 0, what is the likelihood of obtaining a zero?

### Number of Roots of a Polynomial

Lemma 66 (Schwartz, 1980) Let $p(x_1, x_2, \ldots, x_m) \not\equiv 0$ be a polynomial in m variables, each of degree at most d. Let $M \in \mathbb{Z}^+$. Then the number of m-tuples $(x_1, x_2, \ldots, x_m) \in \{\, 0, 1, \ldots, M-1 \,\}^m$ such that $p(x_1, x_2, \ldots, x_m) = 0$ is at most $mdM^{m-1}$.

• By induction on m (consult the textbook).

### Density Attack

• The density of roots in the domain is at most

$$
\frac{mdM^{m-1}}{M^m} = \frac{md}{M}. \tag{10}
$$

• So suppose p(x1, x2, . . . , xm) ≢ 0.

• Then a random

(x1, x2, . . . , xm) ∈ { 0, 1, . . . , M − 1 }m has a probability of ≤ md/M of being a root of p.

• Note that M is under our control!

– One can raise M to lower the error probability, e.g.

### Density Attack (concluded)

Here is a sampling algorithm to test if p(x1, x2, . . . , xm) ≡ 0.

1: Choose i1, . . . , im from { 0, 1, . . . , M − 1 } randomly;

2: if p(i1, i2, . . . , im) ≠ 0 then

3: return “p is not identically zero”;

4: else

5: return “p is (probably) identically zero”;

6: end if
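In Python, the sampling algorithm is a one-liner over an evaluation procedure; the two example polynomials below are hypothetical test cases, not from the slides.

```python
import random

def probably_identically_zero(p, m, d, M):
    """One trial of the density attack on an m-variate polynomial of
    degree at most d per variable: evaluate p at a random point of
    {0, ..., M-1}^m.  If p is not identically zero, the verdict
    "identically zero" is wrong with probability at most m*d/M."""
    point = [random.randrange(M) for _ in range(m)]
    return p(*point) == 0

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero ...
zero_p = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
# ... while x*y - 1 is not.
nonzero_p = lambda x, y: x * y - 1
```

With m = d = 2 and M = 1000, a nonzero polynomial slips through a single trial with probability at most 4/1000.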

### Analysis

• If p(x1, x2, . . . , xm) ≡ 0, the algorithm will always be correct as p(i1, i2, . . . , im) = 0.

• Suppose p(x1, x2, . . . , xm) ≢ 0.

– The algorithm will answer incorrectly with probability at most md/M by Eq. (10) on p. 542.

• We next return to the original problem of bipartite perfect matching.

### A Randomized Bipartite Perfect Matching Algorithma

1: Choose n² integers i11, . . . , inn from { 0, 1, . . . , 2n² − 1 } randomly; {So M = 2n².}

2: Calculate det(AG(i11, . . . , inn)) by Gaussian elimination;

3: if det(AG(i11, . . . , inn)) ≠ 0 then

4: return “G has a perfect matching”;

5: else

6: return “G has (probably) no perfect matchings”;

7: end if
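Lovász’s algorithm above can be sketched in Python. Two liberties are taken in this sketch: the graph is encoded as a set of 0-indexed edge pairs (an assumed encoding), and the determinant is computed exactly with `fractions.Fraction` to sidestep, rather than solve, the intermediate-size issue discussed below.

```python
import random
from fractions import Fraction

def det(a):
    """Exact determinant by Gaussian elimination over the rationals."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col] != 0), None)
        if piv is None:
            return Fraction(0)          # a zero column: determinant is 0
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign                # row swap flips the sign
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]
    return result

def probably_has_perfect_matching(edges, n):
    """One Monte Carlo trial with M = 2n^2: false negatives occur
    with probability at most 1/2; 'yes' answers are always correct."""
    M = 2 * n * n
    a = [[random.randrange(M) if (i, j) in edges else 0 for j in range(n)]
         for i in range(n)]
    return det(a) != 0
```

Repeating the trial k times drives the false-negative probability down to 2^−k, as the analysis below shows.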

aLovász (1979). According to Paul Erdős, Lovász wrote his first significant paper “at the ripe old age of 17.”

### Analysis

• If G has no perfect matchings, the algorithm will always be correct as det(AG(i11, . . . , inn)) = 0.

• Suppose G has a perfect matching.

– The algorithm will answer incorrectly with probability at most md/M = 0.5 with m = n², d = 1, and M = 2n² in Eq. (10) on p. 542.

• Run the algorithm independently k times.

• Output “G has no perfect matchings” if and only if all say “(probably) no perfect matchings.”

• The error probability is now reduced to at most 2^−k.

### Remarksa

• Note that we calculated

prob[ algorithm answers “no” | G has no perfect matchings ],
prob[ algorithm answers “yes” | G has a perfect matching ].

– And they are 1 and ≥ 1/2, respectively.

• We did not calculateb

prob[ G has no perfect matchings | algorithm answers “no” ],
prob[ G has a perfect matching | algorithm answers “yes” ].

aThanks to a lively class discussion on May 1, 2008.

bNumerical Recipes in C (1988), “statistics is not a branch of mathematics!” Similar issues arise in MAP (maximum a posteriori) estimates.

### How Large Can det(AG(i11, . . . , inn)) Be?

• It is at mosta

$$
n! \, (2n^2)^n.
$$

• Stirling’s formula says $n! \sim \sqrt{2\pi n}\, (n/e)^n$.

• Hence

$$
\log_2 \det(A^G(i_{11}, \ldots, i_{nn})) = O(n \log_2 n)
$$

bits are sufficient for representing the determinant.

• We skip the details about how to make sure that all intermediate results are of polynomial size.

### An Intriguing Questiona

• Is there an (i11, . . . , inn) that will always give correct answers for the algorithm on p. 545?

• A theorem on p. 642 shows that such an (i11, . . . , inn) exists!

– Whether it can be found eﬃciently is another matter.

• Once (i11, . . . , inn) is available, the algorithm can be made deterministic.

– Is it an algorithm for bipartite perfect matching?b

aThanks to a lively class discussion on November 24, 2004.

bWe have one algorithm for each n — unless there is an algorithm to generate such (i11, . . . , inn) for all n. Contributed by Mr. Han-Ting Chen (R10922073).

### Randomization vs. Nondeterminisma

• What are the diﬀerences between randomized algorithms and nondeterministic algorithms?

• Think of a randomized algorithm as a nondeterministic one but with a probability associated with every guess/branch.

• So each computation path of a randomized algorithm has a probability associated with it.

aContributed by Mr. Olivier Valery (D01922033) and Mr. Hasan Alhasan (D01922034) on November 27, 2012.

### Monte Carlo Algorithmsa

• The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

– If the algorithm ﬁnds that a matching exists, it is always correct (no false positives; no type I errors).

– If the algorithm answers in the negative, then it may make an error (false negatives; type II errors).

∗ And the error probability must be small.

aMetropolis & Ulam (1949).

### Monte Carlo Algorithms (continued)

• The algorithm makes a false negative with probability ≤ 0.5.a

• Again, this probability refers tob

prob[ algorithm answers “no” | G has a perfect matching ],

not

prob[ G has a perfect matching | algorithm answers “no” ].

aEquivalently, among the coin ﬂip sequences, at most half of them lead to the wrong answer.

### Monte Carlo Algorithms (concluded)

• This probability 0.5 is not over the space of all graphs or determinants, but over the algorithm’s own coin ﬂips.

– It holds for any bipartite graph.

• In contrast, to calculate

prob[ G has a perfect matching | algorithm answers “no” ], we will need the distribution of G.

• But it is an empirical statement that is very hard to verify.

### The Markov Inequality

Lemma 67 Let x be a random variable taking nonnegative integer values. Then for any k > 0,

$$
\text{prob}[\, x \ge kE[x] \,] \le 1/k.
$$

• Let pi denote the probability that x = i. Then

$$
E[x] = \sum_i i p_i = \sum_{i < kE[x]} i p_i + \sum_{i \ge kE[x]} i p_i
\ge \sum_{i \ge kE[x]} i p_i
\ge kE[x] \sum_{i \ge kE[x]} p_i
= kE[x] \cdot \text{prob}[\, x \ge kE[x] \,].
$$
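A quick empirical sanity check of Lemma 67 in Python, for x uniform on { 0, 1, . . . , 10 } and k = 2 (the particular distribution is an arbitrary choice for illustration):

```python
import random

random.seed(1)
# x uniform on {0, 1, ..., 10}; E[x] = 5.
samples = [random.randint(0, 10) for _ in range(100_000)]
mean = sum(samples) / len(samples)
k = 2
frac = sum(1 for s in samples if s >= k * mean) / len(samples)
# Markov: prob[x >= k E[x]] <= 1/k.  Here roughly only x = 10 qualifies,
# so the empirical fraction is about 1/11, comfortably below 1/2.
```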

### fsat for k-sat Formulas (p. 519)

• Let φ(x1, x2, . . . , xn) be a k-sat formula.

• If φ is satisﬁable, then return a satisfying truth assignment.

• Otherwise, return “no.”

• We next propose a randomized algorithm for this problem.

### A Random Walk Algorithm for φ in CNF Form

1: Start with an arbitrary truth assignment T ;

2: for i = 1, 2, . . . , r do

3: if T |= φ then

4: return “φ is satisﬁable with T ”;

5: else

6: Let c be an unsatisﬁed clause in φ under T ; {All of its literals are false under T .}

7: Pick any x of these literals at random;

8: Modify T to make x true;

9: end if

10: end for

11: return “φ is unsatisﬁable”;
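The walk can be sketched in Python; as before, a clause is encoded (an assumption of this sketch) as a list of signed integers, where literal k means xk and −k means ¬xk.

```python
import random

def random_walk_sat(clauses, n, r):
    """Random walk on truth assignments: repeatedly make true a random
    literal of some unsatisfied clause, for at most r flips."""
    T = [random.choice([False, True]) for _ in range(n)]  # arbitrary start
    def satisfied(c):
        return any(T[abs(lit) - 1] == (lit > 0) for lit in c)
    for _ in range(r):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return T  # T |= phi
        lit = random.choice(random.choice(unsat))  # a random false literal
        T[abs(lit) - 1] = lit > 0                  # flip it to true
    return None  # "phi is (probably) unsatisfiable"
```

For 2sat with r = 2n², Theorem 68 below shows the walk finds a satisfying assignment with probability at least 0.5 whenever one exists.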

### 3sat vs. 2sat Again

• Note that if φ is unsatisfiable, the algorithm will answer “unsatisfiable.”

• The random walk algorithm needs expected exponential time for 3sat.

– In fact, it runs in expected O((1.333 · · · + ε)^n) time with r = 3n,a much better than O(2^n).b

• We will show immediately that it works well for 2sat.

• The state of the art as of 2014 is expected O(1.30704^n) time for 3sat and expected O(1.46899^n) time for 4sat.c

aUse this setting per run of the algorithm.

### Random Walk Works for 2sat

Theorem 68 Suppose the random walk algorithm with r = 2n² is applied to any satisfiable 2sat problem with n variables. Then a satisfying truth assignment will be discovered with probability at least 0.5.

• Let T̂ be a truth assignment such that T̂ |= φ.

• Assume our starting T differs from T̂ in i values.

– Their Hamming distance is i.

• Recall T is arbitrary.

### The Proof

• Let t(i) denote the expected number of repetitions of the flipping stepa until a satisfying truth assignment is found.

• It can be shown that t(i) is ﬁnite.

• t(0) = 0 because it means that T = T̂ and hence T |= φ.

• If T is not T̂ or any other satisfying truth assignment, then we need to flip the coin at least once.

• We flip a coin to pick among the 2 literals of a clause not satisfied by the present T .

• At least one of the 2 literals is true under T̂ because T̂ |= φ.

### The Proof (continued)

• So we have at least a 50% chance of moving closer to T̂.

• Thus

$$
t(i) \le \frac{t(i-1) + t(i+1)}{2} + 1
$$

for 0 < i < n.

– Inequality is used because, for example, T may differ from T̂ in both literals.

• It must also hold that

$$
t(n) \le t(n-1) + 1
$$

because at i = n, we can only decrease i.

### The Proof (continued)

• Now, put the necessary relations together:

$$
t(0) = 0, \tag{11}
$$
$$
t(i) \le \frac{t(i-1) + t(i+1)}{2} + 1, \quad 0 < i < n, \tag{12}
$$
$$
t(n) \le t(n-1) + 1. \tag{13}
$$

• Technically, this is a one-dimensional random walk with an absorbing barrier at i = 0 and a reﬂecting barrier at i = n (if we replace “≤” with “=”).a

aThe proof in the textbook does exactly that. But a student pointed

### The Proof (continued)

• Add up the relations for 2t(1), 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtaina

$$
2t(1) + 2t(2) + \cdots + 2t(n-1) + t(n)
\le t(0) + t(1) + 2t(2) + \cdots + 2t(n-2) + 2t(n-1) + t(n) + 2(n-1) + 1.
$$

• Simplify it to yield

$$
t(1) \le 2n - 1. \tag{14}
$$

aAdding up the relations for t(1), t(2), t(3), . . . , t(n−1) will also work, thanks to Mr. Yen-Wu Ti (D91922010).

### The Proof (continued)

• Add up the relations for 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain

$$
2t(2) + \cdots + 2t(n-1) + t(n)
\le t(1) + t(2) + 2t(3) + \cdots + 2t(n-2) + 2t(n-1) + t(n) + 2(n-2) + 1.
$$

• Simplify it to yield

$$
t(2) \le t(1) + 2n - 3 \le 2n - 1 + 2n - 3 = 4n - 4.
$$

### The Proof (continued)

• Continuing the process, we shall obtaina

$$
t(i) \le 2in - i^2.
$$

• The worst upper bound happens when i = n, in which case

$$
t(n) \le n^2.
$$

• We conclude that

$$
t(i) \le t(n) \le n^2
$$

for 0 ≤ i ≤ n.

### The Proof (concluded)

• So the expected number of steps is at most n².

• The algorithm picks r = 2n².

• Apply the Markov inequality (p. 555) with k = 2 to yield the desired probability of 0.5.

• The proof does not yield a polynomial bound for 3sat.a

aContributed by Mr. Cheng-Yu Lee (R95922035) on November 8, 2006.

### Boosting the Performance

• We can pick r = 2mn² to have an error probability of ≤ 1/(2m) by Markov’s inequality.

• Alternatively, with the same running time, we can run the “r = 2n²” algorithm m times.

• The error probability is now reduced to ≤ 2^−m.

### Primality Tests

• primes asks if a number N is a prime.

• The classic algorithm tests if k | N for k = 2, 3, . . . , ⌊√N⌋.

• But it runs in Ω(2^{(log₂ N)/2}) steps.

• compositeness asks if a number is composite.

### The Fermat Test for Primality

Fermat’s “little” theorem (p. 505) suggests the following primality test for any given number N :

1: Pick a number a randomly from { 1, 2, . . . , N − 1 };

2: if aN−1 ≡ 1 mod N then

3: return “N is composite”;

4: else

5: return “N is (probably) a prime”;

6: end if
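The test above in Python, using fast modular exponentiation (`pow` with three arguments). The `trials` parameter, repeating the random choice of a, is an added convenience of this sketch, not part of the slide’s single-trial version.

```python
import random

def fermat_test(N, trials=20):
    """Fermat primality test; may wrongly report Carmichael numbers
    (and, with small probability, other composites) as prime."""
    if N < 4:
        return N in (2, 3)
    for _ in range(trials):
        a = random.randrange(1, N)       # a in {1, ..., N-1}
        if pow(a, N - 1, N) != 1:        # a^(N-1) not congruent to 1 (mod N)
            return False                 # definitely composite
    return True                          # probably prime
```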

### The Fermat Test for Primality (continued)

• Carmichael numbers are composite numbers that will pass the Fermat test for all a ∈ { 1, 2, . . . , N − 1 } relatively prime to N .a

– The Fermat test will return “N is a prime” for all Carmichael numbers N .

• If there are ﬁnitely many Carmichael numbers, store them for matches before running the Fermat test.

• Unfortunately, there are infinitely many such numbers.b

– The number of Carmichael numbers less than N exceeds N^{2/7} for N large enough.

aCarmichael (1910). Lo (1994) mentions an investment strategy based

### The Fermat Test for Primality (concluded)

• The Fermat test will fail all of them.

• So the Fermat test is an incorrect algorithm for primes.

• Now suppose N is not a Carmichael number but is still composite.

• We need many a ∈ { 1, 2, . . . , N − 1 } such that a^{N−1} ≢ 1 mod N.

• Otherwise, the correct answer will come only with a vanishing probability (say 1/N ).a

aContributed by Mr. Vincent Hwang (R10922138) on December 9, 2021.
