
Approximability, Unapproximability, and Between

• knapsack, node cover, maxsat, and max cut have approximation thresholds less than 1.

– knapsack has a threshold of 0 (see p. 586).

– But node cover and maxsat have a threshold larger than 0.

• The situation is maximally pessimistic for tsp: It cannot be approximated unless P = NP (see p. 584).

– The approximation threshold of tsp is 1.

∗ The threshold is at most 1/3 if the distances satisfy the triangle inequality.

– The same holds for independent set.


Unapproximability of tspᵃ

Theorem 75 The approximation threshold of tsp is 1 unless P = NP.

• Suppose there is a polynomial-time ε-approximation algorithm for tsp for some ε < 1.

• We shall construct a polynomial-time algorithm for the NP-complete hamiltonian cycle.

• Given any graph G = (V, E), construct a tsp instance with |V| cities and distances

$$d_{ij} = \begin{cases} 1, & \text{if } \{i, j\} \in E,\\ |V|/(1-\epsilon), & \text{otherwise.} \end{cases}$$

ᵃ Sahni and Gonzalez (1976).

The Proof (concluded)

• Run the alleged ε-approximation algorithm on this tsp instance.

• Suppose a tour of cost |V| is returned.

– This tour must be a Hamiltonian cycle.

• Suppose instead a tour with at least one edge of length |V|/(1 − ε) is returned.

– The total length of this tour is > |V|/(1 − ε).

– Because the algorithm is ε-approximate, the optimum is at least 1 − ε times the returned tour’s length.

– So the optimum tour has a cost exceeding |V|.

– Hence G has no Hamiltonian cycles.
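To make the reduction concrete, here is a minimal Python sketch (the identifiers are mine, not from the lecture notes). The parameter approx_tsp_tour stands in for the assumed polynomial-time ε-approximation algorithm, which is hypothetical unless P = NP; given such a routine, Hamiltonicity of G is decided exactly as in the proof.

def tsp_distances(n, edges, eps):
    """Distance matrix of the reduction: d[i][j] = 1 if {i, j} is an edge of G,
    and |V|/(1 - eps) otherwise (n = |V|; edges is a set of frozensets)."""
    far = n / (1.0 - eps)
    return [[0 if i == j else (1 if frozenset((i, j)) in edges else far)
             for j in range(n)] for i in range(n)]

def has_hamiltonian_cycle(n, edges, approx_tsp_tour, eps=0.5):
    """Decide whether G has a Hamiltonian cycle, given a (hypothetical)
    polynomial-time eps-approximation algorithm for tsp."""
    d = tsp_distances(n, edges, eps)
    tour = approx_tsp_tour(d)  # assumed to return a permutation of range(n)
    cost = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
    # A cost of exactly |V| certifies a Hamiltonian cycle; any other returned tour
    # costs more than |V|/(1 - eps), so the optimum exceeds |V| and no cycle exists.
    return cost == n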


knapsack Has an Approximation Threshold of Zeroᵃ

Theorem 76 For any ε > 0, there is a polynomial-time ε-approximation algorithm for knapsack.

• We have n weights w1, w2, . . . , wn ∈ Z+, a weight limit W, and n values v1, v2, . . . , vn ∈ Z+.ᵇ

• We must find an S ⊆ {1, 2, . . . , n} such that $\sum_{i \in S} w_i \le W$ and $\sum_{i \in S} v_i$ is the largest possible.

• Let

V = max{v1, v2, . . . , vn}.

ᵃ Ibarra and Kim (1975).

ᵇ If the values are fractional, the result is slightly messier, but the main conclusion remains correct. Contributed by Mr. Jr-Ben Tian (R92922045) on December 29, 2004.


The Proof (continued)

• For 0 ≤ i ≤ n and 0 ≤ v ≤ nV, define W(i, v) to be the minimum weight attainable by selecting some among the first i items so that their total value is exactly v.

• Start with W(0, 0) = 0 and W(0, v) = ∞ for all v > 0.

• Then

$$W(i + 1, v) = \min\{\, W(i, v),\; W(i, v - v_{i+1}) + w_{i+1} \,\}.$$

• Finally, pick the largest v such that W(n, v) ≤ W.

• The running time is O(n²V), not polynomial time.

• Key idea: Limit the number of precision bits.
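As a concrete illustration, here is a minimal Python sketch of this dynamic program, using a one-dimensional table indexed by value (an equivalent reformulation of W(i, v); the names are mine, not from the notes).

def knapsack_dp(weights, values, W):
    """Pseudo-polynomial knapsack: O(n^2 V) time, where V = max(values)."""
    n, V = len(values), max(values)
    INF = float("inf")
    # minw[v] = minimum weight of a subset of the items seen so far with total value exactly v
    minw = [0] + [INF] * (n * V)
    for w_i, v_i in zip(weights, values):
        for v in range(n * V, v_i - 1, -1):  # downward, so each item is used at most once
            minw[v] = min(minw[v], minw[v - v_i] + w_i)
    # the answer is the largest value whose minimum weight fits within the limit W
    return max(v for v in range(n * V + 1) if minw[v] <= W)

For example, knapsack_dp([3, 4, 5], [4, 5, 6], 8) returns 10, obtained by taking the first and third items.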


The Proof (continued)

• Given the instance x = (w1, . . . , wn, W, v1, . . . , vn), we define the approximate instance x′ = (w1, . . . , wn, W, v′1, . . . , v′n), where

$$v_i' = 2^b \left\lfloor \frac{v_i}{2^b} \right\rfloor.$$

• Solving x′ takes time O(n²V/2^b).

• The solution S′ is close to the optimum solution S:

$$\sum_{i \in S} v_i \ \ge\ \sum_{i \in S'} v_i \ \ge\ \sum_{i \in S'} v_i' \ \ge\ \sum_{i \in S} v_i' \ \ge\ \sum_{i \in S} (v_i - 2^b) \ \ge\ \sum_{i \in S} v_i - n2^b.$$

The Proof (concluded)

• Hence

$$\sum_{i \in S'} v_i \ \ge\ \sum_{i \in S} v_i - n2^b.$$

• Because V is a lower bound on opt (assuming, without loss of generality, that each wi ≤ W), the relative deviation from the optimum is at most n2^b/V.

• By truncating the last b = ⌊log₂(εV/n)⌋ bits of the values, the algorithm becomes ε-approximate.

• The running time is then O(n²V/2^b) = O(n³/ε), a polynomial in n and 1/ε.
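A minimal Python sketch of the resulting approximation scheme, assuming integer values (the function and variable names are mine): it truncates the b low-order bits as in the proof, solves the truncated instance exactly with the dynamic program above, and reports the chosen items at their original values.

import math

def knapsack_fptas(weights, values, W, eps):
    """eps-approximate 0/1 knapsack in O(n^3 / eps) time: truncate b low-order
    bits of the values, then solve the scaled instance exactly."""
    n, V = len(values), max(values)
    # pick b with 2^b <= eps*V/n, so n*2^b <= eps*V <= eps*OPT (assuming each w_i <= W)
    b = max(0, math.floor(math.log2(eps * V / n)))
    scaled = [v >> b for v in values]  # the slide's v'_i with the common factor 2^b divided out
    top = sum(scaled)
    INF = float("inf")
    # minw[i][v] = minimum weight of a subset of the first i items with scaled value exactly v
    minw = [[0] + [INF] * top] + [[INF] * (top + 1) for _ in range(n)]
    for i in range(1, n + 1):
        for v in range(top + 1):
            take = minw[i - 1][v - scaled[i - 1]] + weights[i - 1] if v >= scaled[i - 1] else INF
            minw[i][v] = min(minw[i - 1][v], take)
    best = max(v for v in range(top + 1) if minw[n][v] <= W)
    chosen, v = [], best
    for i in range(n, 0, -1):  # backtrack to recover the chosen item set S'
        if minw[i][v] != minw[i - 1][v]:
            chosen.append(i - 1)
            v -= scaled[i - 1]
    return sorted(chosen), sum(values[i] for i in chosen)

By the chain of inequalities above, the reported total value is at least (1 − ε) times the optimum.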


Pseudo-Polynomial-Time Algorithms

• Consider problems whose input consists of a collection of integer parameters (tsp, knapsack, etc.).

• An algorithm for such a problem whose running time is a polynomial in the input length and the value (not the length) of the largest integer parameter is a pseudo-polynomial-time algorithm.ᵃ

• On p. 587, we presented a pseudo-polynomial-time algorithm for knapsack that runs in time O(n²V).

• How about tsp (d), another NP-complete problem?

ᵃ Garey and Johnson (1978).


No Pseudo-Polynomial-Time Algorithms for tsp (d)

• By definition, a pseudo-polynomial-time algorithm becomes polynomial-time if each integer parameter is limited to having a value polynomial in the input length.

• Corollary 39 (p. 304) showed that hamiltonian path is reducible to tsp (d) with weights 1 and 2.

• As hamiltonian path is NP-complete, tsp (d) cannot have pseudo-polynomial-time algorithms unless P = NP.

• tsp (d) is said to be strongly NP-hard.

• Many weighted versions of NP-complete problems are strongly NP-hard.


Polynomial-Time Approximation Scheme

• Algorithm M is a polynomial-time approximation scheme (PTAS) for a problem if:

– For each ε > 0 and instance x of the problem, M runs in time polynomial (depending on ε) in |x|.

∗ Think of ε as a constant.

– M is an ε-approximation algorithm for every ε > 0.

Fully Polynomial-Time Approximation Scheme

• A polynomial-time approximation scheme is fully polynomial (FPTAS) if the running time depends polynomially on |x| and 1/ε.

– Maybe the best result one can hope for on a "hard" problem.

– For instance, knapsack admits an FPTAS with running time O(n³/ε) (p. 586).


Square of G

• Let G = (V, E) be an undirected graph.

• G² has node set {(v1, v2) : v1, v2 ∈ V} and edge set
{{(u, u′), (v, v′)} : (u = v ∧ {u′, v′} ∈ E) ∨ {u, v} ∈ E}.

[Figure: a three-node graph G (nodes 1, 2, 3) and its square G² (nodes (1,1) through (3,3)).]
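A small Python sketch of this construction (the identifiers are mine): given the node and edge sets of G, it returns the node and edge sets of G² exactly as defined above.

from itertools import product

def graph_square(V, E):
    """Return (nodes, edges) of G^2 for an undirected graph G = (V, E).

    E is a set of frozensets {u, v}. Nodes of G^2 are ordered pairs (v1, v2);
    {(u, u'), (v, v')} is an edge iff (u == v and {u', v'} in E) or {u, v} in E."""
    nodes = set(product(V, V))
    edges = set()
    for (u, u2), (v, v2) in product(nodes, nodes):
        if (u, u2) == (v, v2):
            continue
        if (u == v and frozenset((u2, v2)) in E) or frozenset((u, v)) in E:
            edges.add(frozenset({(u, u2), (v, v2)}))
    return nodes, edges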


Independent Sets of G and G²

Lemma 77 G = (V, E) has an independent set of size k if and only if G² has an independent set of size k².

• Suppose G has an independent set I ⊆ V of size k.

• {(u, v) : u, v ∈ I} is an independent set of size k² of G².

[Figure: the three-node graph G and its square G², as on the previous slide.]


The Proof (continued)

• Suppose G² has an independent set I² of size k².

• U ≡ {u : ∃v ∈ V, (u, v) ∈ I²} is an independent set of G.

[Figure: the three-node graph G and its square G², repeated for reference.]

• |U| is the number of "rows" that the nodes in I² occupy.

The Proof (concluded)ᵃ

• If |U| ≥ k, then we are done.

• Now assume |U| < k.

• As the k² nodes in I² cover fewer than k "rows," there must be a "row" containing more than k nodes of I².

• The second coordinates of those > k nodes form an independent set of size > k in G, as each "row" is a copy of G.

ᵃ Thanks to a lively class discussion on December 29, 2004.
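The two cases of this argument can be sketched in Python (a sketch under the assumption that I2 is an independent set of G² with at least k² nodes; the names are mine).

from collections import defaultdict

def independent_set_from_square(I2, k):
    """Return k nodes of G forming an independent set, given an independent
    set I2 of G^2 of size at least k^2 (following the proof of Lemma 77)."""
    rows = defaultdict(list)  # first coordinate ("row" of G^2) -> second coordinates in I2
    for (u, v) in I2:
        rows[u].append(v)
    if len(rows) >= k:
        # the occupied rows, i.e. the first coordinates, are independent in G
        return list(rows)[:k]
    # fewer than k rows hold at least k^2 nodes, so some row holds more than k of them;
    # its second coordinates are independent in G because each row is a copy of G
    return max(rows.values(), key=len)[:k]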


Approximability of independent set

• The approximation threshold of the maximum independent set is either zero or one (it is one!).

Theorem 78 If there is a polynomial-time ε-approximation algorithm for independent set for any 0 < ε < 1, then there is a polynomial-time approximation scheme.

• Let G be a graph with a maximum independent set of size k.

• Suppose there is an O(n^i)-time ε-approximation algorithm for independent set.


The Proof (continued)

• By Lemma 77 (p. 595), the maximum independent set of G² has size k².

• Apply the algorithm to G².

• The running time is O(n^{2i}).

• The resulting independent set has size ≥ (1 − ε)k².

• By the construction in Lemma 77 (p. 595), we can obtain from it an independent set of size ≥ √((1 − ε)k²) = √(1 − ε) · k for G.

• Hence there is a (1 − √(1 − ε))-approximation algorithm for independent set.


The Proof (concluded)

• In general, we can apply the algorithm to G^{2^ℓ} to obtain a (1 − (1 − ε)^{2^{−ℓ}})-approximation algorithm for independent set.

• The running time is n^{2^ℓ i}.ᵃ

• Now pick ℓ = ⌈log₂(log(1 − ε)/log(1 − ε′))⌉.

• The running time becomes n^{i·log(1 − ε)/log(1 − ε′)}.

• It is an ε′-approximation algorithm for independent set.

ᵃ It is not fully polynomial.
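The choice of ℓ can be checked with a short calculation (mine, not from the notes). The amplified guarantee must satisfy 1 − (1 − ε)^{2^{−ℓ}} ≤ ε′, and

$$1 - (1-\epsilon)^{2^{-\ell}} \le \epsilon' \iff (1-\epsilon)^{2^{-\ell}} \ge 1-\epsilon' \iff 2^{-\ell} \log(1-\epsilon) \ge \log(1-\epsilon') \iff 2^{\ell} \ge \frac{\log(1-\epsilon)}{\log(1-\epsilon')},$$

where the last step flips the inequality because both logarithms are negative. The ℓ picked above is the smallest integer with this property, and the ceiling costs at most a factor of 2 in the exponent. The exponent is polynomial for every fixed ε′ but grows without bound as ε′ → 0, which is exactly why the scheme is a PTAS and not an FPTAS.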

Comments

• independent set and node cover are reducible to each other (Corollary 37, p. 286).

• node cover has an approximation threshold at most 0.5 (p. 569).

• But independent set is unapproximable (see the textbook).

• independent set limited to graphs with degree ≤ k is called k-degree independent set.

• k-degree independent set is approximable (see the textbook).


On P vs NP


Densityᵃ

The density of a language L ⊆ Σ* is defined as densL(n) = |{x ∈ L : |x| ≤ n}|.

• If L = {0, 1}*, then densL(n) = 2^{n+1} − 1.

• So the density function grows at most exponentially.

• For a unary language L ⊆ {0}*, densL(n) ≤ n + 1.

– Because L ⊆ {ε, 0, 00, . . . , 0ⁿ, . . .} (0ⁿ being the string of n 0's).

ᵃ Berman and Hartmanis (1977).


Sparsity

• Sparse languages are languages with polynomially bounded density functions.

• Dense languages are languages with superpolynomial density functions.

Self-Reducibility for sat

• An algorithm exploits self-reducibility if it reduces the problem to the same problem with a smaller size.

• Let φ be a boolean expression in n variables x1, x2, . . . , xn.

• t ∈ {0, 1}^j is a partial truth assignment for x1, x2, . . . , xj.

• φ[ t ] denotes the expression after substituting the truth values of t for x1, x2, . . . , x_{|t|} in φ.


An Algorithm for sat with Self-Reduction

We call the algorithm below with empty t.

1: if | t | = n then
2:   return φ[ t ];
3: else
4:   return φ[ t0 ] ∨ φ[ t1 ];
5: end if

The above algorithm runs in exponential time, by visiting all the partial assignments (or nodes on a depth-n binary tree).
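For concreteness, here is a runnable Python version of this recursion; the CNF representation (each clause a list of nonzero integers, +i for xi and −i for its negation) is my choice, not the notes'.

def satisfiable(clauses, n, t=()):
    """Self-reducing exhaustive search over partial truth assignments t."""
    if len(t) == n:
        # phi[t]: with every variable assigned, check that each clause has a true literal
        return all(any((lit > 0) == t[abs(lit) - 1] for lit in clause) for clause in clauses)
    # branch on the next unassigned variable: phi[t0] or phi[t1]
    return satisfiable(clauses, n, t + (False,)) or satisfiable(clauses, n, t + (True,))

For example, satisfiable([[1, 2], [-1], [-2, 3]], 3) returns True (x1 false, x2 true, x3 true).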


NP-Completeness and Densityᵃ

Theorem 79 If a unary language U ⊆ {0}* is NP-complete, then P = NP.

• Suppose there is a reduction R from sat to U.

• We shall use R to guide us in finding the truth assignment that satisfies a given boolean expression φ with n variables if it is satisfiable.

• Specifically, we use R to prune the exponential-time exhaustive search on p. 606.

• The trick is to keep the already discovered results φ[ t ] in a table H.

aBerman (1978).


1: if | t | = n then
2:   return φ[ t ];
3: else
4:   if (R(φ[ t ]), v) is in table H then
5:     return v;
6:   else
7:     if φ[ t0 ] = "satisfiable" or φ[ t1 ] = "satisfiable" then
8:       Insert (R(φ[ t ]), 1) into H;
9:       return "satisfiable";
10:     else
11:       Insert (R(φ[ t ]), 0) into H;
12:       return "unsatisfiable";
13:     end if
14:   end if
15: end if
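A Python sketch of this pruned search (same CNF representation as before; R stands in for the assumed reduction from sat to the unary language U, so it is hypothetical, and any encoding of φ[ t ] may be passed to it):

def satisfiable_pruned(clauses, n, R, t=(), H=None):
    """The exhaustive search above, pruned with a table H keyed by R(phi[t]).
    R stands in for the assumed (hypothetical) reduction from sat to the unary language U."""
    if H is None:
        H = {}
    if len(t) == n:
        return all(any((lit > 0) == t[abs(lit) - 1] for lit in clause) for clause in clauses)
    key = R(clauses, t)  # encodes R(phi[t]); equal keys imply equal satisfiability
    if key in H:  # an equivalent subproblem has already been settled
        return H[key]
    result = (satisfiable_pruned(clauses, n, R, t + (False,), H)
              or satisfiable_pruned(clauses, n, R, t + (True,), H))
    H[key] = result
    return result

Because R produces unary strings of length at most p(n), the table never holds more than polynomially many distinct keys, which is what the counting argument below exploits.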

The Proof (continued)

• Since R is a reduction, R(φ[ t ]) = R(φ[ t′ ]) implies that φ[ t ] and φ[ t′ ] are both satisfiable or both unsatisfiable.

• R(φ[ t ]) has length at most p(n) for some polynomial p, because R runs in logarithmic space.

• As R maps to unary strings, there are only polynomially many (at most p(n)) distinct values of R(φ[ t ]).

• How many nodes of the complete binary tree (of invocations/truth assignments) need to be visited?

• If that number is a polynomial, the overall algorithm runs in polynomial time and we are done.


The Proof (continued)

• A search of the table takes time O(p(n)) in the random access memory model.

• The running time is O(Mp(n)), where M is the total number of invocations of the algorithm.

• The invocations of the algorithm form a binary tree of depth at most n.


The Proof (continued)

• There is a set T = {t1, t2, . . .} of invocations (i.e., partial truth assignments) such that:

– |T | ≥ (M − 1)/(2n).

– All invocations in T are recursive (nonleaves).

– None of the elements of T is a prefix of another.



The Proof (continued)

• All invocations t ∈ T have different R(φ[ t ]) values.

– Consider two distinct s, t ∈ T; neither is a prefix of the other.

– Hence the invocation of one of them started after the invocation of the other had terminated.

– If R(φ[ s ]) = R(φ[ t ]), the one that was invoked second would have looked the value up in H, and therefore would not be recursive, a contradiction.

• The existence of T implies that there are at least (M − 1)/(2n) different R(φ[ t ]) values in the table.


The Proof (concluded)

• We already know that there are at most p(n) such values.

• Hence (M − 1)/(2n) ≤ p(n).

• Thus M ≤ 2np(n) + 1.

• The running time is therefore O(Mp(n)) = O(np²(n)).

• We comment that this theorem holds for any sparse language, not just unary ones.ᵃ

ᵃ Mahaney (1980).
