(1)

Reductions and Completeness

(2)

Degrees of Difficulty

When is a problem more difficult than another?

B reduces to A if there is a transformation R which for every input x of B yields an equivalent input R(x) of A.

The answer to x for B is the same as the answer to R(x) for A.

– There must be restrictions on the complexity of computing R.

Otherwise, R(x) might as well solve B.

E.g., R(x) = “yes” if and only if x ∈ B!
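A minimal sketch (Python, not part of the original slides; the function names solve_B, R, and algorithm_for_A are assumptions) of the pattern a reduction gives us:

def solve_B(x, R, algorithm_for_A):
    # One call to A's algorithm on the transformed input R(x);
    # its yes/no answer is returned unchanged -- exactly a reduction from B to A.
    return algorithm_for_A(R(x))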

(3)

Degrees of Difficulty (concluded)

We say problem A is at least as hard as problem B if B reduces to A.

This makes intuitive sense: If A is able to solve your problem B after only a little bit of work (R), then A must be at least as hard.

If A were easy, it combined with R (which is also easy) would make B easy, too.a

aThanks to a lively class discussion on October 13, 2009.

(4)

Reduction

[Figure: the input x is transformed by R into R(x), which is fed to the algorithm for A; its yes/no answer is returned unchanged.]

Solving problem B by calling the algorithm for problem A once and without further processing its answer.

(5)

Comments


Suppose B reduces to A via a transformation R.

The input x is an instance of B.

The output R(x) is an instance of A.

R(x) may not span all possible instances of A.b

So some instances of A may never appear in the range of the reduction R.

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 29, 2003.

bR(x) may not be onto; Mr. Alexandr Simak (D98922040) on October 13, 2009.

(6)

Reduction between Languages

Language L1 is reducible to L2 if there is a function R computable by a deterministic TM in space O(log n).

Furthermore, for all inputs x, x ∈ L1 if and only if R(x) ∈ L2.

R is said to be a (Karp) reduction from L1 to L2.

Note that by Theorem 22 (p. 189), R runs in polynomial time.

Suppose R is a reduction from L1 to L2.

Then solving “R(x) ∈ L2” is an algorithm for solving

“x ∈ L1.”

(7)

A Paradox?

Degree of difficulty is not defined in terms of absolute complexity.

So a language B ∈ TIME(n^99) may be “easier” than a language A ∈ TIME(n^3).

– This happens when B is reducible to A.

But isn’t this a contradiction if the best algorithm for B requires n^99 steps?

That is, how can a problem requiring n^99 steps be reducible to a problem solvable in n^3 steps?

(8)

A Paradox? (concluded)

The so-called contradiction does not hold.

When we solve the problem “x ∈ B?” via “R(x) ∈ A?”, we must consider the time spent computing R(x) and the length |R(x)|.

If |R(x)| = Ω(n^33), then answering “R(x) ∈ A?” takes Ω((n^33)^3) = Ω(n^99) steps, which is fine.

Suppose, on the other hand, that |R(x)| = o(n^33).

Then computing R(x) must itself take Ω(n^99) steps, so that the overall time for answering “x ∈ B?” remains Ω(n^99).

In either case, the contradiction disappears.

(9)

hamiltonian path

A Hamiltonian path of a graph is a path that visits every node of the graph exactly once.

Suppose graph G has n nodes: 1, 2, . . . , n.

A Hamiltonian path can be expressed as a permutation π of { 1, 2, . . . , n } such that

π(i) = j means the ith position is occupied by node j.

(π(i), π(i + 1)) ∈ G for i = 1, 2, . . . , n − 1.

hamiltonian path asks if a graph has a Hamiltonian path.
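A small sketch (Python, not from the slides; the edge representation and 1-based list for π are assumptions, and G is treated as undirected) of checking the two conditions above:

def is_hamiltonian_path(pi, n, edges):
    # pi[i] = j means position i (1-based, pi[0] unused) is occupied by node j.
    if sorted(pi[1:]) != list(range(1, n + 1)):       # pi must be a permutation of 1..n
        return False
    return all((pi[i], pi[i + 1]) in edges or (pi[i + 1], pi[i]) in edges
               for i in range(1, n))                  # consecutive nodes must be adjacent

# Example: the path 1 - 2 - 3 in a triangle.
print(is_hamiltonian_path([None, 1, 2, 3], 3, {(1, 2), (2, 3), (3, 1)}))   # True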

(10)

Reduction of hamiltonian path to sat

Given a graph G, we shall construct a CNF R(G) such that R(G) is satisfiable iff G has a Hamiltonian path.

R(G) has n^2 boolean variables xij, 1 ≤ i, j ≤ n.

xij means

the ith position in the Hamiltonian path is occupied by node j.

(11)

[Figure: a graph on nodes 1–9 with a Hamiltonian path, together with the corresponding truth assignment and permutation below.]

x12 = x21 = x34 = x45 = x53 = x69 = x76 = x88 = x97 = 1;

π(1) = 2, π(2) = 1, π(3) = 4, π(4) = 5, π(5) = 3, π(6) = 9, π(7) = 6, π(8) = 8, π(9) = 7.

(12)

The Clauses of R(G) and Their Intended Meanings

1. Each node j must appear in the path.

x1j ∨ x2j ∨ · · · ∨ xnj for each j.

2. No node j appears twice in the path.

¬xij ∨ ¬xkj for all i, j, k with i ≠ k.

3. Every position i on the path must be occupied.

xi1 ∨ xi2 ∨ · · · ∨ xin for each i.

4. No two nodes j and k occupy the same position in the path.

¬xij ∨ ¬xik for all i, j, k with j ≠ k.

5. Nonadjacent nodes i and j cannot be adjacent in the path.

¬xki ∨ ¬xk+1,j for all (i, j) ∉ G and k = 1, 2, . . . , n − 1.
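A sketch (Python, not from the lecture; the DIMACS-style encoding of xij as integer (i−1)n + j, with a minus sign for negation, and the undirected-edge representation are assumptions) that emits exactly these five groups of clauses:

from itertools import combinations

def hamiltonian_path_to_sat(n, edges):
    E = {frozenset(e) for e in edges}                  # G is taken to be undirected here
    var = lambda i, j: (i - 1) * n + j                 # index of x_ij: "position i holds node j"
    clauses = []
    for j in range(1, n + 1):                          # 1. node j appears somewhere in the path
        clauses.append([var(i, j) for i in range(1, n + 1)])
    for j in range(1, n + 1):                          # 2. node j appears at most once
        for i, k in combinations(range(1, n + 1), 2):
            clauses.append([-var(i, j), -var(k, j)])
    for i in range(1, n + 1):                          # 3. position i is occupied
        clauses.append([var(i, j) for j in range(1, n + 1)])
    for i in range(1, n + 1):                          # 4. position i holds at most one node
        for j, k in combinations(range(1, n + 1), 2):
            clauses.append([-var(i, j), -var(i, k)])
    for u in range(1, n + 1):                          # 5. nonadjacent nodes cannot be consecutive
        for v in range(1, n + 1):
            if u == v or frozenset((u, v)) in E:
                continue
            for k in range(1, n):
                clauses.append([-var(k, u), -var(k + 1, v)])
    return clauses                                     # a list of integer clauses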

(13)

The Proof

R(G) contains O(n^3) clauses.

R(G) can be computed efficiently (simple exercise).

Suppose T |= R(G).

From clauses of 1 and 2, for each node j there is a unique position i such that T |= xij.

From clauses of 3 and 4, for each position i there is a unique node j such that T |= xij.

So there is a permutation π of the nodes such that π(i) = j if and only if T |= xij.

(14)

The Proof (concluded)

Clauses of 5 furthermore guarantee that

(π(1), π(2), . . . , π(n)) is a Hamiltonian path.

Conversely, suppose G has a Hamiltonian path (π(1), π(2), . . . , π(n)),

where π is a permutation.

Clearly, the truth assignment

T (xij) = true if and only if π(i) = j satisfies all clauses of R(G).

(15)

A Comment

a

An answer to “Is R(G) satisfiable?” does answer “Is G Hamiltonian?”

But a positive answer does not give a Hamiltonian path for G.

Providing a witness is not a requirement of a reduction.

A positive answer to “Is R(G) satisfiable?” plus a satisfying truth assignment does provide us with a Hamiltonian path for G.

aContributed by Ms. Amy Liu (J94922016) on May 29, 2006.
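A tiny follow-up sketch (Python, same integer encoding of xij as in the earlier clause generator, which is an assumption of these notes): recovering the Hamiltonian path from a satisfying assignment.

def extract_path(true_vars, n):
    """true_vars: the set of variable indices a SAT solver reports as true."""
    var = lambda i, j: (i - 1) * n + j                  # index of x_ij, as before
    pi = {i: j for i in range(1, n + 1)
               for j in range(1, n + 1) if var(i, j) in true_vars}
    return [pi[i] for i in range(1, n + 1)]             # the path pi(1), pi(2), ..., pi(n)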

(16)

Reduction of reachability to circuit value

Note that both problems are in P.

Given a graph G = (V, E), we shall construct a variable-free circuit R(G).

The output of R(G) is true if and only if there is a path from node 1 to node n in G.

Idea: the Floyd-Warshall algorithm.

(17)

The Gates

The gates are

gijk with 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.

hijk with 1 ≤ i, j, k ≤ n.

gijk: There is a path from node i to node j without passing through a node bigger than k.

hijk: There is a path from node i to node j passing through k but not any node bigger than k.

Input gate gij0 = true if and only if i = j or (i, j) ∈ E.

(18)

The Construction

hijk is an and gate with predecessors gi,k,k−1 and gk,j,k−1, where k = 1, 2, . . . , n.

gijk is an or gate with predecessors gi,j,k−1 and hi,j,k, where k = 1, 2, . . . , n.

g1nn is the output gate.

Interestingly, R(G) uses no ¬ gates: It is a monotone circuit.
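A minimal sketch (Python, for intuition only; instead of building the circuit as a data structure it evaluates the gate values gijk and hijk directly, and the node/edge representation is an assumption):

def circuit_value_reachability(n, edges):
    g = {}                                                   # g[i, j, k]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            g[i, j, 0] = (i == j) or ((i, j) in edges)       # input gates g_ij0
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                h = g[i, k, k - 1] and g[k, j, k - 1]        # AND gate h_ijk
                g[i, j, k] = g[i, j, k - 1] or h             # OR gate g_ijk
    return g[1, n, n]                                        # output gate g_1nn

# Example: a path 1 -> 2 -> 3 exists, so the output gate evaluates to true.
print(circuit_value_reachability(3, {(1, 2), (2, 3)}))       # True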

(19)

Reduction of circuit sat to sat

Given a circuit C, we will construct a boolean

expression R(C) such that R(C) is satisfiable iff C is.

R(C) will turn out to be a CNF.

R(C) is a depth-2 circuit; furthermore, each gate has out-degree 1.

The variables of R(C) are those of C plus g for each gate g of C.

g’s propagate the truth values for the CNF.

Each gate of C will be turned into equivalent clauses.

Recall that clauses are ∧-ed together by definition.

(20)

The Clauses of R(C)

g is a variable gate x: Add clauses (¬g ∨ x) and (g ∨ ¬x).

Meaning: g ⇔ x.

g is a true gate: Add clause (g).

Meaning: g must be true to make R(C) true.

g is a false gate: Add clause (¬g).

Meaning: g must be false to make R(C) true.

g is a ¬ gate with predecessor gate h: Add clauses (¬g ∨ ¬h) and (g ∨ h).

Meaning: g ⇔ ¬h.

(21)

The Clauses of R(C) (concluded)

g is a ∨ gate with predecessor gates h and h′: Add clauses (¬h ∨ g), (¬h′ ∨ g), and (h ∨ h′ ∨ ¬g).

Meaning: g ⇔ (h ∨ h′).

g is a ∧ gate with predecessor gates h and h′: Add clauses (¬g ∨ h), (¬g ∨ h′), and (¬h ∨ ¬h′ ∨ g).

Meaning: g ⇔ (h ∧ h′).

g is the output gate: Add clause (g).

Meaning: g must be true to make R(C) true.

Note: If gate g feeds gates h1, h2, . . ., then variable g appears in the clauses for h1, h2, . . . in R(C).
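A sketch (Python; the tuple encoding of gates and the string naming of literals are assumptions made here, not the lecture's) that emits these clauses gate by gate and adds the output clause. The example at the end is the circuit of the next slide.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def circuit_to_cnf(gates, output):
    clauses = []
    for g, spec in gates.items():
        kind = spec[0]
        if kind == "var":                                  # g <=> x
            x = spec[1]
            clauses += [[neg(g), x], [g, neg(x)]]
        elif kind == "true":
            clauses.append([g])
        elif kind == "false":
            clauses.append([neg(g)])
        elif kind == "not":                                # g <=> not h
            h = spec[1]
            clauses += [[neg(g), neg(h)], [g, h]]
        elif kind == "or":                                 # g <=> (h or h')
            h, h2 = spec[1], spec[2]
            clauses += [[neg(h), g], [neg(h2), g], [h, h2, neg(g)]]
        elif kind == "and":                                # g <=> (h and h')
            h, h2 = spec[1], spec[2]
            clauses += [[neg(g), h], [neg(g), h2], [neg(h), neg(h2), g]]
    clauses.append([output])                               # the output gate must be true
    return clauses

gates = {"h1": ("var", "x1"), "h2": ("var", "x2"),
         "h3": ("var", "x3"), "h4": ("var", "x4"),
         "g1": ("and", "h1", "h2"), "g2": ("or", "h3", "h4"),
         "g3": ("and", "g1", "g2"), "g4": ("not", "g2"),
         "g5": ("or", "g3", "g4")}
cnf = circuit_to_cnf(gates, "g5")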

(22)

An Example

[Figure: a circuit with input gates h1, h2, h3, h4 (for variables x1, x2, x3, x4) and internal gates g1 = h1 ∧ h2, g2 = h3 ∨ h4, g3 = g1 ∧ g2, g4 = ¬g2, and output gate g5 = g3 ∨ g4.]

(h1 ⇔ x1) ∧ (h2 ⇔ x2) ∧ (h3 ⇔ x3) ∧ (h4 ⇔ x4)

∧ [ g1 ⇔ (h1 ∧ h2) ] ∧ [ g2 ⇔ (h3 ∨ h4) ]

∧ [ g3 ⇔ (g1 ∧ g2) ] ∧ (g4 ⇔ ¬g2)

∧ [ g5 ⇔ (g3 ∨ g4) ] ∧ g5.

(23)

An Example (concluded)

In general, the result is a CNF.

The CNF has size proportional to the circuit’s number of gates.

The CNF adds new variables to the circuit’s original input variables.

(24)

Composition of Reductions

Proposition 25 If R12 is a reduction from L1 to L2 and R23 is a reduction from L2 to L3, then the composition R12 ◦ R23 is a reduction from L1 to L3.

Clearly x ∈ L1 if and only if R23(R12(x)) ∈ L3.

How to compute R12 ◦ R23 in space O(log n), as required by the definition of reduction?

(25)

The Proof (continued)

An obvious way is to generate R12(x) first and then feed it to R23.

This takes polynomial time.a

It takes polynomial time to produce R12(x), which is of polynomial length.

– It also takes polynomial time to produce R23(R12(x)).

The trouble is that R12(x) may take up polynomial space, much more than the logarithmic space required.

aHence our concern below disappears had we required reductions to run in polynomial time instead of logarithmic space.

(26)

The Proof (concluded)

The trick is to let R23 drive the computation.

It asks R12 to deliver each bit of R12(x) when needed.

When R23 wants to read the ith bit, the computation of R12(x) is simulated until the ith bit is produced.

The first i − 1 output bits are not written down; only a counter of the output position is kept.

This is feasible as R12(x) is produced in a write-only manner.

The ith output bit of R12(x) is well-defined because once it is written, it will never be overwritten by R12.
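A sketch (Python; the streaming interface for R12 and the random-access callback for R23 are assumptions used only to convey the control flow, and no space bound is enforced here) of letting R23 drive the computation and obtaining each bit of R12(x) on demand:

def bit_of_R12(R12, x, i):
    """Return the i-th output bit of R12(x) without storing the whole string."""
    count = 0
    for bit in R12(x):                 # R12 is assumed to yield its output bit by bit
        if count == i:
            return bit                 # once produced, this bit is never overwritten
        count += 1                     # earlier bits are discarded; only a counter is kept

def composed(R12, R23, x):
    # R23 is assumed to take a random-access view of its input rather than the string itself.
    return R23(lambda i: bit_of_R12(R12, x, i))

Recomputing the prefix of R12(x) for every request wastes time but avoids storing the intermediate string, which is the point of the trick in the proof.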

(27)

Completeness

a

As reducibility is transitive, problems can be ordered with respect to their difficulty.

Is there a maximal element?

It is not altogether obvious that there should be a maximal element.

– Many infinite structures (such as integers and real numbers) do not have maximal elements.

Hence it may surprise you that most of the complexity classes that we have seen so far have maximal elements.

aCook (1971) and Levin (1971).

(28)

Completeness (concluded)

Let C be a complexity class and L ∈ C.

L is C-complete if every L′ ∈ C can be reduced to L.

– Most complexity classes we have seen so far have complete problems!

Complete problems capture the difficulty of a class because they are the hardest problems in the class.

(29)

Hardness

Let C be a complexity class.

L is C-hard if every L′ ∈ C can be reduced to L.

It is not required that L ∈ C.

If L is C-hard, then by definition, every C-complete problem can be reduced to L.a

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 15, 2003.

(30)

Illustration of Completeness and Hardness

[Figure: problems A1, A2, A3, A4 all reduce to L; in the completeness picture L lies inside the class, in the hardness picture L may lie outside it.]

(31)

Closedness under Reductions

A class C is closed under reductions if whenever L is reducible to L′ and L′ ∈ C, then L ∈ C.

P, NP, coNP, L, NL, PSPACE, and EXP are all closed under reductions.

(32)

Complete Problems and Complexity Classes

Proposition 26 Let C′ and C be two complexity classes such that C′ ⊆ C. Assume C′ is closed under reductions and L is C-complete. Then C = C′ if and only if L ∈ C′.

Suppose L ∈ C′ first.

Every language A ∈ C reduces to L ∈ C′.

Because C′ is closed under reductions, A ∈ C′.

Hence C ⊆ C′.

As C′ ⊆ C, we conclude that C = C′.

(33)

The Proof (concluded)

On the other hand, suppose C = C′.

As L is C-complete, L ∈ C.

Thus, trivially, L ∈ C′.

(34)

Two Important Corollaries

Proposition 26 implies the following.

Corollary 27 P = NP if and only if an NP-complete problem is in P.

Corollary 28 L = P if and only if a P-complete problem is in L.

(35)

Complete Problems and Complexity Classes

Proposition 29 Let C′ and C be two complexity classes closed under reductions. If L is complete for both C and C′, then C = C′.

Every language A ∈ C reduces to L, and L ∈ C′ because L is also C′-complete.

Since C′ is closed under reductions, A ∈ C′.

Hence C ⊆ C′.

The proof for C′ ⊆ C is symmetric.

(36)

Table of Computation

Let M = (K, Σ, δ, s) be a single-string polynomial-time deterministic TM deciding L.

Its computation on input x can be thought of as a

|x|^k × |x|^k table, where |x|^k is the time bound.

– It is a sequence of configurations.

Rows correspond to time steps 0 to |x|^k − 1.

Columns are positions in the string of M .

The (i, j)th table entry represents the contents of position j of the string after i steps of computation.

(37)

Some Conventions To Simplify the Table

M halts after at most |x|^k − 2 steps.

The string length hence never exceeds |x|^k.

Assume a large enough k to make it true for |x| ≥ 2.

Pad the table with blanks ⊔ so that each row has length |x|^k.

– The computation will never reach the right end of the table for lack of time.

If the cursor scans the jth position at time i when M is at state q and the symbol is σ, then the (i, j)th entry is a new symbol σq.

(38)

Some Conventions To Simplify the Table (continued)

If q is “yes” or “no,” simply use “yes” or “no” instead of σq.

Modify M so that the cursor starts not at ▷ but at the first symbol of the input.

The cursor never visits the leftmost ▷ by telescoping two moves of M each time the cursor is about to move to the leftmost ▷.

So the first symbol in every row is a ▷ and not a ▷q.

(39)

Some Conventions To Simplify the Table (concluded)

Suppose M has halted before its time bound of |x|^k, so that “yes” or “no” appears at a row before the last.

Then all subsequent rows will be identical to that row.

M accepts x if and only if the (|x|^k − 1, j)th entry is

“yes” for some position j.
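A toy sketch (Python; the underscore blank, the fused symbol-state strings, and the sample machine are illustrative assumptions, the leftmost ▷ convention is omitted, and rows and width would both be |x|^k in the lecture's construction) of building such a table row by row:

def computation_table(delta, start_state, x, rows, width):
    """delta maps (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    tape = list(x) + ["_"] * (width - len(x))   # "_" plays the role of the blank
    state, head = start_state, 0                # cursor starts on the first input symbol
    table = []
    for _ in range(rows):
        row = tape[:]                           # entry (i, j) = symbol at position j after i steps
        row[head] = tape[head] + state          # fused symbol-state marks the cursor cell
        table.append(row)
        if state in ("yes", "no"):              # halted: later rows repeat this one
            continue
        state, tape[head], move = delta[(state, tape[head])]
        head = max(0, head + move)
    return table

# Toy machine: scan right, accept iff a '1' is seen before the blanks.
delta = {("s", "0"): ("s", "0", +1),
         ("s", "1"): ("yes", "1", 0),
         ("s", "_"): ("no", "_", 0)}
for row in computation_table(delta, "s", "010", rows=6, width=8):
    print(" ".join(row))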

(40)

Comments

Each row is essentially a configuration.

If the input x = 010001, then the first row is

▷ 0s 1 0 0 0 1 ⊔ ⊔ · · · ⊔    (of length |x|^k).

A typical row may look like

▷ 1 0 1 0 0q 0 1 1 1 0 1 0 0 ⊔ ⊔ · · · ⊔    (of length |x|^k).

(41)

Comments (concluded)

The last rows must look like

▷ · · · “yes” · · · ⊔    or    ▷ · · · “no” · · · ⊔    (each row of length |x|^k).

Three out of the table’s 4 borders are known:

[Figure: the computation table; its top row, leftmost column, and rightmost column are known in advance.]

(42)

A P-Complete Problem

Theorem 30 (Ladner (1975)) circuit value is P-complete.

It is easy to see that circuit value ∈ P.

For any L ∈ P, we will construct a reduction R from L to circuit value.

Given any input x, R(x) is a variable-free circuit such that x ∈ L if and only if R(x) evaluates to true.

Let M decide L in time n^k.

Let T be the computation table of M on x.

(43)

The Proof (continued)

When i = 0, or j = 0, or j = |x|^k − 1, then the value of Tij is known.

– It is the jth symbol of x or a ⊔, a ▷, and a ⊔, respectively.

Recall that three out of T ’s 4 borders are known.

(44)

The Proof (continued)

Consider other entries Tij.

Tij depends on only Ti−1,j−1, Ti−1,j, and Ti−1,j+1.

[Figure: entry Tij sits directly below the three entries Ti−1,j−1, Ti−1,j, and Ti−1,j+1.]

Let Γ denote the set of all symbols that can appear on the table: Γ = Σ ∪ {σq : σ ∈ Σ, q ∈ K}.

Encode each symbol of Γ as an m-bit number, where m = ⌈log2 |Γ|⌉.a

aState assignment in circuit design.

(45)

The Proof (continued)

Let the m-bit binary string Sij1Sij2 · · · Sijm encode Tij.

We may treat them interchangeably without ambiguity.

The computation table is now a table of binary entries Sijℓ, where

0 ≤ i ≤ n^k − 1, 0 ≤ j ≤ n^k − 1, and 1 ≤ ℓ ≤ m.

(46)

The Proof (continued)

Each bit Sijℓ depends on only 3m other bits:

Ti−1,j−1: Si−1,j−1,1, Si−1,j−1,2, . . . , Si−1,j−1,m
Ti−1,j: Si−1,j,1, Si−1,j,2, . . . , Si−1,j,m
Ti−1,j+1: Si−1,j+1,1, Si−1,j+1,2, . . . , Si−1,j+1,m

There is a boolean function Fℓ with 3m inputs such that

Sijℓ = Fℓ(Si−1,j−1,1, . . . , Si−1,j−1,m, Si−1,j,1, . . . , Si−1,j,m, Si−1,j+1,1, . . . , Si−1,j+1,m)

for all i, j > 0 and 1 ≤ ℓ ≤ m.
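A sketch (Python; next_cell is a stub standing in for the local update rule determined by M's transition function, and the dummy rule at the end is for illustration only) of how the functions Fℓ can be tabulated from the symbol encoding:

from itertools import product

def make_F(symbols, next_cell):
    m = max(1, (len(symbols) - 1).bit_length())           # m = ceil(log2 |Gamma|)
    code = {s: format(i, f"0{m}b") for i, s in enumerate(symbols)}
    F = [dict() for _ in range(m)]                         # F[ell]: 3m input bits -> bit ell of T_ij
    for a, b, c in product(symbols, repeat=3):
        bits_in = code[a] + code[b] + code[c]              # encodings of the three entries above
        bits_out = code[next_cell(a, b, c)]                # encoding of T_ij
        for ell in range(m):
            F[ell][bits_in] = bits_out[ell]
    return code, F

# Dummy local rule for illustration only: the new entry copies the middle entry.
code, F = make_F(["a", "b", "c", "d"], lambda left, mid, right: mid)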

(47)

The Proof (continued)

These Fℓ’s depend only on M ’s specification, not on x.

Their sizes are fixed.

These boolean functions can be turned into boolean circuits.

Compose these m circuits in parallel to obtain circuit C with 3m-bit inputs and m-bit outputs.

Schematically, C(Ti−1,j−1, Ti−1,j, Ti−1,j+1) = Tij.a

aC is like an ASIC (application-specific IC) chip.

(48)

Circuit C

[Figure: circuit C takes Ti−1,j−1, Ti−1,j, and Ti−1,j+1 as inputs and outputs Tij.]

(49)

The Proof (concluded)

A copy of circuit C is placed at each entry of the table.

– Exceptions are the top row and the two extreme columns.

R(x) consists of (|x|^k − 1)(|x|^k − 2) copies of circuit C.

Without loss of generality, assume the output

“yes”/“no” appears at position (|x|^k − 1, 1).

Encode “yes” as 1 and “no” as 0.

(50)

The Computation Tableau and R(x)

[Figure: the computation tableau with a copy of circuit C computing each interior entry from the three entries above it.]

(51)

A Corollary

The construction in the above proof yields the following, more general result.

Corollary 31 If L ∈ TIME(T(n)), then a circuit with O(T(n)^2) gates can decide if x ∈ L for |x| = n.
