
The Reachability Method

• The computation of a time-bounded TM can be represented by a directed graph.

• The TM’s configurations constitute the nodes.

• Two nodes are connected by a directed edge if one yields the other in one step.

• The start node, representing the initial configuration, has in-degree zero.


The Reachability Method (concluded)

• When the TM is nondeterministic, a node may have an out-degree greater than one.

– The graph is the same as the computation tree seen earlier, except that identical configuration nodes are merged into one node.

• So M accepts the input if and only if there is a path from the start node to a node with a “yes” state.

• This is the reachability problem.
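As a concrete illustration (assuming the graph is small enough to be given explicitly; in the proofs that follow, the configuration graph is only ever generated implicitly), reachability is a simple breadth-first search:

```python
from collections import deque

def reachable(n, edges, start, accepting):
    """Decide reachability on a directed graph with nodes 0..n-1.

    Returns True iff some node in `accepting` (e.g., configurations
    in the "yes" state) is reachable from `start`.
    """
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u in accepting:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

The search visits each node and edge once, so it runs in time linear in the size of the graph.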


Illustration of the Reachability Method


yes Initial



Relations between Complexity Classes

Theorem 23 Suppose f(n) is proper. Then 1. SPACE(f(n)) ⊆ NSPACE(f(n)),

TIME(f(n)) ⊆ NTIME(f(n)).

2. NTIME(f(n)) ⊆ SPACE(f(n)).

3. NSPACE(f(n)) ⊆ TIME(k^(log n + f(n))).

• Proof of 2:

– Explore the computation tree of the NTM for “yes.”

– Specifically, generate an f(n)-bit sequence denoting the nondeterministic choices over f(n) steps.


Proof of Theorem 23(2)

• (continued)

– Simulate the NTM based on the choices.

– Recycle the space and repeat the above steps.

– Halt with “yes” when a “yes” is encountered or “no”

if the tree is exhausted.

– Each path simulation consumes at most O(f(n)) space because it takes O(f(n)) time.

– The total space is O(f(n)) because space is recycled.
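The steps above can be sketched as a deterministic search, with a hypothetical `step(config, choice)` function standing in for the NTM's transition relation ∆ (anything not in the slides — `step`, `is_yes`, the binary branching — is an assumption of this sketch):

```python
from itertools import product

def accepts(initial, step, is_yes, t, branching=2):
    """Deterministically explore an NTM's computation tree for "yes".

    Each iteration generates one choice sequence of length t, simulates
    the NTM along it, then reuses the same space for the next sequence.
    `step(config, choice)` returns the next configuration or None if
    that choice is not enabled.
    """
    for choices in product(range(branching), repeat=t):
        config = initial
        for c in choices:
            if config is None or is_yes(config):
                break                      # dead end or already accepting
            config = step(config, c)
        if config is not None and is_yes(config):
            return True
    return False                           # tree exhausted: answer "no"
```

Only one path (O(t) space) is live at any time, even though branching**t paths are examined.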


Proof of Theorem 23(3)

• Let a k-string NTM

M = (K, Σ, ∆, s)

with input and output decide L ∈ NSPACE(f(n)).

• Use the reachability method on the configuration graph of M on input x of length n.

• A configuration is a (2k + 1)-tuple

(q, w1, u1, w2, u2, . . . , wk, uk).


Proof of Theorem 23(3) (continued)

• We only care about

(q, i, w2, u2, . . . , wk−1, uk−1),

where i is an integer between 0 and n for the position of the first cursor.

• The number of configurations is therefore at most

|K| × (n + 1) × |Σ|^((2k−4) f(n)) = O(c₁^(log n + f(n)))   (1)

for some c₁, which depends on M.

• Add edges to the configuration graph based on M’s transition function.


Proof of Theorem 23(3) (concluded)

• x ∈ L ⇔ there is a path in the configuration graph from the initial configuration to a configuration of the form (“yes”, i, . . .).a

• This is reachability on a graph with O(c₁^(log n + f(n))) nodes.

• It is in TIME(c^(log n + f(n))) for some c because reachability ∈ TIME(n^j) for some j and

(c₁^(log n + f(n)))^j = (c₁^j)^(log n + f(n)).

aThere may be many of them.


Space-Bounded Computation and Proper Functions

• In the definition of space-bounded computations earlier (p. 95), the TMs are not required to halt at all.

• When the space is bounded by a proper function f, computations can be assumed to halt:

– Run the TM associated with f to produce a quasi-blank output of length f (n) first.

– The space-bounded computation must repeat a configuration if it runs for more than c^(log n + f(n)) steps for some c (p. 225).


Space-Bounded Computation and Proper Functions (concluded)

• (continued)

– So we can prevent infinite loops during simulation by pruning any path longer than c^(log n + f(n)).

– In other words, we only simulate c^(log n + f(n)) time steps per computation path.


A Grand Chain of Inclusions


• It is an easy application of Theorem 23 (p. 222) that L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP.

• By Corollary 20 (p. 217), we know L ⊊ PSPACE.

• So the chain must break somewhere between L and EXP.

• It is suspected that all four inclusions are proper.

• But there are no proofs yet.

aWith input from Mr. Chin-Luei Chang (R93922004, D95922007) on October 22, 2004.


Nondeterministic Space and Deterministic Space

• By Theorem 4 (p. 101),

NTIME(f(n)) ⊆ TIME(c^f(n)), an exponential gap.

• There is no proof yet that the exponential gap is inherent.

• How about NSPACE vs. SPACE?

• Surprisingly, the relation is only quadratic—a polynomial—by Savitch’s theorem.


Savitch’s Theorem

Theorem 24 (Savitch (1970))

reachability ∈ SPACE(log^2 n).

• Let G(V, E) be a graph with n nodes.

• For i ≥ 0, let

PATH(x, y, i)

mean there is a path from node x to node y of length at most 2^i.

• There is a path from x to y if and only if PATH(x, y,⌈log n⌉) holds.


The Proof (continued)

• For i > 0, PATH(x, y, i) if and only if there exists a z such that PATH(x, z, i − 1) and PATH(z, y, i − 1).

• For PATH(x, y, 0), check whether x = y or (x, y) is an edge in the input graph.

• Compute PATH(x, y, ⌈log n⌉) with a depth-first search on a graph with nodes (x, y, z, i)s (see next page).a

• Like stacks in recursive calls, we keep only the current path of (x, y, i)s.

• The space requirement is proportional to the depth of the tree: ⌈log n⌉.

aContributed by Mr. Chuan-Yao Tan on October 11, 2011.


The Proof (continued): Algorithm for PATH(x, y, i)

1: if i = 0 then

2: if x = y or (x, y) ∈ E then

3: return true;

4: else

5: return false;

6: end if

7: else

8: for z = 1, 2, . . . , n do

9: if PATH(x, z, i − 1) and PATH(z, y, i − 1) then

10: return true;

11: end if

12: end for

13: return false;

14: end if
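The pseudocode above translates almost directly into a recursive function; a minimal sketch in Python, where the graph is accessed only through an adjacency oracle `edge` (the oracle, like the `reach` wrapper, is this sketch's own packaging):

```python
import math

def path(edge, n, x, y, i):
    """Savitch's recursion: is there an x-to-y path of length <= 2^i?

    Nothing but the recursion stack (depth i) is stored; the graph is
    consulted only through the adjacency oracle `edge(u, v)`.
    """
    if i == 0:
        return x == y or edge(x, y)
    # Try every node z as the midpoint of a path of length <= 2^i.
    return any(path(edge, n, x, z, i - 1) and path(edge, n, z, y, i - 1)
               for z in range(n))

def reach(edge, n, x, y):
    """PATH(x, y, ceil(log n)) decides plain reachability."""
    return path(edge, n, x, y, max(1, math.ceil(math.log2(n))))
```

Each stack frame holds a constant number of node indices (O(log n) bits), and the depth is ⌈log n⌉, matching the O(log^2 n) space bound.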


The Proof (concluded)

[Figure: the recursion tree, with root PATH(x, y, ⌈log n⌉) branching into subproblems PATH(x, z, ⌈log n⌉ − 1) and PATH(z, y, ⌈log n⌉ − 1).]



• Depth is ⌈log n⌉, and each node (x, y, z, i) needs space O(log n).

• The total space is O(log^2 n).


The Relation between Nondeterministic Space and Deterministic Space Only Quadratic

Corollary 25 Let f(n) ≥ log n be proper. Then NSPACE(f(n)) ⊆ SPACE(f(n)^2).

• Apply Savitch’s proof to the configuration graph of the NTM on the input.

• From p. 225, the configuration graph has O(c^f(n)) nodes; hence each node takes space O(f(n)).

• But if we construct explicitly the whole graph before applying Savitch’s theorem, we get O(c^f(n)) space!


The Proof (continued)

• The way out is not to generate the graph at all.

• Instead, keep the graph implicit.

• In fact, we check node connectedness only when i = 0 on p. 233, by examining the input string G.

• There, given configurations x and y, we go over the Turing machine’s program to determine if there is an instruction that can turn x into y in one step.a

aThanks to a lively class discussion on October 15, 2003.


The Proof (concluded)

• The z variable in the algorithm on p. 233 simply runs through all possible valid configurations.

– Let z = 0, 1, . . . , O(c^f(n)).

– Make sure z is a valid configuration before using it in the recursive calls.a

• Each z has length O(f(n)) by Eq. (1) on p. 225.

• So each node needs space O(f(n)).

• As the depth of the recursive calls on p. 233 is O(log c^f(n)) = O(f(n)), the total space is therefore O(f(n)^2).

aThanks to a lively class discussion on October 13, 2004.


Implications of Savitch’s Theorem


• Nondeterminism adds relatively little power with respect to space: at most a quadratic blowup.

• Nondeterminism may be very powerful with respect to time as it is not known if P = NP.


Nondeterministic Space Is Closed under Complement

• Closure under complement is trivially true for deterministic complexity classes (p. 210).

• It is known thata

coNSPACE(f (n)) = NSPACE(f (n)). (2)

• So

coNL = NL, for example.


• But it is not known whether coNP = NP.

aSzelepcsényi (1987) and Immerman (1988).


Reductions and Completeness


It is unworthy of excellent men to lose hours like slaves in the labor of computation.

— Gottfried Wilhelm von Leibniz (1646–1716)


Degrees of Difficulty

• When is a problem more difficult than another?

• B reduces to A if there is a transformation R which for every input x of B yields an input R(x) of A.a

– The answer to x for B is the same as the answer to R(x) for A.

– R is easy to compute.

• We say problem A is at least as hard as problem B if B reduces to A.

aSee also p. 148.


Degrees of Difficulty (concluded)

• This makes intuitive sense: if A can solve your problem B after only a little bit of work by R, then A must be at least as hard.

– If A is easy to solve, it combined with R (which is also easy) would make B easy to solve, too.a

– So if B is hard to solve, A must be hard (if not harder), too!

aThanks to a lively class discussion on October 13, 2009.



[Figure: input x is transformed by R into R(x), which is fed to an algorithm for A; that algorithm’s yes/no answer is the answer for x.]

Solving problem B by calling the algorithm for problem A once and without further processing its answer.




• Suppose B reduces to A via a transformation R.

• The input x is an instance of B.

• The output R(x) is an instance of A.

• R(x) may not span all possible instances of A.b

– Some instances of A may never appear in the range of R.

• But x must be a general instance for B.

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 29, 2003.

bR(x) may not be onto; Mr. Alexandr Simak (D98922040) on October 13, 2009.


Is “Reduction” a Confusing Choice of Word?


• If B reduces to A, doesn’t that intuitively make A smaller and simpler?

– Sometimes, we say, “B can be reduced to A.”

• But our definition means just the opposite.

• Our definition says in this case B is a special case of A.

• Hence A is harder.

aMoore and Mertens (2011).


Reduction between Languages

• Language L1 is reducible to L2 if there is a function R computable by a deterministic TM in space O(log n).

• Furthermore, for all inputs x, x ∈ L1 if and only if R(x) ∈ L2.

• R is said to be a (Karp) reduction from L1 to L2.


Reduction between Languages (concluded)

• Note that by Theorem 23 (p. 222), R runs in polynomial time.

– In most cases, a polynomial-time R suffices for proofs.a

• Suppose R is a reduction from L1 to L2.

• Then solving “R(x) ∈ L2?” is an algorithm for solving

“x ∈ L1?”b

aIn fact, unless stated otherwise, we will only require that the reduction R run in polynomial time.

bOf course, it may not be an optimal one.


A Paradox?

• Degree of difficulty is not defined in terms of absolute complexity.

• So a language B ∈ TIME(n^99) may be “easier” than a language A ∈ TIME(n^3).

– Again, this happens when B is reducible to A.

• But is this a contradiction if the best algorithm for B requires n^99 steps?

• That is, how can a problem requiring n^99 steps be reducible to a problem solvable in n^3 steps?


Paradox Resolved

• The so-called contradiction does not hold.

• Suppose we solve the problem “x ∈ B?” via “R(x) ∈ A?”

• We must account for the time spent computing R(x) and for its length |R(x)|, because R(x) (not x) is what is presented to A.


hamiltonian path

• A Hamiltonian path of a graph is a path that visits every node of the graph exactly once.

• Suppose graph G has n nodes: 1, 2, . . . , n.

• A Hamiltonian path can be expressed as a permutation π of { 1, 2, . . . , n } such that

– π(i) = j means the ith position is occupied by node j.

– (π(i), π(i + 1)) ∈ G for i = 1, 2, . . . , n − 1.

• hamiltonian path asks if a graph has a Hamiltonian path.
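The permutation encoding can be verified in linear time; a small sketch (treating G as undirected, an assumption of this sketch):

```python
def is_hamiltonian_path(n, edges, pi):
    """Check the encoding: pi[i] is the node occupying position i+1.

    Valid iff pi is a permutation of 1..n and every pair of consecutive
    positions holds an adjacent pair of nodes.
    """
    E = set(edges) | {(v, u) for (u, v) in edges}   # undirected closure
    return (sorted(pi) == list(range(1, n + 1)) and
            all((pi[i], pi[i + 1]) in E for i in range(n - 1)))
```

This easy verification is exactly what puts hamiltonian path in NP.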


Reduction of hamiltonian path to sat

• Given a graph G, we shall construct a CNF R(G) such that R(G) is satisfiable iff G has a Hamiltonian path.

• R(G) has n2 boolean variables xij, 1 ≤ i, j ≤ n.

• xij means

“the ith position in the Hamiltonian path is occupied by node j.”

• Our reduction will produce clauses.






5 6

8 7 9

x12 = x21 = x34 = x45 = x53 = x69 = x76 = x88 = x97 = 1;

π(1) = 2, π(2) = 1, π(3) = 4, π(4) = 5, π(5) = 3, π(6) = 9, π(7) = 6, π(8) = 8, π(9) = 7.


The Clauses of R(G) and Their Intended Meanings

1. Each node j must appear in the path.

• x1j ∨ x2j ∨ · · · ∨ xnj for each j.

2. No node j appears twice in the path.

• ¬xij ∨ ¬xkj (≡ ¬(xij ∧ xkj)) for all i, j, k with i ≠ k.

3. Every position i on the path must be occupied.

• xi1 ∨ xi2 ∨ · · · ∨ xin for each i.

4. No two nodes j and k occupy the same position in the path.

• ¬xij ∨ ¬xik (≡ ¬(xij ∧ xik)) for all i, j, k with j ≠ k.

5. Nonadjacent nodes i and j cannot be adjacent in the path.

• ¬xki ∨ ¬xk+1,j for all (i, j) ∉ G and k = 1, 2, . . . , n − 1.
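The five clause families can be generated mechanically; a sketch in Python (assuming G is undirected; the pair encoding of variables and the (sign, variable) literal format are this sketch's own conventions):

```python
from itertools import combinations

def hamiltonian_path_to_cnf(n, edges):
    """Clauses of R(G); variable (i, j) means "position i holds node j".

    A literal is (sign, (i, j)) with sign +1 or -1; a clause is a list
    of literals.  Positions and nodes are 1-based.
    """
    E = set(edges) | {(v, u) for (u, v) in edges}   # undirected closure
    clauses = []
    for j in range(1, n + 1):                        # 1. node j appears
        clauses.append([(1, (i, j)) for i in range(1, n + 1)])
        for i, k in combinations(range(1, n + 1), 2):
            clauses.append([(-1, (i, j)), (-1, (k, j))])   # 2. only once
    for i in range(1, n + 1):                        # 3. position i occupied
        clauses.append([(1, (i, j)) for j in range(1, n + 1)])
        for j, k in combinations(range(1, n + 1), 2):
            clauses.append([(-1, (i, j)), (-1, (i, k))])   # 4. by one node
    for u in range(1, n + 1):                        # 5. nonadjacent nodes
        for v in range(1, n + 1):                    #    never consecutive
            if u != v and (u, v) not in E:
                for k in range(1, n):
                    clauses.append([(-1, (k, u)), (-1, (k + 1, v))])
    return clauses
```

Families 2, 4, and 5 contribute the O(n^3) clause count claimed in the proof that follows.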


The Proof

• R(G) contains O(n^3) clauses.

• R(G) can be computed efficiently (simple exercise).

• Suppose T |= R(G).

• From the 1st and 2nd types of clauses, for each node j there is a unique position i such that T |= xij.

• From the 3rd and 4th types of clauses, for each position i there is a unique node j such that T |= xij.

• So there is a permutation π of the nodes such that π(i) = j if and only if T |= xij.


The Proof (concluded)

• The 5th type of clauses furthermore guarantee that (π(1), π(2), . . . , π(n)) is a Hamiltonian path.

• Conversely, suppose G has a Hamiltonian path (π(1), π(2), . . . , π(n)),

where π is a permutation.

• Clearly, the truth assignment

T (xij) = true if and only if π(i) = j satisfies all clauses of R(G).


A Comment


• An answer to “Is R(G) satisfiable?” does answer “Is G Hamiltonian?”

• But a positive answer does not give a Hamiltonian path for G.

– Providing a witness is not a requirement of reduction.

• A positive answer to “Is R(G) satisfiable?” plus a satisfying truth assignment does provide us with a Hamiltonian path for G.

aContributed by Ms. Amy Liu (J94922016) on May 29, 2006.


Reduction of reachability to circuit value

• Note that both problems are in P.

• Given a graph G = (V, E), we shall construct a variable-free circuit R(G).

• The output of R(G) is true if and only if there is a path from node 1 to node n in G.

• Idea: the Floyd-Warshall algorithm.


The Gates

• The gates are

– gijk with 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.

– hijk with 1 ≤ i, j, k ≤ n.

• gijk: There is a path from node i to node j without passing through a node bigger than k.

• hijk: There is a path from node i to node j passing through k but not any node bigger than k.

• Input gate gij0 = true if and only if i = j or (i, j) ∈ E.


The Construction

• hijk is an and gate with predecessors gi,k,k−1 and gk,j,k−1, where k = 1, 2, . . . , n.

• gijk is an or gate with predecessors gi,j,k−1 and hi,j,k, where k = 1, 2, . . . , n.

• g1nn is the output gate.

• Interestingly, R(G) uses no ¬ gates.

– It is a monotone circuit.
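Because the circuit is variable-free, its value can be computed round by round; a sketch that evaluates the gates g_ijk and h_ijk directly (this computes the value of R(G) rather than constructing the circuit itself):

```python
def reachability_circuit_value(n, edges):
    """Evaluate the monotone circuit R(G) for a graph on nodes 1..n.

    After round k, g[i][j] holds the value of gate g_ijk: true iff some
    i-to-j path uses no intermediate node bigger than k (Floyd-Warshall).
    """
    E = set(edges)
    # Input gates g_ij0: true iff i = j or (i, j) is an edge.
    g = [[i == j or (i, j) in E for j in range(n + 1)]
         for i in range(n + 1)]
    for k in range(1, n + 1):
        new = [row[:] for row in g]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                h = g[i][k] and g[k][j]     # AND gate h_ijk
                new[i][j] = g[i][j] or h    # OR gate g_ijk
        g = new
    return g[1][n]                          # output gate g_1nn
```

Only AND and OR appear, mirroring the observation that R(G) is a monotone circuit.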


Reduction of circuit sat to sat

• Given a circuit C, we will construct a boolean

expression R(C) such that R(C) is satisfiable iff C is.

– R(C) will turn out to be a CNF.

– R(C) is basically a depth-2 circuit; furthermore, each gate has out-degree 1.

• The variables of R(C) are those of C plus g for each gate g of C.

– The g’s propagate the truth values for the CNF.

• Each gate of C will be turned into equivalent clauses.

• Recall that clauses are ∧ed together by definition.


The Clauses of R(C)

g is a variable gate x: Add clauses (¬g ∨ x) and (g ∨ ¬x).

• Meaning: g ⇔ x.

g is a true gate: Add clause (g).

• Meaning: g must be true to make R(C) true.

g is a false gate: Add clause (¬g).

• Meaning: g must be false to make R(C) true.

g is a ¬ gate with predecessor gate h: Add clauses (¬g ∨ ¬h) and (g ∨ h).

• Meaning: g ⇔ ¬h.


The Clauses of R(C) (concluded)

g is a ∨ gate with predecessor gates h and h′: Add clauses (¬h ∨ g), (¬h′ ∨ g), and (h ∨ h′ ∨ ¬g).

• Meaning: g ⇔ (h ∨ h′).

g is a ∧ gate with predecessor gates h and h′: Add clauses (¬g ∨ h), (¬g ∨ h′), and (¬h ∨ ¬h′ ∨ g).

• Meaning: g ⇔ (h ∧ h′).

g is the output gate: Add clause (g).

• Meaning: g must be true to make R(C) true.

Note: If gate g feeds gates h1, h2, . . ., then variable g appears in the clauses for h1, h2, . . . in R(C).
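The gate-by-gate clause generation can be sketched as follows (the tuple encoding of gates and the (sign, name) literal format are this sketch's own conventions; the output clause is left to the caller):

```python
def circuit_to_cnf(gates):
    """Generate the clauses of R(C) for a circuit C.

    `gates` maps a gate name to one of ('var', x), ('true',), ('false',),
    ('not', h), ('or', h1, h2), ('and', h1, h2).  Gate names double as
    CNF variables; a literal is (sign, name) with sign +1 or -1.
    """
    clauses = []
    for g, spec in gates.items():
        kind, args = spec[0], spec[1:]
        if kind == 'var':                                 # g <=> x
            x, = args
            clauses += [[(-1, g), (1, x)], [(1, g), (-1, x)]]
        elif kind == 'true':                              # g is true
            clauses.append([(1, g)])
        elif kind == 'false':                             # g is false
            clauses.append([(-1, g)])
        elif kind == 'not':                               # g <=> not h
            h, = args
            clauses += [[(-1, g), (-1, h)], [(1, g), (1, h)]]
        elif kind == 'or':                                # g <=> h1 or h2
            h1, h2 = args
            clauses += [[(-1, h1), (1, g)], [(-1, h2), (1, g)],
                        [(1, h1), (1, h2), (-1, g)]]
        elif kind == 'and':                               # g <=> h1 and h2
            h1, h2 = args
            clauses += [[(-1, g), (1, h1)], [(-1, g), (1, h2)],
                        [(-1, h1), (-1, h2), (1, g)]]
    return clauses
```

Each gate contributes a constant number of constant-width clauses, so |R(C)| is linear in |C|.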


An Example

[Figure: a circuit over inputs x1, x2, x3, x4, with variable gates h1, . . . , h4 feeding ∧, ∨, and ¬ gates g1, . . . , g5; the output gate is g5. Its R(C) is equivalent to:]

(h1 ⇔ x1) ∧ (h2 ⇔ x2) ∧ (h3 ⇔ x3) ∧ (h4 ⇔ x4)

∧ [ g1 ⇔ (h1 ∧ h2) ] ∧ [ g2 ⇔ (h3 ∨ h4) ]

∧ [ g3 ⇔ (g1 ∧ g2) ] ∧ (g4 ⇔ ¬g2)

∧ [ g5 ⇔ (g3 ∨ g4) ] ∧ g5.



