# The Reachability Method


### The Reachability Method

• The computation of a time-bounded TM can be represented by a directed graph.

• The TM’s conﬁgurations constitute the nodes.

• Two nodes are connected by a directed edge if one yields the other in one step.

• The start node, representing the initial conﬁguration, has in-degree zero.


### The Reachability Method (concluded)

• When the TM is nondeterministic, a node may have an out-degree greater than one.

– The graph is the same as the computation tree earlier, except that identical conﬁguration nodes are merged into one node.

• So M accepts the input if and only if there is a path from the start node to a node with a “yes” state.

• This is the reachability problem.
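As an illustrative sketch (not part of the original slides), acceptance-as-reachability can be expressed as a breadth-first search over an abstract successor relation; the `successors` and `is_yes` callbacks below are hypothetical stand-ins for the TM's yields relation and "yes" configurations.

```python
from collections import deque

def accepts(start, successors, is_yes):
    """BFS over the configuration graph: is some 'yes' configuration
    reachable from the start configuration?"""
    seen = {start}
    queue = deque([start])
    while queue:
        c = queue.popleft()
        if is_yes(c):
            return True
        for nxt in successors(c):  # identical configurations merge automatically
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy example: "configurations" are integers below 10; each yields i+2 and i+3.
reachable = accepts(0, lambda i: [m for m in (i + 2, i + 3) if m < 10],
                    lambda i: i == 7)
```

Merging identical configurations (the `seen` set) is what turns the computation tree into a graph of bounded size.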

[Figure: a configuration graph, with the initial configuration as the start node and two "yes" configurations among the nodes.]


### Relations between Complexity Classes

Theorem 23 Suppose f(n) is proper. Then

1. SPACE(f(n)) ⊆ NSPACE(f(n)) and TIME(f(n)) ⊆ NTIME(f(n)).

2. NTIME(f(n)) ⊆ SPACE(f(n)).

3. NSPACE(f(n)) ⊆ TIME(k^{log n+f(n)}) for some constant k.

• Proof of 2:

– Explore the computation tree of the NTM for “yes.”

– Speciﬁcally, generate an f (n)-bit sequence denoting the nondeterministic choices over f (n) steps.


### Proof of Theorem 23(2)

• (continued)

– Simulate the NTM based on the choices.

– Recycle the space and repeat the above steps.

– Halt with "yes" when a "yes" is encountered, or with "no" if the tree is exhausted.

– Each path simulation consumes at most O(f (n)) space because it takes O(f (n)) time.

– The total space is O(f (n)) because space is recycled.
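The enumeration just described can be sketched as follows, assuming binary nondeterministic choices per step; the toy `step` function is our own stand-in, not a real NTM:

```python
from itertools import product

def ntm_accepts(step, start, is_yes, t):
    """Deterministically simulate every computation path of t steps.
    step(config, bit) gives the successor under one binary choice; space is
    'recycled' because every path reuses the same variables."""
    for choices in product((0, 1), repeat=t):  # all t-bit choice sequences
        c = start
        for b in choices:
            c = step(c, b)
            if is_yes(c):      # a "yes" anywhere on the path suffices
                return True
    return False               # the tree is exhausted

# Toy machine: configuration is an integer; choice 0 adds 1, choice 1 doubles.
hit = ntm_accepts(lambda c, b: c * 2 if b else c + 1, 1, lambda c: c == 6, 3)
```

Only the current choice sequence and configuration are live at any time, mirroring the O(f(n)) space bound.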


### Proof of Theorem 23(3)

• Let the k-string NTM

M = (K, Σ, ∆, s)

with input and output decide L ∈ NSPACE(f(n)).

• Use the reachability method on the conﬁguration graph of M on input x of length n.

• A conﬁguration is a (2k + 1)-tuple

(q, w1, u1, w2, u2, . . . , wk, uk).


### Proof of Theorem 23(3) (continued)

• Since the ﬁrst string holds the read-only input (so only its cursor position needs recording) and the last string is reserved for output, a conﬁguration can be represented more succinctly as

(q, i, w2, u2, . . . , wk−1, uk−1),

where i is an integer between 0 and n for the position of the ﬁrst cursor.

• The number of conﬁgurations is therefore at most

|K| × (n + 1) × |Σ|^{(2k−4)f(n)} = O(c1^{log n+f(n)})  (1)

for some c1, which depends on M.

• Add edges to the conﬁguration graph based on M’s transition function.


### Proof of Theorem 23(3) (concluded)

• x ∈ L ⇔ there is a path in the conﬁguration graph from the initial conﬁguration to a conﬁguration of the form (“yes”, i, . . .).a

• This is reachability on a graph with O(c1^{log n+f(n)}) nodes.

• It is in TIME(c^{log n+f(n)}) for some c because reachability ∈ TIME(n^j) for some j and

[c1^{log n+f(n)}]^j = (c1^j)^{log n+f(n)}.

aThere may be many of them.


### Space-Bounded Computation and Proper Functions

• In the deﬁnition of space-bounded computations earlier (p. 95), the TMs are not required to halt at all.

• When the space is bounded by a proper function f, computations can be assumed to halt:

– Run the TM associated with f to produce a quasi-blank output of length f (n) ﬁrst.

– The space-bounded computation must repeat a conﬁguration if it runs for more than c^{log n+f(n)} steps for some c (p. 225).


### Space-Bounded Computation and Proper Functions (concluded)

• (continued)

– So we can prevent inﬁnite loops during simulation by pruning any path longer than c^{log n+f(n)}.

– In other words, we only simulate c^{log n+f(n)} time steps per computation path.


### A Grand Chain of Inclusions


• It is an easy application of Theorem 23 (p. 222) that L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP.

• By Corollary 20 (p. 217), we know L ⊊ PSPACE.

• So the chain must break somewhere between L and EXP.

• It is suspected that all five inclusions are proper.

• But there are no proofs yet.

aWith input from Mr. Chin-Luei Chang (R93922004, D95922007) on October 22, 2004.


### Nondeterministic Space and Deterministic Space

• By Theorem 4 (p. 101),

NTIME(f(n)) ⊆ TIME(c^{f(n)}),

an exponential gap.

• There is no proof yet that the exponential gap is inherent.

• How about NSPACE vs. SPACE?

• Surprisingly, the relation is only quadratic—a polynomial—by Savitch’s theorem.


### Savitch’s Theorem

Theorem 24 (Savitch (1970)) reachability ∈ SPACE(log^2 n).

• Let G(V, E) be a graph with n nodes.

• For i ≥ 0, let

PATH(x, y, i)

mean there is a path from node x to node y of length at most 2^i.

• There is a path from x to y if and only if PATH(x, y, ⌈log n⌉) holds.


### The Proof (continued)

• For i > 0, PATH(x, y, i) if and only if there exists a z such that PATH(x, z, i − 1) and PATH(z, y, i − 1).

• For PATH(x, y, 0), check whether x = y or (x, y) is an edge of the input graph.

• Compute PATH(x, y, ⌈log n⌉) with a depth-ﬁrst search on a graph with nodes (x, y, z, i)s (see next page).a

• Like stacks in recursive calls, we keep only the current path of (x, y, i)s.

• The space requirement is proportional to the depth of the tree: ⌈log n⌉.

aContributed by Mr. Chuan-Yao Tan on October 11, 2011.


### The Proof (continued): Algorithm for PATH(x, y, i)

```
if i = 0 then
    if x = y or (x, y) ∈ E then
        return true
    else
        return false
    end if
else
    for z = 1, 2, . . . , n do
        if PATH(x, z, i − 1) and PATH(z, y, i − 1) then
            return true
        end if
    end for
    return false
end if
```
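As a sketch, the pseudocode translates almost line for line into a recursive procedure; the edge set is assumed to be given as a set of pairs, and only the recursion stack is kept, mirroring the O(log^2 n) space bound:

```python
import math

def path(E, n, x, y, i):
    """Savitch's PATH(x, y, i): is there a path from x to y of length <= 2**i
    in the graph on nodes 1..n with edge set E?"""
    if i == 0:
        return x == y or (x, y) in E
    # Guess the midpoint z of a path of length <= 2**i.
    return any(path(E, n, x, z, i - 1) and path(E, n, z, y, i - 1)
               for z in range(1, n + 1))

n = 4
E = {(1, 2), (2, 3), (3, 4)}
ok = path(E, n, 1, 4, math.ceil(math.log2(n)))  # PATH(1, 4, ceil(log n))
```

The recursion depth is ⌈log n⌉, and each stack frame stores only (x, y, i) plus the loop variable z.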


### The Proof (concluded)

[Figure: the recursion tree for PATH(x, y, ⌈log n⌉), branching into PATH(x, z, ⌈log n⌉ − 1) and PATH(z, y, ⌈log n⌉ − 1), with "yes"/"no" leaves.]

• Depth is ⌈log n⌉, and each node (x, y, z, i) needs space O(log n).

• The total space is O(log^2 n).


### The Relation between Nondeterministic Space and Deterministic Space Is Only Quadratic

Corollary 25 Let f(n) ≥ log n be proper. Then NSPACE(f(n)) ⊆ SPACE(f^2(n)).

• Apply Savitch’s proof to the conﬁguration graph of the NTM on the input.

• From p. 225, the conﬁguration graph has O(c^{f(n)}) nodes; hence each node takes space O(f(n)).

• But if we explicitly construct the whole graph before applying Savitch's theorem, we use O(c^{f(n)}) space!


### The Proof (continued)

• The way out is not to generate the graph at all.

• Instead, keep the graph implicit.

• In fact, we check node connectedness only when i = 0 on p. 233, by examining the input string G.

• There, given conﬁgurations x and y, we go over the Turing machine’s program to determine if there is an instruction that can turn x into y in one step.a

aThanks to a lively class discussion on October 15, 2003.


### The Proof (concluded)

• The z variable in the algorithm on p. 233 simply runs through all possible valid conﬁgurations.

– Let z = 0, 1, . . . , O(c^{f(n)}).

– Make sure z is a valid conﬁguration before using it in the recursive calls.a

• Each z has length O(f(n)) by Eq. (1) on p. 225.

• So each node needs space O(f(n)).

• As the depth of the recursive call on p. 233 is O(log c^{f(n)}) = O(f(n)), the total space is therefore O(f^2(n)).

aThanks to a lively class discussion on October 13, 2004.


### Implications of Savitch’s Theorem

• PSPACE = NPSPACE.

• Nondeterminism is less powerful with respect to space.

• Nondeterminism may be very powerful with respect to time as it is not known if P = NP.


### Nondeterministic Space Is Closed under Complement

• Closure under complement is trivially true for deterministic complexity classes (p. 210).

• It is known thata

coNSPACE(f (n)) = NSPACE(f (n)). (2)

• So

coNL = NL,

coNPSPACE = NPSPACE.

• But it is not known whether coNP = NP.

aSzelepcsényi (1987) and Immerman (1988).


## Reductions and Completeness


It is unworthy of excellent men to lose hours like slaves in the labor of computation.

— Gottfried Wilhelm von Leibniz (1646–1716)


### Degrees of Diﬃculty

• When is a problem more diﬃcult than another?

• B reduces to A if there is a transformation R which for every input x of B yields an input R(x) of A.

– The answer to x for B is the same as the answer to R(x) for A.

– R is easy to compute.

• We say problem A is at least as hard as problem B if B reduces to A.


### Degrees of Diﬃculty (concluded)

• This makes intuitive sense: if A is able to solve your problem B after only the little extra work of computing R, then A must be at least as hard.

– If A is easy to solve, it combined with R (which is also easy) would make B easy to solve, too.a

– So if B is hard to solve, A must be hard (if not harder), too!

aThanks to a lively class discussion on October 13, 2009.


[Figure: x is transformed by R into R(x), which is fed to the algorithm for A.]

Solving problem B by calling the algorithm for problem A once and without further processing its answer.


• Suppose B reduces to A via a transformation R.

• The input x is an instance of B.

• The output R(x) is an instance of A.

• R(x) may not span all possible instances of A.b

– Some instances of A may never appear in the range of R.

• But x must be a general instance for B.

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 29, 2003.

bR(x) may not be onto; Mr. Alexandr Simak (D98922040) on October 13, 2009.


### Is “Reduction” a Confusing Choice of Word?


• If B reduces to A, doesn’t that intuitively make A smaller and simpler?

– Sometimes, we say, “B can be reduced to A.”

• But our deﬁnition means just the opposite.

• Our deﬁnition says in this case B is a special case of A.

• Hence A is harder.

aMoore and Mertens (2011).


### Reduction between Languages

• Language L1 is reducible to L2 if there is a function R computable by a deterministic TM in space O(log n).

• Furthermore, for all inputs x, x ∈ L1 if and only if R(x) ∈ L2.

• R is said to be a (Karp) reduction from L1 to L2.


### Reduction between Languages (concluded)

• Note that by Theorem 23 (p. 222), R runs in polynomial time.

– In most cases, a polynomial-time R suﬃces for proofs.a

• Suppose R is a reduction from L1 to L2.

• Then solving "R(x) ∈ L2?" is an algorithm for solving "x ∈ L1?"b

aIn fact, unless stated otherwise, we will only require that the reduction R run in polynomial time.

bOf course, it may not be an optimal one.


• Degree of diﬃculty is not deﬁned in terms of absolute complexity.

• So a language B ∈ TIME(n^99) may be "easier" than a language A ∈ TIME(n^3).

– Again, this happens when B is reducible to A.

• But is this a contradiction if the best algorithm for B requires n^99 steps?

• That is, how can a problem requiring n^99 steps be reducible to a problem solvable in n^3 steps?


• The so-called contradiction does not hold.

• Suppose we solve the problem “x ∈ B?” via “R(x) ∈ A?”

• We must consider both the time spent computing R(x) and its length |R(x)|, because R(x) (not x) is presented to A.


### hamiltonian path

• A Hamiltonian path of a graph is a path that visits every node of the graph exactly once.

• Suppose graph G has n nodes: 1, 2, . . . , n.

• A Hamiltonian path can be expressed as a permutation π of { 1, 2, . . . , n } such that

– π(i) = j means the ith position is occupied by node j.

– (π(i), π(i + 1)) ∈ G for i = 1, 2, . . . , n − 1.

• hamiltonian path asks if a graph has a Hamiltonian path.
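Under the permutation encoding above, checking a candidate path is straightforward; this checker is an illustrative sketch (the function and variable names are ours), treating the graph as undirected:

```python
def is_hamiltonian_path(edges, n, pi):
    """Check that pi (a list where pi[i] is the node in position i + 1) is a
    permutation of 1..n whose consecutive positions are edges of the graph."""
    if sorted(pi) != list(range(1, n + 1)):    # every node exactly once
        return False
    return all((pi[i], pi[i + 1]) in edges or (pi[i + 1], pi[i]) in edges
               for i in range(n - 1))          # consecutive nodes are adjacent

edges = {(1, 2), (2, 3), (3, 4)}
ok = is_hamiltonian_path(edges, 4, [1, 2, 3, 4])
```

Verifying a given path is easy; hamiltonian path asks whether such a permutation exists at all.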


### Reduction of hamiltonian path to sat

• Given a graph G, we shall construct a CNF R(G) such that R(G) is satisﬁable iﬀ G has a Hamiltonian path.

• R(G) has n^2 boolean variables xij, 1 ≤ i, j ≤ n.

• xij means

“the ith position in the Hamiltonian path is occupied by node j.”

• Our reduction will produce clauses.

[Figure: a 9-node graph whose Hamiltonian path visits the nodes in the order 2, 1, 4, 5, 3, 9, 6, 8, 7.]

x12 = x21 = x34 = x45 = x53 = x69 = x76 = x88 = x97 = 1;

π(1) = 2, π(2) = 1, π(3) = 4, π(4) = 5, π(5) = 3, π(6) = 9, π(7) = 6, π(8) = 8, π(9) = 7.


### The Clauses of R(G) and Their Intended Meanings

1. Each node j must appear in the path.

• x1j ∨ x2j ∨ · · · ∨ xnj for each j.

2. No node j appears twice in the path.

• ¬xij ∨ ¬xkj (≡ ¬(xij ∧ xkj)) for all i, j, k with i ≠ k.

3. Every position i on the path must be occupied.

• xi1 ∨ xi2 ∨ · · · ∨ xin for each i.

4. No two nodes j and k occupy the same position in the path.

• ¬xij ∨ ¬xik (≡ ¬(xij ∧ xik)) for all i, j, k with j ≠ k.

5. Nonadjacent nodes i and j cannot be adjacent in the path.

• ¬xki ∨ ¬xk+1,j for all (i, j) ∉ G and k = 1, 2, . . . , n − 1.
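The five clause groups can be emitted mechanically. The sketch below uses one possible encoding of ours (a literal is a (sign, i, j) triple, sign +1 for xij and −1 for its negation; a clause is a list of literals; edges are directed pairs, so an undirected graph lists both directions):

```python
def hamiltonian_to_cnf(n, edges):
    """Emit the five clause groups of R(G) for a graph on nodes 1..n."""
    clauses = []
    for j in range(1, n + 1):                        # 1. node j appears
        clauses.append([(+1, i, j) for i in range(1, n + 1)])
    for j in range(1, n + 1):                        # 2. node j appears once
        for i in range(1, n + 1):
            for k in range(i + 1, n + 1):
                clauses.append([(-1, i, j), (-1, k, j)])
    for i in range(1, n + 1):                        # 3. position i occupied
        clauses.append([(+1, i, j) for j in range(1, n + 1)])
    for i in range(1, n + 1):                        # 4. position i occupied once
        for j in range(1, n + 1):
            for k in range(j + 1, n + 1):
                clauses.append([(-1, i, j), (-1, i, k)])
    for i in range(1, n + 1):                        # 5. nonadjacent nodes are
        for j in range(1, n + 1):                    #    not consecutive
            if i != j and (i, j) not in edges:
                for k in range(1, n):
                    clauses.append([(-1, k, i), (-1, k + 1, j)])
    return clauses

cnf = hamiltonian_to_cnf(3, {(1, 2), (2, 1), (2, 3), (3, 2)})
```

Groups 2, 4, and 5 dominate, giving the O(n^3) clause count noted in the proof.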


### The Proof

• R(G) contains O(n^3) clauses.

• R(G) can be computed eﬃciently (simple exercise).

• Suppose T |= R(G).

• From the 1st and 2nd types of clauses, for each node j there is a unique position i such that T |= xij.

• From the 3rd and 4th types of clauses, for each position i there is a unique node j such that T |= xij.

• So there is a permutation π of the nodes such that π(i) = j if and only if T |= xij.


### The Proof (concluded)

• The 5th type of clauses furthermore guarantees that (π(1), π(2), . . . , π(n)) is a Hamiltonian path.

• Conversely, suppose G has a Hamiltonian path (π(1), π(2), . . . , π(n)),

where π is a permutation.

• Clearly, the truth assignment T(xij) = true if and only if π(i) = j satisﬁes all clauses of R(G).


### A Comment


• An answer to “Is R(G) satisﬁable?” does answer “Is G Hamiltonian?”

• But a positive answer does not give a Hamiltonian path for G.

– Providing a witness is not a requirement of reduction.

• A positive answer to “Is R(G) satisﬁable?” plus a satisfying truth assignment does provide us with a Hamiltonian path for G.

aContributed by Ms. Amy Liu (J94922016) on May 29, 2006.


### Reduction of reachability to circuit value

• Note that both problems are in P.

• Given a graph G = (V, E), we shall construct a variable-free circuit R(G).

• The output of R(G) is true if and only if there is a path from node 1 to node n in G.

• Idea: the Floyd-Warshall algorithm.


### The Gates

• The gates are

– gijk with 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.

– hijk with 1 ≤ i, j, k ≤ n.

• gijk: There is a path from node i to node j without passing through a node bigger than k.

• hijk: There is a path from node i to node j passing through k but not any node bigger than k.

• Input gate gij0 = true if and only if i = j or (i, j) ∈ E.


### The Construction

• hijk is an and gate with predecessors gi,k,k−1 and gk,j,k−1, where k = 1, 2, . . . , n.

• gijk is an or gate with predecessors gi,j,k−1 and hi,j,k, where k = 1, 2, . . . , n.

• g1nn is the output gate.

• Interestingly, R(G) uses no ¬ gates.

– It is a monotone circuit.
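Evaluating the circuit bottom-up in k amounts to running the Floyd-Warshall transitive closure; a minimal sketch, with 0-based boolean arrays standing in for the gates gijk and hijk:

```python
def circuit_value(n, edges):
    """Evaluate R(G): g[i][j] holds the current g_ijk, starting from k = 0."""
    # Input gates g_ij0: true iff i = j or (i, j) is an edge (1-based nodes).
    g = [[i == j or (i, j) in edges for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    for k in range(n):                     # k = 1, ..., n (0-based index)
        for i in range(n):
            for j in range(n):
                h = g[i][k] and g[k][j]    # and gate h_ijk
                g[i][j] = g[i][j] or h     # or gate g_ijk
    return g[0][n - 1]                     # output gate g_1nn

reach = circuit_value(4, {(1, 2), (2, 3), (3, 4)})
```

Only and/or gates appear, so the evaluation is monotone in the input gates.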


### Reduction of circuit sat to sat

• Given a circuit C, we will construct a boolean expression R(C) such that R(C) is satisﬁable iﬀ C is.

– R(C) will turn out to be a CNF.

– R(C) is basically a depth-2 circuit; furthermore, each gate has out-degree 1.

• The variables of R(C) are those of C plus g for each gate g of C.

– The g’s propagate the truth values for the CNF.

• Each gate of C will be turned into equivalent clauses.

• Recall that clauses are ∧ed together by deﬁnition.


### The Clauses of R(C)

g is a variable gate x: Add clauses (¬g ∨ x) and (g ∨ ¬x).

• Meaning: g ⇔ x.

g is a true gate: Add clause (g).

• Meaning: g must be true to make R(C) true.

g is a false gate: Add clause (¬g).

• Meaning: g must be false to make R(C) true.

g is a ¬ gate with predecessor gate h: Add clauses (¬g ∨ ¬h) and (g ∨ h).

• Meaning: g ⇔ ¬h.


### The Clauses of R(C) (concluded)

g is a ∨ gate with predecessor gates h and h′: Add clauses (¬h ∨ g), (¬h′ ∨ g), and (h ∨ h′ ∨ ¬g).

• Meaning: g ⇔ (h ∨ h′).

g is a ∧ gate with predecessor gates h and h′: Add clauses (¬g ∨ h), (¬g ∨ h′), and (¬h ∨ ¬h′ ∨ g).

• Meaning: g ⇔ (h ∧ h′).

g is the output gate: Add clause (g).

• Meaning: g must be true to make R(C) true.

Note: If gate g feeds gates h1, h2, . . ., then variable g appears in the clauses for h1, h2, . . . in R(C).
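The gate-by-gate translation can be sketched as follows; the circuit representation and literal encoding here are our own assumptions (a literal (v, True) is the variable v, (v, False) its negation):

```python
def circuit_to_cnf(circuit, output):
    """Translate each gate of C into its equivalent clauses.  `circuit` maps
    a gate name g to ('var', x), ('true',), ('false',), ('not', h),
    ('or', h, hp), or ('and', h, hp); a clause is a list of literals."""
    clauses = []
    for g, (kind, *args) in circuit.items():
        if kind == 'var':                        # g <=> x
            x, = args
            clauses += [[(g, False), (x, True)], [(g, True), (x, False)]]
        elif kind == 'true':                     # g must be true
            clauses.append([(g, True)])
        elif kind == 'false':                    # g must be false
            clauses.append([(g, False)])
        elif kind == 'not':                      # g <=> not h
            h, = args
            clauses += [[(g, False), (h, False)], [(g, True), (h, True)]]
        elif kind == 'or':                       # g <=> (h or h')
            h, hp = args
            clauses += [[(h, False), (g, True)], [(hp, False), (g, True)],
                        [(h, True), (hp, True), (g, False)]]
        elif kind == 'and':                      # g <=> (h and h')
            h, hp = args
            clauses += [[(g, False), (h, True)], [(g, False), (hp, True)],
                        [(h, False), (hp, False), (g, True)]]
    clauses.append([(output, True)])             # the output gate must be true
    return clauses

cnf = circuit_to_cnf({'h1': ('var', 'x1'), 'h2': ('not', 'h1'),
                      'g1': ('and', 'h1', 'h2')}, 'g1')
```

Each gate contributes a constant number of clauses, so R(C) has size linear in C.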


### An Example

[ [ [

[

∧ ¬

K J K K J K

J J

J

(h1 ⇔ x1) ∧ (h2 ⇔ x2) ∧ (h3 ⇔ x3) ∧ (h4 ⇔ x4)

∧ [ g1 ⇔ (h1 ∧ h2) ] ∧ [ g2 ⇔ (h3 ∨ h4) ]

∧ [ g3 ⇔ (g1 ∧ g2) ] ∧ (g4 ⇔ ¬g2)

∧ [ g5 ⇔ (g3 ∨ g4) ] ∧ g5.

