### The Reachability Method

• The computation of a time-bounded TM can be represented by a directed graph.

• The TM's configurations constitute the nodes.

• Two nodes are connected by a directed edge if one yields the other in one step.

• The start node, representing the initial configuration, has zero in-degree.

### The Reachability Method (concluded)

• When the TM is nondeterministic, a node may have an out-degree greater than one.

  – The graph is the same as the computation tree earlier, except that identical configuration nodes are merged into one node.

• So M accepts the input if and only if there is a path from the start node to a node with a "yes" state.

• This is the reachability problem.

### Illustration of the Reachability Method

(Figure: the configuration graph of a computation; the initial configuration on the left has edges leading to successor configurations, some of which are "yes" configurations.)
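As a sketch, the reachability method can be run as a breadth-first search over an implicit configuration graph: the TM itself is never needed, only a successor function mapping a configuration to the configurations it yields in one step. The `successors` and `is_yes` functions below are hypothetical placeholders for the TM's transition relation and "yes"-state test.

```python
from collections import deque

def accepts(initial, successors, is_yes):
    """BFS over the implicit configuration graph.

    initial:     the start configuration (hashable).
    successors:  maps a configuration to the configurations it
                 yields in one step (several, if nondeterministic).
    is_yes:      tests whether a configuration has a "yes" state.
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        config = queue.popleft()
        if is_yes(config):
            return True          # a path from the start node to a "yes" node
        for nxt in successors(config):
            if nxt not in seen:  # identical configurations are merged
                seen.add(nxt)
                queue.append(nxt)
    return False
```

A toy "machine" whose configurations are integers and whose nondeterministic step adds 1 or 2 already exercises the search.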

### Relations between Complexity Classes

**Theorem 23** Suppose f(n) is proper. Then

1. SPACE(f(n)) ⊆ NSPACE(f(n)), TIME(f(n)) ⊆ NTIME(f(n)).

2. NTIME(f(n)) ⊆ SPACE(f(n)).

3. NSPACE(f(n)) ⊆ TIME(c^{log n+f(n)}) for some constant c.

• Proof of 2:

  – Explore the computation tree of the NTM for "yes."

  – Specifically, generate an f(n)-bit sequence denoting the nondeterministic choices over f(n) steps.

### Proof of Theorem 23(2)

• (continued)

  – Simulate the NTM based on the choices.

  – Recycle the space and repeat the above steps.

  – Halt with "yes" when a "yes" is encountered, or with "no" if the tree is exhausted.

  – Each path simulation consumes at most O(f(n)) space because it takes O(f(n)) time.

  – The total space is O(f(n)) because space is recycled.
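The space-recycling simulation above can be sketched as follows, assuming binary nondeterministic choices and a hypothetical `run` function that deterministically simulates one computation path under a given choice sequence. Only one choice sequence and one simulation live in memory at a time, so space is reused across all 2^f(n) paths.

```python
from itertools import product

def ntm_accepts(run, steps):
    """Deterministic simulation of an NTM by enumerating every
    sequence of binary nondeterministic choices over `steps` steps.

    run(choices) is assumed to simulate one computation path and
    return "yes" or "no".
    """
    for choices in product((0, 1), repeat=steps):
        if run(choices) == "yes":
            return True      # halt with "yes" when a "yes" is encountered
    return False             # the tree is exhausted
```

A toy path simulator that accepts exactly one choice sequence illustrates the enumeration.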

### Proof of Theorem 23(3)

• Let k-string NTM

  M = (K, Σ, Δ, s)

with input and output decide L ∈ NSPACE(f(n)).

• Use the reachability method on the configuration graph of M on input x of length n.

• A configuration is a (2k + 1)-tuple

  (q, w_1, u_1, w_2, u_2, . . . , w_k, u_k).

### Proof of Theorem 23(3) (continued)

• We only care about

  (q, i, w_2, u_2, . . . , w_{k−1}, u_{k−1}),

where i is an integer between 0 and n for the position of the first cursor.

• The number of configurations is therefore at most

  |K| × (n + 1) × |Σ|^{(2k−4) f(n)} = O(c_1^{log n+f(n)})    (1)

for some c_1, which depends on M.

• Add edges to the configuration graph based on M's transition function.

### Proof of Theorem 23(3) (concluded)

• x ∈ L ⇔ there is a path in the configuration graph from the initial configuration to a configuration of the form ("yes", i, . . .).^a

• This is reachability on a graph with O(c_1^{log n+f(n)}) nodes.

• It is in TIME(c^{log n+f(n)}) for some c because reachability ∈ TIME(n^j) for some j and

  [ c_1^{log n+f(n)} ]^j = (c_1^j)^{log n+f(n)}.

^a There may be many of them.

### Space-Bounded Computation and Proper Functions

• In the definition of space-bounded computations earlier (p. 95), the TMs are not required to halt at all.

• When the space is bounded by a proper function f, computations can be assumed to halt:

  – Run the TM associated with f to produce a quasi-blank output of length f(n) first.

  – The space-bounded computation must repeat a configuration if it runs for more than c^{log n+f(n)} steps for some c (p. 225).

### Space-Bounded Computation and Proper Functions (concluded)

• (continued)

  – So we can prevent infinite loops during simulation by pruning any path longer than c^{log n+f(n)}.

  – In other words, we only simulate c^{log n+f(n)} time steps per computation path.

### A Grand Chain of Inclusions

• It is an easy application of Theorem 23 (p. 222) that L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP.^a

• By Corollary 20 (p. 217), we know L ⊊ PSPACE.

• So the chain must break somewhere between L and EXP.

• It is suspected that all inclusions are proper.

• But there are no proofs yet.

^a With input from Mr. Chin-Luei Chang (R93922004, D95922007) on October 22, 2004.

### Nondeterministic Space and Deterministic Space

• By Theorem 4 (p. 101),

  NTIME(f(n)) ⊆ TIME(c^{f(n)}),

an exponential gap.

• There is no proof yet that the exponential gap is inherent.

• How about NSPACE vs. SPACE?

• Surprisingly, the relation is only quadratic, i.e., polynomial, by Savitch's theorem.

### Savitch’s Theorem

**Theorem 24 (Savitch (1970))** reachability ∈ SPACE(log² n).

• Let G(V, E) be a graph with n nodes.

• For i ≥ 0, let

  PATH(x, y, i)

mean there is a path from node x to node y of length at most 2^i.

• There is a path from x to y if and only if PATH(x, y, ⌈log n⌉) holds.

### The Proof (continued)

• For i > 0, PATH(x, y, i) holds if and only if there exists a z such that PATH(x, z, i − 1) and PATH(z, y, i − 1).

• For PATH(x, y, 0), check the input graph or whether x = y.

• Compute PATH(x, y, ⌈log n⌉) with a depth-first search on a graph with nodes (x, y, z, i)s (see next page).^a

• Like stacks in recursive calls, we keep only the current path of (x, y, i)s.

• The space requirement is proportional to the depth of the tree: ⌈log n⌉.

^a Contributed by Mr. Chuan-Yao Tan on October 11, 2011.

*The Proof (continued): Algorithm for PATH(x, y, i)*

1: if i = 0 then
2:&nbsp;&nbsp; if x = y or (x, y) ∈ E then
3:&nbsp;&nbsp;&nbsp;&nbsp; return true;
4:&nbsp;&nbsp; else
5:&nbsp;&nbsp;&nbsp;&nbsp; return false;
6:&nbsp;&nbsp; end if
7: else
8:&nbsp;&nbsp; for z = 1, 2, . . . , n do
9:&nbsp;&nbsp;&nbsp;&nbsp; if PATH(x, z, i − 1) and PATH(z, y, i − 1) then
10:&nbsp;&nbsp;&nbsp;&nbsp; return true;
11:&nbsp;&nbsp; end if
12:&nbsp;&nbsp; end for
13:&nbsp;&nbsp; return false;
14: end if
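The pseudocode above translates directly into a recursive procedure; the sketch below assumes the graph is given as a set of directed edges over nodes 1, . . . , n. The recursion depth is ⌈log n⌉ and each frame stores only (x, y, i), mirroring the O(log² n) space bound.

```python
from math import ceil, log2

def path(x, y, i, n, edges):
    """Is there a path from x to y of length at most 2**i?"""
    if i == 0:
        return x == y or (x, y) in edges
    # PATH(x, y, i) iff some midpoint z splits it into two half-length paths
    return any(path(x, z, i - 1, n, edges) and path(z, y, i - 1, n, edges)
               for z in range(1, n + 1))

def reachable(x, y, n, edges):
    """Reachability via PATH(x, y, ceil(log n))."""
    i = ceil(log2(n)) if n > 1 else 0
    return path(x, y, i, n, edges)
```

On the chain 1 → 2 → 3 → 4, node 4 is reachable from node 1 but not vice versa.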

### The Proof (concluded)

(Figure: the recursion tree of PATH(x, y, ⌈log n⌉), expanding into PATH(x, z, ⌈log n⌉ − 1) and PATH(z, y, ⌈log n⌉ − 1), with "yes"/"no" leaves.)

• Depth is ⌈log n⌉, and each node (x, y, z, i) needs space O(log n).

• The total space is O(log² n).

### The Relation between Nondeterministic Space and Deterministic Space Is Only Quadratic

**Corollary 25** Let f(n) ≥ log n be proper. Then NSPACE(f(n)) ⊆ SPACE(f²(n)).

• Apply Savitch's proof to the configuration graph of the NTM on the input.

• From p. 225, the configuration graph has O(c^{f(n)}) nodes; hence each node takes space O(f(n)).

• But if we construct explicitly the whole graph before applying Savitch's theorem, we get O(c^{f(n)}) space!

### The Proof (continued)

• The way out is not to generate the graph at all.

• Instead, keep the graph implicit.

• In fact, we check node connectedness only when i = 0 on p. 233, by examining the input string G.

• There, given configurations x and y, we go over the Turing machine's program to determine if there is an instruction that can turn x into y in one step.^a

^a Thanks to a lively class discussion on October 15, 2003.

### The Proof (concluded)

• The z variable in the algorithm on p. 233 simply runs through all possible valid configurations.

  – Let z = 0, 1, . . . , O(c^{f(n)}).

  – Make sure z is a valid configuration before using it in the recursive calls.^a

• Each z has length O(f(n)) by Eq. (1) on p. 225.

• So each node needs space O(f(n)).

• As the depth of the recursive call on p. 233 is O(log c^{f(n)}) = O(f(n)), the total space is therefore O(f²(n)).

^a Thanks to a lively class discussion on October 13, 2004.
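The implicit-graph version of Savitch's procedure can be sketched as below. Nothing is ever materialized: `configs` is assumed to enumerate all valid configurations, and `yields` is a hypothetical one-step test consulted only at i = 0, standing in for going over the machine's program.

```python
def path_implicit(x, y, i, configs, yields):
    """PATH on an implicit configuration graph: the graph is never
    built; one-step edges are tested by `yields` only at i = 0."""
    if i == 0:
        return x == y or yields(x, y)
    # z runs through all possible valid configurations
    return any(path_implicit(x, z, i - 1, configs, yields) and
               path_implicit(z, y, i - 1, configs, yields)
               for z in configs())
```

A toy "machine" whose configurations are 0, 1, 2, 3 and whose only step is c → c + 1 suffices to exercise the recursion.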

### Implications of Savitch’s Theorem

*• PSPACE = NPSPACE.*

*• Nondeterminism is less powerful with respect to space.*

• Nondeterminism may be very powerful with respect to time, as it is not known if P = NP.

### Nondeterministic Space Is Closed under Complement

• Closure under complement is trivially true for deterministic complexity classes (p. 210).

• It is known that^a

  coNSPACE(f(n)) = NSPACE(f(n)).    (2)

• So

  coNL = NL,

  coNPSPACE = NPSPACE.

• But it is not known whether coNP = NP.

^a Szelepcsényi (1987) and Immerman (1988).

*Reductions and Completeness*

It is unworthy of excellent men to lose hours like slaves in the labor of computation.

— Gottfried Wilhelm von Leibniz (1646–1716)

### Degrees of Diﬃculty

• When is a problem more difficult than another?

• B reduces to A if there is a transformation R which for every input x of B yields an input R(x) of A.^a

  – The answer to x for B is the same as the answer to R(x) for A.

  – R is easy to compute.

• We say problem A is at least as hard as problem B if B reduces to A.

^a See also p. 148.

### Degrees of Diﬃculty (concluded)

• This makes intuitive sense: if A is able to solve your problem B after only a little bit of work by R, then A must be at least as hard.

  – If A were easy to solve, it combined with R (which is also easy) would make B easy to solve, too.^a

  – So if B is hard to solve, A must be hard (if not harder), too!

^a Thanks to a lively class discussion on October 13, 2009.

### Reduction

(Figure: the input x is fed to R, whose output R(x) is fed to the algorithm for A, which emits the yes/no answer.)

Solving problem B by calling the algorithm for problem A once and without further processing its answer.
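The diagram is one line of code: compose R with the algorithm for A. The toy reduction in the usage example ("n is a multiple of 4" reduces to "m is even" via R(n) = n/2 for even n, and an odd dummy otherwise) is invented purely for illustration.

```python
def solve_B(x, R, solve_A):
    """Solve "x in B?" by computing R(x) and calling the algorithm
    for A exactly once, without post-processing its answer."""
    return solve_A(R(x))
```

Usage: `solve_B(8, R, is_even)` with the toy R above answers whether 8 is a multiple of 4.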

### Comments

• Suppose B reduces to A via a transformation R.^a

• The input x is an instance of B.

• The output R(x) is an instance of A.

• R(x) may not span all possible instances of A.^b

  – Some instances of A may never appear in the range of R.

• But x must be a general instance for B.

^a Contributed by Mr. Ming-Feng Tsai (D92922003) on October 29, 2003.

^b R(x) may not be onto; Mr. Alexandr Simak (D98922040) on October 13, 2009.

### Is “Reduction” a Confusing Choice of Word?

• If B reduces to A, doesn't that intuitively make A smaller and simpler?^a

  – Sometimes, we say, "B can be reduced to A."

• But our definition means just the opposite.

• Our definition says in this case B is a special case of A.

• Hence A is harder.

^a Moore and Mertens (2011).

### Reduction between Languages

• Language L_1 is **reducible to** L_2 if there is a function R computable by a deterministic TM in space O(log n).

• Furthermore, for all inputs x, x ∈ L_1 if and only if R(x) ∈ L_2.

• R is said to be a (Karp) reduction from L_1 to L_2.

### Reduction between Languages (concluded)

• Note that by Theorem 23 (p. 222), R runs in polynomial time.

  – In most cases, a polynomial-time R suffices for proofs.^a

• Suppose R is a reduction from L_1 to L_2.

• Then solving "R(x) ∈ L_2?" is an algorithm for solving "x ∈ L_1?"^b

^a In fact, unless stated otherwise, we will only require that the reduction R run in polynomial time.

^b Of course, it may not be an optimal one.

### A Paradox?

• Degree of difficulty is not defined in terms of absolute complexity.

• So a language B ∈ TIME(n^99) may be "easier" than a language A ∈ TIME(n³).

  – Again, this happens when B is reducible to A.

• But is this a contradiction if the best algorithm for B requires n^99 steps?

• That is, how can a problem requiring n^99 steps be reducible to a problem solvable in n³ steps?

### Paradox Resolved

• The so-called contradiction does not hold.

• Suppose we solve the problem "x ∈ B?" via "R(x) ∈ A?"

• We must consider the time spent computing R(x) and its length |R(x)|, because R(x) (not x) is presented to A.

### hamiltonian path

• A Hamiltonian path of a graph is a path that visits every node of the graph exactly once.

• Suppose graph G has n nodes: 1, 2, . . . , n.

• A Hamiltonian path can be expressed as a permutation π of { 1, 2, . . . , n } such that

  – π(i) = j means the ith position is occupied by node j.

  – (π(i), π(i + 1)) ∈ G for i = 1, 2, . . . , n − 1.

• hamiltonian path asks if a graph has a Hamiltonian path.
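As a quick illustration of the definition, the sketch below checks whether a candidate permutation π is a Hamiltonian path; the encoding (π as a list with π(i) = perm[i−1], and G as a set of directed edge pairs) is an assumption made for the example.

```python
def is_hamiltonian_path(perm, n, edges):
    """perm[i-1] is the node in the ith position of the candidate path."""
    if sorted(perm) != list(range(1, n + 1)):
        return False  # not a permutation of {1, ..., n}
    # (pi(i), pi(i+1)) must be an edge for i = 1, ..., n - 1
    return all((perm[i], perm[i + 1]) in edges for i in range(n - 1))
```

On the 3-node graph with edges (1, 2) and (2, 3), the path 1, 2, 3 is Hamiltonian while 2, 1, 3 is not.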

### Reduction of hamiltonian path to sat

• Given a graph G, we shall construct a CNF R(G) such that R(G) is satisfiable iff G has a Hamiltonian path.

• R(G) has n² boolean variables x_{ij}, 1 ≤ i, j ≤ n.

• x_{ij} means

  "the ith position in the Hamiltonian path is occupied by node j."

• Our reduction will produce clauses.

(Figure: a 9-node graph whose Hamiltonian path corresponds to the assignment below.)

x_{12} = x_{21} = x_{34} = x_{45} = x_{53} = x_{69} = x_{76} = x_{88} = x_{97} = 1;

π(1) = 2, π(2) = 1, π(3) = 4, π(4) = 5, π(5) = 3, π(6) = 9, π(7) = 6, π(8) = 8, π(9) = 7.

*The Clauses of R(G) and Their Intended Meanings*

1. Each node j must appear in the path.

   • x_{1j} ∨ x_{2j} ∨ · · · ∨ x_{nj} for each j.

2. No node j appears twice in the path.

   • ¬x_{ij} ∨ ¬x_{kj} (≡ ¬(x_{ij} ∧ x_{kj})) for all i, j, k with i ≠ k.

3. Every position i on the path must be occupied.

   • x_{i1} ∨ x_{i2} ∨ · · · ∨ x_{in} for each i.

4. No two nodes j and k occupy the same position in the path.

   • ¬x_{ij} ∨ ¬x_{ik} (≡ ¬(x_{ij} ∧ x_{ik})) for all i, j, k with j ≠ k.

5. Nonadjacent nodes i and j cannot be adjacent in the path.

   • ¬x_{ki} ∨ ¬x_{k+1,j} for all (i, j) ∉ G and k = 1, 2, . . . , n − 1.
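The five clause families can be generated mechanically. In the sketch below a literal is encoded as (sign, (i, j)) for ±x_{ij}; this encoding, and representing G as a set of directed edge pairs, are assumptions made for illustration.

```python
def reduce_hamiltonian_to_sat(n, edges):
    """Emit the clauses of R(G); a literal (sign, (i, j)) means +/- x_ij."""
    pos = lambda i, j: (1, (i, j))
    neg = lambda i, j: (-1, (i, j))
    clauses = []
    for j in range(1, n + 1):            # 1. node j appears somewhere
        clauses.append([pos(i, j) for i in range(1, n + 1)])
    for j in range(1, n + 1):            # 2. node j appears at most once
        for i in range(1, n + 1):
            for k in range(1, n + 1):
                if i != k:
                    clauses.append([neg(i, j), neg(k, j)])
    for i in range(1, n + 1):            # 3. position i is occupied
        clauses.append([pos(i, j) for j in range(1, n + 1)])
    for i in range(1, n + 1):            # 4. position i holds at most one node
        for j in range(1, n + 1):
            for k in range(1, n + 1):
                if j != k:
                    clauses.append([neg(i, j), neg(i, k)])
    for i in range(1, n + 1):            # 5. nonadjacent nodes never consecutive
        for j in range(1, n + 1):
            if (i, j) not in edges:
                for k in range(1, n):
                    clauses.append([neg(k, i), neg(k + 1, j)])
    return clauses
```

For n = 2 with both edges present, the construction yields 2 + 4 + 2 + 4 + 2 = 14 clauses, consistent with the O(n³) bound.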

### The Proof

• R(G) contains O(n³) clauses.

• R(G) can be computed efficiently (simple exercise).

• Suppose T ⊨ R(G).

• From the 1st and 2nd types of clauses, for each node j there is a unique position i such that T ⊨ x_{ij}.

• From the 3rd and 4th types of clauses, for each position i there is a unique node j such that T ⊨ x_{ij}.

• So there is a permutation π of the nodes such that π(i) = j if and only if T ⊨ x_{ij}.

### The Proof (concluded)

• The 5th type of clauses furthermore guarantees that (π(1), π(2), . . . , π(n)) is a Hamiltonian path.

• Conversely, suppose G has a Hamiltonian path

  (π(1), π(2), . . . , π(n)),

where π is a permutation.

• Clearly, the truth assignment

  T(x_{ij}) = true if and only if π(i) = j

satisfies all clauses of R(G).

### A Comment

• An answer to "Is R(G) satisfiable?" does answer "Is G Hamiltonian?"^a

• But a positive answer does not give a Hamiltonian path for G.

  – Providing a witness is not a requirement of reduction.

• A positive answer to "Is R(G) satisfiable?" plus a satisfying truth assignment does provide us with a Hamiltonian path for G.

^a Contributed by Ms. Amy Liu (J94922016) on May 29, 2006.
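Extracting the path from a satisfying assignment is straightforward, as the following sketch shows; it assumes T is given as a dict mapping (i, j) to the truth value of x_{ij}, an encoding invented for the example.

```python
def path_from_assignment(T, n):
    """Recover pi from a satisfying truth assignment:
    pi(i) is the unique j with T[x_ij] true."""
    return [next(j for j in range(1, n + 1) if T[(i, j)])
            for i in range(1, n + 1)]
```

With the assignment induced by π(1) = 2, π(2) = 1, π(3) = 3, the recovered path is 2, 1, 3.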

### Reduction of reachability to circuit value

• Note that both problems are in P.

• Given a graph G = (V, E), we shall construct a variable-free circuit R(G).

• The output of R(G) is true if and only if there is a path from node 1 to node n in G.

• Idea: the Floyd-Warshall algorithm.

### The Gates

• The gates are

  – g_{ijk} with 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.

  – h_{ijk} with 1 ≤ i, j, k ≤ n.

• g_{ijk}: there is a path from node i to node j without passing through a node bigger than k.

• h_{ijk}: there is a path from node i to node j passing through k but not any node bigger than k.

• Input gate g_{ij0} = true if and only if i = j or (i, j) ∈ E.

### The Construction

• h_{ijk} is an and gate with predecessors g_{i,k,k−1} and g_{k,j,k−1}, where k = 1, 2, . . . , n.

• g_{ijk} is an or gate with predecessors g_{i,j,k−1} and h_{ijk}, where k = 1, 2, . . . , n.

• g_{1nn} is the output gate.

• Interestingly, R(G) uses no ¬ gates.

  – It is a monotone circuit.
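Since R(G) is variable-free, evaluating it is the same as running the boolean (reachability) version of Floyd-Warshall: compute the gate values level by level in k. The sketch below assumes G is given as a set of directed edge pairs over nodes 1, . . . , n.

```python
def circuit_value(n, edges):
    """Evaluate the monotone circuit R(G): returns the output gate g_{1,n,n}."""
    # Input gates g_{ij0}: true iff i = j or (i, j) is an edge.
    g = [[i == j or (i, j) in edges for j in range(n + 1)]
         for i in range(n + 1)]
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                h = g[i][k] and g[k][j]  # and gate h_{ijk}
                g[i][j] = g[i][j] or h   # or gate g_{ijk}
    return g[1][n]
```

On the chain 1 → 2 → 3 → 4, the output gate is true; with only the edge (2, 1), it is false.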

### Reduction of circuit sat to sat

• Given a circuit C, we will construct a boolean expression R(C) such that R(C) is satisfiable iff C is.

  – R(C) will turn out to be a CNF.

  – R(C) is basically a depth-2 circuit; furthermore, each gate has out-degree 1.

• The variables of R(C) are those of C plus g for each gate g of C.

  – The g's propagate the truth values for the CNF.

• Each gate of C will be turned into equivalent clauses.

• Recall that clauses are ∧ed together by definition.

*The Clauses of R(C)*

**g is a variable gate x:** Add clauses (¬g ∨ x) and (g ∨ ¬x).

  • Meaning: g ⇔ x.

**g is a true gate:** Add clause (g).

  • Meaning: g must be true to make R(C) true.

**g is a false gate:** Add clause (¬g).

  • Meaning: g must be false to make R(C) true.

**g is a ¬ gate with predecessor gate h:** Add clauses (¬g ∨ ¬h) and (g ∨ h).

  • Meaning: g ⇔ ¬h.

*The Clauses of R(C) (concluded)*

**g is a ∨ gate with predecessor gates h and h′:** Add clauses (¬h ∨ g), (¬h′ ∨ g), and (h ∨ h′ ∨ ¬g).

  • Meaning: g ⇔ (h ∨ h′).

**g is a ∧ gate with predecessor gates h and h′:** Add clauses (¬g ∨ h), (¬g ∨ h′), and (¬h ∨ ¬h′ ∨ g).

  • Meaning: g ⇔ (h ∧ h′).

**g is the output gate:** Add clause (g).

  • Meaning: g must be true to make R(C) true.

Note: If gate g feeds gates h_1, h_2, . . ., then variable g appears in the clauses for h_1, h_2, . . . in R(C).
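The gate-by-gate translation can be sketched as follows, assuming a circuit given as a dict mapping gate names to (kind, predecessors); a literal (sign, g) stands for ±g. Both the circuit representation and the literal encoding are assumptions for the example.

```python
def reduce_circuit_to_sat(circuit, output):
    """circuit: gate name -> (kind, predecessors); kinds are
    'var', 'true', 'false', 'not', 'or', 'and'.
    A literal (1, g) means g; (-1, g) means not-g."""
    clauses = []
    for g, (kind, preds) in circuit.items():
        if kind == 'var':                  # g iff x
            x, = preds
            clauses += [[(-1, g), (1, x)], [(1, g), (-1, x)]]
        elif kind == 'true':               # g must be true
            clauses.append([(1, g)])
        elif kind == 'false':              # g must be false
            clauses.append([(-1, g)])
        elif kind == 'not':                # g iff not h
            h, = preds
            clauses += [[(-1, g), (-1, h)], [(1, g), (1, h)]]
        elif kind == 'or':                 # g iff (h or h2)
            h, h2 = preds
            clauses += [[(-1, h), (1, g)], [(-1, h2), (1, g)],
                        [(1, h), (1, h2), (-1, g)]]
        elif kind == 'and':                # g iff (h and h2)
            h, h2 = preds
            clauses += [[(-1, g), (1, h)], [(-1, g), (1, h2)],
                        [(-1, h), (-1, h2), (1, g)]]
    clauses.append([(1, output)])          # the output gate must be true
    return clauses
```

A two-input ∧ gate fed by two true gates yields 3 + 1 + 1 clauses plus the output clause.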

### An Example

(Figure: a circuit with variable gates h_1, . . . , h_4 reading x_1, . . . , x_4, an ∧ gate g_1, an ∨ gate g_2, an ∧ gate g_3, a ¬ gate g_4, and the output ∨ gate g_5.)

(h_1 ⇔ x_1) ∧ (h_2 ⇔ x_2) ∧ (h_3 ⇔ x_3) ∧ (h_4 ⇔ x_4)

∧ [ g_1 ⇔ (h_1 ∧ h_2) ] ∧ [ g_2 ⇔ (h_3 ∨ h_4) ]

∧ [ g_3 ⇔ (g_1 ∧ g_2) ] ∧ (g_4 ⇔ ¬g_2)

∧ [ g_5 ⇔ (g_3 ∨ g_4) ] ∧ g_5.