### Savitch’s Theorem

**Theorem 25 (Savitch (1970))**

*reachability ∈ SPACE(log^{2} n).*

*• Let G = (V, E) be a graph with n nodes.*

*• For i ≥ 0, let PATH(x, y, i) mean there is a path from node x to node y of length at most 2^{i}.*

*• There is a path from x to y if and only if PATH(x, y, ⌈log n⌉) holds.*

### The Proof (continued)

*• For i > 0, PATH(x, y, i) holds if and only if there exists a z such that PATH(x, z, i − 1) and PATH(z, y, i − 1).*

*• For PATH(x, y, 0), check whether x = y or whether (x, y) is an edge of the input graph.*

*• Compute PATH(x, y, ⌈log n⌉) with a depth-first search on a graph with nodes (x, y, z, i)s (see next page).*^{a}

*• As with stacks in recursive calls, we keep only the current path of (x, y, i)s.*

*• The space requirement is proportional to the depth of the tree (⌈log n⌉) times the size of the items stored at each node.*

aContributed by Mr. Chuan-Yao Tan on October 11, 2011.

*The Proof (continued): Algorithm for PATH(x, y, i)*

1: **if i = 0 then**

2: **if x = y or (x, y) ∈ E then**

3: **return true;**

4: **else**

5: **return false;**

6: **end if**

7: **else**

8: **for z = 1, 2, . . . , n do**

9: **if PATH(x, z, i − 1) and PATH(z, y, i − 1) then**

10: **return true;**

11: **end if**

12: **end for**

13: **return false;**

14: **end if**
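The pseudocode translates almost line for line into Python; the following is an illustrative sketch (the names `path` and `reachable` and the edge-set representation are my own), with the call stack playing the role of the stack of (x, y, i)s in the proof.

```python
# Savitch's recursion: the Python call stack holds only the current
# path of (x, y, i)s, so at most O(log n) frames are live at once.
from math import ceil, log2

def path(E, n, x, y, i):
    """True iff there is a path from x to y of length at most 2**i."""
    if i == 0:
        return x == y or (x, y) in E
    # Try every midpoint z, recursing with exponent i - 1 on both halves.
    return any(path(E, n, x, z, i - 1) and path(E, n, z, y, i - 1)
               for z in range(1, n + 1))

def reachable(E, n, x, y):
    return path(E, n, x, y, ceil(log2(n)) if n > 1 else 0)

# A 4-node path graph 1 -> 2 -> 3 -> 4:
E = {(1, 2), (2, 3), (3, 4)}
print(reachable(E, 4, 1, 4))  # True
print(reachable(E, 4, 4, 1))  # False
```

The sketch uses exponential time, as the proof itself does; only the space bound is the point.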

### The Proof (continued)

[Figure: the depth-first search tree — the root PATH(x, y, ⌈log n⌉) branches into PATH(x, z, ⌈log n⌉ − 1) and PATH(z, y, ⌈log n⌉ − 1), with “yes”/“no” answers propagating back up.]

### The Proof (concluded)

*• Depth is ⌈log n⌉, and each node (x, y, z, i) needs space*
*O(log n).*

*• The total space is O(log^{2} n).*

### The Relation between Nondeterministic Space and Deterministic Space Only Quadratic

**Corollary 26** *Let f(n) ≥ log n be proper. Then NSPACE(f(n)) ⊆ SPACE(f^{2}(n)).*

*• Apply Savitch’s proof to the configuration graph of the*
NTM on the input.

*• From p. 242, the configuration graph has O(c^{f(n)}) nodes; hence each node takes space O(f(n)).*

*• But if we construct explicitly the whole graph before applying Savitch’s theorem, we get O(c^{f(n)}) space!*

### The Proof (continued)

*• The way out is not to generate the graph at all.*

*• Instead, keep the graph implicit.*

*• In fact, we check node connectedness only when i = 0 on*
*p. 250, by examining the input string G.*

*• There, given configurations x and y, we go over the*
Turing machine’s program to determine if there is an
*instruction that can turn x into y in one step.*^{a}

aThanks to a lively class discussion on October 15, 2003.

### The Proof (concluded)

*• The z variable in the algorithm on p. 250 simply runs*
through all possible valid configurations.

**– Let z = 0, 1, . . . , O(c^{f(n)}).**

**– Make sure z is a valid configuration before using it in the recursive calls.**^{a}

*• Each z has length O(f(n)) by Eq. (2) on p. 242.*

*• So each node needs space O(f(n)).*

*• The depth of the recursive calls on p. 250 is O(log c^{f(n)}), which is O(f(n)).*

*• The total space is therefore O(f*^{2}*(n)).*

aThanks to a lively class discussion on October 13, 2004.
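The space-saving idea — never materializing the configuration graph — can be sketched by replacing the stored edge set with a one-step predicate. Here `yields` is a stand-in for going over the Turing machine’s program, and the toy “machine” is purely illustrative.

```python
# Savitch's recursion over an implicit graph: no edges are stored.
# Only at i = 0 is the predicate `yields(x, y)` consulted, just as the
# proof consults the TM's program for one-step transitions.
def path_implicit(configs, yields, x, y, i):
    """PATH(x, y, i) with the edge relation given by a predicate."""
    if i == 0:
        return x == y or yields(x, y)
    return any(path_implicit(configs, yields, x, z, i - 1)
               and path_implicit(configs, yields, z, y, i - 1)
               for z in configs)

# Toy "machine": configurations are 0..7; one step increments by 1.
print(path_implicit(range(8), lambda u, v: v == u + 1, 0, 7, 3))  # True
```

The only storage is the recursion stack of (x, y, i)s, matching the O(f(n)) depth times O(f(n)) per frame in the proof.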

### Implications of Savitch’s Theorem

*• PSPACE = NPSPACE.*

*• Nondeterminism is less powerful with respect to space.*

*• Nondeterminism may be very powerful with respect to*
time as it is not known if P = NP.

### Nondeterministic Space Is Closed under Complement

*• Closure under complement is trivially true for*
deterministic complexity classes (p. 227).

*• It is known that*^{a}

*coNSPACE(f (n)) = NSPACE(f (n)).* (3)

*• So*

coNL = NL,

coNPSPACE = NPSPACE.

*• But it is not known whether coNP = NP.*

aSzelepcsényi (1987) and Immerman (1988).

*Reductions and Completeness*

It is unworthy of excellent men to lose hours like slaves in the labor of computation.

— Gottfried Wilhelm von Leibniz (1646–1716)

### Degrees of Difficulty

*• When is a problem more difficult than another?*

*• B* **reduces to** *A if there is a transformation R which for every input x of B yields an input R(x) of A.*^{a}

**– The answer to x for B is the same as the answer to R(x) for A.**

**– R is easy to compute.**

*• We say problem A is at least as hard as*^{b} problem B if B
reduces to A.

aSee also p. 164.

bOr simply “harder than” for brevity.

### Reduction

[Figure: the input x enters R; the output R(x) enters the algorithm for A, which answers yes/no.]

Solving problem B by calling the algorithm for problem A *once and without further processing its answer.*

### Degrees of Difficulty (concluded)

*• This makes intuitive sense: If A is able to solve your problem B after only a little bit of work by R, then A must be at least as hard.*

**– If A is easy to solve, it combined with R (which is also easy) would make B easy to solve, too.**^{a}

**– So if B is hard to solve, A must be hard (if not**
harder), too!

aThanks to a lively class discussion on October 13, 2009.

### Comments^{a}

*• Suppose B reduces to A via a transformation R.*

*• The input x is an instance of B.*

*• The output R(x) is an instance of A.*

*• R(x) may not span all possible instances of A.*^{b}

**– Some instances of A may never appear in the range**
*of R.*

*• But x must be a general instance for B.*

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 29, 2003.

bR(x) may not be onto; contributed by Mr. Alexandr Simak (D98922040) on October 13, 2009.

### Is “Reduction” a Confusing Choice of Word?^{a}

*• If B reduces to A, doesn’t that intuitively make A*
smaller and simpler?

**– Sometimes, we say, “B can be reduced to A.”**

*• But our definition means just the opposite.*

*• Our definition says in this case B is a special case of A.*

*• Hence A is harder.*

aMoore and Mertens (2011).

### Reduction between Languages

*• Language L_{1} is* **reducible to** *L_{2} if there is a function R computable by a deterministic TM in space O(log n).*

*• Furthermore, for all inputs x, x ∈ L_{1} if and only if R(x) ∈ L_{2}.*

*• R is said to be a* **(Karp) reduction** *from L_{1} to L_{2}.*

### Reduction between Languages (concluded)

*• Note that by Theorem 24 (p. 239), R runs in polynomial*
time.

*– In most cases, a polynomial-time R suffices for proofs.*^{a}

*• Suppose R is a reduction from L_{1} to L_{2}.*

*• Then solving “R(x) ∈ L_{2}?” is an algorithm for solving “x ∈ L_{1}?”*^{b}

aIn fact, unless stated otherwise, we will only require that the reduction R run in polynomial time.

bOf course, it may not be an optimal one.

### A Paradox?

*• Degree of difficulty is not defined in terms of absolute complexity.*

*• So a language B ∈ TIME(n^{99}) may be “easier” than a language A ∈ TIME(n^{3}).*

**– Again, this happens when B is reducible to A.**

*• But isn’t this a contradiction if the best algorithm for B*
*requires n*^{99} steps?

*• That is, how can a problem requiring n*^{99} steps be
*reducible to a problem solvable in n*^{3} steps?

### Paradox Resolved

*• The so-called contradiction does not hold.*

*• Suppose we solve the problem “x ∈ B?” via “R(x) ∈ A?”*

*• We must consider the time spent computing R(x) and its length | R(x) |:*

**– Because R(x) (not x) is what the algorithm for A solves.**

### hamiltonian path

*• A Hamiltonian path of a graph is a path that visits every node of the graph exactly once.*

*• Suppose graph G has n nodes: 1, 2, . . . , n.*

*• A Hamiltonian path can be expressed as a permutation*
*π of* *{ 1, 2, . . . , n } such that*

**– π(i) = j means the ith position is occupied by node j.**

**– (π(i), π(i + 1)) ∈ G for i = 1, 2, . . . , n − 1.**

*• hamiltonian path asks if a graph has a Hamiltonian*
path.
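A candidate permutation is easy to check; the following is a minimal sketch (the function name is my own; nodes are 1..n, and π is given as a 0-indexed Python list, so `pi[i]` is the node in position i + 1).

```python
# Verify the two conditions above: pi is a permutation of {1, ..., n},
# and consecutive positions hold adjacent nodes of G (a set of edges).
def is_hamiltonian_path(G, n, pi):
    return (sorted(pi) == list(range(1, n + 1))            # a permutation
            and all((pi[i], pi[i + 1]) in G for i in range(n - 1)))

G = {(2, 1), (1, 3), (3, 4)}
print(is_hamiltonian_path(G, 4, [2, 1, 3, 4]))  # True
print(is_hamiltonian_path(G, 4, [1, 2, 3, 4]))  # False: (1, 2) not in G
```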

### Reduction of hamiltonian path to sat

*• Given a graph G, we shall construct a CNF R(G) such that R(G) is satisfiable iff G has a Hamiltonian path.*

*• R(G) has n^{2} boolean variables x_{ij}, 1 ≤ i, j ≤ n.*

*• x_{ij} means “the ith position in the Hamiltonian path is occupied by node j.”*

*• Our reduction will produce clauses.*

[Figure: a 9-node graph whose Hamiltonian path visits the nodes in the order 2, 1, 4, 5, 3, 9, 6, 8, 7.]

*x_{12} = x_{21} = x_{34} = x_{45} = x_{53} = x_{69} = x_{76} = x_{88} = x_{97} = 1;*

*π(1) = 2, π(2) = 1, π(3) = 4, π(4) = 5, π(5) = 3, π(6) = 9, π(7) = 6, π(8) = 8, π(9) = 7.*

*The Clauses of R(G) and Their Intended Meanings*

*1. Each node j must appear in the path.*

*• x_{1j} ∨ x_{2j} ∨ · · · ∨ x_{nj} for each j.*

*2. No node j appears twice in the path.*

*• ¬x_{ij} ∨ ¬x_{kj} (≡ ¬(x_{ij} ∧ x_{kj})) for all i, j, k with i ̸= k.*

*3. Every position i on the path must be occupied.*

*• x_{i1} ∨ x_{i2} ∨ · · · ∨ x_{in} for each i.*

*4. No two nodes j and k occupy the same position in the path.*

*• ¬x_{ij} ∨ ¬x_{ik} (≡ ¬(x_{ij} ∧ x_{ik})) for all i, j, k with j ̸= k.*

*5. Nonadjacent nodes i and j cannot be adjacent in the path.*

*• ¬x_{ki} ∨ ¬x_{k+1,j} for all (i, j) ̸∈ G and k = 1, 2, . . . , n − 1.*
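The five clause groups can be emitted mechanically. Below is a sketch (the integer encoding of x_{ij} and all names are my own); clauses use the usual signed-integer convention, with negation as a minus sign, and groups 2 and 4 range over unordered pairs since that already covers both orders.

```python
# Build R(G) for nodes 1..n and a set of directed edges G.
def ham_to_cnf(G, n):
    var = lambda i, j: (i - 1) * n + j          # x_ij -> 1..n^2
    clauses = []
    for j in range(1, n + 1):                   # 1. node j appears
        clauses.append([var(i, j) for i in range(1, n + 1)])
    for j in range(1, n + 1):                   # 2. ...in at most one position
        for i in range(1, n + 1):
            for k in range(i + 1, n + 1):
                clauses.append([-var(i, j), -var(k, j)])
    for i in range(1, n + 1):                   # 3. position i is occupied
        clauses.append([var(i, j) for j in range(1, n + 1)])
    for i in range(1, n + 1):                   # 4. ...by at most one node
        for j in range(1, n + 1):
            for k in range(j + 1, n + 1):
                clauses.append([-var(i, j), -var(i, k)])
    for k in range(1, n):                       # 5. consecutive => adjacent
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if i != j and (i, j) not in G:
                    clauses.append([-var(k, i), -var(k + 1, j)])
    return clauses

print(len(ham_to_cnf({(1, 2), (2, 3)}, 3)))  # 32 clauses for this instance
```

As the next slide notes, the clause count is O(n^3), dominated by groups 2, 4, and 5.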

### The Proof

*• R(G) contains O(n*^{3}) clauses.

*• R(G) can be computed efficiently (simple exercise).*

*• Suppose T |= R(G).*

*• From the 1st and 2nd types of clauses, for each node j there is a unique position i such that T |= x_{ij}.*

*• From the 3rd and 4th types of clauses, for each position i there is a unique node j such that T |= x_{ij}.*

*• So there is a permutation π of the nodes such that π(i) = j if and only if T |= x_{ij}.*

### The Proof (concluded)

*• The 5th type of clauses furthermore guarantee that*
*(π(1), π(2), . . . , π(n)) is a Hamiltonian path.*

*• Conversely, suppose G has a Hamiltonian path*
*(π(1), π(2), . . . , π(n)),*

*where π is a permutation.*

*• Clearly, the truth assignment*

*T (x*_{ij}*) = true if and only if π(i) = j*
*satisfies all clauses of R(G).*

### A Comment

^{a}

*• An answer to “Is R(G) satisfiable?” does answer “Is G*
Hamiltonian?”

*• But a positive answer does not give a Hamiltonian path*
*for G.*

**– Providing a witness is not a requirement of reduction.**

*• A positive answer to “Is R(G) satisfiable?” plus a*
*satisfying truth assignment does provide us with a*
*Hamiltonian path for G.*

aContributed by Ms. Amy Liu (J94922016) on May 29, 2006.

### Reduction of reachability to circuit value

*• Note that both problems are in P.*

*• Given a graph G = (V, E), we shall construct a*
*variable-free circuit R(G).*

*• The output of R(G) is true if and only if there is a path*
*from node 1 to node n in G.*

*• Idea: the Floyd-Warshall algorithm.*

### The Gates

*• The gates are*

**– g**_{ijk} *with 1 ≤ i, j ≤ n and 0 ≤ k ≤ n.*

**– h**_{ijk} *with 1 ≤ i, j, k ≤ n.*

*• g_{ijk}: There is a path from node i to node j without passing through a node bigger than k.*

*• h_{ijk}: There is a path from node i to node j passing through k but not any node bigger than k.*

*• Input gate g_{ij0} = true if and only if i = j or (i, j) ∈ E.*

### The Construction

*• h_{ijk} is an* and *gate with predecessors g_{i,k,k−1} and g_{k,j,k−1}, where k = 1, 2, . . . , n.*

*• g_{ijk} is an* or *gate with predecessors g_{i,j,k−1} and h_{ijk}, where k = 1, 2, . . . , n.*

*• g_{1nn} is the output gate.*

*• Interestingly, R(G) uses no ¬ gates.*

**– It is a monotone circuit.**
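The construction collapses into a few lines if we evaluate the gates as we generate them, in increasing order of k, mirroring Floyd-Warshall. A sketch, with names of my own choosing:

```python
# Evaluate the monotone circuit R(G) bottom-up.  g[i, j, k] is the value
# of gate g_ijk; each h_ijk is an AND gate and each g_ijk an OR gate,
# exactly as in the construction above.
def circuit_value(E, n):
    g = {}
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            g[i, j, 0] = (i == j) or (i, j) in E       # input gates g_ij0
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                h = g[i, k, k - 1] and g[k, j, k - 1]  # AND gate h_ijk
                g[i, j, k] = g[i, j, k - 1] or h       # OR gate g_ijk
    return g[1, n, n]                                  # output gate g_1nn

E = {(1, 2), (2, 3), (3, 4)}
print(circuit_value(E, 4))  # True: a path from 1 to 4 exists
```

Note that only ands and ors appear, confirming that R(G) is monotone.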

### Reduction of circuit sat to sat

*• Given a circuit C, we will construct a boolean expression R(C) such that R(C) is satisfiable iff C is.*

**– R(C) will turn out to be a CNF.**

*– R(C) is basically a depth-2 circuit; furthermore, each gate has out-degree 1.*

*• The variables of R(C) are those of C plus g for each*
*gate g of C.*

**– The g’s propagate the truth values for the CNF.**

*• Each gate of C will be turned into equivalent clauses.*

*• Recall that clauses are ∧ed together by definition.*

*The Clauses of R(C)*

**g is a variable gate x:** Add clauses (¬g ∨ x) and (g ∨ ¬x).

*• Meaning: g ⇔ x.*

**g is a true gate:** Add clause (g).

*• Meaning: g must be true to make R(C) true.*

**g is a false gate:** Add clause (¬g).

*• Meaning: g must be false to make R(C) true.*

**g is a ¬ gate with predecessor gate h:** Add clauses (¬g ∨ ¬h) and (g ∨ h).

*• Meaning: g ⇔ ¬h.*

*The Clauses of R(C) (concluded)*

**g is a ∨ gate with predecessor gates h and h′:** Add clauses (¬h ∨ g), (¬h′ ∨ g), and (h ∨ h′ ∨ ¬g).

*• Meaning: g ⇔ (h ∨ h′).*

**g is a ∧ gate with predecessor gates h and h′:** Add clauses (¬g ∨ h), (¬g ∨ h′), and (¬h ∨ ¬h′ ∨ g).

*• Meaning: g ⇔ (h ∧ h′).*

**g is the output gate:** Add clause (g).

*• Meaning: g must be true to make R(C) true.*

*Note: If gate g feeds gates h*_{1}*, h*_{2}*, . . ., then variable g*
*appears in the clauses for h*_{1}*, h*_{2}*, . . . in R(C).*
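The clause generation is one case per gate kind as listed above; this sketch assumes a circuit encoded as a dict mapping gate name to (kind, predecessors), with a literal represented as a (name, sign) pair — both encodings are my own.

```python
# Emit the clauses of R(C).  A clause is a list of literals; (v, True)
# is the positive literal v and (v, False) is its negation.
def circuit_to_cnf(C, output):
    clauses = []
    for g, (kind, pred) in C.items():
        if kind == 'var':                       # g <=> x
            x = pred[0]
            clauses += [[(g, False), (x, True)], [(g, True), (x, False)]]
        elif kind == 'true':
            clauses.append([(g, True)])
        elif kind == 'false':
            clauses.append([(g, False)])
        elif kind == 'not':                     # g <=> not h
            h = pred[0]
            clauses += [[(g, False), (h, False)], [(g, True), (h, True)]]
        elif kind == 'or':                      # g <=> (h or h')
            h, h2 = pred
            clauses += [[(h, False), (g, True)], [(h2, False), (g, True)],
                        [(h, True), (h2, True), (g, False)]]
        elif kind == 'and':                     # g <=> (h and h')
            h, h2 = pred
            clauses += [[(g, False), (h, True)], [(g, False), (h2, True)],
                        [(h, False), (h2, False), (g, True)]]
    clauses.append([(output, True)])            # the output must be true
    return clauses

C = {'h1': ('var', ['x1']), 'h2': ('var', ['x2']),
     'g1': ('and', ['h1', 'h2'])}
print(len(circuit_to_cnf(C, 'g1')))  # 8 clauses
```

Each gate contributes a constant number of clauses, so the CNF’s size is proportional to the number of gates, as the next slide notes.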

### An Example

[Figure: a circuit with variable gates h_{1} = x_{1}, h_{2} = x_{2}, h_{3} = x_{3}, h_{4} = x_{4}, an ∧ gate g_{1} over h_{1}, h_{2}, an ∨ gate g_{2} over h_{3}, h_{4}, an ∧ gate g_{3} over g_{1}, g_{2}, a ¬ gate g_{4} over g_{2}, and the ∨ output gate g_{5} over g_{3}, g_{4}.]

*(h_{1} ⇔ x_{1}) ∧ (h_{2} ⇔ x_{2}) ∧ (h_{3} ⇔ x_{3}) ∧ (h_{4} ⇔ x_{4})*

*∧ [ g_{1} ⇔ (h_{1} ∧ h_{2}) ] ∧ [ g_{2} ⇔ (h_{3} ∨ h_{4}) ]*

*∧ [ g_{3} ⇔ (g_{1} ∧ g_{2}) ] ∧ (g_{4} ⇔ ¬g_{2})*

*∧ [ g_{5} ⇔ (g_{3} ∨ g_{4}) ] ∧ g_{5}.*

### An Example (concluded)

*• In general, the result is a CNF.*

*• The CNF has size proportional to the circuit’s number*
of gates.

*• The CNF adds new variables to the circuit’s original*
input variables.

*• Had we used the idea on p. 209 for the reduction, the resulting formula might have had exponential length because of the copying.*^{a}

aContributed by Mr. Ching-Hua Yu (D00921025) on October 16, 2012.

### Composition of Reductions

**Proposition 27** *If R_{12} is a reduction from L_{1} to L_{2} and R_{23} is a reduction from L_{2} to L_{3}, then the composition R_{23} ◦ R_{12} is a reduction from L_{1} to L_{3}.*

*• So reducibility is transitive.*
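Transitivity can be seen concretely with toy languages; all three languages and both reductions below are illustrative only.

```python
# Toy languages: L1 = even numbers, L2 = multiples of 4, L3 = multiples
# of 8.  Each reduction maps yes-instances to yes-instances and
# no-instances to no-instances, as the definition requires.
R12 = lambda n: 2 * n          # n in L1  iff  2n in L2
R23 = lambda m: 2 * m          # m in L2  iff  2m in L3
R13 = lambda n: R23(R12(n))    # the composition: n in L1 iff 4n in L3

# Check the claim on a range of instances.
assert all((n % 2 == 0) == (R13(n) % 8 == 0) for n in range(100))
print("R23 composed with R12 is a reduction from L1 to L3 on 0..99")
```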

### Completeness

^{a}

*• As reducibility is transitive, problems can be ordered with respect to their difficulty.*

*• Is there a maximal element (the hardest problem)?*

*• It is not obvious that there should be a maximal*
element.

**– Many infinite structures (such as integers and real**
numbers) do not have maximal elements.

*• Hence it may surprise you that most of the complexity*
classes that we have seen so far have maximal elements.

aCook (1971) and Levin (1973).

### Completeness (concluded)

*• Let C be a complexity class and L ∈ C.*

*• L is* **C-complete** *if every L′ ∈ C can be reduced to L.*

**– Most complexity classes we have seen so far have**
complete problems!

*• Complete problems capture the difficulty of a class because they are the hardest problems in the class.*

### Hardness

*• Let C be a complexity class.*

*• L is* **C-hard** *if every L′ ∈ C can be reduced to L.*

*• It is not required that L ∈ C.*

*• If L is C-hard, then by definition, every C-complete*
*problem can be reduced to L.*^{a}

aContributed by Mr. Ming-Feng Tsai (D92922003) on October 15, 2003.

### Illustration of Completeness and Hardness

[Figure: problems A_{1}, A_{2}, A_{3}, A_{4} all reduce to L — on the left L lies inside the class (completeness), on the right L may lie outside it (hardness).]

### Closedness under Reductions

*• A class C is* **closed under reductions** *if whenever L is reducible to L′ and L′ ∈ C, then L ∈ C.*

*• It is easy to show that P, NP, coNP, L, NL, PSPACE,*
and EXP are all closed under reductions.

### Complete Problems and Complexity Classes

**Proposition 28** *Let C′ and C be two complexity classes such that C′ ⊆ C. Assume C′ is closed under reductions and L is C-complete. Then C = C′ if and only if L ∈ C′.*

*• Suppose L ∈ C′ first.*

*• Every language A ∈ C reduces to L ∈ C′.*

*• Because C′ is closed under reductions, A ∈ C′.*

*• Hence C ⊆ C′.*

*• As C′ ⊆ C, we conclude that C = C′.*

### The Proof (concluded)

*• On the other hand, suppose C = C′.*

*• As L is C-complete, L ∈ C.*

*• Thus, trivially, L ∈ C′.*

### Two Important Corollaries

Proposition 28 implies the following.

**Corollary 29** *P = NP if and only if an NP-complete problem is in P.*

**Corollary 30** *L = P if and only if a P-complete problem is in L.*

### Complete Problems and Complexity Classes

**Proposition 31** *Let C′ and C be two complexity classes closed under reductions. If L is complete for both C and C′, then C = C′.*

*• Every language A ∈ C reduces to L; and L ∈ C′ because L is C′-complete.*

*• Since C′ is closed under reductions, A ∈ C′.*

*• Hence C ⊆ C′.*

*• The proof for C′ ⊆ C is symmetric.*

### Table of Computation

*• Let M = (K, Σ, δ, s) be a single-string polynomial-time deterministic TM deciding L.*

*• Its computation on input x can be thought of as a | x |^{k} × | x |^{k} table, where | x |^{k} is the time bound.*

**– It is essentially a sequence of configurations.**

*• Rows correspond to time steps 0 to | x |^{k} − 1.*

*• Columns are positions in the string of M.*

*• The (i, j)th table entry represents the contents of*
*position j of the string after i steps of computation.*

### Some Conventions To Simplify the Table

*• M halts after at most | x |^{k} − 2 steps.*

*• Assume a large enough k to make it true for | x | ≥ 2.*

*• Pad the table with ⊔s so that each row has length | x |^{k}.*

**– The computation will never reach the right end of the table for lack of time.**

*• If the cursor scans the jth position at time i when M is at state q and the symbol is σ, then the (i, j)th entry is a new symbol σ_{q}.*

### Some Conventions To Simplify the Table (continued)

*• If q is “yes” or “no,” simply use “yes” or “no” instead of σ_{q}.*

*• Modify M so that the cursor starts not at ▷ but at the first symbol of the input.*

*• The cursor never visits the leftmost ▷ by telescoping two moves of M each time the cursor is about to move to the leftmost ▷.*

*• So the first symbol in every row is a ▷ and not a ▷_{q}.*

### Some Conventions To Simplify the Table (concluded)

*• Suppose M has halted before its time bound of | x |^{k}, so that “yes” or “no” appears at a row before the last.*

*• Then all subsequent rows will be identical to that row.*

*• M accepts x if and only if the (| x |^{k} − 1, j)th entry is “yes” for some position j.*
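The conventions above can be animated on a toy machine. This sketch (the transition-table encoding and all names are my own) prints the rows, tagging the scanned symbol with the state and repeating the row once the machine halts.

```python
# Build rows 0..T-1 of the computation table for a toy single-string TM
# given as delta[(state, symbol)] = (new_state, new_symbol, move).
def computation_table(delta, x, T):
    tape = list(x) + ['_'] * (T - len(x))       # pad the row with blanks
    q, pos, rows = 's', 0, []                   # cursor at the input's first symbol
    for _ in range(T):
        row = tape[:]
        row[pos] = ('yes' if q == 'yes' else
                    'no' if q == 'no' else f'{tape[pos]}_{q}')
        rows.append(' '.join(f'{c:>3}' for c in row))
        if q in ('yes', 'no'):
            continue                            # halted: later rows repeat
        q, tape[pos], move = delta[(q, tape[pos])]
        pos += move
    return rows

# Toy machine: scan right over 0s and 1s; accept at the first blank.
delta = {('s', '0'): ('s', '0', 1), ('s', '1'): ('s', '1', 1),
         ('s', '_'): ('yes', '_', 0)}
for row in computation_table(delta, '010', 6):
    print(row)
```

The first row shows `0_s` in position 0, and every row after halting is a copy of the halting row containing `yes`, matching the conventions.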

### Comments

*• Each row is essentially a configuration.*

*• If the input x = 010001, then the first row (of length | x |^{k}) is*

0_{s} 1 0 0 0 1 ⊔ ⊔ · · · ⊔

*• A typical row (again of length | x |^{k}) may look like*

1 0 1 0 0_{q} 0 1 1 1 0 1 0 0 ⊔ ⊔ · · · ⊔

### Comments (concluded)

*• The last rows, each of length | x |^{k}, must look like*

· · · “yes” · · · ⊔

*or*

· · · “no” · · · ⊔

*• Three out of the table’s 4 borders are known:*

[Figure: the table with its top row (the initial configuration) and its left and right borders filled in.]