
# Chapter 3: CONTEXT-FREE LANGUAGES

## 3.6 ALGORITHMS FOR CONTEXT-FREE GRAMMARS

In this section we consider the computational problems related to context-free languages, we develop algorithms for these problems, and we analyze their complexity. All in all, we establish the following results.

Theorem 3.6.1: (a) There is a polynomial algorithm which, given a context-free grammar, constructs an equivalent pushdown automaton.

(b) There is a polynomial algorithm which, given a pushdown automaton, constructs an equivalent context-free grammar.

(c) There is a polynomial algorithm which, given a context-free grammar G and a string x, decides whether x ∈ L(G).

It is instructive to compare Theorem 3.6.1 with the corresponding statement summarizing the algorithmic aspects of finite automata (Theorem 2.6.1). To be sure, there are certain similarities: in both cases there are algorithms which transform acceptors to generators and vice versa (then finite automata to regular expressions and back, now pushdown automata to context-free grammars and back). But the differences are perhaps more striking. First, in Theorem 2.6.1 there was no need for an analog of part (c) above, since regular languages are represented in terms of an efficient algorithm for deciding precisely the membership question in (c): a deterministic finite automaton. In contrast, for context-free languages we have so far introduced only non-algorithmic, nondeterministic acceptors: pushdown automata. In order to establish part (c), we show in the next subsection that for any context-free language we can construct a deterministic acceptor; the construction is rather indirect and sophisticated, and the resulting algorithm, although polynomial, is no longer linear in the length of the input.

A second major difference between Theorem 2.6.1 and Theorem 3.6.1 is that in the present case we do not mention any algorithms for testing whether two given context-free grammars (or two pushdown automata) are equivalent; neither do we claim that there are algorithms for minimizing the number of states in a pushdown automaton. We shall see in Chapter 5 that such questions about context-free grammars and pushdown automata are not amenable to solution by any algorithm, however inefficient!

### The Dynamic Programming Algorithm

We turn now to proving part (c) of the Theorem (parts (a) and (b) are straightforward consequences of the constructions in the proofs of Lemmata 3.4.1 and 3.4.2). Our algorithm for deciding context-free languages is based on a useful way of "standardizing" context-free grammars.

Definition 3.6.1: A context-free grammar G = (V, Σ, R, S) is said to be in Chomsky normal form if R ⊆ (V − Σ) × V².

In other words, the right-hand side of a rule in a context-free grammar in Chomsky normal form must have length two. Notice that no grammar in Chomsky normal form would be able to produce strings of length less than two, such as a, b, or e; therefore, context-free languages containing such strings cannot be generated by grammars in Chomsky normal form. However, the next result states that this is the only loss of generality that comes with Chomsky normal form:

Theorem 3.6.2: For any context-free grammar G there is a context-free grammar G′ in Chomsky normal form such that L(G′) = L(G) − (Σ ∪ {e}). Furthermore, the construction of G′ can be carried out in time polynomial in the size of G.

In other words, G′ generates exactly the strings that G does, with the possible exception of strings of length less than two; since G′ is in Chomsky normal form, we know that it cannot generate such strings.

Proof: We shall show how to transform any given context-free grammar G = (V, Σ, R, S) into a context-free grammar in Chomsky normal form. There are three ways in which the right-hand side of a rule A → x may violate the constraints of Chomsky normal form: long rules (those whose right-hand side has length three or more), e-rules (of the form A → e), and short rules (of the form A → a or A → B). We shall show how to remove these violations one by one.

We first deal with the long rules of G. Let A → B₁B₂⋯Bₙ ∈ R, where B₁, ..., Bₙ ∈ V and n ≥ 3. We replace this rule with n − 1 new rules, namely

A → B₁A₁, A₁ → B₂A₂, ..., Aₙ₋₃ → Bₙ₋₂Aₙ₋₂, Aₙ₋₂ → Bₙ₋₁Bₙ,

where A₁, ..., Aₙ₋₂ are new nonterminals, not used anywhere else in the grammar. Since the rule A → B₁B₂⋯Bₙ can be simulated by the newly inserted rules, and this is the only way in which the newly added rules can be used, it should be clear that the resulting context-free grammar is equivalent to the original one. We repeat this for each long rule of the grammar. The resulting grammar is equivalent to the original one, and has rules with right-hand sides of length two or less.
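The long-rule step above can be sketched in Python. The encoding (a rule as a `(left, right)` pair with `right` a tuple of symbols) and the fresh-name scheme `<A.k>` are my own choices, not from the text:

```python
def remove_long_rules(rules):
    """Replace every rule A -> B1 B2 ... Bn (n >= 3) with the n - 1 rules
    A -> B1 A1, A1 -> B2 A2, ..., A(n-2) -> B(n-1) Bn."""
    new_rules = []
    counter = 0
    for left, right in rules:
        if len(right) <= 2:
            new_rules.append((left, right))
            continue
        prev = left
        for symbol in right[:-2]:
            # A fresh nonterminal, not used anywhere else in the grammar.
            counter += 1
            fresh = f"<{left}.{counter}>"
            new_rules.append((prev, (symbol, fresh)))
            prev = fresh
        new_rules.append((prev, right[-2:]))
    return new_rules
```

On the one long rule of Example 3.6.1, S → (S), this produces exactly the two rules S → (S₁ and S₁ → S) (with S₁ spelled `<S.1>`).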

Example 3.6.1: Let us take the grammar generating the set of balanced parentheses, with rules S → SS, S → (S), S → e. There is only one long rule, S → (S). It is replaced by the two rules S → (S₁ and S₁ → S). ◊

We must next take on the e-rules. To this end, we first determine the set of erasable nonterminals

E = {A ∈ V − Σ : A ⇒* e},

that is, the set of all nonterminals that may derive the empty string. This is done by a simple closure calculation:

E := ∅
while there is a rule A → α with α ∈ E* and A ∉ E do add A to E.

Once we have the set E, we delete from G all e-rules, and repeat the following: For each rule of the form A → BC or A → CB with B ∈ E and C ∈ V, we add to the grammar the rule A → C.

Any derivation in the original grammar can be simulated in the new one, and vice versa, with one exception: e can no longer be derived, since we may have omitted the rule S → e during this step. Fortunately, the statement of the Theorem allows for this exclusion.
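As a sketch, the closure computation of E and the e-rule removal might look like this in Python. The rule encoding (a `(left, right)` pair with `right` a tuple; the empty tuple is an e-rule) is mine, not the book's:

```python
def erasable(rules):
    """Closure computation of E = {A : A =>* e}."""
    E = set()
    changed = True
    while changed:
        changed = False
        for left, right in rules:
            # all(...) over an empty right-hand side is True, so e-rules fire first.
            if left not in E and all(s in E for s in right):
                E.add(left)
                changed = True
    return E

def remove_e_rules(rules):
    """Delete the e-rules; for each rule A -> BC or A -> CB with B erasable,
    add the variant A -> C.  Assumes long rules were already removed, so
    every right-hand side has length at most two."""
    E = erasable(rules)
    new_rules = {(l, r) for l, r in rules if r != ()}
    for left, right in list(new_rules):
        if len(right) == 2:
            b, c = right
            if b in E:
                new_rules.add((left, (c,)))
            if c in E:
                new_rules.add((left, (b,)))
    return new_rules
```

Run on the balanced-parentheses grammar, this yields the rule set derived by hand in the continuation of Example 3.6.1, including the useless S → S and the new S₁ → ).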

Example 3.6.1 (continued): Let us continue from the grammar with rules S → SS, S → (S₁, S₁ → S), S → e.

We start by computing the set E of erasable nonterminals: initially E = ∅; then E = {S}, because of the rule S → e; and this is the final value of E. We omit from the grammar the e-rules (of which there is only one, S → e), and add variants of all rules with an occurrence of S, with that occurrence omitted. The new set of rules is

S → SS, S → (S₁, S₁ → S), S → S, S₁ → ).

The rule S → S was added because of the rule S → SS with S ∈ E; it is of course useless and can be omitted. The rule S₁ → ) was added because of the rule S₁ → S) with S ∈ E.

For example, the derivation in the original grammar

S ⇒ SS ⇒ S(S) ⇒ S() ⇒ ()

can now be simulated by

S ⇒ (S₁ ⇒ (),

omitting the S ⇒ SS part (since the first S would eventually be erased), and using the S₁ → ) rule to anticipate the erasing of the S in the rule S₁ → S). ◊

Our grammar now has only rules whose right-hand sides have length one or two. We must next get rid of the short rules, those with right-hand sides of length one. We accomplish this as follows: For each A ∈ V we compute, again by a simple closure algorithm, the set D(A) of symbols that can be derived from A in the grammar, D(A) = {B ∈ V : A ⇒* B}, as follows:

D(A) := {A}
while there is a rule B → C with B ∈ D(A) and C ∉ D(A) do add C to D(A).

Notice that for all symbols A, A ∈ D(A); and if a is a terminal, then D(a) = {a}.

In our third and final step of the transformation of our grammar to one in Chomsky normal form, we omit all short rules from the grammar, and we replace each rule of the form A → BC with all possible rules of the form A → B′C′, where B′ ∈ D(B) and C′ ∈ D(C). Such a rule simulates the effect of the original rule A → BC, together with the sequences of short rules that produce B′ from B and C′ from C. Finally, we add the rules S → BC for each rule A → BC such that A ∈ D(S) − {S}.

Again, the resulting grammar is equivalent to the one before the omission of the short rules, since the effect of a short rule is simulated by "anticipating" its use when the left-hand side first appears in the derivation (if the left-hand side is S, and thus it starts the derivation, the rules S → BC added in the last part of the construction suffice to guarantee equivalence). There is again only one exception: we may have removed a rule S → a, thus omitting the string a from the language generated by G. Once again, fortunately, this omission is allowed by the statement of the theorem.

Example 3.6.1 (continued): In our modified grammar with rules

S → SS, S → (S₁, S₁ → S), S₁ → ),

we have D(S₁) = {S₁, )}, and D(A) = {A} for all A ∈ V − {S₁}. We omit all length-one rules, of which there is only one, S₁ → ). The only nonterminal with a nontrivial set D, namely S₁, appears on the right-hand side of only the second rule.

This rule is therefore replaced by the two rules S → (S₁ and S → (), corresponding to the two elements of D(S₁). The final grammar in Chomsky normal form is

S → SS, S → (S₁, S₁ → S), S → (). ◊

After the three steps, the grammar is in Chomsky normal form, and, except for the possible omission of strings of length less than two, it generates the same language as the original one.

In order to complete the proof of the theorem, we must establish that the whole construction can be carried out in time polynomial in the size of the original grammar G. By "size of G" we mean the length of a string that suffices to fully describe G, that is to say, the sum of the lengths of the rules of G.

Let n be this quantity. The first part of the transformation (getting rid of long rules) takes time O(n) and creates a grammar of size again O(n). The second part, getting rid of e-rules, takes O(n²) time for the closure computation (O(n) iterations, each doable in O(n) time), plus O(n) for adding the new rules. Finally, the third part (taking care of short rules) can also be carried out in polynomial time (O(n) closure computations, each taking time O(n²)). This completes the proof of the theorem. ■

The advantage of Chomsky normal form is that it enables a simple polynomial algorithm for deciding whether a string can be generated by the grammar.

Suppose that we are given a context-free grammar G = (V, Σ, R, S) in Chomsky normal form, and we are asked whether the string x = x₁⋯xₙ, with n ≥ 2, is in L(G). The algorithm is shown below. It decides whether x ∈ L(G) by analyzing all substrings of x. For each i and s such that 1 ≤ i ≤ i + s ≤ n, define N[i, i + s] to be the set of all symbols in V that can derive in G the string xᵢ⋯xᵢ₊ₛ. The algorithm computes these sets. It proceeds computing N[i, i + s] from short strings (s small) to longer and longer strings. This general philosophy of solving a problem by starting from minuscule subproblems and building up solutions to larger and larger subproblems until the whole problem is solved is known as dynamic programming.

for i := 1 to n do N[i, i] := {xᵢ}; all other N[i, j] are initially empty
for s := 1 to n − 1 do
  for i := 1 to n − s do
    for k := i to i + s − 1 do
      if there is a rule A → BC ∈ R with B ∈ N[i, k] and C ∈ N[k + 1, i + s]
      then add A to N[i, i + s].

Accept x if S ∈ N[1, n].
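The algorithm transcribes almost line for line into Python; my encoding of a Chomsky normal form grammar is a set of (A, (B, C)) pairs, and the table is 1-indexed to match the text:

```python
def cyk(rules, start, x):
    """The dynamic programming algorithm: N[i][j] collects the symbols
    that derive x_i ... x_j; x is in L(G) iff the start symbol ends up
    in N[1][n]."""
    n = len(x)
    N = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        N[i][i] = {x[i - 1]}            # initialization: the s = 0 diagonal
    for s in range(1, n):
        for i in range(1, n - s + 1):
            for k in range(i, i + s):   # split point between the two halves
                for A, (B, C) in rules:
                    if B in N[i][k] and C in N[k + 1][i + s]:
                        N[i][i + s].add(A)
    return start in N[1][n]

# The Chomsky normal form grammar for balanced parentheses (Example 3.6.1).
G = {("S", ("S", "S")), ("S", ("(", "S1")),
     ("S1", ("S", ")")), ("S", ("(", ")"))}
```

Here `cyk(G, "S", "(()(()))")` returns True, in agreement with the table of Figure 3-10.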

In order to establish that the algorithm above correctly determines whether x ∈ L(G), we shall prove the following claim.

Claim: For each natural number s with 0 ≤ s ≤ n − 1, after the sth iteration of the algorithm, for all i = 1, ..., n − s,

N[i, i + s] = {A ∈ V : A ⇒* xᵢ⋯xᵢ₊ₛ}.

Proof of the Claim: The proof of this claim is by induction on s.

Basis Step: When s = 0 (where by "the zeroth iteration of the algorithm" we understand the first, initialization, line) the statement is true: since G is in Chomsky normal form, the only symbol that can generate the terminal xᵢ is xᵢ itself.

Induction Step: Suppose that the claim is true for all integers less than s, where s > 0. Consider a derivation of the substring xᵢ⋯xᵢ₊ₛ, say from a nonterminal A. Since G is in Chomsky normal form, the derivation starts with a rule of the form A → BC, that is,

A ⇒ BC ⇒* xᵢ⋯xᵢ₊ₛ,

where B, C ∈ V. Therefore, for some k with i ≤ k < i + s,

B ⇒* xᵢ⋯xₖ and C ⇒* xₖ₊₁⋯xᵢ₊ₛ.

We conclude that A ∈ {A ∈ V : A ⇒* xᵢ⋯xᵢ₊ₛ} if and only if there is an integer k, i ≤ k < i + s, and two symbols B ∈ {A ∈ V : A ⇒* xᵢ⋯xₖ} and C ∈ {A ∈ V : A ⇒* xₖ₊₁⋯xᵢ₊ₛ} such that A → BC ∈ R. We can rewrite the string xᵢ⋯xₖ as xᵢ⋯xᵢ₊ₛ′, where s′ = k − i, and the string xₖ₊₁⋯xᵢ₊ₛ as xₖ₊₁⋯xₖ₊₁₊ₛ″, where s″ = i + s − k − 1. Notice that, since i ≤ k < i + s, we must have s′, s″ < s. Hence, the induction hypothesis applies!

By the induction hypothesis, {A ∈ V : A ⇒* xᵢ⋯xₖ} = N[i, k], and {A ∈ V : A ⇒* xₖ₊₁⋯xᵢ₊ₛ} = N[k + 1, i + s]. We conclude that A ∈ {A ∈ V : A ⇒* xᵢ⋯xᵢ₊ₛ} if and only if there is an integer k, i ≤ k < i + s, and two symbols B ∈ N[i, k] and C ∈ N[k + 1, i + s] such that A → BC ∈ R. But these are precisely the circumstances under which our algorithm adds A to N[i, i + s]. Therefore the claim holds for s as well, and this concludes the induction step, and the proof of the claim. ■

It follows immediately from the claim that the algorithm above correctly decides whether x ∈ L(G): at the end, the set N[1, n] will contain all symbols that derive the string x₁⋯xₙ = x. Therefore, x ∈ L(G) if and only if S ∈ N[1, n].

To analyze the time performance of the algorithm, notice that it consists of three nested loops, each with a range bounded by |x| = n. At the heart of the loop we must examine, for each rule of the form A → BC, whether B ∈ N[i, k] and C ∈ N[k + 1, i + s]; this can be carried out in time proportional to the size of the grammar G, the length of its rules. We conclude that the total number of operations is O(n³|G|), a polynomial in both the length of x and the size of G. For any fixed grammar G (that is, when we consider |G| to be a constant), the algorithm runs in time O(n³). ■

Example 3.6.1 (continued): Let us apply the dynamic programming algorithm to the grammar for the balanced parentheses, as was rendered in Chomsky normal form with rules

S → SS, S → (S₁, S₁ → S), S → ().

Suppose we wish to tell whether the string (()(())) can be generated by G. We display in Figure 3-10 the values of N[i, j] for 1 ≤ i ≤ j ≤ n = 8, resulting from the iterations of the algorithm. The computation proceeds along parallel diagonals of the table. The main diagonal, corresponding to s = 0, contains the string being parsed. To fill a box, say N[2, 7], we look at all pairs of boxes of the form N[2, k] and N[k + 1, 7] with 2 ≤ k < 7. All these boxes lie either on the left of or above the box being filled. For k = 3, we notice that S ∈ N[2, 3], S ∈ N[4, 7], and S → SS is a rule; thus we must add the left-hand side S to the box N[2, 7]. And so on. The lower-right corner is N[1, n], and it does contain S; therefore the string is indeed in L(G). In fact, by inspecting this table it is easy to recover an actual derivation of the string (()(())) in G. The dynamic programming algorithm can be easily modified to produce such a derivation; see Problem 3.6.2. ◊

| i \ j | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 8 |  |  |  |  |  |  |  | ) |
| 7 |  |  |  |  |  |  | ) | ∅ |
| 6 |  |  |  |  |  | ) | ∅ | ∅ |
| 5 |  |  |  |  | ( | S | S₁ | ∅ |
| 4 |  |  |  | ( | ∅ | ∅ | S | S₁ |
| 3 |  |  | ) | ∅ | ∅ | ∅ | ∅ | ∅ |
| 2 |  | ( | S | ∅ | ∅ | ∅ | S | S₁ |
| 1 | ( | ∅ | ∅ | ∅ | ∅ | ∅ | ∅ | S |

Figure 3-10

Part (c) of Theorem 3.6.1 now follows by combining Theorem 3.6.2 and the claim above: Given a context-free grammar G and a string x, we determine whether x ∈ L(G) as follows. First, we transform G into an equivalent context-free grammar G′ in Chomsky normal form, according to the construction in the proof of Theorem 3.6.2, in polynomial time. In the special case in which |x| ≤ 1, we can already decide whether x ∈ L(G): it is if and only if during the transformation we had to delete a rule S → x. Otherwise, we run the dynamic programming algorithm described above for the grammar G′ and the string x. The total number of operations used by the algorithm is bounded by a polynomial in the size of the original grammar G and the length of the string x. ■

### Problems for Section 3.6

3.6.1. Convert the context-free grammar G given in Example 3.1.3 generating arithmetic expressions into an equivalent context-free grammar in Chomsky normal form. Apply the dynamic programming algorithm for deciding whether the string x = (id + id + id) * (id) is in L(G).

3.6.2. How would you modify the dynamic programming algorithm in such a way that, when the input x is indeed in the language generated by G, then the algorithm produces an actual derivation of x in G?

3.6.3. (a) Let G = (V, Σ, R, S) be a context-free grammar. Call a nonterminal A ∈ V − Σ productive if A ⇒* x for some x ∈ Σ*. Give a polynomial algorithm for finding all productive nonterminals of G. (Hint: It is a closure algorithm.)

(b) Give a polynomial algorithm which, given a context-free grammar G, decides whether L(G) = ∅.

3.6.4. Describe an algorithm which, given a context-free grammar G, decides whether L(G) is infinite. (Hint: One approach uses the Pumping Theorem.) What is the complexity of your algorithm? Can you find a polynomial-time algorithm?

## 3.7 DETERMINISM AND PARSING

Context-free grammars are used extensively in modeling the syntax of programming languages, as was suggested by Example 3.1.3. A compiler for such a programming language must then embody a parser, that is, an algorithm to determine whether a given string is in the language generated by a given context-free grammar, and, if so, to construct the parse tree of the string. (The compiler would then go on to translate this parse tree into a program in a more basic language, such as assembly language.) The general context-free parser we have developed in the previous section, the dynamic programming algorithm, although perfectly polynomial, is far too slow for handling programs with tens of thousands of instructions (recall its cubic dependence on the length of the string). Many approaches to the parsing problem have been developed by compiler designers over the past four decades. Interestingly, the most successful ones among them are rooted in the idea of a pushdown automaton. After all, the equivalence of pushdown automata and context-free grammars, which was proved in Section 3.4, should be put to work. However, a pushdown automaton is not of immediate practical use in parsing, because it is a nondeterministic device. The question then arises, can we always make pushdown automata operate deterministically (as we were able to do in the case of finite automata)?

Our first objective in this section is to study the question of deterministic pushdown automata. We shall see that there are some context-free languages that cannot be accepted by deterministic pushdown automata. This is rather disappointing; it suggests that the conversion of grammars to automata in Section 3.4 cannot be the basis for any practical method. Nevertheless, all is not lost. It turns out that for most programming languages one can construct deterministic pushdown automata that accept all syntactically correct programs. Later in this section we shall give some heuristic rules (rules of thumb) that are useful for constructing deterministic pushdown automata from suitable context-free grammars. These rules will not invariably produce a useful pushdown automaton from any context-free grammar; we have already said that that would be impossible. But they are typical of the methods actually used in the construction of compilers for programming languages.

### Deterministic Context-free Languages

A pushdown automaton M is deterministic if for each configuration there is at most one configuration that can succeed it in a computation by M. This condition can be rephrased in an equivalent way. Call two strings consistent if the first is a prefix of the second, or vice versa. Call two transitions ((p, a, β), (q, γ)) and ((p, a′, β′), (q′, γ′)) compatible if a and a′ are consistent, and β and β′ are also consistent; in other words, if there is a situation in which both transitions are applicable. Then M is deterministic if it has no two distinct compatible transitions.

For example, the machine we constructed in Example 3.3.1 to accept the language {wcwᴿ : w ∈ {a, b}*} is deterministic: for each choice of state and input symbol, there is only one possible transition. On the other hand, the machine we constructed in Example 3.3.2 to accept {wwᴿ : w ∈ {a, b}*} is not deterministic: Transition 3 is compatible with both Transitions 1 and 2; notice that these are the transitions that "guess" the middle of the string, an action which is intuitively nondeterministic.

Deterministic context-free languages are essentially those that are accepted by deterministic pushdown automata. However, for reasons that will become clear very soon, we have to modify the acceptance convention slightly. A language is said to be deterministic context-free if it is recognized by a deterministic pushdown automaton that also has the extra capability of sensing the end of the input string. Formally, we call a language L ⊆ Σ* deterministic context-free if L$ = L(M) for some deterministic pushdown automaton M. Here $ is a new symbol, not in Σ, which is appended to each input string for the purpose of marking its end.

Every deterministic context-free language, as just defined, is a context-free language. To see this, suppose a deterministic pushdown automaton M accepts L$. Then a (nondeterministic) pushdown automaton M′ that accepts L can be constructed. At any point, M′ may "imagine" a $ in the input and jump to a new set of states from which it reads no further input.

If, on the other hand, we had not adopted this special acceptance convention, then many context-free languages that are deterministic intuitively would not be deterministic by our definition. One example is L = a* ∪ {aⁿbⁿ : n ≥ 1}. A deterministic pushdown automaton cannot both remember how many a's it has seen, in order to check the string of b's that may follow, and at the same time be ready to accept with empty stack in case no b's follow. However, one can easily design a deterministic pushdown automaton accepting L$: if a $ is met while the machine is still accumulating a's, then the input was a string in a*. If this happens, the stack is emptied and the input accepted.

The natural question at this point is whether every context-free language is deterministic, just as every regular language is accepted by a deterministic finite automaton. It would be surprising if this were so. Consider, for example, the context-free language

L = {aⁿbᵐcᵖ : m, n, p ≥ 0, and m ≠ n or m ≠ p}.

It would seem that a pushdown automaton could accept this language only by guessing which two blocks of symbols to compare: the a's with the b's, or the b's with the c's. Without so using nondeterminism, it would seem, the machine could not compare the b's with the a's, while at the same time preparing to compare the b's with the c's. However, to prove that L is not deterministic requires a more indirect argument: the complement of L is not context-free.

Theorem 3.7.1: The class of deterministic context-free languages is closed under complement.

Proof: Let L ⊆ Σ* be a language such that L$ is accepted by the deterministic pushdown automaton M = (K, Σ, Γ, Δ, s, F). It will be convenient to assume, as in the proof of Lemma 3.4.2, that M is simple, that is, no transition of M pops more than one symbol from the stack, while an initial transition places a stack bottom symbol Z on the stack that is removed just before the end of the computation; it is easy to see that the construction employed to this end in the proof of Lemma 3.4.2 does not affect the deterministic nature of M.

Since M is deterministic, it would appear that all that is required in order to obtain a device that accepts (Σ* − L)$ is to reverse accepting and non-accepting states, as we have done with deterministic finite automata in the proof of Theorem 2.3.1(d), and will do again in the next chapter with more complex deterministic devices. In the present situation, however, this simple reversal will not work, because a deterministic pushdown automaton may reject an input not only by reading it and finally reaching a non-accepting state, but also by never finishing reading its input. This intriguing possibility may arise in two ways:

First, M may enter a configuration C at which none of the transitions in Δ is applicable. Second, and perhaps more intriguingly, M may enter a configuration from which M executes a never-ending sequence of e-moves (transitions of the form ((q, e, α), (p, β))).

Let us call a configuration C = (q, w, α) of M a dead end if the following is true: if C ⊢*_M C′ for some other configuration C′ = (q′, w′, α′), then w′ = w and |α′| ≥ |α|. That is, a configuration is said to be a dead end if no progress can be made starting from it towards either reading more input, or reducing the height of the stack. Obviously, if M is at a dead-end configuration, then it will indeed fail to read its input to the end. Conversely, it is not hard to see that, if M has no dead-end configurations, then it will definitely read all its input. This is because, in the absence of dead-end configurations, there is always a time in the future at which either the next input symbol will be read, or the height of the stack will be decreased; and the second option can only be taken finitely many times, since the stack height cannot be decreased infinitely many times.

We shall show how to transform any simple deterministic pushdown automaton M into an equivalent deterministic pushdown automaton without dead-end configurations. The point is that, since M is assumed to be simple, whether a configuration is or is not a dead end depends only on the current state, the next input symbol, and the top stack symbol. In particular, let q ∈ K be a state, a ∈ Σ an input symbol, and A ∈ Γ a stack symbol. We say that the triple (q, a, A) is a dead end if there is no state p and stack symbol string α such that the configuration (q, a, A) yields either (p, e, α) or (p, a, e). That is, a triple (q, a, A) is a dead end if it is a dead end when considered as a configuration. Let D ⊆ K × Σ × Γ denote the set of all dead-end triples. Notice that we are not claiming that we can effectively tell by examining a triple whether it is in D or not (although it can be done); all we are saying is that the set D is a well-defined, finite set of triples.

Our modification of M is the following: for each triple (q, a, A) ∈ D we remove from Δ all transitions compatible with (q, a, A), and we add to Δ the transition ((q, a, A), (r, e)), where r is a new, non-accepting state. Finally, we add to Δ these transitions: ((r, a, e), (r, e)) for all a ∈ Σ; ((r, $, e), (r′, e)); and ((r′, e, A), (r′, e)) for each A ∈ Γ ∪ {Z}, where r′ is another new, non-accepting state. These transitions enable M′, when in state r, to read the whole input (without consulting the stack), and, upon reading a $, to empty the stack and reject. Call the resulting pushdown automaton M′.

It is easy to check that M′ is deterministic, and accepts the same language as M (M′ simply rejects explicitly whenever M would have rejected implicitly by failing to read the rest of the input). Furthermore, M′ was constructed so that it has no dead-end configurations, and hence it will always end up reading its whole input. Now reversing the roles of accepting and non-accepting states of M′ produces a deterministic pushdown automaton that accepts (Σ* − L)$, and the proof is complete. ■

Theorem 3.7.1 indeed establishes that the context-free language L = {aⁿbᵐcᵖ : m ≠ n or m ≠ p} above is not deterministic: if L were deterministic, then its complement would also be deterministic context-free, and therefore certainly context-free. Hence, the intersection of the complement of L with the regular language a*b*c* would be context-free, by Theorem 3.5.2. But it is easy to see that this intersection is precisely the language {aⁿbⁿcⁿ : n ≥ 0}, which we know is not context-free. We conclude that the context-free language L is not deterministic context-free:

Corollary: The class of deterministic context-free languages is properly contained in the class of context-free languages.

In other words, nondeterminism is more powerful than determinism in the context of pushdown automata. In contrast, we saw in the last chapter that nondeterminism adds nothing to the power of finite automata, unless the number of states is taken into account, in which case it is exponentially more powerful.

This intriguing issue of the power of nondeterminism in various computational contexts is perhaps the single most important thread that runs through this book.
