## Primality Tests

• primes asks if a number N is a prime.

• The classic algorithm tests whether $k \mid N$ for $k = 2, 3, \ldots, \lfloor\sqrt{N}\rfloor$.

• But it runs in $\Omega(2^{(\log_2 N)/2})$ steps.

## Primality Tests (concluded)

• Suppose $N = PQ$ is a product of 2 distinct primes.

• The probability of success of the density attack (p. 484) is
$$\approx \frac{2}{\sqrt{N}}$$
when $P \approx Q$.

• This probability is exponentially small in terms of the input length $\log_2 N$.

## The Fermat Test for Primality

Fermat's "little" theorem (p. 487) suggests the following primality test for any given number N:

1: Pick a number a randomly from $\{1, 2, \ldots, N-1\}$;
2: **if** $a^{N-1} \not\equiv 1 \bmod N$ **then**
3: **return** "N is composite";
4: **else**
5: **return** "N is (probably) a prime";
6: **end if**
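The pseudocode above translates directly into a few lines. A minimal sketch: `fermat_test` is a name chosen here, and Python's built-in three-argument `pow` performs the modular exponentiation.

```python
import random

def fermat_test(N, a=None):
    """One round of the Fermat test above; a is drawn at random unless given."""
    if a is None:
        a = random.randint(1, N - 1)      # step 1: pick a from {1, ..., N-1}
    if pow(a, N - 1, N) != 1:             # step 2: a^(N-1) not 1 mod N?
        return "composite"                # certainly composite
    return "probably prime"
```

For a prime input every base passes, so the answer is deterministic; for a composite input the answer depends on the drawn base.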

## The Fermat Test for Primality (concluded)

• **Carmichael numbers** are composite numbers that pass the Fermat test for all $a \in \{1, 2, \ldots, N-1\}$ relatively prime to $N$.^{a}

  – The Fermat test will return "N is a prime" for all Carmichael numbers N.

• Unfortunately, there are infinitely many Carmichael numbers.^{b}

• In fact, the number of Carmichael numbers less than N exceeds $N^{2/7}$ for N large enough.

• So the Fermat test is an incorrect algorithm for primes.

aCarmichael (1910). Lo (1994) mentions an investment strategy based on such numbers!

bAlford, Granville, & Pomerance (1992).
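As a quick sanity check, one can confirm by brute force that $561 = 3 \cdot 11 \cdot 17$, the smallest Carmichael number, fools the Fermat test for every base coprime to it (a sketch; variable names are ours):

```python
from math import gcd

N = 561  # = 3 * 11 * 17, the smallest Carmichael number
# Every base a coprime to N satisfies a^(N-1) = 1 mod N, so a random
# a fools the Fermat test unless it happens to share a factor with N.
fooling = all(pow(a, N - 1, N) == 1
              for a in range(1, N) if gcd(a, N) == 1)
print(fooling)
```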

## Square Roots Modulo a Prime

• Equation $x^2 \equiv a \bmod p$ has at most two (distinct) roots by Lemma 63 (p. 492).

  – The roots are called **square roots**.

  – Numbers a with square roots and $\gcd(a, p) = 1$ are called **quadratic residues**.

    ∗ They are $1^2 \bmod p,\ 2^2 \bmod p,\ \ldots,\ (p-1)^2 \bmod p$.

• We shall show that a number either has two roots or has none, and testing which is the case is trivial.^{a}

aBut no efficient deterministic general-purpose square-root-extracting algorithms are known yet.

## Euler’s Test

**Lemma 68 (Euler)** Let p be an odd prime and $a \not\equiv 0 \bmod p$.

1. If
$$a^{(p-1)/2} \equiv 1 \bmod p,$$
then $x^2 \equiv a \bmod p$ has two roots.

2. If
$$a^{(p-1)/2} \not\equiv 1 \bmod p,$$
then
$$a^{(p-1)/2} \equiv -1 \bmod p$$
and $x^2 \equiv a \bmod p$ has no roots.
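Lemma 68 can be checked exhaustively for a small prime. The sketch below (`euler_dichotomy_holds` is a name chosen here) verifies both cases by brute-force root counting:

```python
def euler_dichotomy_holds(p):
    """Check Lemma 68 exhaustively for a small odd prime p."""
    for a in range(1, p):
        euler = pow(a, (p - 1) // 2, p)            # a^((p-1)/2) mod p
        roots = [x for x in range(1, p) if x * x % p == a]
        two_roots = euler == 1 and len(roots) == 2
        no_roots = euler == p - 1 and roots == []  # euler = -1 mod p
        if not (two_roots or no_roots):
            return False
    return True

print(euler_dichotomy_holds(13))
```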

## The Proof (continued)

• Let r be a primitive root of p.

• Fermat's "little" theorem says $r^{p-1} \equiv 1 \bmod p$, so $r^{(p-1)/2}$ is a square root of 1.

• In particular,
$$r^{(p-1)/2} \equiv 1 \text{ or } -1 \bmod p.$$

• But as r is a primitive root, $r^{(p-1)/2} \not\equiv 1 \bmod p$.

• Hence $r^{(p-1)/2} \equiv -1 \bmod p$.

## The Proof (continued)

• Let $a = r^k \bmod p$ for some k.

• Suppose $a^{(p-1)/2} \equiv 1 \bmod p$.

• Then
$$1 \equiv a^{(p-1)/2} \equiv r^{k(p-1)/2} \equiv \left( r^{(p-1)/2} \right)^k \equiv (-1)^k \bmod p.$$

• So k must be even.

## The Proof (continued)

• Suppose $a = r^{2j} \bmod p$ for some $1 \le j \le (p-1)/2$.

• Then
$$a^{(p-1)/2} \equiv r^{j(p-1)} \equiv 1 \bmod p.$$

• The two distinct roots of a are
$$r^j,\ -r^j\ (\equiv r^{j+(p-1)/2} \bmod p).$$

  – If $r^j \equiv -r^j \bmod p$, then $2r^j \equiv 0 \bmod p$, which implies $r^j \equiv 0 \bmod p$, a contradiction as r is a primitive root.

## The Proof (continued)

• As $1 \le j \le (p-1)/2$, there are $(p-1)/2$ such a's.

• Each such $a \equiv r^{2j} \bmod p$ has 2 distinct square roots.

• The square roots of all these a's are distinct.

  – The square roots of different a's must be different.

• Hence the set of square roots is $\{1, 2, \ldots, p-1\}$.

• As a result,
$$a = r^{2j} \bmod p, \quad 1 \le j \le (p-1)/2,$$
exhaust all the quadratic residues.

## The Proof (concluded)

• Suppose $a = r^{2j+1} \bmod p$ now.

• Then it has no square roots because all the square roots have been taken.

• Finally,
$$a^{(p-1)/2} \equiv \left( r^{(p-1)/2} \right)^{2j+1} \equiv (-1)^{2j+1} \equiv -1 \bmod p.$$

## The Legendre Symbol^{a} and Quadratic Residuacity Test

• By Lemma 68 (p. 554),
$$a^{(p-1)/2} \bmod p = \pm 1$$
for $a \not\equiv 0 \bmod p$.

• For an odd prime p, define the **Legendre symbol** $(a \mid p)$ as
$$(a \mid p) = \begin{cases} 0 & \text{if } p \mid a, \\ 1 & \text{if } a \text{ is a quadratic residue modulo } p, \\ -1 & \text{if } a \text{ is a quadratic nonresidue modulo } p. \end{cases}$$

• It is sometimes pronounced "a over p."

aAdrien-Marie Legendre (1752–1833).

## The Legendre Symbol and Quadratic Residuacity Test (concluded)

• Euler's test (p. 554) implies
$$a^{(p-1)/2} \equiv (a \mid p) \bmod p$$
for any odd prime p and any integer a.

• Note that $(ab \mid p) = (a \mid p)(b \mid p)$.
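The quadratic residuacity test above is a one-liner. A minimal sketch via Euler's test; `legendre` is an illustrative name, not a library function:

```python
def legendre(a, p):
    """(a | p) for an odd prime p, via Euler's test a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0                 # p divides a
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
```

Multiplicativity $(ab \mid p) = (a \mid p)(b \mid p)$ can be spot-checked directly with this helper.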

## Gauss’s Lemma

**Lemma 69 (Gauss)** Let p and q be two distinct odd primes. Then $(q \mid p) = (-1)^m$, where m is the number of residues in $R = \{\, iq \bmod p : 1 \le i \le (p-1)/2 \,\}$ that are greater than $(p-1)/2$.

• All residues in R are distinct.

  – If $iq \equiv jq \bmod p$, then $p \mid (j - i)$ or $p \mid q$.

  – But neither is possible.

• No two elements of R add up to p.

  – If $iq + jq \equiv 0 \bmod p$, then $p \mid (i + j)$ or $p \mid q$.

  – But neither is possible.

## The Proof (continued)

• Replace each of the m elements $a \in R$ such that $a > (p-1)/2$ by $p - a$.

  – This is equivalent to performing $-a \bmod p$.

• Call the resulting set of residues $R'$.

• All numbers in $R'$ are at most $(p-1)/2$.

• In fact, $R' = \{1, 2, \ldots, (p-1)/2\}$ (see the illustration on the next page).

  – Otherwise, two elements of R would add up to p,^{a} which has been shown to be impossible.

aBecause then $iq \equiv -jq \bmod p$ for some $i \ne j$.

(Figure: the residues $iq \bmod p$ and their reflections, for $p = 7$ and $q = 5$.)

## The Proof (concluded)

• Alternatively, $R' = \{\, \pm iq \bmod p : 1 \le i \le (p-1)/2 \,\}$, where exactly m of the elements have the minus sign.

• Take the product of all elements in the two representations of $R'$.

• So
$$[(p-1)/2]! \equiv (-1)^m q^{(p-1)/2} [(p-1)/2]! \bmod p.$$

• Because $\gcd([(p-1)/2]!, p) = 1$, the above implies
$$1 \equiv (-1)^m q^{(p-1)/2} \bmod p.$$
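Gauss's lemma is easy to spot-check numerically. The sketch below counts m directly and compares $(-1)^m$ against Euler's test (function names are ours):

```python
def gauss_m(q, p):
    """m in Gauss's lemma: how many of iq mod p, 1 <= i <= (p-1)/2, exceed (p-1)/2."""
    return sum(1 for i in range(1, (p - 1) // 2 + 1)
               if i * q % p > (p - 1) // 2)

def gauss_lemma_holds(q, p):
    """Compare (-1)^m with the Legendre symbol (q | p) from Euler's test."""
    legendre = 1 if pow(q, (p - 1) // 2, p) == 1 else -1
    return (-1) ** gauss_m(q, p) == legendre
```

For example, with $p = 7$ and $q = 5$ the residues are $\{5, 3, 1\}$, so $m = 1$ and $(5 \mid 7) = -1$.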

## Legendre's Law of Quadratic Reciprocity^{a}

• Let p and q be two distinct odd primes.

• The next result says $(p \mid q)$ and $(q \mid p)$ are distinct if and only if both p and q are 3 mod 4.

**Lemma 70 (Legendre, 1785; Gauss)**
$$(p \mid q)(q \mid p) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}.$$

aFirst stated by Euler in 1751. Legendre (1785) did not give a correct proof. Gauss proved the theorem when he was 19. He gave at least 8 different proofs during his life. The 152nd proof appeared in 1963. A computer-generated formal proof was given in Russinoff (1990). As of 2008, there had been 4 such proofs. Wiedijk (2008): "the Law of Quadratic Reciprocity is the first nontrivial theorem that a student encounters in the mathematics curriculum."
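The law can likewise be verified by brute force over small prime pairs. A self-contained sketch (it restates the Euler-test Legendre symbol so the snippet stands alone):

```python
def legendre(a, p):
    """Euler's-test Legendre symbol, restated here for self-containment."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def reciprocity_holds(p, q):
    """Check (p | q)(q | p) = (-1)^(((p-1)/2)((q-1)/2))."""
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    return lhs == rhs

primes = [3, 5, 7, 11, 13, 17, 19]
ok = all(reciprocity_holds(p, q) for p in primes for q in primes if p != q)
```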

## The Proof (continued)

• Sum the elements of $R'$ in the previous proof modulo 2.

• On one hand, this is just $\sum_{i=1}^{(p-1)/2} i \bmod 2$.

• On the other hand, the sum equals
$$mp + \sum_{i=1}^{(p-1)/2} \left( iq - p \left\lfloor \frac{iq}{p} \right\rfloor \right) \bmod 2$$
$$= mp + \left( q \sum_{i=1}^{(p-1)/2} i - p \sum_{i=1}^{(p-1)/2} \left\lfloor \frac{iq}{p} \right\rfloor \right) \bmod 2.$$

  – m of the $iq \bmod p$ are replaced by $p - (iq \bmod p)$.

  – But signs are irrelevant under mod 2.

  – m is as in Lemma 69 (p. 562).

## The Proof (continued)

• Ignore odd multipliers to make the sum equal
$$m + \left( \sum_{i=1}^{(p-1)/2} i - \sum_{i=1}^{(p-1)/2} \left\lfloor \frac{iq}{p} \right\rfloor \right) \bmod 2.$$

• Equate the above with $\sum_{i=1}^{(p-1)/2} i$ modulo 2.

• Now simplify to obtain
$$m \equiv \sum_{i=1}^{(p-1)/2} \left\lfloor \frac{iq}{p} \right\rfloor \bmod 2.$$

## The Proof (continued)

• $\sum_{i=1}^{(p-1)/2} \lfloor iq/p \rfloor$ is the number of integral points below the line
$$y = (q/p)\, x$$
for $1 \le x \le (p-1)/2$.

• Gauss's lemma (p. 562) says $(q \mid p) = (-1)^m$.

• Repeat the proof with p and q reversed.

• Then $(p \mid q) = (-1)^{m'}$, where $m'$ is the number of integral points above the line $y = (q/p)\, x$ for $1 \le y \le (q-1)/2$.

## The Proof (concluded)

• As a result,
$$(p \mid q)(q \mid p) = (-1)^{m+m'}.$$

• But $m + m'$ is the total number of integral points in the $[1, \frac{p-1}{2}] \times [1, \frac{q-1}{2}]$ rectangle, which is
$$\frac{p-1}{2} \cdot \frac{q-1}{2}.$$

## Eisenstein’s Rectangle

(Figure: Eisenstein's rectangle, with corner $(p, q)$ and sides of $(p-1)/2$ and $(q-1)/2$ lattice points.)

Above, $p = 11$, $q = 7$, $m = 7$, $m' = 8$.
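The figure's counts can be reproduced with two floor sums (a sketch for the example $p = 11$, $q = 7$; variable names are ours):

```python
p, q = 11, 7   # the example in the figure above

half_p, half_q = (p - 1) // 2, (q - 1) // 2
m  = sum(i * q // p for i in range(1, half_p + 1))  # lattice points below y = (q/p)x
m_ = sum(j * p // q for j in range(1, half_q + 1))  # lattice points above it

print(m, m_, m + m_)   # m + m_ = ((p-1)/2)((q-1)/2) = 15
```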

## The Jacobi Symbol^{a}

• The Legendre symbol only works for odd prime moduli.

• The **Jacobi symbol** $(a \mid m)$ extends it to cases where m is not prime.

  – a is sometimes called the numerator and m the denominator.

• Trivially, $(1 \mid m) = 1$.

• Define $(a \mid 1) = 1$.

aCarl Jacobi (1804–1851).

## The Jacobi Symbol (concluded)

• Let $m = p_1 p_2 \cdots p_k$ be the prime factorization of m.

• When $m > 1$ is odd and $\gcd(a, m) = 1$, then
$$(a \mid m) = \prod_{i=1}^{k} (a \mid p_i).$$

  – Note that the Jacobi symbol equals $\pm 1$.

  – It reduces to the Legendre symbol when m is a prime.

## Properties of the Jacobi Symbol

The Jacobi symbol has the following properties when it is defined.

1. $(ab \mid m) = (a \mid m)(b \mid m)$.

2. $(a \mid m_1 m_2) = (a \mid m_1)(a \mid m_2)$.

3. If $a \equiv b \bmod m$, then $(a \mid m) = (b \mid m)$.

4. $(-1 \mid m) = (-1)^{(m-1)/2}$ (by Lemma 69 on p. 562).

5. $(2 \mid m) = (-1)^{(m^2-1)/8}$.^{a}

6. If a and m are both odd, then
$$(a \mid m)(m \mid a) = (-1)^{(a-1)(m-1)/4}.$$

aBy Lemma 69 (p. 562) and some parity arguments.

## Properties of the Jacobi Symbol (concluded)

• Properties 3–6 allow us to calculate the Jacobi symbol without factorization.

  – It will also yield the same result as Euler's test (p. 554) when m is an odd prime.

• This situation is similar to the Euclidean algorithm.

• Note also that $(a \mid m) = 1/(a \mid m)$ because $(a \mid m) = \pm 1$.^{a}

aContributed by Mr. Huang, Kuan-Lin (B96902079, R00922018) on December 6, 2011.
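Properties 3–6 yield the following factorization-free procedure, analogous to the Euclidean algorithm. A sketch (`jacobi` is an illustrative name, not a vetted library routine):

```python
def jacobi(a, m):
    """(a | m) for odd m > 0, using properties 3-6 (no factoring needed)."""
    assert m > 0 and m % 2 == 1
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:                # pull out factors of 2 via property 5
            a //= 2
            if m % 8 in (3, 5):          # (2 | m) = -1 exactly when m = 3, 5 mod 8
                result = -result
        a, m = m, a                      # flip with reciprocity (property 6)
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m                           # property 3
    return result if m == 1 else 0       # a common factor makes the symbol 0
```

The worked calculation on the next slide, $(2200 \mid 999) = -1$, follows this recipe step by step.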

## Calculation of (2200 | 999)

$$\begin{aligned}
(2200 \mid 999) &= (202 \mid 999) \\
&= (2 \mid 999)(101 \mid 999) \\
&= (-1)^{(999^2-1)/8} (101 \mid 999) \\
&= (-1)^{124750} (101 \mid 999) = (101 \mid 999) \\
&= (-1)^{(100)(998)/4} (999 \mid 101) = (-1)^{24950} (999 \mid 101) \\
&= (999 \mid 101) = (90 \mid 101) = (-1)^{(101^2-1)/8} (45 \mid 101) \\
&= (-1)^{1275} (45 \mid 101) = -(45 \mid 101) \\
&= -(-1)^{(44)(100)/4} (101 \mid 45) = -(101 \mid 45) = -(11 \mid 45) \\
&= -(-1)^{(10)(44)/4} (45 \mid 11) = -(45 \mid 11) \\
&= -(1 \mid 11) = -1.
\end{aligned}$$

## A Result Generalizing Proposition 10.3 in the Textbook

**Theorem 71** The group $\Phi(n)$ under multiplication mod n has a primitive root if and only if n is either 1, 2, 4, $p^k$, or $2p^k$ for some nonnegative integer k and an odd prime p.

This result is essential in the proof of the next lemma.

## The Jacobi Symbol and Primality Test^{a}

**Lemma 72** If $(M \mid N) \equiv M^{(N-1)/2} \bmod N$ for all $M \in \Phi(N)$, then N is a prime. (Assume N is odd.)

• Assume $N = mp$, where p is an odd prime, $\gcd(m, p) = 1$, and $m > 1$ (not necessarily prime).

• Let $r \in \Phi(p)$ be such that $(r \mid p) = -1$.

• The Chinese remainder theorem says that there is an $M \in \Phi(N)$ such that
$$M \equiv r \bmod p,$$
$$M \equiv 1 \bmod m.$$

aMr. Clement Hsiao (B4506061, R88526067) pointed out that the textbook's proof for Lemma 11.8 is incorrect in January 1999 while he was a senior.

## The Proof (continued)

• By the hypothesis,
$$M^{(N-1)/2} \equiv (M \mid N) = (M \mid p)(M \mid m) = -1 \bmod N.$$

• Hence
$$M^{(N-1)/2} \equiv -1 \bmod m.$$

• But because $M \equiv 1 \bmod m$,
$$M^{(N-1)/2} \equiv 1 \bmod m,$$
a contradiction.

## The Proof (continued)

• Second, assume that $N = p^a$, where p is an odd prime and $a \ge 2$.

• By Theorem 71 (p. 577), there exists a primitive root r modulo $p^a$.

• From the assumption,
$$M^{N-1} = \left( M^{(N-1)/2} \right)^2 \equiv (M \mid N)^2 = 1 \bmod N$$
for all $M \in \Phi(N)$.

## The Proof (continued)

• As $r \in \Phi(N)$ (prove it), we have
$$r^{N-1} \equiv 1 \bmod N.$$

• As r's exponent modulo $N = p^a$ is $\phi(N) = p^{a-1}(p-1)$,
$$p^{a-1}(p-1) \mid (N-1),$$
which implies that $p \mid (N-1)$.

• But this is impossible given that $p \mid N$.

## The Proof (continued)

• Third, assume that $N = mp^a$, where p is an odd prime, $\gcd(m, p) = 1$, $m > 1$ (not necessarily prime), and $a \ge 2$.

• The proof mimics that of the second case.

• By Theorem 71 (p. 577), there exists a primitive root r modulo $p^a$.

• From the assumption,
$$M^{N-1} = \left( M^{(N-1)/2} \right)^2 \equiv (M \mid N)^2 = 1 \bmod N$$
for all $M \in \Phi(N)$.

## The Proof (continued)

• In particular,
$$M^{N-1} \equiv 1 \bmod p^a \qquad (14)$$
for all $M \in \Phi(N)$.

• The Chinese remainder theorem says that there is an $M \in \Phi(N)$ such that
$$M \equiv r \bmod p^a,$$
$$M \equiv 1 \bmod m.$$

• Because $M \equiv r \bmod p^a$ and Eq. (14),
$$r^{N-1} \equiv 1 \bmod p^a.$$

## The Proof (concluded)

• As r's exponent modulo $p^a$ is $\phi(p^a) = p^{a-1}(p-1)$,
$$p^{a-1}(p-1) \mid (N-1),$$
which implies that $p \mid (N-1)$.

• But this is impossible given that $p \mid N$.

## The Number of Witnesses to Compositeness

**Theorem 73 (Solovay & Strassen, 1977)** If N is an odd composite, then $(M \mid N) \equiv M^{(N-1)/2} \bmod N$ for at most half of $M \in \Phi(N)$.

• By Lemma 72 (p. 578) there is at least one $a \in \Phi(N)$ such that $(a \mid N) \not\equiv a^{(N-1)/2} \bmod N$.

• Let $B = \{\, b_1, b_2, \ldots, b_k \,\} \subseteq \Phi(N)$ be the set of all distinct residues such that $(b_i \mid N) \equiv b_i^{(N-1)/2} \bmod N$.

• Let $aB = \{\, ab_i \bmod N : i = 1, 2, \ldots, k \,\}$.

• Clearly, $aB \subseteq \Phi(N)$, too.

## The Proof (concluded)

• $|aB| = k$.

  – $ab_i \equiv ab_j \bmod N$ implies $N \mid a(b_i - b_j)$, which is impossible because $\gcd(a, N) = 1$ and $N > |b_i - b_j|$.

• $aB \cap B = \emptyset$ because
$$(ab_i)^{(N-1)/2} \equiv a^{(N-1)/2} b_i^{(N-1)/2} \not\equiv (a \mid N)(b_i \mid N) = (ab_i \mid N).$$

• Combining the above two results, we know
$$\frac{|B|}{\phi(N)} \le \frac{|B|}{|B \cup aB|} = 0.5.$$

1: **if** N is even but $N \ne 2$ **then**
2: **return** "N is composite";
3: **else if** N = 2 **then**
4: **return** "N is a prime";
5: **end if**
6: Pick $M \in \{2, 3, \ldots, N-1\}$ randomly;
7: **if** $\gcd(M, N) > 1$ **then**
8: **return** "N is composite";
9: **else**
10: **if** $(M \mid N) \equiv M^{(N-1)/2} \bmod N$ **then**
11: **return** "N is (probably) a prime";
12: **else**
13: **return** "N is composite";
14: **end if**
15: **end if**
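Putting the pieces together, here is a sketch of one round of the pseudocode above (the `jacobi` helper repeats the factorization-free computation so the snippet is self-contained; function names are ours):

```python
import random
from math import gcd

def jacobi(a, m):
    """(a | m) for odd m > 0, computed without factoring."""
    a %= m
    result = 1
    while a != 0:
        while a % 2 == 0:                # (2 | m) via property 5
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a                      # reciprocity (property 6)
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m
    return result if m == 1 else 0

def solovay_strassen(N):
    """One round of the randomized primality test in the pseudocode above."""
    if N % 2 == 0:
        return "prime" if N == 2 else "composite"
    M = random.randint(2, N - 1)
    if gcd(M, N) > 1:
        return "composite"
    # compare (M | N) with M^((N-1)/2) mod N; -1 is represented as N - 1
    if jacobi(M, N) % N == pow(M, (N - 1) // 2, N):
        return "probably prime"
    return "composite"
```

For a prime N every choice of M passes, so the "probably prime" answer is deterministic there; for a composite N at least half of $\Phi(N)$ witnesses compositeness.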

## Analysis

• The algorithm certainly runs in polynomial time.

• There are no false positives (for compositeness).

  – When the algorithm says the number is composite, it is always correct.

## Analysis (concluded)

• The probability of a false negative (again, for compositeness) is at most one half.

  – Suppose the input is composite.

  – By Theorem 73 (p. 585),
$$\text{prob}[\, \text{algorithm answers "no"} \mid N \text{ is composite} \,] \le 0.5.$$

  – Note that we are not referring to the probability that N is composite when the algorithm says "no."

• So it is a Monte Carlo algorithm for compositeness.^{a}

aNot primes.

## The Improved Density Attack for compositeness

(Figure: among all numbers < N, the witnesses to the compositeness of N via the Jacobi symbol, and the witnesses to the compositeness of N via a common factor.)

## Randomized Complexity Classes; RP

• Let N be a polynomial-time precise NTM that runs in time p(n) and has 2 nondeterministic choices at each step.

• N is a **polynomial Monte Carlo Turing machine** for a language L if the following conditions hold:

  – If $x \in L$, then at least half of the $2^{p(n)}$ computation paths of N on x halt with "yes," where $n = |x|$.

  – If $x \notin L$, then all computation paths halt with "no."

• The class of all languages with polynomial Monte Carlo TMs is denoted **RP** (randomized polynomial time).^{a}

aAdleman & Manders (1977).

## Comments on RP

• In analogy to Proposition 40 (p. 328), a "yes" instance of an RP problem has many certificates (witnesses).

• There are no false positives.

• If we associate nondeterministic steps with flipping fair coins, then we can phrase RP in the language of probability.

  – If $x \in L$, then N(x) halts with "yes" with probability at least 0.5.

  – If $x \notin L$, then N(x) halts with "no."

## Comments on RP (concluded)

• The probability of false negatives is $\le 0.5$.

• But any constant $\epsilon$ between 0 and 1 can replace 0.5.

  – Repeat the algorithm $k = \lceil -1/\log_2 \epsilon \rceil$ times and answer "no" only if all the runs answer "no."

  – The probability of false negatives becomes $\epsilon^k \le 0.5$.
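The arithmetic of this amplification is a one-liner (a sketch; `runs_needed` is a name chosen here):

```python
from math import ceil, log2

def runs_needed(eps):
    """Smallest k with eps^k <= 0.5, i.e. k = ceil(-1 / log2(eps))."""
    return ceil(-1 / log2(eps))

# e.g. a one-sided test with false-negative probability 0.9 per run
# needs 7 independent runs to push the overall rate below one half
```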

## Where RP Fits

• $P \subseteq RP \subseteq NP$.

  – A deterministic TM is like a Monte Carlo TM except that all the coin flips are ignored.

  – A Monte Carlo TM is an NTM with more demands on the number of accepting paths.

• compositeness ∈ RP;^{a} primes ∈ coRP; primes ∈ RP.^{b}

  – In fact, primes ∈ P.^{c}

• RP ∪ coRP is an alternative "plausible" notion of efficient computation.

aRabin (1976); Solovay & Strassen (1977).

bAdleman & Huang (1987).

cAgrawal, Kayal, & Saxena (2002).

## ZPP^{a} (Zero Probabilistic Polynomial)

• The class **ZPP** is defined as RP ∩ coRP.

• A language in ZPP has two Monte Carlo algorithms, one with no false positives (RP) and the other with no false negatives (coRP).

• If we repeatedly run both Monte Carlo algorithms, eventually one definite answer will come (unlike RP).

  – A positive answer from the one without false positives.

  – A negative answer from the one without false negatives.

aGill (1977).

**The ZPP Algorithm (Las Vegas)**

1: {Suppose $L \in ZPP$.}
2: {$N_1$ has no false positives, and $N_2$ has no false negatives.}
3: **while** true **do**
4: **if** $N_1(x)$ = "yes" **then**
5: **return** "yes";
6: **end if**
7: **if** $N_2(x)$ = "no" **then**
8: **return** "no";
9: **end if**
10: **end while**

## ZPP (concluded)

• The expected running time for the correct answer to emerge is polynomial.

  – The probability that a run of the 2 algorithms does not generate a definite answer is 0.5 (why?).

  – Let p(n) be the running time of each run of the while-loop.

  – The expected running time for a definite answer is
$$\sum_{i=1}^{\infty} 0.5^i \, i \, p(n) = 2 p(n).$$

• Essentially, ZPP is the class of problems that can be solved, without errors, in expected polynomial time.

## Large Deviations

• Suppose you have a biased coin.

• One side has probability $0.5 + \epsilon$ to appear and the other $0.5 - \epsilon$, for some $0 < \epsilon < 0.5$.

• But you do not know which is which.

• How to decide which side is the more likely side, with high confidence?

• Answer: Flip the coin many times and pick the side that appeared the most times.

• Question: Can you quantify your confidence?

## The Chernoff Bound^{a}

**Theorem 74 (Chernoff, 1952)** Suppose $x_1, x_2, \ldots, x_n$ are independent random variables taking the values 1 and 0 with probabilities p and $1 - p$, respectively. Let $X = \sum_{i=1}^{n} x_i$. Then for all $0 \le \theta \le 1$,
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] \le e^{-\theta^2 pn/3}.$$

• The probability that a binomial random variable deviates from its expected value
$$E[X] = E\left[ \sum_{i=1}^{n} x_i \right] = pn$$
decreases exponentially with the deviation.

aHerman Chernoff (1923–). The bound is asymptotically optimal.
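The bound can be compared against the exact binomial tail for a small instance (a sketch; the parameters n = 100, p = 0.5, θ = 0.2 are chosen arbitrarily for illustration):

```python
from math import comb, exp

def binom_upper_tail(n, p, k):
    """Exact prob[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p, theta = 100, 0.5, 0.2
k = 60                                   # = (1 + theta) * p * n
exact = binom_upper_tail(n, p, k)        # prob[X >= 60]
bound = exp(-theta**2 * p * n / 3)       # e^(-theta^2 pn/3), about 0.51
print(exact <= bound)
```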

## The Proof

• Let t be any positive real number.

• Then
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] = \text{prob}[\, e^{tX} \ge e^{t(1+\theta)\, pn} \,].$$

• Markov's inequality (p. 535) generalized to real-valued random variables says that
$$\text{prob}\!\left[\, e^{tX} \ge k E[\, e^{tX} \,] \,\right] \le 1/k.$$

• With $k = e^{t(1+\theta)\, pn} / E[\, e^{tX} \,]$, we have^{a}
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] \le e^{-t(1+\theta)\, pn} E[\, e^{tX} \,].$$

aNote that X does not appear in k. Contributed by Mr. Ao Sun (R05922147) on December 20, 2016.

## The Proof (continued)

• Because $X = \sum_{i=1}^{n} x_i$ and the $x_i$'s are independent,
$$E[\, e^{tX} \,] = \left( E[\, e^{tx_1} \,] \right)^n = [\, 1 + p(e^t - 1) \,]^n.$$

• Substituting, we obtain
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] \le e^{-t(1+\theta)\, pn} [\, 1 + p(e^t - 1) \,]^n \le e^{-t(1+\theta)\, pn} e^{pn(e^t - 1)},$$
as $(1 + a)^n \le e^{an}$ for all $a > 0$.

## The Proof (concluded)

• With the choice of $t = \ln(1 + \theta)$, the above becomes
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] \le e^{pn [\, \theta - (1+\theta) \ln(1+\theta) \,]}.$$

• The exponent expands to
$$-\frac{\theta^2}{2} + \frac{\theta^3}{6} - \frac{\theta^4}{12} + \cdots$$
for $0 \le \theta \le 1$.

• But it is less than
$$-\frac{\theta^2}{2} + \frac{\theta^3}{6} \le \theta^2 \left( -\frac{1}{2} + \frac{\theta}{6} \right) \le \theta^2 \left( -\frac{1}{2} + \frac{1}{6} \right) = -\frac{\theta^2}{3}.$$

## Other Variations of the Chernoﬀ Bound

The following can be proved similarly (prove it).

**Theorem 75** Given the same terms as Theorem 74 (p. 599),
$$\text{prob}[\, X \le (1 - \theta)\, pn \,] \le e^{-\theta^2 pn/2}.$$

The following slightly looser inequalities achieve symmetry.

**Theorem 76 (Karp, Luby, & Madras, 1989)** Given the same terms as Theorem 74 (p. 599) except with $0 \le \theta \le 2$,
$$\text{prob}[\, X \ge (1 + \theta)\, pn \,] \le e^{-\theta^2 pn/4},$$
$$\text{prob}[\, X \le (1 - \theta)\, pn \,] \le e^{-\theta^2 pn/4}.$$

## Power of the Majority Rule

The next result follows from Theorem 75 (p. 603).

**Corollary 77** If $p = (1/2) + \epsilon$ for some $0 \le \epsilon \le 1/2$, then
$$\text{prob}\!\left[\, \sum_{i=1}^{n} x_i \le n/2 \,\right] \le e^{-\epsilon^2 n/2}.$$

• The textbook's corollary to Lemma 11.9 seems too loose, at $e^{-\epsilon^2 n/6}$.^{a}

• Our original problem (p. 598) hence demands, e.g., $n \approx 1.4 k/\epsilon^2$ independent coin flips to guarantee making an error with probability $\le 2^{-k}$ with the majority rule.

aSee Dubhashi & Panconesi (2012) for many Chernoff-type bounds.
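The flip-count estimate is again a one-liner (a sketch; `flips_needed` is a name chosen here, and $2 \ln 2 \approx 1.386$ is the constant behind the "$\approx 1.4$" above):

```python
from math import ceil, log

def flips_needed(k, eps):
    """n with e^(-eps^2 n/2) <= 2^(-k), i.e. n = ceil(2 ln(2) k / eps^2) ~ 1.4 k/eps^2."""
    return ceil(2 * log(2) * k / eps**2)

# e.g. with eps = 0.1 and target error 2^-20, about 2773 flips suffice
```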

## BPP^{a} (Bounded Probabilistic Polynomial)

• The class **BPP** contains all languages L for which there is a precise polynomial-time NTM N such that:

  – If $x \in L$, then at least 3/4 of the computation paths of N on x lead to "yes."

  – If $x \notin L$, then at least 3/4 of the computation paths of N on x lead to "no."

• So N accepts or rejects by a clear majority.

aGill (1977).