## How To Test If a Polynomial Is Identically Zero?

• det(A^G) is a polynomial in n^2 variables.

• There are exponentially many terms in det(A^G).

• Expanding the determinant polynomial is not feasible.

  – Too many terms.

• If det(A^G) ≡ 0, then it remains zero if we substitute arbitrary integers for the variables x_{11}, . . . , x_{nn}.

• But what is the likelihood of obtaining a zero when det(A^G) ≢ 0?

## Number of Roots of a Polynomial

**Lemma 59 (Schwartz (1980))** Let p(x_1, x_2, . . . , x_m) ≢ 0 be a polynomial in m variables each of degree at most d. Let M ∈ Z^+. Then the number of m-tuples (x_1, x_2, . . . , x_m) ∈ {0, 1, . . . , M − 1}^m such that p(x_1, x_2, . . . , x_m) = 0 is

  ≤ mdM^{m−1}.

• By induction on m (consult the textbook).
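The bound is easy to check empirically on a small example. The polynomial p(x, y) = xy − 1 and the parameters below are our own illustration, not from the text:

```python
from itertools import product

# Count the roots of p(x, y) = x*y - 1 over {0, ..., M-1}^2 and
# compare with the Schwartz bound m*d*M^(m-1).  Here m = 2 variables,
# each of degree at most d = 1.
M, m, d = 20, 2, 1
roots = sum(1 for x, y in product(range(M), repeat=m) if x * y - 1 == 0)
print(roots, "<=", m * d * M ** (m - 1))   # 1 <= 40
```

Only (1, 1) is a root, far below the bound of 40; the lemma is a worst-case guarantee.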

## Density Attack

• The density of roots in the domain is at most

  mdM^{m−1}/M^m = md/M.  (8)

• So suppose p(x_1, x_2, . . . , x_m) ≢ 0.

• Then a random (x_1, x_2, . . . , x_m) ∈ {0, 1, . . . , M − 1}^m has a probability of ≤ md/M of being a root of p.

• Note that M is under our control!

  – One can raise M to lower the error probability.

## Density Attack (concluded)

Here is a sampling algorithm to test if p(x_1, x_2, . . . , x_m) ≢ 0.

1: Choose i_1, . . . , i_m from {0, 1, . . . , M − 1} randomly;
2: **if** p(i_1, i_2, . . . , i_m) ≠ 0 **then**
3: **return** "p is not identically zero";
4: **else**
5: **return** "p is (probably) identically zero";
6: **end if**
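The test above translates directly into code. In the following sketch (our own; the callable-polynomial interface and the names are not from the text), a nonzero evaluation certifies that p ≢ 0, while a zero evaluation is only probable evidence:

```python
import random

def probably_nonzero(p, m, M):
    """One run of the sampling test: draw a random point in
    {0, ..., M-1}^m and evaluate p there.  A nonzero value proves p is
    not identically zero; a zero value is only probable evidence
    (error probability at most m*d/M when p is in fact nonzero)."""
    point = [random.randrange(M) for _ in range(m)]
    return p(*point) != 0

# p(x, y) = (x + y)^2 - x^2 - 2*x*y - y^2 is identically zero, so the
# test can never answer "nonzero" for it.
zero_poly = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
print(probably_nonzero(zero_poly, m=2, M=1000))   # always False
```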

## A Randomized Bipartite Perfect Matching Algorithm^a

We now return to the original problem of bipartite perfect matching.

1: Choose n^2 integers i_{11}, . . . , i_{nn} from {0, 1, . . . , 2n^2 − 1} randomly; {So M = 2n^2.}
2: Calculate det(A^G(i_{11}, . . . , i_{nn})) by Gaussian elimination;
3: **if** det(A^G(i_{11}, . . . , i_{nn})) ≠ 0 **then**
4: **return** "G has a perfect matching";
5: **else**
6: **return** "G has no perfect matchings";
7: **end if**

^a Lovász (1979). According to Paul Erdős, Lovász wrote his first significant paper "at the ripe old age of 17."
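A minimal sketch of one run of the algorithm (our own rendering; for simplicity the Gaussian elimination is done over exact rationals rather than with the polynomial-size intermediate results discussed later):

```python
import random
from fractions import Fraction

def det_exact(A):
    """Determinant by Gaussian elimination, done over the rationals
    so the result is exact."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # a zero column: determinant is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det                  # a row swap flips the sign
        det *= A[col][col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    return det

def has_perfect_matching(n, edges):
    """One run of the randomized test on an n-by-n bipartite graph
    given as a set of 0-based (left, right) pairs.  'Yes' answers are
    always correct; a 'no' answer errs with probability at most 1/2."""
    M = 2 * n * n
    A = [[random.randrange(M) if (i, j) in edges else 0
          for j in range(n)] for i in range(n)]
    return det_exact(A) != 0

print(has_perfect_matching(3, {(0, 0), (1, 1), (2, 2), (0, 1)}))
```

Repeating the run k times and answering "no" only if every run says no reduces the error probability to 2^{−k}, as the analysis below shows.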

## Analysis

• If G has no perfect matchings, the algorithm will always be correct as det(A^G(i_{11}, . . . , i_{nn})) = 0.

• Suppose G has a perfect matching.

  – The algorithm will answer incorrectly with probability at most md/M = 0.5 with m = n^2, d = 1, and M = 2n^2 in Eq. (8) on p. 473.

  – Run the algorithm independently k times.

  – Output "G has no perfect matchings" if and only if all runs say no.

  – The error probability is now reduced to at most 2^{−k}.

## László Lovász (1948–)

## Remarks^a

• Note that we are calculating

  prob[ algorithm answers "no" | G has no perfect matchings ],
  prob[ algorithm answers "yes" | G has a perfect matching ].

• We are not calculating^b

  prob[ G has no perfect matchings | algorithm answers "no" ],
  prob[ G has a perfect matching | algorithm answers "yes" ].

^a Thanks to a lively class discussion on May 1, 2008.

^b Numerical Recipes in C (1988), "[As] we already remarked, statistics is not a branch of mathematics!"

## But How Large Can det(A^G(i_{11}, . . . , i_{nn})) Be?

• It is at most

  n! (2n^2)^n.

• Stirling's formula says n! ∼ √(2πn) (n/e)^n.

• Hence

  log_2 det(A^G(i_{11}, . . . , i_{nn})) = O(n log_2 n)

bits are sufficient for representing the determinant.

• We skip the details about how to make sure that all intermediate results are of polynomial sizes.

## An Intriguing Question^a

• Is there an (i_{11}, . . . , i_{nn}) that will always give correct answers for the algorithm on p. 475?

• A theorem on p. 571 shows that such an (i_{11}, . . . , i_{nn}) exists!

  – Whether it can be found efficiently is another matter.

• Once (i_{11}, . . . , i_{nn}) is available, the algorithm can be made deterministic.

^a Thanks to a lively class discussion on November 24, 2004.

## Randomization vs. Nondeterminism^a

• What are the differences between randomized algorithms and nondeterministic algorithms?

• One can think of a randomized algorithm as a nondeterministic algorithm but with a probability associated with every guess/branch.

• So each computation path of a randomized algorithm has a probability associated with it.

^a Contributed by Mr. Olivier Valery (D01922033) and Mr. Hasan Alhasan (D01922034) on November 27, 2012.

## Monte Carlo Algorithms^a

• The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

  – If the algorithm finds that a matching exists, it is always correct (no false positives).

  – If the algorithm answers in the negative, then it may make an error (false negatives).

^a Metropolis and Ulam (1949).

## Monte Carlo Algorithms (concluded)

• The algorithm makes a false negative with probability ≤ 0.5.^a

  – Note this probability refers to^b

    prob[ algorithm answers "no" | G has a perfect matching ],

    not

    prob[ G has a perfect matching | algorithm answers "no" ].

• This probability is not over the space of all graphs or determinants, but over the algorithm's own coin flips.

  – It holds for any bipartite graph.

^a Equivalently, among the coin flip sequences, at most half of them lead to the wrong answer.

^b In general, prob[ algorithm answers "no" | input is a "yes" instance ].

## The Markov Inequality^a

**Lemma 60** Let x be a random variable taking nonnegative integer values. Then for any k > 0,

  prob[ x ≥ kE[ x ] ] ≤ 1/k.

• Let p_i denote the probability that x = i. Then

  E[ x ] = ∑_i i p_i = ∑_{i < kE[ x ]} i p_i + ∑_{i ≥ kE[ x ]} i p_i ≥ kE[ x ] × prob[ x ≥ kE[ x ] ].

^a Andrei Andreyevich Markov (1856–1922).
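A quick empirical illustration (the fair-die example is ours, not from the text): the observed tail probability stays well below the Markov bound 1/k.

```python
import random

# Empirical illustration of Markov's inequality with a fair die:
# E[x] = 3.5, and for k = 1.5 the bound gives prob[x >= 5.25] <= 1/1.5,
# while the true tail probability is prob[x = 6] = 1/6.
random.seed(0)
samples = [random.randint(1, 6) for _ in range(100_000)]
mean = sum(samples) / len(samples)
k = 1.5
tail = sum(s >= k * mean for s in samples) / len(samples)
print(f"observed tail {tail:.3f} <= Markov bound {1 / k:.3f}")
```

The bound is loose here; Markov's inequality only uses nonnegativity and the mean, nothing about the distribution's shape.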

## Andrei Andreyevich Markov (1856–1922)

## An Application of Markov’s Inequality

• Suppose algorithm C runs in expected time T(n) and always gives the right answer.

• Consider an algorithm that runs C for time kT(n) and rejects the input if C does not stop within the time bound.

• By Markov's inequality, this new algorithm runs in time kT(n) and gives the wrong answer with probability ≤ 1/k.

## An Application of Markov’s Inequality (concluded)

• By running this algorithm m times (the total running time is mkT(n)), we reduce the error probability to ≤ k^{−m}.^a

• Suppose, instead, we run the algorithm for the same running time mkT(n) once and reject the input if it does not stop within the time bound.

• By Markov's inequality, this new algorithm gives the wrong answer with probability ≤ 1/(mk).

• This is much worse than the previous algorithm's error probability of ≤ k^{−m} for the same amount of time.

^a With the same input. Thanks to a question on December 7, 2010.

## fsat for k-sat Formulas (p. 453)

• Let ϕ(x_1, x_2, . . . , x_n) be a k-sat formula.

• If ϕ is satisfiable, then return a satisfying truth assignment.

• Otherwise, return "no."

• We next propose a randomized algorithm for this problem.

## A Random Walk Algorithm for ϕ in CNF Form

1: Start with an arbitrary truth assignment T;
2: **for** i = 1, 2, . . . , r **do**
3: **if** T |= ϕ **then**
4: **return** "ϕ is satisfiable with T";
5: **else**
6: Let c be an unsatisfied clause in ϕ under T; {All of its literals are false under T.}
7: Pick any x of these literals at random;
8: Modify T to make x true;
9: **end if**
10: **end for**
11: **return** "ϕ is unsatisfiable";
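A sketch of the algorithm in code (our own encoding choices: clauses as lists of signed integers, DIMACS style):

```python
import random

def random_walk(clauses, n, r):
    """The random-walk algorithm.  Each clause is a list of nonzero
    integers: literal v means variable v is true, -v that it is false
    (variables are numbered 1..n).  Returns a satisfying assignment,
    or None after r flips (phi is then 'probably' unsatisfiable)."""
    T = {v: random.choice([False, True]) for v in range(1, n + 1)}
    true_under_T = lambda lit: T[abs(lit)] == (lit > 0)
    for _ in range(r):
        unsatisfied = [c for c in clauses
                       if not any(true_under_T(l) for l in c)]
        if not unsatisfied:
            return T                        # T |= phi
        c = random.choice(unsatisfied)      # an unsatisfied clause...
        x = random.choice(c)                # ...and a random literal in it
        T[abs(x)] = not T[abs(x)]           # flip to make x true
    return None

# A satisfiable 2sat instance with n = 2: (x1 or x2) and (not x1 or x2).
phi = [[1, 2], [-1, 2]]
print(random_walk(phi, n=2, r=2 * 2 ** 2))
```

Note the asymmetry proved below: a returned assignment is always correct, while "unsatisfiable" may be wrong.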

## 3sat vs. 2sat Again

• Note that if ϕ is unsatisfiable, the algorithm will not refute it.

• The random walk algorithm needs expected exponential time for 3sat.

  – In fact, it runs in expected O((1.333 · · · + ϵ)^n) time with r = 3n,^a much better than O(2^n).^b

• We will show immediately that it works well for 2sat.

• The state of the art as of 2006 is expected O(1.322^n) time for 3sat and expected O(1.474^n) time for 4sat.^c

^a Use this setting per run of the algorithm.

^b Schöning (1999).

^c Iwama and Tamaki (2004); Rolf (2006).

## Random Walk Works for 2sat^a

**Theorem 61** Suppose the random walk algorithm with r = 2n^2 is applied to any satisfiable 2sat problem with n variables. Then a satisfying truth assignment will be discovered with probability at least 0.5.

• Let T̂ be a truth assignment such that T̂ |= ϕ.

• Assume our starting T differs from T̂ in i values.

  – Their Hamming distance is i.

  – Recall T is arbitrary.

^a Papadimitriou (1991).

## The Proof

• Let t(i) denote the expected number of repetitions of the flipping step^a until a satisfying truth assignment is found.

• It can be shown that t(i) is finite.

• t(0) = 0 because it means that T = T̂ and hence T |= ϕ.

• If T ≠ T̂ or any other satisfying truth assignment, then we need to flip the coin at least once.

• We flip a coin to pick among the 2 literals of a clause not satisfied by the present T.

• At least one of the 2 literals is true under T̂ because T̂ satisfies all clauses.

^a That is, Statement 7.

## The Proof (continued)

• So we have at least 0.5 chance of moving closer to T̂.

• Thus

  t(i) ≤ [t(i − 1) + t(i + 1)]/2 + 1

for 0 < i < n.

  – Inequality is used because, for example, T may differ from T̂ in both literals.

• It must also hold that

  t(n) ≤ t(n − 1) + 1

because at i = n, we can only decrease i.

## The Proof (continued)

• Now, put the necessary relations together:

  t(0) = 0,  (9)
  t(i) ≤ [t(i − 1) + t(i + 1)]/2 + 1, 0 < i < n,  (10)
  t(n) ≤ t(n − 1) + 1.  (11)

• Technically, this is a one-dimensional random walk with an absorbing barrier at i = 0 and a reflecting barrier at i = n (if we replace "≤" with "=").^a

^a The proof in the textbook does exactly that. But a student pointed out difficulties with this proof technique on December 8, 2004. So our proof here uses the original inequalities.

## The Proof (continued)

• Add up the relations for 2t(1), 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain^a

  2t(1) + 2t(2) + · · · + 2t(n − 1) + t(n)
  ≤ t(0) + t(1) + 2t(2) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 1) + 1.

• Simplify it to yield

  t(1) ≤ 2n − 1.  (12)

^a Adding up the relations for t(1), t(2), t(3), . . . , t(n − 1) will also work, thanks to Mr. Yen-Wu Ti (D91922010).

## The Proof (continued)

• Add up the relations for 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain

  2t(2) + · · · + 2t(n − 1) + t(n)
  ≤ t(1) + t(2) + 2t(3) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 2) + 1.

• Simplify it to yield

  t(2) ≤ t(1) + 2n − 3 ≤ 2n − 1 + 2n − 3 = 4n − 4

by Eq. (12) on p. 495.

## The Proof (continued)

• Continuing the process, we shall obtain

  t(i) ≤ 2in − i^2.

• The worst upper bound happens when i = n, in which case

  t(n) ≤ n^2.

• We conclude that

  t(i) ≤ t(n) ≤ n^2

for 0 ≤ i ≤ n.
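A Monte Carlo sanity check of the bound t(i) ≤ 2in − i^2 (our own simulation of the equality walk: absorbing at i = 0, reflecting at i = n, otherwise moving closer or farther with probability 1/2 each):

```python
import random

def walk_steps(i, n):
    """Simulate the worst-case walk from Hamming distance i: move to
    i - 1 or i + 1 with probability 1/2 each, reflect at i = n,
    absorb at i = 0.  Returns the number of steps to absorption."""
    steps = 0
    while i > 0:
        steps += 1
        i = i - 1 if (i == n or random.random() < 0.5) else i + 1
    return steps

random.seed(0)
n, trials = 8, 20_000
for i in (1, 4, 8):
    avg = sum(walk_steps(i, n) for _ in range(trials)) / trials
    print(f"i={i}: average {avg:.1f}, bound 2*i*n - i^2 = {2*i*n - i*i}")
```

For the equality walk the bound is tight at i = n, where the expected absorption time is exactly n^2.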

## The Proof (concluded)

• So the expected number of steps is at most n^2.

• The algorithm picks r = 2n^2.

  – This amounts to invoking the Markov inequality (p. 484) with k = 2, resulting in a probability of 0.5.^a

• The proof does not yield a polynomial bound for 3sat.^b

^a Recall p. 486.

^b Contributed by Mr. Cheng-Yu Lee (R95922035) on November 8, 2006.

## Christos Papadimitriou (1949–)

## Boosting the Performance

• We can pick r = 2mn^2 to have an error probability of ≤ 1/(2m) by Markov's inequality.

• Alternatively, with the same running time, we can run the "r = 2n^2" algorithm m times.

• The error probability is now reduced to ≤ 2^{−m}.

## Primality Tests

• primes asks if a number N is a prime.

• The classic algorithm tests if k | N for k = 2, 3, . . . , √N.

• But it runs in Ω(2^{(log_2 N)/2}) steps.
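The classic algorithm, for reference (a straightforward sketch; the function name is ours):

```python
def is_prime_classic(N):
    """Trial division: test k | N for k = 2, 3, ..., up to sqrt(N).
    About sqrt(N) = 2^((log2 N)/2) steps -- exponential in the input
    length log2 N."""
    if N < 2:
        return False
    k = 2
    while k * k <= N:
        if N % k == 0:
            return False
        k += 1
    return True

print([n for n in range(2, 30) if is_prime_classic(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```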

## Primality Tests (concluded)

• Suppose N = PQ is a product of 2 distinct primes.

• The probability of success of the density attack (p. 434) is

  ≈ 2/√N

when P ≈ Q.

• This probability is exponentially small in terms of the input length log_2 N.

## The Fermat Test for Primality

Fermat's "little" theorem (p. 437) suggests the following primality test for any given number N:

1: Pick a number a randomly from {1, 2, . . . , N − 1};
2: **if** a^{N−1} ≠ 1 mod N **then**
3: **return** "N is composite";
4: **else**
5: **return** "N is a prime";
6: **end if**
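One round of the test in code (a sketch; the function name is ours, and `pow` with three arguments performs the modular exponentiation):

```python
import random

def fermat_test(N):
    """One round of the Fermat test for an odd N > 2.  'Composite'
    answers are always correct; 'prime' answers may be wrong."""
    a = random.randrange(1, N)
    if pow(a, N - 1, N) != 1:      # a^(N-1) != 1 mod N
        return "N is composite"
    return "N is a prime"

print(fermat_test(97))             # 97 is prime: always "N is a prime"
print(fermat_test(91))             # 91 = 7 * 13: usually caught
```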

## The Fermat Test for Primality (concluded)

• Carmichael numbers are composite numbers that pass the Fermat test for all a ∈ {1, 2, . . . , N − 1} with gcd(a, N) = 1.^a

  – For a Carmichael number N, the Fermat test returns "N is a prime" unless the sampled a happens to share a factor with N.

• Unfortunately, there are infinitely many Carmichael numbers.^b

• In fact, the number of Carmichael numbers less than N exceeds N^{2/7} for N large enough.

• So the Fermat test is an incorrect algorithm for primes.

^a Carmichael (1910).

^b Alford, Granville, and Pomerance (1992).

## Square Roots Modulo a Prime

• Equation x^2 = a mod p has at most two (distinct) roots by Lemma 57 (p. 442).

  – The roots are called square roots.

  – Numbers a with square roots and gcd(a, p) = 1 are called quadratic residues.

    ∗ They are 1^2 mod p, 2^2 mod p, . . . , (p − 1)^2 mod p.

• We shall show that a number either has two roots or has none, and testing which is the case is trivial.^a

^a But no efficient deterministic general-purpose square-root-extracting algorithms are known yet.

## Euler’s Test

**Lemma 62 (Euler)** Let p be an odd prime and a ≠ 0 mod p.

1. If a^{(p−1)/2} = 1 mod p, then x^2 = a mod p has two roots.

2. If a^{(p−1)/2} ≠ 1 mod p, then a^{(p−1)/2} = −1 mod p and x^2 = a mod p has no roots.
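Lemma 62 can be verified exhaustively for a small prime (our own check, not part of the proof):

```python
# Exhaustive check of Lemma 62 for the odd prime p = 13: for each
# a != 0 mod p, a^((p-1)/2) is +1 or -1 mod p, and it is +1 exactly
# when x^2 = a mod p has two roots.
p = 13
for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)
    roots = [x for x in range(p) if x * x % p == a]
    if euler == 1:
        assert len(roots) == 2
    else:
        assert euler == p - 1 and roots == []   # -1 mod p, no roots
print("Lemma 62 verified for p =", p)
```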

## The Proof (continued)

• Let r be a primitive root of p.

• By Fermat's "little" theorem, r^{(p−1)/2} is a square root of 1.

• So

  r^{(p−1)/2} = 1 or −1 mod p.

• But as r is a primitive root, r^{(p−1)/2} ≠ 1 mod p.

• Hence

  r^{(p−1)/2} = −1 mod p.

## The Proof (continued)

• Let a = r^k mod p for some k.

• Then

  1 = a^{(p−1)/2} = r^{k(p−1)/2} = [ r^{(p−1)/2} ]^k = (−1)^k mod p.

• So k must be even.

• Suppose a = r^{2j} for some 1 ≤ j ≤ (p − 1)/2.

• Then a^{(p−1)/2} = r^{j(p−1)} = 1 mod p, and a's two distinct roots are r^j and −r^j (= r^{j+(p−1)/2} mod p).

  – If r^j = −r^j mod p, then 2r^j = 0 mod p, which implies r^j = 0 mod p, a contradiction.

## The Proof (continued)

• As 1 ≤ j ≤ (p − 1)/2, there are (p − 1)/2 such a's.

• Each such a has 2 distinct square roots.

• The square roots of all the a's are distinct.

  – The square roots of different a's must be different.

• Hence the set of square roots is {1, 2, . . . , p − 1}.

• As a result, a = r^{2j}, 1 ≤ j ≤ (p − 1)/2, exhaust all the quadratic residues.

## The Proof (concluded)

• If a = r^{2j+1}, then it has no roots because all the square roots have been taken.

• Now,

  a^{(p−1)/2} = [ r^{(p−1)/2} ]^{2j+1} = (−1)^{2j+1} = −1 mod p.

## The Legendre Symbol^a and Quadratic Residuacity Test

• By Lemma 62 (p. 506), a^{(p−1)/2} mod p = ±1 for a ≠ 0 mod p.

• For odd prime p, define the Legendre symbol (a | p) as

  (a | p) = 0 if p | a,
  (a | p) = 1 if a is a quadratic residue modulo p,
  (a | p) = −1 if a is a quadratic nonresidue modulo p.

• Euler's test (p. 506) implies

  a^{(p−1)/2} = (a | p) mod p

for any odd prime p and any integer a.

• Note that (ab | p) = (a | p)(b | p).

^a Adrien-Marie Legendre (1752–1833).

## Gauss’s Lemma

**Lemma 63 (Gauss)** Let p and q be two odd primes. Then (q | p) = (−1)^m, where m is the number of residues in R = { iq mod p : 1 ≤ i ≤ (p − 1)/2 } that are greater than (p − 1)/2.

• All residues in R are distinct.

  – If iq = jq mod p, then p | (j − i) or p | q.

  – But neither is possible.

• No two elements of R add up to p.

  – If iq + jq = 0 mod p, then p | (i + j) or p | q.

  – But neither is possible.

## The Proof (continued)

• Replace each of the m elements a ∈ R such that a > (p − 1)/2 by p − a.

  – This is equivalent to performing −a mod p.

• Call the resulting set of residues R′.

• All numbers in R′ are at most (p − 1)/2.

• In fact, R′ = {1, 2, . . . , (p − 1)/2} (see illustration next page).

  – Otherwise, two elements of R would add up to p, which has been shown to be impossible.

[Figure: the residues R = {5, 3, 1} on the number line 1, . . . , 6, and the transformed set R′ = {1, 2, 3}, for p = 7 and q = 5.]

## The Proof (concluded)

• Alternatively, R′ = { ±iq mod p : 1 ≤ i ≤ (p − 1)/2 }, where exactly m of the elements have the minus sign.

• Take the product of all elements in the two representations of R′.

• So

  [(p − 1)/2]! = (−1)^m q^{(p−1)/2} [(p − 1)/2]! mod p.

• Because gcd([(p − 1)/2]!, p) = 1, the above implies

  1 = (−1)^m q^{(p−1)/2} mod p.
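Gauss's lemma can be checked numerically against Euler's test (our own sketch; `legendre` here is computed via Euler's criterion from Lemma 62):

```python
# Numerical check of Gauss's lemma: (q | p) = (-1)^m, where m counts
# the residues iq mod p, 1 <= i <= (p-1)/2, exceeding (p-1)/2.
def legendre(q, p):
    """Legendre symbol via Euler's criterion, for an odd prime p."""
    v = pow(q, (p - 1) // 2, p)
    return -1 if v == p - 1 else v

for p, q in [(7, 5), (11, 7), (13, 11)]:
    m = sum(1 for i in range(1, (p - 1) // 2 + 1)
            if i * q % p > (p - 1) // 2)
    assert (-1) ** m == legendre(q, p)
print("Gauss's lemma checked")
```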

## Legendre's Law of Quadratic Reciprocity^a

• Let p and q be two odd primes.

• The next result says their Legendre symbols are distinct if and only if both numbers are 3 mod 4.

**Lemma 64 (Legendre (1785), Gauss)**

  (p | q)(q | p) = (−1)^{[(p−1)/2][(q−1)/2]}.

^a First stated by Euler in 1751. Legendre (1785) did not give a correct proof. Gauss proved the theorem when he was 19. He gave at least 6 different proofs during his life. The 152nd proof appeared in 1963.

## The Proof (continued)

• Sum the elements of R′ in the previous proof in mod 2.

• On one hand, this is just ∑_{i=1}^{(p−1)/2} i mod 2.

• On the other hand, the sum equals

  mp + ∑_{i=1}^{(p−1)/2} ( iq − p ⌊iq/p⌋ ) mod 2
  = mp + q ∑_{i=1}^{(p−1)/2} i − p ∑_{i=1}^{(p−1)/2} ⌊iq/p⌋ mod 2.

  – m of the iq mod p are replaced by p − (iq mod p).

  – But signs are irrelevant under mod 2.

  – m is as in Lemma 63 (p. 512).

## The Proof (continued)

• Ignore odd multipliers to make the sum equal

  m + ∑_{i=1}^{(p−1)/2} i − ∑_{i=1}^{(p−1)/2} ⌊iq/p⌋ mod 2.

• Equate the above with ∑_{i=1}^{(p−1)/2} i mod 2 to obtain

  m = ∑_{i=1}^{(p−1)/2} ⌊iq/p⌋ mod 2.

## The Proof (concluded)

• ∑_{i=1}^{(p−1)/2} ⌊iq/p⌋ is the number of integral points below the line y = (q/p) x for 1 ≤ x ≤ (p − 1)/2.

• Gauss's lemma (p. 512) says (q | p) = (−1)^m.

• Repeat the proof with p and q reversed.

• Then (p | q) = (−1)^{m′}, where m′ is the number of integral points above the line y = (q/p) x for 1 ≤ y ≤ (q − 1)/2.

• As a result, (p | q)(q | p) = (−1)^{m+m′}.

• But m + m′ is the total number of integral points in the [1, (p − 1)/2] × [1, (q − 1)/2] rectangle, which is [(p − 1)/2] [(q − 1)/2].

## Eisenstein’s Rectangle

[Figure: the [1, (p − 1)/2] × [1, (q − 1)/2] rectangle with the line from the origin to (p, q). Above, p = 11 and q = 7.]

## The Jacobi Symbol^a

• The Legendre symbol only works for odd prime moduli.

• The Jacobi symbol (a | m) extends it to cases where m is not prime.

• Let m = p_1 p_2 · · · p_k be the prime factorization of m.

• When m > 1 is odd and gcd(a, m) = 1, then

  (a | m) = ∏_{i=1}^{k} (a | p_i).

  – Note that the Jacobi symbol equals ±1.

  – It reduces to the Legendre symbol when m is a prime.

• Define (a | 1) = 1.

^a Carl Jacobi (1804–1851).

## Properties of the Jacobi Symbol

The Jacobi symbol has the following properties, for arguments for which it is defined.

1. (ab | m) = (a | m)(b | m).

2. (a | m_1 m_2) = (a | m_1)(a | m_2).

3. If a = b mod m, then (a | m) = (b | m).

4. (−1 | m) = (−1)^{(m−1)/2} (by Lemma 63 on p. 512).

5. (2 | m) = (−1)^{(m^2−1)/8}.^a

6. If a and m are both odd, then (a | m)(m | a) = (−1)^{(a−1)(m−1)/4}.

^a By Lemma 63 (p. 512) and some parity arguments.

## Properties of the Jacobi Symbol (concluded)

• These properties allow us to calculate the Jacobi symbol without factorization.

• This situation is similar to the Euclidean algorithm.

• Note also that (a | m) = 1/(a | m) because (a | m) = ±1.^a

^a Contributed by Mr. Huang, Kuan-Lin (B96902079, R00922018) on December 6, 2011.

## Calculation of (2200 | 999)

(2200 | 999) = (202 | 999) = (2 | 999)(101 | 999)
  = (−1)^{(999^2−1)/8} (101 | 999)
  = (−1)^{124750} (101 | 999) = (101 | 999)
  = (−1)^{(100)(998)/4} (999 | 101) = (−1)^{24950} (999 | 101)
  = (999 | 101) = (90 | 101) = (−1)^{(101^2−1)/8} (45 | 101)
  = (−1)^{1275} (45 | 101) = −(45 | 101)
  = −(−1)^{(44)(100)/4} (101 | 45) = −(101 | 45) = −(11 | 45)
  = −(−1)^{(10)(44)/4} (45 | 11) = −(45 | 11)
  = −(1 | 11) = −1.
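The same computation can be automated. The following sketch (the function name `jacobi` is ours) is the standard binary-Jacobi procedure built from properties 3, 5, and 6:

```python
def jacobi(a, m):
    """Jacobi symbol (a | m) for odd m > 0, computed with properties
    3, 5, and 6 above -- no factorization of m is needed."""
    assert m > 0 and m % 2 == 1
    a %= m                                # property 3
    result = 1
    while a != 0:
        while a % 2 == 0:                 # pull out factors of 2:
            a //= 2                       # property 5 gives (2 | m)
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a                       # property 6: reciprocity;
        if a % 4 == 3 and m % 4 == 3:     # the sign flips iff both
            result = -result              # arguments are 3 mod 4
        a %= m
    return result if m == 1 else 0        # 0 when gcd(a, m) > 1

print(jacobi(2200, 999))   # -1
```

Like the Euclidean algorithm, each swap-and-reduce step shrinks the arguments, so the running time is polynomial in the input length.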

## A Result Generalizing Proposition 10.3 in the Textbook

**Theorem 65** The group Φ(n) under multiplication mod n has a primitive root if and only if n is either 1, 2, 4, p^k, or 2p^k for some nonnegative integer k and an odd prime p.

This result is essential in the proof of the next lemma.

## The Jacobi Symbol and Primality Test^a

**Lemma 66** If (M | N) = M^{(N−1)/2} mod N for all M ∈ Φ(N), then N is a prime. (Assume N is odd.)

• Assume N = mp, where p is an odd prime, gcd(m, p) = 1, and m > 1 (not necessarily prime).

• Let r ∈ Φ(p) such that (r | p) = −1.

• The Chinese remainder theorem says that there is an M ∈ Φ(N) such that

  M = r mod p,
  M = 1 mod m.

^a Mr. Clement Hsiao (B4506061, R88526067) pointed out that the textbook's proof for Lemma 11.8 is incorrect in January 1999 while he was a senior.

## The Proof (continued)

• By the hypothesis,

  M^{(N−1)/2} = (M | N) = (M | p)(M | m) = −1 mod N.

• Hence

  M^{(N−1)/2} = −1 mod m.

• But because M = 1 mod m,

  M^{(N−1)/2} = 1 mod m,

a contradiction.

## The Proof (continued)

• Second, assume that N = p^a, where p is an odd prime and a ≥ 2.

• By Theorem 65 (p. 525), there exists a primitive root r modulo p^a.

• From the assumption,

  M^{N−1} = [ M^{(N−1)/2} ]^2 = (M | N)^2 = 1 mod N

for all M ∈ Φ(N).

## The Proof (continued)

• As r ∈ Φ(N) (prove it), we have

  r^{N−1} = 1 mod N.

• As r's exponent modulo N = p^a is φ(N) = p^{a−1}(p − 1),

  p^{a−1}(p − 1) | (N − 1),

which implies that p | (N − 1).

• But this is impossible given that p | N.

## The Proof (continued)

• Third, assume that N = mp^a, where p is an odd prime, gcd(m, p) = 1, m > 1 (not necessarily prime), and a ≥ 2.

• The proof mimics that of the second case.

• By Theorem 65 (p. 525), there exists a primitive root r modulo p^a.

• From the assumption,

  M^{N−1} = [ M^{(N−1)/2} ]^2 = (M | N)^2 = 1 mod N

for all M ∈ Φ(N).

## The Proof (continued)

• In particular,

  M^{N−1} = 1 mod p^a  (13)

for all M ∈ Φ(N).

• The Chinese remainder theorem says that there is an M ∈ Φ(N) such that

  M = r mod p^a,
  M = 1 mod m.

• Because M = r mod p^a and Eq. (13),

  r^{N−1} = 1 mod p^a.

## The Proof (concluded)

• As r's exponent modulo p^a is φ(p^a) = p^{a−1}(p − 1),

  p^{a−1}(p − 1) | (N − 1),

which implies that p | (N − 1).

• But this is impossible given that p | N.