### Function Problems

• Decision problems are yes/no problems (sat, tsp (d), etc.).

• Function problems require a solution (a satisfying truth assignment, a best tsp tour, etc.).

• Optimization problems are clearly function problems.

• What is the relation between function and decision problems?

• Which one is harder?

### Function Problems Cannot Be Easier than Decision Problems

• If we know how to generate a solution, we can solve the corresponding decision problem.

  – If you can find a satisfying truth assignment efficiently, then sat is in P.

  – If you can find the best tsp tour efficiently, then tsp (d) is in P.

• But we shall see immediately that decision problems can be as hard as the corresponding function problems.

### fsat

• fsat is this function problem:

  – Let φ(x_1, x_2, . . . , x_n) be a boolean expression.

  – If φ is satisfiable, then return a satisfying truth assignment.

  – Otherwise, return “no.”

• We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.

• sat is a subroutine (black box) that returns “yes” or “no” on the satisfiability of the input.

### An Algorithm for fsat Using sat

1: t := ∅; {Truth assignment.}
2: if φ ∈ sat then
3:   for i = 1, 2, . . . , n do
4:     if φ[x_i = true] ∈ sat then
5:       t := t ∪ {x_i = true};
6:       φ := φ[x_i = true];
7:     else
8:       t := t ∪ {x_i = false};
9:       φ := φ[x_i = false];
10:    end if
11:  end for
12:  return t;
13: else
14:  return “no”;
15: end if
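The self-reduction above translates directly into code. Below is a minimal Python sketch; the clause encoding and the names sat_oracle, assign, and fsat are illustrative (not from the slides), and the brute-force sat_oracle merely stands in for the polynomial-time sat decider assumed by the slides.

```python
from itertools import product

def sat_oracle(clauses, n):
    """Black-box 'sat' subroutine: is the CNF formula satisfiable?
    (Brute force over all 2^n assignments, only to keep the sketch self-contained.)"""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def assign(clauses, var, value):
    """Substitute x_var = value: drop satisfied clauses, shrink the rest."""
    hit = var if value else -var
    return [[l for l in c if abs(l) != var] for c in clauses if hit not in c]

def fsat(clauses, n):
    """Recover a satisfying truth assignment using only yes/no calls to sat_oracle."""
    if not sat_oracle(clauses, n):
        return "no"
    t = {}
    for i in range(1, n + 1):
        value = sat_oracle(assign(clauses, i, True), n)   # keep x_i = true if still satisfiable
        t[i] = value
        clauses = assign(clauses, i, value)
    return t

# Example: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3) -> {1: True, 2: True, 3: False}.
print(fsat([[1, -2], [2, 3], [-1, -3]], 3))
```

Note that fsat makes exactly 1 + n oracle calls, matching the ≤ n + 1 count in the analysis that follows.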

### Analysis

• If sat can be solved in polynomial time, so can fsat.

  – There are ≤ n + 1 calls to the algorithm for sat.^{a}

  – Boolean expressions shorter than φ are used in each call to the algorithm for sat.

• Hence sat and fsat are equally hard (or easy).

^{a} Contributed by Ms. Eva Ou (R93922132) on November 24, 2004.

### Analysis (concluded)

• Note that this reduction from fsat to sat is not a Karp reduction.^{a}

  – Will the set of NP-complete problems differ under different reductions?^{b}

• Instead, it calls sat multiple times as a subroutine, and its answers guide the search on the computation tree.

^{a} Recall p. 275 and p. 280.

^{b} Contributed by Mr. Yu-Ming Lu (R06723032, D08922008) and Mr. Han-Ting Chen (R10922073) on December 9, 2021.

### tsp and tsp (d) Revisited

• We are given n cities 1, 2, . . . , n and integer distances d_{ij} = d_{ji} between any two cities i and j.

• tsp (d) asks if there is a tour with a total distance at most B.

• tsp asks for a tour with the shortest total distance.

  – The shortest total distance is at most Σ_{i,j} d_{ij}.

    ∗ Recall that the input string contains d_{11}, . . . , d_{nn}.

• Thus the shortest total distance is less than 2^{| x |} in magnitude, where x is the input (why?).

• We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.

### An Algorithm for tsp Using tsp (d)

1: Perform a binary search over interval [0, 2^{| x |}] by calling tsp (d) to obtain the shortest distance, C;
2: for i, j = 1, 2, . . . , n do
3:   Call tsp (d) with B = C and d_{ij} = C + 1;
4:   if “no” then
5:     Restore d_{ij} to its old value; {Edge [i, j] is critical.}
6:   end if
7: end for
8: return the tour with edges whose d_{ij} ≤ C;
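Here is a small Python sketch of this oracle reduction. The names tsp_d and tsp are illustrative, the brute-force tsp_d only stands in for the assumed polynomial-time decider, and the binary search uses Σ_{i,j} d_{ij} as its upper bound (which is within the 2^{| x |} bound above).

```python
from itertools import permutations

def tsp_d(d, B):
    """Black-box 'tsp (d)' subroutine: is there a tour of total distance <= B?
    (Brute force over all tours, only to keep the sketch self-contained.)"""
    n = len(d)
    return any(sum(d[t[i]][t[(i + 1) % n]] for i in range(n)) <= B
               for t in permutations(range(n)))

def tsp(d):
    """Recover an optimal tour using only yes/no calls to tsp_d."""
    n = len(d)
    lo, hi = 0, sum(map(sum, d))                 # binary search for the optimum C
    while lo < hi:
        mid = (lo + hi) // 2
        lo, hi = (lo, mid) if tsp_d(d, mid) else (mid + 1, hi)
    C = lo
    d = [row[:] for row in d]                    # work on a copy
    for i in range(n):                           # try to price each edge out
        for j in range(i + 1, n):                # (distances are symmetric)
            old = d[i][j]
            d[i][j] = d[j][i] = C + 1
            if not tsp_d(d, C):                  # edge [i, j] is critical: restore it
                d[i][j] = d[j][i] = old
    # The surviving edges with d_ij <= C form the optimal tour.
    return C, [(i, j) for i in range(n) for j in range(i + 1, n) if d[i][j] <= C]

d = [[0, 2, 9, 5],
     [2, 0, 4, 7],
     [9, 4, 0, 3],
     [5, 7, 3, 0]]
print(tsp(d))   # (14, [(0, 1), (0, 3), (1, 2), (2, 3)])
```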

### Analysis

• An edge which is not on any remaining optimal tours will be eliminated, with its d_{ij} set to C + 1.

• So the algorithm ends with n edges which are not eliminated (why?).

• This is true even if there are multiple optimal tours!^{a}

^{a} Thanks to a lively class discussion on November 12, 2013 and December 9, 2021.

### Analysis (concluded)

• There are O(| x | + n^2) calls to the algorithm for tsp (d).

• Each call has an input length of O(| x |).

• So if tsp (d) can be solved in polynomial time, so can tsp.

• Hence tsp (d) and tsp are equally hard (or easy).^{a}

^{a} How about counting the number of optimal tsp tours? This is related to #P-completeness (p. 874). Contributed by Mr. Vincent Hwang (R10922138) on December 9, 2021.

*Randomized Computation*

I know that half my advertising works, I just don’t know which half.
— John Wanamaker

I know that half my advertising is a waste of money, I just don’t know which half!
— McGraw-Hill ad.

### Randomized Algorithms^{a}

• Randomized algorithms flip unbiased coins.

• There are important problems for which there are no known efficient deterministic algorithms but for which very efficient randomized algorithms exist.

  – Extraction of square roots, for instance.

• There are problems where randomization is necessary.

  – Secure protocols.

• Randomized versions can be more efficient.

  – Parallel algorithms for maximal independent set.^{b}

### Randomized Algorithms (concluded)

• Are randomized algorithms algorithms?^{a}

• Coin flips are occasionally used in politics.^{b}

^{a} Pascal, “Truth is so delicate that one has only to depart the least bit from it to fall into error.”

^{b} In the 2016 Iowa Democratic caucuses, e.g. (see http://edition.cnn.com/2016/02/02/politics/hillary-clinton-coin-flip-iowa-bernie-sanders/index.html).

### “Four Most Important Randomized Algorithms”^{a}

1. Primality testing.^{b}

2. Graph connectivity using random walks.^{c}

3. Polynomial identity testing.^{d}

4. Algorithms for approximate counting.^{e}

^{a} Trevisan (2006).

^{b} Rabin (1976); Solovay & Strassen (1977).

^{c} Aleliunas, Karp, Lipton, Lovász, & Rackoff (1979).

^{d} Schwartz (1980); Zippel (1979).

^{e} Sinclair & Jerrum (1989).

### Bipartite Perfect Matching

• We are given a bipartite graph G = (U, V, E).

  – U = {u_1, u_2, . . . , u_n}.

  – V = {v_1, v_2, . . . , v_n}.

  – E ⊆ U × V.

• We are asked if there is a perfect matching.

  – A permutation π of {1, 2, . . . , n} such that (u_i, v_{π(i)}) ∈ E for all i ∈ {1, 2, . . . , n}.

• A perfect matching contains n edges.
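Because a perfect matching is exactly a permutation π with (u_i, v_{π(i)}) ∈ E for all i, the naive decision procedure enumerates all n! permutations. The Python sketch below is illustrative only; the edge list is the graph behind the example matrix A^G on the coming slides. The symbolic determinant will encode the same exhaustive enumeration algebraically.

```python
from itertools import permutations

def has_perfect_matching(n, edges):
    """Brute force: try every permutation pi of {0, ..., n-1} and check
    whether (u_i, v_pi(i)) is an edge for every i.  About n! * n work."""
    E = set(edges)
    return any(all((i, pi[i]) in E for i in range(n))
               for pi in permutations(range(n)))

# 0-indexed edges of the 5 + 5 example graph (the nonzero positions of A^G below).
edges = [(0, 2), (0, 3), (1, 1), (2, 0), (2, 4),
         (3, 0), (3, 2), (3, 3), (4, 0), (4, 4)]
print(has_perfect_matching(5, edges))   # True, e.g. pi = (3, 1, 4, 2, 0)
```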

### A Perfect Matching in a Bipartite Graph

(Figure: a bipartite graph with a perfect matching.)

### Symbolic Determinants

• We are given a bipartite graph G.

• Construct the n × n matrix A^G whose (i, j)th entry A^G_{ij} is a symbolic variable x_{ij} if (u_i, v_j) ∈ E and 0 otherwise:

  A^G_{ij} =
  \begin{cases}
  x_{ij}, & \text{if } (u_i, v_j) \in E, \\
  0, & \text{otherwise.}
  \end{cases}

### Symbolic Determinants (continued)

• The matrix for the bipartite graph G on p. 533 is^{a}

  A^G =
  \begin{bmatrix}
  0      & 0      & x_{13} & x_{14} & 0      \\
  0      & x_{22} & 0      & 0      & 0      \\
  x_{31} & 0      & 0      & 0      & x_{35} \\
  x_{41} & 0      & x_{43} & x_{44} & 0      \\
  x_{51} & 0      & 0      & 0      & x_{55}
  \end{bmatrix}.   (8)

^{a} The idea is similar to the Tanner (1981) graph in coding theory.

### Symbolic Determinants (concluded)

• The determinant of A^G is

  det(A^G) = Σ_π sgn(π) Π_{i=1}^{n} A^G_{i,π(i)}.   (9)

  – π ranges over all permutations of n elements.

  – sgn(π) is 1 if π is the product of an even number of transpositions and −1 otherwise.^{a}

• det(A^G) contains n! terms, many of which may be 0s.

^{a} Equivalently, sgn(π) = 1 if the number of (i, j)s such that i < j and π(i) > π(j) is even. Contributed by Mr. Hwan-Jeu Yu (D95922028) on May 1, 2008.

### Determinant and Bipartite Perfect Matching

• In Σ_π sgn(π) Π_{i=1}^{n} A^G_{i,π(i)}, note the following:

  – Each summand corresponds to a possible perfect matching π.

  – Nonzero summands Π_{i=1}^{n} A^G_{i,π(i)} are distinct monomials and will not cancel.

• det(A^G) is essentially an exhaustive enumeration.

Proposition 65 (Edmonds, 1967) G has a perfect matching if and only if det(A^G) is not identically zero.

### Perfect Matching and Determinant (p. 533)

(Figure: the bipartite graph of p. 533, shown with a perfect matching.)

### Perfect Matching and Determinant (concluded)

• The matrix is (p. 535)

  A^G =
  \begin{bmatrix}
  0      & 0      & x_{13} & x_{14} & 0      \\
  0      & x_{22} & 0      & 0      & 0      \\
  x_{31} & 0      & 0      & 0      & x_{35} \\
  x_{41} & 0      & x_{43} & x_{44} & 0      \\
  x_{51} & 0      & 0      & 0      & x_{55}
  \end{bmatrix}.

• det(A^G) = −x_{14} x_{22} x_{35} x_{43} x_{51} + x_{13} x_{22} x_{35} x_{44} x_{51} + x_{14} x_{22} x_{31} x_{43} x_{55} − x_{13} x_{22} x_{31} x_{44} x_{55}.
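As a quick machine check of this expansion (a sketch assuming the sympy library is available; it is not part of the slides), one can build A^G symbolically and let the computer expand det(A^G):

```python
from sympy import Matrix, symbols

# Symbolic variables for the edges of the graph on p. 533.
x13, x14, x22, x31, x35, x41, x43, x44, x51, x55 = symbols(
    'x13 x14 x22 x31 x35 x41 x43 x44 x51 x55')

A = Matrix([
    [0,   0,   x13, x14, 0  ],
    [0,   x22, 0,   0,   0  ],
    [x31, 0,   0,   0,   x35],
    [x41, 0,   x43, x44, 0  ],
    [x51, 0,   0,   0,   x55],
])

# Each nonzero monomial of det(A) is one perfect matching of the graph;
# the output is the four-term expansion above (possibly in a different term order).
print(A.det())
```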

### How To Test If a Polynomial Is Identically Zero?

• det(A^G) is a polynomial in n^2 variables.

• It has, potentially, exponentially many terms.

• Expanding the determinant polynomial is thus infeasible.

• If det(A^G) ≡ 0, then it remains zero if we substitute arbitrary integers for the variables x_{11}, . . . , x_{nn}.

• When det(A^G) ≢ 0, what is the likelihood of obtaining a zero?

### Number of Roots of a Polynomial

Lemma 66 (Schwartz, 1980) Let p(x_1, x_2, . . . , x_m) ≢ 0 be a polynomial in m variables each of degree at most d. Let M ∈ Z^+. Then the number of m-tuples (x_1, x_2, . . . , x_m) ∈ {0, 1, . . . , M − 1}^m such that p(x_1, x_2, . . . , x_m) = 0 is

  ≤ mdM^{m−1}.

• By induction on m (consult the textbook).

### Density Attack

• The density of roots in the domain is at most

  mdM^{m−1} / M^m = md / M.   (10)

• So suppose p(x_1, x_2, . . . , x_m) ≢ 0.

• Then a random (x_1, x_2, . . . , x_m) ∈ {0, 1, . . . , M − 1}^m has a probability of ≤ md/M of being a root of p.

• Note that M is under our control!

  – One can raise M to lower the error probability, e.g.

### Density Attack (concluded)

Here is a sampling algorithm to test if p(x_1, x_2, . . . , x_m) ≡ 0.

1: Choose i_1, . . . , i_m from {0, 1, . . . , M − 1} randomly;
2: if p(i_1, i_2, . . . , i_m) ≠ 0 then
3:   return “p is not identically zero”;
4: else
5:   return “p is (probably) identically zero”;
6: end if
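A minimal Python sketch of this sampling test follows (illustrative; the polynomial is given as a black-box callable, and M is chosen from Eq. (10) to meet a target error bound):

```python
import random

def probably_identically_zero(p, m, d, error=2**-20):
    """Evaluate p at one random point of {0, ..., M-1}^m, with M large
    enough that a nonzero p is missed with probability <= md/M <= error."""
    M = int(m * d / error) + 1
    point = [random.randrange(M) for _ in range(m)]
    return p(*point) == 0        # True  => "p is (probably) identically zero"
                                 # False => "p is not identically zero"

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero ...
zero_poly = lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2)
# ... while (x + y)^2 - (x^2 + y^2) = 2xy is not.
nonzero_poly = lambda x, y: (x + y)**2 - (x**2 + y**2)

print(probably_identically_zero(zero_poly, m=2, d=2))      # True
print(probably_identically_zero(nonzero_poly, m=2, d=2))   # False with high probability
```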

### Analysis

• If p(x_1, x_2, . . . , x_m) ≡ 0, the algorithm will always be correct as p(i_1, i_2, . . . , i_m) = 0.

• Suppose p(x_1, x_2, . . . , x_m) ≢ 0.

  – The algorithm will answer incorrectly with probability at most md/M by Eq. (10) on p. 542.

• We next return to the original problem of bipartite perfect matching.

### A Randomized Bipartite Perfect Matching Algorithm^{a}

1: Choose n^2 integers i_{11}, . . . , i_{nn} from {0, 1, . . . , 2n^2 − 1} randomly; {So M = 2n^2.}
2: Calculate det(A^G(i_{11}, . . . , i_{nn})) by Gaussian elimination;
3: if det(A^G(i_{11}, . . . , i_{nn})) ≠ 0 then
4:   return “G has a perfect matching”;
5: else
6:   return “G has (probably) no perfect matchings”;
7: end if

^{a} Lovász (1979). According to Paul Erdős, Lovász wrote his first significant paper “at the ripe old age of 17.”
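A Python sketch of the algorithm is given below (names illustrative). Exact rational Gaussian elimination stands in for the slides’ Gaussian-elimination step so that the determinant of the random integer matrix is computed without round-off.

```python
import random
from fractions import Fraction

def det_exact(M):
    """Determinant of an integer matrix by Gaussian elimination over the
    rationals (exact, polynomial time)."""
    A = [[Fraction(v) for v in row] for row in M]
    n, det = len(A), Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if A[r][c] != 0), None)
        if pivot is None:
            return 0                        # a zero column: singular matrix
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            det = -det                      # a row swap flips the sign
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return int(det)

def probably_has_perfect_matching(n, edges):
    """Substitute random integers from {0, ..., 2n^2 - 1} (so M = 2n^2,
    m = n^2, d = 1) for the symbolic x_ij and test det != 0.
    A 'yes' answer is always correct; a 'no' answer errs with probability <= 1/2."""
    E = set(edges)
    A = [[random.randrange(2 * n * n) if (i, j) in E else 0 for j in range(n)]
         for i in range(n)]
    return det_exact(A) != 0

# The example graph of p. 533 again (0-indexed edges).
edges = [(0, 2), (0, 3), (1, 1), (2, 0), (2, 4),
         (3, 0), (3, 2), (3, 3), (4, 0), (4, 4)]
print(probably_has_perfect_matching(5, edges))   # True with probability >= 1/2
```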

### Analysis

• If G has no perfect matchings, the algorithm will always be correct as det(A^G(i_{11}, . . . , i_{nn})) = 0.

• Suppose G has a perfect matching.

  – The algorithm will answer incorrectly with probability at most md/M = 0.5 with m = n^2, d = 1, and M = 2n^2 in Eq. (10) on p. 542.

• Run the algorithm independently k times.

• Output “G has no perfect matchings” if and only if all say “(probably) no perfect matchings.”

• The error probability is now reduced to at most 2^{−k}.

### László Lovász (1948–)

### Remarks^{a}

• Note that we calculated

  prob[ algorithm answers “no” | G has no perfect matchings ],
  prob[ algorithm answers “yes” | G has a perfect matching ].

  – And they are 1 and ≥ 1/2, respectively.

• We did not calculate^{b}

  prob[ G has no perfect matchings | algorithm answers “no” ],
  prob[ G has a perfect matching | algorithm answers “yes” ].

^{a} Thanks to a lively class discussion on May 1, 2008.

^{b} Numerical Recipes in C (1988), “statistics is not a branch of mathematics!” Similar issues arise in MAP (maximum a posteriori) estimates.

### But How Large Can det(A^G(i_{11}, . . . , i_{nn})) Be?

• It is at most^{a}

  n! (2n^2)^n.

• Stirling’s formula says n! ∼ √(2πn) (n/e)^n.

• Hence

  log_2 det(A^G(i_{11}, . . . , i_{nn})) = O(n log_2 n)

  bits are sufficient for representing the determinant.

• We skip the details about how to make sure that all intermediate results are of polynomial size.

### An Intriguing Question^{a}

• Is there an (i_{11}, . . . , i_{nn}) that will always give correct answers for the algorithm on p. 545?

• A theorem on p. 642 shows that such an (i_{11}, . . . , i_{nn}) exists!

  – Whether it can be found efficiently is another matter.

• Once (i_{11}, . . . , i_{nn}) is available, the algorithm can be made deterministic.

  – Is it an algorithm for bipartite perfect matching?^{b}

^{a} Thanks to a lively class discussion on November 24, 2004.

^{b} We have one algorithm for each n — unless there is an algorithm to generate such (i_{11}, . . . , i_{nn}) for all n. Contributed by Mr. Han-Ting Chen (R10922073).

### Randomization vs. Nondeterminism^{a}

• What are the differences between randomized algorithms and nondeterministic algorithms?

• Think of a randomized algorithm as a nondeterministic one but with a probability associated with every guess/branch.

• So each computation path of a randomized algorithm has a probability associated with it.

^{a} Contributed by Mr. Olivier Valery (D01922033) and Mr. Hasan Alhasan (D01922034) on November 27, 2012.

### Monte Carlo Algorithms^{a}

• The randomized bipartite perfect matching algorithm is called a Monte Carlo algorithm in the sense that

  – If the algorithm finds that a matching exists, it is always correct (no false positives; no type I errors).

  – If the algorithm answers in the negative, then it may make an error (false negatives; type II errors).

    ∗ And the error probability must be small.

^{a} Metropolis & Ulam (1949).

### Monte Carlo Algorithms (continued)

• The algorithm makes a false negative with probability ≤ 0.5.^{a}

• Again, this probability refers to^{b}

  prob[ algorithm answers “no” | G has a perfect matching ],

  not

  prob[ G has a perfect matching | algorithm answers “no” ].

^{a} Equivalently, among the coin flip sequences, at most half of them lead to the wrong answer.
### Monte Carlo Algorithms (concluded)

• This probability 0.5 is not over the space of all graphs or determinants, but over the algorithm’s own coin flips.

  – It holds for any bipartite graph.

• In contrast, to calculate

  prob[ G has a perfect matching | algorithm answers “no” ],

  we will need the distribution of G.

• But it is an empirical statement that is very hard to verify.

### The Markov Inequality^{a}

Lemma 67 Let x be a random variable taking nonnegative integer values. Then for any k > 0,

  prob[ x ≥ kE[ x ] ] ≤ 1/k.

• Let p_i denote the probability that x = i. Then

  E[ x ] = Σ_i i p_i = Σ_{i < kE[x]} i p_i + Σ_{i ≥ kE[x]} i p_i
         ≥ Σ_{i ≥ kE[x]} i p_i ≥ kE[ x ] Σ_{i ≥ kE[x]} p_i
         ≥ kE[ x ] × prob[ x ≥ kE[ x ] ].

### Andrei Andreyevich Markov (1856–1922)

*fsat for k-sat Formulas (p. 519)*

• Let φ(x_1, x_2, . . . , x_n) be a k-sat formula.

• If φ is satisfiable, then return a satisfying truth assignment.

• Otherwise, return “no.”

• We next propose a randomized algorithm for this problem.

### A Random Walk Algorithm for *φ in CNF Form*

1: Start with an arbitrary truth assignment T;
2: for i = 1, 2, . . . , r do
3:   if T |= φ then
4:     return “φ is satisfiable with T”;
5:   else
6:     Let c be an unsatisfied clause in φ under T; {All of its literals are false under T.}
7:     Pick any x of these literals at random;
8:     Modify T to make x true;
9:   end if
10: end for
11: return “φ is unsatisfiable”;
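A compact Python sketch of this random walk follows (the clause encoding and function name are illustrative); with r = 2n^2 it is exactly the setting of Theorem 68 below.

```python
import random

def random_walk_sat(clauses, n, r):
    """Random walk on truth assignments.  Clauses are lists of literals
    (+i means x_i, -i means NOT x_i); T maps each variable to a bool."""
    T = {i: False for i in range(1, n + 1)}            # an arbitrary start
    for _ in range(r):
        unsat = [c for c in clauses
                 if not any(T[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return T                                   # T |= phi
        c = random.choice(unsat)                       # an unsatisfied clause
        l = random.choice(c)                           # pick one of its literals ...
        T[abs(l)] = (l > 0)                            # ... and flip T to make it true
    return "unsatisfiable"                             # only probably correct

# A satisfiable 2sat instance: (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3).
n = 3
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(random_walk_sat(clauses, n, r=2 * n * n))        # a satisfying T with prob. >= 0.5
```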

### 3sat vs. 2sat Again

• Note that if φ is unsatisfiable, the algorithm will answer “unsatisfiable.”

• The random walk algorithm needs expected exponential time for 3sat.

  – In fact, it runs in expected O((1.333 · · · + ε)^n) time with r = 3n,^{a} much better than O(2^n).^{b}

• We will show immediately that it works well for 2sat.

• The state of the art as of 2014 is expected O(1.30704^n) time for 3sat and expected O(1.46899^n) time for 4sat.^{c}

^{a} Use this setting per run of the algorithm.

### Random Walk Works for 2sat^{a}

Theorem 68 Suppose the random walk algorithm with r = 2n^2 is applied to any satisfiable 2sat problem with n variables. Then a satisfying truth assignment will be discovered with probability at least 0.5.

• Let T̂ be a truth assignment such that T̂ |= φ.

• Assume our starting T differs from T̂ in i values.

  – Their Hamming distance is i.

• Recall T is arbitrary.

^{a} Papadimitriou (1991).

### The Proof

• Let t(i) denote the expected number of repetitions of the flipping step^{a} until a satisfying truth assignment is found.

• It can be shown that t(i) is finite.

• t(0) = 0 because it means that T = T̂ and hence T |= φ.

• If T is not T̂ or any other satisfying truth assignment, then we need to flip the coin at least once.

• We flip a coin to pick among the 2 literals of a clause not satisfied by the present T.

• At least one of the 2 literals is true under T̂ because T̂ |= φ.
### The Proof (continued)

• So we have at least a 50% chance of moving closer to T̂.

• Thus

  t(i) ≤ [ t(i − 1) + t(i + 1) ] / 2 + 1

  for 0 < i < n.

  – Inequality is used because, for example, T may differ from T̂ in both literals.

• It must also hold that

  t(n) ≤ t(n − 1) + 1

  because at i = n, we can only decrease i.

### The Proof (continued)

• Now, put the necessary relations together:

  t(0) = 0,   (11)
  t(i) ≤ [ t(i − 1) + t(i + 1) ] / 2 + 1,   0 < i < n,   (12)
  t(n) ≤ t(n − 1) + 1.   (13)

• Technically, this is a one-dimensional random walk with an absorbing barrier at i = 0 and a reflecting barrier at i = n (if we replace “≤” with “=”).^{a}

^{a} The proof in the textbook does exactly that. But a student pointed

### The Proof (continued)

• Add up the relations for 2t(1), 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain^{a}

  2t(1) + 2t(2) + · · · + 2t(n − 1) + t(n)
  ≤ t(0) + t(1) + 2t(2) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 1) + 1.

• Simplify it to yield

  t(1) ≤ 2n − 1.   (14)

^{a} Adding up the relations for t(1), t(2), t(3), . . . , t(n − 1) will also work, thanks to Mr. Yen-Wu Ti (D91922010).

### The Proof (continued)

• Add up the relations for 2t(2), 2t(3), . . . , 2t(n − 1), t(n) to obtain

  2t(2) + · · · + 2t(n − 1) + t(n)
  ≤ t(1) + t(2) + 2t(3) + · · · + 2t(n − 2) + 2t(n − 1) + t(n) + 2(n − 2) + 1.

• Simplify it to yield

  t(2) ≤ t(1) + 2n − 3 ≤ 2n − 1 + 2n − 3 = 4n − 4.

### The Proof (continued)

• Continuing the process, we shall obtain^{a}

  t(i) ≤ 2in − i^2.

• The worst upper bound happens when i = n, in which case

  t(n) ≤ n^2.

• We conclude that

  t(i) ≤ t(n) ≤ n^2

  for 0 ≤ i ≤ n.
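For completeness, here is the summation behind that bound (a short derivation in the spirit of the preceding steps, not itself on the slides): the pattern t(1) ≤ 2n − 1, t(2) ≤ t(1) + 2n − 3, . . . continues as t(i) ≤ t(i − 1) + 2n − (2i − 1), so

  t(i) ≤ Σ_{j=1}^{i} [ 2n − (2j − 1) ] = 2in − i^2,

since Σ_{j=1}^{i} (2j − 1) = i^2.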

### The Proof (concluded)

• So the expected number of steps is at most n^2.

• The algorithm picks r = 2n^2.

• Apply the Markov inequality (p. 555) with k = 2 to yield the desired probability of 0.5.

• The proof does not yield a polynomial bound for 3sat.^{a}

^{a} Contributed by Mr. Cheng-Yu Lee (R95922035) on November 8, 2006.

### Boosting the Performance

• We can pick r = 2mn^2 to have an error probability of ≤ 1/(2m) by Markov’s inequality.

• Alternatively, with the same running time, we can run the “r = 2n^2” algorithm m times.

• The error probability is now reduced to ≤ 2^{−m}.

### Primality Tests

• primes asks if a number N is a prime.

• The classic algorithm tests if k | N for k = 2, 3, . . . , √N.

• But it runs in Ω(2^{(log_2 N)/2}) steps.

• compositeness asks if a number is composite.
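A sketch of the classic test in Python (illustrative): the loop runs about √N = 2^{(log_2 N)/2} times, which is exponential in the input length log_2 N, hence the bound above.

```python
def is_prime_trial_division(N):
    """Classic test: does any k in {2, ..., floor(sqrt(N))} divide N?"""
    if N < 2:
        return False
    k = 2
    while k * k <= N:          # about sqrt(N) iterations
        if N % k == 0:
            return False       # k | N, so N is composite
        k += 1
    return True

print(is_prime_trial_division(104729))   # True  (104729 is prime)
print(is_prime_trial_division(561))      # False (561 = 3 * 11 * 17, a Carmichael number)
```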

### The Fermat Test for Primality

Fermat’s “little” theorem (p. 505) suggests the following primality test for any given number N:

1: Pick a number a randomly from {1, 2, . . . , N − 1};
2: if a^{N−1} ≢ 1 mod N then
3:   return “N is composite”;
4: else
5:   return “N is (probably) a prime”;
6: end if
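A direct Python rendering of the test (illustrative; pow(a, N − 1, N) is Python’s built-in modular exponentiation):

```python
import random

def fermat_test(N):
    """One round of the Fermat test: report 'composite' only on a witness a
    with a^(N-1) not congruent to 1 (mod N); otherwise report 'probably prime'."""
    a = random.randrange(1, N)          # a in {1, ..., N - 1}
    if pow(a, N - 1, N) != 1:
        return "N is composite"
    return "N is (probably) a prime"

print(fermat_test(104729))   # always "probably a prime" (104729 is prime)
print(fermat_test(221))      # "composite" for most choices of a (221 = 13 * 17)
print(fermat_test(561))      # fooled whenever gcd(a, 561) = 1: 561 is a Carmichael number
```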

### The Fermat Test for Primality (continued)

• Carmichael numbers are composite numbers that will pass the Fermat test for all a ∈ {1, 2, . . . , N − 1}.^{a}

  – The Fermat test will return “N is a prime” for all Carmichael numbers N.

• If there are finitely many Carmichael numbers, store them for matches before running the Fermat test.

• Unfortunately, there are infinitely many such numbers.^{b}

  – The number of Carmichael numbers less than N exceeds N^{2/7} for N large enough.

^{a} Carmichael (1910). Lo (1994) mentions an investment strategy based

### The Fermat Test for Primality (concluded)

• The Fermat test will fail all of them.

• So the Fermat test is an incorrect algorithm for primes.

• Now suppose N is not a Carmichael number but is still composite.

• We need many a ∈ {1, 2, . . . , N − 1} such that a^{N−1} ≢ 1 mod N.

• Otherwise, the correct answer will come only with a vanishing probability (say 1/N).^{a}

^{a} Contributed by Mr. Vincent Hwang (R10922138) on December 9, 2021.