
Any nonzero polynomial of degree k has at most k distinct roots modulo p

In the document “The Primality Problem” (pp. 31-52)

Exponents and Primitive Roots

• From Fermat’s “little” theorem, all exponents divide p − 1.

• A primitive root of p is thus a number with exponent p − 1.

• Let R(k) denote the total number of residues in Φ(p) = {1, 2, . . . , p − 1} that have exponent k.

• We already knew that R(k) = 0 when k ∤ (p − 1).

• So

Σ_{k | (p−1)} R(k) = p − 1

as every number has an exponent.
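As a sanity check, the exponent of every residue can be computed for a small prime, and the counts then sum to p − 1. A minimal sketch (the helper `exponent` is ours, not from the text):

```python
def exponent(a, p):
    """Smallest k >= 1 with a^k = 1 mod p (the 'exponent' of a)."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 13
R = {}                       # R[k] = number of residues with exponent k
for a in range(1, p):
    k = exponent(a, p)
    R[k] = R.get(k, 0) + 1

assert all((p - 1) % k == 0 for k in R)   # every exponent divides p - 1
assert sum(R.values()) == p - 1            # every residue has an exponent
```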

Size of R(k)

• Any a ∈ Φ(p) of exponent k satisfies x^k = 1 mod p.

• Hence there are at most k residues of exponent k, i.e., R(k) ≤ k, by Lemma 57 (p. 458).

• Let s be a residue of exponent k.

• 1, s, s^2, . . . , s^{k−1} are distinct modulo p.

– Otherwise, s^i = s^j mod p with i < j.

– Then s^{j−i} = 1 mod p with j − i < k, a contradiction.

• As all these k distinct numbers satisfy x^k = 1 mod p, they comprise all the solutions of x^k = 1 mod p.

Size of R(k) (continued)

• But do all of them have exponent k (i.e., R(k) = k)?

• And if not (i.e., R(k) < k), how many of them do?

• Pick s^ℓ, where ℓ < k.

• Suppose ℓ ∉ Φ(k), i.e., gcd(ℓ, k) = d > 1.

• Then

(s^ℓ)^{k/d} = (s^k)^{ℓ/d} = 1 mod p.

• Therefore, s^ℓ has exponent at most k/d < k.

• So only those s^ℓ with ℓ ∈ Φ(k) can have exponent k, and we conclude that

R(k) ≤ ϕ(k).

Size of R(k) (concluded)

• Because all p − 1 residues have an exponent,

p − 1 = Σ_{k | (p−1)} R(k) ≤ Σ_{k | (p−1)} ϕ(k) = p − 1

by Lemma 54 (p. 445).

• Hence

R(k) = ϕ(k) when k | (p − 1), and R(k) = 0 otherwise.
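For a small prime the closed form can be checked directly: the count of residues with exponent k equals ϕ(k) for every divisor k of p − 1, and 0 otherwise. A brute-force sketch with our own `exponent` and `phi` helpers:

```python
from math import gcd

def exponent(a, p):
    """Smallest k >= 1 with a^k = 1 mod p."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

def phi(n):
    """Euler's totient: count of 1 <= i <= n coprime to n."""
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

p = 13
for k in range(1, p):
    R_k = sum(1 for a in range(1, p) if exponent(a, p) == k)
    expected = phi(k) if (p - 1) % k == 0 else 0
    assert R_k == expected
```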

• In particular, R(p − 1) = ϕ(p − 1) > 0, and p has at least one primitive root.

• This proves one direction of Theorem 49 (p. 431).

A Few Calculations

• Let p = 13.

• From p. 455, we know ϕ(p − 1) = 4.

• Hence R(12) = 4.

• Indeed, there are 4 primitive roots of p.

• As Φ(p − 1) = {1, 5, 7, 11}, the primitive roots are g^1, g^5, g^7, g^11 for any primitive root g.
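For p = 13 this is easy to confirm by machine. Taking the known primitive root g = 2, the residues of exponent 12 coincide with 2^1, 2^5, 2^7, 2^11 mod 13 (a small script; `exponent` is our own helper):

```python
def exponent(a, p):
    """Smallest k >= 1 with a^k = 1 mod p."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 13
g = 2  # a known primitive root of 13

# Residues of exponent p - 1 are exactly the primitive roots.
roots = {a for a in range(1, p) if exponent(a, p) == p - 1}
powers = {pow(g, l, p) for l in (1, 5, 7, 11)}   # exponents from Φ(12)

assert exponent(g, p) == p - 1
assert roots == powers == {2, 6, 7, 11}
```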

The Other Direction of Theorem 49 (p. 431)

• We show p is a prime if there is a number r such that

1. r^{p−1} = 1 mod p, and

2. r^{(p−1)/q} ≠ 1 mod p for all prime divisors q of p − 1.

• Suppose p is not a prime.

• We proceed to show that no primitive roots exist.

• Suppose r^{p−1} = 1 mod p (note gcd(r, p) = 1).

• We will show that the 2nd condition must be violated.

The Proof (continued)

• So we proceed to show r^{(p−1)/q} = 1 mod p for some prime divisor q of p − 1.

• r^{ϕ(p)} = 1 mod p by the Fermat-Euler theorem (p. 455).

• Because p is not a prime, ϕ(p) < p − 1.

• Let k be the smallest integer such that r^k = 1 mod p.

• With the 1st condition, it is easy to show that k | (p − 1) (similar to p. 458).

• Note that k | ϕ(p) (p. 458).

• As k ≤ ϕ(p), k < p − 1.

The Proof (concluded)

• Let q be a prime divisor of (p − 1)/k > 1.

• Then k|(p − 1)/q.

• By the definition of k,

r^{(p−1)/q} = 1 mod p.

• But this violates the 2nd condition.
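The two conditions together form a checkable primality certificate (the Lucas test). A minimal checker, assuming the prime factors of p − 1 are supplied:

```python
def lucas_certificate(p, r, prime_factors):
    """Return True iff r certifies the primality of p:
    r^(p-1) = 1 mod p, and r^((p-1)/q) != 1 mod p for each prime q | p - 1."""
    if pow(r, p - 1, p) != 1:           # 1st condition
        return False
    return all(pow(r, (p - 1) // q, p) != 1 for q in prime_factors)

# 2 is a primitive root of 13, so it certifies 13's primality (12 = 2^2 * 3).
assert lucas_certificate(13, 2, [2, 3])
# 3 has exponent 3 mod 13, so it fails the 2nd condition.
assert not lucas_certificate(13, 3, [2, 3])
```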

Function Problems

• Decision problems are yes/no problems (sat, tsp (d), etc.).

• Function problems require a solution (a satisfying truth assignment, a best tsp tour, etc.).

• Optimization problems are clearly function problems.

• What is the relation between function and decision problems?

• Which one is harder?

Function Problems Cannot Be Easier than Decision Problems

• If we know how to generate a solution, we can solve the corresponding decision problem.

– If you can find a satisfying truth assignment efficiently, then sat is in P.

– If you can find the best tsp tour efficiently, then tsp (d) is in P.

• But decision problems can be as hard as the corresponding function problems.

fsat

• fsat is this function problem:

– Let ϕ(x1, x2, . . . , xn) be a boolean expression.

– If ϕ is satisfiable, then return a satisfying truth assignment.

– Otherwise, return “no.”

• We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.

• sat is a subroutine (black box) that returns “yes” or

“no” on the satisfiability of the input.

An Algorithm for fsat Using sat

1: t := ϵ; {Truth assignment.}
2: if ϕ ∈ sat then
3:    for i = 1, 2, . . . , n do
4:       if ϕ[ xi = true ] ∈ sat then
5:          t := t ∪ { xi = true };
6:          ϕ := ϕ[ xi = true ];
7:       else
8:          t := t ∪ { xi = false };
9:          ϕ := ϕ[ xi = false ];
10:      end if
11:   end for
12:   return t;
13: else
14:   return “no”;
15: end if
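The algorithm can be sketched in Python with a brute-force stand-in for the sat black box. The CNF encoding and the oracle's interface (a formula plus a partial assignment, which is equivalent to substituting into ϕ) are our own choices; any polynomial-time sat decider could be plugged in:

```python
from itertools import product

def sat(cnf, n, fixed):
    """Black-box sat oracle (brute force here): is the CNF satisfiable
    once the variables in `fixed` are pinned to the given values?"""
    free = [v for v in range(1, n + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        t = {**fixed, **dict(zip(free, bits))}
        if all(any(t[abs(l)] == (l > 0) for l in c) for c in cnf):
            return True
    return False

def fsat(cnf, n):
    """Find a satisfying assignment using at most n + 1 calls to sat."""
    if not sat(cnf, n, {}):
        return "no"
    t = {}
    for i in range(1, n + 1):
        t[i] = True                   # try x_i = true first
        if not sat(cnf, n, dict(t)):
            t[i] = False              # true fails, so false must work
    return t

# (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3); literals are signed ints.
cnf = [[1, -2], [2, 3], [-1, -3]]
print(fsat(cnf, 3))
```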

Analysis

• If sat can be solved in polynomial time, so can fsat.

– There are ≤ n + 1 calls to the algorithm for sat.^a

– Boolean expressions shorter than ϕ are used in each call to the algorithm for sat.

• Hence sat and fsat are equally hard (or easy).

• Note that this reduction from fsat to sat is not a Karp reduction (recall p. 247).

• Instead, it calls sat multiple times as a subroutine and acts on sat’s outputs.

^a Contributed by Ms. Eva Ou (R93922132) on November 24, 2004.

tsp and tsp (d) Revisited

• We are given n cities 1, 2, . . . , n and integer distances dij = dji between any two cities i and j.

• tsp (d) asks if there is a tour with a total distance at most B.

• tsp asks for a tour with the shortest total distance.

– The shortest total distance is at most Σ_{i,j} dij.

∗ Recall that the input string contains d11, . . . , dnn.

∗ Thus the shortest total distance is less than 2^{| x |} in magnitude, where x is the input (why?).

• We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.

An Algorithm for tsp Using tsp (d)

1: Perform a binary search over interval [ 0, 2^{| x |} ] by calling tsp (d) to obtain the shortest distance, C;

2: for i, j = 1, 2, . . . , n do

3: Call tsp (d) with B = C and dij = C + 1;

4: if “no” then

5: Restore dij to old value; {Edge [ i, j ] is critical.}

6: end if

7: end for

8: return the tour with edges whose dij ≤ C;
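The algorithm can be sketched with a brute-force stand-in for the tsp (d) oracle. For simplicity the binary-search upper bound below is the sum of all distances rather than 2^{| x |}; the distance matrix is mutated in place during edge elimination, exactly as in lines 2-7 above:

```python
from itertools import permutations

def tsp_d(d, B):
    """Black-box tsp (d) oracle (brute force here): is there a tour
    of total distance at most B?"""
    n = len(d)
    return any(
        sum(d[t[i]][t[(i + 1) % n]] for i in range(n)) <= B
        for t in ((0,) + p for p in permutations(range(1, n)))
    )

def tsp(d):
    """Recover an optimal tour's edges with polynomially many tsp (d) calls."""
    n = len(d)
    lo, hi = 0, sum(map(sum, d))       # upper bound: sum of all distances
    while lo < hi:                      # binary search for the optimum C
        mid = (lo + hi) // 2
        if tsp_d(d, mid):
            hi = mid
        else:
            lo = mid + 1
    C = lo
    for i in range(n):                  # try to eliminate each edge
        for j in range(i + 1, n):
            old = d[i][j]
            d[i][j] = d[j][i] = C + 1
            if not tsp_d(d, C):         # edge is critical; restore it
                d[i][j] = d[j][i] = old
    edges = {(i, j) for i in range(n) for j in range(i + 1, n) if d[i][j] <= C}
    return C, edges

d = [[0, 1, 4, 2],
     [1, 0, 2, 5],
     [4, 2, 0, 3],
     [2, 5, 3, 0]]
C, edges = tsp(d)
assert C == 8 and edges == {(0, 1), (1, 2), (2, 3), (0, 3)}
```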

Analysis

• An edge that is not on any optimal tour will be eliminated, with its dij set to C + 1.

• An edge which is not on all remaining optimal tours will also be eliminated.

• So the algorithm ends with n edges which are not eliminated (why?).

• This is true even if there are multiple optimal tours!a

^a Thanks to a lively class discussion on November 12, 2013.

Analysis (concluded)

• There are O(| x | + n^2) calls to the algorithm for tsp (d).

• Each call has an input length of O(| x |).

• So if tsp (d) can be solved in polynomial time, so can tsp.

• Hence tsp (d) and tsp are equally hard (or easy).

Randomized Computation

I know that half my advertising works, I just don’t know which half.

— John Wanamaker

I know that half my advertising is a waste of money, I just don’t know which half!

— McGraw-Hill ad.

Randomized Algorithms^a

• Randomized algorithms flip unbiased coins.

• There are important problems for which there are no known efficient deterministic algorithms but for which very efficient randomized algorithms exist.

– Extraction of square roots, for instance.

• There are problems where randomization is necessary.

– Secure protocols.

• Randomized version can be more efficient.

– Parallel algorithm for maximal independent set.^b

^a Rabin (1976); Solovay and Strassen (1977).

^b “Maximal” (a local maximum), not “maximum” (a global maximum).

