(1)

Basic Modular Arithmetics^a

Let m, n ∈ Z+.

m | n means m divides n; m is n’s divisor.

We call the numbers 0, 1, . . . , n − 1 the residues modulo n.

The greatest common divisor of m and n is denoted gcd(m, n).

The r in Theorem 49 (p. 391) is a primitive root of p.

We now prove the existence of primitive roots and then Theorem 49 (p. 391).

^a Carl Friedrich Gauss.

(2)

Basic Modular Arithmetics (concluded)

We use

a ≡ b mod n if n | (a − b).

So 25 ≡ 38 mod 13.

We use

a = b mod n

if n | (a − b) and 0 ≤ b < n; in other words, b is the remainder of a divided by n.

– So 25 = 12 mod 13.
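To make the two notations concrete, here is a minimal Python sketch (ours, not from the slides); the helper names are arbitrary.

def congruent(a, b, n):
    # a ≡ b (mod n) iff n divides a - b.
    return (a - b) % n == 0

def canonical_residue(a, n):
    # The unique b with a = b mod n, i.e., 0 <= b < n and n | (a - b).
    return a % n

assert congruent(25, 38, 13)            # 25 ≡ 38 mod 13
assert canonical_residue(25, 13) == 12  # 25 = 12 mod 13

Python's % operator already returns a value in {0, 1, . . . , n − 1} for positive n, so it matches the "=" notation directly.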

(3)

Euler’s^a Totient or Phi Function

Let

Φ(n) = {m : 1 ≤ m < n, gcd(m, n) = 1}

be the set of all positive integers less than n that are prime to n.^b

Φ(12) = {1, 5, 7, 11}.

Define Euler’s function of n to be φ(n) = |Φ(n)|.

φ(p) = p − 1 for prime p, and φ(1) = 1 by convention.

Euler’s function is not expected to be easy to compute without knowing n’s factorization.

^a Leonhard Euler (1707–1783).

^b Z*_n is an alternative notation.
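A brute-force sketch (ours) that computes Φ(n) and φ(n) straight from the definition; fine for small n.

from math import gcd

def Phi(n):
    # Φ(n) = {m : 1 <= m < n, gcd(m, n) = 1}.
    return {m for m in range(1, n) if gcd(m, n) == 1}

def phi(n):
    # Euler's function φ(n) = |Φ(n)|; φ(1) = 1 by convention.
    return 1 if n == 1 else len(Phi(n))

assert Phi(12) == {1, 5, 7, 11}
assert phi(12) == 4
assert phi(13) == 12   # φ(p) = p − 1 for prime p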

(4)

[Figure: a plot of Euler’s phi function φ(n).]

(5)

Two Properties of Euler’s Function

The inclusion-exclusion principle^a can be used to prove the following.

Lemma 52 φ(n) = n ∏_{p|n} (1 − 1/p).

If n = p_1^{e_1} p_2^{e_2} · · · p_ℓ^{e_ℓ} is the prime factorization of n, then

φ(n) = n ∏_{i=1}^{ℓ} (1 − 1/p_i).

Corollary 53 φ(mn) = φ(m) φ(n) if gcd(m, n) = 1.

^a Consult any textbook on discrete mathematics.
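Lemma 52 as code (a sketch, ours): compute φ(n) from a trial-division factorization and check it against the brute-force count and against Corollary 53.

from math import gcd

def phi_by_formula(n):
    # φ(n) = n ∏_{p|n} (1 − 1/p), via trial-division factorization.
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply result by (1 − 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # a leftover prime factor
        result -= result // m
    return result

def phi_brute(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert phi_by_formula(12) == phi_brute(12) == 4
# Corollary 53: φ(mn) = φ(m) φ(n) when gcd(m, n) = 1.
assert phi_by_formula(9 * 8) == phi_by_formula(9) * phi_by_formula(8)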

(6)

A Key Lemma

Lemma 54 ∑_{m|n} φ(m) = n.

Let ∏_{i=1}^{ℓ} p_i^{k_i} be the prime factorization of n and consider

∏_{i=1}^{ℓ} [ φ(1) + φ(p_i) + · · · + φ(p_i^{k_i}) ]. (4)

Equation (4) equals n because φ(p^k) = p^k − p^{k−1} by Lemma 52, so each bracketed sum telescopes to p_i^{k_i}.

Expand Eq. (4) to yield

∑_{0 ≤ k_1′ ≤ k_1, . . . , 0 ≤ k_ℓ′ ≤ k_ℓ} ∏_{i=1}^{ℓ} φ(p_i^{k_i′}).

(7)

The Proof (concluded)

By Corollary 53 (p. 404),

∏_{i=1}^{ℓ} φ(p_i^{k_i′}) = φ(∏_{i=1}^{ℓ} p_i^{k_i′}).

So Eq. (4) becomes

∑_{0 ≤ k_1′ ≤ k_1, . . . , 0 ≤ k_ℓ′ ≤ k_ℓ} φ(∏_{i=1}^{ℓ} p_i^{k_i′}).

Each ∏_{i=1}^{ℓ} p_i^{k_i′} is a unique divisor of n = ∏_{i=1}^{ℓ} p_i^{k_i}.

Equation (4) becomes

∑_{m|n} φ(m).
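A quick numerical check of Lemma 54 (a sketch, ours):

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisors(n):
    return [m for m in range(1, n + 1) if n % m == 0]

# Lemma 54: the totients of the divisors of n sum to n.
for n in (1, 12, 36, 97, 360):
    assert sum(phi(m) for m in divisors(n)) == n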

(8)

The Density Attack for primes

[Figure: among all numbers below n, a shaded region marks the witnesses to the compositeness of n.]

(9)

The Density Attack for primes (continued)

1: for i = 1, 2, . . . , N do

2: Choose m ∈ {2, 3, . . . , n − 1} randomly;

3: if m | n then

4: return “n is not a prime”;

5: end if

6: end for

7: return “n is (probably) a prime”;
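A direct Python rendering of the procedure (a sketch, ours; the trial count N and the printed examples are illustrative only, and n is assumed to be at least 4).

import random

def density_attack(n, N=100):
    # Randomized compositeness test: look for a random divisor of n.
    for _ in range(N):
        m = random.randint(2, n - 1)   # choose m in {2, ..., n - 1} randomly
        if n % m == 0:                 # m | n: a witness to compositeness
            return "n is not a prime"
    return "n is (probably) a prime"

print(density_attack(221))   # 221 = 13 * 17; a divisor may or may not be sampled
print(density_attack(223))   # 223 is prime, so the second answer is always returned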

(10)

The Density Attack for primes (continued)

It works, but does it work well?

The ratio of numbers ≤ n relatively prime to n (the white area) is φ(n)/n.

When n = pq, where p and q are distinct primes,

φ(n)/n = (pq − p − q + 1)/(pq) > 1 − 1/q − 1/p.

(11)

The Density Attack for primes (concluded)

So the ratio of numbers ≤ n not relatively prime to n (the grey area) is < (1/q) + (1/p).

The “density attack” has probability < 2/√n of factoring n = pq when p ∼ q = O(√n).

The “density attack” to factor n = pq hence takes Ω(√n) steps on average when p ∼ q = O(√n).

– This running time is exponential: Ω(2^{0.5 log_2 n}).

(12)

The Chinese Remainder Theorem

Let n = n1n2 · · · nk, where ni are pairwise relatively prime.

For any integers a1, a2, . . . , ak, the set of simultaneous equations

x = a1 mod n1, x = a2 mod n2,

...

x = ak mod nk,

has a unique solution modulo n for the unknown x.
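A constructive sketch of the theorem (ours), combining the congruences one modulus at a time with modular inverses (Python 3.8+ for pow(n, -1, n_i)):

def crt(residues, moduli):
    # Solve x = a_i mod n_i for pairwise relatively prime n_i.
    # Returns the unique solution modulo n = n_1 n_2 ... n_k.
    x, n = 0, 1
    for a_i, n_i in zip(residues, moduli):
        inv = pow(n, -1, n_i)                 # n^{-1} mod n_i
        x = x + n * ((a_i - x) * inv % n_i)   # now x is correct modulo n * n_i
        n *= n_i
    return x % n

# x = 2 mod 3, x = 3 mod 5, x = 2 mod 7 has the unique solution x = 23 mod 105.
assert crt([2, 3, 2], [3, 5, 7]) == 23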

(13)

Fermat’s “Little” Theorem^a

Lemma 55 For all primes p and all 0 < a < p, a^{p−1} = 1 mod p.

Consider aΦ(p) = {am mod p : m ∈ Φ(p)}.

aΦ(p) = Φ(p).

aΦ(p) ⊆ Φ(p) as a remainder must be between 0 and p − 1.

Suppose am = am′ mod p for m > m′, where m, m′ ∈ Φ(p).

That means a(m − m′) = 0 mod p, and p divides a or m − m′, which is impossible.

^a Pierre de Fermat (1601–1665).

(14)

The Proof (concluded)

Multiply all the numbers in Φ(p) to yield (p − 1)!.

Multiply all the numbers in aΦ(p) to yield a^{p−1}(p − 1)!.

As aΦ(p) = Φ(p), a^{p−1}(p − 1)! = (p − 1)! mod p.

Finally, a^{p−1} = 1 mod p because p ∤ (p − 1)!.

(15)

The Fermat-Euler Theorem^a

Corollary 56 For all a ∈ Φ(n), a^{φ(n)} = 1 mod n.

The proof is similar to that of Lemma 55 (p. 412).

Consider aΦ(n) = {am mod n : m ∈ Φ(n)}.

aΦ(n) = Φ(n).

aΦ(n) ⊆ Φ(n) as a remainder must be between 0 and n − 1 and relatively prime to n.

Suppose am = am′ mod n for m′ < m < n, where m, m′ ∈ Φ(n).

That means a(m − m′) = 0 mod n; as gcd(a, n) = 1, n divides m − m′, which is impossible.

^a Proof by Mr. Wei-Cheng Cheng (R93922108, D95922011) on November 24, 2004.

(16)

The Proof (concluded)^a

Multiply all the numbers in Φ(n) to yield ∏_{m∈Φ(n)} m.

Multiply all the numbers in aΦ(n) to yield a^{φ(n)} ∏_{m∈Φ(n)} m.

As aΦ(n) = Φ(n),

∏_{m∈Φ(n)} m = a^{φ(n)} (∏_{m∈Φ(n)} m) mod n.

Finally, a^{φ(n)} = 1 mod n because n ∤ ∏_{m∈Φ(n)} m.

^a Some typographical errors corrected by Mr. Jung-Ying Chen (D95723006) on November 18, 2008.

(17)

An Example

As 12 = 2^2 × 3,

φ(12) = 12 × (1 − 1/2)(1 − 1/3) = 4.

In fact, Φ(12) = {1, 5, 7, 11}.

For example,

5^4 = 625 = 1 mod 12.
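The same check in code (a sketch, ours), using Python's built-in modular exponentiation:

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Corollary 56: a^φ(n) = 1 mod n for every a relatively prime to n.
for n in (12, 13, 35):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1

assert pow(5, 4, 12) == 1   # 5^4 = 625 = 1 mod 12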

(18)

Exponents

The exponent of m ∈ Φ(p) is the least k ∈ Z+ such that m^k = 1 mod p.

Every residue s ∈ Φ(p) has an exponent.

1, s, s^2, s^3, . . . eventually repeats itself modulo p, say s^i = s^j mod p, which means s^{j−i} = 1 mod p.

If the exponent of m is k and m^ℓ = 1 mod p, then k | ℓ.

Otherwise, ℓ = qk + a for some 0 < a < k, and

m^ℓ = m^{qk+a} = m^a = 1 mod p, a contradiction.
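A small sketch (ours) that computes the exponent of every residue in Φ(p) and confirms that each one divides p − 1:

def exponent(m, p):
    # Least k >= 1 with m^k = 1 mod p; m must be in Φ(p).
    k, x = 1, m % p
    while x != 1:
        x = x * m % p
        k += 1
    return k

p = 13
orders = {m: exponent(m, p) for m in range(1, p)}
assert all((p - 1) % k == 0 for k in orders.values())   # every exponent divides p − 1
print(orders[2])   # 12, so 2 has exponent p − 1 and is a primitive root of 13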

Lemma 57 Any nonzero polynomial of degree k has at most k distinct roots modulo p.

(19)

Exponents and Primitive Roots

From Fermat’s “little” theorem, all exponents divide p − 1.

A primitive root of p is thus a number with exponent p − 1.

Let R(k) denote the total number of residues in Φ(p) that have exponent k.

We already knew that R(k) = 0 for k ∤ (p − 1).

So ∑_{k|(p−1)} R(k) = p − 1 as every number has an exponent.

(20)

Size of R(k)

Any a ∈ Φ(p) of exponent k is a solution of x^k = 1 mod p.

Hence there are at most k residues of exponent k, i.e., R(k) ≤ k, by Lemma 57 (p. 417).

Let s be a residue of exponent k.

1, s, s^2, . . . , s^{k−1} are distinct modulo p.

Otherwise, s^i = s^j mod p with i < j.

Then s^{j−i} = 1 mod p with j − i < k, a contradiction.

As all these k distinct numbers satisfy x^k = 1 mod p, they comprise all solutions of x^k = 1 mod p.

(21)

Size of R(k) (continued)

But do all of them have exponent k (i.e., R(k) = k)?

And if not (i.e., R(k) < k), how many of them do?

Suppose ℓ < k and ℓ ∉ Φ(k), with gcd(ℓ, k) = d > 1.

Then

(s^ℓ)^{k/d} = (s^k)^{ℓ/d} = 1 mod p.

Therefore, s^ℓ has exponent at most k/d, which is less than k.

We conclude that

R(k) ≤ φ(k).

(22)

Size of R(k) (concluded)

Because all p − 1 residues have an exponent,

p − 1 = ∑_{k|(p−1)} R(k) ≤ ∑_{k|(p−1)} φ(k) = p − 1

by Lemma 54 (p. 405).

Hence

R(k) = φ(k) when k | (p − 1), and R(k) = 0 otherwise.

In particular, R(p − 1) = φ(p − 1) > 0, and p has at least one primitive root.

This proves one direction of Theorem 49 (p. 391).

(23)

A Few Calculations

Let p = 13.

From p. 414, we know φ(p − 1) = 4.

Hence R(12) = 4.

Indeed, there are 4 primitive roots of p.

As Φ(p − 1) = {1, 5, 7, 11}, the primitive roots are g^1, g^5, g^7, g^11 for any primitive root g.
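The calculation can be replayed in a few lines (a sketch, ours):

def exponent(m, p):
    k, x = 1, m % p
    while x != 1:
        x = x * m % p
        k += 1
    return k

p = 13
R = {}                                   # R[k] = number of residues of exponent k
for m in range(1, p):
    k = exponent(m, p)
    R[k] = R.get(k, 0) + 1

assert R[12] == 4                        # R(p − 1) = φ(p − 1) = 4
primitive_roots = [m for m in range(1, p) if exponent(m, p) == 12]
print(primitive_roots)                   # [2, 6, 7, 11]

g = primitive_roots[0]                   # g = 2, say
assert sorted(pow(g, e, p) for e in (1, 5, 7, 11)) == primitive_roots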

(24)

The Other Direction of Theorem 49 (p. 391)

We show p is a prime if there is a number r such that

1. r^{p−1} = 1 mod p, and

2. r^{(p−1)/q} ≠ 1 mod p for all prime divisors q of p − 1.

Suppose p is not a prime.

We proceed to show that no primitive roots exist.

Suppose r^{p−1} = 1 mod p (note gcd(r, p) = 1).

We will show that the 2nd condition must be violated.

(25)

The Proof (continued)

So we proceed to show r^{(p−1)/q} = 1 mod p for some prime divisor q of p − 1.

r^{φ(p)} = 1 mod p by the Fermat-Euler theorem (p. 414).

Because p is not a prime, φ(p) < p − 1.

Let k be the smallest integer such that r^k = 1 mod p.

With the 1st condition, it is easy to show that k | (p − 1) (similar to p. 417).

Note that k | φ(p) (p. 417).

As k ≤ φ(p), k < p − 1.

(26)

The Proof (concluded)

Let q be a prime divisor of (p − 1)/k > 1.

Then k|(p − 1)/q.

By the definition of k,

r^{(p−1)/q} = 1 mod p.

But this violates the 2nd condition.
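The two conditions of Theorem 49 amount to a primality certificate that is easy to check once p − 1 is factored. A sketch (ours), with trial-division factoring:

def prime_factors(m):
    # Distinct prime factors of m by trial division.
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def certifies_prime(r, p):
    # Do conditions 1 and 2 of Theorem 49 hold for r modulo p?
    if pow(r, p - 1, p) != 1:                        # condition 1
        return False
    return all(pow(r, (p - 1) // q, p) != 1          # condition 2
               for q in prime_factors(p - 1))

assert certifies_prime(2, 13)       # 2 is a primitive root of 13
assert not certifies_prime(2, 15)   # 15 is composite; no r can certify it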

(27)

Function Problems

Decision problems are yes/no problems (sat, tsp (d), etc.).

Function problems require a solution (a satisfying truth assignment, a best tsp tour, etc.).

Optimization problems are clearly function problems.

What is the relation between function and decision problems?

Which one is harder?

(28)

Function Problems Cannot Be Easier than Decision Problems

If we know how to generate a solution, we can solve the corresponding decision problem.

– If you can find a satisfying truth assignment efficiently, then sat is in P.

– If you can find the best tsp tour efficiently, then tsp (d) is in P.

But decision problems can be as hard as the corresponding function problems.

(29)

fsat

fsat is this function problem:

Let φ(x1, x2, . . . , xn) be a boolean expression.

If φ is satisfiable, then return a satisfying truth assignment.

– Otherwise, return “no.”

We next show that if sat ∈ P, then fsat has a polynomial-time algorithm.

(30)

An Algorithm for fsat Using sat

1: t := ε; {Truth assignment.}

2: if φ ∈ sat then

3: for i = 1, 2, . . . , n do

4: if φ[ xi = true ] ∈ sat then

5: t := t ∪ { xi = true };

6: φ := φ[ xi = true ];

7: else

8: t := t ∪ { xi = false };

9: φ := φ[ xi = false ];

10: end if

11: end for

12: return t;

13: else

14: return “no”;

15: end if
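A Python sketch (ours) of this self-reduction. The sat decider is treated as a black box; sat_oracle below is a hypothetical stand-in (brute force over assignments), not part of the original slides, and a CNF formula is a list of clauses whose literals are signed variable indices.

from itertools import product

def sat_oracle(clauses, n):
    # Hypothetical stand-in for a sat decider: is the CNF satisfiable?
    return any(all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for assignment in product([False, True], repeat=n))

def substitute(clauses, var, value):
    # Fix variable var to value and simplify the CNF.
    out = []
    for clause in clauses:
        if (var if value else -var) in clause:            # clause already satisfied
            continue
        out.append([lit for lit in clause if abs(lit) != var])
    return out

def fsat(clauses, n):
    # Return a satisfying assignment using only calls to sat_oracle, or "no".
    if not sat_oracle(clauses, n):
        return "no"
    t = {}
    for i in range(1, n + 1):
        value = sat_oracle(substitute(clauses, i, True), n)
        t[i] = value
        clauses = substitute(clauses, i, value)
    return t

# (x1 or x2) and (not x1 or x3) and (not x3)
print(fsat([[1, 2], [-1, 3], [-3]], 3))   # {1: False, 2: True, 3: False}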

(31)

Analysis

If sat can be solved in polynomial time, so can fsat.

There are ≤ n + 1 calls to the algorithm for sat.^a

Boolean expressions shorter than φ are used in each call to the algorithm for sat.

Hence sat and fsat are equally hard (or easy).

Note that this reduction from fsat to sat is not a Karp reduction (recall p. 219).

Instead, it calls sat multiple times as a subroutine and proceeds based on sat’s outputs.

^a Contributed by Ms. Eva Ou (R93922132) on November 24, 2004.

(32)

tsp and tsp (d) Revisited

We are given n cities 1, 2, . . . , n and integer distances dij = dji between any two cities i and j.

tsp (d) asks if there is a tour with a total distance at most B.

tsp asks for a tour with the shortest total distance.

– The shortest total distance is at most ∑_{i,j} dij.

Recall that the input string contains d11, . . . , dnn.

Thus the shortest total distance is less than 2^{|x|} in magnitude, where x is the input (why?).

We next show that if tsp (d) ∈ P, then tsp has a polynomial-time algorithm.

(33)

An Algorithm for tsp Using tsp (d)

1: Perform a binary search over interval [ 0, 2^{|x|} ] by calling tsp (d) to obtain the shortest distance, C;

2: for i, j = 1, 2, . . . , n do

3: Call tsp (d) with B = C and dij = C + 1;

4: if “no” then

5: Restore dij to old value; {Edge [ i, j ] is critical.}

6: end if

7: end for

8: return the tour with edges whose dij ≤ C;
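A sketch (ours) of the reduction in Python. The decision oracle tsp_d is a hypothetical brute-force stand-in, so the example only runs for small n, and the binary search here runs over [0, ∑ dij] rather than [0, 2^{|x|}]; the reduction itself uses nothing but the oracle's yes/no answers.

from itertools import permutations

def tsp_d(d, B):
    # Hypothetical stand-in for the tsp (d) oracle: is there a tour of cost <= B?
    n = len(d)
    return any(sum(d[t[i]][t[(i + 1) % n]] for i in range(n)) <= B
               for t in permutations(range(n)))

def tsp_via_oracle(d):
    n = len(d)
    lo, hi = 0, sum(map(sum, d))             # binary search for the optimum C
    while lo < hi:
        mid = (lo + hi) // 2
        lo, hi = (lo, mid) if tsp_d(d, mid) else (mid + 1, hi)
    C = lo
    d = [row[:] for row in d]                # work on a copy
    for i in range(n):                       # try to eliminate each edge
        for j in range(i + 1, n):
            old = d[i][j]
            d[i][j] = d[j][i] = C + 1
            if not tsp_d(d, C):              # the edge is critical: restore it
                d[i][j] = d[j][i] = old
    return C, [(i, j) for i in range(n) for j in range(i + 1, n) if d[i][j] <= C]

d = [[0, 1, 9, 1],
     [1, 0, 1, 9],
     [9, 1, 0, 1],
     [1, 9, 1, 0]]
print(tsp_via_oracle(d))   # (4, [(0, 1), (0, 3), (1, 2), (2, 3)])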

(34)

Analysis

An edge that is not on any optimal tour will be eliminated, with its dij set to C + 1.

An edge which is not on all remaining optimal tours will also be eliminated.

So the algorithm ends with n edges which are not eliminated (why?).

There are O(|x| + n^2) calls to the algorithm for tsp (d).

So if tsp (d) can be solved in polynomial time, so can tsp.

Hence tsp (d) and tsp are equally hard (or easy).

(35)

Randomized Computation

(36)

I know that half my advertising works, I just don’t know which half.
— John Wanamaker

I know that half my advertising is a waste of money, I just don’t know which half!
— McGraw-Hill ad.

(37)

Randomized Algorithms^a

Randomized algorithms flip unbiased coins.

There are important problems for which there are no known efficient deterministic algorithms but for which very efficient randomized algorithms exist.

– Extraction of square roots, for instance.

There are problems where randomization is necessary.

– Secure protocols.

Randomized version can be more efficient.

– Parallel algorithm for maximal independent set.

^a Rabin (1976); Solovay and Strassen (1977).

(38)

“Four Most Important Randomized Algorithms”^a

1. Primality testing.^b

2. Graph connectivity using random walks.^c

3. Polynomial identity testing.^d

4. Algorithms for approximate counting.^e

^a Trevisan (2006).

^b Rabin (1976); Solovay and Strassen (1977).

^c Aleliunas, Karp, Lipton, Lovász, and Rackoff (1979).

^d Schwartz (1980); Zippel (1979).

^e Sinclair and Jerrum (1989).

(39)

Bipartite Perfect Matching

We are given a bipartite graph G = (U, V, E).

U = {u1, u2, . . . , un}.

V = {v1, v2, . . . , vn}.

E ⊆ U × V .

We are asked if there is a perfect matching.

A permutation π of {1, 2, . . . , n} such that (ui, vπ(i)) ∈ E

for all i ∈ {1, 2, . . . , n}.

(40)

A Perfect Matching

[Figure: a bipartite graph with five vertices on each side and a perfect matching highlighted.]

(41)

Symbolic Determinants

We are given a bipartite graph G.

Construct the n × n matrix AG whose (i, j)th entry AGij is a variable xij if (ui, vj) ∈ E and zero otherwise.

(42)

Symbolic Determinants (concluded)

The determinant of AG is

det(AG) = ∑_π sgn(π) ∏_{i=1}^{n} AG_{i,π(i)}. (5)

– π ranges over all permutations of n elements.

sgn(π) is 1 if π is the product of an even number of transpositions and −1 otherwise.

Equivalently, sgn(π) = 1 if the number of (i, j)s such that i < j and π(i) > π(j) is even.^a

^a Contributed by Mr. Hwan-Jeu Yu (D95922028) on May 1, 2008.

(43)

Determinant and Bipartite Perfect Matching

In ∑_π sgn(π) ∏_{i=1}^{n} AG_{i,π(i)}, note the following:

– Each summand corresponds to a possible perfect matching π.

– All of these summands ∏_{i=1}^{n} AG_{i,π(i)} are different monomials and will not cancel.

It is essentially an exhaustive enumeration.

Proposition 58 (Edmonds (1967)) G has a perfect matching if and only if det(AG) is not identically zero.

(44)

A Perfect Matching in a Bipartite Graph

[Figure: a bipartite graph on five plus five vertices whose perfect matchings correspond to the terms of the determinant below.]

(45)

The Perfect Matching in the Determinant

The matrix is

AG =
[ 0    0    x13  x14  0   ]
[ 0    x22  0    0    0   ]
[ x31  0    0    0    x35 ]
[ x41  0    x43  x44  0   ]
[ x51  0    0    0    x55 ].

det(AG) = −x14 x22 x35 x43 x51 + x13 x22 x35 x44 x51 + x14 x22 x31 x43 x55 − x13 x22 x31 x44 x55, each denoting a perfect matching.

(46)

How To Test If a Polynomial Is Identically Zero?

det(AG) is a polynomial in n2 variables.

There are exponentially many terms in det(AG).

Expanding the determinant polynomial is not feasible.

– Too many terms.

Observation: If det(AG) is identically zero, then it

remains zero if we substitute arbitrary integers for the variables x11, . . . , xnn.

But what is the likelihood of obtaining a zero when det(AG) is not identically zero?

(47)

Number of Roots of a Polynomial

Lemma 59 (Schwartz (1980)) Let p(x1, x2, . . . , xm) ≢ 0 be a polynomial in m variables each of degree at most d. Let M ∈ Z+. Then the number of m-tuples (x1, x2, . . . , xm) ∈ {0, 1, . . . , M − 1}^m such that p(x1, x2, . . . , xm) = 0 is

≤ mdM^{m−1}.

By induction on m (consult the textbook).

(48)

Density Attack

The density of roots in the domain is at most

mdM^{m−1}/M^m = md/M. (6)

So suppose p(x1, x2, . . . , xm) ≢ 0.

Then a random (x1, x2, . . . , xm) ∈ {0, 1, . . . , M − 1}^m has a probability of ≤ md/M of being a root of p.

Note that M is under our control.

One can raise M to lower the error probability, e.g.

(49)

Density Attack (concluded)

Here is a sampling algorithm to test if p(x1, x2, . . . , xm) ≢ 0.

1: Choose i1, . . . , im from {0, 1, . . . , M − 1} randomly;

2: if p(i1, i2, . . . , im) 6= 0 then

3: return “p is not identically zero”;

4: else

5: return “p is (probably) identically zero”;

6: end if
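A Python sketch (ours) of the sampling test, with the polynomial represented simply as a callable:

import random

def probably_nonzero(p, m, M, trials=20):
    # Sampling test: is the m-variate polynomial p not identically zero?
    # If p is nonzero with degree <= d in each variable, each trial misses with probability <= m*d/M.
    for _ in range(trials):
        point = [random.randrange(M) for _ in range(m)]
        if p(*point) != 0:
            return True            # a nonzero value certifies p is not identically zero
    return False                   # p is (probably) identically zero

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
zero_poly = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
print(probably_nonzero(zero_poly, m=2, M=100))            # False
print(probably_nonzero(lambda x, y: x * y - 1, 2, 100))   # True (almost surely)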

(50)

A Randomized Bipartite Perfect Matching Algorithm^a

We now return to the original problem of bipartite perfect matching.

1: Choose n^2 integers i11, . . . , inn from {0, 1, . . . , 2n^2 − 1} randomly;

2: Calculate det(AG(i11, . . . , inn)) by Gaussian elimination;

3: if det(AG(i11, . . . , inn)) 6= 0 then

4: return “G has a perfect matching”;

5: else

6: return “G has no perfect matchings”;

7: end if

^a Lovász (1979). According to Paul Erdős, Lovász wrote his first significant paper “at the ripe old age of 17.”
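A compact sketch (ours) of the algorithm: plug random integers into the xij and evaluate the determinant numerically. Exact rational Gaussian elimination is used here to sidestep round-off; the slides' cost analysis assumes a more careful implementation.

import random
from fractions import Fraction

def det(matrix):
    # Determinant by Gaussian elimination over exact rationals.
    a = [[Fraction(v) for v in row] for row in matrix]
    n, sign, result = len(a), 1, Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        result *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return sign * result

def probably_has_perfect_matching(edges, n):
    # edges: set of (i, j) with 0 <= i, j < n, meaning (u_i, v_j) is an edge.
    A = [[random.randrange(2 * n * n) if (i, j) in edges else 0
          for j in range(n)] for i in range(n)]
    return det(A) != 0

edges = {(0, 1), (1, 0), (1, 2), (2, 2)}         # a 3 x 3 example with a perfect matching
print(probably_has_perfect_matching(edges, 3))   # True with probability at least 1/2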

(51)

Analysis

If G has no perfect matchings, the algorithm will always be correct.

Suppose G has a perfect matching.

– The algorithm will answer incorrectly with probability at most n^2 d/(2n^2) = 0.5 with d = 1 in Eq. (6) on p. 447.

Run the algorithm independently k times and output

“G has no perfect matchings” if and only if they all say no.

– The error probability is now reduced to at most 2^{−k}.

(52)

László Lovász (1948–)

(53)

Remarks^a

Note that we are calculating

prob[ algorithm answers “no” | G has no perfect matchings ], prob[ algorithm answers “yes” | G has a perfect matching ].

We are not calculating^b

prob[ G has no perfect matchings | algorithm answers “no” ], prob[ G has a perfect matching | algorithm answers “yes” ].

^a Thanks to a lively class discussion on May 1, 2008.

^b Numerical Recipes in C (1988), “[As] we already remarked, statistics is not a branch of mathematics!”

(54)

But How Large Can det(AG(i11, . . . , inn)) Be?

It is at most

n! (2n^2)^n.

Stirling’s formula says n! ∼ √(2πn) (n/e)^n.

Hence

log_2 |det(AG(i11, . . . , inn))| = O(n log_2 n)

bits are sufficient for representing the determinant.

We skip the details about how to make sure that all intermediate results are of polynomial sizes.

(55)

An Intriguing Question^a

Is there an (i11, . . . , inn) that will always give correct answers for the algorithm on p. 449?

A theorem on p. 544 shows that such a witness exists!

Whether it can be found efficiently is another question.

^a Thanks to a lively class discussion on November 24, 2004.

(56)

Perfect Matching for General Graphs

Page 438 is about bipartite perfect matching.

Now we are given a graph G = (V, E).

V = {v1, v2, . . . , v2n}.

We are asked if there is a perfect matching.

A permutation π of {1, 2, . . . , 2n} such that (vi, vπ(i)) ∈ E

for all vi ∈ V .

(57)

The Tutte Matrix^a

Given a graph G = (V, E), construct the 2n × 2n Tutte matrix TG such that

TG_{ij} = xij if (vi, vj) ∈ E and i < j,

TG_{ij} = −xij if (vi, vj) ∈ E and i > j,

TG_{ij} = 0 otherwise.

The Tutte matrix is a skew-symmetric symbolic matrix.

Similar to Proposition 58 (p. 442):

Proposition 60 G has a perfect matching if and only if det(TG) is not identically zero.

^a William Thomas Tutte (1917–2002).
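The same random-substitution idea applies. A sketch (ours) that builds the Tutte matrix with random integer entries and tests whether its determinant vanishes; the cofactor-expansion determinant is only meant for the tiny examples here.

import random

def det(a):
    # Determinant by cofactor expansion along the first row (fine for tiny matrices).
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def probably_has_perfect_matching(edges, num_vertices):
    # edges: a set of unordered pairs (i, j); vertices are 0 .. num_vertices - 1.
    M = 2 * num_vertices * num_vertices
    T = [[0] * num_vertices for _ in range(num_vertices)]
    for e in edges:
        i, j = sorted(e)
        v = random.randrange(1, M)
        T[i][j], T[j][i] = v, -v      # skew-symmetric Tutte matrix with values plugged in
    return det(T) != 0

# A 4-cycle has a perfect matching; a 3-vertex path plus an isolated vertex does not.
print(probably_has_perfect_matching({(0, 1), (1, 2), (2, 3), (3, 0)}, 4))   # True (w.h.p.)
print(probably_has_perfect_matching({(0, 1), (1, 2)}, 4))                   # False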

(58)

William Thomas Tutte (1917–2002)
