
# An Interactive Proof


(1)

What then do you call proof?

— Henry James (1843–1916), The Wings of the Dove (1902)

Leibniz knew what a proof is. Descartes did not.

— Ian Hacking (1973)

(2)

### What Is a Proof?

• A proof convinces a party of a certain claim.

– “x^n + y^n ≠ z^n for all x, y, z ∈ Z+ and n > 2.”

– “Graph G is Hamiltonian.”

– “x^p ≡ x mod p for prime p and p ∤ x.”

• In mathematics, a proof is a ﬁxed sequence of theorems.

– Think of it as a written examination.

• We will extend a proof to cover a proof process by which the validity of the assertion is established.

(3)

### Prover and Verifier

• There are two parties to a proof.

– The prover (Peggy).

– The verifier (Victor).

• Given an assertion, the prover’s goal is to convince the veriﬁer of its validity (completeness).

• The veriﬁer’s objective is to accept only correct assertions (soundness).

• The veriﬁer usually has an easier job than the prover.

• The setup is very much like the Turing test.a

(4)

### Interactive Proof Systems

• An interactive proof for a language L is a sequence of questions and answers between the two parties.

• At the end of the interaction, the veriﬁer decides whether the claim is true or false.

• The veriﬁer must be a probabilistic polynomial-time algorithm.

• The prover runs an exponential-time algorithm.a

– If the prover is not more powerful than the veriﬁer, no interaction is needed!

aSee the problem to Note 12.3.7 on p. 296 and Proposition 19.1 on p. 475, both in the textbook, for alternative complexity assumptions.

(5)

### Interactive Proof Systems (concluded)

• The system decides L if the following two conditions hold for any common input x.

– If x ∈ L, then the probability that x is accepted by the verifier is at least 1 − 2^{−|x|}.

– If x ∈ L, then the probability that x is accepted by the veriﬁer with any prover replacing the original prover is at most 2−| x |.

• Neither the number of rounds nor the lengths of the messages can be more than a polynomial of | x |.

(6)


(7)

### IPa

• IP is the class of all languages decided by an interactive proof system.

• When x ∈ L, the completeness condition can be modiﬁed to require that the veriﬁer accept with certainty without aﬀecting IP.b

• Similar things cannot be said of the soundness condition when x ∉ L.

• Veriﬁer’s coin ﬂips can be public.c

aGoldwasser, Micali, & Rackoﬀ (1985).

bGoldreich, Mansour, & Sipser (1987).

(8)

### The Relations of IP with Other Classes

• NP ⊆ IP.

– IP becomes NP when the veriﬁer is deterministic and there is only one round of interaction.a

• BPP ⊆ IP.

– IP becomes BPP when the veriﬁer ignores the prover’s messages.

• IP = PSPACE.b

aRecall Proposition 40 on p. 328.

bShamir (1990).

(9)

### Graph Isomorphism

• V1 = V2 = { 1, 2, . . . , n }.

• Graphs G1 = (V1, E1) and G2 = (V2, E2) are isomorphic if there exists a permutation π on { 1, 2, . . . , n } so that (u, v) ∈ E1 ⇔ (π(u), π(v)) ∈ E2. (A brute-force check is sketched at the end of this slide.)

• No known polynomial-time algorithms.a

• The problem is in NP (hence IP).

• It is not likely to be NP-complete.b

aThe recent bound of Babai (2015) is 2^{O(log^c n)} for some constant c.
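The definition above translates directly into a brute-force test. The following Python sketch (not from the slides) tries all n! permutations, so it is exponential in n, consistent with the absence of known polynomial-time algorithms; graphs are given as edge lists over vertices 0, 1, . . . , n − 1.

```python
from itertools import permutations

def isomorphic(n, E1, E2):
    """Brute-force test of the definition: is there a permutation pi with
    {u, v} in E1  <=>  {pi(u), pi(v)} in E2?"""
    E1 = {frozenset(e) for e in E1}
    E2 = {frozenset(e) for e in E2}
    for pi in permutations(range(n)):        # try all n! relabelings
        if {frozenset((pi[u], pi[v])) for u, v in map(tuple, E1)} == E2:
            return True
    return False

# The path 0-1-2 is isomorphic to the path 1-0-2 but not to a single edge.
print(isomorphic(3, [(0, 1), (1, 2)], [(1, 0), (0, 2)]))   # True
print(isomorphic(3, [(0, 1), (1, 2)], [(0, 1)]))           # False
```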

(10)

### graph nonisomorphism

• V1 = V2 = { 1, 2, . . . , n }.

• Graphs G1 = (V1, E1) and G2 = (V2, E2) are nonisomorphic if there exist no permutations π on { 1, 2, . . . , n } so that (u, v) ∈ E1 ⇔ (π(u), π(v)) ∈ E2.

• Again, no known polynomial-time algorithms.

– It is in coNP, but how about NP or BPP?

– It is not likely to be coNP-complete.a

• Surprisingly, graph nonisomorphism ∈ IP.b

(11)

### A 2-Round Algorithm

1: Victor selects a random i ∈ { 1, 2 };

2: Victor selects a random permutation π on { 1, 2, . . . , n };

3: Victor applies π on graph Gi to obtain graph H;

4: Victor sends (G1, H) to Peggy;

5: if G1 ≅ H then

6: Peggy sends j = 1 to Victor;

7: else

8: Peggy sends j = 2 to Victor;

9: end if

10: if j = i then

11: Victor accepts; {G1 ≇ G2.}

12: else

13: Victor rejects; {G1 ≅ G2.}

14: end if
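The protocol can be simulated end to end; the Python sketch below is only an illustration (not part of the slides), with Peggy's exponential power played by a brute-force isomorphism test and the two graphs given as toy edge lists.

```python
import random
from itertools import permutations

def relabel(E, pi):
    """Apply the permutation pi to every edge of E."""
    return {frozenset((pi[u], pi[v])) for u, v in map(tuple, E)}

def isomorphic(n, E1, E2):
    """Peggy's exponential-time subroutine: brute-force isomorphism test."""
    E1 = {frozenset(e) for e in E1}
    E2 = {frozenset(e) for e in E2}
    return any(relabel(E1, pi) == E2 for pi in permutations(range(n)))

def one_round(n, G1, G2):
    """One execution of the protocol; returns True iff Victor accepts."""
    i = random.choice([1, 2])                    # Victor's secret choice (Line 1)
    pi = list(range(n)); random.shuffle(pi)      # random permutation (Line 2)
    H = relabel({frozenset(e) for e in (G1 if i == 1 else G2)}, pi)   # Line 3
    j = 1 if isomorphic(n, G1, H) else 2         # Peggy's answer (Lines 5-9)
    return j == i                                # Victor accepts iff j = i

# A nonisomorphic pair: Victor accepts in every round.
G1, G2 = [(0, 1), (1, 2)], [(0, 1)]
print(all(one_round(3, G1, G2) for _ in range(20)))   # True
```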

(12)

### Analysis

• Victor runs in probabilistic polynomial time.

• Suppose G1 ∼= G2.

– Peggy is able to tell which Gi is isomorphic to H, so j = i.

– So Victor always accepts.

• Suppose G1 ≅ G2.

– No matter which i is picked by Victor, Peggy or any prover sees 2 identical copies.

– Peggy or any prover with exponential power has only probability one half of guessing i correctly.

– So Victor erroneously accepts with probability 1/2.
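The soundness error of 1/2 above falls short of the 2^{−|x|} bound in the definition of an interactive proof system. The standard remedy, left implicit here, is sequential repetition: Victor runs the protocol k times with fresh coins and accepts only if every round accepts. A sketch of the calculation, assuming the rounds use independent coins:

```latex
\Pr[\text{all $k$ rounds accept} \mid G_1 \cong G_2] \le (1/2)^{k} = 2^{-k},
\qquad
\Pr[\text{all $k$ rounds accept} \mid G_1 \not\cong G_2] = 1 .
```

Taking k = |x| rounds meets the soundness requirement, and completeness is unaffected because an honest Peggy makes Victor accept every round when G1 ≇ G2.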

(13)

### Knowledge in Proofs

• Suppose I know a satisfying assignment to a satisﬁable boolean expression.

• I can convince Alice of this by giving her the assignment.

• But then I give her more knowledge than is necessary.

– Alice can claim that she found the assignment!

– Login authentication faces essentially the same issue.

– See www.wired.com/wired/archive/1.05/atm pr.html for a famous ATM fraud in the U.S.

(14)

### Knowledge in Proofs (concluded)

• Suppose I always give Alice random bits.

• Alice extracts no knowledge from me by any measure, but I prove nothing.

• Question 1: Can we design a protocol to convince Alice of the knowledge of a secret without revealing anything extra?

• Question 2: How to deﬁne this idea rigorously?

(15)

### Zero Knowledge Proofsa

An interactive proof protocol (P, V ) for language L has the perfect zero-knowledge property if:

• For every veriﬁer V , there is an algorithm M with expected polynomial running time.

• M on any input x ∈ L generates the same probability distribution as the one that can be observed on the communication channel of (P, V ) on input x.

aGoldwasser, Micali, & Rackoﬀ (1985).

(16)

• Zero knowledge is a property of the prover.

– It is the robustness of the prover against attempts of the veriﬁer to extract knowledge via interaction.

– The verifier may deviate arbitrarily (but in polynomial time) from the predetermined program.

– A verifier cannot use the transcript of the interaction to convince a third party of the validity of the claim.

– The proof is hence not transferable.

(17)

• Whatever a verifier can “learn” from the specified prover P via the communication channel could as well be computed by the verifier alone.

• The veriﬁer does not learn anything except “x ∈ L.”

• Zero-knowledge proofs yield no knowledge in the sense that they can be constructed by the veriﬁer who believes the statement, and yet these proofs do convince him.

(18)

• The “paradox” is resolved by noting that it is not the transcript of the conversation that convinces the veriﬁer.

• But the fact that this conversation was held “on line.”

• Computational zero-knowledge proofs are based on complexity assumptions.

– M only needs to generate a distribution that is computationally indistinguishable from the verifier’s view of the interaction.

(19)

• If one-way functions exist, then zero-knowledge proofs exist for every problem in NP.a

• If one-way functions exist, then zero-knowledge proofs exist for every problem in PSPACE.b

• The veriﬁer can be restricted to the honest one (i.e., it follows the protocol).c

• The coins can be public.d

• The digital money Zcash (2016) is based on zero-knowledge proofs.

aGoldreich, Micali, & Wigderson (1986).

bOstrovsky & Wigderson (1993).

(20)

• Let n be a product of two distinct primes.

• Assume extracting the square root of a quadratic residue modulo n is hard without knowing the factors.

• We next present a zero-knowledge proof for the input x ∈ Zn being a quadratic residue modulo n.

(21)

### Zero-Knowledge Proof of Quadratic Residuacity

1: for m = 1, 2, . . . , log2 n do

2: Peggy chooses a random v ∈ Zn and sends y = v^2 mod n to Victor;

3: Victor chooses a random bit i and sends it to Peggy;

4: Peggy sends z = u^i v mod n, where u is a square root of x; {u^2 ≡ x mod n.}

5: Victor checks if z^2 ≡ x^i y mod n;

6: end for

7: Victor accepts x if Line 5 is conﬁrmed every time;
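A minimal Python sketch of this protocol, under toy parameters chosen only for illustration (n = 77 = 7 × 11 and secret square root u = 9); a real instance would use a large n = pq for which extracting square roots, i.e., factoring, is presumed hard, and v would be drawn from the invertible residues.

```python
import random

n = 77                      # toy modulus n = 7 * 11 (illustration only)
u = 9                       # Peggy's secret square root
x = (u * u) % n             # public input: a quadratic residue mod n

def one_round():
    """One round of the protocol; returns True iff Victor's check in Line 5 passes."""
    v = random.randrange(1, n)                     # Peggy's random v (Line 2)
    y = (v * v) % n                                # Peggy sends y = v^2 mod n
    i = random.randrange(2)                        # Victor's challenge bit (Line 3)
    z = (pow(u, i, n) * v) % n                     # Peggy sends z = u^i v mod n (Line 4)
    return (z * z) % n == (pow(x, i, n) * y) % n   # z^2 = x^i y (mod n)? (Line 5)

# An honest Peggy, knowing u, passes every one of the ~log2(n) rounds.
print(all(one_round() for _ in range(n.bit_length())))   # True
```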

(22)

### A Useful Corollary of Lemma 81 (p. 680)

Corollary 82 Let n = pq be a product of two distinct primes. (1) If x and y are both quadratic residues modulo n, then xy ∈ Zn is a quadratic residue modulo n. (2) If x is a quadratic residue modulo n and y is a quadratic nonresidue modulo n, then xy ∈ Zn is a quadratic nonresidue modulo n.

• Suppose x and y are both quadratic residues modulo n.

• Let x ≡ a^2 mod n and y ≡ b^2 mod n.

• Now xy is a quadratic residue as xy ≡ (ab)^2 mod n.

(23)

### The Proof (concluded)

• Suppose x is a quadratic residue modulo n and y is a quadratic nonresidue modulo n.

• By Lemma 81 (p. 680), (x | p) = (x | q) = 1 but, say, (y | p) = −1.

• Now xy is a quadratic nonresidue as (xy | p) = −1, again by Lemma 81 (p. 680).

(24)

### Analysis

• Suppose x is a quadratic residue.

– Then x’s square root u can be computed by Peggy.

– Peggy can answer all challenges.

– Now,

z^2 ≡ (u^i)^2 v^2 ≡ u^{2i} v^2 ≡ x^i y mod n.

– So Victor will accept x.

(25)

### Analysis (continued)

• Suppose x is a quadratic nonresidue.

– Corollary 82 (p. 708) says if a is a quadratic residue, then xa is a quadratic nonresidue.

– As y is a quadratic residue, x^i y can be a quadratic residue (see Line 5) only when i = 0.

– Peggy can answer only one of the two possible challenges, when i = 0.a

– So Peggy will be caught in any given round with probability one half.

aLine 5 (z^2 ≡ x^i y mod n) cannot equate a quadratic residue z^2 with a quadratic nonresidue x^i y mod n when i = 1.

(26)

### Analysis (continued)

• How about the claim of zero knowledge?

• The transcript between Peggy and Victor when x is a quadratic residue can be generated without Peggy!

• Here is how.

• Suppose x is a quadratic residue.a

• In each round of interaction with Peggy, the transcript is a triplet (y, i, z).

• We present an efficient Bob that generates (y, i, z) with the same probability distribution without accessing Peggy’s power.

(27)

### Analysis (concluded)

1: Bob chooses a random z ∈ Zn;

2: Bob chooses a random bit i;

3: Bob calculates y = z^2 x^{−i} mod n;a

4: Bob writes (y, i, z) into the transcript;

aRecall Line 5 on p. 707: Victor checks if z^2 ≡ x^i y mod n.
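A Python sketch of Bob's patching step, reusing the toy modulus from the earlier sketch (an assumption of the example); the modular inverse x^{−1} mod n is obtained with Python's three-argument pow, which requires gcd(x, n) = 1.

```python
import random

n = 77                      # toy modulus n = 7 * 11 (illustration only)
x = 4                       # a quadratic residue mod n (4 = 81 mod 77), gcd(x, n) = 1

def simulated_round():
    """Bob fabricates (y, i, z) without knowing a square root of x:
    pick z and i first, then patch y = z^2 x^(-i) mod n."""
    z = random.randrange(1, n)                # "Peggy's answer" chosen first (Line 1)
    i = random.randrange(2)                   # "Victor's challenge" chosen next (Line 2)
    y = (z * z * pow(x, -i, n)) % n           # patch y (Line 3); pow(x, -1, n) is the inverse
    return (y, i, z)

y, i, z = simulated_round()
print((z * z) % n == (pow(x, i, n) * y) % n)  # True: the triple passes Victor's check
```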

(28)

• Assume x is a quadratic residue.

• For (y, i, z), y is a random quadratic residue, i is a random bit, and z is a random number.

• Bob cheats because (y, i, z) is not generated in the same order as in the original transcript.

– Bob picks Peggy’s answer z ﬁrst.

– Bob then picks Victor’s challenge i.

– Bob ﬁnally patches the transcript.

(29)

• So it is not the transcript that convinces Victor, but the fact that the conversation with Peggy is held “on line.”

• The same holds even if the transcript was generated by a cheating Victor’s interaction with (honest) Peggy.

• But we skip the details.a

(30)

### Zero-Knowledge Proof of 3 Colorabilitya

1: for i = 1, 2, . . . , | E |^2 do

2: Peggy chooses a random permutation π of the 3-coloring φ;

3: Peggy samples encryption schemes randomly, commitsb them, and sends π(φ(1)), π(φ(2)), . . . , π(φ(| V |)) encrypted to Victor;

4: Victor chooses at random an edge e ∈ E and sends it to Peggy for the coloring of the endpoints of e;

5: if e = (u, v) ∈ E then

6: Peggy reveals the colors π(φ(u)) and π(φ(v)) and “proves” that they correspond to their encryptions;

7: else

8: Peggy stops;

9: end if

aGoldreich, Micali, & Wigderson (1986).

bContributed by Mr. Ren-Shuo Liu (D98922016) on December 22,

(31)

10: if the “proof” provided in Line 6 is not valid then

11: Victor rejects and stops;

12: end if

13: if π(φ(u)) = π(φ(v)) or π(φ(u)), π(φ(v)) ∉ { 1, 2, 3 } then

14: Victor rejects and stops;

15: end if

16: end for

17: Victor accepts;
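A Python sketch of one round of the protocol; the hash-plus-nonce commitment below merely stands in for the randomly sampled encryption schemes of Line 3, and the graph, coloring, and helper names are toy choices of this illustration rather than part of the protocol.

```python
import hashlib, os, random

# A toy 3-colorable graph (a triangle with a pendant vertex) and a legal coloring phi.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
phi = {0: 1, 1: 2, 2: 3, 3: 1}

def commit(color):
    """Commitment to a color: hash of a fresh nonce and the color (stand-in for Line 3)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([color])).hexdigest(), nonce

def one_round():
    """Peggy permutes the colors and commits; Victor opens one random edge."""
    perm = random.sample([1, 2, 3], 3)                    # random permutation pi of the colors
    pi = {c: perm[c - 1] for c in (1, 2, 3)}
    colors = {v: pi[phi[v]] for v in V}
    commitments = {v: commit(colors[v]) for v in V}       # sent to Victor (Line 3)
    u, v = random.choice(E)                               # Victor's random edge (Line 4)
    for w in (u, v):                                      # Peggy opens; Victor verifies (Lines 6, 10)
        digest, nonce = commitments[w]
        assert hashlib.sha256(nonce + bytes([colors[w]])).hexdigest() == digest
    return colors[u] != colors[v] and {colors[u], colors[v]} <= {1, 2, 3}   # Line 13

print(all(one_round() for _ in range(len(E) ** 2)))       # True for a legal coloring
```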

(32)

### Analysis

• If the graph is 3-colorable and both Peggy and Victor follow the protocol, then Victor always accepts.

• Suppose the graph is not 3-colorable and Victor follows the protocol.

• Let e be an edge that is not colored legally.

• Victor will pick it with probability 1/m per round, where m = | E |.

• Then however Peggy plays, Victor will reject with probability at least 1/m per round.

(33)

### Analysis (concluded)

• So Victor will accept with probability at most

(1 − m^{−1})^{m^2} ≤ e^{−m}

(justified below).

• Thus the protocol is a valid IP protocol.

• This protocol yields no knowledge to Victor as all he gets is a bunch of random pairs.

• The proof that the protocol is zero-knowledge to any veriﬁer is intricate.a

aBut no longer necessary because of Vadhan (2006).
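A one-line justification of the acceptance bound above, using the standard inequality 1 + t ≤ e^t with t = −1/m:

```latex
\left(1-\tfrac{1}{m}\right)^{m^{2}}
  \;\le\; \left(e^{-1/m}\right)^{m^{2}}
  \;=\; e^{-m}.
```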

(34)

• Each π(φ(i)) is encrypted by a diﬀerent cryptosystem in Line 3.a

– Otherwise, the coloring will be revealed in Line 6.

• Each edge e must be picked randomly.b

– Otherwise, Peggy will know Victor’s game plan and plot accordingly.

aContributed by Ms. Yui-Huei Chang (R96922060) on May 22, 2008

bContributed by Mr. Chang-Rong Hung (R96922028) on May 22, 2008

(35)

### Approximability

(36)

All science is dominated by the idea of approximation.

— Bertrand Russell (1872–1970)

(37)

Just because the problem is NP-complete does not mean that you should not try to solve it.

— Stephen Cook (2002)

(38)

### Tackling Intractable Problems

• Many important problems are NP-complete or worse.

• Heuristics have been developed to attack them.

• They are approximation algorithms.

• How good are the approximations?

– We are looking for theoretically guaranteed bounds, not “empirical” bounds.

• Are there NP problems that cannot be approximated well (assuming NP ≠ P)?

• Are there NP problems that cannot be approximated at all (assuming NP ≠ P)?

(39)

### Some Definitions

• Given an optimization problem, each problem instance x has a set of feasible solutions F (x).

• Each feasible solution s ∈ F (x) has a cost c(s) ∈ Z+.

– Here, cost refers to the quality of the feasible solution, not the time required to obtain it.

– It is our objective function: total distance, number of satisﬁed clauses, cut size, etc.

(40)

### Some Definitions (concluded)

• The optimum cost is opt(x) = min_{s∈F(x)} c(s) for a minimization problem.

• It is opt(x) = max_{s∈F(x)} c(s) for a maximization problem.

(41)

### Approximation Algorithms

• Let (polynomial-time) algorithm M on x return a feasible solution.

• M is an ε-approximation algorithm, where ε ≥ 0, if for all x,

| c(M(x)) − opt(x) | / max(opt(x), c(M(x))) ≤ ε.

– For a minimization problem,

( c(M(x)) − min_{s∈F(x)} c(s) ) / c(M(x)) ≤ ε.

– For a maximization problem,

( max_{s∈F(x)} c(s) − c(M(x)) ) / max_{s∈F(x)} c(s) ≤ ε. (17)

(42)

### Lower and Upper Bounds

• For a minimization problem,

min_{s∈F(x)} c(s) ≤ c(M(x)) ≤ min_{s∈F(x)} c(s) / (1 − ε).

• For a maximization problem,

(1 − ε) × max_{s∈F(x)} c(s) ≤ c(M(x)) ≤ max_{s∈F(x)} c(s). (18)
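These bounds follow directly from the definition of an ε-approximation algorithm. For the minimization case (the maximization case is symmetric):

```latex
\frac{c(M(x)) - \mathrm{opt}(x)}{c(M(x))} \le \epsilon
\;\Longleftrightarrow\;
1 - \frac{\mathrm{opt}(x)}{c(M(x))} \le \epsilon
\;\Longleftrightarrow\;
c(M(x)) \le \frac{\mathrm{opt}(x)}{1-\epsilon}
\qquad (0 \le \epsilon < 1).
```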

(43)

### Lower and Upper Bounds (concluded)

• ε ranges between 0 (best) and 1 (worst).

• For minimization problems, an ε-approximation algorithm returns solutions within

[ opt, opt/(1 − ε) ].

• For maximization problems, an ε-approximation algorithm returns solutions within

[ (1 − ε) × opt, opt ].

(44)

### Approximation Thresholds

• For each NP-complete optimization problem, we shall be interested in determining the smallest ε for which there is a polynomial-time ε-approximation algorithm.

• But sometimes ε has no minimum value.

• The approximation threshold is the greatest lower bound of all ε ≥ 0 such that there is a polynomial-time ε-approximation algorithm.

• By a standard theorem in real analysis, such a threshold exists.a

(45)

### Approximation Thresholds (concluded)

• The approximation threshold of an optimization problem is anywhere between 0 (approximation to any desired degree) and 1 (no approximation is possible).

• If P = NP, then all optimization problems in NP have an approximation threshold of 0.

• So assume P ≠ NP for the rest of the discussion.

(46)

### Approximation Ratio

• ε-approximation algorithms can also be measured via the approximation ratio:a

c(M(x)) / opt(x).

• For a minimization problem, the approximation ratio is

1 ≤ c(M(x)) / min_{s∈F(x)} c(s) ≤ 1 / (1 − ε). (19)

• For a maximization problem, the approximation ratio is

1 − ε ≤ c(M(x)) / max_{s∈F(x)} c(s) ≤ 1. (20)

(47)

### Approximation Ratio (concluded)

• Suppose there is an approximation algorithm that achieves an approximation ratio of θ.

– For a minimization problem, it implies a (1 − θ^{−1})-approximation algorithm by Eq. (19).

– For a maximization problem, it implies a (1 − θ)-approximation algorithm by Eq. (20).

(48)

### node cover

• node cover seeks the smallest C ⊆ V in graph G = (V, E) such that for each edge in E, at least one of its endpoints is in C.

• A heuristic to obtain a good node cover is to iteratively move a node with the highest degree to the cover.

• This turns out to produce an approximation ratio ofa

c(M(x)) / opt(x) = Θ(log n).

• So it is not an ε-approximation algorithm for any constant ε < 1 (see p. 733).

(49)

### A 0.5-Approximation Algorithma

1: C := ∅;

2: while E ≠ ∅ do

3: Delete an arbitrary edge [u, v ] from E;

4: Add u and v to C; {Add 2 nodes to C each time.}

5: Delete edges incident with u or v from E;

6: end while

7: return C;

aGavril (1974).
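A direct Python transcription of the algorithm above; the edge-list input and the function name are choices of this sketch.

```python
def vertex_cover_2approx(edges):
    """Gavril's heuristic: repeatedly take an arbitrary remaining edge [u, v],
    put both endpoints into C, and drop every edge incident with u or v."""
    E = set(map(frozenset, edges))
    C = set()
    while E:                                              # Line 2
        u, v = tuple(E.pop())                             # an arbitrary edge (Line 3)
        C.update((u, v))                                  # add both endpoints (Line 4)
        E = {e for e in E if u not in e and v not in e}   # Line 5
    return C

# Star graph: the optimum cover is {0}; the algorithm returns one edge's two endpoints.
print(vertex_cover_2approx([(0, 1), (0, 2), (0, 3)]))     # e.g. {0, 1}
```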

(50)

### Analysis

• It is easy to see that C is a node cover.

• C contains | C |/2 edges.a

• No two edges of C share a node.b

• Any node cover C′ must contain at least one node from each of the edges of C.

– If there is an edge in C both of whose ends are outside C′, then C′ will not be a cover.

aThe edges deleted in Line 3.

b C as a set of edges is a maximal matching.

(51)

(52)

### Analysis (concluded)

• This means that opt(G) ≥ | C |/2.

• The approximation ratio is hence

| C | / opt(G) ≤ 2.

• So we have a 0.5-approximation algorithm.a

• And the approximation threshold is therefore ≤ 0.5.

aRecall p. 733.

(53)

### The 0.5 Bound Is Tight for the Algorithma

[Figure omitted; its label reads “Optimal cover.”]

aContributed by Mr. Jenq-Chung Li (R92922087) on December 20,

(54)

### Remarks

• The approximation threshold is at leasta

1 − (10√5 − 21)^{−1} ≈ 0.2651.

• The approximation threshold is 0.5 if one assumes the unique games conjecture (ugc).b

• This ratio 0.5 is also the lower bound for any “greedy” algorithms.c

bKhot & Regev (2008).

(55)

### Maximum Satisfiability

• Given a set of clauses, maxsat seeks the truth assignment that satisfies the most clauses.

• max2sat is already NP-complete (p. 349), so maxsat is NP-complete.

• Consider the more general k-maxgsat for constant k.

– Let Φ = { φ1, φ2, . . . , φm } be a set of boolean expressions in n variables.

– Each φi is a general expression involving up to k variables.

– k-maxgsat seeks the truth assignment that satisfies the most expressions.
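A brute-force illustration of the maxsat objective in Python (the signed-integer clause encoding is an assumption of this sketch): enumerate all 2^n assignments and report the largest number of satisfied clauses.

```python
from itertools import product

def max_sat_value(n, clauses):
    """Literal +i (-i) means variable i is true (false); a clause is a list of literals."""
    best = 0
    for bits in product([False, True], repeat=n):         # all 2^n truth assignments
        satisfied = sum(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                        for clause in clauses)
        best = max(best, satisfied)
    return best

# The four 2-literal clauses over x1, x2 below cannot all be satisfied; the maximum is 3.
print(max_sat_value(2, [[1, 2], [-1, 2], [1, -2], [-1, -2]]))   # 3
```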
