
Interactive Protocols and Proof Systems

Definition 2.5.1 (Interactive Protocols). An interactive protocol is a pair of algorithms (P, V) for two communicating parties Peggy and Vic. It is common to call Peggy the prover and Vic the verifier. The parties send messages back and forth and perform some computation as prescribed by the specification of the protocol. Let x denote the common input, and let w and z denote the respective private inputs of P and V. The output of V is denoted by ⟨P(w), V(z)⟩(x). Vic's view of a protocol with Peggy consists of the entire list of parameters Vic “sees” during the execution of the protocol and is denoted view^P_V(x). This includes all communicated values, Vic's inputs and outputs, as well as all computations and random choices made by Vic.
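
As a small illustrative aside, not part of the formal definition, the contents of Vic's view can be mirrored by a simple record; the class name and fields below are hypothetical and chosen only to match the quantities listed above.

```python
# Illustrative sketch only: a record of everything Vic "sees" during a run
# (Definition 2.5.1). The field names are assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class VerifierView:
    common_input: bytes                               # the common input x
    private_input: bytes                              # Vic's private input z
    random_coins: list = field(default_factory=list)  # Vic's random choices
    messages: list = field(default_factory=list)      # all communicated values
    output: object = None                             # Vic's output
```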

More formally, the communicating parties are assumed to be two Turing machines P and V. Both machines have their own working tape, input tape, and random tape, and both are able to read the same input from a shared read-only tape. The exchange of messages takes place via two communication tapes. One tape is a write-only tape for P and a read-only one for V, while the other is a write-only tape for V and a read-only one for P.

In an interactive protocol, the two parties alternately perform rounds that consist of:

1. Receive a message from the opposite party.

2. Perform some computation.

3. Send a message to the opposite party.

We remark that an interactive protocol may specify a sequence of rounds, which is then repeated a specified number of times.
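
As a small illustrative sketch, the round structure above can be written as an alternating loop; the Party interface and the list-based communication tapes below are assumptions made only for this example, not part of the formal model.

```python
# Illustrative sketch of the round structure: each party repeatedly
# (1) receives a message, (2) performs some computation, (3) sends a reply.
# The Party base class and the list-based "communication tapes" are assumptions.
class Party:
    def compute(self, message):
        """Perform some computation on the received message and return a reply."""
        raise NotImplementedError

def run_rounds(peggy, vic, opening_message, num_rounds):
    to_vic, to_peggy = [opening_message], []   # one one-directional tape per party
    for _ in range(num_rounds):
        to_peggy.append(vic.compute(to_vic.pop(0)))    # Vic: receive, compute, send
        to_vic.append(peggy.compute(to_peggy.pop(0)))  # Peggy: receive, compute, send
    return to_vic.pop(0)                               # the last message sent
```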

If the prover and the verifier follow the behavior prescribed in the protocol, they are called an honest prover and an honest verifier, respectively. A prover that does not know the secret but tries to convince the verifier otherwise is called a dishonest prover.

A verifier that does not follow the behavior specified in the protocol is called a dishonest verifier. Dishonest parties are not restricted to storing only their specified output, but are assumed to store their entire view. Note that each party, whether honest or not, complies with the syntax of the communication interface, because any deviation from the syntax is immediately detected. In other words, a party may only be dishonest in her private computations and in the resulting data that she transmits. Moreover, whenever the protocol specifies that a party must verify a condition and that verification fails, the protocol is stopped and all parties are notified.

We introduce some notation that will be used later when analyzing the properties of a protocol. Let (P, V) be an interactive protocol. We denote by P the algorithm that an honest prover executes, by P* the algorithm that a (possibly dishonest) prover executes, by V the algorithm of an honest verifier, and by V* the algorithm of a general (possibly dishonest) verifier. To analyze the claim that the prover manifests knowledge of a secret, a third party, called a knowledge extractor, is often used.

This knowledge extractor interacts with the prover but, in contrast to a possibly dishonest verifier, has the ability to reset and restart the prover at will. Let K^{P*}(x) denote the output of the knowledge extractor K that is given oracle access to P*, i.e., that can reset and restart P* on input x. We remark that in complexity theory, M^{(·)}(·) often denotes an oracle machine M that makes oracle queries. The running time of an oracle machine is the number of steps made during its computation, where the oracle's reply to each query is obtained in a single step.
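
To illustrate what oracle access with resets might look like operationally, here is a minimal sketch; the ProverOracle wrapper and the prover's respond method are hypothetical names introduced only for this example.

```python
# Illustrative sketch of the extractor's oracle access to P*: it may query the
# prover, record the answers, reset it, and query it again on other messages.
# The wrapped prover object and its respond() method are assumptions.
import copy

class ProverOracle:
    def __init__(self, prover):
        self._snapshot = copy.deepcopy(prover)   # initial configuration of P*
        self.prover = copy.deepcopy(prover)
        self.queries = 0                         # each oracle query costs one step

    def query(self, message):
        self.queries += 1
        return self.prover.respond(message)

    def reset(self):
        """Rewind P* to its initial configuration (same input and random tape)."""
        self.prover = copy.deepcopy(self._snapshot)
```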

Definition 2.5.2 (Interactive Proof Systems). An interactive proof system (P, V) is a protocol between a prover P and a verifier V. The prover P is computationally unbounded, while the verifier V runs a probabilistic polynomial-time algorithm. The input to the protocol is a string x, known to both parties. The two exchange a sequence of messages m_1, m_2, ..., m_{2|x|^k}, where the prover sends the odd-numbered messages and the verifier the even-numbered ones; for concreteness, the prover goes first. All messages are polynomially long: |m_i| ≤ |x|^k. The transcript, denoted tr_{P,V}(x), is defined as the sequence of all exchanged messages (m_1, m_2, ..., m_{2|x|^k}).

The messages are defined as follows: m_1 = P(x); that is, the first message is produced by the prover from the input x alone. Subsequently, for all i ≤ |x|^k, m_{2i} = V(x, m_1, ..., m_{2i−1}, r_i) and m_{2i−1} = P(x, m_1, ..., m_{2i−2}), where r_i is the polynomially long random string used by the verifier at the i-th exchange. Notice that the prover does not know r_i. Each even-numbered message is thus computed by the verifier from the input, r_i, and all previous messages, while each odd-numbered one is computed by the prover from the input and all previous messages. Finally, the last message m_{2|x|^k} ∈ {0, 1} encodes the verifier's decision on the common input: m_{2|x|^k} = 1 is interpreted as “accept,” whereas m_{2|x|^k} = 0 is interpreted as “reject.”
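
As an illustrative sketch of this message schedule (assuming P and V are callables with the signatures shown, which is a simplification of the Turing-machine model above):

```python
# Illustrative sketch only: odd-numbered messages come from the prover,
# even-numbered ones from the verifier, and the verifier's i-th message uses a
# fresh random string r_i. P and V are assumed callables, not Turing machines.
import secrets

def transcript(P, V, x, k=1):
    """Return tr_{P,V}(x) = (m_1, ..., m_{2*len(x)**k})."""
    n = len(x) ** k
    messages = []
    for i in range(1, n + 1):
        messages.append(P(x, tuple(messages)))        # m_{2i-1} = P(x, m_1..m_{2i-2})
        r_i = secrets.token_bytes(n)                  # polynomially long randomness
        messages.append(V(x, tuple(messages), r_i))   # m_{2i} = V(x, m_1..m_{2i-1}, r_i)
    return messages   # the last entry m_{2n} ∈ {0, 1} is V's accept/reject decision
```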

(P, V) is an interactive proof system for a language L if it satisfies the following two conditions:

1. Completeness: For every x ∈ L,

Pr[⟨P, V⟩(x) = 1] ≥ 2/3.

2. Soundness: For every x ∉ L and every prover P*,

Pr[⟨P*, V⟩(x) = 1] ≤ 1/3.

Note that for the soundness condition, V must be able to resist any attempt at cheating by any prover.
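
As a purely illustrative aside, the two probability bounds can be checked empirically for a concrete protocol by running it many times; run_once below is an assumed helper that executes one independent run and returns the verifier's decision bit.

```python
# Illustrative only: a Monte Carlo estimate of Pr[<P, V>(x) = 1]. For an honest
# prover and x in L the estimate should be at least about 2/3; for x not in L
# and any prover, at most about 1/3 (up to sampling error).
def acceptance_rate(run_once, trials=10_000):
    return sum(run_once() for _ in range(trials)) / trials
```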

Definition 2.5.3 (The Class IP). The complexity class IP is the set of languages that have interactive proof systems.

When a protocol is designed for use in practice, a prover usually cannot be more computationally powerful than probabilistic polynomial time. It therefore makes sense to consider the situation where the prover's computational power is also limited to probabilistic polynomial time, and where the prover's advantage comes from an auxiliary input called a “witness.” More formally:

Definition 2.5.4 (Computationally Sound Proof Systems, Interactive Arguments). An interactive protocol (P, V) is a computationally sound proof system (or an interactive argument) for a language L if both algorithms are probabilistic polynomial-time with auxiliary inputs and the following two conditions hold:

1. Completeness: For every x ∈ L,

Pr[⟨P(w), V(z)⟩(x) = 1] ≥ 2/3.

2. Soundness: For every x ∉ L and every probabilistic polynomial-time prover P*,

Pr[⟨P*(w), V(z)⟩(x) = 1] ≤ 1/3.

Interactive proofs of knowledge allow the prover to prove to the verifier that he knows the “witness,” not merely that one exists. The concept of proofs of knowledge was first mentioned as a remark in [110]. Formal definitions were first given by Feige, Fiat, and Shamir [87] and by Tompa and Woll [215], and further refined by Feige and Shamir [88] and by Bellare and Goldreich [8]. Here we give a definition of a proof of knowledge similar to that presented in [8, 88].

Definition 2.5.5 (Interactive Proof of Knowledge). Let R ⊆ {0, 1}* × {0, 1}* be a polynomially bounded binary relation and let L_R = {x : (x, w) ∈ R for some w} be the language defined by R. An interactive proof of knowledge for the relation R is a protocol (P, V) that satisfies the following two conditions:

1. Completeness: If (x, w) ∈ R, then Pr[⟨P(w), V⟩(x) = 1] = 1.

2. Validity: There exists a probabilistic expected polynomial-time algorithm K (the knowledge extractor) such that for every prover P*, every polynomial Q, and all sufficiently large x ∈ L_R,

Pr[(x, K^{P*}(x)) ∈ R] ≥ Pr[⟨P*, V⟩(x) = 1] − 1/Q(|x|).

The probabilities are taken over the random choices of P* and K on the left-hand side, and over the random choices of P* and V on the right-hand side.
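
To make the knowledge extractor concrete, the following sketch, which is illustrative and not taken from the text, shows Schnorr's protocol for proving knowledge of a discrete logarithm w with h = g^w, together with an extractor that rewinds the prover: from two accepting transcripts (t, c, s) and (t, c', s') sharing the same commitment t but using distinct challenges, it computes w = (s − s')/(c − c') mod q. The toy parameters and the Prover interface are assumptions made for this example.

```python
# Illustrative sketch only: Schnorr's protocol as a proof of knowledge of w with
# h = g^w mod p, and an extractor that rewinds the prover. The toy parameters
# (p, q, g) and the Prover class are assumptions made for this example.
import random

p, q, g = 23, 11, 4          # tiny group: g = 4 has prime order q = 11 in Z_23^*

class Prover:
    def __init__(self, w):
        self.w = w                            # the witness (secret exponent)
    def commit(self):
        self.r = random.randrange(q)          # fresh randomness r
        return pow(g, self.r, p)              # commitment t = g^r mod p
    def respond(self, c):
        return (self.r + c * self.w) % q      # response s = r + c*w mod q

def verify(h, t, c, s):
    """Verifier's check: g^s == t * h^c (mod p)."""
    return pow(g, s, p) == (t * pow(h, c, p)) % p

def extract(prover, h):
    """Rewind the prover: reuse the same commitment t with two distinct challenges."""
    t = prover.commit()
    c1, c2 = 1, 2                             # any two distinct challenges in Z_q
    s1 = prover.respond(c1)
    s2 = prover.respond(c2)                   # rewinding: same r, different challenge
    assert verify(h, t, c1, s1) and verify(h, t, c2, s2)
    return ((s1 - s2) * pow((c1 - c2) % q, -1, q)) % q   # w = (s1-s2)/(c1-c2) mod q

w = 7
h = pow(g, w, p)                              # public value h = g^w
assert extract(Prover(w), h) == w             # the extractor recovers the witness
```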

We note that the above definition makes no requirements for the case x ∉ L_R, because a proof of knowledge for R does not necessarily give an interactive proof of membership in L_R. Hence, soundness (i.e., a bound on the prover's ability to lead the verifier to accept an x ∉ L_R) is not required. If soundness is necessary, or if the protocol may be run on an input x ∉ L_R, then the following additional property should be satisfied.

Soundness: For every prover P* and every x ∉ L_R,

Pr[⟨P*, V⟩(x) = 1] < 1/2

holds. The probabilities are taken over all random choices of P* and V.

If a protocol satisfies this soundness property, the probability of accepting an input x ∉ L_R can be made arbitrarily small by repeating the protocol sequentially a sufficient number of times.
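
For instance, t sequential repetitions, where the verifier accepts only if every repetition accepts, push the cheating probability below (1/2)^t; the short calculation below is an illustrative sketch of how many repetitions a target error requires.

```python
# Illustrative only: with soundness error at most 1/2 per run, t sequential
# repetitions (accept only if all runs accept) leave error at most (1/2)**t.
def repetitions_needed(target_error, per_run_error=0.5):
    """Smallest t with per_run_error**t <= target_error."""
    t, err = 0, 1.0
    while err > target_error:
        err *= per_run_error
        t += 1
    return t

print(repetitions_needed(2 ** -80))   # -> 80 repetitions for error at most 2^-80
```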