2.1.2 Computational Complexity

The efficiency of an algorithm is measured with respect to the resources required to solve the problem. These resources may include time, storage space, random bits, the number of processors, etc. Typically, the main focus is time. The running time of an algorithm on a particular input is the number of steps executed, expressed as a function of the input size. The worst-case running time of an algorithm is an upper bound on the running time for any input. The average-case running time of an algorithm is the average running time over all inputs of a fixed size. It is often difficult to derive the exact running time of an algorithm; to compare the running times of algorithms, the standard asymptotic notation is used.

Definition 2.1.3 (Order Notation).

1. (Asymptotic upper bound) f(n) = O(g(n)) if there exist a positive constant c and a positive integer n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0.

2. (Asymptotic lower bound) f(n) = Ω(g(n)) if there exist a positive constant c and a positive integer n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0.

3. (Asymptotic tight bound) f(n) = Θ(g(n)) if there exist positive constants c1 and c2 and a positive integer n0 such that c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0.

4. (The o-notation) f(n) = o(g(n)) if for every constant c > 0 there exists a constant n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0.

Intuitively, f(n) = O(g(n)) means that f grows asymptotically no faster than g, and f(n) = o(g(n)) means that g is an upper bound for f that is not asymptotically tight. f(n) = Ω(g(n)) means that f grows asymptotically at least as fast as g, to within a constant multiple. If both f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)). The expression o(1) is often used to denote a term f(n) with lim_{n→∞} f(n) = 0.
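To make the notation concrete, the following sketch (the function f(n) = 3n² + 10n and the comparison functions are chosen purely for illustration and do not appear elsewhere in this thesis) tabulates ratios that witness the definitions: f(n)/n² approaches the constant 3, consistent with f(n) = Θ(n²), while f(n)/n³ tends to 0, consistent with f(n) = o(n³).

```python
# Illustration of order notation; f and the comparison functions are
# hypothetical and chosen only to make the definitions concrete.

def f(n):
    return 3 * n**2 + 10 * n

for n in [10, 100, 1000, 10_000]:
    theta_ratio = f(n) / n**2     # tends to the constant 3  =>  f(n) = Theta(n^2)
    little_o_ratio = f(n) / n**3  # tends to 0               =>  f(n) = o(n^3)
    print(f"n={n:>6}  f(n)/n^2 = {theta_ratio:.4f}  f(n)/n^3 = {little_o_ratio:.6f}")
```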

Definition 2.1.4. A polynomial-time algorithm is an algorithm that has a worst-case running time of the form O(n^k), where n is the input size and k is a constant.

Any algorithm whose running time cannot be so bounded is called an exponential-time algorithm.

Definition 2.1.5. A probabilistic polynomial-time algorithm is a probabilistic algorithm that has a running time of the form O(n^k), where n is the input size and k is a constant. The running time of a probabilistic algorithm is measured as the number of steps in the underlying model of computation, i.e., the number of steps of the probabilistic Turing machine. Tossing a coin counts as one step in this model.

Let A be a probabilistic algorithm. The worst-case running time timeA(x) of A on input x is the maximum number of steps that A needs to generate the output A(x). The expected running time etimeA(x) of A on input x is the average number of steps that A needs to generate the output A(x), i.e.,

etimeA(x) = Σ_{t=1}^{∞} t · Pr[timeA(x) = t].

Let P be a computational problem and A be a probabilistic algorithm for P. The worst-case running time of A for P is

tA = max{timeA(x) : x an instance of P}.

The expected running time of A for P is

etA = max{etimeA(x) : x an instance of P}.
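As a toy illustration of these quantities, consider a hypothetical probabilistic procedure that tosses a fair coin until the first head appears, counting each toss as one step. Then Pr[timeA(x) = t] = 2^(−t), the sum above evaluates to Σ t · 2^(−t) = 2, and its worst-case running time is unbounded. The sketch below (all names are illustrative) estimates the expected running time empirically.

```python
import random

def coin_until_heads():
    """Toy probabilistic procedure: toss a fair coin until heads appears.
    Each toss counts as one step; the number of tosses is the running time."""
    steps = 0
    while True:
        steps += 1
        if random.random() < 0.5:  # "heads"
            return steps

# Empirical estimate of etime = sum_{t>=1} t * Pr[time = t] = sum_{t>=1} t * 2^-t = 2
trials = 100_000
average_steps = sum(coin_until_heads() for _ in range(trials)) / trials
print(f"empirical expected running time ≈ {average_steps:.3f} (exact value: 2)")
```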

Definition 2.1.6 (Monte Carlo Algorithms/Las Vegas Algorithms). Let P be a computational problem.

1. A Monte Carlo algorithm A for P is a probabilistic algorithm whose running time timeA(x) for all instances x of P is bounded by a polynomial Q(|x|) and which yields a correct answer to P with a probability of at least 2/3.

2. A Las Vegas algorithm A for P is a probabilistic algorithm whose expected running time etimeA(x) for all instances x of P is bounded by a polynomial Q(|x|) and which always yields a correct answer to P. (A toy sketch contrasting the two notions follows.)
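A classic toy problem (not taken from this thesis, used here only as a hedged sketch) illustrates the distinction: given an array in which half the entries are 1 and half are 0, find an index holding a 1. Sampling indices until a 1 is found is a Las Vegas algorithm: it is always correct and its expected number of samples is 2. Sampling a fixed number k of indices is a Monte Carlo algorithm: its running time is bounded, but it may fail with probability at most (1/2)^k.

```python
import random

def las_vegas_find_one(a):
    """Las Vegas: always returns an index i with a[i] == 1.
    Expected number of samples is 2 when half the entries are 1."""
    while True:
        i = random.randrange(len(a))
        if a[i] == 1:
            return i

def monte_carlo_find_one(a, k=10):
    """Monte Carlo: running time bounded by k samples, but the answer
    may be wrong; failure probability is at most (1/2)**k here."""
    for _ in range(k):
        i = random.randrange(len(a))
        if a[i] == 1:
            return i
    return None  # possible (rare) incorrect answer: "no 1 found"

a = [0, 1] * 1000  # illustrative input
print(las_vegas_find_one(a), monte_carlo_find_one(a))
```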

Definition 2.1.7. A subexponential-time algorithm is an algorithm that has a worst-case running time of the form O(e^((b+o(1)) n^a (ln n)^(1−a))), where n is the input size, b is a positive constant, and a is a constant satisfying 0 < a < 1.

A subexponential-time algorithm is slower than a polynomial-time algorithm yet faster than an algorithm whose running time is exponential in the input size. Observe that for a = 0 the running time O(e^((b+o(1)) n^a (ln n)^(1−a))) collapses to the polynomial bound n^(b+o(1)), while for a = 1 it becomes the exponential bound e^((b+o(1))n).
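This interpolation between the two extremes can be checked numerically. The sketch below (with an illustrative constant b = 1 and the o(1) term dropped, both assumptions made only for this example) evaluates the exponent formula for a ∈ {0, 1/2, 1}.

```python
import math

def running_time(n, a, b=1.0):
    """Evaluate exp(b * n^a * (ln n)^(1 - a)), ignoring the o(1) term.
    a = 0 gives n^b (polynomial); a = 1 gives e^(b*n) (exponential)."""
    return math.exp(b * n**a * math.log(n)**(1 - a))

n = 256  # illustrative input size
for a in [0.0, 0.5, 1.0]:
    print(f"a = {a}:  running time ≈ {running_time(n, a):.3e}")
```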

For simplicity, computational problems are often modelled as decision problems: decide whether a given x ∈ {0, 1}* belongs to a language L ⊆ {0, 1}*. Computational problems are classified by the most efficient known algorithm for solving them.

Definition 2.1.8 (P). The complexity class P is the set of decision problems that can be solved by deterministic polynomial-time algorithms.

Definition 2.1.9 (NP). The complexity class NP is the set of decision problems for which a YES answer can be verified by polynomial-time deterministic algorithms given some extra information, called a witness.

Let R ⊆ {0, 1}* × {0, 1}* be a binary relation. We say that R is polynomially bounded if there exists a polynomial Q such that |w| ≤ Q(|x|) holds for all (x, w) ∈ R. Furthermore, R is an NP-relation if it is polynomially bounded and if there exists a polynomial-time algorithm for deciding membership of pairs (x, w) in R. Let LR = {x | ∃w such that (x, w) ∈ R} be the language defined by R. A language L is in NP if there exists an NP-relation RL ⊆ {0, 1}* × {0, 1}* such that x ∈ L if and only if there exists a w such that (x, w) ∈ RL. Such a w is called a witness of the membership of x in L. The set of all witnesses of x is denoted by RL(x).
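For instance (a hedged sketch using the subset-sum problem, a standard NP problem that is not itself studied in this thesis), an instance x is a pair (numbers, target) and a witness w is a list of indices of numbers that add up to the target. The witness length is bounded by a polynomial in |x|, and membership of (x, w) in the relation can be decided in polynomial time by the verifier below.

```python
def verify_subset_sum(instance, witness):
    """Polynomial-time verifier for the NP-relation of subset-sum.
    instance: (numbers, target); witness: a list of distinct indices.
    Returns True iff the selected numbers sum to the target."""
    numbers, target = instance
    if len(set(witness)) != len(witness):                 # indices must be distinct
        return False
    if any(i < 0 or i >= len(numbers) for i in witness):  # indices must be valid
        return False
    return sum(numbers[i] for i in witness) == target

x = ([3, 7, 12, 5], 15)  # illustrative instance
w = [0, 1, 3]            # 3 + 7 + 5 = 15, so (x, w) is in the relation
print(verify_subset_sum(x, w))  # True
```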

Definition 2.1.10 (Bounded-Probability Polynomial-Time, BPP). Let L be the language of some decision problem. We say that L is recognized by the probabilistic polynomial-time algorithm A if for every x ∈ L, Pr[A(x) = 1] ≥ 2/3 and for every x ∉ L, Pr[A(x) = 0] ≥ 2/3. BPP is the class of languages that can be recognized by a probabilistic polynomial-time algorithm.
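The constant 2/3 in this definition is not essential: running the algorithm k times independently and taking a majority vote drives the error probability down exponentially in k, by a Chernoff-bound argument. The sketch below illustrates the amplification with a hypothetical toy decider (not an algorithm from this thesis) that answers correctly with probability 0.7.

```python
import random
from collections import Counter

def noisy_decider(x):
    """Hypothetical BPP-style decider: returns the correct answer for x
    with probability 0.7 and the wrong answer otherwise (toy model in
    which the "true" answer is simply whether x is even)."""
    correct = (x % 2 == 0)
    return correct if random.random() < 0.7 else not correct

def amplified(x, k=51):
    """Run the decider k times and return the majority answer;
    the error probability decreases exponentially in k."""
    votes = Counter(noisy_decider(x) for _ in range(k))
    return votes.most_common(1)[0][0]

print(amplified(42))  # correct with probability far higher than 0.7
```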

Problems in P are considered easy, and problems not in P are considered hard. It is widely believed that the class P is strictly smaller than the class NP; whether P = NP is the most important open problem in complexity theory.

We will consider as efficient only randomized algorithms whose running time is bounded by a polynomial in the length of the input. A problem is called intractable (or computationally infeasible) if no probabilistic polynomial-time algorithm can solve it, whereas a problem that can be solved by a probabilistic polynomial-time algorithm is called tractable (or computationally feasible).

All the above complexity classes are defined in terms of worst-case complexity. However, in cryptography, the average-case complexity of a problem is a more significant measure than its worst-case complexity. This is because a cryptosystem must be unbreakable in most cases, which means that breaking the cryptosystem must be intractable on average. Hence, a necessary condition for a secure cryptographic scheme is that the corresponding cryptanalysis problem is intractable on average.