Algorithm Design and Analysis Introduction

(1)

Algorithm Design and Analysis Introduction

http://ada.miulab.tw

(2)

Outline

• Terminology

• Problem (問題)

• Problem instance (個例)

• Computation model (計算模型)

• Algorithm (演算法)

• The hardness of a problem (難度)

• Algorithm Design & Analysis Process

• Review: Asymptotic Analysis

• Algorithm Complexity

(3)

Efficiency Measurement = Speed

• Why do we care?

• Computers may be fast, but they are not infinitely fast

• Memory may be inexpensive, but it is not free

(4)

Terminology

Textbook Ch. 1 – The Role of Algorithms in Computing

(5)

Problem (問題)

(6)

Problem Instance (個例)

• An instance of the champion problem

A[1] A[2] A[3] A[4] A[5]
  7    4    2    9    8

(7)

Computation Model (計算模型)

• Each problem must have its rule (遊戲規則)

• Computation model (計算模型) = rule (遊戲規則)

• Problems with different rules can have different levels of hardness

(8)

Hardness (難易程度)

• How difficult a problem is to solve

• Example: how hard is the champion problem?

• Following the comparison-based rule

What does “solve (解)” mean?

What does “difficult (難)” mean?

(9)

Problem Solving (解題)

• Definition of “solving” a problem

• Giving an algorithm (演算法) that produces a correct output for any instance of the problem.

(10)

Algorithm (演算法)

• Algorithm: a detailed, step-by-step set of instructions

• Must follow the game rules

• Like a step-by-step recipe

• Programming language doesn’t matter

→ problem-solving recipe (technology)

• If an algorithm produces a correct output for any instance of the problem

→ this algorithm “solves” the problem

(11)

Hardness (難度)

• Hardness of the problem

• How much effort the best algorithm needs to solve any problem instance

• Analogy: the problem's defense power (防禦力)

• See how much attack power (攻擊力) the strongest Saiyan must spend to defeat it

(Illustration: defense power 100000 vs. attack power 50000)

(12)

Algorithm Design & Analysis Process

(13)

Algorithm Design & Analysis Process

1) Formulate a problem (Design Step)

2) Develop an algorithm (Design Step)

3) Prove the correctness (Analysis Step)

4) Analyze running time/space requirement (Analysis Step)

(14)

1. Problem Formulation

(15)

2. Algorithm Design

• Create a detailed recipe for solving the problem

• Follow the comparison-based rule

• Not allowed to look inside the envelopes (不准偷看信封的內容)

• May only ask someone to compare which of two values is larger (請別人幫忙「比大小」)

• Algorithm: 擂台法 (arena method: the current champion stays, and each challenger is compared against it)

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4.   if (A[i] > A[j])
5.     j = i;
6. return j;

Q1: Is this a comparison-based algorithm?

Q2: Does it solve the champion problem?
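A minimal runnable C sketch of the arena method above; the function name find_champion, the sample array, and main are illustrative assumptions, and the array is treated as 1-indexed (A[0] unused) to match the pseudocode.

#include <stdio.h>

/* Arena method: returns the index (1-based) of the maximum element. */
int find_champion(const int A[], int n) {
    int i, j = 1;
    for (i = 2; i <= n; i++)
        if (A[i] > A[j])      /* one comparison per iteration */
            j = i;
    return j;
}

int main(void) {
    int A[] = {0, 7, 4, 2, 9, 8};   /* the instance from the earlier slide */
    int j = find_champion(A, 5);
    printf("champion index = %d, value = %d\n", j, A[j]);
    return 0;
}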

(16)

3. Correctness of the Algorithm

• Prove by contradiction (反證法): suppose the algorithm returns j but some A[k] > A[j]. The champion value A[j] never decreases during the loop, so at iteration i = k the comparison A[k] > A[j] succeeded and j was set to k; from then on the champion value is at least A[k], so the final A[j] ≥ A[k], a contradiction.

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4.   if (A[i] > A[j])
5.     j = i;
6. return j;

(17)

Hardness of The Champion Problem

• How much effort the best algorithm needs to solve any problem instance

• Follow the comparison-based rule

• Not allowed to look inside the envelopes (不准偷看信封的內容)

• May only ask someone to compare which of two values is larger (請別人幫忙「比大小」)

• Effort: we first measure effort by the number of comparisons

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4.   if (A[i] > A[j])
5.     j = i;
6. return j;

→ (n - 1) comparisons (one per iteration of the for loop)
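To see the (n - 1) count concretely, here is a hedged C sketch that instruments the arena method with a comparison counter; the counter parameter and the sample instance are illustrative assumptions, not part of the original slides.

#include <stdio.h>

/* Arena method instrumented with a comparison counter.
   A is 1-indexed (A[0] unused); *comparisons is set to the number of
   A[i] > A[j] comparisons performed, which is always n - 1. */
int find_champion_counted(const int A[], int n, int *comparisons) {
    int i, j = 1;
    *comparisons = 0;
    for (i = 2; i <= n; i++) {
        (*comparisons)++;
        if (A[i] > A[j])
            j = i;
    }
    return j;
}

int main(void) {
    int A[] = {0, 7, 4, 2, 9, 8};
    int cmp;
    int j = find_champion_counted(A, 5, &cmp);
    printf("champion = A[%d] = %d, comparisons = %d\n", j, A[j], cmp);  /* 4, 9, 4 */
    return 0;
}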

(18)

Hardness of The Champion Problem

• The hardness of the champion problem is (n - 1) comparisons

a) There is an algorithm that can solve the problem using at most (n – 1) comparisons

• This is witnessed by 擂臺法 (the arena method), which uses exactly (n – 1) comparisons on any problem instance

b) For any algorithm, there exists a problem instance that requires (n - 1) comparisons

• Why?

(19)

Hardness of The Champion Problem

• Q: Is there an algorithm that only needs 𝑛 − 2 comparisons?

• A: Impossible!

• Reason

• A single comparison determines only one loser

• With only 𝑛 − 2 comparisons, at most 𝑛 − 2 integers can be losers

• So at least 2 integers never lost

→ the algorithm cannot tell which of them is the champion

(20)

Finding Hardness

• Use the upper bound and the lower bound

• When they meet each other, we know the hardness of the problem

(Figure: upper bounds move down and lower bounds move up until they meet at the problem's hardness.)

(21)

Hardness of The Champion Problem

• Upper bound

• how many comparisons are sufficient to solve the champion problem

• Each algorithm provides an upper bound

• A smarter algorithm provides a tighter (lower, better) upper bound

• Lower bound

• how many comparisons are necessary in the worst case to solve the champion problem

• Some arguments provide different lower bounds

• A higher lower bound is better (tighter)

• Example upper bound: 多此一舉擂臺法 (the redundant arena method), which makes two comparisons per iteration

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4.   if ((A[i] > A[j]) && (A[j] < A[i]))
5.     j = i;
6. return j;

→ (2n - 2) comparisons: a valid but looser upper bound

• Example lower bound: every integer must take part in at least one comparison

→ at least (n/2) comparisons: a valid but looser lower bound

When the upper bound equals the lower bound, the hardness of the problem is settled.

(22)

4. Algorithm Analysis

• Most research in algorithms studies the time and space required to solve problems, in two directions

• Upper bounds: designing and analyzing algorithms

• Lower bounds: providing arguments

• When the upper and lower bounds match, we have an optimal algorithm and the problem is completely resolved

(23)

Asymptotic Analysis

(Figure: Edmund Landau and Donald E. Knuth)

(24)

Motivation

• The hardness of the champion problem is exactly 𝑛 − 1 comparisons

• Different problems may use different "hardness yardsticks" (難度量尺)

• which are not interchangeable

• Focus on the growth rate of the function, ignoring units and constant coefficients

(25)

Goal: Finding Hardness

• For a problem P, we want to figure out

• The hardness (complexity) of this problem P is Θ(f(n))

• n is the instance size of problem P

• f(n) is a function of n

• Θ(f(n)) means "it exactly matches the growth rate of f(n)"

• Then we can argue that, under the comparison-based computation model,

• The hardness of the champion problem is Θ(n)

• The hardness of the sorting problem is Θ(n log n)

(26)

upper bound is O(h(n)) & lower bound is Ω(h(n))

→ the problem complexity is exactly Θ(h(n))

Goal: Finding Hardness

• Use the upper bound and the lower bound

• When they match, we know the hardness of the problem

(Figure: upper bounds move down and lower bounds move up until they meet at the problem's hardness.)

(27)

Goal: Finding Hardness

• First learn how to analyze / measure the effort an algorithm needs

• Time complexity

• Space complexity

• Focus on worst-case complexity

• "Average-case" analysis requires an assumption about the probability distribution of problem instances

Type           Description
Worst case     Maximum running time over any instance of size n
Average case   Expected running time over a random instance of size n
Amortized      Worst-case running time for a series of operations

(28)

Review of Asymptotic Notation

(Textbook Ch. 3.1)

• f(n) = the time or space used by an algorithm on an input of size n

• Asymptotic analysis: focus on the growth of f(n) as n → ∞

(29)

Review of Asymptotic Notation

(Textbook Ch. 3.1)

• f(n) = the time or space used by an algorithm on an input of size n

• Asymptotic analysis: focus on the growth of f(n) as n → ∞

• Ο, or Big-Oh: upper bounding function

• Ω, or Big-Omega: lower bounding function

• Θ, or Big-Theta: tightly bounding function

(30)

Formal Definition of Big-Oh

(Textbook Ch. 3.1)

• For any two functions f(n) and g(n), we write f(n) = O(g(n))
if there exist positive constants c and n0 s.t.

0 ≤ f(n) ≤ c · g(n) for all n ≥ n0.

(31)

f(n) = O(g(n))

• Intuitive interpretation

• f(n) does not grow faster than g(n)

• Comments

1) f(n) = O(g(n)) roughly means f(n) ≤ g(n) in terms of rate of growth

2) "=" here is not equality; it behaves like "∈ (belongs to)": the more precise statement is f(n) ∈ O(g(n))

3) We do not write O(g(n)) = f(n)

• Note

• f(n) and g(n) may be negative for some integers n

• To compare them using the asymptotic notation O, both must be non-negative for all sufficiently large n

(32)

Review of Asymptotic Notation

(Textbook Ch. 3.1)

• Benefit

• Ignore the low-order terms, units, and coefficients

• Simplify the analysis

• Example: f(n) = 5n^3 + 7n^2 − 8

• Upper bounds: f(n) = O(n^3), f(n) = O(n^4), f(n) = O(n^3 log₂ n)

• Lower bounds: f(n) = Ω(n^3), f(n) = Ω(n^2), f(n) = Ω(n log₂ n)

• Tight bound: f(n) = Θ(n^3)

• Q: f(n) = O(n^3) and f(n) = O(n^4), so O(n^3) = O(n^4)?

• No: O(n^3) denotes a set of functions that are upper bounded by c·n^3 for some constant c; "=" here does not mean "equal to"
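As a worked check of the definition for this example, the constants c = 12 and n0 = 1 below are one possible choice (not the only one):

\[
5n^3 + 7n^2 - 8 \;\le\; 5n^3 + 7n^3 \;=\; 12\,n^3 \qquad \text{for all } n \ge 1,
\]
\[
\text{and } f(n) \ge 0 \text{ for } n \ge 1 \text{ (since } f(1) = 4 \text{ and } f \text{ is increasing), so } 0 \le f(n) \le c\,n^3 \text{ with } c = 12,\ n_0 = 1, \text{ i.e. } f(n) = O(n^3).
\]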

(33)

Exercise: 100n^2 = O(n^3 − n^2)?

• Proof sketch.

• Let n0 = 2 and c = 100. Then 100n^2 ≤ 100(n^3 − n^2) holds for all n ≥ 2,

since n^3 − n^2 = n^2(n − 1) ≥ n^2 whenever n ≥ 2.

(34)

Exercise: n^2 = O(n)?

• Disproof.

• Assume for a contradiction that there exist positive constants c and n0 s.t.

n^2 ≤ c · n holds for every integer n ≥ n0.

• Dividing by n > 0 gives n ≤ c for all n ≥ n0. But taking n = max(n0, ⌈c⌉ + 1) gives an integer with n ≥ n0 and n > c, a contradiction.

(35)

Rules

(Textbook Ch. 3.1)

(36)

Other Notations

(Textbook Ch. 3.1)

(37)

Goal: Finding Hardness

• First learn how to analyze / measure the effort an algorithm needs

• Time complexity

• Space complexity

• Focus on worst-case complexity

• "Average-case" analysis requires an assumption about the probability distribution of problem instances

Using 𝑂 to give upper bounds on the worst-case time complexity of algorithms

(38)

Algorithm Analysis

• 擂台法 (arena method)

1. int i, j;                      O(1) time
2. j = 1;                         O(1) time
3. for (i = 2; i <= n; i++)       O(n) iterations
4.   if (A[i] > A[j])             O(1) time per iteration
5.     j = i;                     O(1) time per iteration
6. return j;                      O(1) time

Adding everything together (using the rules for combining O bounds):
O(1) + O(n) · O(1) + O(1) = O(n)
→ an upper bound on the worst-case time complexity

• The worst-case time complexity is O(n)

(39)

Sorting Problem

(40)

Algorithm Analysis

• Bubble-Sort Algorithm

1. int i, done;                            O(1) time
2. do {                                    f(n) iterations
3.   done = 1;                             O(1) time
4.   for (i = 1; i < n; i ++) {            O(n) iterations
5.     if (A[i] > A[i + 1]) {              O(1) time per iteration
6.       exchange A[i] and A[i + 1];       O(1) time
7.       done = 0;                         O(1) time
8.     }
9.   }
10. } while (done == 0)

f(n) = O(n), which can be proved by induction (after the k-th pass, the k largest elements are in their final positions)

→ the worst-case time complexity is O(f(n) · n) = O(n^2)
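A minimal runnable C sketch of the bubble-sort pseudocode above; the function name, the 1-indexed array convention (A[0] unused), and main are illustrative assumptions.

#include <stdio.h>

/* Bubble sort, following the pseudocode above: repeat passes over A[1..n]
   until a full pass performs no exchange. */
void bubble_sort(int A[], int n) {
    int i, done;
    do {
        done = 1;
        for (i = 1; i < n; i++) {
            if (A[i] > A[i + 1]) {
                int tmp = A[i];          /* exchange A[i] and A[i + 1] */
                A[i] = A[i + 1];
                A[i + 1] = tmp;
                done = 0;
            }
        }
    } while (done == 0);
}

int main(void) {
    int A[] = {0, 7, 3, 1, 4, 6, 2, 5};  /* the instance from the next slide */
    int n = 7, i;
    bubble_sort(A, n);
    for (i = 1; i <= n; i++)
        printf("%d ", A[i]);             /* prints 1 2 3 4 5 6 7 */
    printf("\n");
    return 0;
}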

(41)

Example Illustration

(Illustration: successive bubble-sort passes on 7 3 1 4 6 2 5; after each pass the largest remaining element bubbles to the end, ending with 1 2 3 4 5 6 7.)

(42)

Goal: Finding Hardness

• First learn how to analyze / measure the effort an algorithm needs

• Time complexity

• Space complexity

• Focus on worst-case complexity

• "Average-case" analysis requires an assumption about the probability distribution of problem instances

Using O to give upper bounds on the worst-case time complexity of algorithms

Using Ω to give lower bounds on the worst-case time complexity of algorithms

(43)

Algorithm Analysis

• 擂台法 (arena method)

1. int i;                          Ω(1) time
2. int m = A[1];                   Ω(1) time
3. for (i = 2; i <= n; i ++) {     Ω(n) iterations
4.   if (A[i] > m)                 Ω(1) time per iteration
5.     m = A[i];
6. }
7. return m;                       Ω(1) time

Adding everything together
→ a lower bound on the worst-case time complexity? Yes: every step executes on every instance, so the algorithm takes Ω(n) time on any instance, and hence Ω(n) in the worst case.

(44)

Algorithm Analysis

• 百般無聊擂台法 (the "bored" arena method: it deliberately wastes time)

1. int i;                          Ω(1) time
2. int m = A[1];                   Ω(1) time
3. for (i = 2; i <= n; i ++) {     Ω(n) iterations
4.   if (A[i] > m)                 Ω(1) time per iteration
5.     m = A[i];
6.   if (i == n)                   Ω(1) time per iteration
7.     do i++ n times;             Ω(n) time (in the last iteration only)
8. }
9. return m;                       Ω(1) time

(45)

Algorithm Analysis

• Bubble-Sort Algorithm

1. int i, done;
2. do {                                    f(n) iterations
3.   done = 1;
4.   for (i = 1; i < n; i ++) {            Ω(n) time per iteration
5.     if (A[i] > A[i + 1]) {
6.       exchange A[i] and A[i + 1];
7.       done = 0;
8.     }
9.   }
10. } while (done == 0)

When A is decreasing, f(n) = Ω(n).

Therefore, the worst-case time complexity of Bubble-Sort is Ω(f(n) · n) = Ω(n^2).

(46)

Example Illustration

(Illustration: bubble-sort passes on the decreasing array 7 6 5 4 3 2 1; each pass moves only one element into its final position, so the do-while loop runs n iterations.)

(47)

Algorithm Complexity

In the worst case, what is the growth of the function describing the time an algorithm takes?

(48)

Time Complexity of an Algorithm

• We say that the (worst-case) time complexity of Algorithm A is Θ(f(n)) if

1. Algorithm A runs in O(f(n)) time, and

2. Algorithm A runs in Ω(f(n)) time in the worst case

o i.e., for each n there is an input instance I_n on which Algorithm A takes Ω(f(n)) time

(49)

Tightness of the Complexity

• If we say that the time complexity analysis O(f(n)) is tight

• = the algorithm runs in Ω(f(n)) time in the worst case

• = the (worst-case) time complexity of the algorithm is Θ(f(n))

• i.e., the analysis does not over-estimate the worst-case time complexity of the algorithm

• If we say that the O(n^2) time complexity analysis of the Bubble-Sort algorithm is tight

• = the time complexity of the Bubble-Sort algorithm is Ω(n^2)

• = the time complexity of the Bubble-Sort algorithm is Θ(n^2)

(50)

Algorithm Analysis

• 百般無聊擂台法 (the "bored" arena method)

1. int i;                          O(1) time
2. int m = A[1];                   O(1) time
3. for (i = 2; i <= n; i ++) {     O(n) iterations
4.   if (A[i] > m)                 O(1) time per iteration
5.     m = A[i];
6.   if (i == n)                   O(1) time per iteration
7.     do i++ n times;             O(n) time (in the last iteration only)
8. }
9. return m;                       O(1) time

• Non-tight analysis: step 3 runs O(n) iterations of the for-loop, and each iteration is bounded by O(n) time, so steps 3-8 take O(n) · O(n) = O(n^2) time

• Tight analysis: only the last iteration takes O(n) time and the rest take O(1) time, so steps 3-8 take O(n) · O(1) + O(n) = O(n) time
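A runnable C sketch of the idea behind 百般無聊擂台法; the explicit waste loop (with a separate counter k) stands in for the pseudocode "do i++ n times", and the step counter, function name, and sample instance are illustrative assumptions. It shows the tight analysis empirically: about 2n basic steps, i.e. Θ(n).

#include <stdio.h>

/* "Bored" arena method: identical to the arena method, except the final
   iteration deliberately performs n extra O(1) steps. `steps` counts the
   basic steps executed. A is 1-indexed (A[0] unused). */
int bored_arena_max(const int A[], int n, long *steps) {
    int i, k, m = A[1];
    *steps = 0;
    for (i = 2; i <= n; i++) {
        (*steps)++;
        if (A[i] > m)
            m = A[i];
        if (i == n)
            for (k = 0; k < n; k++)   /* stand-in for "do i++ n times" */
                (*steps)++;
    }
    return m;
}

int main(void) {
    int A[] = {0, 7, 4, 2, 9, 8};
    long steps;
    int m = bored_arena_max(A, 5, &steps);
    printf("max = %d, basic steps = %ld\n", m, steps);   /* roughly 2n steps */
    return 0;
}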

(51)

Algorithm Comparison

• Q: can we say that Algorithm 1 is a better algorithm than Algorithm 2 if

• Algorithm 1 runs in O(n) time

• Algorithm 2 runs in O(n^2) time

• A: No! A lower upper bound on the worst-case time does not necessarily mean a lower time complexity: O(n^2) is only an upper bound, so Algorithm 2 might in fact run in Θ(n) or even faster.

(52)

Comparing A and B

• Algorithm A is no worse than Algorithm B in terms of worst-case time complexity if there exists a positive function f(n) s.t.

• Algorithm A runs in O(f(n)) time, and

• Algorithm B runs in Ω(f(n)) time in the worst case

• Algorithm A is (strictly) better than Algorithm B in terms of worst-case time complexity if there exists a positive function f(n) s.t.

• Algorithm A runs in O(f(n)) time and Algorithm B runs in ω(f(n)) time in the worst case, or

• Algorithm A runs in o(f(n)) time and Algorithm B runs in Ω(f(n)) time in the worst case

(53)

Problem Complexity

In the worst case, what is the growth of the function describing the time the problem's optimal algorithm takes?

(54)

Time Complexity of a Problem

• We say that the (worst-case) time complexity of Problem P is Θ(f(n)) if

1. The time complexity of Problem P is O(f(n))

o There exists an O(f(n))-time algorithm that solves Problem P

2. The time complexity of Problem P is Ω(f(n))

o Any algorithm that solves Problem P requires Ω(f(n)) time

• The time complexity of the champion problem is Θ(n) because

1. The time complexity of the champion problem is O(n)

o 「擂臺法」(the arena method) is an O(n)-time algorithm

2. The time complexity of the champion problem is Ω(n)

o Any algorithm must make at least n − 1 comparisons in the worst case

(55)

Optimal Algorithm

• Algorithm A is an optimal algorithm for Problem P in terms of worst-case time complexity if, for some f(n),

• Algorithm A runs in O(f(n)) time, and

• The time complexity of Problem P is Ω(f(n)) in the worst case

• Examples (the champion problem)

• 擂台法 (arena method)

• It runs in O(n) time, and

• Any algorithm solving the problem requires Ω(n) time in the worst case

→ optimal algorithm

• 百般無聊擂台法 (the "bored" arena method)

• It runs in O(n) time, and

• Any algorithm solving the problem requires Ω(n) time in the worst case

→ optimal algorithm

(56)

Comparing P and Q

• Problem P is no harder than Problem Q in terms of (worst-case) time complexity if there exists a function f(n) s.t.

• The (worst-case) time complexity of Problem P is O(f(n)), and

• The (worst-case) time complexity of Problem Q is Ω(f(n))

• Problem P is (strictly) easier than Problem Q in terms of (worst-case) time complexity if there exists a function f(n) s.t.

• The (worst-case) time complexity of Problem P is O(f(n)) and that of Problem Q is ω(f(n)), or

• The (worst-case) time complexity of Problem P is o(f(n)) and that of Problem Q is Ω(f(n))

(57)

Concluding Remarks

• Algorithm Design and Analysis Process

1) Formulate a problem (Design Step)

2) Develop an algorithm (Design Step)

3) Prove the correctness (Analysis Step)

4) Analyze running time/space requirement (Analysis Step)

• Usually brute force (暴力法) is not very efficient

• Analysis Skills

• Proof by contradiction

• Induction

• Asymptotic analysis

• Problem instance

• Algorithm Complexity

• In the worst case, what is the growth of the function describing the time an algorithm takes?

• Problem Complexity

• In the worst case, what is the growth of the function describing the time the problem's optimal algorithm takes?

(58)

Reading Assignment

• Textbook Ch. 3 – Growth of Functions

(59)

Question?

Important announcements will be sent to your

@ntu.edu.tw mailbox and posted on the course website

Course Website: http://ada.miulab.tw Email: ada-ta@csie.ntu.edu.tw
