
Slides credited from Prof. Hsueh-I Lu & Hsu-Chun Hsiao


(1)

Slides credited from Prof. Hsueh-I Lu & Hsu-Chun Hsiao

(2)

Any questions should be sent via email

Please use [ADA2018] in the subject

Please write your student ID (學號) and name (姓名) in the email

Registration codes were sent out

Register (or drop?) the course ASAP

Slides are available before the lecture starts

Mini-HW 1 released

Due on 9/27 (Thu) 14:20

Submit to NTU COOL

(3)

教過! 我早就會了! ("It was taught before, I already know it!")

(4)

▪ Terminology

Problem (問題)

Problem instance (個例)

Computation model (計算模型)

Algorithm (演算法)

The hardness of a problem (難度)

▪ Algorithm Design & Analysis Process

▪ Review: Asymptotic Analysis

(5)

Why do we care?

Computers may be fast, but they are not infinitely fast

Memory may be inexpensive, but it is not free


(6)

Textbook Ch. 1 – The Role of Algorithms in Computing


(7)


(8)

An instance of the champion problem

7 4 2 9 8

A[1] A[2] A[3] A[4] A[5]

(9)

Computation Model (計算模型)

Each problem must have its own rules (遊戲規則)

Computation model (計算模型) = the rules of the game (遊戲規則)

Problems with different rules have different hardness levels

(10)

How difficult is it to solve a problem?

Example: how hard is the champion problem?

Following the comparison-based rule

What does “solve (解)” mean?

What does “difficult (難)” mean?

(11)

Definition of “solving” a problem

Giving an algorithm (演算法) that produces a correct output for any instance of the problem.


(12)

Algorithm (演算法)

Algorithm: detailed step-by-step instructions

Must follow the game rules

Like a step-by-step recipe

Programming language doesn’t matter

→ problem-solving recipe (technology)

If an algorithm produces a correct output for any instance of the problem

→ this algorithm “solves” the problem

(13)

Hardness (難度)

Hardness of the problem

How much effort the best algorithm needs to solve any problem instance

(Analogy from the slide figure: how much attack power (攻擊力) does the strongest Saiyan need to defeat the opponent's defense power (防禦力)? Defense: 100000 vs. attack: 50000.)

(14)


(15)

1) Formulate a problem

2) Develop an algorithm

3) Prove the correctness

4) Analyze running time/space requirement

Design Step

Analysis Step

(16)
(17)

2. Algorithm Design

Create a detailed recipe for solving the problem

Follow the comparison-based rule

You may not peek at the contents of the envelopes (不准偷看信封的內容)

You may only ask someone else to compare two of them (請別人幫忙「比大小」)

Algorithm: 擂台法 (the tournament method)

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4. if (A[i] > A[j])
5. j = i;
6. return j;

Q1: Is this a comparison-based algorithm?

Q2: Does it solve the champion problem?
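As a quick sanity check for Q1 and Q2, here is a minimal runnable C sketch of the 擂台法 above; the 1-indexed layout (an unused A[0]) and the sample instance 7 4 2 9 8 from the earlier slide are my additions, not part of the lecture.

#include <stdio.h>

/* The value of the current champion A[j] never decreases,
   so the returned index points to the maximum of A[1..n]. */
int champion(int A[], int n)
{
    int i, j;
    j = 1;                       /* A[1] starts as the champion */
    for (i = 2; i <= n; i++)     /* challengers A[2..n] */
        if (A[i] > A[j])         /* one comparison per challenger */
            j = i;               /* the challenger takes the ring */
    return j;                    /* (n - 1) comparisons in total */
}

int main(void)
{
    int A[] = {0, 7, 4, 2, 9, 8};   /* A[0] unused: 1-indexed as on the slides */
    int j = champion(A, 5);
    printf("champion: A[%d] = %d\n", j, A[j]);   /* prints: champion: A[4] = 9 */
    return 0;
}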

(18)

Prove by contradiction (反證法)

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4. if (A[i] > A[j])
5. j = i;
6. return j;
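One possible way to fill in the argument (a sketch in my wording, not taken verbatim from the slides): note that the value A[j] of the current champion never decreases during the loop. Let j* be the returned index and suppose, for contradiction, that A[k] > A[j*] for some k. If k = 1, then A[j*] ≥ A[1] = A[k] because j starts at 1, a contradiction. If k ≥ 2, then in the iteration with i = k the current champion satisfies A[j] ≤ A[j*] < A[k], so the test on line 4 succeeds and line 5 sets j = k; from then on A[j] ≥ A[k] > A[j*], contradicting the fact that the loop ends with champion value A[j*]. Hence no such k exists and the algorithm solves the champion problem.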

(19)

Hardness of The Champion Problem

How much effort the best algorithm needs to solve any problem instance

Follow the comparison-based rule

You may not peek at the contents of the envelopes (不准偷看信封的內容)

You may only ask someone else to compare two of them (請別人幫忙「比大小」)

Effort: we first measure effort by the number of comparisons

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4. if (A[i] > A[j])
5. j = i;
6. return j;

→ (n − 1) comparisons

(20)

Hardness of The Champion Problem

The hardness of the champion problem is (n - 1) comparisons

a) There is an algorithm that can solve the problem using at most (n – 1) comparisons

This can be proved by 擂臺法, which uses (n – 1) comparisons for any problem instance

b) For any algorithm, there exists a problem instance that requires (n - 1) comparisons

Why?

(21)

Hardness of The Champion Problem

Q: Is there an algorithm that only needs (n − 2) comparisons?

A: Impossible!

Reason

A single comparison only determines one loser

If there are only (n − 2) comparisons, there are at most (n − 2) losers

So there exist at least 2 integers that have never lost

→ no algorithm can tell which of them is the champion
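As a concrete illustration (my example, not on the slide): take n = 3 and suppose an algorithm makes only one comparison, say between A[1] and A[2]. The loser of that comparison is known, but the winner and A[3] have both never lost, and the input values can be arranged so that either one of them is the champion; whatever index the algorithm outputs, some instance makes that output wrong. The same counting argument works for any n when only n − 2 comparisons are made.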

(22)

Use the upper bound and the lower bound

When they meet each other, we know the hardness of the problem


(23)

Hardness of The Champion Problem

Upper bound

How many comparisons are sufficient to solve the champion problem

Each algorithm provides an upper bound

A smarter algorithm provides a tighter (lower, better) upper bound

Example: 多此一舉擂臺法 (the redundant tournament method) below makes two comparisons per iteration → (2n − 2) comparisons, a weaker upper bound

1. int i, j;
2. j = 1;
3. for (i = 2; i <= n; i++)
4. if ((A[i] > A[j]) && (A[j] < A[i]))
5. j = i;
6. return j;

Lower bound

How many comparisons are necessary in the worst case to solve the champion problem

Different arguments provide different lower bounds

A higher lower bound is better

Example: every integer needs to take part in at least one comparison → (n/2) comparisons, a weaker lower bound

When upper bound = lower bound, the problem is solved.

→ We have figured out the hardness of the problem

(24)

Most researchers in algorithms study the time and space required to solve problems in two directions

Upper bounds: designing and analyzing algorithms

Lower bounds: providing arguments

When the upper and lower bounds match, we have an optimal algorithm and the problem is completely resolved

(25)

Edmund Landau (1877-1938)

Donald E. Knuth (1938-)

教過! 我早就會了! ("It was taught before, I already know it!")

(26)

The hardness of the champion problem is exactly n − 1 comparisons

Different problems may use different "hardness scales" (難度量尺), which are not interchangeable

Focus on the growth rate of the function to ignore the effects of units and coefficients

(27)

For a problem P, we want to figure out:

The hardness (complexity) of this problem P is Θ(f(n))

n is the instance size of this problem P

f(n) is a function

Θ(f(n)) means that it grows exactly as fast as f(n)

Then we can argue that, under the comparison-based computation model,

The hardness of the champion problem is Θ(n)

The hardness of the sorting problem is Θ(n log n)

(28)

Upper bound is O(h(n)) & lower bound is Ω(h(n))

→ the problem complexity is exactly Θ(h(n))

Use the upper bound and the lower bound

When they match, we know the hardness of the problem

Upper bounds are expressed with O(f(n)) and o(f(n))

(29)

First learn how to analyze / measure the effort an algorithm needs

Time complexity

Space complexity

Focus on worst-case complexity

“Average-case” analysis requires an assumption about the probability distribution of problem instances

Worst Case: maximum running time for any instance of size n

Average Case: expected running time for a random instance of size n

Amortized: worst-case running time for a series of operations

(30)

f(n) = time or space of an algorithm for an input of size n

Asymptotic analysis: focus on the growth of f(n) as n → ∞

(31)

f(n) = time or space of an algorithm for an input of size n

Asymptotic analysis: focus on the growth of f(n) as n → ∞

O, or Big-Oh: upper bounding function

Ω, or Big-Omega: lower bounding function

Θ, or Big-Theta: tightly bounding function

(32)

For any two functions f(n) and g(n), f(n) = O(g(n)) if there exist positive constants c and n₀ s.t. 0 ≤ f(n) ≤ c · g(n) for all n ≥ n₀.
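For example (a worked instance of the definition; the numbers are mine, not from the slide): 3n + 5 = O(n), witnessed by c = 4 and n₀ = 5, since 0 ≤ 3n + 5 ≤ 4n for all n ≥ 5.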

(33)

Intuitive interpretation

f(n) does not grow faster than g(n)

Comments

1) f(n) = O(g(n)) roughly means f(n) ≤ g(n) in terms of rate of growth

2) "=" is not "equality"; it is more like "∈" (belongs to)

The precise statement is f(n) ∈ O(g(n))

3) We do not write O(g(n)) = f(n)

Note

f(n) and g(n) can be negative for some integers n

In order to compare them using the asymptotic notation O, both have to be non-negative for sufficiently large n

This requirement also holds for the other notations, i.e., Ω, Θ, o, ω

(34)

Benefit

Ignore the low-order terms, units, and coefficients

Simplify the analysis

Example: f(n) = 5n³ + 7n² − 8

Upper bound: f(n) = O(n³), f(n) = O(n⁴), f(n) = O(n³ log₂ n)

Lower bound: f(n) = Ω(n³), f(n) = Ω(n²), f(n) = Ω(n log₂ n)

Tight bound: f(n) = Θ(n³)

Q: f(n) = O(n³) and f(n) = O(n⁴), so O(n³) = O(n⁴)?

"=" doesn't mean "equal to"
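One possible choice of witnessing constants for the bounds above (my numbers, not given on the slide): for all n ≥ 1, 5n³ + 7n² − 8 ≤ 5n³ + 7n³ = 12n³, so c = 12 and n₀ = 1 give f(n) = O(n³); and for all n ≥ 2, 5n³ + 7n² − 8 ≥ 5n³ − 8 ≥ 4n³, so c = 4 and n₀ = 2 give f(n) = Ω(n³). Together these show f(n) = Θ(n³).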

(35)

Draft.

Let n₀ = 2 and c = 100; then the inequality holds for all n ≥ 2.

(36)

Disproof.

Assume for a contradiction that there exist positive constants c and n₀ s.t. the inequality holds for any integer n with n ≥ n₀. Assume …; because …, it follows that …, which yields a contradiction.
(37)


(38)
(39)

First learn how to analyze / measure the effort an algorithm needs

Time complexity

Space complexity

Focus on worst-case complexity

“Average-case” analysis requires an assumption about the probability distribution of problem instances

Using O to give upper bounds on the worst-case time complexity of algorithms

(40)

擂台法: the worst-case time complexity is O(n)

1. int i, j;                    O(1) time
2. j = 1;                       O(1) time
3. for (i = 2; i <= n; i++)     O(n) iterations
4. if (A[i] > A[j])             O(1) time
5. j = i;                       O(1) time
6. return j;                    O(1) time

Adding everything together (Rules 2, 3, and 4): O(1) + O(1) + O(n) · O(1) + O(1) = O(n)

(41)


(42)

Bubble-Sort Algorithm

1. int i, done;                       O(1) time
2. do {                               f(n) iterations
3. done = 1;                          O(1) time
4. for (i = 1; i < n; i ++) {         O(n) iterations
5. if (A[i] > A[i + 1]) {             O(1) time
6. exchange A[i] and A[i + 1];        O(1) time
7. done = 0;                          O(1) time
8. }
9. }
10. } while (done == 0)

Adding everything together: each pass of the for-loop takes O(n) time and the do-while loop makes f(n) = O(n) passes, so the worst-case time complexity is O(n²)
(43)

(Trace of Bubble-Sort on the instance 7 3 1 4 6 2 5, ending with the sorted array 1 2 3 4 5 6 7.)

(44)

First learn how to analyze / measure the effort an algorithm needs

Time complexity

Space complexity

Focus on worst-case complexity

“Average-case” analysis requires an assumption about the probability distribution of problem instances

Using O to give upper bounds on the worst-case time complexity of algorithms

(45)

擂台法

1. int i;                       Ω(1) time
2. int m = A[1];                Ω(1) time
3. for (i = 2; i <= n; i ++) {  Ω(n) iterations
4. if (A[i] > m)                Ω(1) time
5. m = A[i];                    Ω(1) time
6. }
7. return m;                    Ω(1) time

Adding everything together

→ a lower bound on the worst-case time complexity?

(46)

百般無聊擂台法 (the pointlessly slow tournament method)

1. int i;                       Ω(1) time
2. int m = A[1];                Ω(1) time
3. for (i = 2; i <= n; i ++) {  Ω(n) iterations
4. if (A[i] > m)                Ω(1) time
5. m = A[i];                    Ω(1) time
6. if (i == n)                  Ω(1) time
7. do i++ n times               Ω(n) time
8. }
9. return m;                    Ω(1) time

(47)

Bubble-Sort Algorithm

1. int i, done;
2. do {                               f(n) iterations, each taking Ω(n) time
3. done = 1;
4. for (i = 1; i < n; i ++) {
5. if (A[i] > A[i + 1]) {
6. exchange A[i] and A[i + 1];
7. done = 0;
8. }
9. }
10. } while (done == 0)

When A is decreasing, f(n) = Ω(n). Therefore, the worst-case time complexity of Bubble-Sort is Ω(n²).
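A worked count for this case (my arithmetic, not on the slide): each pass of the for-loop makes n − 1 comparisons, and on a strictly decreasing array the do-while loop runs n times (n − 1 passes that still swap something plus one final pass that confirms the order), so Bubble-Sort makes n(n − 1) comparisons here, matching the Ω(n²) bound.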

(48)

(Trace of Bubble-Sort on the decreasing instance 7 6 5 4 3 2 1: the outer do-while loop runs for n iterations.)

(49)

In the worst case, what is the growth of the function an algorithm takes?

(50)

Algorithm Complexity

We say that the (worst-case) time complexity of Algorithm A is Θ(f(n)) if

1. Algorithm A runs in time O(f(n)) &

2. Algorithm A runs in time Ω(f(n)) in the worst case

o For each n, there is an input instance I_n s.t. Algorithm A runs in Ω(f(n)) time on I_n

(51)

If we say that a time complexity analysis of O(f(n)) is tight

= the algorithm runs in time Ω(f(n)) in the worst case

= the (worst-case) time complexity of the algorithm is Θ(f(n))

i.e., we do not over-estimate the worst-case time complexity of the algorithm

If we say that the O(n²) time complexity analysis of the Bubble-Sort algorithm is tight

= the time complexity of the Bubble-Sort algorithm is Ω(n²)

= the time complexity of the Bubble-Sort algorithm is Θ(n²)

(52)

百般無聊擂台法: non-tight analysis vs. tight analysis

Non-tight analysis: treat each of the O(n) iterations of the for-loop as taking O(n) time → O(n²)

Tight analysis: Step 3 takes O(n) iterations of the for-loop, where only the last iteration takes O(n) time and the rest take O(1) time → O(n)

1. int i;                       O(1) time
2. int m = A[1];                O(1) time
3. for (i = 2; i <= n; i ++) {  O(n) iterations
4. if (A[i] > m)                O(1) time
5. m = A[i];                    O(1) time
6. if (i == n)                  O(1) time
7. do i++ n times               O(n) time
8. }
9. return m;

(53)

Q: Can we say that Algorithm 1 is a better algorithm than Algorithm 2 if

Algorithm 1 runs in O(n) time

Algorithm 2 runs in O(n²) time?

A: No! The algorithm with a lower upper bound on its worst-case time does not necessarily have a lower time complexity.
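A concrete instance of this pitfall (my example): the 擂台法 runs in Θ(n) time, yet O(n²) is also a valid, merely loose, upper bound for it; if Algorithm 2 were the 擂台法 analyzed loosely, it would in fact be just as fast as Algorithm 1.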

(54)

Algorithm A is no worse than Algorithm B in terms of worst-case time complexity if there exists a positive function f(n) s.t.

Algorithm A runs in time O(f(n)) &

Algorithm B runs in time Ω(f(n)) in the worst case

Algorithm A is (strictly) better than Algorithm B in terms of worst-case time complexity if there exists a positive function f(n) s.t.

Algorithm A runs in time O(f(n)) &

Algorithm B runs in time ω(f(n)) in the worst case, or

Algorithm A runs in time o(f(n)) &

Algorithm B runs in time Ω(f(n)) in the worst case

(55)

In the worst case, what is the growth of the function the optimal algorithm for the problem takes?

(56)

Problem Complexity

We say that the (worst-case) time complexity of Problem P is Θ(f(n)) if

1. The time complexity of Problem P is O(f(n)) &

o There exists an O(f(n))-time algorithm that solves Problem P

2. The time complexity of Problem P is Ω(f(n))

o Any algorithm that solves Problem P requires Ω(f(n)) time

The time complexity of the champion problem is Θ(n) because

1. The time complexity of the champion problem is O(n) &

o 「擂臺法」is an O(n)-time algorithm

2. The time complexity of the champion problem is Ω(n)

o Any algorithm needs at least n − 1 comparisons

(57)

If Algorithm A is an optimal algorithm for Problem P in terms of worst-case time complexity:

Algorithm A runs in time O(f(n)) &

The time complexity of Problem P is Ω(f(n)) in the worst case

Examples (the champion problem)

擂台法

It runs in O(n) time &

Any algorithm solving the problem requires Ω(n) time in the worst case

→ optimal algorithm

百般無聊擂台法

It runs in O(n) time &

Any algorithm solving the problem requires Ω(n) time in the worst case

→ optimal algorithm

(58)

Problem P is no harder than Problem Q in terms of (worst-case) time complexity if there exists a function f(n) s.t.

The (worst-case) time complexity of Problem P is O(f(n)) &

The (worst-case) time complexity of Problem Q is Ω(f(n))

Problem P is (strictly) easier than Problem Q in terms of (worst-case) time complexity if there exists a function f(n) s.t.

The (worst-case) time complexity of Problem P is O(f(n)) &

The (worst-case) time complexity of Problem Q is ω(f(n)), or

The (worst-case) time complexity of Problem P is o(f(n)) &

The (worst-case) time complexity of Problem Q is Ω(f(n))
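For instance (combining earlier slides): take f(n) = n. The champion problem has (worst-case) time complexity O(n), while the sorting problem, whose complexity under the comparison-based model is Θ(n log n), has complexity ω(n); hence the champion problem is strictly easier than the sorting problem in terms of worst-case time complexity.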

(59)

Algorithm Design and Analysis Process

1) Formulate a problem

2) Develop an algorithm

3) Prove the correctness

4) Analyze running time/space requirement

Usually brute force (暴力法) is not very efficient

Analysis Skills

Prove by contradiction

Induction

Asymptotic analysis

Problem instance

Algorithm Complexity

In the worst case, what is the growth of the function an algorithm takes?

Problem Complexity

In the worst case, what is the growth of the function the optimal algorithm for the problem takes?

Design Step / Analysis Step

(60)

Textbook Ch. 3 – Growth of Functions

(61)

Course Website: http://ada.miulab.org

Email: ada-ta@csie.ntu.edu.tw

Important announcements will be sent to your @ntu.edu.tw mailbox and posted on the course website
