Prove the correctness


(1)
(2)

Date: 11/16 (Thursday)

Time: 14:20-17:20 (3 hours)

Location: R103 (check the seat assignment before entering the room)

Content

Recurrence and Asymptotic Analysis

Divide and Conquer

Dynamic Programming

Greedy

Based on slides, assignments, and some variations (practice via textbook exercises)

Format: Yes/No, Multiple-Choice, Short Answer, Proof/Explanation

Easy: ~60%, Medium: ~30%, Hard: ~10%

Closed book

(3)

1)

2)

3)

4)

(4)

Analysis Skills

Induction

Asymptotic analysis

Problem instance

Algorithm Complexity

In the worst case, what is the growth of the function describing the time an algorithm takes

Problem Complexity

In the worst case, what is the growth of the function describing the time the best possible algorithm for the problem takes

(5)

Do not focus on “specific algorithms”

But “some strategies” to “design” algorithms

First Skill: Divide-and-Conquer (各個擊破)

Second Skill: Dynamic Programming (動態規劃)

Third Skill: Greedy (貪婪法則)

(6)

(7)

Apply three steps at each level of the recursion:

1. Divide the problem into a number of subproblems that are smaller instances of the same problem

2. Conquer the subproblems by solving them recursively: if a subproblem is small enough (base case), solve it directly; otherwise (recursive case), recurse

3. Combine the solutions to the subproblems into the solution for the original problem

(8)

1. Substitution Method (取代法): guess a bound and then prove it by induction

2. Recursion-Tree Method (遞迴樹法): expand the recurrence into a tree and sum up the costs

3. Master Method (套公式大法/大師法): apply the Master Theorem to a specific form of recurrences

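A small numerical sanity check of these methods (the recurrence T(n) = 2T(n/2) + n is chosen for illustration, not taken from the slides; the Master Theorem with a = b = 2 and f(n) = n puts it in Θ(n log n)):

```python
import math

# Illustrative recurrence: T(n) = 2T(n/2) + n.

def T(n):
    if n <= 1:
        return 1                  # base case cost
    return 2 * T(n // 2) + n      # two half-size subproblems + linear combine

# For powers of two, the substitution method gives exactly
# T(n) = n*log2(n) + n, so T(n) / (n log2 n) approaches 1.
for k in (4, 8, 12):
    n = 2 ** k
    print(n, T(n), T(n) / (n * math.log2(n)))
```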
(9)

Whether the problem with small inputs can be solved directly

Whether subproblem solutions can be combined into the original solution

Whether the overall complexity is better than naïve

(10)

(11)

“Programming” here refers to a tabular method, not to writing code

Dynamic Programming: planning over time

(12)

Divide-and-Conquer

partition the problem into independent or disjoint subproblems

repeatedly solving the common subsubproblems → more work than necessary

Dynamic Programming

partition the problem into dependent or overlapping subproblems

avoid recomputation

Top-down with memoization

Bottom-up method

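The two ways to avoid recomputation can be sketched side by side (Fibonacci is used here only as a minimal illustration; it is not from the slides):

```python
from functools import lru_cache

# Top-down with memoization: recurse as in divide-and-conquer, but cache
# each subproblem so overlapping calls are computed only once.
@lru_cache(maxsize=None)
def fib_td(n):
    if n < 2:
        return n
    return fib_td(n - 1) + fib_td(n - 2)

# Bottom-up: fill a table from the smallest subproblems upward.
def fib_bu(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_td(40), fib_bu(40))  # both run in O(n), not O(2^n)
```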
(13)

Apply four steps

1. Characterize the structure of an optimal solution

2. Recursively define the value of an optimal solution

3. Compute the value of an optimal solution, typically in a bottom-up fashion

4. Construct an optimal solution from computed information
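The four steps can be illustrated with rod cutting (a standard textbook example used here for illustration; the prices below are made up):

```python
# Rod cutting as an illustration of the four DP steps.
# Step 1 (structure): an optimal cut of length n is a first piece of some
#   length i plus an optimal cut of the remaining length n - i.
# Step 2 (recurrence): r[j] = max over 1 <= i <= j of price[i] + r[j - i].

def cut_rod(price, n):
    r = [0] * (n + 1)          # r[j] = best revenue for a rod of length j
    first = [0] * (n + 1)      # first piece chosen for length j (for step 4)
    for j in range(1, n + 1):  # Step 3: fill the table bottom-up
        for i in range(1, j + 1):
            if price[i] + r[j - i] > r[j]:
                r[j] = price[i] + r[j - i]
                first[j] = i
    cuts = []                  # Step 4: reconstruct an optimal solution
    while n > 0:
        cuts.append(first[n])
        n -= first[n]
    return r[-1], cuts

price = [0, 1, 5, 8, 9]        # price[i] = value of a piece of length i
print(cut_rod(price, 4))       # → (10, [2, 2])
```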

(14)

Whether subproblem solutions can be combined into the original solution

Whether the subproblems overlap

Whether the problem has optimal substructure

Common for optimization problems

Two ways to avoid recomputation

Top-down with memoization

Bottom-up method

Complexity analysis

Space for filling the table

Size of the subproblem graph

(15)
(16)

A greedy algorithm makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution

It does not always yield an optimal solution; it may end up at a local optimum

(figure: a curve with two local maxima, only one of which is the global maximum)
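A classic case where the greedy choice lands on a local optimum is coin change with denominations {1, 3, 4} (a standard counterexample, not taken from the slides):

```python
# Greedy vs. DP on coin change with denominations {1, 3, 4}.

def greedy_coins(amount, coins=(4, 3, 1)):
    used = []
    for c in coins:                      # always take the largest coin first
        while amount >= c:
            used.append(c); amount -= c
    return used

def dp_min_coins(amount, coins=(1, 3, 4)):
    best = [0] + [None] * amount         # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in coins if c <= a)
    return best[amount]

print(greedy_coins(6))      # → [4, 1, 1]: 3 coins, a local optimum
print(dp_min_coins(6))      # → 2 (using 3 + 3): the global optimum
```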

(17)

Dynamic Programming

has optimal substructure

make an informed choice after getting optimal solutions to subproblems

dependent or overlapping subproblems

Greedy Algorithms

has optimal substructure

make a greedy choice before solving the subproblem

no overlapping subproblems

Each round selects only one subproblem

The subproblem size decreases

Dynamic Programming: Optimal Solution = max/min over the possible cases of (choice + Subproblem Solution)

Greedy: Optimal Solution = Greedy Choice + Subproblem Solution

(18)

1. Demonstrate the optimal substructure

2. Show that combining the greedy choice with an optimal solution to the subproblem yields an optimal solution to the original problem

3. Prove that there is always an optimal solution to the original problem that makes the greedy choice
(19)

Optimal Substructure: an optimal solution to the problem contains within it optimal solutions to subproblems

Greedy-Choice Property: making locally optimal (greedy) choices leads to a globally optimal solution

Show that there exists an optimal solution that “contains” the greedy choice, using an exchange argument

For any optimal solution OPT, the greedy choice 𝑔 has two cases

𝑔 is in OPT: done

𝑔 not in OPT: modify OPT into OPT’ s.t. OPT’ contains 𝑔 and is at least as good as OPT

(OPT’ cannot be strictly better than OPT, since that would contradict the optimality of OPT; hence OPT’ is an optimal solution containing 𝑔)
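The greedy-choice property can also be checked empirically. Activity selection (earliest finish time first) is used here purely as an illustration; the slides do not fix a particular problem:

```python
from itertools import combinations

# Activity selection: greedy choice g = activity with the earliest finish
# time. Exchange argument: in any OPT, the first-finishing activity can be
# swapped for g without creating overlap, so some optimal solution
# contains g. Below we compare greedy against brute force on one instance.

def compatible(acts):
    acts = sorted(acts)
    return all(a[1] <= b[0] for a, b in zip(acts, acts[1:]))

def greedy_select(acts):
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(acts, key=lambda a: a[1]):
        if start >= last_finish:          # g fits: make the greedy choice
            chosen.append((start, finish)); last_finish = finish
    return chosen

def brute_force_size(acts):
    # Largest pairwise-compatible subset, by exhaustive search.
    return max(len(s) for r in range(len(acts) + 1)
               for s in combinations(acts, r) if compatible(s))

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(len(greedy_select(acts)), brute_force_size(acts))  # → 3 3
```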

(20)

Whether the problem has optimal substructure

Whether we can make a greedy choice that leaves only one subproblem

Common for optimization problems

Optimal Solution = Greedy Choice + Subproblem Solution

(21)
(22)

True or False: To prove the correctness of a greedy algorithm, we must prove that every optimal solution contains our greedy choice.

Given the following recurrence relation, provide a valid traversal order to fill the DP table or justify why no valid traversal exists.

(23)

Input: a sequence of integers 𝑙0, 𝑙1, … , 𝑙𝑛

𝑙𝑖−1 is the number of rows of matrix 𝐴𝑖

𝑙𝑖 is the number of columns of matrix 𝐴𝑖

(adjacent matrices in 𝐴1 𝐴2 𝐴3 𝐴4 … 𝐴𝑛 are compatible: 𝐴1.cols = 𝐴2.rows, and so on)

Output: an order of performing the 𝑛 − 1 matrix multiplications to obtain the product 𝐴1𝐴2 … 𝐴𝑛 in the maximum number of operations

(split the chain 𝐴𝑖 … 𝐴𝑗 at some 𝑘 with 𝑖 ≤ 𝑘 < 𝑗)
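A sketch of the DP table-filling for this exercise (the dimension sequence below is made-up sample input; since the exercise asks for the maximum number of operations, the recurrence uses max, and the familiar minimization variant simply uses min instead):

```python
# m[i][j] = extremal number of scalar multiplications to compute A_i..A_j:
#   m[i][j] = pick over i <= k < j of m[i][k] + m[k+1][j] + l[i-1]*l[k]*l[j]
# with pick = max for this exercise (pick = min for the usual variant).

def matrix_chain(l, pick=max):
    n = len(l) - 1                       # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):       # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = pick(m[i][k] + m[k + 1][j] + l[i - 1] * l[k] * l[j]
                           for k in range(i, j))
    return m[1][n]

l = [5, 4, 6, 2, 7]                      # A1:5x4, A2:4x6, A3:6x2, A4:2x7
print(matrix_chain(l, max), matrix_chain(l, min))  # → 414 158
```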

(24)

In a binary Huffman code, the number of symbols that can be used to form a codeword is 2 (0 and 1). We can use a 𝑛-ary Huffman code, in which the number of symbols that can be used to form a codeword is 𝑛.

1) Please prove that the decoding tree which represents an optimal code for a file must be a full 𝑛-ary tree, i.e., all non-leaf nodes in the tree have 𝑛 children.

2) Given the following information for a file, what are the lengths of the codewords for the characters ‘a’ and ‘c’, respectively, in a 3-ary Huffman code derived for this file?

Character:   a    b    c    d    e     f     g
Frequency: 700  400  200  100  1300  2400  100
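Part 2) can be sanity-checked with a small n-ary Huffman sketch (an illustrative implementation, not an official solution; dummy zero-frequency symbols are padded in so that every merge takes exactly n nodes and the tree stays full):

```python
import heapq

# n-ary Huffman: repeatedly merge the n lightest subtrees. Each merge
# pushes every leaf below it one level deeper, so tracking leaf depths
# gives the codeword lengths directly.

def nary_huffman_depths(freq, n):
    heap = [(f, [c]) for c, f in freq.items()]    # (weight, leaves below)
    while (len(heap) - 1) % (n - 1) != 0:
        heap.append((0, []))                      # pad with dummy symbols
    heapq.heapify(heap)
    depth = {c: 0 for c in freq}
    while len(heap) > 1:
        merged_f, merged_leaves = 0, []
        for _ in range(n):                        # merge the n lightest
            f, leaves = heapq.heappop(heap)
            merged_f += f
            merged_leaves += leaves
            for c in leaves:
                depth[c] += 1                     # one level deeper
        heapq.heappush(heap, (merged_f, merged_leaves))
    return depth                                  # codeword lengths

freq = {'a': 700, 'b': 400, 'c': 200, 'd': 100,
        'e': 1300, 'f': 2400, 'g': 100}
d = nary_huffman_depths(freq, 3)
print(d['a'], d['c'])  # → 2 3
```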

(25)

Important announcements will be sent to your @ntu.edu.tw mailbox and posted to the course website
