
# Algorithm Design and Analysis Dynamic Programming (1)


(1)

(2)

### • Mini-HW 3 released

• Due on 10/10 (Thu) 14:20

• Online submission

### • Homework 1 released

• Due on 10/17 (Thu) 17:20 (2 weeks left)

• Writing: print out the A4 hard copy and submit to NTU COOL

• Programming: submit to Online Judge – http://ada-judge.csie.ntu.edu.tw

(3)

### Outline

• Dynamic Programming

• DP #1: Rod Cutting

• DP #2: Stamp Problem

• DP #3: Sequence Alignment Problem

• Longest Common Subsequence (LCS) / Edit Distance

• Viterbi Algorithm

• Space Efficient Algorithm

• DP #4: Matrix-Chain Multiplication

• DP #5: Weighted Interval Scheduling

• DP #6: Knapsack Problem

• 0/1 Knapsack

• Unbounded Knapsack

• Multidimensional Knapsack

• Fractional Knapsack

(4)

### Warm-Up – The Prisoner Problem

• There are 100 prisoners on death row, to be executed the next day. The warden grants them one chance to survive.

• At the execution, each prisoner wears a hat (black or white) and they stand in a line. Before the execution, starting from the prisoner at the back of the line, each prisoner guesses the color of his own hat (only "black" or "white" may be said). A correct guess spares his life; a wrong guess means execution.

• If the prisoners may gather the night before to agree on a strategy, is there a scheme that maximizes the expected number of survivors?

(5)

### Guessing Rules

• The prisoners stand in a line; each can see the hats of everyone in front of him, but not his own nor those of the prisoners behind him.

• Guessing starts from the last prisoner in the line and proceeds forward.

• Every prisoner can hear all the guesses made before his turn.

Example: each odd-numbered prisoner announces the hat color of the person directly in front of him → expected number of survivors: 75. Is there a strategy that saves more?

(6)

(7)

## Dynamic Programming

Textbook Chapter 15 – Dynamic Programming

Textbook Chapter 15.3 – Elements of dynamic programming

(8)

• Trade space for time (用空間換取時間)

• Record the results of subproblems already solved (讓走過的留下痕跡)

### "Programming": a tabular method

Dynamic Programming: planning over time

(9)

### • Divide-and-Conquer

• partition the problem into independent or disjoint subproblems

• repeatedly solve the common subsubproblems → more work than necessary

### • Dynamic Programming

• partition the problem into dependent or overlapping subproblems

• avoid recomputation

✓ Top-down with memoization

✓ Bottom-up method

(10)

### • Apply four steps

1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution, typically in a bottom-up fashion
4. Construct an optimal solution from computed information

(11)

### • Fibonacci sequence (費波那契數列)

• Base case: F(0) = F(1) = 1

• Recursive case: F(n) = F(n-1) + F(n-2)

```
Fibonacci(n)
  if n < 2  // base case
    return 1
  // recursive case
  return Fibonacci(n-1) + Fibonacci(n-2)
```

[Recursion tree of F(5): F(5) calls F(4) and F(3); F(4) calls F(3) and F(2); and so on down to F(1) and F(0)]

Calling overlapping subproblems results in poor efficiency:

• F(3) was computed twice

• F(2) was computed 3 times

(12)

### • Solve the overlapping subproblems recursively with memoization

• Check the memo before making the calls

[Recursion tree with memoization: only the leftmost chain F(5) → F(4) → F(3) → F(2) → F(1), F(0) is actually computed; the second call to each F(i) is answered from the memo]

| n    | 0 | 1 | 2 | 3 | 4 | 5 |
|------|---|---|---|---|---|---|
| F(n) | 1 | 1 | 2 | 3 | 5 | 8 |

Avoid recomputation of the same subproblems using the memo

(13)

### Top-Down with Memoization

```
Memoized-Fibonacci(n)
  // initialize memo (array a[])
  a[0] = 1
  a[1] = 1
  for i = 2 to n
    a[i] = 0
  return Memoized-Fibonacci-Aux(n, a)

Memoized-Fibonacci-Aux(n, a)
  if a[n] > 0
    return a[n]
  // save the result to avoid recomputation
  a[n] = Memoized-Fibonacci-Aux(n-1, a) + Memoized-Fibonacci-Aux(n-2, a)
  return a[n]
```

(14)

### • Building up solutions to larger and larger subproblems

```
Bottom-Up-Fibonacci(n)
  if n < 2
    return 1
  a[0] = 1
  a[1] = 1
  for i = 2 to n
    a[i] = a[i-1] + a[i-2]
  return a[n]
```

[Dependency chain: F(5) ← F(4) ← F(3) ← F(2) ← F(1), F(0)]

Avoid recomputation of the same subproblems
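Both strategies from the slides can be sketched in Python (same convention as above, F(0) = F(1) = 1):

```python
from functools import lru_cache

# Top-down with memoization: the cache plays the role of the memo array a[],
# so each F(i) is computed only once.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:            # base case: F(0) = F(1) = 1
        return 1
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: fill the table from small to large subproblems.
def fib_bottom_up(n):
    if n < 2:
        return 1
    a = [0] * (n + 1)
    a[0] = a[1] = 1
    for i in range(2, n + 1):
        a[i] = a[i - 1] + a[i - 2]
    return a[n]

print(fib_top_down(5), fib_bottom_up(5))  # → 8 8
```

Both run in O(n) time; the naive recursion from the previous slide is exponential.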

(15)

### • Principle of Optimality

• Any subpolicy of an optimum policy must itself be an optimum policy with regard to the initial and terminal states of the subpolicy

### • Two key properties of DP for optimization

• Overlapping subproblems

• Optimal substructure – an optimal solution can be constructed from optimal solutions to subproblems

✓ Reduce search space (ignore non-optimal solutions)

If the optimal substructure (principle of optimality) does not hold, then it is incorrect to use DP

(16)

### • Shortest Path Problem

• Input: a graph where the edges have positive costs

• Output: a path from S to T with the smallest cost

Example: the shortest path from Tainan (S) to Taipei (T) passes through an intermediate city M, with segment costs c(S→M) and c(M→T).

Claim: if the path costing c(S→M) + c(M→T) is the shortest path from S to T, then the subpath with cost c(S→M) must be a shortest path from S to M.

Proof by a "cut-and-paste" argument (proof by contradiction): suppose there exists a path from S to M with a smaller cost c′(S→M) < c(S→M). Then we can "cut" the subpath costing c(S→M) and "paste" the cheaper one, obtaining a path from S to T cheaper than the shortest path, a contradiction.

(17)

## DP#1: Rod Cutting

Textbook Chapter 15.1 – Rod Cutting

(18)

### Rod Cutting Problem

Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable by cutting up the rod and selling the pieces

| length 𝑖 (m) | 1 | 2 | 3 | 4 | 5 |
|---------------|---|---|---|---|----|
| price 𝑝𝑖     | 1 | 5 | 8 | 9 | 10 |

Example: a 4m rod cut into 2m + 2m.

(19)

### • A rod with the length = 4

All 8 ways to cut a 4m rod:

• 4m → 9

• 3m + 1m → 8 + 1 = 9

• 2m + 2m → 5 + 5 = 10 (optimal)

• 1m + 3m → 1 + 8 = 9

• 2m + 1m + 1m → 5 + 1 + 1 = 7

• 1m + 2m + 1m → 1 + 5 + 1 = 7

• 1m + 1m + 2m → 1 + 1 + 5 = 7

• 1m + 1m + 1m + 1m → 1 + 1 + 1 + 1 = 4

| length 𝑖 (m) | 1 | 2 | 3 | 4 | 5 |
|---------------|---|---|---|---|----|
| price 𝑝𝑖     | 1 | 5 | 8 | 9 | 10 |

(20)

### • A rod with the length = 𝑛

• For each of the 𝑛 − 1 integer positions, we can choose "cut" or "not cut"

• → the number of possible cuttings is 2^(𝑛−1) = Θ(2^𝑛)

| length 𝑖 (m) | 1 | 2 | 3 | 4 | 5 |
|---------------|---|---|---|---|----|
| price 𝑝𝑖     | 1 | 5 | 8 | 9 | 10 |

(21)

### • Optimal substructure – an optimal solution can be constructed from optimal solutions to subproblems

• Let 𝑟𝑛 be the maximum revenue obtainable for a rod of length 𝑛

• Version 1: either there is no cut (revenue 𝑝𝑛), or we cut at the 𝑖-th position (from left to right) and combine optimal solutions 𝑟𝑖 and 𝑟𝑛−𝑖 for the two parts:

𝑟𝑛 = max(𝑝𝑛, max_{1≤𝑖<𝑛} (𝑟𝑖 + 𝑟𝑛−𝑖))

(22)

### • Version 2

• try to reduce the number of subproblems → focus on the left-most cut

• either there is no cut, or the left-most piece has length 𝑖 (value 𝑝𝑖) and the remaining part contributes its maximum obtainable value 𝑟𝑛−𝑖:

𝑟𝑛 = max_{1≤𝑖≤𝑛} (𝑝𝑖 + 𝑟𝑛−𝑖)

(the case 𝑖 = 𝑛 covers "no cut")

(23)

### • Focus on the left-most cut

• assume that we always cut from left to right → consider only the first cut

• the remaining part must be an optimal solution to a subproblem:

𝑟𝑛 = max(𝑝1 + 𝑟𝑛−1, 𝑝2 + 𝑟𝑛−2, … , 𝑝𝑛 + 𝑟0)

(24)

### • 𝑇(𝑛) = time for running Cut-Rod(p, n)

```
Cut-Rod(p, n)
  // base case
  if n == 0
    return 0
  // recursive case
  q = -∞
  for i = 1 to n
    q = max(q, p[i] + Cut-Rod(p, n - i))
  return q
```

𝑇(𝑛) = 1 + Σ_{j=0}^{n−1} 𝑇(𝑗) → 𝑇(𝑛) = 2^𝑛

(25)

### • Rod cutting problem

```
Cut-Rod(p, n)
  // base case
  if n == 0
    return 0
  // recursive case
  q = -∞
  for i = 1 to n
    q = max(q, p[i] + Cut-Rod(p, n - i))
  return q
```

[Recursion tree of CR(4): CR(4) calls CR(3), CR(2), CR(1), CR(0); CR(3) calls CR(2), CR(1), CR(0); and so on]

Calling overlapping subproblems results in poor efficiency: CR(2) is computed twice, CR(1) and CR(0) many times

(26)

### • DP algorithm

• Top-down: solve overlapping subproblems recursively with memoization

• Bottom-up: build up solutions to larger and larger subproblems

(27)

### • Top-Down with Memoization

• Solve recursively and memo the subsolutions (the table is filled in a scattered order)

• Suitable when not all subproblems need to be solved

### • Bottom-Up with Tabulation

• Fill the table from small to large

• Suitable when every smaller subproblem needs to be solved

[Table illustration: f(0) f(1) f(2) … f(n)]

(28)

### • 𝑇(𝑛) = time for running Memoized-Cut-Rod(p, n)

```
Memoized-Cut-Rod(p, n)
  // initialize memo (an array r[] to keep max revenue)
  r[0] = 0
  for i = 1 to n
    r[i] = -∞  // r[i] = max revenue for rod with length = i
  return Memoized-Cut-Rod-Aux(p, n, r)

Memoized-Cut-Rod-Aux(p, n, r)
  if r[n] >= 0
    return r[n]  // return the saved solution
  q = -∞
  for i = 1 to n
    q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n-i, r))
  r[n] = q  // update memo
  return q
```

𝑇(𝑛) = Θ(𝑛²)

(29)

### • 𝑇(𝑛) = time for running Bottom-Up-Cut-Rod(p, n)

```
Bottom-Up-Cut-Rod(p, n)
  r[0] = 0
  for j = 1 to n  // compute r[1], r[2], ... in order
    q = -∞
    for i = 1 to j
      q = max(q, p[i] + r[j - i])
    r[j] = q
  return r[n]
```

𝑇(𝑛) = Θ(𝑛²)
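A direct Python transcription of Bottom-Up-Cut-Rod, using the price table from the slides (`prices[0]` is a placeholder so that `prices[i]` is the price of a length-i piece):

```python
def bottom_up_cut_rod(p, n):
    """p[i] = price of a piece of length i (p[0] unused); returns max revenue."""
    r = [0] * (n + 1)            # r[j] = max revenue for a rod of length j
    for j in range(1, n + 1):    # compute r[1], r[2], ... in order
        q = float('-inf')
        for i in range(1, j + 1):
            # first piece of length i, plus the best revenue for the rest
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

prices = [0, 1, 5, 8, 9, 10]     # the slides' table, lengths 1..5
print(bottom_up_cut_rod(prices, 4))  # → 10 (cut into 2m + 2m)
```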

(30)

### Extended Rod Cutting Problem

Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable and the list of cut pieces

Example: a 4m rod cut into 2m + 2m.

| length 𝑖 (m) | 1 | 2 | 3 | 4 | 5 |
|---------------|---|---|---|---|----|
| price 𝑝𝑖     | 1 | 5 | 8 | 9 | 10 |

(31)

### • Add an array cut[] to keep the cutting positions

```
Extended-Bottom-Up-Cut-Rod(p, n)
  r[0] = 0
  for j = 1 to n  // compute r[1], r[2], ... in order
    q = -∞
    for i = 1 to j
      if q < p[i] + r[j - i]
        q = p[i] + r[j - i]
        cut[j] = i  // the best first cut for a length-j rod
    r[j] = q
  return r[n], cut

Print-Cut-Rod-Solution(p, n)
  (r, cut) = Extended-Bottom-Up-Cut-Rod(p, n)
  while n > 0
    print cut[n]
    n = n - cut[n]  // remove the first piece
```

(32)

### • Top-Down with Memoization

• Better when some subproblems need not be solved at all

• Solves only the required subproblems

### • Bottom-Up with Tabulation

• Better when all subproblems must be solved at least once

• Typically outperforms the top-down method by a constant factor

• No overhead for recursive calls

• Less overhead for maintaining the table

[Illustrations: table f(0) f(1) f(2) … f(n); dependency chain F(5) ← F(4) ← F(3) ← F(2) ← F(1), F(0)]

(33)

### • Approach 1: approximate via (#subproblems) × (#choices for each subproblem)

• For rod cutting

• #subproblems = n

• #choices for each subproblem = O(n)

• → T(n) is about O(n²)

(34)

### • Approach 2: analyze how subproblems depend on one another via a graph 𝐺 = (𝑉, 𝐸) (V: subproblems as vertices, E: dependencies as edges)

• |𝑉|: #subproblems

• A subproblem is run only once

• |𝐸|: sum of #subsubproblems needed over all subproblems

• Time complexity: linear in the graph size, 𝑂(|𝑉| + |𝐸|)

Bottom-up: Reverse Topological Sort; Top-down: Depth-First Search (Graph Algorithms, taught later)

[Dependency chain: F(5) ← F(4) ← F(3) ← F(2) ← F(1), F(0)]

(35)

### 1. Characterize the structure of an optimal solution

✓ Overlapping subproblems: revisit same subproblems

✓ Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems

### 2. Recursively define the value of an optimal solution

✓ Express the solution of the original problem in terms of optimal solutions for subproblems

### 3. Compute the value of an optimal solution

✓ typically in a bottom-up fashion

### 4. Construct an optimal solution from computed information

✓ Step 3 and 4 may be combined

(36)

(37)

### • Step 1-Q2: Does it exhibit optimal substructure? (an optimal solution can be represented by the optimal solutions to subproblems)

• Yes → continue

• No → go back to Step 1-Q1, or there is no DP solution for this problem

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(38)

### • Step 1-Q1: What can be the subproblems?

• Subproblems: Cut-Rod(0), Cut-Rod(1), …, Cut-Rod(n-1)

• Cut-Rod(i): rod cutting problem with a length-i rod

• Goal: Cut-Rod(n)

• Suppose we know the optimal solution to Cut-Rod(i); there are i cases:

• Case 1: the first segment in the solution has length 1

• Case 2: the first segment in the solution has length 2

:

• Case i: the first segment in the solution has length i

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(39)

### • Does it exhibit optimal substructure? Yes. Prove by contradiction (cut-and-paste argument).

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(40)

### Step 2: Recursively Define the Value of an OPT Solution

• Suppose we know the optimal solution to Cut-Rod(i); there are i cases:

• Case 1: the first segment in the solution has length 1

• Case 2: the first segment in the solution has length 2

:

• Case i: the first segment in the solution has length i

• Recursively define the value:

𝑟0 = 0; 𝑟𝑖 = max_{1≤𝑗≤𝑖} (𝑝𝑗 + 𝑟𝑖−𝑗)

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(41)

### • Bottom-up method: solve smaller subproblems first

| i    | 0 | 1 | 2 | 3 | 4 | 5 | … | n |
|------|---|---|---|---|---|---|---|---|
| r[i] |   |   |   |   |   |   |   |   |

```
Bottom-Up-Cut-Rod(p, n)
  r[0] = 0
  for j = 1 to n  // compute r[1], r[2], ... in order
    q = -∞
    for i = 1 to j
      q = max(q, p[i] + r[j - i])
    r[j] = q
```

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(42)

### • Bottom-up method: solve smaller subproblems first

| i      | 0 | 1 | 2 | 3 | 4  | 5  |
|--------|---|---|---|---|----|----|
| r[i]   | 0 | 1 | 5 | 8 | 10 | 13 |
| cut[i] | 0 | 1 | 2 | 3 | 2  | 2  |

| length 𝑖 | 1 | 2 | 3 | 4 | 5 |
|-----------|---|---|---|---|----|
| price 𝑝𝑖 | 1 | 5 | 8 | 9 | 10 |

Rod Cutting Problem
Input: a rod of length 𝑛 and a table of prices 𝑝𝑖 for 𝑖 = 1, … , 𝑛
Output: the maximum revenue 𝑟𝑛 obtainable

(43)

### Step 4: Construct an OPT Solution by Backtracking

```
Cut-Rod(p, n)
  r[0] = 0
  for j = 1 to n  // compute r[1], r[2], ... in order
    q = -∞
    for i = 1 to j
      if q < p[i] + r[j - i]
        q = p[i] + r[j - i]
        cut[j] = i  // the best first cut for a length-j rod
    r[j] = q
  return r[n], cut

Print-Cut-Rod-Solution(p, n)
  (r, cut) = Cut-Rod(p, n)
  while n > 0
    print cut[n]
    n = n - cut[n]  // remove the first piece
```
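The backtracking version can be sketched in Python; `cut_rod_pieces` is a hypothetical helper name that returns the pieces instead of printing them:

```python
def extended_bottom_up_cut_rod(p, n):
    """Returns (r, cut): max revenues and the best first cut for each length."""
    r = [0] * (n + 1)
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float('-inf')
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                cut[j] = i          # best first cut for a length-j rod
        r[j] = q
    return r, cut

def cut_rod_pieces(p, n):
    """Backtrack through cut[] to recover the pieces of an optimal cutting."""
    r, cut = extended_bottom_up_cut_rod(p, n)
    total = r[n]
    pieces = []
    while n > 0:
        pieces.append(cut[n])
        n -= cut[n]                 # remove the first piece
    return total, pieces

print(cut_rod_pieces([0, 1, 5, 8, 9, 10], 4))  # → (10, [2, 2])
```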

(44)

## DP#2: Stamp Problem

(45)

Stamp Problem
Input: the postage 𝑛 and the stamps with values 𝑣1, 𝑣2, … , 𝑣𝑘
Output: the minimum number of stamps to cover the postage

(46)

### • The minimum number of stamps 𝑆(𝑛) can be recursively defined as

𝑆(0) = 0; 𝑆(𝑛) = 1 + min_{𝑖: 𝑣𝑖 ≤ 𝑛} 𝑆(𝑛 − 𝑣𝑖)

```
Stamp(v, n)
  if n == 0  // base case
    return 0
  r_min = ∞
  for i = 1 to k  // recursive case
    if v[i] <= n
      r[i] = Stamp(v, n - v[i])
      if r[i] < r_min
        r_min = r[i]
  return r_min + 1
```

(47)

### Step 1: Characterize an OPT Solution

• Subproblems

• S(i): the min #stamps with postage i

• Goal: S(n)

• Optimal substructure: suppose we know the optimal solution to S(i); there are k cases:

• Case 1: there is a stamp with v1 in OPT

• Case 2: there is a stamp with v2 in OPT

:

• Case k: there is a stamp with vk in OPT

Stamp Problem
Input: the postage 𝑛 and the stamps with values 𝑣1, 𝑣2, … , 𝑣𝑘
Output: the minimum number of stamps to cover the postage

(48)

### Step 2: Recursively Define the Value of an OPT Solution

• Suppose we know the optimal solution to S(i); there are k cases:

• Case 1: there is a stamp with v1 in OPT

• Case 2: there is a stamp with v2 in OPT

:

• Case k: there is a stamp with vk in OPT

### • Recursively define the value

𝑆(0) = 0; 𝑆(𝑖) = 1 + min_{𝑗: 𝑣𝑗 ≤ 𝑖} 𝑆(𝑖 − 𝑣𝑗)

Stamp Problem
Input: the postage 𝑛 and the stamps with values 𝑣1, 𝑣2, … , 𝑣𝑘
Output: the minimum number of stamps to cover the postage

(49)

Stamp Problem
Input: the postage 𝑛 and the stamps with values 𝑣1, 𝑣2, … , 𝑣𝑘
Output: the minimum number of stamps to cover the postage

### • Bottom-up method: solve smaller subproblems first

| i    | 0 | 1 | 2 | 3 | 4 | 5 | … | n |
|------|---|---|---|---|---|---|---|---|
| S[i] | 0 |   |   |   |   |   |   |   |

```
Stamp(v, n)
  S[0] = 0
  for i = 1 to n  // compute S[1], S[2], ... in order
    r_min = ∞
    for j = 1 to k
      if v[j] <= i and 1 + S[i - v[j]] < r_min
        r_min = 1 + S[i - v[j]]
    S[i] = r_min
  return S[n]
```

(50)

### Step 4: Construct an OPT Solution by Backtracking

```
Stamp(v, n)
  S[0] = 0
  for i = 1 to n
    r_min = ∞
    for j = 1 to k
      if v[j] <= i and 1 + S[i - v[j]] < r_min
        r_min = 1 + S[i - v[j]]
        B[i] = j  // backtracking: a stamp with value v[j] is used
    S[i] = r_min
  return S[n], B

Print-Stamp-Selection(v, n)
  (S, B) = Stamp(v, n)
  while n > 0
    print B[n]
    n = n - v[B[n]]
```
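Steps 3 and 4 combined can be sketched in Python; as in the slides, the `B` array records one stamp used in an optimal solution for each postage:

```python
import math

def min_stamps(v, n):
    """v: list of stamp values; n: postage.
    Returns (S[n], pieces), where pieces are the stamp values used."""
    S = [0] + [math.inf] * n       # S[i] = min #stamps for postage i
    B = [0] * (n + 1)              # B[i] = value of a stamp used in an OPT for i
    for i in range(1, n + 1):
        for value in v:
            if value <= i and 1 + S[i - value] < S[i]:
                S[i] = 1 + S[i - value]
                B[i] = value
    # backtrack: repeatedly remove the recorded stamp
    pieces, i = [], n
    while i > 0 and B[i] > 0:
        pieces.append(B[i])
        i -= B[i]
    return S[n], pieces

count, pieces = min_stamps([1, 5, 8], 12)  # e.g. postage 12, values 1, 5, 8
```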

(51)

## DP#3: Sequence Alignment

Textbook Chapter 15.4 – Longest common subsequence Textbook Problem 15-5 – Edit distance

(52)

### • How to evaluate the similarity between two sequences?

svkbrlvpnzanczyqza

banana

(53)

### • Output: longest common subsequence of two sequences

• The maximum-length sequence of characters that appear left-to-right (but not necessarily as a contiguous string) in both sequences

X = banana
Y = svkbrlvpnzanczyqza

X → ---ba---n-an---a
Y → svkbrlvpnzanczyqza

LCS(X, Y) = banana

The infinite monkey theorem: a monkey hitting keys at random

(54)

### • Output: the minimum cost of transformation from X to Y

• A quantifier of the dissimilarity of two strings

X = banana
Y = svkbrlvpnzanczyqza

X → ---ba---n-an---a
Y → svkbrlvpnzanczyqza

• Alignment 1: 1 deletion, 7 insertions, 1 substitution → cost 9 (with unit-cost operations)

• Alignment 2: 12 insertions, 1 substitution → cost 13

(55)

### Sequence Alignment Problem

Input: two sequences of lengths 𝑚 and 𝑛
Output: the minimal cost 𝑀𝑚,𝑛 for aligning the two sequences

• Cost = #insertions × 𝐶INS + #deletions × 𝐶DEL + #substitutions × 𝐶𝑝,𝑞

(56)

### Step 1: Characterize an OPT Solution

• Subproblems

• SA(i, j): sequence alignment between prefix strings 𝑥1, … , 𝑥𝑖 and 𝑦1, … , 𝑦𝑗

• Goal: SA(m, n)

• Optimal substructure: suppose OPT is an optimal solution to SA(i, j); there are 3 cases:

• Case 1: 𝑥𝑖 and 𝑦𝑗 are aligned in OPT (match or substitution)

• OPT ∖ {(𝑥𝑖, 𝑦𝑗)} is an optimal solution of SA(i-1, j-1)

• Case 2: 𝑥𝑖 is aligned with a gap in OPT (deletion)

• OPT is an optimal solution of SA(i-1, j)

• Case 3: 𝑦𝑗 is aligned with a gap in OPT (insertion)

• OPT is an optimal solution of SA(i, j-1)

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

(57)

### Step 2: Recursively Define the Value of an OPT Solution

• Suppose OPT is an optimal solution to SA(i, j); there are 3 cases:

• Case 1: 𝑥𝑖 and 𝑦𝑗 are aligned in OPT (match or substitution)

• OPT ∖ {(𝑥𝑖, 𝑦𝑗)} is an optimal solution of SA(i-1, j-1)

• Case 2: 𝑥𝑖 is aligned with a gap in OPT (deletion)

• OPT is an optimal solution of SA(i-1, j)

• Case 3: 𝑦𝑗 is aligned with a gap in OPT (insertion)

• OPT is an optimal solution of SA(i, j-1)

### • Recursively define the value

𝑀𝑖,0 = 𝑖 ⋅ 𝐶DEL, 𝑀0,𝑗 = 𝑗 ⋅ 𝐶INS
𝑀𝑖,𝑗 = min(𝑀𝑖−1,𝑗−1 + 𝐶𝑥𝑖,𝑦𝑗, 𝑀𝑖−1,𝑗 + 𝐶DEL, 𝑀𝑖,𝑗−1 + 𝐶INS)

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

(58)

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

### • Bottom-up method: solve smaller subproblems first

[Empty (m+1) × (n+1) table: rows X = 0 … m, columns Y = 0 … n, to be filled row by row]

(59)

### • Bottom-up method: solve smaller subproblems first

| X\Y | – | a | e | n | i | q | a | d | i | k | j | a | z |
|-----|---|---|---|----|----|----|----|----|----|----|----|----|----|
| – | 0 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 | 48 |
| b | 4 | 7 | 11 | 15 | 19 | 23 | 27 | 31 | 35 | 39 | 43 | 47 | 51 |
| a | 8 | 4 | 8 | 12 | 16 | 20 | 23 | 27 | 31 | 35 | 39 | 43 | 47 |
| n | 12 | 8 | 12 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 |
| a | 16 | 12 | 15 | 12 | 15 | 19 | 16 | 20 | 24 | 28 | 32 | 36 | 40 |
| n | 20 | 16 | 19 | 15 | 19 | 22 | 20 | 23 | 27 | 31 | 35 | 39 | 43 |

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

(60)

### • Bottom-up method: solve smaller subproblems first

```
Seq-Align(X, Y, CDEL, CINS, Cp,q)
  for j = 0 to n
    M[0][j] = j * CINS  // |X|=0: cost = |Y| * penalty
  for i = 1 to m
    M[i][0] = i * CDEL  // |Y|=0: cost = |X| * penalty
  for i = 1 to m
    for j = 1 to n
      M[i][j] = min(M[i-1][j-1] + Cxi,yj, M[i-1][j] + CDEL, M[i][j-1] + CINS)
  return M[m][n]
```

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences
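The bottom-up table can be sketched in Python; unit costs are an assumption for illustration, since the slides allow arbitrary 𝐶INS, 𝐶DEL and pairwise 𝐶𝑝,𝑞:

```python
def seq_align(X, Y, c_ins=1, c_del=1, c_sub=1):
    """M[i][j] = min cost to align X[:i] with Y[:j] (edit distance for unit costs)."""
    m, n = len(X), len(Y)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):
        M[0][j] = j * c_ins              # |X| = 0: insert all of Y
    for i in range(1, m + 1):
        M[i][0] = i * c_del              # |Y| = 0: delete all of X
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if X[i-1] == Y[j-1] else c_sub
            M[i][j] = min(M[i-1][j-1] + match,   # match / substitution
                          M[i-1][j] + c_del,     # delete x_i
                          M[i][j-1] + c_ins)     # insert y_j
    return M[m][n]

print(seq_align("kitten", "sitting"))  # → 3
```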

(61)

### • Bottom-up method: solve smaller subproblems first

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

| X\Y | – | a | e | n | i | q | a | d | i | k | j | a | z |
|-----|---|---|---|----|----|----|----|----|----|----|----|----|----|
| – | 0 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 | 48 |
| b | 4 | 7 | 11 | 15 | 19 | 23 | 27 | 31 | 35 | 39 | 43 | 47 | 51 |
| a | 8 | 4 | 8 | 12 | 16 | 20 | 23 | 27 | 31 | 35 | 39 | 43 | 47 |
| n | 12 | 8 | 12 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 |
| a | 16 | 12 | 15 | 12 | 15 | 19 | 16 | 20 | 24 | 28 | 32 | 36 | 40 |
| n | 20 | 16 | 19 | 15 | 19 | 22 | 20 | 23 | 27 | 31 | 35 | 39 | 43 |

(62)

Sequence Alignment Problem
Input: two sequences
Output: the minimal cost 𝑀𝑚,𝑛 for aligning two sequences

### • Construct the solution by backtracking through the table

```
Find-Solution(M, m, n)
  if m = 0 or n = 0
    return {}
  v = min(M[m-1][n-1] + Cxm,yn, M[m-1][n] + CDEL, M[m][n-1] + CINS)
  if v = M[m-1][n] + CDEL  // ↑: deletion
    return Find-Solution(M, m-1, n)
  if v = M[m][n-1] + CINS  // ←: insertion
    return Find-Solution(M, m, n-1)
  return {(m, n)} ∪ Find-Solution(M, m-1, n-1)  // ↖: match/substitution
```

(63)

### Step 4: Construct an OPT Solution by Backtracking

```
Seq-Align(X, Y, CDEL, CINS, Cp,q)
  for j = 0 to n
    M[0][j] = j * CINS  // |X|=0: cost = |Y| * penalty
  for i = 1 to m
    M[i][0] = i * CDEL  // |Y|=0: cost = |X| * penalty
  for i = 1 to m
    for j = 1 to n
      M[i][j] = min(M[i-1][j-1] + Cxi,yj, M[i-1][j] + CDEL, M[i][j-1] + CINS)
  return M[m][n]

Find-Solution(M, m, n)
  if m = 0 or n = 0
    return {}
  v = min(M[m-1][n-1] + Cxm,yn, M[m-1][n] + CDEL, M[m][n-1] + CINS)
  if v = M[m-1][n] + CDEL  // ↑: deletion
    return Find-Solution(M, m-1, n)
  if v = M[m][n-1] + CINS  // ←: insertion
    return Find-Solution(M, m, n-1)
  return {(m, n)} ∪ Find-Solution(M, m-1, n-1)  // ↖: match/substitution
```

(64)

### • If only keeping the most recent two rows: Space-Seq-Align(X, Y)

• Row 𝑖 of the table depends only on row 𝑖 − 1, so two rows suffice

The optimal value can be computed, but the solution cannot be reconstructed

[Illustration: full (m+1) × (n+1) table vs. only rows i − 1 and i]
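A two-row variant in the same spirit as Space-Seq-Align (unit costs assumed, as before):

```python
def space_seq_align(X, Y, c_ins=1, c_del=1, c_sub=1):
    """Same value as the full table, but keeps only the previous and current
    rows: O(n) space. The alignment itself cannot be reconstructed this way."""
    m, n = len(X), len(Y)
    prev = [j * c_ins for j in range(n + 1)]   # row 0
    for i in range(1, m + 1):
        curr = [i * c_del] + [0] * n           # column 0 of row i
        for j in range(1, n + 1):
            match = 0 if X[i-1] == Y[j-1] else c_sub
            curr[j] = min(prev[j-1] + match,   # match / substitution
                          prev[j] + c_del,     # deletion
                          curr[j-1] + c_ins)   # insertion
        prev = curr
    return prev[n]

# space_seq_align("kitten", "sitting") returns 3, same as the full-table version
```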

(65)

### • Problem: find the min-cost alignment → find the shortest path in the edit graph

Divide-and-Conquer + Dynamic Programming

• Model the table as a grid graph from START (cell (0,0)) to END (cell (m, n)):

• → edge: distance = 𝐶INS

• ↓ edge: distance = 𝐶DEL

• ↘ edge: distance = 𝐶𝑢,𝑣 for the corresponding character pair (u, v)

[Illustration: the DP table for two short example strings, with the min-cost path marked]

| X\Y | 0 | 1 | 2 | 3 |
|-----|----|----|----|----|
| 0 | 0 | 4 | 8 | 12 |
| 1 | 4 | 7 | 11 | 15 |
| 2 | 8 | 4 | 8 | 12 |
| 3 | 12 | 8 | 12 | 8 |
| 4 | 16 | 12 | 15 | 12 |
| 5 | 20 | 16 | 19 | 15 |

(66)

### • Forward formulation

• 𝐹(𝑖, 𝑗): distance of the shortest path from (0,0) to (𝑖, 𝑗); e.g., 𝐹(2,3) = distance of the shortest path from (0,0) to (2,3)

• 𝐹(𝑚, 𝑛) = 𝐵(0,0) = cost of the optimal alignment

[Grid illustration: i = 0 … m, j = 0 … n]

(67)

### • Backward formulation

• 𝐵(𝑖, 𝑗): distance of the shortest path from (𝑖, 𝑗) to (𝑚, 𝑛); e.g., 𝐵(2,3) = distance of the shortest path from (2,3) to (𝑚, 𝑛)

[Grid illustration: i = 0 … m, j = 0 … n]

(68)

### • Observation 1: the length of the shortest path from (0,0) to (𝑚, 𝑛) that passes through (𝑖, 𝑗) is 𝐹(𝑖, 𝑗) + 𝐵(𝑖, 𝑗)

𝐹(𝑖, 𝑗): length of the shortest path from (0,0) to (𝑖, 𝑗)
𝐵(𝑖, 𝑗): length of the shortest path from (𝑖, 𝑗) to (𝑚, 𝑛)

[Grid illustration: the forward path 𝐹(𝑖, 𝑗) and backward path 𝐵(𝑖, 𝑗) meeting at (𝑖, 𝑗)]

→ optimal substructure

(69)

### • Observation 2: for any 𝑣 in {0, … , 𝑛}, there exists a 𝑢 s.t. the shortest path between (0,0) and (𝑚, 𝑛) goes through (𝑢, 𝑣)

→ the shortest path must cross every vertical cut

𝐹(𝑖, 𝑗): length of the shortest path from (0,0) to (𝑖, 𝑗)
𝐵(𝑖, 𝑗): length of the shortest path from (𝑖, 𝑗) to (𝑚, 𝑛)

[Grid illustration: a vertical cut at column 𝑣]

(70)

### • Observation 1+2: fix a column 𝑣; the shortest path passes through some (𝑢, 𝑣), and its length is min_𝑢 (𝐹(𝑢, 𝑣) + 𝐵(𝑢, 𝑣))

𝐹(𝑖, 𝑗): length of the shortest path from (0,0) to (𝑖, 𝑗)
𝐵(𝑖, 𝑗): length of the shortest path from (𝑖, 𝑗) to (𝑚, 𝑛)

[Grid illustrations: the forward and backward distances meeting on the cut column 𝑣]

(71)

### • Goal: find the optimal solution

• How to find the value of 𝑢? Idea: utilize the sequence alignment algorithm.

1. Call Space-Seq-Align(X, Y[1:v]) to find 𝐹(0, 𝑣), 𝐹(1, 𝑣), … , 𝐹(𝑚, 𝑣)

2. Call Back-Space-Seq-Align(X, Y[v+1:n]) to find 𝐵(0, 𝑣), 𝐵(1, 𝑣), … , 𝐵(𝑚, 𝑣)

3. Let 𝑢 be the index minimizing 𝐹(𝑢, 𝑣) + 𝐵(𝑢, 𝑣)

(72)

### • Goal: find the optimal solution – DC-Align(X, Y)

Base case (𝑛 = 1): return Seq-Align(X, Y)

Recursive case (𝑛 > 1):

1. Divide: split Y into 2 subsequences at 𝑣 = 𝑛/2; find 𝑢 to minimize 𝐹(𝑢, 𝑣) + 𝐵(𝑢, 𝑣)

2. Conquer:
   prefix = DC-Align(X[1:u], Y[1:v])
   suffix = DC-Align(X[u+1:m], Y[v+1:n])

3. Combine: return prefix + suffix

𝑇(𝑚, 𝑛) = time for running DC-Align(X, Y) with |𝑋| = 𝑚, |𝑌| = 𝑛 → 𝑇(𝑚, 𝑛) = 𝑂(𝑚𝑛)

Space Complexity: 𝑂(𝑚 + 𝑛)
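A compact Python sketch of DC-Align in the style of Hirschberg's algorithm, for unit costs (an assumption; the slides allow general costs). The helper `nw_scores` computes one row of forward (or, on reversed strings, backward) distances in linear space:

```python
def nw_scores(A, B):
    """Last row of the unit-cost edit-distance table: s[j] = cost(A, B[:j])."""
    prev = list(range(len(B) + 1))
    for i in range(1, len(A) + 1):
        curr = [i] + [0] * len(B)
        for j in range(1, len(B) + 1):
            match = 0 if A[i-1] == B[j-1] else 1
            curr[j] = min(prev[j-1] + match, prev[j] + 1, curr[j-1] + 1)
        prev = curr
    return prev

def dc_align(X, Y):
    """Returns a gapped (X', Y') pair realizing an optimal unit-cost alignment,
    using only linear space for the score computations."""
    m, n = len(X), len(Y)
    if n == 0:
        return X, '-' * m
    if m == 0:
        return '-' * n, Y
    if n == 1:                           # base case: align a single character
        i = X.find(Y) if Y in X else 0   # put the single y under a match if any
        return X, '-' * i + Y + '-' * (m - i - 1)
    v = n // 2                           # 1. Divide: cut Y in the middle
    f = nw_scores(Y[:v], X)              # f[u] = cost(X[:u], Y[:v])
    b = nw_scores(Y[v:][::-1], X[::-1])  # b[m-u] = cost(X[u:], Y[v:])
    u = min(range(m + 1), key=lambda u: f[u] + b[m - u])
    xl, yl = dc_align(X[:u], Y[:v])      # 2. Conquer both halves
    xr, yr = dc_align(X[u:], Y[v:])
    return xl + xr, yl + yr              # 3. Combine

xa, ya = dc_align("kitten", "sitting")
# xa, ya are equal-length gapped strings; stripping '-' recovers the originals
```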

(73)

### • Proof sketch: 𝑇(𝑚, 𝑛) = 𝑂(𝑚𝑛)

• The recurrence is 𝑇(𝑚, 𝑛) ≤ 𝑐𝑚𝑛 + 𝑇(𝑢, 𝑛/2) + 𝑇(𝑚 − 𝑢, 𝑛/2)

• Claim: there exist positive constants 𝑎, 𝑏 s.t. 𝑇(𝑚, 𝑛) ≤ 𝑎𝑚𝑛 for all sufficiently large inputs

• Use induction: by the inductive hypothesis, 𝑇(𝑢, 𝑛/2) + 𝑇(𝑚 − 𝑢, 𝑛/2) ≤ 𝑎𝑚𝑛/2, and the divide step's 𝑂(𝑚𝑛) cost fits in the remaining 𝑎𝑚𝑛/2 when 𝑎 is large enough

• Practice: check the initial condition

(74)

### • Example: phonetic input (ㄨ ㄅ ㄒ ㄎ ㄕ)

• Given phonetic symbols 𝜎1, 𝜎2, … , 𝜎𝑛, the candidate characters form a lattice between START and END

• Find the path from START to END with the highest probability

(75)

### Viterbi Algorithm

• A lattice of states between START and END produces the observations 𝜎1 𝜎2 … 𝜎𝑛 (the state at position 𝑗 produces 𝜎𝑗)

• V: vocabulary size

• Dynamic programming: for each position 𝑗 and each candidate state, keep the highest-probability path from START that produces 𝜎1, … , 𝜎𝑗
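A generic Viterbi sketch in Python; the two-state model and all probabilities below are hypothetical illustration values, not from the slides:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state path for an observation sequence.
    V[t][s] = (best probability of any path ending in state s at time t,
               the predecessor state on that path)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            best_prev = max(states, key=lambda r: V[t-1][r][0] * trans_p[r][s])
            V[t][s] = (V[t-1][best_prev][0] * trans_p[best_prev][s]
                       * emit_p[s][obs[t]], best_prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1], V[-1][last][0]

# A tiny hypothetical two-state model (all numbers made up for illustration)
states = ['A', 'B']
start_p = {'A': 0.8, 'B': 0.2}
trans_p = {'A': {'A': 0.9, 'B': 0.1}, 'B': {'A': 0.1, 'B': 0.9}}
emit_p  = {'A': {'x': 0.8, 'y': 0.2}, 'B': {'x': 0.1, 'y': 0.9}}
path, prob = viterbi(['x', 'y', 'y'], states, start_p, trans_p, emit_p)
print(path)  # → ['A', 'B', 'B']
```

Like the other DP algorithms in this lecture, it fills a table over positions and keeps back-pointers for reconstructing the optimal path.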

(76)

(77)

## Questions?

Important announcements will be sent to your
@ntu.edu.tw mailbox & posted to the course website
