Dynamic Programming (動的計画法)

http://www.sw.it.aoyama.ac.jp/2015/DA/lecture12.html

© 2009-15 Martin J. Dürst, Aoyama Gakuin University (青山学院大学)

- Leftovers and summary of last lecture
- Algorithm design strategies
- Overview of dynamic programming
- Example application: Order of evaluation of chain matrix multiplication
- Dynamic programming in Ruby

(Boyer-Moore algorithm, string matching and character encoding)

- Simple/simplistic algorithms
- Divide and conquer

**Dynamic programming**

- Investigate and clarify the structure of the (optimal) solution
- Recursive definition of (optimal) solution
- Bottom-up calculation of (optimal) solution
- Construction of (optimal) solution from calculation results

Proposed by Richard Bellman in the 1950s

- Definition of the Fibonacci function f(n):
  - 0 ≦ n ≦ 1: f(n) = n
  - n ≧ 2: f(n) = f(n-1) + f(n-2)
- Implementation for this recursive definition is easy
- If n grows, execution gets extremely slow
- Reason for slow execution: the same calculation is repeated many times
  (when evaluating f(n), f(1) is evaluated f(n) times)
- Evaluation time can be shortened by changing the order of evaluations and remembering intermediate results
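The contrast between the naive recursion and a bottom-up evaluation order can be sketched in Ruby (an illustrative sketch with made-up names; the course's Cfibonacci.rb may look different):

```ruby
# Naive recursion: fib_naive(n-1) and fib_naive(n-2) recompute the
# same subresults over and over, so running time grows exponentially.
def fib_naive(n)
  n <= 1 ? n : fib_naive(n - 1) + fib_naive(n - 2)
end

# Bottom-up: compute f(0), f(1), ..., f(n) in order, each exactly once,
# remembering only the two most recent intermediate results.
def fib_bottom_up(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end

fib_bottom_up(40)  # => 102334155, instantly; fib_naive(40) takes a long time
```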

- The result of the multiplication of an r_0×r_1 matrix _0M_1 and an r_1×r_2 matrix _1M_2 (_0M_1 · _1M_2 ⇒ _0M_2) is an r_0×r_2 matrix _0M_2
- This multiplication needs r_0·r_1·r_2 scalar multiplications and r_0·(r_1-1)·r_2 scalar additions, so its time complexity is O(r_0·r_1·r_2)
- Because the number of scalar multiplications and additions is almost the same, we will only consider multiplications
- Actual example: r_0 = 100, r_1 = 2, r_2 = 200
  ⇒ number of multiplications: 100×2×200 = 40,000

- Multiplication of three (or more) matrices: _0M_1 · _1M_2 · _2M_3
- Multiplication of matrices is associative
- This means that there are multiple ways to calculate the overall product:
  (_0M_1 · _1M_2) · _2M_3 or _0M_1 · (_1M_2 · _2M_3)
  (also written: _0M_2M_3 or _0M_1M_3)
- For r_0 = 100, r_1 = 2, r_2 = 200, r_3 = 3, the number of scalar multiplications for each order of evaluation is:
  - (_0M_1 · _1M_2) · _2M_3: 100×2×200 + 100×200×3 = 100,000
  - _0M_1 · (_1M_2 · _2M_3): 2×200×3 + 100×2×3 = 1,800
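The two totals can be checked with a few lines of Ruby (an illustrative script; `mult_cost` is a made-up helper name):

```ruby
# Multiplying an r0 x r1 matrix by an r1 x r2 matrix takes
# r0 * r1 * r2 scalar multiplications.
def mult_cost(r0, r1, r2)
  r0 * r1 * r2
end

r = [100, 2, 200, 3]
# (M1 * M2) * M3: first 100x2 times 2x200, then 100x200 times 200x3
left_first  = mult_cost(r[0], r[1], r[2]) + mult_cost(r[0], r[2], r[3])
# M1 * (M2 * M3): first 2x200 times 200x3, then 100x2 times 2x3
right_first = mult_cost(r[1], r[2], r[3]) + mult_cost(r[0], r[1], r[3])
left_first   # => 100000
right_first  # => 1800
```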

| Multiplications | Orders |
|---|---|
| 0 | 1 |
| 1 | 1 |
| 2 | 2 |
| 3 | 5 |
| 4 | 14 |
| 5 | 42 |
| 6 | 132 |
| 7 | 429 |
| 8 | 1430 |
| 9 | 4862 |

- The number of orders for multiplying n matrices looks small for small n, but grows exponentially
- The number of orders is equal to the numbers in the middle of Pascal's triangle (1, 2, 6, 20, 70, ...) divided by increasing natural numbers (1, 2, 3, 4, 5, ...)
- These numbers are called Catalan numbers:
  C_n = (2n)! / (n!·(n+1)!) = Ω(4^n / n^{3/2})
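The formula reproduces the numbers in the table; a small illustrative check in Ruby:

```ruby
def factorial(n)
  (1..n).reduce(1, :*)   # the initial value 1 also covers 0! = 1
end

# Catalan number C_n = (2n)! / (n! * (n+1)!)
def catalan(n)
  factorial(2 * n) / (factorial(n) * factorial(n + 1))
end

(0..9).map { |n| catalan(n) }
# => [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```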

- Catalan numbers have many applications:
- Combinations of paired parentheses
- Number of shapes of binary trees
- Number of triangulations of a (convex) polygon

- Impossible to decide by checking all evaluation orders
- Minimal evaluation cost (number of scalar multiplications):
  - mincost(a, c) is the minimal cost for evaluating _aM_c
    - if a+1 ≧ c, mincost(a, c) = 0
    - if a+1 < c, mincost(a, c) = min_{b=a+1..c-1} cost(a, b, c)
  - split(a, c) is the optimal splitting point
    - split(a, c) = arg min_b cost(a, b, c)
  - cost(a, b, c) is the cost for calculating _aM_bM_c
    - i.e. the cost for splitting the evaluation of _aM_c at b
    - cost(a, b, c) = mincost(a, b) + mincost(b, c) + r_a·r_b·r_c
- Simple implementation in Ruby: `MatrixSlow` in Cmatrix.rb
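The recurrence translates directly into recursive Ruby. The following is a sketch based on the definitions above, not necessarily the actual `MatrixSlow` code:

```ruby
# Top-down evaluation of the recurrence.
# r: array of dimensions; matrix i has size r[i-1] x r[i].
# r[a] * r[b] * r[c] is the cost of the final multiplication
# when splitting _aM_c at b.
def mincost(r, a, c)
  return 0 if a + 1 >= c
  (a + 1..c - 1).map do |b|
    mincost(r, a, b) + mincost(r, b, c) + r[a] * r[b] * r[c]
  end.min
end

mincost([100, 2, 200, 3], 0, 3)  # => 1800, matching the example above
```

Because values such as mincost(a, b) are recomputed many times, this version takes exponential time, which is what motivates the bottom-up calculation.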

- The solution can be evaluated from split(0, n) top-down using recursion
- The problem with top-down evaluation is that intermediate results (mincost(x, y)) are calculated repeatedly
- Bottom-up calculation:
  - Calculate the minimal costs and splitting points for chains of length k, starting with k = 2 and increasing to k = n
  - Store intermediate results for reuse
- Implementation in Ruby: `MatrixPlan` in Cmatrix.rb

Example cost table (each cell lists cost(a, b, c) for every candidate splitting point b):

- Chains of length 5:
  - _0M_5: _0M_1M_5: 274, _0M_2M_5: 450, _0M_3M_5: 470, _0M_4M_5: 320 (minimum: 274)
- Chains of length 4:
  - _0M_4: _0M_1M_4: 260, _0M_2M_4: 468, _0M_3M_4: 400 (minimum: 260)
  - _1M_5: _1M_2M_5: 366, _1M_3M_5: 330, _1M_4M_5: 250 (minimum: 250)
- Chains of length 3:
  - _0M_3: _0M_1M_3: 200, _0M_2M_3: 288 (minimum: 200)
  - _1M_4: _1M_2M_4: 360, _1M_3M_4: 220 (minimum: 220)
  - _2M_5: _2M_3M_5: 330, _2M_4M_5: 390 (minimum: 330)
- Chains of length 2: _0M_1M_2: 48, _1M_2M_3: 120, _2M_3M_4: 300, _3M_4M_5: 150
- Chains of length 1: _0M_1: 0, _1M_2: 0, _2M_3: 0, _3M_4: 0, _4M_5: 0
- Matrix dimensions: r_0 = 4, r_1 = 2, r_2 = 6, r_3 = 10, r_4 = 5, r_5 = 3
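A bottom-up implementation in the spirit of `MatrixPlan` might look like this (a sketch under the definitions above; the actual Cmatrix.rb code may differ in detail):

```ruby
# Bottom-up matrix chain multiplication planning.
# r: array of dimensions; matrix i has size r[i-1] x r[i].
# Returns the tables mincost[a][c] and split[a][c].
def matrix_chain_order(r)
  n = r.length - 1                      # number of matrices
  mincost = Array.new(n + 1) { Array.new(n + 1, 0) }
  split   = Array.new(n + 1) { Array.new(n + 1, nil) }
  (2..n).each do |k|                    # chain length, from short to long
    (0..n - k).each do |a|
      c = a + k                         # chain _aM_c
      mincost[a][c] = Float::INFINITY
      (a + 1..c - 1).each do |b|        # try every splitting point
        cost = mincost[a][b] + mincost[b][c] + r[a] * r[b] * r[c]
        if cost < mincost[a][c]
          mincost[a][c] = cost          # store intermediate result for reuse
          split[a][c]   = b
        end
      end
    end
  end
  [mincost, split]
end

mincost, split = matrix_chain_order([4, 2, 6, 10, 5, 3])
mincost[0][5]  # => 274, the minimum in the top cell of the table
split[0][5]    # => 1, i.e. evaluate as _0M_1 * _1M_5
```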

- The calculation of mincost(a, c) is O(c-a)
- Evaluating all mincost(a, a+k) for a given chain length k is O((n-k)·k)
- Total time complexity: ∑_{k=1..n} O((n-k)·k) = O(n^3)

The time complexity of dynamic programming depends on the structure of the problem

O(n^3), O(n^2), O(n), O(nm) and so on are frequent time complexities

- Investigate and clarify the structure of the (optimal) solution
- Recursive definition of (optimal) solution
- Bottom-up calculation of (optimal) solution
- Construction of (optimal) solution from calculation results

- Optimal substructure: the global (optimal) solution can be constructed from the (optimal) solutions of subproblems
- Overlapping subproblems (this is where dynamic programming differs from divide and conquer)
- Memoization

- To avoid repeatedly calling the same function with the same arguments, spending time to calculate the same results again, we modify the function so that:
  - The result is stored (e.g. in a `Hash`) using the function arguments as the key
  - Before the actual calculation, the storage is checked, and a previously stored result is returned if found
- This technique is called memoization
- In Ruby, this can easily be implemented with metaprogramming
  (metaprogramming: changing the program while it runs)
- Simple application example: Cfibonacci.rb
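One way such a metaprogramming-based memoization helper might look (a sketch using `define_method`; the names `Memoize` and `memoize` are made up here, and the approach in Cfibonacci.rb may differ):

```ruby
# A module that can redefine a method so that its results are cached
# in a Hash keyed by the argument list.
module Memoize
  def memoize(name)
    original = instance_method(name)     # keep the unmemoized version
    cache = {}                           # Hash: arguments => result
    define_method(name) do |*args|
      # check the storage before doing the actual calculation
      if cache.key?(args)
        cache[args]
      else
        cache[args] = original.bind(self).call(*args)
      end
    end
  end
end

class Fibonacci
  extend Memoize

  def fib(n)
    n <= 1 ? n : fib(n - 1) + fib(n - 2)
  end
  memoize :fib                           # replace fib by a caching version
end

Fibonacci.new.fib(40)  # => 102334155, fast; unmemoized this is very slow
```

Because `define_method` replaces `fib` itself, the recursive calls inside `fib` also go through the cache, so each value is calculated only once.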

- Dynamic programming is an algorithm design strategy
- Dynamic programming is suited for problems where the overall (optimal) solution can be obtained from solutions for subproblems, but these subproblems overlap
- The time complexity of dynamic programming depends on the structure of the actual problem

- Review this lecture
- Find three problems that can be solved using dynamic programming, and investigate the algorithms used

- dynamic programming
- 動的計画法
- algorithm design strategies
- アルゴリズムの設計方針
- optimal solution
- 最適解
- Catalan number
- カタラン数
- matrix chain multiplication
- 連鎖行列積、行列の連鎖乗算
- triangulations
- (多角形の) 三角分割
- (convex) polygon
- (凸) 多角形
- intermediate result
- 途中結果
- splitting point
- 分割点
- arg min (argument of the minimum)
- 最小値点
- top-down
- 下向き、トップダウン
- bottom-up
- 上向き、ボトムアップ
- optimal substructure
- 部分構造の最適性
- overlapping subproblems
- 部分問題の重複
- memoization (verb: memoize)
- 履歴管理
- metaprogramming
- メタプログラミング