# Dynamic Programming

(動的計画法)

## Data Structures and Algorithms

### 12th lecture, December 23, 2019

http://www.sw.it.aoyama.ac.jp/2019/DA/lecture12.html

### Martin J. Dürst

© 2009-19 Martin J. Dürst, Aoyama Gakuin University (青山学院大学)

# Today's Schedule

• Leftovers and summary of last lecture
• Algorithm design strategies
• Overview of dynamic programming
• Example application: Order of evaluation of chain matrix multiplication
• Dynamic programming in Ruby

# Leftovers and Summary of Last Lecture

(string matching context, string matching and character encoding)

# Algorithm Design Strategies

• Simple/simplistic algorithms
• Divide and conquer
• Dynamic programming

# Overview of Dynamic Programming

• Investigate and clarify the structure of the (optimal) solution
• Recursive definition of (optimal) solution
• Bottom-up calculation of (optimal) solution
• Construction of (optimal) solution from calculation results
• Proposed by Richard Bellman in the 1950s
• The name now sounds arbitrary ('programming' originally referred to planning and scheduling, not to writing computer programs), but it is firmly established

# Simple Example of Dynamic Programming

• Definition of the Fibonacci function f(n):
• n=0, n=1: f(n) = n
• n ≧ 2: f(n) = f(n-1) + f(n-2)

```ruby
def fib(n)
  n < 2 ? n : fib(n-1) + fib(n-2)
end
```
• As n grows, execution becomes extremely slow
• Reason for the slow execution: the same calculations are repeated many times
(when evaluating f(n), f(1) is evaluated f(n) times)
• Evaluation time can be shortened by
• Changing the order of evaluations (see the sketch below), or
• Remembering intermediate results
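
For instance, a bottom-up version that changes the order of evaluations might look as follows; a minimal sketch (`fib_iter` is not part of the lecture's code):

```ruby
# Bottom-up Fibonacci: evaluate from small to large arguments,
# keeping only the two most recent values; O(n) instead of exponential.
def fib_iter(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a           # a now holds f(n)
end
```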

# Matrix Multiplication

• Multiplying a matrix 0M1 (r0 by r1) and a matrix 1M2 (r1 by r2) results in an r0 by r2 matrix 0M2 (0M1 · 1M2 = 0M2, a calculation also written 0M1M2)
• This multiplication needs r0·r1·r2 scalar multiplications and r0·r1·r2 scalar additions,
so its time complexity is O(r0·r1·r2)
• Actual example: r0=100, r1=2, r2=200
⇒ Number of multiplications: 100×2×200 = 40'000
• Because the number of scalar multiplications and additions is the same, we will only consider multiplications

# Matrix Multiplication Program Skeleton

```c
/* Computes 0M2 = 0M1 · 1M2 (0M1 is r0 by r1, 1M2 is r1 by r2).
   C identifiers cannot start with a digit, so the matrices are
   named M01, M12, and M02 here. */
for (i = 0; i < r0; i++)
    for (j = 0; j < r2; j++) {
        sum = 0;                          /* one entry of 0M2 */
        for (k = 0; k < r1; k++)
            sum += M01[i][k] * M12[k][j];
        M02[i][j] = sum;
    }
```

# Chain Multiplication of Reals

• A series of multiplications (e.g. 163·25·4) is called chain multiplication
• Multiplication of reals is associative (i.e. (163·25)·4 = 163·(25·4))
• Not all multiplication orders have the same speed.
For humans, 163·(25·4) = 163·100 = 16300 is faster than (163·25)·4 = 4075·4
• Conclusion: Choosing a good multiplication order can speed up calculation

# Chain Multiplication of Matrices

• Matrices can also be multiplied in chains (e.g. 0M1 · 1M2 · 2M3)
• Multiplication of matrices is associative (but not commutative!)
• This means that there are multiple ways to calculate the overall product:
(0M1·1M2) · 2M3 (also written 0M2M3) or
0M1 · (1M2·2M3) (also written 0M1M3)
• For r0=100, r1=2, r2=200, r3=3, the number of scalar multiplications is:

(0M1·1M2) · 2M3: 100×2×200 + 100×200×3 = 100'000
0M1 · (1M2·2M3): 2×200×3 + 100×2×3 = 1'800

• Conclusion: Choosing a good order for matrix multiplications can save a lot of work (checked in the sketch below)
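
The two counts above can be checked with a few lines of Ruby (an illustration only, not part of the lecture's code):

```ruby
# Scalar multiplications for the two evaluation orders,
# with r0 = 100, r1 = 2, r2 = 200, r3 = 3.
r = [100, 2, 200, 3]
left  = r[0]*r[1]*r[2] + r[0]*r[2]*r[3]   # (0M1 · 1M2) · 2M3
right = r[1]*r[2]*r[3] + r[0]*r[1]*r[3]   # 0M1 · (1M2 · 2M3)
p left    # => 100000
p right   # => 1800
```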

# Number of Matrix Multiplication Orders

| Multiplications | Orders |
|----------------:|-------:|
| 0 | 1 |
| 1 | 1 |
| 2 | 2 |
| 3 | 5 |
| 4 | 14 |
| 5 | 42 |
| 6 | 132 |
| 7 | 429 |
| 8 | 1430 |
| 9 | 4862 |
• The number of orders for multiplying n matrices is small for small n, but grows exponentially
• The number of orders is equal to the numbers in the middle of Pascal's triangle (1, 2, 6, 20, 70, ...)
divided by increasing natural numbers (1, 2, 3, 4, 5, ...)
• These numbers are called Catalan numbers (reproduced in the sketch below):
C_n = (2n)! / (n! (n+1)!) = Ω(4^n / n^(3/2))
• Catalan numbers have many applications:
• Combinations of n pairs of properly nested parentheses (n=3: ()()(), (())(), ()(()), ((())), (()()))
• Number of shapes of binary trees of size n
• Number of triangulations of a (convex) polygon with n+2 vertices
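
The table above can be reproduced from the closed formula; a small Ruby check (illustration only; `factorial` and `catalan` are hypothetical helper names):

```ruby
# Catalan numbers: C_n = (2n)! / (n! * (n+1)!)
def factorial(n)
  (1..n).reduce(1, :*)
end

def catalan(n)
  factorial(2 * n) / (factorial(n) * factorial(n + 1))
end

catalans = (0..9).map { |n| catalan(n) }
p catalans  # => [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```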

# Optimal Order of Multiplications

• Checking all orders is very slow (Ω(n · 4^n / n^(3/2)) = Ω(4^n / n^(1/2)))
• Minimal evaluation cost (number of scalar multiplications):
• mincost(a, c): minimal cost for evaluating aMc
• if a+1 ≧ c, mincost(a, c) = 0
• if a+1 < c, mincost(a, c) = min over b = a+1, ..., c−1 of cost(a, b, c)
• split(a, c): optimal splitting point
• split(a, c) = arg min_b cost(a, b, c)
• cost(a, b, c): cost for calculating aMbMc,
i.e. the cost of splitting the evaluation of aMc at b
• cost(a, b, c) = mincost(a, b) + mincost(b, c) + r_a·r_b·r_c
• Simple implementation in Ruby: `MatrixSlow` in Cmatrix.rb (sketched below)
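
The recursive definition translates almost literally into Ruby. A minimal sketch in the spirit of `MatrixSlow` (the actual code in Cmatrix.rb may differ); `r` is the array of dimensions r0, ..., rn:

```ruby
# Naive recursive mincost: exponential time, because the same
# subchains are re-evaluated over and over again.
def mincost(r, a, c)
  return 0 if a + 1 >= c
  (a + 1..c - 1).map { |b|
    mincost(r, a, b) + mincost(r, b, c) + r[a] * r[b] * r[c]
  }.min
end

p mincost([100, 2, 200, 3], 0, 3)  # => 1800 (cf. previous slide)
```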

# Inverting Optimization Order and Storing Intermediate Results

• The solution can be evaluated from split(0, n) top-down using recursion
• The problem with top-down evaluation is that intermediate results (mincost(x, y)) are calculated repeatedly
• Bottom-up calculation:
• Calculate the minimal costs and splitting points for chains of length k, starting with k=2 and increasing to k=n
• Store intermediate results for reuse
• Implementation in Ruby: `MatrixPlan` in Cmatrix.rb (sketched below)
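
A bottom-up sketch along these lines (the actual `MatrixPlan` may differ in detail); `r` again holds the dimensions r0, ..., rn:

```ruby
# Bottom-up matrix chain ordering: O(n^3) time, O(n^2) space.
# min[a][c]   : minimal cost of evaluating aMc
# split[a][c] : optimal splitting point b for aMc
def matrix_chain_order(r)
  n = r.length - 1                       # number of matrices
  min = Array.new(n + 1) { Array.new(n + 1, 0) }
  split = Array.new(n + 1) { Array.new(n + 1) }
  (2..n).each do |k|                     # chain length, bottom-up
    (0..n - k).each do |a|
      c = a + k
      min[a][c], split[a][c] =
        (a + 1..c - 1).map { |b|
          [min[a][b] + min[b][c] + r[a] * r[b] * r[c], b]
        }.min                            # pairs compare by cost first
    end
  end
  [min, split]
end

min, split = matrix_chain_order([4, 2, 6, 10, 5, 3])
p min[0][5]    # => 274 (cf. the example calculation below)
p split[0][5]  # => 1   (best first split: 0M1 · 1M5)
```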

# Example Calculation

Dimensions: r0 = 4, r1 = 2, r2 = 6, r3 = 10, r4 = 5, r5 = 3

All split costs cost(a, b, c), grouped by chain length, with the minimum for each chain in bold (chains of length 1 all have cost 0: 0M1, 1M2, 2M3, 3M4, 4M5):

| Length | Split costs |
|--------|-------------|
| 2 | 0M1M2: **48**; 1M2M3: **120**; 2M3M4: **300**; 3M4M5: **150** |
| 3 | 0M1M3: **200**, 0M2M3: 288; 1M2M4: 360, 1M3M4: **220**; 2M3M5: **330**, 2M4M5: 390 |
| 4 | 0M1M4: **260**, 0M2M4: 468, 0M3M4: 400; 1M2M5: 366, 1M3M5: 330, 1M4M5: **250** |
| 5 | 0M1M5: **274**, 0M2M5: 450, 0M3M5: 470, 0M4M5: 320 |

Following the minima downwards from 0M1M5 gives the optimal order 0M1 · (((1M2 · 2M3) · 3M4) · 4M5), with 274 scalar multiplications in total.

# Complexity of Optimizing Evaluation Order

• The calculation of mincost(a, c) is O(c−a)
• Evaluating all mincost(a, a+k) is O((n−k) · k)
• Total time complexity: ∑_{k=1}^{n} O((n−k) · k) = O(n^3)

The time complexity of dynamic programming depends on the structure of the problem

O(n^3), O(n^2), O(n), O(n·m), ... are frequent time complexities

# Overview of Dynamic Programming

• Investigate and clarify the structure of the (optimal) solution
• Recursive definition of (optimal) solution (e.g. `MatrixSlow`)
• Bottom-up calculation of (optimal) solution (e.g. `MatrixPlan`)
• Construction of (optimal) solution from calculation results (see the sketch below)
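
The fourth step, constructing the solution from the stored splitting points, can be sketched as follows (building on the hypothetical `matrix_chain_order` sketch above; illustration only):

```ruby
# Reconstruct the optimal parenthesization from the split table.
def order_string(split, a, c)
  return "#{a}M#{c}" if a + 1 >= c       # a single matrix, no split
  b = split[a][c]
  "(#{order_string(split, a, b)} · #{order_string(split, b, c)})"
end

min, split = matrix_chain_order([4, 2, 6, 10, 5, 3])
puts order_string(split, 0, 5)
# => (0M1 · (((1M2 · 2M3) · 3M4) · 4M5))
```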

# Conditions for Using Dynamic Programming

• Optimal substructure:
The global (optimal) solution can be constructed from the (optimal) solutions of subproblems
(common with divide and conquer)
• Overlapping subproblems
(different from divide and conquer)

# Memoization

• The key in dynamic programming is to reuse intermediate results
• Many functions can be changed so that they remember results
• This is called memoization (sketched below):
• Add a data structure that stores results
(a dictionary with the arguments as key and the result as value)
• Before each calculation, check the dictionary:
• If the result is stored, return it immediately
• If the result is not stored, calculate it, store it, and return it
• Only possible for pure functions (no side effects)
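
These steps translate directly into code. A minimal hand-written sketch for the Fibonacci function (illustration only, not taken from the lecture's code):

```ruby
# Memoized Fibonacci: the Hash `memo` maps each argument n
# to the already-computed result f(n).
def fib_memo(n, memo = {})
  memo.fetch(n) do                # result stored? return it immediately
    memo[n] =                     # otherwise calculate, store, and return
      n < 2 ? n : fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
  end
end

p fib_memo(100)  # => 354224848179261915075, instantaneous
```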

# Memoization in Ruby

• Use metaprogramming to modify a function so that:
• On first calculation, result is stored (e.g. in a `Hash` using function arguments as the key)
• Before each calculation, storage is checked, and stored result used if available
• Metaprogramming changes the program while it runs
• Simple application example: Cfibonacci.rb (a sketch of the idea follows below)
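
One possible shape of such a memoizing wrapper, as a sketch only (Cfibonacci.rb may be organized quite differently):

```ruby
# Memoize: redefines an existing instance method so that results
# are cached in a Hash, keyed by the argument list.
module Memoize
  def memoize(name)
    original = instance_method(name)   # keep the unmemoized version
    cache = {}
    define_method(name) do |*args|     # replace the method at runtime
      cache.fetch(args) { cache[args] = original.bind(self).call(*args) }
    end
  end
end

class Fib
  extend Memoize
  def fib(n)
    n < 2 ? n : fib(n - 1) + fib(n - 2)
  end
  memoize :fib                         # recursive calls now hit the cache
end

p Fib.new.fib(40)  # => 102334155, with only 41 distinct calls
```

Because method dispatch in Ruby is dynamic, the recursive calls inside `fib` automatically go through the new, memoized definition.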

# Summary

• Dynamic programming is an algorithm design strategy
• Dynamic programming is suited for problems where the overall (optimal) solution can be obtained from solutions for subproblems, but the subproblems overlap
• The time complexity of dynamic programming depends on the structure of the actual problem

# Homework

• Review this lecture (including the 'Example Calculation' and the programs)
• Find three problems that can be solved using dynamic programming, and investigate the algorithms used

# Glossary

dynamic programming
動的計画法
algorithm design strategies
アルゴリズムの設計方針
optimal solution
最適解
Catalan number
カタラン数
matrix chain multiplication

triangulations
(多角形の) 三角分割
(convex) polygon
(凸) 多角形
intermediate result
中間結果
splitting point
分割点
arg min (argument of the minimum)

top-down
トップダウン
bottom-up
ボトムアップ
optimal substructure
最適部分構造
overlapping subproblems

memoization (verb: memoize)
メモ化 (メモ化する)
metaprogramming
メタプログラミング