# Dynamic Programming

(動的計画法)

## Data Structures and Algorithms

### 12th lecture, December 3, 2015

http://www.sw.it.aoyama.ac.jp/2015/DA/lecture12.html

### Martin J. Dürst

© 2009-15 Martin J. Dürst 青山学院大学

# Today's Schedule

• Leftovers and summary of last lecture
• Algorithm design strategies
• Overview of dynamic programming
• Example application: Order of evaluation of chain matrix multiplication
• Dynamic programming in Ruby

# Leftovers and Summary of Last Lecture

(Boyer-Moore algorithm, string matching and character encoding)

# Algorithm Design Strategies

• Simple/simplistic algorithms
• Divide and conquer
• Dynamic programming

# Overview of Dynamic Programming

• Investigate and clarify the structure of the (optimal) solution
• Recursive definition of (optimal) solution
• Bottom-up calculation of (optimal) solution
• Construction of (optimal) solution from calculation results

Proposed by Richard Bellman in the 1950s

# Simple Example of Dynamic Programming

• Definition of the Fibonacci function f(n):
• 0 ≦ n ≦ 1: f(n) = n
• n ≧ 2: f(n) = f(n-1) + f(n-2)
• An implementation following this recursive definition is easy
• As n grows, execution becomes extremely slow
• Reason for slow execution: The same calculation is repeated many times
(when evaluating f(n), f(1) is evaluated f(n) times)
• Evaluation time can be shortened by changing the order of evaluation and remembering intermediate results
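This contrast can be sketched in Ruby (method names are illustrative; the lecture's own code is in Cfibonacci.rb):

```ruby
# Naive implementation, directly following the recursive definition;
# exponential time, because the same subproblems are recomputed many times
def fib_slow(n)
  n <= 1 ? n : fib_slow(n - 1) + fib_slow(n - 2)
end

# Bottom-up evaluation: compute f(0), f(1), ..., f(n) in order,
# remembering only the last two intermediate results; O(n) time
def fib_fast(n)
  return n if n <= 1
  a, b = 0, 1
  (n - 1).times { a, b = b, a + b }
  b
end

fib_fast(100)   # instant; fib_slow would take far too long for n = 100
```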

# Matrix Multiplication

• The result of multiplying an r0 × r1 matrix 0M1 by an r1 × r2 matrix 1M2 (the product 0M1 · 1M2, also written 0M1M2) is an r0 × r2 matrix 0M2
• This multiplication needs r0r1r2 scalar multiplications and
r0(r1-1)r2 scalar additions, so its time complexity is O(r0r1r2)
• Because the number of scalar multiplications and additions is almost the same, we will only consider multiplications
• Actual example: r0=100, r1=2, r2=200
⇒ Number of multiplications: 100×2×200 = 40,000

# Chain Multiplication of Three Matrices

• Multiplication of three (or more) matrices: 0M1· 1M2 · 2M3
• Multiplication of matrices is associative
• This means that there are multiple ways to calculate the overall product:
(0M1· 1M2) · 2M3 or
0M1· (1M2 · 2M3)
(also written: 0M2M3 or 0M1M3)
• For r0=100, r1=2, r2=200, r3=3, the number of scalar multiplications for each order of evaluation is:

(0M1· 1M2) · 2M3: 100×2×200 + 100×200×3 = 100,000
0M1· (1M2 · 2M3): 2×200×3 + 100×2×3 = 1,800
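The two costs above can be checked with a few lines of Ruby (a sketch; `mult_cost` is an illustrative helper, not from Cmatrix.rb):

```ruby
# Scalar multiplications needed to multiply
# an ra × rb matrix by an rb × rc matrix
def mult_cost(ra, rb, rc)
  ra * rb * rc
end

r = [100, 2, 200, 3]                                   # r0, r1, r2, r3

# (0M1 · 1M2) · 2M3: first 100×2 times 2×200, then 100×200 times 200×3
left_first  = mult_cost(r[0], r[1], r[2]) + mult_cost(r[0], r[2], r[3])
# 0M1 · (1M2 · 2M3): first 2×200 times 200×3, then 100×2 times 2×3
right_first = mult_cost(r[1], r[2], r[3]) + mult_cost(r[0], r[1], r[3])

left_first    # 100_000
right_first   # 1_800
```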

# Number of Orders of Matrix Multiplications

| Multiplications | Orders |
|----------------:|-------:|
| 0 | 1 |
| 1 | 1 |
| 2 | 2 |
| 3 | 5 |
| 4 | 14 |
| 5 | 42 |
| 6 | 132 |
| 7 | 429 |
| 8 | 1430 |
| 9 | 4862 |
• The number of orders for multiplying n matrices looks small for small n, but grows exponentially
• The number of orders is equal to the numbers in the middle of Pascal's triangle (1, 2, 6, 20, 70,...)
divided by increasing natural numbers (1, 2, 3, 4, 5,...)
• These numbers are called Catalan numbers:
Cn = (2n)! / (n! (n+1)!) = Ω(4^n / n^(3/2))
• Catalan numbers have many applications:
• Combinations of paired parentheses
• Number of shapes of binary trees
• Number of triangulations of a (convex) polygon
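The table above can be reproduced directly from the formula (a quick Ruby check; `catalan` is an illustrative name):

```ruby
# n-th Catalan number, via Cn = (2n)! / (n! (n+1)!)
def catalan(n)
  fact = ->(k) { (1..k).reduce(1, :*) }   # k! (returns 1 for k = 0)
  fact.(2 * n) / (fact.(n) * fact.(n + 1))
end

(0..9).map { |n| catalan(n) }   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```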

# Optimal Order of Multiplications

• Checking all evaluation orders is infeasible, because their number grows exponentially
• Minimal evaluation cost (number of scalar multiplications):
• mincost(a, c) is the minimal cost for evaluating aMc
• if a+1 ≧ c, mincost(a, c) = 0
• if a+1 < c, mincost(a, c) = min over b (a < b < c) of cost(a, b, c)
• split(a, c) is the optimal splitting point
• split(a, c) = arg minb cost(a, b, c)
• cost(a, b, c) is the cost for calculating aMbMc
• i.e. the cost for splitting the evaluation of aMc at b
• cost(a, b, c) = mincost(a, b) + mincost(b, c) + rarbrc
• Simple implementation in Ruby: `MatrixSlow` in Cmatrix.rb
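Written directly as a recursive Ruby method, in the spirit of `MatrixSlow` (a sketch, not the actual code from Cmatrix.rb):

```ruby
# Minimal number of scalar multiplications for evaluating aMc,
# computed top-down by trying every splitting point b;
# r is the array of dimensions r0, ..., rn
def mincost(r, a, c)
  return 0 if a + 1 >= c
  (a + 1..c - 1).map { |b|
    # cost(a, b, c) = mincost(a, b) + mincost(b, c) + ra*rb*rc
    mincost(r, a, b) + mincost(r, b, c) + r[a] * r[b] * r[c]
  }.min
end

mincost([100, 2, 200, 3], 0, 3)   # 1800, as on the previous slides
```

Because the recursive calls overlap, this direct implementation takes exponential time.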

# Inverting Optimization Order and Storing Intermediate Results

• The solution can be evaluated from split(0, n) top-down using recursion
• The problem with top-down evaluation is that intermediate results (mincost(x, y)) are calculated repeatedly
• Bottom-up calculation:
• Calculate the minimal costs and splitting points for chains of length k, starting with k=2 and increasing to k=n
• Store intermediate results for reuse
• Implementation in Ruby: `MatrixPlan` in Cmatrix.rb
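The bottom-up calculation can be sketched as follows (in the spirit of `MatrixPlan`; method and variable names are illustrative):

```ruby
# Bottom-up order optimization for chain matrix multiplication.
# r: array of dimensions r0, ..., rn
# Returns the tables of minimal costs and optimal splitting points.
def matrix_chain_order(r)
  n = r.length - 1
  mincost = Array.new(n + 1) { Array.new(n + 1, 0) }
  split   = Array.new(n + 1) { Array.new(n + 1) }
  (2..n).each do |k|                 # chain length, from short to long
    (0..n - k).each do |a|           # start of chain
      c = a + k
      # all subchains aMb and bMc are shorter, so already computed
      best_cost, best_b = (a + 1..c - 1).map { |b|
        [mincost[a][b] + mincost[b][c] + r[a] * r[b] * r[c], b]
      }.min_by(&:first)
      mincost[a][c] = best_cost
      split[a][c]   = best_b
    end
  end
  [mincost, split]
end
```

For the example calculation that follows, `matrix_chain_order([4, 2, 6, 10, 5, 3])` gives a minimal cost mincost[0][5] of 274 with splitting point split[0][5] = 1.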

# Example Calculation

Matrix dimensions: r0 = 4, r1 = 2, r2 = 6, r3 = 10, r4 = 5, r5 = 3

Costs cost(a, b, c) for each possible split aMbMc, grouped by chain length (longest chain on top); mincost(a, c) is the minimum over the splits of the chain aMc:

    0M1M5: 274   0M2M5: 450   0M3M5: 470   0M4M5: 320
    0M1M4: 260   0M2M4: 468   0M3M4: 400   1M2M5: 366   1M3M5: 330   1M4M5: 250
    0M1M3: 200   0M2M3: 288   1M2M4: 360   1M3M4: 220   2M3M5: 330   2M4M5: 390
    0M1M2:  48   1M2M3: 120   2M3M4: 300   3M4M5: 150
    0M1: 0   1M2: 0   2M3: 0   3M4: 0   4M5: 0

The minimal overall cost is mincost(0, 5) = 274 (0M1M5, i.e. splitting 0M5 at b = 1).

# Complexity of Optimizing Evaluation Order

• The calculation of mincost(a, c) is O(c−a)
• Evaluating all mincost(a, a+k) for a fixed chain length k is O((n−k)·k)
• Total time complexity: ∑k=1…n O((n−k)·k) = O(n³)
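As a quick check of the O(n³) bound, the sum has a closed form (using the standard formulas for ∑k and ∑k²):

```latex
\sum_{k=1}^{n} (n-k)k
  = n \sum_{k=1}^{n} k - \sum_{k=1}^{n} k^2
  = \frac{n^2(n+1)}{2} - \frac{n(n+1)(2n+1)}{6}
  = \frac{(n-1)n(n+1)}{6}
  = O(n^3)
```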

The time complexity of dynamic programming depends on the structure of the problem

O(n³), O(n²), O(n), O(n·m) and so on are frequent time complexities

# Overview of Dynamic Programming

• Investigate and clarify the structure of the (optimal) solution
• Recursive definition of (optimal) solution
• Bottom-up calculation of (optimal) solution
• Construction of (optimal) solution from calculation results

# Main Elements of Dynamic Programming

• Optimal substructure:
The global (optimal) solution can be constructed from the (optimal) solutions of subproblems
• Overlapping subproblems (this is where dynamic programming differs from divide and conquer)
• Memoization

# Memoization in Ruby

• To avoid repeatedly calling the same function with the same arguments and spending time recalculating the same results, we modify the function so that:
• The result is stored (e.g. in a `Hash`), using the function arguments as the key
• Before the actual calculation, the storage is checked; if a previous result is found, it is returned immediately
• This technique is called memoization
• In Ruby, this can easily be implemented with metaprogramming
(metaprogramming: changing the program while it runs)
• Simple application example: Cfibonacci.rb
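One possible shape of such a memoizer (an illustrative sketch only; the lecture's actual code is in Cfibonacci.rb):

```ruby
# Metaprogramming memoizer: redefines an existing method so that
# its results are cached in a Hash keyed by the argument list
module Memoize
  def memoize(name)
    original = instance_method(name)
    cache = {}
    define_method(name) do |*args|
      cache.fetch(args) { cache[args] = original.bind(self).call(*args) }
    end
  end
end

class Fib
  extend Memoize

  def fib(n)                 # the plain recursive definition
    n <= 1 ? n : fib(n - 1) + fib(n - 2)
  end
  memoize :fib               # recursive calls now hit the cache, so O(n)
end

Fib.new.fib(40)   # fast; without memoization this would take seconds
```

Note that the cache is shared by all instances of `Fib`, which is safe here because `fib` depends only on its argument.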

# Summary

• Dynamic programming is an algorithm design strategy
• Dynamic programming is suited for problems where the overall (optimal) solution can be obtained from solutions for subproblems, but these subproblems overlap
• The time complexity of dynamic programming depends on the structure of the actual problem

# Homework

• Review this lecture
• Find three problems that can be solved using dynamic programming, and investigate the algorithms used

# Glossary

dynamic programming
動的計画法
algorithm design strategies
アルゴリズムの設計方針
optimal solution
最適解
Catalan number
カタラン数
matrix chain multiplication

triangulations
(多角形の) 三角分割
(convex) polygon
(凸) 多角形
intermediate result

splitting point

arg min (argument of the minimum)

top-down
トップダウン
bottom-up
ボトムアップ
optimal substructure

overlapping subproblems

memoization (verb: memoize)
メモ化
metaprogramming
メタプログラミング