Dynamic Programming
Dynamic programming refers to simplifying a complicated problem
- by breaking it down into simpler sub-problems
- in a recursive manner.
While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively.
Likewise, in computer science, if a problem can be solved optimally
- by breaking it into sub-problems and
- then recursively finding the optimal solutions to the sub-problems,
then it is said to have optimal substructure.
The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another.
1. In Optimization and Control Theory
The recursive relation between sub-problems is called the Bellman equation.
In control theory, a typical problem is to find an admissible control that causes the system to follow an admissible trajectory over a continuous time interval while minimizing a cost function.
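In the discrete-time case, the relation can be sketched roughly as follows (the symbols are generic placeholders, not taken from a specific model):

```latex
% Discrete-time Bellman equation (generic sketch):
%   V(x)   - value of being in state x
%   u      - control / decision chosen in state x
%   c(x,u) - immediate cost of applying u in state x
%   f(x,u) - next state reached from x under u
%   \beta  - discount factor, 0 < \beta \le 1
V(x) = \min_{u} \Big\{ \, c(x, u) + \beta \, V\big(f(x, u)\big) \, \Big\}
```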
2. Computer Science
Two key attributes that a problem must have for dynamic programming to be applicable are:
- optimal substructure
- overlapping sub-problems
If the sub-problems don't overlap, the solution strategy is divide and conquer (e.g. quicksort, merge sort) rather than dynamic programming.
Overlapping sub-problems means
- the space of sub-problems is small
- the algorithm reuses previously computed solutions instead of generating new sub-problems most of the time
(e.g. the Fibonacci series; see the sketch below)
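As a minimal sketch of why the sub-problems overlap, a naive recursive Fibonacci recomputes the same values over and over (function names here are illustrative):

```python
def fib_naive(n: int) -> int:
    """Naive recursion: the same sub-problems are solved repeatedly,
    leading to exponential running time."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# fib_naive(5) recomputes fib_naive(3) twice and fib_naive(2) three times.
print(fib_naive(5))  # 5
```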
2.1. Top Down Approach
- Need to memoize (e.g. fibonacci(n) is computed top-down from fibonacci(n-1) and fibonacci(n-2); see the sketch after this list)
- More direct recursive formulation
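A minimal top-down sketch, keeping the direct recursive formulation and adding memoization (names are illustrative; `functools.lru_cache` is one common way to memoize in Python):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    """Top-down: same recursion as the naive version, but each
    sub-problem is computed only once thanks to the cache."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

print(fib_top_down(50))  # 12586269025
```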
2.2. Bottom Up Approach
- May not need to memoize (e.g. fibonacci(n) is built up starting from the smallest values; see the sketch after this list)
- May need to try reformulating the recursive formula
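A minimal bottom-up sketch (names are illustrative): the recursion is reformulated as an iteration that solves the smallest sub-problems first, and since only the last two values are ever needed, no memo table is required.

```python
def fib_bottom_up(n: int) -> int:
    """Bottom-up: build from the base cases upward, keeping
    only the two most recent values."""
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fib_bottom_up(50))  # 12586269025
```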