In a nutshell, dynamic programming is recursion without repetition. Dynamic programming algorithms store the solutions of intermediate subproblems, often but not always in some kind of array or table.
Dynamic programming is not about filling in tables. It's about smart recursion!
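As a quick illustration of smart recursion, consider the Fibonacci numbers, the running example behind RecFibo below. The following Python sketch (the names `rec_fibo` and `mem_fibo`, and the use of a dictionary, are illustrative choices, not taken from the text) contrasts the naive recursion, which recomputes the same subproblems over and over, with a memoized version that stores each intermediate answer the first time it is computed.

```python
def rec_fibo(n):
    # Plain recursion: exponential time, because the calls for F(n-1)
    # and F(n-2) both recompute the same smaller subproblems.
    if n <= 1:
        return n
    return rec_fibo(n - 1) + rec_fibo(n - 2)

def mem_fibo(n, memo=None):
    # Recursion without repetition: each subproblem is solved at most
    # once, and its answer is stored in the dictionary `memo`.
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = mem_fibo(n - 1, memo) + mem_fibo(n - 2, memo)
    return memo[n]
```

Because every subproblem is solved at most once, the memoized version performs only O(n) additions.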
Dynamic programming algorithms are best developed in two distinct stages.
- Formulate the problem recursively. Write down a recursive formula or algorithm for the whole problem in terms of the answers to smaller subproblems. This is the hard part. A complete recursive formulation has two parts:
- Specification. Describe the problem that you want to solve recursively, in coherent and precise English -- not how to solve that problem, but what problem you're trying to solve. Without this specification, it is impossible, even in principle, to determine whether your solution is correct.
- Solution. Give a clear recursive formula or algorithm for the whole problem in terms of the answers to smaller instances of exactly the same problem.
- Build solutions to your recurrence from the bottom up. Write an algorithm that starts with the base cases of your recurrence and works its way up to the final solution, by considering intermediate subproblems in the correct order. This stage can be broken down into several smaller, relatively mechanical steps:
- Identify the subproblems. What are all the different ways your recursive algorithm can call itself, starting with some initial input? For example, the argument to RecFibo is always an integer between 0 and n.
- Choose a memoization data structure. Find a data structure that can store the solution to every subproblem you identified in the previous step. This is usually but not always a multidimensional array.
- Identify dependencies. Except for the base cases, every subproblem depends on other subproblems -- which ones? Draw a picture of your data structure, pick a generic element, and draw arrows from each of the other elements it depends on. Then formalize your picture.
- Find a good evaluation order. Order the subproblems so that each one comes after the subproblems it depends on. You should consider the base cases first, then the subproblems that depend only on base cases, and so on, eventually building up to the original top-level problem. The dependencies you identified in the previous step define a partial order over the subproblems; you need to find a linear extension of that partial order. Be careful!
- Analyze space and running time. The number of distinct subproblems determines the space complexity of your memoized algorithm. To compute the total running time, add up the running times of all possible subproblems, assuming deeper recursive calls are already memoized. You can actually do this immediately after the first step, as soon as you have identified the subproblems.
- Write down the algorithm. You know what order to consider the subproblems, and you know how to solve each subproblem. So do that! If your data structure is an array, this usually means writing a few nested for-loops around your original recurrence, and replacing the recursive calls with array look-ups, as in the sketch following this list.
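To make these mechanical steps concrete, here is a minimal Python sketch of a bottom-up Fibonacci algorithm (the name `iter_fibo` and the use of Python are illustrative assumptions, not the text's pseudocode). The subproblems are the integers 0 through n, the memoization data structure is a one-dimensional array, each entry depends only on the two entries immediately before it, so increasing index order is a valid evaluation order, and the algorithm uses O(n) space and O(n) additions.

```python
def iter_fibo(n):
    # Memoization data structure: one array entry per subproblem 0..n.
    table = [0] * (n + 1)

    # Base cases of the recurrence.
    if n >= 1:
        table[1] = 1

    # Evaluation order: table[i] depends only on table[i-1] and
    # table[i-2], so filling the array in increasing index order
    # respects every dependency.
    for i in range(2, n + 1):
        # The original recurrence, with recursive calls replaced by
        # array look-ups.
        table[i] = table[i - 1] + table[i - 2]

    # O(n) space and O(n) additions overall.
    return table[n]
```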