Prerequisite: Divide and Conquer

The Two Key Properties

A problem admits a dynamic programming solution when it has:

  1. Optimal substructure. An optimal solution to the problem contains optimal solutions to its subproblems. (Contrast greedy: locally optimal choices suffice. DP: you must consider all subproblem solutions.)
  2. Overlapping subproblems. The recursive solution visits the same subproblems multiple times. (Contrast D&C: subproblems are disjoint.)

When both hold, memoizing or tabulating subproblem solutions converts exponential brute-force into polynomial time.

Top-Down vs Bottom-Up

Top-down (memoization): Write the natural recursion; cache results in a hash map or array. Computes only reachable states; may have overhead from recursion and cache lookups.

Bottom-up (tabulation): Fill a table in topological order of subproblem dependencies. Better cache locality; enables space optimization.

Both have identical asymptotic complexity when all states are visited.

Fibonacci: The Simplest Example

Naive recursion: $T(n) = T(n-1) + T(n-2) + O(1)$, which solves to $T(n) = O(\phi^n) \approx O(1.618^n)$.

With memoization, each of the $n$ states is computed once:

$$\text{dp}[n] = \text{dp}[n-1] + \text{dp}[n-2], \quad \text{dp}[0] = 0,\ \text{dp}[1] = 1$$

Time: $O(n)$. Space: $O(n)$, reducible to $O(1)$ by keeping only two previous values.
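Both styles can be sketched in a few lines; `fib_memo` and `fib_iter` are illustrative names, not part of any library:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Top-down: the natural recursion plus a cache."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n: int) -> int:
    """Bottom-up with O(1) space: keep only the last two values."""
    a, b = 0, 1  # dp[0], dp[1]
    for _ in range(n):
        a, b = b, a + b
    return a
```

The iterative version is the rolling-array idea in miniature: the full table is never materialized because each state depends only on the two preceding ones.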

Longest Common Subsequence

Given strings $s$ of length $m$ and $t$ of length $n$, find the longest subsequence common to both (not necessarily contiguous).

Recurrence:

$$\text{dp}[i][j] = \begin{cases} 0 & \text{if } i = 0 \text{ or } j = 0 \\ \text{dp}[i-1][j-1] + 1 & \text{if } s_i = t_j \\ \max(\text{dp}[i-1][j],\ \text{dp}[i][j-1]) & \text{otherwise} \end{cases}$$

The answer is $\text{dp}[m][n]$. Time: $O(mn)$. Space: $O(mn)$, reducible to $O(\min(m,n))$ using rolling arrays.

Proof of optimal substructure. Suppose $Z$ is an LCS of $s[1..m]$ and $t[1..n]$. If $s_m = t_n$, then $Z$ ends with $s_m$ and $Z[1..|Z|-1]$ is an LCS of $s[1..m-1]$ and $t[1..n-1]$ (by an exchange argument). If $s_m \neq t_n$, then $Z$ is an LCS of either $s[1..m-1], t$ or $s, t[1..n-1]$.
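A minimal sketch of the table fill, plus the standard backtrack to recover one optimal subsequence (when ties exist, which subsequence is returned depends on the tie-breaking rule):

```python
def lcs(s: str, t: str) -> str:
    """Longest common subsequence via the O(mn) table, then backtrack."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk back from dp[m][n], reversing the recurrence's choices.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if s[i - 1] == t[j - 1]:
            out.append(s[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

Note that reconstruction needs the full $O(mn)$ table; the rolling-array optimization below only works if the length alone is required.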

0/1 Knapsack

Given $n$ items with weights $w_i$ and values $v_i$, and capacity $W$, maximize total value subject to total weight $\leq W$.

Recurrence:

$$\text{dp}[i][w] = \begin{cases} 0 & \text{if } i = 0 \\ \text{dp}[i-1][w] & \text{if } w_i > w \\ \max(\text{dp}[i-1][w],\ v_i + \text{dp}[i-1][w - w_i]) & \text{otherwise} \end{cases}$$

Time: $O(nW)$. This is pseudo-polynomial: polynomial in the numeric value of $W$ but exponential in its bit-length. 0/1 knapsack is NP-hard, so no algorithm polynomial in the input size exists unless P = NP.

Space: $O(W)$ using a single 1D array iterated in reverse (to avoid using item $i$ twice).
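The 1D formulation can be sketched as follows (`knapsack` is an illustrative name); the reversed inner loop is what keeps the recurrence reading from "row $i-1$":

```python
def knapsack(weights, values, W):
    """0/1 knapsack with a single O(W) array. Iterating w downward
    guarantees dp[w - wi] still holds the value from the previous item,
    so each item is taken at most once."""
    dp = [0] * (W + 1)
    for wi, vi in zip(weights, values):
        for w in range(W, wi - 1, -1):
            dp[w] = max(dp[w], vi + dp[w - wi])
    return dp[W]
```

Iterating $w$ upward instead would turn this into the unbounded knapsack, where each item may be reused.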

Matrix Chain Multiplication

Given matrices $A_1, A_2, \ldots, A_n$ where $A_i$ has dimension $p_{i-1} \times p_i$, find the parenthesization minimizing total scalar multiplications.

Recurrence:

$$\text{dp}[i][j] = \min_{i \leq k < j} \left( \text{dp}[i][k] + \text{dp}[k+1][j] + p_{i-1} \cdot p_k \cdot p_j \right)$$

with $\text{dp}[i][i] = 0$. Fill by increasing chain length $\ell = j - i$.

Time: $O(n^3)$. Space: $O(n^2)$. This cannot be solved greedily: the optimal split depends on all dimensions.
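A sketch of the interval DP, filling by increasing chain length as described (the dimension array `p` has length $n+1$):

```python
def matrix_chain(p):
    """Minimum scalar multiplications to multiply A_1..A_n,
    where A_i has dimension p[i-1] x p[i]."""
    n = len(p) - 1
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)        # try every split point
            )
    return dp[1][n]
```

For dimensions $10 \times 100$, $100 \times 5$, $5 \times 50$, the split $(A_1 A_2) A_3$ costs $5000 + 2500 = 7500$, while $A_1 (A_2 A_3)$ costs $75000$.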

Edit Distance (Levenshtein)

The minimum number of single-character insertions, deletions, and substitutions to transform string $s$ into $t$.

$$\text{dp}[i][j] = \min\begin{cases} \text{dp}[i-1][j] + 1 & \text{(delete from } s\text{)} \\ \text{dp}[i][j-1] + 1 & \text{(insert into } s\text{)} \\ \text{dp}[i-1][j-1] + \mathbf{1}[s_i \neq t_j] & \text{(substitute or match)} \end{cases}$$

Base cases: $\text{dp}[i][0] = i$, $\text{dp}[0][j] = j$. Time: $O(mn)$. Space: $O(\min(m,n))$ with rolling arrays.
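A rolling-row sketch (illustrative name `edit_distance`); swapping the strings first keeps the row length at $\min(m,n)+1$:

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance with two rolling rows: O(mn) time, O(min(m,n)) space."""
    if len(s) < len(t):
        s, t = t, s                       # make t the shorter string
    prev = list(range(len(t) + 1))        # base case dp[0][j] = j
    for i, a in enumerate(s, 1):
        curr = [i] + [0] * len(t)         # base case dp[i][0] = i
        for j, b in enumerate(t, 1):
            curr[j] = min(prev[j] + 1,             # delete from s
                          curr[j - 1] + 1,         # insert into s
                          prev[j - 1] + (a != b))  # substitute or match
        prev = curr
    return prev[-1]
```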

Bellman-Ford as DP

Bellman-Ford computes single-source shortest paths in graphs with negative edges (but no negative cycles). Define:

$$\text{dp}[v][k] = \text{shortest path from source } s \text{ to } v \text{ using at most } k \text{ edges}$$

Recurrence:

$$\text{dp}[v][k] = \min\left(\text{dp}[v][k-1],\ \min_{(u,v) \in E} \text{dp}[u][k-1] + w(u,v)\right)$$

After $n-1$ iterations (since any shortest path in a graph with $n$ vertices uses at most $n-1$ edges), the answers are correct. Time: $O(VE)$.
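In practice the 2D table is collapsed into a single distance array updated in place, which can only help (it may propagate shorter paths earlier); a sketch under that standard simplification, with an early exit once a full pass makes no update:

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths; edges is a list of (u, v, w) tuples.
    After k passes, dist[v] is at most the shortest path using <= k edges."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                # shortest paths use <= n-1 edges
        updated = False
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:                   # converged early
            break
    return dist
```

A further pass that still finds an improvement would witness a negative cycle reachable from the source.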

DP on Trees and DAGs

On a DAG, subproblem dependencies form a partial order; fill in reverse topological order. On trees, define $\text{dp}[v]$ in terms of $\text{dp}[\text{children}(v)]$; compute via post-order DFS.

Example - independent set on a tree: Let $\text{dp}[v][0/1]$ = max weight independent set in subtree of $v$ when $v$ is excluded/included.

$$\text{dp}[v][1] = w_v + \sum_{c \in \text{children}(v)} \text{dp}[c][0]$$ $$\text{dp}[v][0] = \sum_{c \in \text{children}(v)} \max(\text{dp}[c][0], \text{dp}[c][1])$$

Time: $O(n)$.
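The tree recurrence above can be sketched with an explicit post-order traversal (iterative, to avoid recursion limits on deep trees); `children` is assumed to be an adjacency list of a rooted tree:

```python
def max_weight_independent_set(weights, children, root=0):
    """dp[v][0]/dp[v][1] = best independent-set weight in v's subtree
    with v excluded / included."""
    n = len(weights)
    dp = [[0, 0] for _ in range(n)]
    order, stack = [], [root]
    while stack:                          # preorder; reversed = postorder
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):             # children before parents
        dp[v][1] = weights[v] + sum(dp[c][0] for c in children[v])
        dp[v][0] = sum(max(dp[c][0], dp[c][1]) for c in children[v])
    return max(dp[root])
```

On the path $0{-}1{-}2$ with weights $(3, 10, 3)$, the optimum takes node 1 alone for weight 10, beating the two endpoints' combined 6.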

Space Optimization: Rolling Arrays

Many DP recurrences only depend on the previous row (or the previous two entries). In such cases, maintain only the rows in use.

For LCS and edit distance: $\text{dp}[i][j]$ depends only on row $i-1$. Keep two arrays of length $n+1$, alternating. Space drops from $O(mn)$ to $O(n)$.
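A sketch of the two-row LCS length (swapping so the rows have length $\min(m,n)+1$ rather than $n+1$, which is the tighter bound stated earlier):

```python
def lcs_length(s: str, t: str) -> int:
    """LCS length only, with two alternating rows."""
    if len(t) > len(s):
        s, t = t, s                       # make t the shorter string
    prev = [0] * (len(t) + 1)
    for a in s:
        curr = [0] * (len(t) + 1)
        for j, b in enumerate(t, 1):
            curr[j] = prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1])
        prev = curr
    return prev[-1]
```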

For 0/1 knapsack: iterate $w$ from $W$ down to $w_i$ using a single array. This ensures item $i$ is used at most once.

Examples

Coin change. Given denominations $c_1, \ldots, c_k$ and target $T$, find the minimum number of coins. Let $\text{dp}[t]$ = min coins for amount $t$.

$$\text{dp}[t] = 1 + \min_{c_i \leq t} \text{dp}[t - c_i], \quad \text{dp}[0] = 0$$

Time: $O(kT)$. The greedy algorithm (largest denomination first) fails for general coin systems.
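A sketch of the tabulation (`min_coins` is an illustrative name), using $\infty$ as the "unreachable" sentinel:

```python
def min_coins(coins, T):
    """Minimum number of coins summing to T, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * T                 # dp[0] = 0
    for t in range(1, T + 1):
        for c in coins:
            if c <= t and dp[t - c] + 1 < dp[t]:
                dp[t] = dp[t - c] + 1
    return dp[T] if dp[T] < INF else -1
```

With denominations $\{1, 3, 4\}$ and target 6, the DP finds $3 + 3$ (2 coins) while greedy takes $4 + 1 + 1$ (3 coins).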

Longest Increasing Subsequence. Find the longest strictly increasing subsequence of $A[1..n]$. Naive DP: $O(n^2)$ with $\text{dp}[i] = 1 + \max_{j < i, A[j] < A[i]} \text{dp}[j]$.

Via patience sorting: maintain a list of “piles” where each pile’s top is the smallest tail of an increasing subsequence of that length. Binary search for the correct pile in $O(\log n)$ per element. Total time: $O(n \log n)$. The number of piles equals the LIS length.
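The piles collapse to a single `tails` array, where `tails[k]` is the smallest tail of an increasing subsequence of length $k+1$; a sketch using the standard library's binary search:

```python
import bisect

def lis_length(a):
    """O(n log n) longest strictly increasing subsequence length."""
    tails = []
    for x in a:
        # bisect_left gives the leftmost pile whose top is >= x,
        # which enforces strict increase (bisect_right would allow ties).
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)               # start a new, longer pile
        else:
            tails[i] = x                  # lower an existing pile's top
    return len(tails)
```

Note `tails` itself is generally not an LIS; only its length is meaningful. Recovering an actual subsequence requires storing predecessor links.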


Read Next: Greedy Algorithms