Prerequisite: Constraint Satisfaction & Backtracking

The Core Idea

Branch and bound (B&B) is an exact algorithm for combinatorial optimization. It performs systematic enumeration of the solution space but avoids exploring regions that cannot improve upon the best solution found so far.

The three components:

  1. Branching: Split the current problem into subproblems (children in a search tree). Each branch restricts some decision variable to a subset of its domain.
  2. Bounding: Compute a bound on the best achievable objective value in a subproblem, without enumerating all completions.
  3. Pruning: If the bound for a node is no better than the current best known solution (incumbent), prune the entire subtree.

For minimization: compute a lower bound; prune if lower bound $\geq$ incumbent. For maximization: compute an upper bound; prune if upper bound $\leq$ incumbent.

The Search Tree

Each node represents a partial solution: some variables have been fixed, others remain free. The root is the empty partial solution. Leaves that satisfy all constraints are feasible solutions.

Best-first search: Use a priority queue ordered by bound. Explore the node with the most promising bound first. This tends to find good solutions and tighten the incumbent quickly, leading to more pruning.

Depth-first search with pruning: Dive deep to find a feasible solution quickly, then backtrack and prune. Uses $O(d)$ space where $d$ is depth. Often faster in practice when good heuristics exist.

Pseudocode (minimization):

BranchAndBound(problem):
    incumbent = +infinity
    pq = priority queue, insert root with lb = LowerBound(root)
    while pq is not empty:
        node = ExtractMin(pq)          // best lower bound
        if node.lb >= incumbent: continue  // prune
        if node is a complete solution:
            incumbent = min(incumbent, node.value)
            continue
        for child in Branch(node):
            lb = LowerBound(child)
            if lb < incumbent:
                Insert(pq, child, lb)
    return incumbent
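The loop above can be made concrete in Python with `heapq` as the priority queue. This is a sketch: the toy problem (pick exactly $k$ of $n$ costs with minimum total) and all callback names are illustrative, not part of the text above.

```python
import heapq
import itertools

def branch_and_bound(root, lower_bound, branch, is_complete, value):
    """Best-first branch and bound for minimization (mirrors the pseudocode).
    All four callbacks are problem-specific."""
    incumbent = float("inf")
    tie = itertools.count()                 # tie-breaker so the heap never compares nodes
    pq = [(lower_bound(root), next(tie), root)]
    while pq:
        lb, _, node = heapq.heappop(pq)     # node with the smallest lower bound
        if lb >= incumbent:
            continue                        # prune: cannot beat the incumbent
        if is_complete(node):
            incumbent = min(incumbent, value(node))
            continue
        for child in branch(node):
            clb = lower_bound(child)
            if clb < incumbent:             # only enqueue potentially improving children
                heapq.heappush(pq, (clb, next(tie), child))
    return incumbent

# Toy problem (illustrative): choose exactly k of the costs, minimizing their sum.
# A node is (next_index, cost_so_far, num_chosen).
costs, k = [7, 3, 9, 1], 2

def lb(node):
    i, s, m = node
    cheapest = sorted(costs[i:])[: k - m]   # optimistic: finish with the cheapest leftovers
    return s + sum(cheapest) if len(cheapest) == k - m else float("inf")

def children(node):
    i, s, m = node
    if i == len(costs):
        return []
    return [(i + 1, s + costs[i], m + 1), (i + 1, s, m)]   # take item i / skip it

print(branch_and_bound((0, 0, 0), lb, children,
                       lambda n: n[2] == k, lambda n: n[1]))   # 4 (picks 3 and 1)
```

The tie-breaking counter matters in practice: without it, `heapq` would fall back to comparing node objects when two lower bounds are equal.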

Bounding: The Key to Efficiency

The quality of the bound determines how much of the search tree is pruned. A bound must be:

  • Valid: never worse than the true optimum of the subproblem (a lower bound for minimization is always $\leq$ the true minimum).
  • Tight: close to the true optimum so that pruning is effective.
  • Cheap to compute: bounding is called at every node.

Common bounding strategies:

  • LP relaxation: relax integrality constraints; solve a linear program. Always valid; tight when the LP solution happens to be integral.
  • Lagrangian relaxation: dualize complicating constraints into the objective.
  • Problem-specific combinatorial bounds: e.g., use the assignment problem lower bound for TSP.

TSP via Branch and Bound

The Travelling Salesman Problem: find a minimum-weight Hamiltonian cycle in a complete weighted graph.

Branching: Fix or exclude specific edges. A natural branching strategy: at each node, pick an undecided edge $e = (u, v)$; branch on “include $e$ in the tour” vs “exclude $e$.”

Lower bound via assignment relaxation: Relax the Hamiltonian cycle constraint to: every vertex has exactly one outgoing edge and one incoming edge (a set of cycles, not necessarily a single cycle). This is the assignment problem, solvable in $O(n^3)$ by the Hungarian algorithm. Since every Hamiltonian cycle is a feasible assignment, the assignment optimum is a valid lower bound.
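A brute-force illustration of this relaxation on a made-up 4-city instance (a real solver would use the Hungarian algorithm rather than enumerating permutations):

```python
from itertools import permutations

INF = float("inf")
# Tiny symmetric instance (hypothetical distances). INF on the diagonal makes
# self-loop assignments infeasible automatically.
d = [[INF,   2,   9,  10],
     [  2, INF,   6,   4],
     [  9,   6, INF,   8],
     [ 10,   4,   8, INF]]
n = len(d)

def assignment_bound(d):
    """Optimum of the assignment relaxation: each vertex gets exactly one
    successor and one predecessor (any union of cycles). Brute force here;
    the Hungarian algorithm computes the same value in O(n^3)."""
    return min(sum(d[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def tsp_optimum(d):
    """Brute-force minimum Hamiltonian cycle, for comparison only."""
    best = INF
    for p in permutations(range(1, n)):
        tour = (0,) + p + (0,)
        best = min(best, sum(d[tour[i]][tour[i + 1]] for i in range(n)))
    return best

print(assignment_bound(d), tsp_optimum(d))   # 20 23: the relaxation never overshoots
```

Here the assignment optimum (20, achieved by two 2-cycles) is a valid lower bound on the tour optimum (23), exactly as the argument above predicts.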

For a $10$-city instance, this bound is typically within $5$–$15\%$ of the optimum, pruning the vast majority of the search tree.

0/1 Knapsack via Branch and Bound

Variables: $x_i \in \{0, 1\}$, maximize $\sum v_i x_i$ subject to $\sum w_i x_i \leq W$.

Branching: At each node, choose the next item $i$; branch on $x_i = 1$ and $x_i = 0$.

Upper bound via LP relaxation (fractional knapsack): Allow $x_i \in [0,1]$. Sort remaining items by value-to-weight ratio $v_i/w_i$ descending; greedily fill the knapsack, taking a fractional last item if needed. Sorting once costs $O(n \log n)$; each subsequent bound evaluation is $O(n)$. The bound is valid because every 0/1 solution is feasible for the relaxation, so the fractional optimum $\geq$ the 0/1 optimum.
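The greedy bound computation can be sketched as follows (the instance data is made up for illustration):

```python
def fractional_bound(items, capacity):
    """Upper bound for 0/1 knapsack via the LP (fractional) relaxation.
    items: (value, weight) pairs, already sorted by value/weight descending."""
    bound, room = 0.0, capacity
    for v, w in items:
        if w <= room:
            bound += v              # take the whole item
            room -= w
        else:
            bound += v * room / w   # take a fraction of the next item and stop
            break
    return bound

# Made-up instance: ratios 6, 5, 4 and capacity 50.
print(fractional_bound([(60, 10), (100, 20), (120, 30)], 50))   # 240.0; the 0/1 optimum is 220
```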

Example. Items: $(v, w) \in \{(10, 2), (6, 2), (12, 3), (15, 5)\}$, $W = 6$. LP relaxation on the full problem: the ratios $v_i/w_i$ are $5, 3, 4, 3$, so take items 1 and 3 fully ($w = 5$, $v = 22$), then $1/2$ of item 2 ($v = 3$), total $v = 25$. The true 0/1 optimum is 22 (items 1 and 3). The bound guides which branches to explore first.
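Putting branching and bounding together, a compact depth-first B&B for 0/1 knapsack might look like this (a sketch; the function name and instance are illustrative):

```python
def knapsack_bb(items, capacity):
    """0/1 knapsack by depth-first branch and bound, pruning with the
    fractional upper bound. items: (value, weight) pairs."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Fractional relaxation over the remaining items i..n-1.
        b = value
        for v, w in items[i:]:
            if w <= room:
                b += v
                room -= w
            else:
                return b + v * room / w
        return b

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)                # the fixed prefix is itself feasible
        if i == len(items) or bound(i, value, room) <= best:
            return                             # leaf, or cannot beat the incumbent
        v, w = items[i]
        if w <= room:
            dfs(i + 1, value + v, room - w)    # branch x_i = 1
        dfs(i + 1, value, room)                # branch x_i = 0

    dfs(0, 0, capacity)
    return best

print(knapsack_bb([(10, 2), (6, 2), (12, 3), (15, 5)], 6))   # 22
```

Note the pruning test `bound(...) <= best`: every completion of the current node is worth at most the fractional bound, so a bound that cannot exceed the incumbent kills the whole subtree.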

Branch Strategies

Binary branching on a variable: Choose a fractional variable $x_i$ in the LP relaxation; create one child with $x_i = 0$ and another with $x_i = 1$. Standard in integer programming.

Branching on disjunctions: For general integer programs, branch on $x_i \leq \lfloor v_i^\ast \rfloor$ vs $x_i \geq \lceil v_i^\ast \rceil$, where $v_i^\ast$ is the (fractional) LP value of $x_i$. Combined with cutting planes (valid inequalities added to tighten the LP bounds before branching), this scheme becomes branch-and-cut.

Variable selection: Choose the variable closest to $0.5$ (“most fractional”) to maximize the bound improvement per branch, or use strong branching: temporarily fix each candidate and solve the LP to measure the actual bound change.

B&B vs DP vs Greedy

Property            Greedy              DP                       Branch and Bound
Exact?              Sometimes           Yes (when applicable)    Yes
Worst-case time     Polynomial          Polynomial               Exponential
Practical speed     Very fast           Fast                     Often fast with good bounds
Applicability       Problem-specific    Optimal substructure     Any discrete optimization
Memory              $O(1)$ extra        $O(\text{states})$       $O(\text{tree depth})$ (DFS)

B&B is exact and exponential in the worst case, but good bounds and pruning can make it orders of magnitude faster than exhaustive enumeration. For many real instances (e.g., MILP in industrial optimization), B&B finds the optimum quickly even when $n$ is large.

Examples

Integer programming B&B. A general mixed-integer linear program (MILP) branches on fractional integer variables and tightens bounds via LP relaxations. Modern solvers (CPLEX, Gurobi) combine B&B with cutting planes (Gomory cuts, cover inequalities) and heuristics for primal solutions, solving instances with millions of variables.

Alpha-beta pruning in game trees. In two-player zero-sum games (chess, Go), minimax search explores a tree where maximizer and minimizer alternate. Alpha-beta pruning maintains a window $[\alpha, \beta]$ of values still in play: at a minimizer node, prune the remaining children as soon as the node’s value drops to $\leq \alpha$ (the maximizer can already achieve $\alpha$ elsewhere); at a maximizer node, prune as soon as the value rises to $\geq \beta$ (the minimizer will never allow this line). This is B&B applied to game trees: with optimal move ordering, alpha-beta reduces the effective branching factor from $b$ to roughly $\sqrt{b}$, doubling the reachable search depth for a fixed budget.
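A minimal sketch of alpha-beta on a hand-built tree (nested lists are internal nodes, ints are leaf values from the maximizer’s viewpoint; the tree is hypothetical):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a toy game tree."""
    if not isinstance(node, list):
        return node                  # leaf: static evaluation
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                # beta cutoff: the minimizer avoids this line
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                # alpha cutoff: the maximizer has better elsewhere
    return value

tree = [[3, 5], [2, 9]]              # maximizer at the root, minimizers below
print(alphabeta(tree, -math.inf, math.inf, True))   # 3 (the leaf 9 is never visited)
```

In the second subtree, seeing the leaf 2 already proves the minimizer can force a value $\leq \alpha = 3$, so the leaf 9 is pruned, exactly the window logic described above.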


Read Next: Randomized Algorithms