Prerequisite: Local Search & Hill Climbing

Hill climbing accepts only improving moves and stalls at local optima. Simulated annealing escapes by accepting worse moves probabilistically - a form of randomisation. Tabu search takes a different approach: use explicit memory to guide the search. Forbid recently visited moves or solutions, forcing the search to explore new regions, and allow the current solution to worsen in the short term in order to escape local optima.

Tabu search was introduced by Fred Glover in 1986 and remains one of the most effective metaheuristics in combinatorial optimisation, particularly for scheduling problems.

The Tabu List

The core data structure is the tabu list: a memory of recently performed moves (or recently visited solutions) that are declared forbidden for the next $k$ iterations. The integer $k$ is the tabu tenure.

At each iteration:

  1. Generate the neighbourhood $N(s)$ of the current solution $s$.
  2. Remove from $N(s)$ any move that is tabu (unless an aspiration criterion applies - see below).
  3. Move to the best solution in the remaining neighbourhood, even if it is worse than $s$.
  4. Add the just-executed move to the tabu list; remove the oldest entry if the list has reached size $k$.

Because recent moves are forbidden, the search cannot immediately undo its previous step. It is forced to move through a sequence of new solutions, eventually climbing out of the local optimum.
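The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the function and parameter names are hypothetical, and move attributes are assumed self-inverse (e.g. swaps), so banning the executed move also bans its undo.

```python
from collections import deque

def tabu_search(initial, objective, neighbours, tenure=7, iterations=200):
    """Minimise `objective` with a short-term tabu list of recent moves.

    `neighbours(s)` yields (move, candidate) pairs, where `move` is a
    hashable move attribute (assumed self-inverse, e.g. a swap).
    """
    current = best = initial
    tabu = deque(maxlen=tenure)  # oldest entry drops off once size k is reached
    for _ in range(iterations):
        admissible = []
        for move, cand in neighbours(current):
            # Skip tabu moves unless aspiration applies (beats the incumbent).
            if move in tabu and objective(cand) >= objective(best):
                continue
            admissible.append((objective(cand), move, cand))
        if not admissible:
            break  # every neighbour is tabu and none aspires
        # Move to the best admissible neighbour, even if it is worse.
        _, move, current = min(admissible, key=lambda t: t[0])
        tabu.append(move)  # record the executed move for the next `tenure` steps
        if objective(current) < objective(best):
            best = current
    return best
```

For example, with a pairwise-swap neighbourhood over permutations and the total displacement $\sum_i |p_i - i|$ as the objective, the search drives a reversed permutation back to the identity.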

Tabu tenure. If $k$ is too small, the search may cycle among a small set of solutions. If $k$ is too large, too many moves are forbidden and the search may be cut off from good solutions. Typical values are $k \in [5, 20]$ for most problems, often set to $\sqrt{n}$ where $n$ is the problem size.

Aspiration Criteria

A move is tabu if it (or, in some variants, its reverse) appears on the tabu list. But a tabu move might lead to a solution better than the best found so far, and it would be counterproductive to forbid it. The standard aspiration criterion therefore overrides tabu status: a tabu move is allowed if the resulting solution improves on the best known solution.

Formally, accept a tabu move $m$ producing solution $s'$ if:

$$f(s') < f(s^\ast)$$

where $s^\ast$ is the best solution found so far.
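The acceptance test can be isolated in a small predicate. A minimal sketch, with hypothetical names; `best_value` stands for $f(s^\ast)$:

```python
def is_admissible(move, candidate, tabu, objective, best_value):
    """A move is admissible if it is not tabu, or if it aspires:
    the resulting solution beats the best value found so far."""
    if move not in tabu:
        return True
    return objective(candidate) < best_value  # aspiration: f(s') < f(s*)
```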

Short-Term, Medium-Term, and Long-Term Memory

Tabu search’s distinguishing feature is a layered memory structure operating at different timescales.

Short-term memory (recency). The tabu list stores the most recent $k$ moves to prevent cycling. This is the core mechanism described above.

Medium-term memory (intensification). Track which regions of the solution space have produced good solutions. Periodically restart from the best known solution, or bias the neighbourhood evaluation toward attributes that appeared in good solutions. This focuses computational effort on promising areas.

Long-term memory (diversification). Track the frequency with which each solution attribute has appeared across all visited solutions. Add a frequency penalty $p_i$ for attribute $i$ to the objective when evaluating neighbours:

$$\tilde{f}(s') = f(s') + \lambda \sum_i p_i \cdot \mathbf{1}[\text{attribute } i \in s']$$

When the search has concentrated too long on one region, frequency penalties push it toward under-explored attributes. The parameter $\lambda$ controls the balance between objective quality and diversification.
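As a sketch of the penalised evaluation, the sum in $\tilde{f}$ runs only over attributes present in $s'$. Names here are hypothetical; `freq` is assumed to be a `Counter` mapping each attribute to how often it has appeared in visited solutions:

```python
from collections import Counter

def penalised_objective(f_value, attributes, freq, lam=0.1):
    """f~(s') = f(s') + lambda * sum of p_i over attributes present in s'.
    `freq` holds the attribute frequencies p_i (a Counter, so unseen
    attributes contribute 0); `lam` trades quality against diversification."""
    return f_value + lam * sum(freq[a] for a in attributes)
```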

Choosing the tabu tenure $k$ is delicate and problem-dependent. Reactive tabu search (Battiti and Tecchiolli, 1994) adjusts $k$ dynamically:

  • If a solution is visited that has been seen recently, increase $k$ (the search is cycling; force longer memory).
  • If solutions have not repeated for many iterations, decrease $k$ (the search is exploring well; allow more moves).

This removes a key tuning parameter and makes the algorithm more robust across problem instances.
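One simple way to realise the two adjustment rules is a multiplicative update with clamping. The growth/shrink factors and bounds below are illustrative assumptions, not values from Battiti and Tecchiolli:

```python
def reactive_tenure(k, revisited, k_min=2, k_max=50, grow=1.2, shrink=0.9):
    """Adjust the tabu tenure reactively: grow k when a recently seen
    solution reappears (the search is cycling), shrink it after a long
    stretch without repeats (the search is exploring well)."""
    k = k * grow if revisited else k * shrink
    return max(k_min, min(k_max, round(k)))
```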

Candidate List Strategies

In large-scale problems, evaluating all neighbours at each step is prohibitively expensive. Candidate list strategies restrict the neighbourhood considered:

  • Elite candidates: maintain a small list of moves with the greatest improvement potential, updated incrementally as the solution changes.
  • Restricted neighbourhood: only evaluate moves touching the most recently modified solution components.
  • Frequency-biased sampling: sample moves with probability inversely proportional to their frequency, giving fresh moves higher priority.
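The third strategy can be sketched with weighted sampling, where each move's weight is the reciprocal of one plus its application count (a hypothetical weighting; any decreasing function of frequency works):

```python
import random
from collections import Counter

def sample_candidates(moves, freq, k, rng=None):
    """Frequency-biased candidate list: draw k moves with probability
    inversely proportional to how often each has been applied, so fresh
    moves get higher priority."""
    rng = rng or random.Random()
    weights = [1.0 / (1 + freq[m]) for m in moves]
    return rng.choices(moves, weights=weights, k=k)
```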

Comparison with Simulated Annealing

Both SA and tabu search are metaheuristics for escaping local optima, but their mechanisms differ:

  Property          Simulated Annealing        Tabu Search
  ----------------  -------------------------  ------------------------------
  Escape mechanism  Probabilistic acceptance   Forbidden-move list
  Memory            None (memoryless)          Explicit tabu list
  Determinism       Stochastic                 Deterministic given tabu list
  Tuning            Temperature schedule       Tabu tenure, memory parameters
  Parallelism       Easy (parallel tempering)  More complex

Tabu search is often preferred when determinism is desirable (reproducible results), when the neighbourhood structure can be evaluated efficiently with incremental updates, or when problem-specific knowledge can be embedded in the candidate list.

Examples

Job Shop Scheduling. Given $n$ jobs and $m$ machines, each job requires operations in a fixed order on specific machines. Minimise makespan. The TSAB algorithm (Nowicki and Smutnicki, 1996) uses tabu search on a neighbourhood that swaps adjacent operations on the critical path, and solves the notorious ft10 benchmark (a $10 \times 10$ instance that remained unsolved for over two decades) in seconds.

Vehicle Routing Problem. Given customer locations and vehicle capacities, find minimum-cost delivery routes. Tabu search with moves that relocate a customer from one route to another, or swap customers between routes, consistently produces near-optimal solutions and is used in commercial logistics software.

Quadratic Assignment Problem (QAP). Assign $n$ facilities to $n$ locations to minimise the sum of flow-times-distance products. QAP is among the hardest combinatorial optimisation problems. Tabu search with pairwise swap neighbourhoods and frequency-based diversification produces the best known solutions for many benchmark instances.
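To make the QAP neighbourhood concrete, here is a sketch of the objective and a full scan of the pairwise-swap neighbourhood (a naive $O(n^2)$-cost evaluation per neighbour; real implementations use incremental delta formulas):

```python
def qap_cost(perm, flow, dist):
    """Sum of flow[i][j] * dist[perm[i]][perm[j]]: facility i is assigned
    to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def best_swap(perm, flow, dist):
    """Scan all pairwise swaps and return the cheapest neighbour found."""
    best, best_c = perm, qap_cost(perm, flow, dist)
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            q = list(perm)
            q[i], q[j] = q[j], q[i]
            c = qap_cost(tuple(q), flow, dist)
            if c < best_c:
                best, best_c = tuple(q), c
    return best, best_c
```

A tabu search for QAP would iterate `best_swap` over admissible (non-tabu) swaps, recording each executed swap `(i, j)` on the tabu list.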


Read Next: Genetic Algorithms & Evolutionary Computation