Laplace Transform
Definition and Region of Convergence
Definition. The Laplace transform of a function $f: [0, \infty) \to \mathbb{R}$ is
$$\mathcal{L}\{f(t)\}(s) = F(s) = \int_0^\infty f(t)\,e^{-st}\,dt$$
provided the integral converges. Here $s \in \mathbb{C}$ is a complex frequency variable.
Region of Convergence (ROC). The integral converges absolutely for all $s$ with $\operatorname{Re}(s) > \sigma_c$, where $\sigma_c$ is the abscissa of convergence. For example, $f(t) = e^{at}$ gives $F(s) = 1/(s-a)$ with ROC $\operatorname{Re}(s) > a$.
Geometrically, the ROC is a right half-plane in the complex $s$-plane:
```
        Im(s)
          |        :
          |        :   ROC: Re(s) > sigma_c
          |        :
----------+--------+------------------> Re(s)
          |     sigma_c
          |        :
          |        :
```
The Laplace transform is analytic in its ROC.
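The definition can be spot-checked numerically. The sketch below (stdlib-only Python; the function name and parameter values are our choices) truncates the integral at a finite horizon $T$, which is harmless once $e^{-\operatorname{Re}(s)T}f(T)$ is negligible:

```python
import math

def laplace_num(f, s, T=60.0, n=50000):
    """Approximate the Laplace integral of f at real s by the trapezoid
    rule on [0, T]; for Re(s) > sigma_c the tail beyond T is negligible."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

a, s = 0.5, 2.0                                   # need Re(s) > a for convergence
approx = laplace_num(lambda t: math.exp(a * t), s)
exact = 1.0 / (s - a)                             # F(s) = 1/(s - a)
print(approx, exact)                              # both ≈ 0.6667
```

Choosing $s$ to the left of the abscissa of convergence (here $s \leq a$) makes the integrand blow up, which is exactly the ROC restriction.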
Transforms of Standard Functions
| $f(t)$ | $F(s) = \mathcal{L}\{f\}$ | ROC |
|---|---|---|
| $1$ | $\dfrac{1}{s}$ | $\operatorname{Re}(s) > 0$ |
| $t^n$ | $\dfrac{n!}{s^{n+1}}$ | $\operatorname{Re}(s) > 0$ |
| $e^{at}$ | $\dfrac{1}{s - a}$ | $\operatorname{Re}(s) > a$ |
| $\sin(\omega t)$ | $\dfrac{\omega}{s^2 + \omega^2}$ | $\operatorname{Re}(s) > 0$ |
| $\cos(\omega t)$ | $\dfrac{s}{s^2 + \omega^2}$ | $\operatorname{Re}(s) > 0$ |
| $u(t)$ (unit step) | $\dfrac{1}{s}$ | $\operatorname{Re}(s) > 0$ |
| $\delta(t)$ (Dirac delta) | $1$ | all $s$ |
The $\sin$ and $\cos$ entries follow from computing $\mathcal{L}\{e^{i\omega t}\} = 1/(s - i\omega)$ and taking real and imaginary parts.
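The same numerical spot check recovers both table entries at once from the complex exponential (stdlib-only sketch; the function name and sample values are ours):

```python
import cmath

def laplace_exp(s, omega, T=60.0, n=50000):
    # Trapezoid approximation of the integral of e^{i omega t} e^{-s t}
    # over [0, T]; the exact answer is 1/(s - i omega).
    h = T / n
    g = lambda t: cmath.exp((1j * omega - s) * t)
    total = 0.5 * (g(0.0) + g(T))
    for k in range(1, n):
        total += g(k * h)
    return h * total

s, w = 2.0, 3.0
F = laplace_exp(s, w)
print(F.real, s / (s**2 + w**2))   # real part → L{cos(wt)} = s/(s^2 + w^2)
print(F.imag, w / (s**2 + w**2))   # imag part → L{sin(wt)} = w/(s^2 + w^2)
```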
Properties
Linearity. $\mathcal{L}\{af + bg\} = aF(s) + bG(s)$.
Time Shift. If $f$ is shifted by $a \geq 0$:
$$\mathcal{L}\{f(t - a)\,u(t - a)\} = e^{-as}\,F(s).$$
Frequency Shift. Multiplying by an exponential shifts $s$:
$$\mathcal{L}\{e^{at}\,f(t)\} = F(s - a).$$
Convolution Theorem. The Laplace transform converts convolution to multiplication:
$$\mathcal{L}\{(f * g)(t)\} = F(s)\,G(s)$$
where $(f * g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau$.
Proof sketch. Substitute the definition of convolution into the Laplace integral, interchange the order of integration (justified by absolute convergence), and collect the exponential factors to get $F(s)G(s)$.
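A numerical spot check of the theorem (stdlib-only sketch; names ours): take $f(t) = e^{-t}$ and $g(t) = e^{-2t}$, whose convolution evaluates in closed form to $e^{-t} - e^{-2t}$, and compare transforms at a sample $s$:

```python
import math

def laplace_num(f, s, T=40.0, n=40000):
    # Trapezoid approximation of the Laplace integral of f on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

s = 1.5
conv = lambda t: math.exp(-t) - math.exp(-2 * t)   # (f * g)(t) in closed form
lhs = laplace_num(conv, s)                          # L{(f * g)(t)}
rhs = (1 / (s + 1)) * (1 / (s + 2))                 # F(s) G(s)
print(lhs, rhs)                                     # both ≈ 0.1143
```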
Differentiation Rules
Theorem. If $f$ is differentiable and $\mathcal{L}\{f\} = F(s)$, then
$$\mathcal{L}\{f'(t)\} = s\,F(s) - f(0).$$
Proof. Integrate by parts: $\int_0^\infty f'(t)\,e^{-st}\,dt = [f(t)e^{-st}]_0^\infty + s\int_0^\infty f(t)\,e^{-st}\,dt = -f(0) + sF(s)$, where the boundary term at infinity vanishes because $f(t)e^{-st} \to 0$ for $s$ in the ROC.
Applying the rule repeatedly:
$$\mathcal{L}\{f''(t)\} = s^2 F(s) - s\,f(0) - f'(0).$$
This is the key property that makes Laplace transforms useful for ODEs: differentiation becomes multiplication by $s$, reducing an ODE to an algebraic equation.
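As a quick consistency check (the sample values are our choices), take $f(t) = \cos t$, whose derivative is $-\sin t$, and compare both sides of the rule at $s = 2$ using the table entries:

```python
s, w = 2.0, 1.0                  # sample point s = 2; f(t) = cos(t), so omega = 1
F = s / (s**2 + w**2)            # L{cos(t)} = s/(s^2 + 1), from the table
lhs = -w / (s**2 + w**2)         # L{f'} = L{-sin(t)} = -1/(s^2 + 1), from the table
rhs = s * F - 1.0                # s F(s) - f(0), with cos(0) = 1
print(lhs, rhs)                  # both equal -0.2
```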
Solving ODEs with the Laplace Transform
Strategy. Given $ay'' + by' + cy = g(t)$ with initial conditions $y(0) = y_0$, $y'(0) = v_0$:
- Apply $\mathcal{L}$ to both sides.
- Use the differentiation rules to express $\mathcal{L}\{y''\}$ and $\mathcal{L}\{y'\}$ in terms of $Y(s) = \mathcal{L}\{y\}$.
- Solve algebraically for $Y(s)$.
- Invert to find $y(t) = \mathcal{L}^{-1}\{Y(s)\}$.
Example. Solve $y'' + y = 0$, $y(0) = 1$, $y'(0) = 0$.
Transform: $s^2 Y - s\cdot 1 - 0 + Y = 0 \implies (s^2 + 1)Y = s \implies Y(s) = \frac{s}{s^2 + 1}$.
Inverting: $y(t) = \cos(t)$.
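The Laplace-transform answer can be cross-checked by integrating the ODE numerically (a stdlib-only sketch using classical RK4; step size and horizon are arbitrary choices):

```python
import math

# Integrate y'' + y = 0 with y(0) = 1, y'(0) = 0 as the first-order system
# y' = v, v' = -y, and compare with the Laplace solution y(t) = cos(t).
def rk4_step(state, h):
    def deriv(st):
        y, v = st
        return (v, -y)
    y, v = state
    k1 = deriv(state)
    k2 = deriv((y + h/2 * k1[0], v + h/2 * k1[1]))
    k3 = deriv((y + h/2 * k2[0], v + h/2 * k2[1]))
    k4 = deriv((y + h * k3[0], v + h * k3[1]))
    return (y + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state, h, steps = (1.0, 0.0), 0.01, 500      # integrate out to t = 5
for _ in range(steps):
    state = rk4_step(state, h)
print(state[0], math.cos(5.0))               # both ≈ 0.2837
```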
Partial Fractions for Inverse Laplace
When $Y(s)$ is a ratio of polynomials, decompose into partial fractions before inverting. For example:
$$Y(s) = \frac{2s + 3}{(s+1)(s+2)} = \frac{A}{s+1} + \frac{B}{s+2}.$$
Solving: $A = 1$, $B = 1$, so $y(t) = e^{-t} + e^{-2t}$.
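The coefficients follow mechanically from the Heaviside cover-up method, spelled out in this small sketch (variable names are ours):

```python
# Heaviside cover-up for Y(s) = (2s + 3)/((s + 1)(s + 2)):
# A = (s + 1) Y(s) evaluated at s = -1; B = (s + 2) Y(s) at s = -2.
num = lambda s: 2 * s + 3
A = num(-1) / (-1 + 2)        # cover up (s + 1): A = 1
B = num(-2) / (-2 + 1)        # cover up (s + 2): B = 1
print(A, B)                   # 1.0 1.0
```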
Complex poles contribute damped sinusoids: $\frac{1}{(s+a)^2 + \omega^2}$ inverts to $\frac{1}{\omega}e^{-at}\sin(\omega t)$.
Transfer Functions and Control Theory
In control theory, a linear time-invariant (LTI) system relates input $x(t)$ to output $y(t)$ via a differential equation. Taking the Laplace transform (assuming zero initial conditions) gives
$$Y(s) = H(s)\,X(s)$$
where the transfer function is $H(s) = Y(s)/X(s)$.
The transfer function is a rational function $H(s) = N(s)/D(s)$. Its poles are the roots of $D(s)$ and its zeros are the roots of $N(s)$.
Stability Condition. An LTI system is bounded-input bounded-output (BIBO) stable if and only if all poles of $H(s)$ lie in the open left half-plane $\operatorname{Re}(s) < 0$.
```
   stable    Im(s)
   region      |
    X    X     |
               |
---------------+-------------> Re(s)
    X    X     |
               |
```
Poles (X) in left half-plane => stable system.
Poles on imaginary axis => marginally stable.
Poles in right half-plane => unstable.
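For a second-order system the pole locations, and hence stability, can be checked directly with the quadratic formula (a stdlib-only sketch; the example transfer functions are our choices):

```python
import cmath

def poles_2nd_order(a, b, c):
    # Roots of the denominator D(s) = a s^2 + b s + c.
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

def bibo_stable(poles, tol=1e-12):
    # BIBO stable iff every pole satisfies Re(s) < 0.
    return all(p.real < -tol for p in poles)

# H(s) = 1/(s^2 + 3s + 2): poles at s = -1 and s = -2 → stable.
print(bibo_stable(poles_2nd_order(1, 3, 2)))    # True
# H(s) = 1/(s^2 - s + 2): poles at (1 ± i√7)/2, Re = 0.5 → unstable.
print(bibo_stable(poles_2nd_order(1, -1, 2)))   # False
```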
The step response $y(t) = \mathcal{L}^{-1}\{H(s)/s\}$ reveals how the system responds to a sudden input, and is fundamental in control design (PID controllers, Bode plots, Nyquist criterion).
Initial and Final Value Theorems
Initial Value Theorem. If $f$ and $f'$ are Laplace-transformable:
$$\lim_{t \to 0^+} f(t) = \lim_{s \to \infty} s\,F(s).$$
Final Value Theorem. If $f$ has a finite limit as $t \to \infty$, and all poles of $sF(s)$ are in the left half-plane:
$$\lim_{t \to \infty} f(t) = \lim_{s \to 0} s\,F(s).$$
These let you extract limiting behavior directly from $F(s)$ without inverting the transform - useful for checking steady-state values in control systems.
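For example, $f(t) = 1 - e^{-t}$ has $F(s) = 1/(s(s+1))$, and both theorems can be checked by evaluating $sF(s)$ at extreme values of $s$ (a small sketch; the probe values $10^{\pm 9}$ are arbitrary):

```python
# For f(t) = 1 - e^{-t}: F(s) = 1/(s(s+1)), so s F(s) = 1/(s + 1).
sF = lambda s: 1.0 / (s + 1.0)

initial = sF(1e9)       # s → ∞: should approach f(0+) = 0
final = sF(1e-9)        # s → 0: should approach lim_{t→∞} f(t) = 1
print(initial, final)   # ≈ 0.0 and ≈ 1.0
```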
Connection to the Fourier Transform
The Fourier transform is
$$\mathcal{F}\{f\}(\omega) = \int_{-\infty}^\infty f(t)\,e^{-i\omega t}\,dt.$$
For causal functions ($f(t) = 0$ for $t < 0$) whose ROC includes the imaginary axis, evaluating the Laplace transform at $s = i\omega$ gives the Fourier transform:
$$\mathcal{L}\{f\}(i\omega) = \int_0^\infty f(t)\,e^{-i\omega t}\,dt = \mathcal{F}\{f\}(\omega).$$
The Laplace transform is thus a generalization: it adds a real damping factor $e^{-\sigma t}$ that ensures convergence even when $f$ grows. The Fourier transform lives on the boundary of the ROC.
This explains why poles in the left half-plane correspond to stable behavior: the Fourier transform converges on the imaginary axis, and a left-half-plane pole at $s = -a + i\omega_0$ contributes a damped oscillation $e^{-at}\cos(\omega_0 t)$ that decays.
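To see this concretely, take the causal function $f(t) = e^{-t}u(t)$, whose ROC $\operatorname{Re}(s) > -1$ contains the imaginary axis. The sketch below (stdlib-only; function name is ours) compares the numerically evaluated transform at $s = i\omega$ with $1/(1 + i\omega)$:

```python
import cmath

def laplace_on_axis(omega, T=40.0, n=40000):
    # Trapezoid approximation of the integral of e^{-t} e^{-i omega t}
    # over [0, T]: the Laplace transform of e^{-t} u(t) at s = i omega.
    h = T / n
    g = lambda t: cmath.exp(-(1 + 1j * omega) * t)
    total = 0.5 * (g(0.0) + g(T))
    for k in range(1, n):
        total += g(k * h)
    return h * total

w = 2.0
print(laplace_on_axis(w))     # ≈ (0.2-0.4j)
print(1 / (1 + 1j * w))       # Fourier transform of e^{-t} u(t): 1/(1 + iω)
```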
The Z-Transform and Applications
The Z-transform is the discrete analogue of the Laplace transform. For a sequence $\{x[n]\}$:
$$X(z) = \sum_{n=0}^\infty x[n]\,z^{-n}.$$
The substitution $z = e^{sT}$ (with $T$ the sampling period) links the two: poles in the left half-plane of the $s$-domain map to poles inside the unit circle $|z| < 1$ in the $z$-domain - the stability condition for discrete-time systems.
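The mapping can be checked numerically (a small sketch; the sampling period and pole locations are arbitrary choices):

```python
import cmath

T = 0.1                                      # assumed sampling period
s_pole = -2 + 3j                             # left-half-plane pole: stable
z_pole = cmath.exp(s_pole * T)               # mapped via z = e^{sT}
print(abs(z_pole) < 1)                       # True: inside the unit circle

s_unstable = 0.5 + 3j                        # right-half-plane pole: unstable
print(abs(cmath.exp(s_unstable * T)) < 1)    # False: outside the unit circle
```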
In digital signal processing, the transfer function $H(z)$ characterizes filters. The locations of the poles and zeros of $H(z)$ relative to the unit circle shape the frequency response $H(e^{i\omega})$, obtained by evaluating $H(z)$ on the unit circle - the discrete-time analogue of evaluating $H(s)$ on the imaginary axis.
In machine learning, convolutional layers compute convolutions, so by the convolution theorem they act as pointwise multiplication in the frequency domain; this Laplace/Fourier framework underlies frequency-domain analyses of neural networks (e.g., NTK spectral analysis).