The Riemann integral requires a bounded function on a bounded interval. Many functions encountered in analysis, probability, and signal processing fail one or both conditions. Improper integrals extend the theory to these cases by replacing the integral with a limit.

Two Types of Improper Integrals

Type 1: Unbounded interval. The domain of integration extends to $\pm\infty$.

$$\int_a^\infty f(x)\,dx = \lim_{b\to\infty} \int_a^b f(x)\,dx$$

$$\int_{-\infty}^b f(x)\,dx = \lim_{a\to-\infty} \int_a^b f(x)\,dx$$

For a doubly infinite integral: $\int_{-\infty}^\infty f = \int_{-\infty}^c f + \int_c^\infty f$, where each piece must converge separately (for any choice of $c$).

Type 2: Unbounded integrand. $f$ has a singularity at an endpoint or interior point.

$$\int_a^b f(x)\,dx = \lim_{\varepsilon \to 0^+} \int_{a+\varepsilon}^b f(x)\,dx \quad \text{(singularity at } a\text{)}$$

For a singularity at an interior point $c \in (a,b)$:

$$\int_a^b f = \lim_{\varepsilon\to 0^+}\int_a^{c-\varepsilon} f + \lim_{\delta\to 0^+}\int_{c+\delta}^b f,$$

where both limits must exist independently.

Convergence means the limit exists and is finite. Divergence means otherwise (limit is $\pm\infty$ or does not exist).

The p-Integrals: Canonical Examples

Type 1 p-integral.

$$\int_1^\infty x^{-p}\,dx = \lim_{b\to\infty}\int_1^b x^{-p}\,dx.$$

For $p \neq 1$: $\int_1^b x^{-p}\,dx = \frac{b^{1-p}-1}{1-p}$.

  • If $p > 1$: $b^{1-p} \to 0$, so the integral converges to $\frac{1}{p-1}$.
  • If $p < 1$: $b^{1-p} \to \infty$, diverges.
  • If $p = 1$: $\int_1^b x^{-1}\,dx = \ln b \to \infty$, diverges.

Theorem. $\int_1^\infty x^{-p}\,dx$ converges if and only if $p > 1$.
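The trichotomy above is easy to sanity-check numerically from the closed form of the partial integral. A minimal Python sketch (the helper name `p_integral_to` is illustrative, not a standard function):

```python
from math import log

def p_integral_to(b, p):
    """Closed form of the partial integral of x**(-p) over [1, b]."""
    if p == 1.0:
        return log(b)
    return (b ** (1 - p) - 1) / (1 - p)

# p > 1: partial integrals settle near 1/(p - 1)
print([p_integral_to(10.0 ** k, 2.0) for k in (2, 4, 6)])
# p = 1 and p < 1: partial integrals grow without bound
print([p_integral_to(10.0 ** k, 1.0) for k in (2, 4, 6)])
print([p_integral_to(10.0 ** k, 0.5) for k in (2, 4, 6)])
```

Pushing $b$ through $10^2, 10^4, 10^6$ shows the $p = 2$ row stabilising at $1 = \frac{1}{p-1}$ while the other two rows keep growing.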

Type 2 p-integral.

$$\int_0^1 x^{-p}\,dx = \lim_{\varepsilon\to 0^+}\int_\varepsilon^1 x^{-p}\,dx.$$

For $p \neq 1$: $\int_\varepsilon^1 x^{-p}\,dx = \frac{1 - \varepsilon^{1-p}}{1-p}$.

  • If $p < 1$: $\varepsilon^{1-p} \to 0$, converges to $\frac{1}{1-p}$.
  • If $p > 1$: $\varepsilon^{1-p} \to \infty$, diverges.
  • If $p = 1$: $\ln(1) - \ln(\varepsilon) = -\ln(\varepsilon) \to \infty$, diverges.

Theorem. $\int_0^1 x^{-p}\,dx$ converges if and only if $p < 1$.

The boundary $p = 1$ is the critical case for both, but they diverge on opposite sides. This asymmetry is worth internalising.
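The same numerical check works at the other endpoint, with the roles of $p > 1$ and $p < 1$ swapped. A minimal sketch mirroring the Type 1 computation (the helper name is again illustrative):

```python
from math import log

def p_integral_from(eps, p):
    """Closed form of the partial integral of x**(-p) over [eps, 1]."""
    if p == 1.0:
        return -log(eps)
    return (1 - eps ** (1 - p)) / (1 - p)

# p < 1: partial integrals settle near 1/(1 - p)
print([p_integral_from(10.0 ** -k, 0.5) for k in (2, 4, 8)])
# p = 1 and p > 1: partial integrals blow up as eps shrinks
print([p_integral_from(10.0 ** -k, 1.0) for k in (2, 4, 8)])
print([p_integral_from(10.0 ** -k, 2.0) for k in (2, 4, 8)])
```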

Convergence Tests

Comparison Test

Theorem. Suppose $0 \leq f(x) \leq g(x)$ for all $x \geq a$.

  • If $\int_a^\infty g$ converges, then $\int_a^\infty f$ converges.
  • If $\int_a^\infty f$ diverges, then $\int_a^\infty g$ diverges.

Proof. $G(b) = \int_a^b g$ is increasing in $b$ (since $g \geq 0$) and bounded above by $\int_a^\infty g$, which converges by hypothesis. Because $0 \leq f \leq g$, $F(b) = \int_a^b f$ is also increasing and satisfies $F(b) \leq G(b)$, so it is bounded above. An increasing function that is bounded above has a finite limit as $b \to \infty$, so $\int_a^\infty f$ converges. The second statement is the contrapositive of the first. $\blacksquare$

The same statement holds for Type 2 integrals by an analogous monotone limit argument.
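A concrete instance: for $x \geq 1$ we have $x^2 \geq x$, hence $0 \leq e^{-x^2} \leq e^{-x}$, and $\int_1^\infty e^{-x}\,dx = e^{-1}$ converges, so $\int_1^\infty e^{-x^2}\,dx$ converges even though the integrand has no elementary antiderivative. A rough numerical sketch (the `midpoint` quadrature helper is ad hoc, for illustration only):

```python
from math import exp

def midpoint(f, a, b, n=100000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: exp(-x * x)  # dominated on [1, oo) by g ...
g = lambda x: exp(-x)      # ... whose tail integral is exp(-1)

for b in (2.0, 5.0, 10.0):
    assert midpoint(f, 1.0, b) <= midpoint(g, 1.0, b) <= exp(-1)
```

The partial integrals of $f$ stay squeezed below $e^{-1}$ for every $b$, which is exactly the boundedness the proof exploits.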

Limit Comparison Test

Theorem. Let $f, g > 0$ on $[a,\infty)$ and suppose $\lim_{x\to\infty} f(x)/g(x) = L \in (0,\infty)$. Then $\int_a^\infty f$ and $\int_a^\infty g$ either both converge or both diverge.

Proof. From the limit, there exists $M \geq a$ such that $\frac{L}{2}\,g(x) \leq f(x) \leq 2L\,g(x)$ for all $x \geq M$. Apply the comparison test in both directions on $[M,\infty)$; the integrals over $[a,M]$ are ordinary finite integrals and do not affect convergence. $\blacksquare$

The boundary cases $L = 0$ and $L = \infty$ give one-sided conclusions: $L = 0$ means $\int g$ convergent implies $\int f$ convergent; $L = \infty$ means $\int f$ convergent implies $\int g$ convergent.
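For example, $f(x) = \frac{1}{x^2+x}$ and $g(x) = \frac{1}{x^2}$ satisfy $f/g \to 1$, so the two integrals over $[1,\infty)$ converge together; here both even have closed forms ($\ln 2$ and $1$ respectively). A quick check in Python using those closed forms:

```python
from math import log

f = lambda x: 1.0 / (x * x + x)
g = lambda x: 1.0 / (x * x)

# the ratio f/g = 1/(1 + 1/x) tends to L = 1
ratio_at_large_x = f(1e6) / g(1e6)

# closed forms of the partial integrals over [1, b]
F = lambda b: log(2.0) - log((b + 1.0) / b)  # partial fractions: 1/(x^2+x) = 1/x - 1/(x+1)
G = lambda b: 1.0 - 1.0 / b

print(ratio_at_large_x, F(1e9), G(1e9))  # F -> ln 2, G -> 1
```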

Absolute and Conditional Convergence

Definition. $\int_a^\infty f$ is absolutely convergent if $\int_a^\infty |f(x)|\,dx < \infty$. It is conditionally convergent if it converges but $\int_a^\infty |f|$ diverges.

Theorem. Absolute convergence implies convergence.

Proof. Write $f = f^+ - f^-$ where $f^+(x) = \max(f(x),0)$ and $f^-(x) = \max(-f(x),0)$. Both are non-negative, and $f^+ \leq |f|$, $f^- \leq |f|$. So $\int f^+$ and $\int f^-$ both converge by comparison with $\int |f|$. Then $\int f = \int f^+ - \int f^-$ converges. $\blacksquare$

The converse fails. The prototypical conditionally convergent improper integral is $\int_1^\infty \frac{\sin x}{x}\,dx$, which converges (see Dirichlet’s test below) but $\int_1^\infty \frac{|\sin x|}{x}\,dx = \infty$.
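The contrast is visible numerically: partial integrals of $\frac{\sin x}{x}$ settle down, while partial integrals of $\frac{|\sin x|}{x}$ track $\frac{2}{\pi}\ln b$ and keep growing. A rough sketch with an ad hoc midpoint rule (illustrative only; the tolerances are deliberately loose):

```python
from math import sin, fabs

def midpoint(f, a, b, n=200000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

osc = lambda x: sin(x) / x
pos = lambda x: fabs(sin(x)) / x

I1, I2 = midpoint(osc, 1.0, 500.0), midpoint(osc, 1.0, 1000.0)  # stabilise
A1, A2 = midpoint(pos, 1.0, 100.0), midpoint(pos, 1.0, 1000.0)  # keep growing
print(I1, I2, A1, A2)
```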

Dirichlet’s Test

Theorem. Let $f$ be continuously differentiable with $f(x) \to 0$ monotonically, and let $g$ be continuous with $\left|\int_a^b g(x)\,dx\right| \leq M$ for all $b \geq a$. Then $\int_a^\infty f(x)g(x)\,dx$ converges.

This applies to $\int_1^\infty \frac{\sin x}{x}\,dx$: take $f(x) = 1/x$ (decreasing to 0) and $g(x) = \sin x$ (bounded antiderivative $-\cos x$).

Abel’s Test

Theorem. If $\int_a^\infty g$ converges and $f$ is bounded and monotone, then $\int_a^\infty fg$ converges.

Abel’s test is useful when $f$ does not go to zero but remains bounded.
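As a sketch: take $g(x) = \frac{\sin x}{x}$, whose integral over $[1,\infty)$ converges by Dirichlet’s test, and $f(x) = \arctan x$, which is increasing and bounded by $\pi/2$ but does not tend to zero. Abel’s test then gives convergence of $\int_1^\infty \arctan(x)\,\frac{\sin x}{x}\,dx$, and the partial integrals below are consistent with that (ad hoc midpoint quadrature, loose tolerances):

```python
from math import atan, sin

def midpoint(f, a, b, n=200000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

prod = lambda x: atan(x) * sin(x) / x  # bounded monotone factor times convergent-integral factor

vals = [midpoint(prod, 1.0, b) for b in (200.0, 400.0, 800.0)]
print(vals)  # successive partial integrals differ by less and less
```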

Cauchy Principal Value

When a doubly infinite integral or an integral with an interior singularity diverges in the ordinary sense, one may still assign a finite value by a symmetric limiting process.

$$\text{P.V.}\int_{-\infty}^\infty f(x)\,dx = \lim_{R\to\infty}\int_{-R}^R f(x)\,dx$$

$$\text{P.V.}\int_a^b f(x)\,dx = \lim_{\varepsilon\to 0^+}\left(\int_a^{c-\varepsilon} f + \int_{c+\varepsilon}^b f\right) \quad (c \text{ a singularity})$$

Example: $\text{P.V.}\int_{-1}^1 \frac{1}{x}\,dx = 0$ by symmetry, even though the integral diverges in the ordinary sense. The principal value discards sign-alternating infinities that cancel. Principal values arise naturally in complex analysis, where contour integrals evaluated via the residue theorem often yield them directly.
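Both phenomena are visible in closed form for $\int_{-1}^1 \frac{dx}{x}$: the two one-sided pieces are $\ln\varepsilon$ and $-\ln\delta$, so the symmetric limit gives $0$ while asymmetric excision rates give other values. A short check (the helper name is illustrative):

```python
from math import log

def excised_sum(eps, delta):
    """Closed form of the integral of 1/x over [-1, -eps] plus over [delta, 1]."""
    return log(eps) - log(delta)

print(excised_sum(1e-6, 1e-6))  # symmetric excision: 0 (the principal value)
print(excised_sum(1e-6, 2e-6))  # delta = 2*eps: -ln 2, a different "value"
```

This is exactly why the ordinary definition demands that each piece converge separately: otherwise the answer depends on how the singularity is approached.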

The Gamma Function

Definition. For $s > 0$:

$$\Gamma(s) = \int_0^\infty x^{s-1}e^{-x}\,dx.$$

This is both a Type 1 integral (upper limit $\infty$) and, for $s < 1$, a Type 2 integral (integrand $\sim x^{s-1}$ blows up at $0$). Convergence: near $0$, $x^{s-1}e^{-x} \sim x^{s-1}$, which is integrable iff $s - 1 > -1$, i.e., $s > 0$. Near $\infty$, exponential decay dominates any power, so the tail converges for all $s > 0$.

Theorem. $\Gamma(s+1) = s\,\Gamma(s)$ for $s > 0$.

Proof. Integrate by parts with $u = x^s$ and $dv = e^{-x}dx$:

$$\Gamma(s+1) = \int_0^\infty x^s e^{-x}\,dx = \bigl[-x^s e^{-x}\bigr]_0^\infty + s\int_0^\infty x^{s-1}e^{-x}\,dx.$$

The boundary term vanishes (at $0$: $x^s \to 0$; at $\infty$: $e^{-x}$ dominates). The remaining integral is $s\,\Gamma(s)$. $\blacksquare$

Corollary. $\Gamma(1) = \int_0^\infty e^{-x}\,dx = 1$, so by induction $\Gamma(n) = (n-1)!$ for positive integers $n$.

The Gamma function is the unique log-convex extension of the factorial to the positive reals (Bohr-Mollerup theorem), and extends to a meromorphic function on $\mathbb{C}$ with poles at $0, -1, -2, \ldots$.

Special value. $\Gamma(1/2) = \sqrt{\pi}$, which follows from the Gaussian integral $\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}$ via the substitution $x^2 = t$.
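Python’s standard library exposes $\Gamma$ as `math.gamma`, which makes the recursion, the factorial values, and the special value easy to confirm; the truncated quadrature at the end is an ad hoc check that the defining integral really produces $\Gamma(3) = 2$ (truncating at $x = 50$, where the tail is negligible):

```python
from math import gamma, factorial, sqrt, pi, exp, isclose

# Gamma interpolates the factorial: Γ(n) = (n-1)!
for n in range(1, 10):
    assert isclose(gamma(n), factorial(n - 1))

# the recursion Γ(s+1) = s Γ(s) at an arbitrary positive argument
s = 2.7
assert isclose(gamma(s + 1), s * gamma(s))

# the special value Γ(1/2) = sqrt(pi)
assert isclose(gamma(0.5), sqrt(pi))

# crude check of the defining integral for s = 3, truncated at x = 50
def midpoint(f, a, b, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

approx = midpoint(lambda x: x * x * exp(-x), 0.0, 50.0)
assert abs(approx - 2.0) < 1e-4  # Γ(3) = 2
```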

The Beta Function

Definition. For $p, q > 0$:

$$B(p,q) = \int_0^1 x^{p-1}(1-x)^{q-1}\,dx.$$

Convergence: near $0$, integrand $\sim x^{p-1}$, integrable iff $p > 0$; near $1$, integrand $\sim (1-x)^{q-1}$, integrable iff $q > 0$.

Theorem (Beta-Gamma relation). $B(p,q) = \dfrac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}$.

Proof sketch. Compute $\Gamma(p)\Gamma(q)$ as a double integral $\int_0^\infty\int_0^\infty x^{p-1}y^{q-1}e^{-(x+y)}\,dx\,dy$, change variables to $x = ts$, $y = t(1-s)$ with Jacobian $t$, and separate the $t$- and $s$-integrals to get $\Gamma(p+q)\cdot B(p,q)$.
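The relation can also be confirmed numerically. With $p, q > 1$ the integrand is bounded on $[0,1]$, so even naive quadrature suffices (ad hoc midpoint helper; the parameters $p = 2.5$, $q = 3.5$ are chosen arbitrarily):

```python
from math import gamma

def midpoint(f, a, b, n=100000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

p, q = 2.5, 3.5  # both > 1, so no endpoint singularity
beta_numeric = midpoint(lambda x: x ** (p - 1) * (1 - x) ** (q - 1), 0.0, 1.0)
beta_exact = gamma(p) * gamma(q) / gamma(p + q)
print(beta_numeric, beta_exact)
```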

The Beta function computes the normalisation constant for the Beta distribution and appears in Bayesian conjugate priors.

Examples

Probability distributions. The expected value of a continuous random variable with density $p(x)$ is $E[X] = \int_{-\infty}^\infty x\,p(x)\,dx$, itself an improper integral. Many densities have support on $(0,\infty)$ or all of $\mathbb{R}$: Gaussian, exponential, Gamma, Cauchy. Verifying that $p$ is a valid density requires confirming $\int p = 1$, which is again an improper integral convergence question.
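The Cauchy distribution is the standard cautionary example here: its density $p(x) = \frac{1}{\pi(1+x^2)}$ integrates to $1$, but $E[X]$ does not exist because $\int |x|\,p(x)\,dx$ diverges (the symmetric limit, i.e. the principal value, is $0$, but the two halves diverge separately). Both facts follow from closed-form partial integrals (function names illustrative):

```python
from math import atan, log, pi

# total Cauchy mass over [-R, R]: (2/pi) * arctan(R)  -> 1
def mass(R):
    return 2.0 * atan(R) / pi

# integral of |x| p(x) over [-R, R]: ln(1 + R^2) / pi  -> infinity
def abs_first_moment(R):
    return log(1.0 + R * R) / pi

print(mass(1e8), abs_first_moment(1e4), abs_first_moment(1e8))
```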

Laplace transforms. $\mathcal{L}\{f\}(s) = \int_0^\infty f(t)e^{-st}\,dt$. Convergence requires the real part of $s$ to exceed the exponential growth rate of $f$. The region of convergence is a half-plane; this is directly an improper integral convergence statement.
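A concrete half-plane: for $f(t) = e^{2t}$ the transform integral converges exactly when $\operatorname{Re}(s) > 2$, with value $\frac{1}{s-2}$. A sketch with truncated numerical integrals (ad hoc midpoint helper; truncation points chosen so the convergent case has negligible tail):

```python
from math import exp

def midpoint(f, a, b, n=200000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda t: exp(2.0 * t)  # exponential growth rate 2

# s = 3 > 2: the transform integral converges, to 1/(s - 2) = 1
conv = midpoint(lambda t: f(t) * exp(-3.0 * t), 0.0, 30.0)

# s = 1.5 < 2: partial integrals explode as the truncation point grows
div_20 = midpoint(lambda t: f(t) * exp(-1.5 * t), 0.0, 20.0)
div_30 = midpoint(lambda t: f(t) * exp(-1.5 * t), 0.0, 30.0)
print(conv, div_20, div_30)
```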

Fourier transforms. $\hat{f}(\xi) = \int_{-\infty}^\infty f(x)e^{-2\pi i \xi x}\,dx$. For $f \in L^1(\mathbb{R})$ (absolutely integrable), $\hat{f}$ is well-defined everywhere. Extending to $L^2$ (square-integrable) functions requires a limiting argument: the Fourier transform of an $L^2$ function is defined as a limit of transforms of $L^1 \cap L^2$ functions. This is the content of the Plancherel theorem, and it relies on absolute convergence theory.
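A worked $L^1$ example with this convention: $f(x) = e^{-|x|}$ is absolutely integrable and has transform $\hat{f}(\xi) = \frac{2}{1 + 4\pi^2\xi^2}$. Since $f$ is even, the transform reduces to a cosine integral, which a truncated quadrature reproduces (ad hoc midpoint helper; $\xi = 0.3$ chosen arbitrarily):

```python
from math import exp, cos, pi

def midpoint(f, a, b, n=200000):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

xi = 0.3
# evenness: transform = 2 * integral of e^{-x} cos(2 pi xi x) over [0, oo), truncated at 40
approx = 2.0 * midpoint(lambda x: exp(-x) * cos(2.0 * pi * xi * x), 0.0, 40.0)
exact = 2.0 / (1.0 + (2.0 * pi * xi) ** 2)
print(approx, exact)
```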

