In analysis, individual inequalities or estimates are usually not so useful *per se* (though there are some notable exceptions, such as the Sobolev embedding inequality, or the Cauchy-Schwarz inequality), but are instead representative examples of a larger useful class of estimates. (cf. Gowers' "Two cultures of mathematics".)

In this particular case, the classical Hardy inequality exemplifies two useful principles; firstly, that an inverse power weight such as $1/|x|^\alpha$ is "dominated" in some $L^p$ sense by the corresponding derivative $|\nabla|^\alpha$ (or, to put it somewhat facetiously, $\frac{1}{x} = O(\frac{d}{dx} )$; compare with the uncertainty principle $dx \cdot d\xi \gtrsim 1$); and secondly, that a maximal average of a function is often dominated in an $L^p$ sense by the function itself. The first principle is captured by a number of higher-dimensional generalisations of Hardy's inequality (which typically take a shape such as

$$\left\| \frac{f}{|x|^\alpha} \right\| _ {L^p({\bf R}^n)} \leq C_{p,\alpha,n} \| |\nabla|^\alpha f \|_{L^p({\bf R}^n)}$$

under suitable assumptions on $p,n,\alpha,f$) which are fundamental to the analysis of any PDE that involves singular potentials or weights such as $\frac{1}{|x|^\alpha}$. The second principle is captured by a different family of generalisations of Hardy's inequality, namely the maximal inequalities, for which the Hardy-Littlewood maximal inequality is the model example. This inequality is the foundation of a large part of real-variable harmonic analysis, and in particular underpins the analysis of singular integral operators such as the Hilbert transform, or of pseudo-differential operators.
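For concreteness, the model inequality in question can be stated as follows (a standard formulation, with $Mf(x) := \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy$ denoting the centred Hardy-Littlewood maximal function):

$$\| Mf \|_{L^p({\bf R}^n)} \leq C_{p,n} \| f \|_{L^p({\bf R}^n)}$$

for $1 < p \leq \infty$, with the endpoint $p=1$ replaced by the weak-type bound $|\{ x \in {\bf R}^n: Mf(x) > \lambda \}| \leq \frac{C_n}{\lambda} \| f \|_{L^1({\bf R}^n)}$ for all $\lambda > 0$.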

There are two nice features of Hardy's original inequality that are also worth pointing out. The first is that it is an $L^p$ inequality with an explicit optimal constant, which is something of a rarity in analysis (there are perhaps only a dozen or so other such sharp inequalities known for the fundamental operators in analysis). The second is that the inequality is never actually attained with equality (except in the trivial case when the function vanishes identically); one can construct sequences of near-extremisers that come arbitrarily close to attaining equality, but they do not converge to a limit that actually attains it. (The function $f = x^{-1/p}$ formally attains equality in Theorem 2, but there is a logarithmic divergence on both sides.) This is perhaps one of the simplest examples of such a situation, and one well worth studying if one is interested in using variational methods to find optimal constants for other inequalities, as one needs a good intuition for when optimisers can be expected to exist.
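To see concretely how the sharp constant is approached but never attained, consider the one-dimensional Hardy inequality $\| \frac{1}{x} \int_0^x f(t)\ dt \|_{L^p(0,\infty)} \leq \frac{p}{p-1} \| f \|_{L^p(0,\infty)}$ together with the truncated powers $f_\varepsilon(x) := x^{-1/p+\varepsilon} 1_{(0,1]}(x)$ for small $\varepsilon > 0$ (a standard computation; the particular choice of truncation is for illustration only). One has

$$\| f_\varepsilon \|_{L^p(0,\infty)}^p = \int_0^1 x^{-1+\varepsilon p}\ dx = \frac{1}{\varepsilon p} < \infty,$$

while for $0 < x \leq 1$

$$\frac{1}{x} \int_0^x t^{-1/p+\varepsilon}\ dt = \frac{x^{-1/p+\varepsilon}}{1 - \frac{1}{p} + \varepsilon},$$

so (discarding the positive contribution of the region $x > 1$ to the left-hand side) the ratio of the two sides is at least $(1 - \frac{1}{p} + \varepsilon)^{-1}$, which tends to the sharp constant $\frac{p}{p-1}$ as $\varepsilon \to 0^+$. On the other hand, the $L^p$ mass of the normalised functions $f_\varepsilon / \| f_\varepsilon \|_{L^p}$ concentrates at the origin as $\varepsilon \to 0^+$, so this family converges weakly to zero rather than to any extremiser; the formal limit $x^{-1/p}$ diverges logarithmically in $L^p$ near both $0$ and $\infty$.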