10 Inner Product Spaces
10.1 The Geometric Deficiency of Vector Spaces
A vector space provides algebraic structure: we can add vectors and scale them. The theory developed thus far—spanning, linear independence, dimension, linear maps, determinants, eigenvalues—proceeds entirely without reference to geometric notions like distance, angle, or orthogonality.
Consider \mathbb{R}^2 as an abstract vector space over \mathbb{R}. We have bases, we have linear maps, we can solve systems of equations. But we cannot measure the length of a vector. We cannot determine whether two vectors are perpendicular. We cannot speak of “the closest point in a subspace to a given vector.” These are not deficiencies of our theory—they are features we have deliberately omitted.
Euclidean space \mathbb{R}^n admits additional structure: the dot product \langle x, y \rangle = \sum_{i=1}^{n} x_i y_i. This operation measures both magnitude and angle. The length of x is \|x\| = \sqrt{\langle x, x \rangle}. Two vectors are orthogonal when \langle x, y \rangle = 0. The angle \theta between nonzero vectors satisfies \cos \theta = \frac{\langle x, y \rangle}{\|x\| \|y\|}.
Many vector spaces naturally admit analogous structure. The space C([a,b]) of continuous functions on [a,b] carries an inner product \langle f, g \rangle = \int_a^b f(x) g(x) \, dx. The norm \|f\| = \sqrt{\langle f, f \rangle} is the L^2 norm, essential in Fourier analysis and quantum mechanics. Two functions are orthogonal when \int f g = 0, a condition arising naturally in the study of orthogonal polynomials, Fourier series, and approximation theory.
The complex vector space \mathbb{C}^n requires modification: the standard inner product is \langle z, w \rangle = \sum_{i=1}^{n} z_i \overline{w_i}, where \overline{w_i} denotes complex conjugation. This conjugate-linearity in the second argument is necessary for \langle z, z \rangle to be real and positive, ensuring \|z\|^2 = \langle z, z \rangle defines a genuine norm.
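A quick numerical sketch (NumPy; not from the text) makes the need for conjugation concrete: the naive bilinear form \sum z_i w_i assigns the vector (i, 0) a "squared length" of -1, while the conjugated form always returns a positive real number.

```python
import numpy as np

z = np.array([1 + 1j, 2 - 1j])

naive = np.sum(z * z)           # sum z_i^2: complex in general, useless as a length
good = np.sum(z * np.conj(z))   # sum |z_i|^2: real and positive

# the naive "squared length" of (i, 0) is i^2 = -1
bad_case = np.sum(np.array([1j, 0]) ** 2)
```

Here `good` evaluates to |1+i|^2 + |2-i|^2 = 2 + 5 = 7, while `naive` is a non-real complex number.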
This chapter develops the theory of inner product spaces: vector spaces equipped with an operation measuring magnitude and angle. We establish basic properties, introduce the induced norm and metric, study orthogonality and angles, and prove the Riesz representation theorem. The theory unifies the geometry of Euclidean space with the analysis of function spaces, providing the foundation for orthogonal projections, spectral theory, and quadratic forms in subsequent chapters.
10.2 Inner Products: Definition and Examples
We work over \mathbb{F} \in \{\mathbb{R}, \mathbb{C}\} and give a single unified definition.
Definition 10.1 (Inner product) Let \mathcal{V} be a vector space over \mathbb{F}. An inner product on \mathcal{V} is a function \langle \cdot, \cdot \rangle : \mathcal{V} \times \mathcal{V} \to \mathbb{F} satisfying, for all u, v, w \in \mathcal{V} and c, d \in \mathbb{F}:
(Linearity in first argument) \langle cu + dv, w \rangle = c\langle u, w \rangle + d\langle v, w \rangle
(Conjugate symmetry) \langle u, v \rangle = \overline{\langle v, u \rangle}
(Positive definiteness) \langle v, v \rangle > 0 for all v \neq 0
Conjugate symmetry forces \langle v, v \rangle = \overline{\langle v, v \rangle}, so \langle v, v \rangle \in \mathbb{R} for all v. Positive definiteness then makes sense regardless of the field. Combining linearity in the first argument with conjugate symmetry gives conjugate-linearity in the second argument: \langle u, cv + dw \rangle = \overline{\langle cv + dw, u \rangle} = \overline{c\langle v,u\rangle + d\langle w,u\rangle} = \overline{c}\langle u, v \rangle + \overline{d}\langle u, w \rangle. Over a complex space, an inner product is therefore a Hermitian positive definite sesquilinear form—linear in the first slot, conjugate-linear in the second.
Remark (real case). When \mathbb{F} = \mathbb{R}, complex conjugation is trivial: \overline{c} = c for all c \in \mathbb{R}. Conjugate symmetry reduces to ordinary symmetry \langle u, v \rangle = \langle v, u \rangle, and conjugate-linearity in the second argument reduces to ordinary linearity. The inner product is then a symmetric positive definite bilinear form. Every statement in this chapter holds over \mathbb{R} as the special case \overline{c} = c.
Definition 10.2 (Inner product space) An inner product space is a vector space \mathcal{V} equipped with an inner product \langle \cdot, \cdot \rangle.
10.2.1 Examples
1. Euclidean space \mathbb{R}^n. The standard inner product is \langle x, y \rangle = \sum_{i=1}^{n} x_i y_i = x^T y. Symmetry and linearity are immediate. Positive definiteness: \langle x, x \rangle = \sum x_i^2 > 0 for x \neq 0.
2. Complex Euclidean space \mathbb{C}^n. The standard inner product is \langle z, w \rangle = \sum_{i=1}^{n} z_i \overline{w_i}. Conjugate symmetry: \overline{\langle w, z \rangle} = \overline{\sum w_i \overline{z_i}} = \sum \overline{w_i} z_i = \langle z, w \rangle. Positive definiteness: \langle z, z \rangle = \sum |z_i|^2 > 0 for z \neq 0.
3. Weighted inner products on \mathbb{R}^n. Given positive weights w_1, \ldots, w_n > 0, \langle x, y \rangle_w = \sum_{i=1}^{n} w_i x_i y_i emphasizes certain coordinates over others.
4. L^2 inner product on C([a,b]). For continuous f, g : [a,b] \to \mathbb{R}, \langle f, g \rangle = \int_a^b f(x) g(x) \, dx. Positive definiteness: if f \neq 0, then f is nonzero on some open interval, so \int f^2 > 0.
5. Weighted L^2 inner product. Given a positive continuous weight w : [a,b] \to (0,\infty), \langle f, g \rangle_w = \int_a^b f(x) g(x) w(x) \, dx. This arises in the theory of orthogonal polynomials (Legendre, Chebyshev, Hermite).
6. Complex L^2 inner product on C([a,b], \mathbb{C}). For complex-valued continuous functions, \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx, essential in Fourier analysis and quantum mechanics.
7. Frobenius inner product on M_{m \times n}(\mathbb{R}). For matrices A, B \in M_{m \times n}(\mathbb{R}), \langle A, B \rangle = \operatorname{tr}(A^T B) = \sum_{i,j} a_{ij} b_{ij}. For complex matrices, use \langle A, B \rangle = \operatorname{tr}(A^* B) where A^* is the conjugate transpose.
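Several of these examples can be spot-checked numerically. The sketch below (helper names are ours) verifies conjugate symmetry for Example 2, the orthogonality of the Legendre polynomials P_1(x) = x and P_2(x) = (3x^2 - 1)/2 under the L^2 inner product of Example 4 via a midpoint rule, and the trace formula of Example 7.

```python
import numpy as np

def ip_complex(z, w):
    # standard inner product on C^n, conjugate in the SECOND slot as in the text
    return np.sum(z * np.conj(w))

# Example 2: conjugate symmetry <z, w> = conj(<w, z>)
z = np.array([1 + 2j, 3j])
w = np.array([2 - 1j, 1 + 1j])
sym_ok = np.isclose(ip_complex(z, w), np.conj(ip_complex(w, z)))

# Example 7: Frobenius inner product, tr(A^T B) = elementwise sum (real case)
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
frob_trace = np.trace(A.T @ B)
frob_sum = np.sum(A * B)

# Example 4: L^2 on [-1, 1] by a midpoint rule; P_1 and P_2 are orthogonal
n = 200_000
x = -1 + (np.arange(n) + 0.5) * (2 / n)
l2 = np.sum(x * (3 * x**2 - 1) / 2) * (2 / n)   # ~ 0
```

The integrand x(3x^2 - 1)/2 is odd, so the integral over [-1, 1] vanishes; the midpoint sum reproduces this to floating-point accuracy.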
10.2.2 Non-Examples
Not every symmetric bilinear form is an inner product. Positive definiteness is the essential constraint.
1. Minkowski inner product on \mathbb{R}^4. In special relativity, \langle u, v \rangle = u_0 v_0 - u_1 v_1 - u_2 v_2 - u_3 v_3 is symmetric and bilinear but not positive definite: \langle (1,1,0,0), (1,1,0,0) \rangle = 0 despite (1,1,0,0) \neq 0. This Lorentzian metric is central to the geometry of spacetime but does not satisfy our definition.
2. Degenerate bilinear forms. On \mathbb{R}^2, the form \langle (x_1, y_1), (x_2, y_2) \rangle = x_1 x_2 is symmetric and bilinear but annihilates the nonzero vector (0,1): \langle (0,1),(0,1)\rangle = 0. Such forms are degenerate—they fail positive definiteness.
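The failure of positive definiteness in Non-Example 1 is easy to exhibit numerically: writing the Minkowski form as u^T \eta v with \eta = diag(1, -1, -1, -1), the nonzero lightlike vector (1,1,0,0) has "squared length" zero.

```python
import numpy as np

eta = np.diag([1., -1., -1., -1.])

def minkowski(u, v):
    # symmetric bilinear, but NOT positive definite
    return u @ eta @ v

u = np.array([1., 1., 0., 0.])
null_length = minkowski(u, u)   # 0 although u != 0

timelike = minkowski(np.array([1., 0., 0., 0.]), np.array([1., 0., 0., 0.]))   # +1
spacelike = minkowski(np.array([0., 1., 0., 0.]), np.array([0., 1., 0., 0.]))  # -1
```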
10.3 Basic Properties
Theorem 10.1 Let \mathcal{V} be an inner product space. For all u, v, w \in \mathcal{V} and c \in \mathbb{F}:
(a) \langle 0, v \rangle = \langle v, 0 \rangle = 0
(b) \langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle
(c) \langle u, cv \rangle = \overline{c}\langle u, v \rangle
(d) If \langle u, v \rangle = 0 for all v \in \mathcal{V}, then u = 0
Proof.
(a) By linearity in the first argument, \langle 0, v \rangle = \langle 0 \cdot v, v \rangle = 0\langle v,v\rangle = 0. By conjugate symmetry, \langle v, 0 \rangle = \overline{\langle 0, v \rangle} = \overline{0} = 0.
(b) and (c) follow directly from conjugate-linearity in the second argument, derived in the remarks following Definition 10.1.
(d) Taking v = u gives \langle u, u \rangle = 0. By positive definiteness, u = 0. \square
Property (d) shows the inner product is non-degenerate: the only vector orthogonal to everything is the zero vector. This distinguishes inner products from degenerate bilinear forms.
10.4 The Induced Norm
An inner product induces a notion of length.
Definition 10.3 (Norm induced by inner product) For v \in \mathcal{V}, the norm of v is \|v\| = \sqrt{\langle v, v \rangle}.
Since \langle v, v \rangle \ge 0 always (with equality only at v = 0), the square root is well-defined.
Theorem 10.2 The norm satisfies:
(a) \|v\| \ge 0 with equality if and only if v = 0
(b) \|cv\| = |c| \|v\| for all c \in \mathbb{F}
(c) (Cauchy–Schwarz) |\langle u, v \rangle| \le \|u\| \|v\|
(d) (Triangle inequality) \|u + v\| \le \|u\| + \|v\|
Proof.
(a) Follows from positive definiteness.
(b) Using linearity in the first slot and conjugate-linearity in the second, \|cv\|^2 = \langle cv, cv \rangle = c \langle v, cv \rangle = c\,\overline{c}\,\langle v, v \rangle = |c|^2\|v\|^2, so \|cv\| = |c|\|v\|.
(c) Proved in Theorem 10.3 below.
(d) By conjugate symmetry, \langle v,u\rangle = \overline{\langle u,v\rangle}, so \langle u,v\rangle + \langle v,u\rangle = 2\operatorname{Re}\langle u,v\rangle. Therefore \|u + v\|^2 = \|u\|^2 + 2\operatorname{Re}\langle u, v \rangle + \|v\|^2 \le \|u\|^2 + 2|\langle u, v \rangle| + \|v\|^2 \le (\|u\| + \|v\|)^2, where the last step applies Cauchy–Schwarz. Taking square roots gives the result. \square
Theorem 10.3 (Cauchy–Schwarz inequality) For all u, v \in \mathcal{V}, |\langle u, v \rangle| \le \|u\| \|v\|, with equality if and only if u and v are linearly dependent.
Proof. If v = 0 both sides are zero and equality holds trivially. Assume v \neq 0.
The key idea is to minimise the real function f : \mathbb{R} \to \mathbb{R} defined by f(t) = \|u - tv\|^2. Since this is a squared norm it satisfies f(t) \ge 0 for all t. To handle the complex case cleanly, we first rotate u so that the inner product with v becomes real and non-negative. Write \langle u, v \rangle = |\langle u,v\rangle| e^{i\theta} and replace u by \tilde{u} = e^{-i\theta}u. Since |e^{-i\theta}| = 1 we have \|\tilde{u}\| = \|u\|, and \langle \tilde{u}, v \rangle = e^{-i\theta}\langle u, v \rangle = |\langle u, v\rangle| \ge 0. It therefore suffices to prove the inequality for \tilde{u} and v, i.e., we may assume without loss of generality that \langle u,v\rangle \ge 0 is a non-negative real number.
Now expand f(t) = \|u - tv\|^2 using linearity and conjugate-linearity: f(t) = \|u\|^2 - t\langle v, u\rangle - t\langle u, v\rangle + t^2\|v\|^2 = \|u\|^2 - 2t\langle u, v\rangle + t^2\|v\|^2, where the middle step uses \langle v,u\rangle = \overline{\langle u,v\rangle} = \langle u,v\rangle (since we arranged \langle u,v\rangle \in \mathbb{R}). This is a real quadratic in t that is everywhere non-negative. A non-negative quadratic at^2 + bt + c \ge 0 for all t \in \mathbb{R} has non-positive discriminant b^2 - 4ac \le 0. Here a = \|v\|^2, b = -2\langle u,v\rangle, c = \|u\|^2, giving 4\langle u,v\rangle^2 - 4\|u\|^2\|v\|^2 \le 0, hence \langle u,v\rangle^2 \le \|u\|^2\|v\|^2. Taking square roots and recalling \langle u,v\rangle = |\langle u,v\rangle| (by our normalisation), |\langle u, v\rangle| \le \|u\|\|v\|.
Equality holds if and only if the discriminant is zero, i.e., f has a real root t_0, i.e., \|u - t_0 v\|^2 = 0, i.e., u = t_0 v. Since we replaced u by e^{-i\theta}u, equality in the original problem holds iff e^{-i\theta}u = t_0 v for some real t_0, i.e., u = t_0 e^{i\theta} v, which is exactly linear dependence of u and v over \mathbb{F}. \square
Remark. The minimum of f(t) is attained at t^* = \langle u,v\rangle / \|v\|^2 (the unique zero of f'(t) = -2\langle u,v\rangle + 2t\|v\|^2). Geometrically, t^*v is the orthogonal projection of u onto the line spanned by v, and u - t^*v is the component of u perpendicular to v. The inequality f(t^*) \ge 0 is precisely the statement that this perpendicular component has non-negative squared length.
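The inequality and the projection picture in the remark can be checked numerically. A sketch (helper names ours; note that without the rotation normalisation, the optimal coefficient t^* = \langle u,v\rangle/\|v\|^2 is complex in general):

```python
import numpy as np

def ip(u, v):
    # this chapter's inner product: linear in the first slot,
    # conjugate-linear in the second
    return np.sum(u * np.conj(v))

norm = lambda x: np.sqrt(ip(x, x).real)

rng = np.random.default_rng(0)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

# Cauchy-Schwarz: gap is strictly positive for generic (independent) u, v
cs_gap = norm(u) * norm(v) - abs(ip(u, v))

# minimiser of ||u - t v||^2: the residual u - t*v is orthogonal to v
t_star = ip(u, v) / ip(v, v)
residual = ip(u - t_star * v, v)   # ~ 0

# equality case: w = c v is linearly dependent on v, and the gap closes
w = (2 - 1j) * v
eq_gap = norm(w) * norm(v) - abs(ip(w, v))   # ~ 0
```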
The Cauchy–Schwarz inequality is one of the most important inequalities in mathematics. It appears in probability theory, analysis (where Hölder’s inequality generalises it to L^p spaces), and throughout quantum mechanics, where it underlies the uncertainty principle.
10.5 The Induced Metric and Convergence
The norm converts the inner product space into a metric space.
Definition 10.4 (Metric induced by inner product) The distance between u, v \in \mathcal{V} is d(u, v) = \|u - v\|.
Theorem 10.4 d : \mathcal{V} \times \mathcal{V} \to [0, \infty) is a metric: it is non-negative (with d(u,v) = 0 \iff u = v), symmetric, and satisfies the triangle inequality d(u,w) \le d(u,v) + d(v,w).
Proof. Non-negativity and the equality condition follow from \|u-v\| \ge 0 with equality iff u-v = 0. Symmetry: \|u-v\| = \|{-(v-u)}\| = |-1|\|v-u\| = \|v-u\|. Triangle inequality: \|u-w\| = \|(u-v)+(v-w)\| \le \|u-v\|+\|v-w\| by Theorem 10.2(d). \square
A sequence (v_n) converges to v if \|v_n - v\| \to 0. It is Cauchy if \|v_n - v_m\| \to 0 as n,m \to \infty. The space is complete if every Cauchy sequence converges. Finite-dimensional inner product spaces are always complete. The space C([a,b]) with the L^2 inner product is not complete—its completion is L^2([a,b]), the space of square-integrable functions.
Definition 10.5 (Hilbert space) A Hilbert space is a complete inner product space.
Hilbert spaces are the natural setting for infinite-dimensional analysis. Quantum mechanics models states as vectors in a Hilbert space with observables as self-adjoint operators. Fourier analysis decomposes functions in L^2. Sobolev spaces, essential in partial differential equations, are Hilbert spaces with inner products incorporating derivatives. We work primarily with finite-dimensional spaces, where completeness is automatic; the theory extends to infinite dimensions with appropriate care.
10.6 The Parallelogram Law and Polarization Identity
Inner products satisfy special identities not shared by arbitrary norms.
Theorem 10.5 (Parallelogram law) For all u, v \in \mathcal{V}, \|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2.
Proof. Expanding both squared norms and using \langle u,v\rangle + \langle v,u\rangle = 2\operatorname{Re}\langle u,v\rangle: \|u+v\|^2 = \|u\|^2 + 2\operatorname{Re}\langle u,v\rangle + \|v\|^2, \qquad \|u-v\|^2 = \|u\|^2 - 2\operatorname{Re}\langle u,v\rangle + \|v\|^2. Adding, the cross terms cancel and the result follows. \square
Geometrically: the sum of the squares of the two diagonals of a parallelogram equals twice the sum of the squares of two adjacent sides.
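A numerical sketch (ours) confirms the identity for an induced norm, and shows that the max-norm on \mathbb{R}^2 violates it, so the max-norm cannot arise from any inner product:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=5)
v = rng.normal(size=5)

# parallelogram law for the Euclidean (inner-product) norm
lhs = np.sum((u + v)**2) + np.sum((u - v)**2)
rhs = 2 * np.sum(u**2) + 2 * np.sum(v**2)

# max-norm counterexample with u = (1,0), v = (0,1)
a = np.array([1., 0.])
b = np.array([0., 1.])
inf = lambda x: np.max(np.abs(x))
inf_lhs = inf(a + b)**2 + inf(a - b)**2   # 1 + 1 = 2
inf_rhs = 2 * inf(a)**2 + 2 * inf(b)**2   # 2 + 2 = 4
```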
Theorem 10.6 A norm \|\cdot\| on a vector space \mathcal{V} arises from an inner product if and only if it satisfies the parallelogram law.
We omit the proof, which constructs the inner product from the norm using the polarization identity.
Theorem 10.7 (Polarization identity) In a real inner product space, \langle u, v \rangle = \tfrac{1}{4}\bigl(\|u + v\|^2 - \|u - v\|^2\bigr). In a complex inner product space, \langle u, v \rangle = \tfrac{1}{4}\bigl(\|u + v\|^2 - \|u - v\|^2 + i\|u + iv\|^2 - i\|u - iv\|^2\bigr).
Proof. For the real case, the expansions \|u \pm v\|^2 = \|u\|^2 \pm 2\langle u,v\rangle + \|v\|^2 give \|u+v\|^2 - \|u-v\|^2 = 4\langle u,v\rangle on subtraction.
For the complex case we extract real and imaginary parts separately. The same expansion gives \|u+v\|^2 - \|u-v\|^2 = 4\operatorname{Re}\langle u,v\rangle. For the imaginary part, use conjugate-linearity in the second argument: \langle u, iv\rangle = \overline{i}\langle u,v\rangle = -i\langle u,v\rangle, so \operatorname{Re}\langle u,iv\rangle = \operatorname{Im}\langle u,v\rangle. Applying the same expansion to the pair (u, iv), \|u+iv\|^2 - \|u-iv\|^2 = 4\operatorname{Re}\langle u,iv\rangle = 4\operatorname{Im}\langle u,v\rangle. Multiplying by i and adding both lines: (\|u+v\|^2 - \|u-v\|^2) + i(\|u+iv\|^2 - \|u-iv\|^2) = 4\operatorname{Re}\langle u,v\rangle + 4i\operatorname{Im}\langle u,v\rangle = 4\langle u,v\rangle. \quad\square
The polarization identity shows that the inner product is completely determined by the norm—angle information is encoded entirely in length information.
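The complex polarization identity can be verified directly: the sketch below (helper names ours) recovers \langle u, v \rangle from four norm evaluations.

```python
import numpy as np

def ip(u, v):
    # linear in the first slot, conjugate-linear in the second, as in the text
    return np.sum(u * np.conj(v))

def norm_sq(x):
    return ip(x, x).real

def polarize(u, v):
    # Theorem 10.7, complex case
    return 0.25 * (norm_sq(u + v) - norm_sq(u - v)
                   + 1j * norm_sq(u + 1j * v) - 1j * norm_sq(u - 1j * v))

rng = np.random.default_rng(2)
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
```

As a hand check, with u = 1, v = i in \mathbb{C}: \tfrac14(|1+i|^2 - |1-i|^2 + i|0|^2 - i|2|^2) = \tfrac14(2 - 2 - 4i) = -i = \langle 1, i \rangle.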
10.7 Bilinear and Sesquilinear Forms
Inner products are special cases of more general structures.
Definition 10.6 (Bilinear form) A bilinear form on a real vector space \mathcal{V} is a function B : \mathcal{V} \times \mathcal{V} \to \mathbb{R} linear in each argument: B(cu + dv, w) = cB(u, w) + dB(v, w), \qquad B(u, cv + dw) = cB(u, v) + dB(u, w).
A bilinear form is symmetric if B(u, v) = B(v, u) and positive definite if B(v, v) > 0 for all v \neq 0. An inner product on a real space is precisely a symmetric positive definite bilinear form.
Definition 10.7 (Sesquilinear form) A sesquilinear form on a complex vector space \mathcal{V} is a function S : \mathcal{V} \times \mathcal{V} \to \mathbb{C} linear in the first argument and conjugate-linear in the second: S(cu + dv, w) = cS(u, w) + dS(v, w), \qquad S(u, cv + dw) = \overline{c}S(u, v) + \overline{d}S(u, w).
A sesquilinear form is Hermitian if S(u, v) = \overline{S(v, u)} and positive definite if S(v, v) > 0 for all v \neq 0. An inner product on a complex space is precisely a Hermitian positive definite sesquilinear form.
Forms without positive definiteness arise throughout mathematics and physics. The Minkowski metric of special relativity is a symmetric bilinear form with signature (+,-,-,-). Symplectic forms in Hamiltonian mechanics are skew-symmetric bilinear forms satisfying B(u,v) = -B(v,u). These lie outside inner product theory but share the underlying algebraic framework.
10.8 Matrix Representation of Inner Products
In finite dimensions, inner products are represented by matrices.
Let \mathcal{V} be a finite-dimensional inner product space with basis \mathcal{B} = \{v_1, \ldots, v_n\}. For vectors u = \sum_i a_i v_i and w = \sum_j b_j v_j, \langle u, w \rangle = \Bigl\langle \sum_i a_i v_i,\, \sum_j b_j v_j \Bigr\rangle = \sum_{i,j} a_i \overline{b_j} \langle v_i, v_j \rangle.
Define the Gram matrix G \in M_{n \times n}(\mathbb{F}) by g_{ij} = \langle v_i, v_j \rangle. Then \langle u, w \rangle = [u]_{\mathcal{B}}^T \, G \, \overline{[w]_{\mathcal{B}}}, where \overline{[w]_{\mathcal{B}}} denotes the coordinatewise complex conjugate of [w]_{\mathcal{B}} (trivial over \mathbb{R}). In \mathbb{R}^n with the standard basis, G = I_n and \langle x, y \rangle = x^T y.
The Gram matrix is Hermitian (G^* = G) and positive definite. Conversely, every Hermitian positive definite matrix defines an inner product via the formula above.
Theorem 10.8 The Gram matrix G is positive definite: for all nonzero c \in \mathbb{F}^n, c^* G c > 0.
Proof. Let v = \sum_i \overline{c_i}\, v_i. Then \langle v, v \rangle = \sum_{i,j} \overline{c_i}\, g_{ij}\, c_j = c^* G c. Since \{v_1, \ldots, v_n\} is a basis, c \neq 0 implies v \neq 0, so \langle v,v\rangle > 0 by positive definiteness. (Over \mathbb{R} the conjugates are trivial and v = \sum_i c_i v_i.) \square
The determinant \det(G) measures the squared volume of the parallelepiped spanned by v_1, \ldots, v_n in the geometry of the inner product.
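The sketch below (basis and helper names ours) builds the Gram matrix of a non-orthogonal basis of \mathbb{C}^2, checks that it is Hermitian with positive eigenvalues, and confirms the representation formula \langle u, w \rangle = [u]_{\mathcal{B}}^T \, G \, \overline{[w]_{\mathcal{B}}}.

```python
import numpy as np

def ip(u, v):
    # standard inner product, conjugate in the second slot as in the text
    return np.sum(u * np.conj(v))

basis = [np.array([1. + 0j, 0.]), np.array([1j, 1. + 0j])]
G = np.array([[ip(vi, vj) for vj in basis] for vi in basis])   # g_ij = <v_i, v_j>

hermitian = np.allclose(G, G.conj().T)
eigs = np.linalg.eigvalsh(G)        # real, all positive for a genuine basis

# <u, w> = [u]^T G conj([w]) reproduces the inner product of the actual vectors
a = np.array([2. + 1j, -1j])        # coordinates of u in the basis
b = np.array([1. - 1j, 3. + 0j])    # coordinates of w in the basis
u = a[0] * basis[0] + a[1] * basis[1]
w = b[0] * basis[0] + b[1] * basis[1]
via_gram = a @ G @ np.conj(b)
direct = ip(u, w)
```

Note the placement of conjugates: with this chapter's convention, the coordinate formula conjugates the second coordinate vector, not the first.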
10.9 Orthogonality and Angles
Definition 10.8 (Orthogonal vectors) Vectors u, v \in \mathcal{V} are orthogonal, written u \perp v, if \langle u, v \rangle = 0.
Orthogonality generalises perpendicularity: in \mathbb{R}^n with the standard inner product, u \perp v iff the vectors meet at a right angle; in function spaces, f \perp g when \int fg = 0. The zero vector is orthogonal to every vector and is the only vector orthogonal to itself.
Theorem 10.9 (Pythagorean theorem) If u \perp v, then \|u + v\|^2 = \|u\|^2 + \|v\|^2.
Proof. Expand \|u+v\|^2 = \|u\|^2 + \langle u,v\rangle + \langle v,u\rangle + \|v\|^2. Since \langle u,v\rangle = 0, conjugate symmetry gives \langle v,u\rangle = \overline{\langle u,v\rangle} = 0, so both cross terms vanish. \square
The theorem extends to finite sums: if v_1, \ldots, v_k are pairwise orthogonal then \|\sum_i v_i\|^2 = \sum_i \|v_i\|^2.
Remark on the converse. In a real inner product space, \|u+v\|^2 = \|u\|^2 + \|v\|^2 forces 2\langle u,v\rangle = 0, hence u \perp v. In the complex case this need not hold: expanding gives only 2\operatorname{Re}\langle u,v\rangle = 0, which leaves the imaginary part of \langle u,v\rangle unconstrained. For example, in \mathbb{C} with the standard inner product, take u = 1 and v = i: \|u+v\|^2 = |1+i|^2 = 2 = 1 + 1 = \|u\|^2 + \|v\|^2, yet \langle u,v\rangle = 1 \cdot \overline{i} = -i \neq 0.
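The counterexample in the remark is a one-line computation:

```python
import numpy as np

def ip(u, v):
    # standard inner product on C^1
    return u * np.conj(v)

u, v = 1.0 + 0j, 1j
lhs = abs(u + v)**2          # |1 + i|^2 = 2
rhs = abs(u)**2 + abs(v)**2  # 1 + 1 = 2: Pythagorean identity holds...
cross = ip(u, v)             # ...yet <u, v> = -i is nonzero
```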
10.9.1 Angles Between Vectors
For nonzero vectors in a real inner product space, Cauchy–Schwarz ensures -1 \le \langle u,v\rangle/(\|u\|\|v\|) \le 1.
Definition 10.9 (Angle between vectors (real case)) Let \mathcal{V} be a real inner product space and u, v \in \mathcal{V} nonzero. The angle \theta \in [0, \pi] between u and v satisfies \cos \theta = \frac{\langle u, v \rangle}{\|u\| \|v\|}.
When \theta = 0 the vectors are parallel (v = cu, c > 0); when \theta = \pi they are anti-parallel (c < 0); when \theta = \pi/2 they are orthogonal.
In complex inner product spaces \langle u,v\rangle is generally complex, so angles are less canonical. One convention sets \cos\theta = |\langle u,v\rangle|/(\|u\|\|v\|) \in [0,1], measuring the magnitude of alignment while discarding phase. For most purposes in complex spaces, orthogonality alone suffices.
10.10 The Riesz Representation Theorem
Every linear functional on a finite-dimensional inner product space arises from the inner product with a fixed vector. This collapses the distinction between vectors and linear functionals whenever an inner product is present.
Theorem 10.10 (Riesz representation theorem) Let \mathcal{V} be a finite-dimensional inner product space over \mathbb{F}. For every linear functional \varphi : \mathcal{V} \to \mathbb{F}, there exists a unique u \in \mathcal{V} such that \varphi(v) = \langle v, u \rangle \quad \text{for all } v \in \mathcal{V}.
Proof. Let \{e_1, \ldots, e_n\} be an orthonormal basis of \mathcal{V} and set u = \sum_{i=1}^n \overline{\varphi(e_i)}\, e_i. For any v = \sum_j a_j e_j, \langle v, u \rangle = \sum_j a_j \Bigl\langle e_j,\, \sum_i \overline{\varphi(e_i)}\, e_i \Bigr\rangle = \sum_j a_j \sum_i \overline{\overline{\varphi(e_i)}}\,\langle e_j, e_i \rangle = \sum_j a_j \sum_i \varphi(e_i)\,\delta_{ji} = \sum_j a_j\,\varphi(e_j). The second equality applies conjugate-linearity in the second argument: \langle e_j, c\, e_i\rangle = \overline{c}\,\langle e_j, e_i\rangle, so pulling \overline{\varphi(e_i)} out of the second slot yields \overline{\overline{\varphi(e_i)}} = \varphi(e_i). Hence \langle v, u \rangle = \varphi\!\Bigl(\sum_j a_j e_j\Bigr) = \varphi(v), using linearity of \varphi.
The conjugate \overline{\varphi(e_i)} in the definition of u is essential: conjugate-linearity in the second slot un-conjugates it to recover exactly \varphi(e_i), not \overline{\varphi(e_i)}. In the real case this distinction disappears.
For uniqueness, if \langle v, u\rangle = \langle v, u'\rangle for all v, then \langle v, u - u'\rangle = 0 for all v; taking v = u - u' gives \|u - u'\|^2 = 0, hence u = u'. \square
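The construction in the proof can be carried out concretely on \mathbb{C}^3 with the standard orthonormal basis. Here \varphi(v) = \sum_i \alpha_i v_i for an illustrative coefficient vector \alpha of our choosing; the Riesz vector has entries u_i = \overline{\varphi(e_i)} = \overline{\alpha_i}.

```python
import numpy as np

def ip(v, u):
    # linear in the first slot, conjugate-linear in the second
    return np.sum(v * np.conj(u))

alpha = np.array([1 + 2j, -3j, 0.5 + 0j])   # illustrative functional coefficients
phi = lambda v: np.sum(alpha * v)           # a linear functional on C^3

u = np.conj(alpha)   # u = sum_i conj(phi(e_i)) e_i, as in Theorem 10.10

rng = np.random.default_rng(3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
# phi(v) == <v, u> for every v
```

Omitting the conjugate in `u = np.conj(alpha)` would represent \overline{\varphi} rather than \varphi, mirroring the remark after the proof.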
The theorem extends to infinite-dimensional Hilbert spaces, where it identifies every continuous linear functional with an inner product vector. In finite dimensions every linear functional is automatically continuous, so no continuity hypothesis is needed here.
10.11 Closing Remarks
An inner product equips a vector space with lengths and angles. The Cauchy–Schwarz inequality |\langle u,v\rangle| \le \|u\|\|v\| is the single estimate from which the triangle inequality, the notion of angle, and the Gram matrix theory all follow. The parallelogram law characterises which norms come from inner products, and the polarization identity shows the norm determines the inner product—so the geometry is encoded entirely in lengths.
The Riesz representation theorem is the key structural result: linear functionals are indistinguishable from vectors, meaning \mathcal{V} \cong \mathcal{V}^* canonically once an inner product is fixed. Chapter 11 builds on this to decompose spaces orthogonally and project onto subspaces.