6  Change of Basis and Coordinate Systems

Chapter 5 established that every linear map acquires a matrix representation once bases are chosen. But the representation depends on the choice. The differentiation operator D : \mathcal{P}_2 \to \mathcal{P}_1 has matrix \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} with respect to the standard bases \{1, x, x^2\} and \{1, x\}. If instead we use \{1, x-1, (x-1)^2\} and \{1, x-1\}, the matrix is different. The operator is unchanged—only its numerical description varies.

This raises natural questions. Given two bases for the same space, how do coordinates transform? Given two matrix representations of the same linear map, how are they related? Can we choose bases to simplify the matrix—to make it diagonal, or triangular, or exhibit some other special form?

These questions are practical. In applications, the “natural” coordinate system may obscure structure. A rotation of \mathbb{R}^2 by 90° has matrix \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} in standard coordinates, which mixes components. In a rotated basis, the same map might have simpler form. Eigenvalue problems, quadratic forms, and differential equations all become tractable by choosing bases adapted to the problem.

Physical and geometric perspective. When modeling a vibrating system, the “natural” coordinates might be Cartesian positions x_1, \ldots, x_n. But the system often decouples when expressed in normal mode coordinates—linear combinations of positions that oscillate independently. Finding these coordinates amounts to diagonalizing the system’s matrix (eigenvalue problem). Similarly, in differential geometry, curves and surfaces admit intrinsic coordinate systems (arc length, principal curvatures) that reveal geometric properties invisible in ambient coordinates.

This chapter develops the machinery of coordinate changes. We derive transformation formulas for vectors and matrices, characterize when two matrices represent the same linear map, and explore how basis choice affects computation and understanding.


6.1 Change of Basis for Vectors

Let \mathcal{V} be a finite-dimensional vector space with \dim \mathcal{V} = n. Fix two ordered bases \mathcal{B} = \{v_1, \ldots, v_n\}, \quad \mathcal{C} = \{w_1, \ldots, w_n\}.

A vector v \in \mathcal{V} has coordinates [v]_{\mathcal{B}} \in \mathbb{F}^n and [v]_{\mathcal{C}} \in \mathbb{F}^n. Both describe the same geometric object—the vector v itself—in different coordinate systems. We seek the relationship between these coordinate vectors.

Since \mathcal{B} is a basis, each w_j can be written uniquely as w_j = \sum_{i=1}^{n} p_{ij} v_i. The coefficients p_{ij} specify how the \mathcal{C}-basis vectors are expressed in terms of the \mathcal{B}-basis.

Definition 6.1 (Change-of-basis matrix) The change-of-basis matrix from \mathcal{B} to \mathcal{C} is the element P \in M_{n \times n}(\mathbb{F}) with columns P = \begin{pmatrix} | & & | \\ [w_1]_{\mathcal{B}} & \cdots & [w_n]_{\mathcal{B}} \\ | & & | \end{pmatrix}. Equivalently, p_{ij} is the i-th \mathcal{B}-coordinate of w_j.

Terminology: “from \mathcal{B} to \mathcal{C}”

The phrase “from \mathcal{B} to \mathcal{C}” refers to expressing \mathcal{C}-basis vectors in terms of \mathcal{B}-basis vectors. The matrix P then converts \mathcal{C}-coordinates to \mathcal{B}-coordinates (note the reversal). This convention is standard but can be confusing on first encounter. The next theorem makes the direction explicit.

Theorem 6.1 (Change-of-basis formula) If P is the change-of-basis matrix from \mathcal{B} to \mathcal{C}, then [v]_{\mathcal{B}} = P[v]_{\mathcal{C}} for all v \in \mathcal{V}.

Proof. Write v = \sum_{j=1}^{n} c_j w_j, so [v]_{\mathcal{C}} = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}. Substituting w_j = \sum_i p_{ij} v_i gives v = \sum_{j=1}^{n} c_j \left(\sum_{i=1}^{n} p_{ij} v_i\right) = \sum_{i=1}^{n} \left(\sum_{j=1}^{n} p_{ij} c_j\right) v_i. The i-th \mathcal{B}-coordinate of v is \sum_j p_{ij} c_j = (P[v]_{\mathcal{C}})_i. \square

To convert from \mathcal{C}-coordinates to \mathcal{B}-coordinates, multiply by P. This might seem backwards—P goes “from \mathcal{B} to \mathcal{C}” but acts on \mathcal{C}-coordinates to produce \mathcal{B}-coordinates. The terminology reflects the basis relationship (P expresses \mathcal{C} in terms of \mathcal{B}), not the direction of the coordinate transformation.

Theorem 6.2 The change-of-basis matrix P is invertible, and P^{-1} is the change-of-basis matrix from \mathcal{C} to \mathcal{B}.

Proof. The columns of P are [w_1]_{\mathcal{B}}, \ldots, [w_n]_{\mathcal{B}}. Since \{w_1, \ldots, w_n\} is a basis of \mathcal{V}, these vectors are linearly independent. The coordinate map is an isomorphism by Theorem 5.2, so their coordinate images are linearly independent in \mathbb{F}^n. Thus the columns of P are linearly independent, making P invertible by Theorem 5.22.

For the second claim, note that P^{-1} has j-th column equal to the solution of Px = e_j where e_j is the j-th standard basis vector of \mathbb{F}^n. By the change-of-basis formula, [v_j]_{\mathcal{B}} = P[v_j]_{\mathcal{C}}. Thus [v_j]_{\mathcal{C}} = P^{-1}[v_j]_{\mathcal{B}}. Applying this to all basis vectors shows that the j-th column of P^{-1} is [v_j]_{\mathcal{C}}, which is precisely the definition of the change-of-basis matrix from \mathcal{C} to \mathcal{B}. \square

The inverse relationship gives [v]_{\mathcal{C}} = P^{-1}[v]_{\mathcal{B}}.

Change-of-basis matrices form a bidirectional dictionary between coordinate systems. The invertibility reflects the fact that both \mathcal{B} and \mathcal{C} are bases—each can express the other, and the transformations are reversible.
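The formula [v]_{\mathcal{B}} = P[v]_{\mathcal{C}} and its inverse are easy to verify numerically. A minimal sketch in NumPy, using two arbitrary illustrative bases of \mathbb{R}^2 (the bases and the test vector below are not taken from the text):

```python
import numpy as np

# Two bases of R^2, stored as columns (arbitrary illustrative choices)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # basis B = {(1,0), (1,1)}
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])   # basis C = {(2,1), (0,1)}

# Column j of P holds the B-coordinates of the j-th C-basis vector,
# i.e. P is the solution of B @ P = C.
P = np.linalg.solve(B, C)

# Pick a vector and compute its coordinates in each basis.
v = np.array([3.0, 2.0])
v_B = np.linalg.solve(B, v)   # [v]_B
v_C = np.linalg.solve(C, v)   # [v]_C

# Theorem 6.1: [v]_B = P [v]_C; Theorem 6.2: [v]_C = P^{-1} [v]_B.
assert np.allclose(v_B, P @ v_C)
assert np.allclose(v_C, np.linalg.solve(P, v_B))
```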


6.2 Change of Basis for Linear Maps

Now consider a linear operator T : \mathcal{V} \to \mathcal{V}. In basis \mathcal{B}, it has matrix A = [T]_{\mathcal{B}}^{\mathcal{B}}. In basis \mathcal{C}, it has matrix B = [T]_{\mathcal{C}}^{\mathcal{C}}. Both represent the same geometric transformation. How are A and B related?

Notation Convention

For a linear operator T : \mathcal{V} \to \mathcal{V} represented in a single basis \mathcal{B} for both domain and codomain, we often write [T]_{\mathcal{B}} as shorthand for [T]_{\mathcal{B}}^{\mathcal{B}}. This notation is used throughout this chapter.

Theorem 6.3 (Similarity transformation) Let T : \mathcal{V} \to \mathcal{V} be linear with matrices A = [T]_{\mathcal{B}} and B = [T]_{\mathcal{C}} in bases \mathcal{B} and \mathcal{C} respectively. If P is the change-of-basis matrix from \mathcal{B} to \mathcal{C}, then A = PBP^{-1}.

Proof. For any v \in \mathcal{V}, the matrix action formulas from Theorem 5.6 give [T(v)]_{\mathcal{B}} = A[v]_{\mathcal{B}}, \quad [T(v)]_{\mathcal{C}} = B[v]_{\mathcal{C}}. The change-of-basis formulas from Theorem 6.1 give [v]_{\mathcal{B}} = P[v]_{\mathcal{C}}, \quad [T(v)]_{\mathcal{B}} = P[T(v)]_{\mathcal{C}}. Combining these: P[T(v)]_{\mathcal{C}} = [T(v)]_{\mathcal{B}} = A[v]_{\mathcal{B}} = AP[v]_{\mathcal{C}}. Substituting [T(v)]_{\mathcal{C}} = B[v]_{\mathcal{C}}: PB[v]_{\mathcal{C}} = AP[v]_{\mathcal{C}}. Since this holds for all v \in \mathcal{V} (equivalently, for all coordinate vectors in \mathbb{F}^n), we have PB = AP. Multiplying both sides on the left by P^{-1} gives B = P^{-1}AP, or equivalently A = PBP^{-1}. \square

Definition 6.2 (Similar matrices) Matrices A, B \in M_{n \times n}(\mathbb{F}) are similar if there exists a matrix P \in \mathrm{GL}_n(\mathbb{F}) such that B = P^{-1}AP.

Similarity is an equivalence relation: it is reflexive (A = I^{-1}AI), symmetric (if B = P^{-1}AP then A = (P^{-1})^{-1}BP^{-1} = PBP^{-1}), and transitive (if B = P^{-1}AP and C = Q^{-1}BQ then C = (PQ)^{-1}A(PQ)).

Similar matrices represent the same linear transformation in different bases. They share all basis-independent properties—eigenvalues (see the Eigenvalues chapter), determinant, trace, rank, nullity—but may have different entries. The problem of finding a “good” basis for a linear operator is the problem of finding a simple representative in its similarity class.
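These shared invariants can be spot-checked numerically. A sketch with a randomly generated matrix and a random change of basis (a random matrix is invertible with probability one; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # generically invertible
B = np.linalg.solve(P, A @ P)        # B = P^{-1} A P, similar to A

# Similar matrices share all basis-independent invariants.
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
# Eigenvalues agree as multisets (compare after sorting).
eA = np.sort_complex(np.linalg.eigvals(A))
eB = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eA, eB)
```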


6.3 Examples of Change of Basis

6.3.1 Polynomial Bases

Let \mathcal{V} = \mathcal{P}_2 with standard basis \mathcal{B} = \{1, x, x^2\} and shifted basis \mathcal{C} = \{1, x-1, (x-1)^2\}.

Motivation for shifted bases. When approximating a function f near x = a, Taylor series expand in powers of (x-a). The shifted basis \{1, (x-a), (x-a)^2, \ldots\} is natural for local analysis. Chebyshev polynomials, used in numerical approximation, form another alternative basis optimized for minimizing interpolation error.

Express each \mathcal{C}-basis vector in terms of \mathcal{B}: \begin{align*} 1 &= 1 \cdot 1 + 0 \cdot x + 0 \cdot x^2, \\ x-1 &= -1 \cdot 1 + 1 \cdot x + 0 \cdot x^2, \\ (x-1)^2 &= 1 \cdot 1 + (-2) \cdot x + 1 \cdot x^2. \end{align*} The change-of-basis matrix from \mathcal{B} to \mathcal{C} is P = \begin{pmatrix} 1 & -1 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}.

Consider the polynomial p(x) = 2 + 3x - x^2. In basis \mathcal{B}, its coordinates are [p]_{\mathcal{B}} = \begin{pmatrix} 2 \\ 3 \\ -1 \end{pmatrix}. To find [p]_{\mathcal{C}}, solve P[p]_{\mathcal{C}} = [p]_{\mathcal{B}}: [p]_{\mathcal{C}} = P^{-1}\begin{pmatrix} 2 \\ 3 \\ -1 \end{pmatrix}. Computing P^{-1} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} (which can be verified by checking PP^{-1} = I) gives [p]_{\mathcal{C}} = \begin{pmatrix} 4 \\ 1 \\ -1 \end{pmatrix}, confirming p(x) = 4 \cdot 1 + 1 \cdot (x-1) + (-1) \cdot (x-1)^2.
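The same computation can be reproduced numerically; the matrix and coordinate vectors below are exactly those of the example:

```python
import numpy as np

# Columns: B-coordinates of 1, (x-1), (x-1)^2 in the basis {1, x, x^2}.
P = np.array([[1.0, -1.0,  1.0],
              [0.0,  1.0, -2.0],
              [0.0,  0.0,  1.0]])

p_B = np.array([2.0, 3.0, -1.0])      # p(x) = 2 + 3x - x^2
p_C = np.linalg.solve(P, p_B)         # solve P [p]_C = [p]_B
assert np.allclose(p_C, [4.0, 1.0, -1.0])

# Sanity check: both expansions give the same values of p.
for x in (0.0, 0.5, 2.0):
    std = 2 + 3*x - x**2
    shifted = p_C[0] + p_C[1]*(x - 1) + p_C[2]*(x - 1)**2
    assert np.isclose(std, shifted)
```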

Now consider differentiation D : \mathcal{P}_2 \to \mathcal{P}_2 (extending codomain to \mathcal{P}_2 for simplicity). In basis \mathcal{B}: D(1) = 0, \quad D(x) = 1, \quad D(x^2) = 2x, giving matrix A = [D]_{\mathcal{B}} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}.

In basis \mathcal{C}: D(1) = 0, \quad D(x-1) = 1, \quad D((x-1)^2) = 2(x-1), giving matrix B = [D]_{\mathcal{C}} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}.

Both matrices are identical—the shifted basis \mathcal{C} does not simplify the differentiation operator in this case. The matrix remains upper triangular with zeros on the diagonal, reflecting that differentiation reduces polynomial degree. But for operators like “multiply by (x-1)”, the shifted basis would yield a simpler form.
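Since A = B here, Theorem 6.3 says that P must commute with A (equivalently, P^{-1}AP = A). A quick NumPy check, using the matrices from the example:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],     # [D]_B in the basis {1, x, x^2}
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
P = np.array([[1.0, -1.0,  1.0],   # change of basis from B to C
              [0.0,  1.0, -2.0],
              [0.0,  0.0,  1.0]])

B_mat = np.linalg.solve(P, A @ P)  # [D]_C = P^{-1} A P
assert np.allclose(B_mat, A)       # the two representations coincide
assert np.allclose(P @ A, A @ P)   # equivalently, P commutes with A
```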

6.3.2 Rotations in the Plane

Let \mathcal{V} = \mathbb{R}^2 with standard basis \mathcal{E} = \{e_1, e_2\} where e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. Consider rotation by \theta = \pi/2 counterclockwise, with matrix R = [T]_{\mathcal{E}} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.

Now use a rotated basis \mathcal{B} = \{v_1, v_2\} where v_1 = \begin{pmatrix} \cos\phi \\ \sin\phi \end{pmatrix}, \quad v_2 = \begin{pmatrix} -\sin\phi \\ \cos\phi \end{pmatrix} for some angle \phi. The change-of-basis matrix from \mathcal{E} to \mathcal{B} is P = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.

The matrix of T in basis \mathcal{B} is [T]_{\mathcal{B}} = P^{-1}RP = \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.

For \phi = \pi/4, we get P = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, and the product works out to [T]_{\mathcal{B}} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, the same matrix as before. This is no accident: rotation matrices of the plane commute, so P^{-1}RP = R for every rotation P. Rotation by 90° has the same matrix representation in any basis obtained by rotating the standard basis, reflecting the geometric invariance of rotations. In the Inner Products chapter, when we introduce inner products and define orthonormal bases (bases preserving angles and lengths), we will see that rotations always have this form in orthonormal coordinates.
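A numerical check that the representation is unchanged for any rotation angle \phi (the angles sampled below are arbitrary):

```python
import numpy as np

def rot(t):
    """2x2 rotation matrix through angle t (counterclockwise)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

R = rot(np.pi / 2)                   # the operator T: rotation by 90 degrees
for phi in (np.pi / 4, 0.3, 1.7):    # several rotated bases
    P = rot(phi)                     # change of basis from E to B
    T_B = np.linalg.solve(P, R @ P)  # [T]_B = P^{-1} R P
    assert np.allclose(T_B, R)       # same matrix in every rotated basis
```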

6.3.3 Function Spaces and Fourier Bases

Let \mathcal{V} = \operatorname{span}\{1, \cos x, \sin x\}, the space of functions of the form f(x) = a + b\cos x + c\sin x. The standard basis is \mathcal{B} = \{1, \cos x, \sin x\}.

Harmonic oscillators and normal modes. This space models simple harmonic motion. A mass-spring system with position x(t) satisfying x''(t) + \omega^2 x(t) = 0 has solutions x(t) = A\cos(\omega t) + B\sin(\omega t). For coupled oscillators—multiple masses connected by springs—solutions are linear combinations of normal modes, each oscillating independently at a characteristic frequency (eigenvalue). Finding these modes requires diagonalizing the system matrix, which we study in the Eigenvalues chapter.

Consider the differentiation operator D : \mathcal{V} \to \mathcal{V}: D(1) = 0, \quad D(\cos x) = -\sin x, \quad D(\sin x) = \cos x. The matrix in basis \mathcal{B} is [D]_{\mathcal{B}} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}.

This off-diagonal structure mixes the \cos x and \sin x components. In a different basis adapted to the eigenstructure (involving complex exponentials e^{ix}, which we explore in the Eigenvalues chapter), the operator becomes diagonal—each basis element is an eigenvector, and differentiation simply multiplies by a scalar. Such bases are essential in solving differential equations and analyzing Fourier series.
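The eigenstructure claim can be previewed numerically: the matrix [D]_{\mathcal{B}} has eigenvalues 0 and \pm i, so it diagonalizes over \mathbb{C} but not over \mathbb{R}. A sketch:

```python
import numpy as np

# [D]_B on span{1, cos x, sin x}
D = np.array([[0.0,  0.0, 0.0],
              [0.0,  0.0, 1.0],
              [0.0, -1.0, 0.0]])

# Sort eigenvalues by imaginary part for a stable comparison.
evals = sorted(np.linalg.eigvals(D), key=lambda z: z.imag)
assert np.allclose(evals, [-1j, 0.0, 1j])
```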

6.3.4 Differential Equations and Exponential Bases

Consider the space of solutions to y'' - 3y' + 2y = 0. By the theory of linear ODEs, this is a 2-dimensional vector space with basis \mathcal{B} = \{e^x, e^{2x}\}.

The differentiation operator D : \mathcal{V} \to \mathcal{V} acts by D(e^x) = e^x, \quad D(e^{2x}) = 2e^{2x}. In this basis, the matrix is diagonal: [D]_{\mathcal{B}} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.

The basis \mathcal{B} consists of eigenfunctions of D—functions that D merely scales rather than mixing. This is the continuous analog of eigenvectors for matrices. Diagonal form means the system decouples: each basis function evolves independently under differentiation.

If we instead used the basis \mathcal{C} = \{e^x + e^{2x}, e^x - e^{2x}\} (linear combinations of the exponentials), the matrix [D]_{\mathcal{C}} would be non-diagonal, obscuring the independent evolution of the exponential modes. Eigenvalue theory (see the Eigenvalues chapter) systematically finds bases that diagonalize operators.
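Computing [D]_{\mathcal{C}} confirms this. The change-of-basis matrix from \mathcal{B} to \mathcal{C} has columns (1, 1) and (1, -1), the \mathcal{B}-coordinates of the \mathcal{C}-basis functions:

```python
import numpy as np

A = np.diag([1.0, 2.0])          # [D]_B in the eigenbasis {e^x, e^{2x}}
# C-basis in B-coordinates: e^x + e^{2x} -> (1,1), e^x - e^{2x} -> (1,-1)
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])

D_C = np.linalg.solve(P, A @ P)  # [D]_C = P^{-1} A P
# Off-diagonal entries appear: the C-basis mixes the exponential modes.
assert np.allclose(D_C, [[ 1.5, -0.5],
                         [-0.5,  1.5]])
```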


6.3.5 The Derivative as a Linear Map

Calculus provides a rich source of linear maps whose coordinate representations illuminate the change-of-basis formalism. We assume familiarity with multivariable calculus; readers may treat this subsection as an extended example connecting linear algebra to analysis.

Let f : \mathbb{R}^n \to \mathbb{R}^m be differentiable at a point a. The total derivative Df_a : \mathbb{R}^n \to \mathbb{R}^m is the unique linear map satisfying \lim_{h \to 0} \frac{\|f(a+h) - f(a) - Df_a(h)\|}{\|h\|} = 0. It is the best linear approximation to f near a.

In standard coordinates, Df_a is represented by the Jacobian matrix J_f(a) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix} evaluated at a. The (i,j) entry is \frac{\partial f_i}{\partial x_j}.

This is a matrix representation of the linear map Df_a in the standard bases of \mathbb{R}^n and \mathbb{R}^m. If we choose different bases—say, scaled or rotated coordinates—the matrix changes according to the similarity transformation formula.

Example. Let f : \mathbb{R}^2 \to \mathbb{R} be f(x,y) = x^2 + xy + y^2. At (1,1), the derivative is the linear map Df_{(1,1)} : \mathbb{R}^2 \to \mathbb{R} with matrix (in standard coordinates) J_f(1,1) = \begin{pmatrix} 2x+y & x+2y \end{pmatrix}\bigg|_{(1,1)} = \begin{pmatrix} 3 & 3 \end{pmatrix}. This is a 1 \times 2 matrix (a linear functional). It sends \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} to 3h_1 + 3h_2.

If we use a rotated coordinate system on \mathbb{R}^2 given by basis vectors v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad v_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} -1 \\ 1 \end{pmatrix}, the change-of-basis matrix is P = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}. The matrix of Df_{(1,1)} in the new coordinates is J_f(1,1) \cdot P = \begin{pmatrix} 3 & 3 \end{pmatrix} \cdot \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 3\sqrt{2} & 0 \end{pmatrix}.

In this coordinate system the derivative has a single nonzero component—the gradient points purely in the v_1 direction, with no v_2 component. The geometric object (the gradient vector) is unchanged; only its numerical description depends on coordinates.
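The computation is one line in NumPy (the gradient and basis are those of the example; note that a linear functional picks up a factor of P on the right under a change of basis of its domain):

```python
import numpy as np

grad = np.array([3.0, 3.0])               # J_f(1,1) for f = x^2 + xy + y^2
P = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)  # rotated basis vectors as columns

grad_new = grad @ P                       # matrix of Df in the new coordinates
assert np.allclose(grad_new, [3 * np.sqrt(2), 0.0])
```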

Differential geometry connection. On a curved surface or manifold, there is no globally preferred coordinate system. The tangent space at each point is a vector space of directional derivatives, and different local coordinates (charts) induce different bases. Change-of-basis matrices govern how vectors transform between coordinate patches. The differential df of a smooth function is intrinsic—a linear map on the tangent space—while its matrix representation (the Jacobian) depends on coordinates. Studying geometry “without coordinates” requires working with invariant objects (tensors, differential forms) that transform predictably under coordinate changes.


6.4 Isomorphisms

An isomorphism T : \mathcal{V} \to \mathcal{W} provides a canonical way to transfer bases between spaces.

Theorem 6.4 (Basis transfer) Let T : \mathcal{V} \to \mathcal{W} be an isomorphism. If \mathcal{B} = \{v_1, \ldots, v_n\} is a basis of \mathcal{V}, then \{T(v_1), \ldots, T(v_n)\} is a basis of \mathcal{W}.

Proof. For independence, suppose \sum c_j T(v_j) = 0. By linearity, T(\sum c_j v_j) = 0. Since T is injective (as an isomorphism, by Theorem 4.6), \sum c_j v_j = 0. By independence of \mathcal{B}, all c_j = 0.

For spanning, let w \in \mathcal{W}. Since T is surjective (by Theorem 4.7), w = T(v) for some v \in \mathcal{V}. Write v = \sum a_j v_j, giving w = T(v) = \sum a_j T(v_j). \square

This result justifies our earlier claim that all n-dimensional vector spaces are isomorphic to \mathbb{R}^n (from Theorem 4.8). Given a basis \mathcal{B} of \mathcal{V}, the coordinate map [\cdot]_{\mathcal{B}} : \mathcal{V} \to \mathbb{R}^n is an isomorphism by Theorem 5.2 that sends \mathcal{B} to the standard basis \{e_1, \ldots, e_n\} of \mathbb{R}^n.

Different bases of \mathcal{V} yield different isomorphisms to \mathbb{R}^n, related by composition with a change-of-basis transformation. If \mathcal{B} and \mathcal{C} are two bases of \mathcal{V}, then [\cdot]_{\mathcal{B}} = P \cdot [\cdot]_{\mathcal{C}}, where P is the change-of-basis matrix from \mathcal{B} to \mathcal{C}. This is precisely the commutativity of the diagram \begin{array}{ccc} &\mathcal{V} & \\ \swarrow\scriptstyle{[\cdot]_{\mathcal{B}}} & & \searrow\scriptstyle{[\cdot]_{\mathcal{C}}} \\ \mathbb{R}^n & \xleftarrow{P} & \mathbb{R}^n \end{array}

The ambiguity in choosing an isomorphism \mathcal{V} \to \mathbb{R}^n corresponds exactly to the ambiguity in choosing a basis of \mathcal{V}. There is no “natural” or “canonical” isomorphism unless additional structure (an inner product, as in the Inner Products chapter, a privileged set of vectors, symmetry considerations) selects a preferred basis.

Theorem 6.5 Let T : \mathcal{V} \to \mathcal{W} be an isomorphism with bases \mathcal{B} of \mathcal{V} and \mathcal{C} of \mathcal{W}. Then [T]_{\mathcal{B}}^{\mathcal{C}} \in \mathrm{GL}_n(\mathbb{F}), and [T^{-1}]_{\mathcal{C}}^{\mathcal{B}} = ([T]_{\mathcal{B}}^{\mathcal{C}})^{-1}.

Proof. By Theorem 5.21, the matrix of an isomorphism is invertible, and the matrix of the inverse map is the inverse matrix. \square

When \mathcal{V} = \mathcal{W} and we use the same basis for both domain and codomain, an isomorphism T : \mathcal{V} \to \mathcal{V} has matrix in \mathrm{GL}_n(\mathbb{F}). The set of isomorphisms \mathcal{V} \to \mathcal{V} forms a group under composition, and the map T \mapsto [T]_{\mathcal{B}} is a group isomorphism (a bijection preserving the composition operation) to \mathrm{GL}_n(\mathbb{F}).

This abstract group is independent of coordinates—it’s determined by the vector space structure of \mathcal{V} alone. Different basis choices give different matrix representations in \mathrm{GL}_n(\mathbb{F}), all related by conjugation (similarity transformations). The general linear group is thus both a computational tool (matrices with inverses) and a geometric object (isomorphisms of vector spaces).


6.5 Applications: Choosing Good Coordinates

The power of change of basis lies in choosing coordinates adapted to the problem’s structure. We survey several (advanced) contexts where basis choice is crucial.

6.5.1 Separation of Variables in PDEs

The heat equation \frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2} on a finite interval [0, L] with boundary conditions u(0,t) = u(L,t) = 0 has solutions u(x,t) = \sum_{n=1}^{\infty} a_n e^{-\alpha n^2 \pi^2 t / L^2} \sin\left(\frac{n\pi x}{L}\right).

The functions \sin(n\pi x / L) form an orthogonal basis of the space L^2([0,L]) (functions square-integrable on [0,L]). In this Fourier sine basis, the heat operator is diagonal: each basis function evolves independently by exponential decay. The solution decouples into infinitely many independent modes, each characterized by an eigenvalue -\alpha n^2\pi^2/L^2.
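A minimal numerical sketch of this modal decoupling, with illustrative choices \alpha = 1, L = 1, and initial condition u(x, 0) = x(L - x) (none of which are specified in the text): project the initial condition onto the sine basis, then let each coefficient decay by its own exponential factor.

```python
import numpy as np

alpha, L, N = 1.0, 1.0, 50                # illustrative parameters
x = np.linspace(0.0, L, 400)
u0 = x * (L - x)                          # initial temperature, zero at both ends

n = np.arange(1, N + 1)
modes = np.sin(np.outer(n, x) * np.pi / L)     # rows: sin(n pi x / L)
dx = x[1] - x[0]
# Sine coefficients a_n = (2/L) * integral of u0 * sin(n pi x / L) dx,
# approximated by a simple Riemann sum (the integrand vanishes at both ends).
a = (2.0 / L) * (u0 * modes).sum(axis=1) * dx

def u(t):
    """Solution at time t: each mode decays independently."""
    decay = np.exp(-alpha * (n * np.pi / L) ** 2 * t)
    return (a * decay) @ modes

assert np.allclose(u(0.0), u0, atol=1e-3)  # the series reproduces u(x, 0)
assert u(0.1).max() < u0.max()             # heat dissipates over time
```

Diagonality of the heat operator in this basis is exactly what makes `u(t)` a single vectorized multiplication rather than a coupled system.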

This phenomenon—decoupling via eigenbasis—is universal in linear PDEs. Wave equations, Schrödinger equations, and Laplace equations all simplify dramatically when expressed in bases of eigenfunctions. The Eigenvalues chapter develops the finite-dimensional version (eigenvalue decomposition), which extends to infinite-dimensional settings via spectral theory.

6.5.2 Principal Component Analysis

In data science, a dataset of m observations in \mathbb{R}^n can be viewed as m vectors. The covariance matrix C \in M_{n \times n}(\mathbb{R}) encodes correlations between variables. Diagonalizing C via eigenvalue decomposition yields an orthonormal basis of principal components—directions of maximal variance.

Projecting data onto the first few principal components reduces dimension while preserving variance. This change of basis is data-driven: the new coordinates are optimally adapted to the dataset’s structure. Applications include image compression, genomics, and machine learning.
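A toy PCA computation on synthetic two-dimensional data (the dataset, its correlated direction (1, 0.5), and the thresholds are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data clustered along the direction (1, 0.5), plus small noise.
t = rng.standard_normal(500)
X = np.column_stack([t, 0.5 * t]) + 0.05 * rng.standard_normal((500, 2))
Xc = X - X.mean(axis=0)                   # center the data

C = np.cov(Xc, rowvar=False)              # 2x2 covariance matrix
evals, evecs = np.linalg.eigh(C)          # ascending eigenvalues, orthonormal columns

# The top principal component should align with (1, 0.5) (up to sign).
pc1 = evecs[:, -1]
direction = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])
assert abs(pc1 @ direction) > 0.99

# Projecting onto pc1 alone retains almost all of the variance.
var_kept = evals[-1] / evals.sum()
assert var_kept > 0.95
```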

6.5.3 Coordinate-Free Geometry

In differential geometry, a curve \gamma : [a,b] \to \mathbb{R}^n can be reparametrized without changing its geometric shape. The arc length parametrization chooses coordinates intrinsic to the curve: the parameter measures distance along the curve. In these coordinates, \|\gamma'(s)\| = 1, simplifying formulas for curvature and torsion.

Similarly, surfaces admit principal curvature coordinates aligned with the directions of maximal and minimal bending. The Frenet-Serret frame for space curves is a moving orthonormal basis adapted to the curve’s geometry. These are geometric analogs of eigenbases—coordinates revealing intrinsic structure.

More generally, tensor fields on manifolds are coordinate-independent geometric objects. Computing their components requires choosing local coordinates (a basis of each tangent space), but geometric conclusions must be invariant under coordinate changes. Differential geometry systematizes this principle: physical laws should be expressible without reference to arbitrary coordinates.


6.6 Closing Remarks

We stress that geometric objects exist independently of how we describe them numerically. A vector, a subspace, a linear map—these are intrinsic. Coordinates are a tool for computation, and different coordinate systems offer different computational advantages.

The change-of-basis formula [v]_{\mathcal{B}} = P[v]_{\mathcal{C}} and the similarity transformation A = PBP^{-1} encode how numerical representations transform when we change perspective. Invariants—properties that all coordinate systems agree on—are the signature of the underlying geometry. Rank, nullity, and trace are simple invariants. In the Eigenvalues chapter, we introduce eigenvalues, perhaps the most important invariants of a linear operator, which determine behavior under iteration, exponentiation, and long-term dynamics.

Much of advanced linear algebra consists of finding good bases for particular problems. Eigenvector bases diagonalize operators, simplifying powers and exponentials. Orthonormal bases simplify projections, decompositions, and least squares. Jordan bases reveal the fine structure of non-diagonalizable operators. Each choice of basis is a choice of perspective, adapted to the problem at hand.

The interplay between intrinsic geometry and coordinate description recurs throughout mathematics. In topology, homeomorphisms classify spaces up to continuous deformation, independent of how we embed them in Euclidean space. In differential geometry, diffeomorphisms relate smooth manifolds, independent of local coordinate charts. In algebra, isomorphisms identify structures sharing the same abstract properties. Linear algebra provides the simplest setting for this profound idea: there is a difference between the thing itself and how we choose to describe it.