3 Finite-Dimensional Vector Spaces
A vector space is determined by which linear combinations it permits. When finitely many vectors generate the entire space, structure becomes rigid: size is measurable, decomposition is unique under mild conditions, and algebraic properties are tractable.
We work over \mathbb{R} unless stated otherwise.
3.1 Linear Combinations and Generation
The axioms of a vector space permit only addition and scalar multiplication. Any vector constructed from given ones arises from these operations.
Definition 3.1 (Linear combination) Let \mathcal{V} be a vector space. A linear combination of vectors v_1,\dots,v_n \in \mathcal{V} is a vector of the form \sum_{i=1}^{n} a_i v_i, where a_1,\dots,a_n \in \mathbb{R}.
In \mathbb{R}^2, the vector \begin{pmatrix} 3 \\ 2 \end{pmatrix} is the linear combination 3 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\ 1 \end{pmatrix}. In the space \mathcal{P}_2 of polynomials of degree at most 2, the polynomial p(x) = 2 + 3x - x^2 is 2 \cdot 1 + 3 \cdot x + (-1) \cdot x^2.
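The polynomial example can be checked numerically by identifying \mathcal{P}_2 with \mathbb{R}^3 via coefficient vectors. The following NumPy sketch (the names `one`, `x`, `x2` are our own labels for the coefficient vectors of 1, x, x^2) forms the stated linear combination:

```python
import numpy as np

# Identify P_2 with R^3 by sending p(x) = c0 + c1*x + c2*x^2 to (c0, c1, c2).
one = np.array([1.0, 0.0, 0.0])   # coefficient vector of the constant polynomial 1
x   = np.array([0.0, 1.0, 0.0])   # coefficient vector of x
x2  = np.array([0.0, 0.0, 1.0])   # coefficient vector of x^2

# The polynomial 2 + 3x - x^2 as the linear combination 2*1 + 3*x + (-1)*x^2:
p = 2 * one + 3 * x + (-1) * x2
```

The resulting coefficient vector (2, 3, -1) is exactly the tuple of coefficients of p.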
Definition 3.2 (Span) Let S \subset \mathcal{V}. The span of S, denoted \operatorname{span}(S), is the set of all finite linear combinations of elements of S.
Theorem 3.1 (Algebraic closure) A vector v \in \mathcal{V} lies in \operatorname{span}(S) if and only if it can be obtained from elements of S using finitely many additions and scalar multiplications.
Proof. This is immediate from the definition: a linear combination is precisely the result of finitely many additions and scalar multiplications. \square
The span formalizes algebraic generation.
Theorem 3.2 For any S \subset \mathcal{V}, \operatorname{span}(S) is a subspace of \mathcal{V}.
Proof. The zero vector is obtained as the trivial sum. If u = \sum_{i=1}^{n} a_i u_i, \quad v = \sum_{j=1}^{m} b_j v_j with u_i, v_j \in S, then u + v = \sum_{i=1}^{n} a_i u_i + \sum_{j=1}^{m} b_j v_j is another finite linear combination. For c \in \mathbb{R}, cu = \sum_{i=1}^{n} (ca_i) u_i is also in \operatorname{span}(S). \square
The span is not just a subspace containing S; it is the smallest such subspace.
Theorem 3.3 \operatorname{span}(S) = \bigcap \{ \mathcal{W} \subset \mathcal{V} : \mathcal{W} \text{ is a subspace and } S \subset \mathcal{W} \}.
Proof. Any subspace containing S contains all finite linear combinations of elements of S, hence contains \operatorname{span}(S). Conversely, \operatorname{span}(S) itself is such a subspace. \square
The span of \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\} in \mathbb{R}^2 is all of \mathbb{R}^2. The span of \{1, x, x^2\} in the space \mathcal{P} of all polynomials is \mathcal{P}_2. The span of a single nonzero vector v is \{cv : c \in \mathbb{R}\}, the line through the origin in direction v.
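Membership in a span can be tested numerically: v lies in the span of a list of vectors exactly when appending v as an extra column does not increase the rank of the matrix formed from them. A NumPy sketch (the helper name `in_span` and the tolerance are illustrative choices, not part of the text):

```python
import numpy as np

def in_span(vectors, v, tol=1e-10):
    """Return True if v is a linear combination of the given vectors.

    v lies in the span of the columns of A exactly when appending v as an
    extra column does not increase the rank.
    """
    A = np.column_stack(vectors)
    Av = np.column_stack(vectors + [v])
    return np.linalg.matrix_rank(A, tol) == np.linalg.matrix_rank(Av, tol)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
```

With these definitions, every vector of \mathbb{R}^2 lies in the span of \{e_1, e_2\}, while the span of e_1 alone is only the first coordinate axis.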
Definition 3.3 (Finite generation) A vector space \mathcal{V} is finitely generated if \mathcal{V} = \operatorname{span}(S) for some finite set S \subset \mathcal{V}.
Finite generation is the hypothesis under which dimension becomes meaningful. Infinite-dimensional spaces require different techniques.
3.2 Linear Independence and Redundancy
A generating set may contain unnecessary vectors. In \mathbb{R}^2, the set \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\} spans the space, but the third vector is redundant: \begin{pmatrix} 1 \\ 1 \end{pmatrix} = 1 \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} + 1 \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
Definition 3.4 (Linear independence) A finite set \{v_1,\dots,v_n\} \subset \mathcal{V} is linearly independent if \sum_{i=1}^{n} a_i v_i = 0 \quad \Longrightarrow \quad a_1 = \cdots = a_n = 0. Otherwise, the set is linearly dependent.
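Numerically, a finite list of vectors in \mathbb{R}^n is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A NumPy sketch (the helper name `is_independent` is our own):

```python
import numpy as np

def is_independent(vectors, tol=1e-10):
    """A list of vectors is linearly independent exactly when the matrix
    having them as columns has rank equal to the number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol) == len(vectors)

e1, e2, d = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
```

The pair \{e_1, e_2\} passes the test, while adjoining (1, 1) produces a dependent triple, in line with the example above.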
Theorem 3.4 A finite set \{v_1,\dots,v_n\} \subset \mathcal{V} is linearly dependent if and only if one of the vectors lies in the span of the others.
Proof. Suppose the set is linearly dependent. Then there exist scalars a_1,\dots,a_n, not all zero, such that \sum_{i=1}^{n} a_i v_i = 0. Choose k with a_k \neq 0. Solving for v_k gives v_k = -\frac{1}{a_k} \sum_{i \neq k} a_i v_i, so v_k is a linear combination of the others.
Conversely, if v_k = \sum_{i \neq k} b_i v_i, then \sum_{i \neq k} b_i v_i - v_k = 0, a nontrivial relation. \square
Corollary 3.1 If a spanning set is linearly dependent, it can be reduced without changing its span.
Proof. If S spans \mathcal{V} and is linearly dependent, then by Theorem 3.4 some v \in S lies in \operatorname{span}(S \setminus \{v\}). Since every element of \operatorname{span}(S) is a linear combination of elements of S, replacing v by its expression in terms of the remaining vectors shows \operatorname{span}(S \setminus \{v\}) = \operatorname{span}(S) = \mathcal{V}. \square
Theorem 3.5 If \{v_1,\dots,v_n\} is linearly independent, then every vector in \operatorname{span}(v_1,\dots,v_n) admits a unique representation as a linear combination of the v_i.
Proof. Suppose v = \sum_{i=1}^{n} a_i v_i = \sum_{i=1}^{n} b_i v_i. Subtracting yields \sum_{i=1}^{n} (a_i - b_i) v_i = 0. By independence, a_i - b_i = 0 for all i. \square
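For an independent set in \mathbb{R}^n, the unique coefficients of Theorem 3.5 are obtained by solving a linear system: the matrix with the independent vectors as columns is injective, so the system has exactly one solution. A NumPy sketch using the independent set \{(1,1), (1,-1)\} (the names `B`, `v`, `a` are ours):

```python
import numpy as np

# The set {(1,1), (1,-1)} is linearly independent, so the matrix B having
# these vectors as columns is invertible, and B a = v has a unique solution:
# the coefficients representing v.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])
v = np.array([5.0, 1.0])
a = np.linalg.solve(B, v)        # the unique coefficient vector
```

Here a = (3, 2): indeed 3(1,1) + 2(1,-1) = (5, 1).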
3.3 Bases
Linear independence eliminates redundancy but does not guarantee generation. A basis satisfies both properties.
Definition 3.5 (Basis) A basis of a vector space \mathcal{V} is a linearly independent set that spans \mathcal{V}.
Theorem 3.6 A finite set \mathcal{B} \subset \mathcal{V} is a basis if and only if it is a minimal spanning set, or equivalently a maximal linearly independent set.
Proof. Suppose \mathcal{B} is a basis. If we remove any vector v from \mathcal{B}, the span shrinks: were v in the span of the remaining vectors, \mathcal{B} would be linearly dependent by Theorem 3.4. Hence \mathcal{B} is a minimal spanning set. If we add any vector v \in \mathcal{V} to \mathcal{B}, the set becomes dependent since v \in \operatorname{span}(\mathcal{B}), so \mathcal{B} is maximal.
Conversely, a minimal spanning set must be independent (otherwise we could remove a redundant vector), and a maximal independent set must span (otherwise we could add a vector outside its span while preserving independence). \square
The set \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\} is a basis of \mathbb{R}^2. The set \{1, x, x^2, \dots, x^n\} is a basis of \mathcal{P}_n.
Theorem 3.7 Every finitely generated vector space admits a basis.
Proof. Let S be a finite spanning set. If S is independent, it is a basis. Otherwise, by Theorem 3.4, one vector in S lies in the span of the others. Remove it. The span remains unchanged by Corollary 3.1. Repeat this process. Since S is finite, the algorithm terminates with an independent spanning set. \square
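The proof of Theorem 3.7 is an algorithm. A NumPy sketch (the helper name `extract_basis` is ours; we implement the removal procedure in the equivalent "keep a vector only if it enlarges the span so far" form):

```python
import numpy as np

def extract_basis(vectors, tol=1e-10):
    """Shrink a finite spanning list to a basis of its span, as in the
    proof of Theorem 3.7.  A vector is kept only if it increases the rank
    of the vectors kept so far, i.e. only if it is not redundant."""
    basis = []
    rank = 0
    for v in vectors:
        candidate = basis + [v]
        new_rank = np.linalg.matrix_rank(np.column_stack(candidate), tol)
        if new_rank > rank:          # v is not in the span of `basis`
            basis, rank = candidate, new_rank
    return basis

vs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
basis = extract_basis(vs)
```

Applied to the dependent spanning set \{e_1, e_2, (1,1)\} of \mathbb{R}^2, the procedure discards the redundant third vector and returns a two-element basis.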
Bases are not unique. In \mathbb{R}^2, both \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\} \quad \text{and} \quad \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\} are bases. But all bases of a given space have the same cardinality.
Theorem 3.8 (Extension to a basis) Every linearly independent set in a finite-dimensional vector space can be extended to a basis.
Proof. Let \mathcal{V} be finite-dimensional and let \{v_1,\ldots,v_k\} be linearly independent. By the existence of a basis, \mathcal{V} has a basis \{w_1,\ldots,w_n\}, hence \mathcal{V}=\operatorname{span}(w_1,\ldots,w_n).
The set \{v_1,\ldots,v_k,w_1,\ldots,w_n\} spans \mathcal{V}. If it is linearly independent, it is already a basis and we are done. Otherwise, the set is linearly dependent. By Theorem 3.4, one of its vectors is a linear combination of the others.
No v_j need be removed: since the set is dependent, there is a nontrivial relation \sum_{j=1}^{k} c_j v_j + \sum_{i=1}^{n} d_i w_i = 0. If every d_i were zero, this would be a nontrivial relation among v_1,\ldots,v_k, contradicting their linear independence. Hence some d_i \neq 0, and solving for w_i shows that w_i lies in the span of the remaining vectors. The redundant vector can therefore always be taken to be one of the w_i.
Remove such a w_i. The resulting set still spans \mathcal{V} and still contains \{v_1,\ldots,v_k\}. Repeating this procedure eliminates all redundancy. Since only finitely many w_i are available, the process terminates with a linearly independent spanning set containing \{v_1,\ldots,v_k\}. This set is a basis of \mathcal{V} extending \{v_1,\ldots,v_k\}. \square
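This proof, too, is an algorithm. A NumPy sketch (the helper name `extend_to_basis` is ours; it adjoins the vectors of a known spanning list one by one, keeping only those outside the current span):

```python
import numpy as np

def extend_to_basis(independent, spanning, tol=1e-10):
    """Extend a linearly independent list to a basis, as in the proof of
    Theorem 3.8: adjoin vectors of a spanning list one at a time, keeping
    only those not already in the span of what has been collected."""
    basis = list(independent)
    rank = np.linalg.matrix_rank(np.column_stack(basis), tol)
    for w in spanning:
        candidate = basis + [w]
        new_rank = np.linalg.matrix_rank(np.column_stack(candidate), tol)
        if new_rank > rank:          # w enlarges the span, so keep it
            basis, rank = candidate, new_rank
    return basis

ind = [np.array([1.0, 1.0])]
std = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
basis = extend_to_basis(ind, std)
```

Starting from the independent set \{(1,1)\} in \mathbb{R}^2 and the standard basis as the spanning list, the procedure adjoins e_1 (which enlarges the span) and skips e_2 (which does not), yielding a basis containing (1,1).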
This theorem justifies the common technique: “Let \{u_1, \ldots, u_k\} be a basis of \ker(T) and extend it to a basis of \mathcal{V}.” We now know such an extension always exists.
3.4 Dimension
To prove that all bases have the same size, we need a principle controlling the cardinality of independent sets relative to spanning sets.
Theorem 3.9 (Exchange lemma) Let \{v_1,\dots,v_n\} be linearly independent and suppose \mathcal{V} = \operatorname{span}(w_1,\dots,w_m). Then n \le m.
Proof. We prove by iterative exchange that for each k = 0, 1, \ldots, \min(n, m), after suitably reindexing the w_j, \mathcal{V} = \operatorname{span}(v_1, \ldots, v_k, w_{k+1}, \ldots, w_m).
For k = 0 this is the hypothesis. For the step k \to k + 1 (with k < n): since v_{k+1} \in \mathcal{V}, write v_{k+1} = \sum_{i=1}^{k} a_i v_i + \sum_{j=k+1}^{m} b_j w_j.
Since \{v_1, \ldots, v_{k+1}\} is linearly independent, v_{k+1} \notin \operatorname{span}(v_1, \ldots, v_k), so some b_j \neq 0. Reindex so that b_{k+1} \neq 0. Solving for w_{k+1} shows w_{k+1} \in \operatorname{span}(v_1, \ldots, v_{k+1}, w_{k+2}, \ldots, w_m). This span therefore contains v_1, \ldots, v_k and w_{k+1}, \ldots, w_m, hence contains \operatorname{span}(v_1, \ldots, v_k, w_{k+1}, \ldots, w_m) = \mathcal{V}.
Now suppose for contradiction that n > m. After m exchanges, \{v_1, \ldots, v_m\} spans \mathcal{V}. Then v_{m+1} \in \operatorname{span}(v_1, \ldots, v_m), contradicting the linear independence of \{v_1, \ldots, v_n\}. Hence n \le m. \square
Historical note. This result is often called the Steinitz exchange lemma, after Ernst Steinitz (1913), who proved it in the context of field extensions. The argument above—iteratively exchanging spanning vectors for independent ones—is essentially Steinitz’s original method.
Theorem 3.10 Any two bases of a finitely generated vector space have the same cardinality.
Proof. Let \mathcal{B}_1, \mathcal{B}_2 be bases. Since \mathcal{B}_1 is independent and \mathcal{B}_2 spans, Theorem 3.9 gives |\mathcal{B}_1| \le |\mathcal{B}_2|. Reversing the roles gives |\mathcal{B}_2| \le |\mathcal{B}_1|. \square
Definition 3.6 (Dimension) The dimension of a finite-dimensional vector space \mathcal{V}, denoted \dim \mathcal{V}, is the number of elements in any basis of \mathcal{V}.
We have \dim \mathbb{R}^n = n, \dim \mathcal{P}_n = n+1.
3.5 Sums of Subspaces
Definition 3.7 (Sum of subspaces) Let \mathcal{W}_1, \mathcal{W}_2 \subset \mathcal{V} be subspaces. Their sum is \mathcal{W}_1 + \mathcal{W}_2 = \left\{ w_1 + w_2 : w_1 \in \mathcal{W}_1, w_2 \in \mathcal{W}_2 \right\}.
Theorem 3.11 \mathcal{W}_1 + \mathcal{W}_2 is a subspace of \mathcal{V}.
Proof. The zero vector is 0 + 0. If u = u_1 + u_2 and v = v_1 + v_2, then u + v = (u_1 + v_1) + (u_2 + v_2) \in \mathcal{W}_1 + \mathcal{W}_2. Scalar multiples behave similarly. \square
Theorem 3.12 (Dimension formula) If \mathcal{W}_1, \mathcal{W}_2 are finite-dimensional, then \dim(\mathcal{W}_1 + \mathcal{W}_2) = \dim \mathcal{W}_1 + \dim \mathcal{W}_2 - \dim(\mathcal{W}_1 \cap \mathcal{W}_2).
Proof. Let \{u_1,\dots,u_k\} be a basis of \mathcal{W}_1 \cap \mathcal{W}_2. Extend it to bases \{u_1,\dots,u_k, v_1,\dots,v_r\} \quad \text{of } \mathcal{W}_1, \{u_1,\dots,u_k, w_1,\dots,w_s\} \quad \text{of } \mathcal{W}_2.
We claim \{u_1,\dots,u_k, v_1,\dots,v_r, w_1,\dots,w_s\} is a basis of \mathcal{W}_1 + \mathcal{W}_2.
Any x \in \mathcal{W}_1 + \mathcal{W}_2 has the form x = x_1 + x_2 with x_i \in \mathcal{W}_i. Then x_1 = \sum_{i=1}^{k} a_i u_i + \sum_{j=1}^{r} b_j v_j, \quad x_2 = \sum_{i=1}^{k} c_i u_i + \sum_{\ell=1}^{s} d_\ell w_\ell, so x = \sum_{i=1}^{k} (a_i + c_i) u_i + \sum_{j=1}^{r} b_j v_j + \sum_{\ell=1}^{s} d_\ell w_\ell.
Next, suppose \sum_{i=1}^{k} \alpha_i u_i + \sum_{j=1}^{r} \beta_j v_j + \sum_{\ell=1}^{s} \gamma_\ell w_\ell = 0. Rearranging, \sum_{j=1}^{r} \beta_j v_j = - \sum_{i=1}^{k} \alpha_i u_i - \sum_{\ell=1}^{s} \gamma_\ell w_\ell. The left side is in \mathcal{W}_1, the right side is in \mathcal{W}_2. Thus both lie in \mathcal{W}_1 \cap \mathcal{W}_2, so \sum_{j=1}^{r} \beta_j v_j = \sum_{i=1}^{k} \delta_i u_i for some \delta_i. But \{u_1,\dots,u_k, v_1,\dots,v_r\} is independent, so \beta_j = 0 for all j and \delta_i = 0 for all i. Substituting back gives \sum_{i=1}^{k} \alpha_i u_i + \sum_{\ell=1}^{s} \gamma_\ell w_\ell = 0. Independence of \{u_1,\dots,u_k, w_1,\dots,w_s\} forces \alpha_i = 0 and \gamma_\ell = 0. \square
The subtracted term \dim(\mathcal{W}_1 \cap \mathcal{W}_2) compensates for the vectors counted in both \mathcal{W}_1 and \mathcal{W}_2.
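The formula can be verified on a concrete example: in \mathbb{R}^3 the xy-plane and the yz-plane intersect in the line spanned by e_2. A NumPy sketch (the helper name `dim_span` is ours; the sum W_1 + W_2 is spanned by the union of the two spanning lists):

```python
import numpy as np

def dim_span(vectors):
    """Dimension of the span of a list of vectors, computed as a rank."""
    return np.linalg.matrix_rank(np.column_stack(vectors))

e1, e2, e3 = np.eye(3)
W1 = [e1, e2]     # the xy-plane in R^3
W2 = [e2, e3]     # the yz-plane in R^3

# W1 + W2 is spanned by the union of the two spanning lists; for this
# example the intersection is known directly: it is the line span{e2}.
lhs = dim_span(W1 + W2)
rhs = dim_span(W1) + dim_span(W2) - dim_span([e2])
```

Both sides equal 3 = 2 + 2 - 1, in agreement with Theorem 3.12.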
3.6 Direct Sums
Definition 3.8 (Direct sum) We write \mathcal{V} = \mathcal{W}_1 \oplus \mathcal{W}_2 if every v \in \mathcal{V} can be written uniquely as v = w_1 + w_2 with w_i \in \mathcal{W}_i.
Theorem 3.13 For subspaces \mathcal{W}_1, \mathcal{W}_2 \subset \mathcal{V}, the following are equivalent:
(1) \mathcal{V} = \mathcal{W}_1 \oplus \mathcal{W}_2;
(2) \mathcal{V} = \mathcal{W}_1 + \mathcal{W}_2 and \mathcal{W}_1 \cap \mathcal{W}_2 = \{0\}.
Proof. (1)\Rightarrow (2): Suppose u \in \mathcal{W}_1 \cap \mathcal{W}_2. Then u = u + 0 = 0 + u with u \in \mathcal{W}_1 and u \in \mathcal{W}_2. Uniqueness forces u = 0.
(2)\Rightarrow (1): If v = w_1 + w_2 = w_1' + w_2', then w_1 - w_1' = w_2' - w_2 \in \mathcal{W}_1 \cap \mathcal{W}_2 = \{0\}, so w_1 = w_1' and w_2 = w_2'. \square
Corollary 3.2 If \mathcal{V} = \mathcal{W}_1 \oplus \mathcal{W}_2, then \dim \mathcal{V} = \dim \mathcal{W}_1 + \dim \mathcal{W}_2.
Proof. Apply Theorem 3.12 with \mathcal{W}_1 \cap \mathcal{W}_2 = \{0\}. \square
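The unique decomposition in a direct sum can be computed explicitly: for \mathbb{R}^2 = \operatorname{span}\{(1,1)\} \oplus \operatorname{span}\{(1,-1)\}, the two components of a vector v are read off from its coefficients in the combined basis. A NumPy sketch (the names `u1`, `u2`, `w1`, `w2` are ours):

```python
import numpy as np

# R^2 = span{(1,1)} (+) span{(1,-1)}: the sum is all of R^2 and the
# intersection is {0}, so every v splits uniquely into components.
u1, u2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
v = np.array([3.0, 1.0])

# Solve for the coefficients of v in the combined basis {u1, u2}.
c = np.linalg.solve(np.column_stack([u1, u2]), v)
w1, w2 = c[0] * u1, c[1] * u2    # the unique components in W1 and W2
```

Here v = (3,1) splits as (2,2) + (1,-1), and Theorem 3.13 guarantees no other splitting exists.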
3.7 Finite-Dimensional Subspaces
Theorem 3.14 Every subspace of a finite-dimensional vector space is finite-dimensional.
Proof. Let \mathcal{W} \subseteq \mathcal{V} with \dim \mathcal{V} = n. Any linearly independent subset of \mathcal{W} is linearly independent in \mathcal{V}, hence has size at most n by Theorem 3.9. \square
Theorem 3.15 If \mathcal{V} is finite-dimensional and \mathcal{W} \subsetneq \mathcal{V} is a subspace, then \dim \mathcal{W} < \dim \mathcal{V}.
Proof. Let \{w_1,\dots,w_k\} be a basis of \mathcal{W} (which exists by Theorem 3.14). Since \mathcal{W} \neq \mathcal{V}, there exists v \in \mathcal{V} \setminus \mathcal{W}. The set \{w_1,\dots,w_k, v\} is independent: in any relation \sum_{i=1}^{k} a_i w_i + bv = 0, we must have b = 0, since b \neq 0 would give v \in \operatorname{span}(w_1,\dots,w_k) = \mathcal{W}; then a_i = 0 for all i by independence of the w_i. Thus \dim \mathcal{V} \ge k+1 > k = \dim \mathcal{W}. \square
3.8 Closing Remarks
Finite-dimensional vector spaces admit bases, and dimension provides a complete numerical invariant. Linear independence eliminates redundancy; bases provide unique coordinates; direct sums decompose spaces into independent components.
In the next chapter we study linear maps between vector spaces. Kernels and images produce new subspaces, and dimension governs their interaction through the rank–nullity theorem.