16 Tensors and Bilinear Forms
This chapter assumes familiarity with dual spaces (the Dual Space chapter) and inner products (the Inner Product Spaces chapter). The tensor product construction in Section 16.2 is algebraically demanding; students who have seen quotient constructions in abstract algebra will find it natural. The later sections on multilinear maps and change of basis for tensors assume comfort with the Kronecker delta and dual bases.
No later chapter in the main sequence depends on this material. It is included for students heading toward differential geometry or advanced algebra.
The Inner Product Spaces chapter introduced inner products as symmetric positive definite bilinear forms. Dropping positive definiteness — or symmetry, or both — yields a much wider world of bilinear structures. The Minkowski metric of special relativity is symmetric but indefinite. The symplectic form of Hamiltonian mechanics is skew-symmetric. Bilinear forms in this generality classify quadratic surfaces, control whether systems of equations have solutions, and encode the geometry of curved spacetime.
The tensor product enlarges this picture further. Where a bilinear form B : \mathcal{V} \times \mathcal{V} \to \mathbb{R} is a single map consuming two vectors from the same space, a tensor of type (p, q) is a multilinear map consuming p vectors and q covectors. Tensors are the natural language for any quantity that must transform consistently under changes of basis — which turns out to include almost everything in differential geometry and physics.
This chapter develops bilinear forms systematically, constructs the tensor product, defines tensors as multilinear maps, and describes how they change under change of basis.
16.1 Bilinear Forms
We have defined bilinear forms and established that inner products are the symmetric positive definite case. We now develop the general theory.
Recall: a bilinear form on \mathcal{V} is a map B : \mathcal{V} \times \mathcal{V} \to \mathbb{F} that is linear in each argument separately. The three main symmetry classes are:
- Symmetric: B(u, v) = B(v, u) for all u, v. Inner products, the Minkowski metric, and all quadratic forms arise this way.
- Skew-symmetric (alternating): B(u, v) = -B(v, u) for all u, v. Equivalently B(v, v) = 0 for all v. The symplectic form and the determinant are examples.
- Neither: a general bilinear form need satisfy neither symmetry condition.
16.1.1 Matrix representation
Fix a basis \mathcal{B} = \{v_1, \ldots, v_n\} of \mathcal{V}. The Gram matrix of B relative to \mathcal{B} is G_{ij} = B(v_i, v_j).
For u = \sum_i a_i v_i and w = \sum_j b_j v_j, bilinearity gives B(u, w) = \sum_{i,j} a_i b_j B(v_i, v_j) = [u]_{\mathcal{B}}^T G [w]_{\mathcal{B}}.
The Gram matrix G encodes B completely in coordinates. Its symmetry mirrors that of B: B is symmetric iff G = G^T, and skew-symmetric iff G = -G^T.
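A minimal numerical sketch of the coordinate formula B(u, w) = [u]^T G [w] (the particular matrix and vectors here are illustrative, not from the text):

```python
import numpy as np

# Illustrative Gram matrix of a symmetric bilinear form on R^3.
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

u = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, -1.0])

B_uw = u @ G @ w          # B(u, w) = [u]^T G [w]
B_wu = w @ G @ u          # G is symmetric, so B(w, u) = B(u, w)
print(B_uw, B_wu)
```

Since G = G^T here, the two evaluations agree, mirroring the symmetry of B.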
16.1.2 Change of basis for bilinear forms
Suppose we change to a basis \mathcal{C} related to \mathcal{B} by the change-of-basis matrix P (so [v]_{\mathcal{B}} = P[v]_{\mathcal{C}}). How does the Gram matrix transform?
Theorem 16.1 If G is the Gram matrix of B in basis \mathcal{B} and P is the change-of-basis matrix from \mathcal{C} to \mathcal{B}, then the Gram matrix in basis \mathcal{C} is G' = P^T G P.
Proof. For u, w \in \mathcal{V}, using [u]_{\mathcal{B}} = P[u]_{\mathcal{C}}: B(u, w) = [u]_{\mathcal{B}}^T G [w]_{\mathcal{B}} = (P[u]_{\mathcal{C}})^T G (P[w]_{\mathcal{C}}) = [u]_{\mathcal{C}}^T (P^T G P) [w]_{\mathcal{C}}. \quad \square
The transformation G \mapsto P^T G P is called congruence. It differs from similarity (G \mapsto P^{-1}GP), which governs the change of basis for linear operators. This difference reflects the fact that bilinear forms consume two vectors rather than one.
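A numerical sketch of Theorem 16.1 (the random matrices are illustrative): the value B(u, w) must agree whether computed in basis \mathcal{B} or basis \mathcal{C}.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))        # Gram matrix of B in basis B
P = rng.standard_normal((3, 3))        # change-of-basis matrix from C to B
G_new = P.T @ G @ P                    # congruence: Gram matrix in basis C

u_C = np.array([1.0, 2.0, 3.0])        # coordinates of u in basis C
w_C = np.array([-1.0, 0.0, 1.0])       # coordinates of w in basis C
u_B, w_B = P @ u_C, P @ w_C            # the same vectors in basis B

value_B = u_B @ G @ w_B
value_C = u_C @ G_new @ w_C
print(np.isclose(value_B, value_C))    # congruence preserves B(u, w)
```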
16.1.3 Rank and degeneracy
Definition 16.1 (Rank and degeneracy) The rank of a bilinear form B is the rank of any Gram matrix G (this is basis-independent, since \operatorname{rank}(P^T G P) = \operatorname{rank}(G) for invertible P). A bilinear form is non-degenerate if \operatorname{rank}(B) = \dim \mathcal{V}, equivalently if B(u, v) = 0 for all v implies u = 0.
A non-degenerate bilinear form induces an isomorphism \mathcal{V} \to \mathcal{V}^* by u \mapsto B(u, \cdot). For symmetric forms, this is an isomorphism between \mathcal{V} and its dual — but one that depends on the choice of B; unlike the double-dual isomorphism, it is not canonical.
16.1.4 Sylvester’s law of inertia
For symmetric bilinear forms over \mathbb{R}, the rank is not the only invariant under congruence.
Theorem 16.2 (Sylvester’s law of inertia) Let B be a symmetric bilinear form on a finite-dimensional real vector space. There exists a basis in which the Gram matrix is diagonal with entries in \{1, -1, 0\}. The number of +1 entries (n_+), the number of -1 entries (n_-), and the number of 0 entries (n_0) are invariants of B: they do not depend on the choice of such a basis.
The triple (n_+, n_-, n_0) is the signature or inertia of B. The rank is n_+ + n_-.
We prove only the existence of the diagonalizing basis (the invariance requires more work).
Proof of existence. Choose any basis of \mathcal{V}. If the Gram matrix has a nonzero diagonal entry, permute basis vectors to bring it to position (1,1), then use it to eliminate all off-diagonal entries in the first row and column (analogous to row/column reduction). If all diagonal entries are zero but some off-diagonal entry G_{12} is nonzero, replace v_1 by v_1 + v_2 to create a nonzero diagonal entry and proceed. Continuing recursively yields a diagonal Gram matrix. Scaling each basis vector by the appropriate factor converts the nonzero diagonal entries to \pm 1. \square
Examples.
The standard inner product on \mathbb{R}^n has signature (n, 0, 0): it is positive definite, with no negative or null directions.
The Minkowski metric \eta(u, v) = u_0 v_0 - u_1 v_1 - u_2 v_2 - u_3 v_3 on \mathbb{R}^4 has signature (1, 3, 0): one timelike direction, three spacelike.
The degenerate form B((x_1, y_1), (x_2, y_2)) = x_1 x_2 on \mathbb{R}^2 has signature (1, 0, 1): rank 1, one null direction.
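For a real symmetric matrix, the spectral theorem gives an orthogonal diagonalization G = Q D Q^T, which is in particular a congruence, so the signs of the eigenvalues yield Sylvester's invariants. A sketch (the helper `inertia` is ours, not from the text):

```python
import numpy as np

def inertia(G, tol=1e-10):
    """Signature (n_+, n_-, n_0) of a real symmetric matrix G.

    G = Q D Q^T with Q orthogonal is a congruence, so by Sylvester's
    law the eigenvalue sign counts are basis-independent invariants.
    """
    w = np.linalg.eigvalsh(G)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

# Minkowski metric diag(1, -1, -1, -1): signature (1, 3, 0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
print(inertia(eta))

# Degenerate form B((x1,y1),(x2,y2)) = x1*x2 on R^2: signature (1, 0, 1)
print(inertia(np.diag([1.0, 0.0])))
```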
16.1.5 Skew-symmetric forms
Skew-symmetric bilinear forms have a different normal form.
Theorem 16.3 Every skew-symmetric bilinear form \Omega on a finite-dimensional real vector space has even rank 2k, and there exists a basis in which the Gram matrix is block-diagonal with k copies of \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} and zeros elsewhere.
A non-degenerate skew-symmetric form is called a symplectic form. By the theorem, symplectic forms exist only on even-dimensional spaces. The pair (\mathcal{V}, \Omega) is called a symplectic vector space. Hamiltonian mechanics lives in symplectic spaces: position and momentum coordinates form conjugate pairs, each pair contributing one 2 \times 2 block.
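The normal form of Theorem 16.3 is easy to exhibit numerically; a sketch with k = 2 blocks on \mathbb{R}^4:

```python
import numpy as np

# Symplectic normal form: block-diagonal with k copies of [[0,1],[-1,0]].
J2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])
Omega = np.kron(np.eye(2), J2)          # two 2x2 blocks on the diagonal

print(np.allclose(Omega, -Omega.T))     # skew-symmetric
print(np.linalg.matrix_rank(Omega))     # rank 4 = 2k: non-degenerate
```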
16.2 The Tensor Product
Bilinear maps B : \mathcal{V} \times \mathcal{W} \to \mathcal{U} arise constantly — the inner product is one, the determinant another. The tensor product is the “universal” bilinear map: every bilinear map factors uniquely through it.
16.2.1 The universal property
Definition 16.2 (Tensor product) The tensor product \mathcal{V} \otimes \mathcal{W} of vector spaces \mathcal{V} and \mathcal{W} is a vector space together with a bilinear map \otimes : \mathcal{V} \times \mathcal{W} \to \mathcal{V} \otimes \mathcal{W}, (v, w) \mapsto v \otimes w, satisfying the following universal property: for every vector space \mathcal{U} and every bilinear map B : \mathcal{V} \times \mathcal{W} \to \mathcal{U}, there exists a unique linear map \tilde{B} : \mathcal{V} \otimes \mathcal{W} \to \mathcal{U} such that \tilde{B}(v \otimes w) = B(v, w).
In other words, every bilinear map from \mathcal{V} \times \mathcal{W} factors uniquely through the tensor product: \begin{array}{ccc} \mathcal{V} \times \mathcal{W} & \xrightarrow{\;\otimes\;} & \mathcal{V} \otimes \mathcal{W} \\ & \searrow\scriptstyle{B} & \downarrow\scriptstyle{\tilde{B}} \\ & & \mathcal{U} \end{array}
The universal property characterizes \mathcal{V} \otimes \mathcal{W} uniquely up to isomorphism: any two spaces satisfying it are canonically isomorphic. This is the right way to define the tensor product — it says what \otimes does rather than how it is constructed.
16.2.2 Construction
To prove the tensor product exists, we construct it explicitly. Let F(\mathcal{V} \times \mathcal{W}) be the free vector space on the set \mathcal{V} \times \mathcal{W} — the space of all formal finite linear combinations of pairs (v, w). Let R be the subspace generated by all elements of the form: (v + v', w) - (v, w) - (v', w), \quad (v, w + w') - (v, w) - (v, w'), (cv, w) - c(v, w), \quad (v, cw) - c(v, w) for all v, v' \in \mathcal{V}, w, w' \in \mathcal{W}, c \in \mathbb{F}. These are precisely the relations that bilinearity demands. Define \mathcal{V} \otimes \mathcal{W} = F(\mathcal{V} \times \mathcal{W}) / R, and write v \otimes w for the coset of (v, w) in this quotient. The map (v, w) \mapsto v \otimes w is then bilinear by construction.
Students familiar with group theory will recognize this construction: \mathcal{V} \otimes \mathcal{W} is a quotient of a free object by the relations encoding the structure we want. The same pattern appears in free groups (F(S)/R), group presentations, and free modules. The universal property is exactly the statement that maps out of a quotient-of-free-object correspond to structure-preserving maps from the generators.
16.2.3 Properties and dimension
Elements of \mathcal{V} \otimes \mathcal{W} are finite sums \sum_i v_i \otimes w_i. Not every element is a simple tensor v \otimes w — these are the pure “rank-one” tensors, and they span \mathcal{V} \otimes \mathcal{W} but do not exhaust it.
The tensor product is bilinear: (v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w, v \otimes (w_1 + w_2) = v \otimes w_1 + v \otimes w_2, and (cv) \otimes w = c(v \otimes w) = v \otimes (cw).
Theorem 16.4 If \{e_1, \ldots, e_m\} is a basis of \mathcal{V} and \{f_1, \ldots, f_n\} is a basis of \mathcal{W}, then \{e_i \otimes f_j : 1 \le i \le m, 1 \le j \le n\} is a basis of \mathcal{V} \otimes \mathcal{W}. In particular, \dim(\mathcal{V} \otimes \mathcal{W}) = \dim \mathcal{V} \cdot \dim \mathcal{W}.
Proof. Spanning: any simple tensor v \otimes w = (\sum_i a_i e_i) \otimes (\sum_j b_j f_j) = \sum_{i,j} a_i b_j (e_i \otimes f_j) by bilinearity, and every element of \mathcal{V} \otimes \mathcal{W} is a sum of simple tensors.
Independence: suppose \sum_{i,j} c_{ij} (e_i \otimes f_j) = 0. Define the bilinear map B_{\ell k} : \mathcal{V} \times \mathcal{W} \to \mathbb{F} by B_{\ell k}(e_i, f_j) = \delta_{i\ell}\delta_{jk} (extracting the (\ell, k) coordinate). By the universal property, the induced linear map \tilde{B}_{\ell k} : \mathcal{V} \otimes \mathcal{W} \to \mathbb{F} satisfies \tilde{B}_{\ell k}(e_i \otimes f_j) = \delta_{i\ell}\delta_{jk}. Applying \tilde{B}_{\ell k} to the zero sum gives c_{\ell k} = 0. \square
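In coordinates, the simple tensor v \otimes w can be modeled by the Kronecker product of coordinate vectors, under which e_i \otimes f_j becomes a standard basis vector of \mathbb{R}^{mn}. A sketch of Theorem 16.4 for m = 2, n = 3 (the specific numbers are illustrative):

```python
import numpy as np

m, n = 2, 3
E = np.eye(m)    # coordinate vectors of a basis e_1, ..., e_m of V
F = np.eye(n)    # coordinate vectors of a basis f_1, ..., f_n of W

# The mn vectors e_i (x) f_j, realized via the Kronecker product.
basis = np.array([np.kron(E[i], F[j]) for i in range(m) for j in range(n)])
print(basis.shape)                      # mn vectors in R^{mn}
print(np.linalg.matrix_rank(basis))     # full rank: linearly independent

# Bilinearity: (2 e_1 + 5 e_2) (x) f_1 = 2 (e_1 (x) f_1) + 5 (e_2 (x) f_1)
v = np.array([2.0, 5.0])
lhs = np.kron(v, F[0])
rhs = 2.0 * np.kron(E[0], F[0]) + 5.0 * np.kron(E[1], F[0])
print(np.allclose(lhs, rhs))
```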
16.2.4 Key isomorphisms
Several spaces are naturally isomorphic to tensor products.
Theorem 16.5 There are natural isomorphisms:
(a) \mathcal{V}^* \otimes \mathcal{W} \cong \mathcal{L}(\mathcal{V}, \mathcal{W}), where \varphi \otimes w corresponds to the linear map v \mapsto \varphi(v) w.
(b) \mathcal{V}^* \otimes \mathcal{W}^* \cong \mathcal{B}(\mathcal{V}, \mathcal{W}), where \mathcal{B}(\mathcal{V}, \mathcal{W}) is the space of bilinear maps \mathcal{V} \times \mathcal{W} \to \mathbb{F}, and \varphi \otimes \psi corresponds to (v, w) \mapsto \varphi(v)\psi(w).
(c) (\mathcal{V} \otimes \mathcal{W})^* \cong \mathcal{V}^* \otimes \mathcal{W}^*.
Part (a) shows that matrices are tensors: an m \times n matrix is an element of (\mathbb{F}^m)^* \otimes \mathbb{F}^n \cong \mathcal{L}(\mathbb{F}^m, \mathbb{F}^n). Part (b) says bilinear forms are tensors in \mathcal{V}^* \otimes \mathcal{V}^*.
16.3 Tensors as Multilinear Maps
Tensors generalize bilinear forms to maps with multiple inputs, some from \mathcal{V} and some from \mathcal{V}^*.
Definition 16.3 (Tensor of type (p, q)) A tensor of type (p, q) on \mathcal{V} is a multilinear map T : \underbrace{\mathcal{V}^* \times \cdots \times \mathcal{V}^*}_{p} \times \underbrace{\mathcal{V} \times \cdots \times \mathcal{V}}_{q} \to \mathbb{F}. The space of all such tensors is denoted \mathcal{T}^p_q(\mathcal{V}).
The type (p, q) is often called the valence of the tensor: p contravariant slots (taking covectors) and q covariant slots (taking vectors). The terminology reflects the transformation behavior under change of basis, as we will see.
Special cases:
- Type (0, 0): scalars \mathbb{F}.
- Type (1, 0): linear maps \mathcal{V}^* \to \mathbb{F}, which by the double dual isomorphism identify canonically with vectors in \mathcal{V}.
- Type (0, 1): linear maps \mathcal{V} \to \mathbb{F}, which are covectors in \mathcal{V}^*.
- Type (0, 2): bilinear forms \mathcal{V} \times \mathcal{V} \to \mathbb{F}. The inner product and the Minkowski metric are both (0, 2) tensors.
- Type (1, 1): bilinear maps \mathcal{V}^* \times \mathcal{V} \to \mathbb{F}. By Theorem 16.5, these are isomorphic to \mathcal{L}(\mathcal{V}, \mathcal{V}): linear operators are (1, 1) tensors.
- Type (2, 0): bilinear maps \mathcal{V}^* \times \mathcal{V}^* \to \mathbb{F}, isomorphic to \mathcal{V} \otimes \mathcal{V}.
The space \mathcal{T}^p_q(\mathcal{V}) is naturally isomorphic to the tensor product \mathcal{T}^p_q(\mathcal{V}) \cong \underbrace{\mathcal{V} \otimes \cdots \otimes \mathcal{V}}_{p} \otimes \underbrace{\mathcal{V}^* \otimes \cdots \otimes \mathcal{V}^*}_{q}.
16.3.1 Components in a basis
Fix a basis \mathcal{B} = \{e_1, \ldots, e_n\} of \mathcal{V} and let \{e^1, \ldots, e^n\} denote the dual basis of \mathcal{V}^* (so e^i(e_j) = \delta^i_j). A tensor T \in \mathcal{T}^p_q(\mathcal{V}) has components T^{i_1 \cdots i_p}{}_{j_1 \cdots j_q} = T(e^{i_1}, \ldots, e^{i_p}, e_{j_1}, \ldots, e_{j_q}).
The tensor is completely determined by its n^{p+q} components. In index notation, upper indices label contravariant slots and lower indices label covariant slots — the convention that physicists use universally. Einstein’s summation convention abbreviates repeated index sums: v^i e_i means \sum_{i=1}^n v^i e_i.
Example. A linear operator T : \mathcal{V} \to \mathcal{V} as a (1,1) tensor has components T^i{}_j = T(e^i, e_j) = e^i(T(e_j)), which is the (i,j)-entry of the matrix of T in the basis \mathcal{B}. The operator acts as T(v) = T^i{}_j v^j e_i (summing over j), where v^j are the components of v.
Example. The Kronecker delta \delta^i_j is a (1,1) tensor with components \delta^i_j — it represents the identity operator.
Example. The metric tensor g of an inner product has components g_{ij} = \langle e_i, e_j \rangle = G_{ij}, the Gram matrix. It is a (0,2) tensor.
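The component formula T(v) = T^i{}_j v^j e_i can be written directly in Einstein-style index notation with `numpy.einsum` (the matrix below is illustrative):

```python
import numpy as np

# Components T^i_j of a linear operator, viewed as a (1,1) tensor.
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])               # components v^j of a vector

Tv = np.einsum('ij,j->i', T, v)        # T(v)^i = T^i_j v^j (sum over j)
print(Tv)                              # agrees with the matrix product T @ v
```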
16.3.2 Tensor products of tensors
Tensors of different types can be combined by the tensor product: if S \in \mathcal{T}^p_q and T \in \mathcal{T}^r_s, then S \otimes T \in \mathcal{T}^{p+r}_{q+s} is defined by (S \otimes T)(\varphi^1, \ldots, \varphi^{p+r}, v_1, \ldots, v_{q+s}) = S(\varphi^1, \ldots, \varphi^p, v_1, \ldots, v_q) \cdot T(\varphi^{p+1}, \ldots, \varphi^{p+r}, v_{q+1}, \ldots, v_{q+s}).
In components: (S \otimes T)^{i_1 \cdots i_{p+r}}{}_{j_1 \cdots j_{q+s}} = S^{i_1 \cdots i_p}{}_{j_1 \cdots j_q} \cdot T^{i_{p+1} \cdots i_{p+r}}{}_{j_{q+1} \cdots j_{q+s}}.
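In components, the tensor product is an outer product. A small sketch (illustrative numbers): S a (1,0) tensor and T a (0,1) tensor combine into a (1,1) tensor with components S^i T_j.

```python
import numpy as np

S = np.array([1.0, 2.0])               # components S^i of a (1,0) tensor
T = np.array([3.0, 5.0])               # components T_j of a (0,1) tensor

ST = np.einsum('i,j->ij', S, T)        # (S (x) T)^i_j = S^i * T_j
print(ST)
```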
16.3.3 Contraction
There is an operation with no analogue for vectors or bilinear forms alone: contraction. Given a (p, q) tensor with p, q \ge 1, we can contract one upper index with one lower index to obtain a (p-1, q-1) tensor: (C^k_\ell \, T)^{i_1 \cdots \hat{\imath}_k \cdots i_p}{}_{j_1 \cdots \hat{\jmath}_\ell \cdots j_q} = \sum_{m=1}^n T^{i_1 \cdots m \cdots i_p}{}_{j_1 \cdots m \cdots j_q}, where the m appears in the k-th upper position and the \ell-th lower position.
Example. The trace of a linear operator T^i{}_j is the contraction \sum_i T^i{}_i = \operatorname{tr}(T): a (0,0) tensor, i.e., a scalar. This is the intrinsic formulation of why the trace is basis-independent.
Example. Contraction of the metric tensor g_{ij} with a vector v^j gives a covector g_{ij} v^j, a (0,1) tensor. This is the musical isomorphism \flat : \mathcal{V} \to \mathcal{V}^*: it uses the metric to lower an index, converting a vector into a covector. The inverse \sharp : \mathcal{V}^* \to \mathcal{V} raises an index using the inverse metric g^{ij}.
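Both examples above can be sketched numerically; the repeated index in the `einsum` subscript string is exactly the contraction (the matrices are illustrative):

```python
import numpy as np

# A (1,1) tensor T^i_j; contracting i with j gives the trace.
T = np.array([[1.0, 2.0],
              [4.0, 8.0]])
trace = np.einsum('ii->', T)           # sum_i T^i_i, a scalar
print(trace)

# Lowering an index: v_j = g_ji v^i, the musical isomorphism flat.
g = np.array([[1.0, 0.0],
              [0.0, -1.0]])            # a Minkowski-like metric on R^2
v = np.array([3.0, 4.0])
v_flat = np.einsum('ji,i->j', g, v)    # covector components of v
print(v_flat)
```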
16.4 Change of Basis for Tensors
The transformation law for tensor components under a change of basis is the heart of tensor calculus. It is why tensors are the right language for coordinate-independent physics.
Suppose \{e_i\} and \{\tilde{e}_i\} are two bases related by \tilde{e}_j = P^i{}_j e_i (i.e., the j-th new basis vector is \sum_i P^i{}_j e_i, so P is the change-of-basis matrix from old to new, with columns expressing the new basis in the old). The dual bases satisfy \tilde{e}^i = (P^{-1})^i{}_j e^j.
Theorem 16.6 (Tensor transformation law) Under the change of basis with matrix P, the components of a (p, q) tensor transform as \tilde{T}^{i_1 \cdots i_p}{}_{j_1 \cdots j_q} = (P^{-1})^{i_1}{}_{k_1} \cdots (P^{-1})^{i_p}{}_{k_p} \cdot P^{\ell_1}{}_{j_1} \cdots P^{\ell_q}{}_{j_q} \cdot T^{k_1 \cdots k_p}{}_{\ell_1 \cdots \ell_q}.
Each contravariant (upper) index transforms by P^{-1}; each covariant (lower) index transforms by P.
Proof. By definition, \tilde{T}^{i_1 \cdots i_p}{}_{j_1 \cdots j_q} = T(\tilde{e}^{i_1}, \ldots, \tilde{e}^{i_p}, \tilde{e}_{j_1}, \ldots, \tilde{e}_{j_q}). Substituting \tilde{e}^i = (P^{-1})^i{}_k e^k and \tilde{e}_j = P^\ell{}_j e_\ell and using multilinearity gives the formula. \square
This is precisely the reason for the terminology: upper-index slots transform contravariantly (by P^{-1}, opposite to the basis) and lower-index slots transform covariantly (by P, with the basis). We saw this already in the Dual Space chapter: the dual basis itself transforms by P^{-1}, while covector coordinates transform by P^T in matrix notation, which for a single covariant index is exactly the transformation law above.
Special cases:
A vector v^i transforms as \tilde{v}^i = (P^{-1})^i{}_k v^k: it is a (1,0) tensor. A covector \varphi_j transforms as \tilde{\varphi}_j = P^\ell{}_j \varphi_\ell: it is a (0,1) tensor. The matrix T^i{}_j of a linear operator transforms as \tilde{T}^i{}_j = (P^{-1})^i{}_k T^k{}_\ell P^\ell{}_j, which is the similarity transformation \tilde{A} = P^{-1}AP in matrix notation.
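These special cases can be checked numerically; a sketch (the random matrices are illustrative, and a random P is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))        # change-of-basis matrix
Pinv = np.linalg.inv(P)

# (1,1) tensor: the transformation law is the similarity transform.
T = rng.standard_normal((3, 3))        # operator components T^i_j
T_new = np.einsum('ik,kl,lj->ij', Pinv, T, P)
print(np.allclose(T_new, Pinv @ T @ P))

# (1,0) and (0,1): contravariant vs covariant, and phi(v) is invariant.
v = rng.standard_normal(3)             # vector components v^i
phi = rng.standard_normal(3)           # covector components phi_j
v_new = Pinv @ v                       # transforms by P^{-1}
phi_new = P.T @ phi                    # phi~_j = P^l_j phi_l in matrix form
print(np.isclose(phi @ v, phi_new @ v_new))
```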
16.4.1 Invariance
A key principle: scalars formed by full contraction of tensor components are basis-independent. For example, the inner product g_{ij} v^i w^j is a (0,0) tensor — a scalar — and therefore takes the same value in every basis. This is the coordinate-free content of the statement that \langle v, w \rangle does not depend on the basis used to compute it.
More generally, any tensor equation S^{i \cdots}{}_{j \cdots} = T^{i \cdots}{}_{j \cdots} that holds in one basis holds in all bases — because both sides transform by the same law and so their difference, which is zero in one basis, transforms to zero in every basis. This is why physical laws written in tensor form are automatically coordinate-independent.
16.5 Symmetric and Alternating Tensors
Among (0, q) tensors — multilinear maps \mathcal{V}^q \to \mathbb{F} — two subspaces are particularly important.
A (0, q) tensor T is symmetric if T(v_1, \ldots, v_q) is unchanged by any permutation of the inputs. It is alternating (or antisymmetric) if T(v_1, \ldots, v_q) changes sign under any transposition of two inputs — equivalently, if T vanishes whenever two inputs coincide.
The symmetric (0, q) tensors form a subspace \operatorname{Sym}^q(\mathcal{V}^*); the alternating ones form \Lambda^q(\mathcal{V}^*), the space of q-forms or exterior forms of degree q.
16.5.1 Exterior forms
The space \Lambda^q(\mathcal{V}^*) is where the determinant lives. Recall from Chapter 7 that \det is an alternating multilinear map on n vectors in \mathbb{R}^n — that is, \det \in \Lambda^n((\mathbb{R}^n)^*).
More generally, \dim \Lambda^q(\mathcal{V}^*) = \binom{n}{q} for 0 \le q \le n, and \Lambda^q(\mathcal{V}^*) = 0 for q > n. The exterior product \alpha \wedge \beta of a p-form \alpha and a q-form \beta is a (p+q)-form defined by antisymmetrized tensor product: (\alpha \wedge \beta)(v_1, \ldots, v_{p+q}) = \frac{1}{p!\, q!} \sum_{\sigma \in S_{p+q}} \operatorname{sgn}(\sigma) \, \alpha(v_{\sigma(1)}, \ldots, v_{\sigma(p)}) \beta(v_{\sigma(p+1)}, \ldots, v_{\sigma(p+q)}).
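For p = q = 1 the antisymmetrization formula reduces to (\alpha \wedge \beta)(v, w) = \alpha(v)\beta(w) - \alpha(w)\beta(v). A sketch on \mathbb{R}^2, where e^1 \wedge e^2 recovers the 2 \times 2 determinant (the helper `wedge2` and the test vectors are ours):

```python
import numpy as np

def wedge2(alpha, beta):
    """Wedge of two 1-forms (given by coordinate vectors): a 2-form."""
    return lambda v, w: alpha @ v * (beta @ w) - alpha @ w * (beta @ v)

# Dual standard basis of (R^2)^* as coordinate vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
det2 = wedge2(e1, e2)                  # e^1 /\ e^2

v, w = np.array([2.0, 1.0]), np.array([3.0, 4.0])
print(det2(v, w))                      # matches det of the column matrix
print(np.linalg.det(np.column_stack([v, w])))
```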
The collection \Lambda^*(\mathcal{V}^*) = \bigoplus_{q=0}^n \Lambda^q(\mathcal{V}^*) with the wedge product is the exterior algebra of \mathcal{V}^*. It is the algebraic structure underlying differential forms, integration on manifolds, and the generalization of cross products to arbitrary dimensions.
The determinant of an n \times n matrix A = (v_1 \, \cdots \, v_n) is the unique alternating n-form satisfying \det(I) = 1, evaluated on the columns. In the language of this section: \det = e^1 \wedge \cdots \wedge e^n where \{e^i\} is the dual of the standard basis. The formula \det(AB) = \det(A)\det(B) follows from the transformation law for n-forms under linear maps.
16.6 Closing Remarks
Bilinear forms generalize inner products by dropping positive definiteness — and the resulting structures, classified by Sylvester’s law of inertia, include the geometries of special relativity (Lorentzian signature) and Hamiltonian mechanics (symplectic forms). The tensor product converts bilinear maps into linear ones via the universal property, and the resulting tensor algebra organizes all multilinear structures into a single coherent framework.
The transformation law for tensors — contravariant indices by P^{-1}, covariant by P — is not an arbitrary convention but reflects the actual behavior of the two kinds of objects under coordinate changes, first seen in the Dual Space chapter. A tensor equation holding in one basis holds in all, making tensors the natural language for coordinate-independent statements. This is why general relativity is formulated in terms of tensors: the field equations G_{\mu\nu} = 8\pi T_{\mu\nu} are tensor equations, holding in every coordinate system simultaneously.
The exterior algebra \Lambda^*(\mathcal{V}^*) takes alternating multilinear forms as its basic objects, with the determinant as the top-dimensional example. In differential geometry, these become differential forms — objects that can be integrated over curves, surfaces, and higher-dimensional manifolds — and the wedge product encodes the geometry of oriented volumes. Stokes’s theorem, in its fully general form, is a statement about differential forms and their exterior derivatives.