Section D: Dimension

From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

Subsection D: Dimension

Definition D
Dimension
Suppose that V is a vector space and {v_1, v_2, v_3, …, v_t} is a basis of V. Then the dimension of V is defined by dim(V) = t. If V has no finite bases, we say V has infinite dimension.

(This definition contains Notation D.)

This is a very simple definition, which belies its power. Grab a basis, any basis, and count up the number of vectors it contains. That’s the dimension. However, this simplicity causes a problem. Given a vector space, you and I could each construct different bases — remember that a vector space might have many bases. And what if your basis and my basis had different sizes? Applying Definition D we would arrive at different numbers! With our current knowledge about vector spaces, we would have to say that dimension is not “well-defined.” Fortunately, there is a theorem that will correct this problem.

In a strictly logical progression, the next two theorems would precede the definition of dimension. Many subsequent theorems will trace their lineage back to the following fundamental result.

Theorem SSLD
Spanning Sets and Linear Dependence
Suppose that S = {v_1, v_2, v_3, …, v_t} is a finite set of vectors which spans the vector space V. Then any set of t + 1 or more vectors from V is linearly dependent.

Proof   We want to prove that any set of t + 1 or more vectors from V is linearly dependent. So we will begin with a totally arbitrary set of vectors from V, R = {u_1, u_2, u_3, …, u_m}, where m > t. We will now construct a nontrivial relation of linear dependence on R.

Each vector u_1, u_2, u_3, …, u_m can be written as a linear combination of v_1, v_2, v_3, …, v_t since S is a spanning set of V. This means there exist scalars a_{ij}, 1 ≤ i ≤ t, 1 ≤ j ≤ m, so that

\begin{aligned}
u_1 &= a_{11}v_1 + a_{21}v_2 + a_{31}v_3 + \cdots + a_{t1}v_t \\
u_2 &= a_{12}v_1 + a_{22}v_2 + a_{32}v_3 + \cdots + a_{t2}v_t \\
u_3 &= a_{13}v_1 + a_{23}v_2 + a_{33}v_3 + \cdots + a_{t3}v_t \\
&\quad\vdots \\
u_m &= a_{1m}v_1 + a_{2m}v_2 + a_{3m}v_3 + \cdots + a_{tm}v_t
\end{aligned}

Now we form, unmotivated, the homogeneous system of t equations in the m variables x_1, x_2, x_3, …, x_m, where the coefficients are the just-discovered scalars a_{ij},

\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1m}x_m &= 0 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2m}x_m &= 0 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3m}x_m &= 0 \\
\vdots\qquad & \\
a_{t1}x_1 + a_{t2}x_2 + a_{t3}x_3 + \cdots + a_{tm}x_m &= 0
\end{aligned}

This is a homogeneous system with more variables than equations (our hypothesis is expressed as m > t), so by Theorem HMVEI there are infinitely many solutions. Choose a nontrivial solution and denote it by x_1 = c_1, x_2 = c_2, x_3 = c_3, …, x_m = c_m. As a solution to the homogeneous system, we then have

\begin{aligned}
a_{11}c_1 + a_{12}c_2 + a_{13}c_3 + \cdots + a_{1m}c_m &= 0 \\
a_{21}c_1 + a_{22}c_2 + a_{23}c_3 + \cdots + a_{2m}c_m &= 0 \\
a_{31}c_1 + a_{32}c_2 + a_{33}c_3 + \cdots + a_{3m}c_m &= 0 \\
\vdots\qquad & \\
a_{t1}c_1 + a_{t2}c_2 + a_{t3}c_3 + \cdots + a_{tm}c_m &= 0
\end{aligned}

As a collection of nontrivial scalars, c_1, c_2, c_3, …, c_m will provide the nontrivial relation of linear dependence we desire,

\begin{aligned}
&c_1u_1 + c_2u_2 + c_3u_3 + \cdots + c_mu_m \\
&\quad= c_1\left(a_{11}v_1 + a_{21}v_2 + a_{31}v_3 + \cdots + a_{t1}v_t\right) && \text{Definition TSVS} \\
&\qquad + c_2\left(a_{12}v_1 + a_{22}v_2 + a_{32}v_3 + \cdots + a_{t2}v_t\right) \\
&\qquad + c_3\left(a_{13}v_1 + a_{23}v_2 + a_{33}v_3 + \cdots + a_{t3}v_t\right) \\
&\qquad\quad\vdots \\
&\qquad + c_m\left(a_{1m}v_1 + a_{2m}v_2 + a_{3m}v_3 + \cdots + a_{tm}v_t\right) \\
&\quad= c_1a_{11}v_1 + c_1a_{21}v_2 + c_1a_{31}v_3 + \cdots + c_1a_{t1}v_t && \text{Property DVA} \\
&\qquad + c_2a_{12}v_1 + c_2a_{22}v_2 + c_2a_{32}v_3 + \cdots + c_2a_{t2}v_t \\
&\qquad + c_3a_{13}v_1 + c_3a_{23}v_2 + c_3a_{33}v_3 + \cdots + c_3a_{t3}v_t \\
&\qquad\quad\vdots \\
&\qquad + c_ma_{1m}v_1 + c_ma_{2m}v_2 + c_ma_{3m}v_3 + \cdots + c_ma_{tm}v_t \\
&\quad= \left(c_1a_{11} + c_2a_{12} + c_3a_{13} + \cdots + c_ma_{1m}\right)v_1 && \text{Property DSA} \\
&\qquad + \left(c_1a_{21} + c_2a_{22} + c_3a_{23} + \cdots + c_ma_{2m}\right)v_2 \\
&\qquad + \left(c_1a_{31} + c_2a_{32} + c_3a_{33} + \cdots + c_ma_{3m}\right)v_3 \\
&\qquad\quad\vdots \\
&\qquad + \left(c_1a_{t1} + c_2a_{t2} + c_3a_{t3} + \cdots + c_ma_{tm}\right)v_t \\
&\quad= \left(a_{11}c_1 + a_{12}c_2 + a_{13}c_3 + \cdots + a_{1m}c_m\right)v_1 && \text{Property CMCN} \\
&\qquad + \left(a_{21}c_1 + a_{22}c_2 + a_{23}c_3 + \cdots + a_{2m}c_m\right)v_2 \\
&\qquad + \left(a_{31}c_1 + a_{32}c_2 + a_{33}c_3 + \cdots + a_{3m}c_m\right)v_3 \\
&\qquad\quad\vdots \\
&\qquad + \left(a_{t1}c_1 + a_{t2}c_2 + a_{t3}c_3 + \cdots + a_{tm}c_m\right)v_t \\
&\quad= 0v_1 + 0v_2 + 0v_3 + \cdots + 0v_t && \text{$c_j$ as solution} \\
&\quad= 0 + 0 + 0 + \cdots + 0 && \text{Theorem ZSSM} \\
&\quad= 0 && \text{Property Z}
\end{aligned}

That does it. R has been undeniably shown to be a linearly dependent set.

The proof just given has some monstrous expressions in it, mostly owing to the double subscripts present. Now is a great opportunity to show the value of a more compact notation. We will rewrite the key steps of the previous proof using summation notation, resulting in a more economical presentation, and even greater insight into the key aspects of the proof. So here is an alternate proof — study it carefully.

Proof   (Alternate Proof of Theorem SSLD) We want to prove that any set of t + 1 or more vectors from V is linearly dependent. So we will begin with a totally arbitrary set of vectors from V, R = {u_j | 1 ≤ j ≤ m}, where m > t. We will now construct a nontrivial relation of linear dependence on R.

Each vector u_j, 1 ≤ j ≤ m, can be written as a linear combination of v_i, 1 ≤ i ≤ t, since S is a spanning set of V. This means there are scalars a_{ij}, 1 ≤ i ≤ t, 1 ≤ j ≤ m, so that

u_j = \sum_{i=1}^{t} a_{ij} v_i, \quad 1 ≤ j ≤ m

Now we form, unmotivated, the homogeneous system of t equations in the m variables x_j, 1 ≤ j ≤ m, where the coefficients are the just-discovered scalars a_{ij},

\sum_{j=1}^{m} a_{ij} x_j = 0, \quad 1 ≤ i ≤ t

This is a homogeneous system with more variables than equations (our hypothesis is expressed as m > t), so by Theorem HMVEI there are infinitely many solutions. Choose one of these solutions that is not trivial and denote it by x_j = c_j, 1 ≤ j ≤ m. As a solution to the homogeneous system, we then have \sum_{j=1}^{m} a_{ij} c_j = 0 for 1 ≤ i ≤ t. As a collection of nontrivial scalars, c_j, 1 ≤ j ≤ m, will provide the nontrivial relation of linear dependence we desire,

\begin{aligned}
\sum_{j=1}^{m} c_j u_j &= \sum_{j=1}^{m} c_j \left(\sum_{i=1}^{t} a_{ij} v_i\right) && \text{Definition TSVS} \\
&= \sum_{j=1}^{m} \sum_{i=1}^{t} c_j a_{ij} v_i && \text{Property DVA} \\
&= \sum_{i=1}^{t} \sum_{j=1}^{m} c_j a_{ij} v_i && \text{Property CMCN} \\
&= \sum_{i=1}^{t} \sum_{j=1}^{m} a_{ij} c_j v_i && \text{Commutativity in ℂ} \\
&= \sum_{i=1}^{t} \left(\sum_{j=1}^{m} a_{ij} c_j\right) v_i && \text{Property DSA} \\
&= \sum_{i=1}^{t} 0 v_i && \text{$c_j$ as solution} \\
&= \sum_{i=1}^{t} 0 && \text{Theorem ZSSM} \\
&= 0 && \text{Property Z}
\end{aligned}

That does it. R has been undeniably shown to be a linearly dependent set.

Notice how the swap of the two summations is so much easier in the third step above, as opposed to all the rearranging and regrouping that takes place in the previous proof. In about half the space. And there are no ellipses (…).
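
To make the construction concrete, here is a small computational sketch (ours, not the book's, and assuming Python with the sympy library). It runs the proof's recipe in ℂ^2 with t = 2 and m = 3: find the scalars a_{ij}, extract a nontrivial solution of the homogeneous system, and confirm that it yields a relation of linear dependence.

```python
# A minimal sketch of the Theorem SSLD construction (assumes sympy).
# S = {v1, v2} spans C^2 (t = 2); R = {u1, u2, u3} has m = 3 > t vectors.
from sympy import Matrix

v1, v2 = Matrix([1, 0]), Matrix([1, 1])                       # spanning set S
u1, u2, u3 = Matrix([2, 1]), Matrix([0, 3]), Matrix([4, -1])  # the set R

V = Matrix.hstack(v1, v2)
# Column j of A holds the scalars a_1j, a_2j with u_j = a_1j*v1 + a_2j*v2.
A = Matrix.hstack(*(V.solve(u) for u in (u1, u2, u3)))

# m > t, so Theorem HMVEI guarantees a nontrivial solution c.
c = A.nullspace()[0]
assert c[0]*u1 + c[1]*u2 + c[2]*u3 == Matrix([0, 0])  # R is linearly dependent
print(c.T)  # one nontrivial choice of c1, c2, c3
```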

Theorem SSLD can be viewed as a generalization of Theorem MVSLD. We know that ℂ^m has a basis with m vectors in it (Theorem SUVB), so it is a set of m vectors that spans ℂ^m. By Theorem SSLD, any set of more than m vectors from ℂ^m will be linearly dependent. But this is exactly the conclusion we have in Theorem MVSLD. Maybe this is not a total shock, as the proofs of both theorems rely heavily on Theorem HMVEI. The beauty of Theorem SSLD is that it applies in any vector space. We illustrate the generality of this theorem, and hint at its power, in the next example.

Example LDP4
Linearly dependent set in P_4
In Example SSP4 we showed that

S = {x − 2, x^2 − 4x + 4, x^3 − 6x^2 + 12x − 8, x^4 − 8x^3 + 24x^2 − 32x + 16}

is a spanning set for W = {p(x) | p ∈ P_4, p(2) = 0}. So we can apply Theorem SSLD to W with t = 4. Here is a set of five vectors from W, as you may check by verifying that each is a polynomial of degree 4 or less and has x = 2 as a root,

T = {p_1, p_2, p_3, p_4, p_5} ⊆ W

\begin{aligned}
p_1 &= x^4 − 2x^3 + 2x^2 − 8x + 8 \\
p_2 &= −x^3 + 6x^2 − 5x − 6 \\
p_3 &= 2x^4 − 5x^3 + 5x^2 − 7x + 2 \\
p_4 &= −x^4 + 4x^3 − 7x^2 + 6x \\
p_5 &= 4x^3 − 9x^2 + 5x − 6
\end{aligned}

By Theorem SSLD we conclude that T is linearly dependent, with no further computations.
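
Though the theorem makes any computation unnecessary, it is easy to confirm this conclusion by machine. The sketch below (ours, assuming sympy) checks that each p_k lies in W and then produces an explicit nontrivial relation of linear dependence on T.

```python
# Verifying Example LDP4 (a sketch, assuming sympy).
from sympy import Matrix, symbols

x = symbols('x')
ps = [x**4 - 2*x**3 + 2*x**2 - 8*x + 8,
      -x**3 + 6*x**2 - 5*x - 6,
      2*x**4 - 5*x**3 + 5*x**2 - 7*x + 2,
      -x**4 + 4*x**3 - 7*x**2 + 6*x,
      4*x**3 - 9*x**2 + 5*x - 6]

assert all(p.subs(x, 2) == 0 for p in ps)  # each polynomial lies in W

# Column j holds the coordinates of p_j relative to {1, x, x^2, x^3, x^4}.
A = Matrix(5, 5, lambda i, j: ps[j].coeff(x, i))
c = A.nullspace()[0]  # nonempty: five vectors sit inside the 4-dimensional W
assert sum(cj*p for cj, p in zip(c, ps)).expand() == 0
print(c.T)  # scalars of a nontrivial relation of linear dependence
```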

Theorem SSLD is indeed powerful, but our main purpose in proving it right now was to make sure that our definition of dimension (Definition D) is well-defined. Here’s the theorem.

Theorem BIS
Bases have Identical Sizes
Suppose that V is a vector space with a finite basis B and a second basis C. Then B and C have the same size.

Proof   Suppose that C has more vectors than B. (Allowing for the possibility that C is infinite, we can replace C by a subset that has more vectors than B.) As a basis, B is a spanning set for V (Definition B), so Theorem SSLD says that C is linearly dependent. However, this contradicts the fact that as a basis C is linearly independent (Definition B). So C must also be a finite set, with size less than, or equal to, that of B.

Suppose that B has more vectors than C. As a basis, C is a spanning set for V (Definition B), so Theorem SSLD says that B is linearly dependent. However, this contradicts the fact that as a basis B is linearly independent (Definition B). So C cannot be strictly smaller than B.

The only possibility left for the sizes of B and C is for them to be equal.

Theorem BIS tells us that if we find one finite basis in a vector space, then every basis of that space has the same size. This (finally) makes Definition D unambiguous.

Subsection DVS: Dimension of Vector Spaces

We can now collect the dimension of some common, and not so common, vector spaces.

Theorem DCM
Dimension of ℂ^m
The dimension of ℂ^m (Example VSCV) is m.

Proof   Theorem SUVB provides a basis with m vectors.

Theorem DP
Dimension of P_n
The dimension of P_n (Example VSP) is n + 1.

Proof   Example BP provides two bases with n + 1 vectors. Take your pick.

Theorem DM
Dimension of M_{mn}
The dimension of M_{mn} (Example VSM) is mn.

Proof   Example BM provides a basis with mn vectors.

Example DSM22
Dimension of a subspace of M_{22}
It should now be plausible that

Z = \left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \,\middle|\, 2a + b + 3c + 4d = 0,\ −a + 3b − 5c − 2d = 0 \right\}

is a subspace of the vector space {M}_{22} (Example VSM). (It is.) To find the dimension of Z we must first find a basis, though any old basis will do.

First concentrate on the conditions relating a, b, c and d. They form a homogeneous system of two equations in four variables with coefficient matrix

\begin{bmatrix} 2 & 1 & 3 & 4 \\ −1 & 3 & −5 & −2 \end{bmatrix}

We can row-reduce this matrix to obtain

\begin{bmatrix} 1 & 0 & 2 & 2 \\ 0 & 1 & −1 & 0 \end{bmatrix}

Rewrite the two equations represented by each row of this matrix, expressing the dependent variables (a and b) in terms of the free variables (c and d), and we obtain,

\begin{aligned}
a &= −2c − 2d \\
b &= c
\end{aligned}

We can now write a typical entry of Z strictly in terms of c and d, and we can decompose the result,

\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} −2c − 2d & c \\ c & d \end{bmatrix} = \begin{bmatrix} −2c & c \\ c & 0 \end{bmatrix} + \begin{bmatrix} −2d & 0 \\ 0 & d \end{bmatrix} = c\begin{bmatrix} −2 & 1 \\ 1 & 0 \end{bmatrix} + d\begin{bmatrix} −2 & 0 \\ 0 & 1 \end{bmatrix}

This equation says that an arbitrary matrix in Z can be written as a linear combination of the two vectors in

S = \left\{ \begin{bmatrix} −2 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} −2 & 0 \\ 0 & 1 \end{bmatrix} \right\}

so we know that

Z = \left\langle S \right\rangle = \left\langle \left\{ \begin{bmatrix} −2 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} −2 & 0 \\ 0 & 1 \end{bmatrix} \right\} \right\rangle

Are these two matrices (vectors) also linearly independent? Begin with a relation of linear dependence on S,

\begin{aligned}
a_1\begin{bmatrix} −2 & 1 \\ 1 & 0 \end{bmatrix} + a_2\begin{bmatrix} −2 & 0 \\ 0 & 1 \end{bmatrix} &= O \\
\begin{bmatrix} −2a_1 − 2a_2 & a_1 \\ a_1 & a_2 \end{bmatrix} &= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\end{aligned}

From the equality of the two entries in the last row, we conclude that a_1 = 0 and a_2 = 0. Thus the only possible relation of linear dependence is the trivial one, and therefore S is linearly independent (Definition LI). So S is a basis for Z (Definition B). Finally, we can conclude that dim(Z) = 2 (Definition D) since S has two elements.
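
The same dimension can be computed more directly: identifying a matrix in Z with the vector (a, b, c, d) in ℂ^4, Z is exactly the null space of the 2 × 4 coefficient matrix, so its dimension is the number of free variables. A sketch, assuming sympy:

```python
# Example DSM22 by machine (a sketch, assuming sympy).
from sympy import Matrix

A = Matrix([[2, 1, 3, 4],
            [-1, 3, -5, -2]])     # coefficients of the two conditions on Z

print(A.rref()[0])                # Matrix([[1, 0, 2, 2], [0, 1, -1, 0]])
basis = A.nullspace()             # one vector (a, b, c, d) per free variable
for v in basis:
    print(Matrix(2, 2, list(v)))  # reshaped, these are the matrices in S
print(len(basis))                 # dim(Z) = 2
```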

Example DSP4
Dimension of a subspace of P_4
In Example BSP4 we showed that

S = {x − 2, x^2 − 4x + 4, x^3 − 6x^2 + 12x − 8, x^4 − 8x^3 + 24x^2 − 32x + 16}

is a basis for W = {p(x) | p ∈ P_4, p(2) = 0}. Thus, the dimension of W is four, dim(W) = 4.

Note that dim(P_4) = 5 by Theorem DP, so W is a subspace of dimension 4 within the vector space P_4 of dimension 5, illustrating the upcoming Theorem PSSD.

Example DC
Dimension of the crazy vector space
In Example BC we determined that the set R = {(1, 0), (6, 3)} from the crazy vector space, C (Example CVS), is a basis for C. By Definition D we see that C has dimension 2, dim(C) = 2.

It is possible for a vector space to have no finite bases, in which case we say it has infinite dimension. Many of the best examples of this are vector spaces of functions, which lead to constructions like Hilbert spaces. We will focus exclusively on finite-dimensional vector spaces. OK, one infinite-dimensional example, and then we will focus exclusively on finite-dimensional vector spaces.

Example VSPUD
Vector space of polynomials with unbounded degree
Define the set P by

P = {p | p(x) is a polynomial in x}

Our operations will be the same as those defined for P_n (Example VSP).

With no restrictions on the possible degrees of our polynomials, any finite set that is a candidate for spanning P will come up short. We will give a proof by contradiction (Technique CD). To this end, suppose that the dimension of P is finite, say dim(P) = n.

The set T = {1, x, x^2, …, x^n} is a linearly independent set (check this!) containing n + 1 polynomials from P. However, a basis of P will be a spanning set of P containing n vectors. This situation is a contradiction of Theorem SSLD, so our assumption that P has finite dimension is false. Thus, we say dim(P) = ∞.

Subsection RNM: Rank and Nullity of a Matrix

For any matrix, we have seen that we can associate several subspaces: the null space (Theorem NSMS), the column space (Theorem CSMS), the row space (Theorem RSMS) and the left null space (Theorem LNSMS). As vector spaces, each of these has a dimension, and for the null space and column space these dimensions are important enough to warrant names.

Definition NOM
Nullity Of a Matrix
Suppose that A is an m × n matrix. Then the nullity of A is the dimension of the null space of A, n(A) = dim(N(A)).

(This definition contains Notation NOM.)

Definition ROM
Rank Of a Matrix
Suppose that A is an m × n matrix. Then the rank of A is the dimension of the column space of A, r(A) = dim(C(A)).

(This definition contains Notation ROM.)

Example RNM
Rank and nullity of a matrix
Let’s compute the rank and nullity of

A = \begin{bmatrix}
2 & −4 & −1 & 3 & 2 & 1 & −4 \\
1 & −2 & 0 & 0 & 4 & 0 & 1 \\
−2 & 4 & 1 & 0 & −5 & −4 & −8 \\
1 & −2 & 1 & 1 & 6 & 1 & −3 \\
2 & −4 & −1 & 1 & 4 & −2 & −1 \\
−1 & 2 & 3 & −1 & 6 & 3 & −1
\end{bmatrix}

To do this, we will first row-reduce the matrix since that will help us determine bases for the null space and column space.

\begin{bmatrix}
1 & −2 & 0 & 0 & 4 & 0 & 1 \\
0 & 0 & 1 & 0 & 3 & 0 & −2 \\
0 & 0 & 0 & 1 & −1 & 0 & −3 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}

From this row-equivalent matrix in reduced row-echelon form we record D = {1, 3, 4, 6} and F = {2, 5, 7}.

For each index in D, Theorem BCS creates a single basis vector. In total the basis will have 4 vectors, so the column space of A will have dimension 4 and we write r(A) = 4.

For each index in F, Theorem BNS creates a single basis vector. In total the basis will have 3 vectors, so the null space of A will have dimension 3 and we write n(A) = 3.
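
A sketch of the same computation, assuming sympy (whose rref routine conveniently returns the pivot columns, our set D):

```python
# Example RNM by machine (a sketch, assuming sympy).
from sympy import Matrix

A = Matrix([[ 2, -4, -1,  3,  2,  1, -4],
            [ 1, -2,  0,  0,  4,  0,  1],
            [-2,  4,  1,  0, -5, -4, -8],
            [ 1, -2,  1,  1,  6,  1, -3],
            [ 2, -4, -1,  1,  4, -2, -1],
            [-1,  2,  3, -1,  6,  3, -1]])

B, pivots = A.rref()
print(pivots)         # (0, 2, 3, 5): zero-based versions of D = {1, 3, 4, 6}
r = len(pivots)       # rank, r(A) = 4
print(r, A.cols - r)  # nullity n(A) = 7 - 4 = 3 (see Theorem RPNC below)
```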

There were no accidents or coincidences in the previous example — with the row-reduced version of a matrix in hand, the rank and nullity are easy to compute.

Theorem CRN
Computing Rank and Nullity
Suppose that A is an m × n matrix and B is a row-equivalent matrix in reduced row-echelon form with r nonzero rows. Then r(A) = r and n(A) = n − r.

Proof   Theorem BCS provides a basis for the column space by choosing columns of A that correspond to the dependent variables in a description of the solutions to ℒS(A, 0). In the analysis of B, there is one dependent variable for each leading 1, one per nonzero row, or one per pivot column. So there are r column vectors in a basis for C(A).

Theorem BNS provides a basis for the null space by creating basis vectors of the null space of A from entries of B, one for each independent variable, one per column without a leading 1. So there are n − r column vectors in a basis for N(A).

Every archetype (Appendix A) that involves a matrix lists its rank and nullity. You may have noticed as you studied the archetypes that the larger the column space is the smaller the null space is. A simple corollary states this trade-off succinctly. (See Technique LC.)

Theorem RPNC
Rank Plus Nullity is Columns
Suppose that A is an m × n matrix. Then r(A) + n(A) = n.

Proof   Let r be the number of nonzero rows in a row-equivalent matrix in reduced row-echelon form. By Theorem CRN,

r(A) + n(A) = r + (n − r) = n

When we first introduced r as our standard notation for the number of nonzero rows in a matrix in reduced row-echelon form you might have thought r stood for “rows.” Not really — it stands for “rank”!

Subsection RNNM: Rank and Nullity of a Nonsingular Matrix

Let’s take a look at the rank and nullity of a square matrix.

Example RNSM
Rank and nullity of a square matrix
The matrix

E = \begin{bmatrix}
0 & 4 & −1 & 2 & 2 & 3 & 1 \\
2 & −2 & 1 & −1 & 0 & −4 & −3 \\
−2 & −3 & 9 & −3 & 9 & −1 & 9 \\
−3 & −4 & 9 & 4 & −1 & 6 & −2 \\
−3 & −4 & 6 & −2 & 5 & 9 & −4 \\
9 & −3 & 8 & −2 & −4 & 2 & 4 \\
8 & 2 & 2 & 9 & 3 & 0 & 9
\end{bmatrix}

is row-equivalent to the matrix in reduced row-echelon form,

\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}

With n = 7 columns and r = 7 nonzero rows Theorem CRN tells us the rank is r(E) = 7 and the nullity is n(E) = 7 − 7 = 0.
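
A sketch of this check, assuming sympy:

```python
# Example RNSM by machine (a sketch, assuming sympy).
from sympy import Matrix, eye

E = Matrix([[ 0,  4, -1,  2,  2,  3,  1],
            [ 2, -2,  1, -1,  0, -4, -3],
            [-2, -3,  9, -3,  9, -1,  9],
            [-3, -4,  9,  4, -1,  6, -2],
            [-3, -4,  6, -2,  5,  9, -4],
            [ 9, -3,  8, -2, -4,  2,  4],
            [ 8,  2,  2,  9,  3,  0,  9]])

assert E.rref()[0] == eye(7)   # row-equivalent to the identity matrix
print(E.rank(), 7 - E.rank())  # rank 7, nullity 0: E is nonsingular
```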

The value of either the rank or the nullity is enough to characterize a nonsingular matrix.

Theorem RNNM
Rank and Nullity of a Nonsingular Matrix
Suppose that A is a square matrix of size n. The following are equivalent.

  1. A is nonsingular.
  2. The rank of A is n, r(A) = n.
  3. The nullity of A is zero, n(A) = 0.

Proof   (1 ⇒ 2) Theorem CSNM says that if A is nonsingular then C(A) = ℂ^n. If C(A) = ℂ^n, then the column space has dimension n by Theorem DCM, so the rank of A is n.
(2 ⇒ 3) Suppose r(A) = n. Then Theorem RPNC gives

\begin{aligned}
n(A) &= n − r(A) && \text{Theorem RPNC} \\
&= n − n && \text{Hypothesis} \\
&= 0
\end{aligned}

(3 ⇒ 1) Suppose n(A) = 0, so a basis for the null space of A is the empty set. This implies that N(A) = {0}, and Theorem NMTNS says A is nonsingular.

With a new equivalence for a nonsingular matrix, we can update our list of equivalences (Theorem NME5) which now becomes a list requiring double digits to number.

Theorem NME6
Nonsingular Matrix Equivalences, Round 6
Suppose that A is a square matrix of size n. The following are equivalent.

  1. A is nonsingular.
  2. A row-reduces to the identity matrix.
  3. The null space of A contains only the zero vector, N(A) = {0}.
  4. The linear system ℒS(A, b) has a unique solution for every possible choice of b.
  5. The columns of A are a linearly independent set.
  6. A is invertible.
  7. The column space of A is ℂ^n, C(A) = ℂ^n.
  8. The columns of A are a basis for ℂ^n.
  9. The rank of A is n, r(A) = n.
  10. The nullity of A is zero, n(A) = 0.

Proof   Building on Theorem NME5 we can add two of the statements from Theorem RNNM.

Subsection READ: Reading Questions

  1. What is the dimension of the vector space P_6, the set of all polynomials of degree 6 or less?
  2. How are the rank and nullity of a matrix related?
  3. Explain why we might say that a nonsingular matrix has “full rank.”

Subsection EXC: Exercises

C20 The archetypes listed below are matrices, or systems of equations with coefficient matrices. For each, compute the nullity and rank of the matrix. This information is listed for each archetype (along with the number of columns in the matrix, so as to illustrate Theorem RPNC), and notice how it could have been computed immediately after the determination of the sets D and F associated with the reduced row-echelon form of the matrix.
Archetype A
Archetype B
Archetype C
Archetype D/Archetype E
Archetype F
Archetype G/Archetype H
Archetype I
Archetype J
Archetype K
Archetype L  
Contributed by Robert Beezer

C21 Find the dimension of the subspace W = \left\{ \begin{bmatrix} a + b \\ a + c \\ a + d \\ d \end{bmatrix} \,\middle|\, a, b, c, d ∈ ℂ \right\} of ℂ^4.
Contributed by Chris Black Solution [1122]

C22 Find the dimension of the subspace W = {a + bx + cx^2 + dx^3 | a + b + c + d = 0} of P_3.
Contributed by Chris Black Solution [1123]

C23 Find the dimension of the subspace W = \left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \,\middle|\, a + b = c,\ b + c = d,\ c + d = a \right\} of M_{22}.
Contributed by Chris Black Solution [1123]

C30 For the matrix A below, compute the dimension of the null space of A, dim(N(A)).

A = \begin{bmatrix}
2 & −1 & −3 & 11 & 9 \\
1 & 2 & 1 & −7 & −3 \\
3 & 1 & −3 & 6 & 8 \\
2 & 1 & 2 & −5 & −3
\end{bmatrix}

 
Contributed by Robert Beezer Solution [1125]

C31 The set W below is a subspace of {ℂ}^{4}. Find the dimension of W.

W = \left\langle \left\{ \begin{bmatrix} 2 \\ −3 \\ 4 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 1 \\ −2 \end{bmatrix}, \begin{bmatrix} −4 \\ −3 \\ 2 \\ 5 \end{bmatrix} \right\} \right\rangle

 
Contributed by Robert Beezer Solution [1126]

C35 Find the rank and nullity of the matrix A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 2 & 2 \\ 2 & 1 & 1 \\ −1 & 0 & 1 \\ 1 & 1 & 2 \end{bmatrix}.
Contributed by Chris Black Solution [1127]

C36 Find the rank and nullity of the matrix A = \begin{bmatrix} 1 & 2 & 1 & 1 & 1 \\ 1 & 3 & 2 & 0 & 4 \\ 1 & 2 & 1 & 1 & 1 \end{bmatrix}.
Contributed by Chris Black Solution [1127]

C37 Find the rank and nullity of the matrix A = \begin{bmatrix} 3 & 2 & 1 & 1 & 1 \\ 2 & 3 & 0 & 1 & 1 \\ −1 & 1 & 2 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 2 & −1 \end{bmatrix}.
Contributed by Chris Black Solution [1127]

C40 In Example LDP4 we determined that the set of five polynomials, T, is linearly dependent by a simple invocation of Theorem SSLD. Prove that T is linearly dependent from scratch, beginning with Definition LI.  
Contributed by Robert Beezer

M20 M_{22} is the vector space of 2 × 2 matrices. Let S_{22} denote the set of all 2 × 2 symmetric matrices. That is

S_{22} = {A ∈ M_{22} | A^t = A}

(a) Show that S_{22} is a subspace of M_{22}.
(b) Exhibit a basis for S_{22} and prove that it has the required properties.
(c) What is the dimension of S_{22}?
Contributed by Robert Beezer Solution [1128]

M21 A 2 × 2 matrix B is upper triangular if [B]_{21} = 0. Let UT_2 be the set of all 2 × 2 upper triangular matrices. Then UT_2 is a subspace of the vector space of all 2 × 2 matrices, M_{22} (you may assume this). Determine the dimension of UT_2, providing all of the necessary justifications for your answer.
Contributed by Robert Beezer Solution [1131]

Subsection SOL: Solutions

C21 Contributed by Chris Black Statement [1117]
The subspace W can be written as

\begin{aligned}
W &= \left\{ \begin{bmatrix} a + b \\ a + c \\ a + d \\ d \end{bmatrix} \,\middle|\, a, b, c, d ∈ ℂ \right\} \\
&= \left\{ a\begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \end{bmatrix} + b\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + c\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} + d\begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} \,\middle|\, a, b, c, d ∈ ℂ \right\} \\
&= \left\langle \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} \right\} \right\rangle
\end{aligned}

Since the four vectors in this spanning set are linearly independent (why?), they form a basis of W. Thus, W is a subspace of ℂ^4 with dimension 4 (and must therefore equal ℂ^4).
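
One way to answer the "why?" by machine (a sketch, assuming sympy): the matrix with these four vectors as columns has rank 4, so the set is linearly independent.

```python
# Checking independence in C21 (a sketch, assuming sympy).
from sympy import Matrix

M = Matrix([[1, 1, 0, 0],
            [1, 0, 1, 0],
            [1, 0, 0, 1],
            [0, 0, 0, 1]])  # columns are the four vectors above
print(M.rank())             # 4, so the set is a basis and dim(W) = 4
```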

C22 Contributed by Chris Black Statement [1117]
The subspace W = {a + bx + cx^2 + dx^3 | a + b + c + d = 0} can be written as

\begin{aligned}
W &= \left\{a + bx + cx^2 + (−a − b − c)x^3 \mid a, b, c ∈ ℂ\right\} \\
&= \left\{a(1 − x^3) + b(x − x^3) + c(x^2 − x^3) \mid a, b, c ∈ ℂ\right\} \\
&= \left\langle \left\{1 − x^3,\ x − x^3,\ x^2 − x^3\right\} \right\rangle
\end{aligned}

Since these vectors are linearly independent (why?), W is a subspace of P_3 with dimension 3.

C23 Contributed by Chris Black Statement [1117]
The equations specified are equivalent to the system

\begin{aligned}
a + b − c &= 0 \\
b + c − d &= 0 \\
a − c − d &= 0
\end{aligned}

The coefficient matrix of this system row-reduces to

\begin{bmatrix} 1 & 0 & 0 & −3 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & −2 \end{bmatrix}

Thus, every solution can be described with a suitable choice of d, together with a = 3d, b = −d and c = 2d, and the subspace W can be described as

W = \left\{ \begin{bmatrix} 3d & −d \\ 2d & d \end{bmatrix} \,\middle|\, d ∈ ℂ \right\} = \left\langle \left\{ \begin{bmatrix} 3 & −1 \\ 2 & 1 \end{bmatrix} \right\} \right\rangle

So, W is a subspace of M_{22} with dimension 1.

C30 Contributed by Robert Beezer Statement [1118]
Row reduce A,

A \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & −3 & −1 \\ 0 & 0 & 1 & −2 & −2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

So r = 3 for this matrix. Then

\begin{aligned}
\dim(N(A)) &= n(A) && \text{Definition NOM} \\
&= \left(n(A) + r(A)\right) − r(A) \\
&= 5 − r(A) && \text{Theorem RPNC} \\
&= 5 − 3 && \text{Theorem CRN} \\
&= 2
\end{aligned}

We could also use Theorem BNS and create a basis for N(A) with n − r = 5 − 3 = 2 vectors (because the solutions are described with 2 free variables) and arrive at the dimension as the size of this basis.
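
A quick confirmation of this count (a sketch, assuming sympy):

```python
# Checking C30 (a sketch, assuming sympy).
from sympy import Matrix

A = Matrix([[2, -1, -3, 11,  9],
            [1,  2,  1, -7, -3],
            [3,  1, -3,  6,  8],
            [2,  1,  2, -5, -3]])
print(A.rank(), len(A.nullspace()))  # 3 and 2, so dim(N(A)) = 2
```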

C31 Contributed by Robert Beezer Statement [1118]
We will appeal to Theorem BS (or you could consider this an appeal to Theorem BCS). Put the three column vectors of this spanning set into a matrix as columns and row-reduce.

\begin{bmatrix} 2 & 3 & −4 \\ −3 & 0 & −3 \\ 4 & 1 & 2 \\ 1 & −2 & 5 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & −2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

The pivot columns are D = {1, 2} so we can “keep” the vectors corresponding to the pivot columns and set

T = \left\{ \begin{bmatrix} 2 \\ −3 \\ 4 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 1 \\ −2 \end{bmatrix} \right\}

and conclude that W = \left \langle T\right \rangle and T is linearly independent. In other words, T is a basis with two vectors, so W has dimension 2.

C35 Contributed by Chris Black Statement [1119]
The row-reduced form of matrix A is \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, so the rank of A (the number of columns with leading 1’s) is 3, and the nullity is 0.

C36 Contributed by Chris Black Statement [1119]
The row-reduced form of matrix A is \begin{bmatrix} 1 & 0 & −1 & 3 & 5 \\ 0 & 1 & 1 & −1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, so the rank of A (the number of columns with leading 1’s) is 2, and the nullity is 5 − 2 = 3.

C37 Contributed by Chris Black Statement [1119]
This matrix A row reduces to the 5 × 5 identity matrix, so it has full rank. The rank of A is 5, and the nullity is 0.

M20 Contributed by Robert Beezer Statement [1120]
(a) We will use the three criteria of Theorem TSS. The zero vector of M_{22} is the zero matrix, O (Definition ZM), which is a symmetric matrix. So S_{22} is not empty, since O ∈ S_{22}.

Suppose that A and B are two matrices in S_{22}. Then we know that A^t = A and B^t = B. We want to know if A + B ∈ S_{22}, so test A + B for membership,

\begin{aligned}
\left(A + B\right)^t &= A^t + B^t && \text{Theorem TMA} \\
&= A + B && \text{$A, B ∈ S_{22}$}
\end{aligned}

So A + B is symmetric and qualifies for membership in S_{22}.

Suppose that A ∈ S_{22} and α ∈ ℂ. Is αA ∈ S_{22}? We know that A^t = A. Now check that,

\begin{aligned}
\left(αA\right)^t &= αA^t && \text{Theorem TMSM} \\
&= αA && \text{$A ∈ S_{22}$}
\end{aligned}

So αA is also symmetric and qualifies for membership in S_{22}.

With the three criteria of Theorem TSS fulfilled, we see that S_{22} is a subspace of M_{22}.

(b) An arbitrary matrix from S_{22} can be written as \begin{bmatrix} a & b \\ b & d \end{bmatrix}. We can express this matrix as

\begin{aligned}
\begin{bmatrix} a & b \\ b & d \end{bmatrix} &= \begin{bmatrix} a & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & b \\ b & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & d \end{bmatrix} \\
&= a\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + d\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}
\end{aligned}

This equation says that the set

T = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}

spans S_{22}. Is it also linearly independent?

Write a relation of linear dependence on T,

\begin{aligned}
O &= a_1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + a_2\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + a_3\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \\
\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} &= \begin{bmatrix} a_1 & a_2 \\ a_2 & a_3 \end{bmatrix}
\end{aligned}

The equality of these two matrices (Definition ME) tells us that a_1 = a_2 = a_3 = 0, and the only relation of linear dependence on T is trivial. So T is linearly independent, and hence is a basis of S_{22}.

(c) The basis T found in part (b) has size 3. So by Definition D, dim(S_{22}) = 3.
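
A computational check of parts (b) and (c) (a sketch, assuming sympy): flattening each matrix in T to a vector in ℂ^4 turns the independence question into a rank computation.

```python
# Checking M20 parts (b) and (c) (a sketch, assuming sympy).
from sympy import Matrix

T = [Matrix([[1, 0], [0, 0]]),
     Matrix([[0, 1], [1, 0]]),
     Matrix([[0, 0], [0, 1]])]

assert all(M.T == M for M in T)                   # each matrix is symmetric
V = Matrix.hstack(*(M.reshape(4, 1) for M in T))  # columns are flattened matrices
print(V.rank())  # 3, so T is independent and dim(S22) = 3
```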

M21 Contributed by Robert Beezer Statement [1120]
A typical matrix from UT_2 looks like

\begin{bmatrix} a & b \\ 0 & c \end{bmatrix}

where a, b, c ∈ ℂ are arbitrary scalars. Observing this we can then write

\begin{bmatrix} a & b \\ 0 & c \end{bmatrix} = a\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + c\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}

which says that

R = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}

is a spanning set for UT_2 (Definition TSVS). Is R linearly independent? If so, it is a basis for UT_2. So consider a relation of linear dependence on R,

α_1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + α_2\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + α_3\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = O = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}

From this equation, one rapidly arrives at the conclusion that α_1 = α_2 = α_3 = 0. So R is a linearly independent set (Definition LI), and hence is a basis (Definition B) for UT_2. Now, we simply count up the size of the set R to see that the dimension of UT_2 is dim(UT_2) = 3.