
Section SS Spanning Sets

In this section we will provide an extremely compact way to describe an infinite set of vectors, making use of linear combinations. This will give us a convenient way to describe the solution set of a linear system, the null space of a matrix, and many other sets of vectors.

Subsection SSV Span of a Set of Vectors

In Example VFSAL we saw the solution set of a homogeneous system described as all possible linear combinations of two particular vectors. This is a useful way to construct or describe infinite sets of vectors, so we encapsulate the idea in a definition.

Definition SSCV. Span of a Set of Column Vectors.

Given a set of vectors \(S=\{\vectorlist{u}{p}\}\text{,}\) their span, \(\spn{S}\text{,}\) is the set of all possible linear combinations of \(\vectorlist{u}{p}\text{.}\) Symbolically,

\begin{align*} \spn{S}&=\setparts{\lincombo{\alpha}{u}{p}}{\alpha_i\in\complexes,\,1\leq i\leq p}\\ &=\setparts{\sum_{i=1}^{p}\alpha_i\vect{u}_i}{\alpha_i\in\complexes,\,1\leq i\leq p}\text{.} \end{align*}

The span is just a set of vectors, though in all but one situation it is an infinite set. (Just when is it not infinite? See Exercise SS.T30.) So we start with a finite collection of vectors \(S\) (\(p\) of them to be precise), and use this finite set to describe an infinite set of vectors, \(\spn{S}\text{.}\) Confusing the finite set \(S\) with the infinite set \(\spn{S}\) is one of the most persistent problems in understanding introductory linear algebra. We will see this construction repeatedly, so let us work through some examples to get comfortable with it. The most obvious question about a set is if a particular item of the correct type is in the set, or not in the set.

Consider the set of 5 vectors, \(S\text{,}\) from \(\complex{4}\)

\begin{equation*} S=\set{ \colvector{1 \\ 1 \\ 3 \\ 1},\, \colvector{2 \\ 1 \\ 2 \\ -1},\, \colvector{7 \\ 3 \\ 5 \\ -5},\, \colvector{1 \\ 1 \\ -1 \\ 2},\, \colvector{-1 \\ 0 \\ 9 \\ 0} } \end{equation*}

and consider the infinite set of vectors \(\spn{S}\) formed from all possible linear combinations of the elements of \(S\text{.}\) Here are four vectors we definitely know are elements of \(\spn{S}\text{,}\) since we will construct them in accordance with Definition SSCV.

\begin{align*} \vect{w}&= (2)\colvector{1 \\ 1 \\ 3 \\ 1}+ (1)\colvector{2 \\ 1 \\ 2 \\ -1}+ (-1)\colvector{7 \\ 3 \\ 5 \\ -5}+ (2)\colvector{1 \\ 1 \\ -1 \\ 2}+ (3)\colvector{-1 \\ 0 \\ 9 \\ 0} = \colvector{-4\\2\\28\\10}\\ \vect{x}&= (5)\colvector{1 \\ 1 \\ 3 \\ 1}+ (-6)\colvector{2 \\ 1 \\ 2 \\ -1}+ (-3)\colvector{7 \\ 3 \\ 5 \\ -5}+ (4)\colvector{1 \\ 1 \\ -1 \\ 2}+ (2)\colvector{-1 \\ 0 \\ 9 \\ 0} = \colvector{-26\\-6\\2\\34}\\ \vect{y}&= (1)\colvector{1 \\ 1 \\ 3 \\ 1}+ (0)\colvector{2 \\ 1 \\ 2 \\ -1}+ (1)\colvector{7 \\ 3 \\ 5 \\ -5}+ (0)\colvector{1 \\ 1 \\ -1 \\ 2}+ (1)\colvector{-1 \\ 0 \\ 9 \\ 0} = \colvector{7\\4\\17\\-4}\\ \vect{z}&= (0)\colvector{1 \\ 1 \\ 3 \\ 1}+ (0)\colvector{2 \\ 1 \\ 2 \\ -1}+ (0)\colvector{7 \\ 3 \\ 5 \\ -5}+ (0)\colvector{1 \\ 1 \\ -1 \\ 2}+ (0)\colvector{-1 \\ 0 \\ 9 \\ 0} = \colvector{0\\0\\0\\0} \end{align*}
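Computations like these are easy to replicate by machine, and Sage will appear later in this section. As a plain-Python sketch of the same arithmetic (the helper `lin_combo` is our own name, not part of the text), the four combinations above can be checked directly:

```python
# The set S, with each vector from C^4 stored as a list of entries.
S = [[1, 1, 3, 1], [2, 1, 2, -1], [7, 3, 5, -5], [1, 1, -1, 2], [-1, 0, 9, 0]]

def lin_combo(scalars, vectors):
    """Entrywise sum of scalars[i] * vectors[i]."""
    return [sum(a * v[k] for a, v in zip(scalars, vectors))
            for k in range(len(vectors[0]))]

w = lin_combo([2, 1, -1, 2, 3], S)   # [-4, 2, 28, 10]
x = lin_combo([5, -6, -3, 4, 2], S)  # [-26, -6, 2, 34]
y = lin_combo([1, 0, 1, 0, 1], S)    # [7, 4, 17, -4]
z = lin_combo([0, 0, 0, 0, 0], S)    # the zero vector
print(w, x, y, z)
```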

The purpose of a set is to collect objects with some common property, and to exclude objects without that property. So the most fundamental question about a set is if a given object is an element of the set or not. Let us learn more about \(\spn{S}\) by investigating which vectors are elements of the set, and which are not.

First, is \(\vect{u}=\colvector{-15\\-6\\19\\5}\) an element of \(\spn{S}\text{?}\) We are asking if there are scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\) such that

\begin{equation*} \alpha_1\colvector{1 \\ 1 \\ 3 \\ 1}+ \alpha_2\colvector{2 \\ 1 \\ 2 \\ -1}+ \alpha_3\colvector{7 \\ 3 \\ 5 \\ -5}+ \alpha_4\colvector{1 \\ 1 \\ -1 \\ 2}+ \alpha_5\colvector{-1 \\ 0 \\ 9 \\ 0} =\vect{u} =\colvector{-15\\-6\\19\\5}\text{?} \end{equation*}

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix and reduced row-echelon form

\begin{align*} \begin{bmatrix} 1 & 2 & 7 & 1 & -1 & -15 \\ 1 & 1 & 3 & 1 & 0 & -6 \\ 3 & 2 & 5 & -1 & 9 & 19 \\ 1 & -1 & -5 & 2 & 0 & 5 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & -1 & 0 & 3 & 10 \\ 0 & \leading{1} & 4 & 0 & -1 & -9 \\ 0 & 0 & 0 & \leading{1} & -2 & -7 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}

At this point, we see that the system is consistent (Theorem RCLS), so we know there is a solution for the five scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\text{.}\) This is enough evidence for us to say that \(\vect{u}\in\spn{S}\text{.}\) If we wished further evidence, we could compute an actual solution, say

\begin{align*} \alpha_1&=2 & \alpha_2&=1 & \alpha_3&=-2 & \alpha_4&=-3 & \alpha_5&=2\text{.} \end{align*}

This particular solution allows us to write

\begin{equation*} (2)\colvector{1 \\ 1 \\ 3 \\ 1}+ (1)\colvector{2 \\ 1 \\ 2 \\ -1}+ (-2)\colvector{7 \\ 3 \\ 5 \\ -5}+ (-3)\colvector{1 \\ 1 \\ -1 \\ 2}+ (2)\colvector{-1 \\ 0 \\ 9 \\ 0} =\vect{u} =\colvector{-15\\-6\\19\\5} \end{equation*}

making it even more obvious that \(\vect{u}\in\spn{S}\text{.}\)
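As a quick check of the arithmetic, this particular solution can be verified in a few lines of plain Python (our own sketch, not part of the development):

```python
# The five vectors of S and the proposed scalars.
S = [[1, 1, 3, 1], [2, 1, 2, -1], [7, 3, 5, -5], [1, 1, -1, 2], [-1, 0, 9, 0]]
alphas = [2, 1, -2, -3, 2]

# Form the linear combination entry by entry; it should equal u.
u = [sum(a * v[k] for a, v in zip(alphas, S)) for k in range(4)]
print(u)  # [-15, -6, 19, 5]
```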

Let us do it again. Is \(\vect{v}=\colvector{3\\1\\2\\-1}\) an element of \(\spn{S}\text{?}\) We are asking if there are scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\) such that

\begin{equation*} \alpha_1\colvector{1 \\ 1 \\ 3 \\ 1}+ \alpha_2\colvector{2 \\ 1 \\ 2 \\ -1}+ \alpha_3\colvector{7 \\ 3 \\ 5 \\ -5}+ \alpha_4\colvector{1 \\ 1 \\ -1 \\ 2}+ \alpha_5\colvector{-1 \\ 0 \\ 9 \\ 0} =\vect{v} =\colvector{3\\1\\2\\-1}\text{?} \end{equation*}

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix and reduced row-echelon form

\begin{align*} \begin{bmatrix} 1 & 2 & 7 & 1 & -1 & 3 \\ 1 & 1 & 3 & 1 & 0 & 1 \\ 3 & 2 & 5 & -1 & 9 & 2 \\ 1 & -1 & -5 & 2 & 0 & -1 \end{bmatrix} &\rref \begin{bmatrix} \leading{1} & 0 & -1 & 0 & 3 & 0 \\ 0 & \leading{1} & 4 & 0 & -1 & 0 \\ 0 & 0 & 0 & \leading{1} & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{align*}

At this point, we see that the system is inconsistent by Theorem RCLS, so we know there is not a solution for the five scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5\text{.}\) This is enough evidence for us to say that \(\vect{v}\not\in\spn{S}\text{.}\) End of story.

Begin with the finite set of three vectors of size \(3\)

\begin{equation*} S=\{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3\} =\left\{ \colvector{1\\2\\1},\,\colvector{-1\\1\\1},\,\colvector{2\\1\\0} \right\} \end{equation*}

and consider the infinite set \(\spn{S}\text{.}\) The vectors of \(S\) could have been chosen to be anything, but for reasons that will become clear later, we have chosen the three columns of the coefficient matrix in Archetype A.

First, as an example, note that

\begin{equation*} \vect{v}=(5)\colvector{1\\2\\1}+(-3)\colvector{-1\\1\\1}+(7)\colvector{2\\1\\0}=\colvector{22\\14\\2} \end{equation*}

is in \(\spn{S}\text{,}\) since it is a linear combination of \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3\text{.}\) We write this succinctly as \(\vect{v}\in\spn{S}\text{.}\) There is nothing magical about the scalars \(\alpha_1=5,\,\alpha_2=-3,\,\alpha_3=7\text{;}\) they could have been chosen to be anything. So repeat this part of the example yourself, using different values of \(\alpha_1,\,\alpha_2,\,\alpha_3\text{.}\) What happens if you choose all three scalars to be zero?

So we know how to quickly construct sample elements of the set \(\spn{S}\text{.}\) A slightly different question arises when you are handed a vector of the correct size and asked if it is an element of \(\spn{S}\text{.}\) For example, is \(\vect{w}=\colvector{1\\8\\5}\) in \(\spn{S}\text{?}\) More succinctly, \(\vect{w}\in\spn{S}\text{?}\)

To answer this question, we will look for scalars \(\alpha_1,\,\alpha_2,\,\alpha_3\) so that

\begin{align*} \alpha_1\vect{u}_1+\alpha_2\vect{u}_2+\alpha_3\vect{u}_3&=\vect{w}\\ \end{align*}

By Theorem SLSLC solutions to this vector equation are solutions to the system of equations

\begin{align*} \alpha_1-\alpha_2+2\alpha_3&=1\\ 2\alpha_1+\alpha_2+\alpha_3&=8\\ \alpha_1+\alpha_2&=5\\ \end{align*}

Building the augmented matrix for this linear system, and row-reducing, gives

\begin{align*} \begin{bmatrix} \leading{1} & 0 & 1 & 3\\ 0 & \leading{1} & -1 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{align*}

This system has infinitely many solutions (there is a free variable in \(x_3\)), but all we need is one solution vector. The solution \(\alpha_1 = 2\text{,}\) \(\alpha_2 = 3\text{,}\) \(\alpha_3 = 1\text{,}\) tells us that

\begin{equation*} (2)\vect{u}_1+(3)\vect{u}_2+(1)\vect{u}_3=\vect{w} \end{equation*}

so we are convinced that \(\vect{w}\) really is in \(\spn{S}\text{.}\) Notice that there are an infinite number of ways to answer this question affirmatively.

We could choose a different solution, this time choosing the free variable to be zero. Then \(\alpha_1 = 3\text{,}\) \(\alpha_2 = 2\text{,}\) \(\alpha_3 = 0\text{,}\) shows us that

\begin{equation*} (3)\vect{u}_1+(2)\vect{u}_2+(0)\vect{u}_3=\vect{w}\text{.} \end{equation*}

Verifying the arithmetic in this second solution will make it obvious that \(\vect{w}\) is in this span. And of course, we now realize that there are an infinite number of ways to realize \(\vect{w}\) as an element of \(\spn{S}\text{.}\)
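In fact, from the reduced matrix we can read off every solution at once: \(\alpha_1=3-x_3\) and \(\alpha_2=2+x_3\text{,}\) for any value of the free variable \(x_3\text{.}\) A short Python sketch (the helper `combo` is ours) checks both named solutions and one more arbitrary choice of the free variable:

```python
# The three vectors of S (the columns of the coefficient matrix of Archetype A).
u1, u2, u3 = [1, 2, 1], [-1, 1, 1], [2, 1, 0]

def combo(a1, a2, a3):
    """The linear combination a1*u1 + a2*u2 + a3*u3."""
    return [a1 * p + a2 * q + a3 * r for p, q, r in zip(u1, u2, u3)]

print(combo(2, 3, 1))          # [1, 8, 5], the first solution
print(combo(3, 2, 0))          # [1, 8, 5], the second solution
print(combo(3 - 7, 2 + 7, 7))  # [1, 8, 5] again, with free variable x3 = 7
```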

Let us ask the same type of question again, but this time with \(\vect{y}=\colvector{2\\4\\3}\text{,}\) i.e. is \(\vect{y}\in\spn{S}\text{?}\)

So we will look for scalars \(\alpha_1,\,\alpha_2,\,\alpha_3\) so that

\begin{align*} \alpha_1\vect{u}_1+\alpha_2\vect{u}_2+\alpha_3\vect{u}_3&=\vect{y}\\ \end{align*}

By Theorem SLSLC solutions to this vector equation are the solutions to the system of equations

\begin{align*} \alpha_1-\alpha_2+2\alpha_3&=2\\ 2\alpha_1+\alpha_2+\alpha_3&=4\\ \alpha_1+\alpha_2&=3\\ \end{align*}

Building the augmented matrix for this linear system, and row-reducing, gives

\begin{align*} \begin{bmatrix} \leading{1} & 0 & 1 & 0\\ 0 & \leading{1} & -1 & 0\\ 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{align*}

This system is inconsistent (there is a pivot column in the last column, Theorem RCLS), so there are no scalars \(\alpha_1,\,\alpha_2,\,\alpha_3\) that will create a linear combination of \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3\) that equals \(\vect{y}\text{.}\) More precisely, \(\vect{y}\not\in\spn{S}\text{.}\)

There are three things to observe in this example. (1) It is easy to construct vectors in \(\spn{S}\text{.}\) (2) It is possible that some vectors are in \(\spn{S}\) (e.g. \(\vect{w}\)), while others are not (e.g. \(\vect{y}\)). (3) Deciding if a given vector is in \(\spn{S}\) leads to solving a linear system of equations and asking if the system is consistent.

With a computer program in hand to solve systems of linear equations, could you create a program to decide if a vector was, or was not, in the span of a given set of vectors? Is this art or science?
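One way to answer that programming question is sketched below in Python rather than Sage. The hypothetical helper `in_span` uses exact rational arithmetic, so Theorem RCLS can be applied literally: row-reduce the augmented matrix and ask whether the final column is a pivot column.

```python
from fractions import Fraction

def in_span(vectors, b):
    """Decide membership of b in the span of a list of column vectors by
    row-reducing the augmented matrix; b is in the span iff the last
    column is not a pivot column (Theorem RCLS)."""
    m, n = len(b), len(vectors)
    aug = [[Fraction(vectors[j][i]) for j in range(n)] + [Fraction(b[i])]
           for i in range(m)]
    r = 0
    for c in range(n + 1):
        pr = next((i for i in range(r, m) if aug[i][c] != 0), None)
        if pr is None:
            continue
        if c == n:          # pivot in the constants column: inconsistent
            return False
        aug[r], aug[pr] = aug[pr], aug[r]
        aug[r] = [x / aug[r][c] for x in aug[r]]
        for i in range(m):
            if i != r and aug[i][c] != 0:
                aug[i] = [a - aug[i][c] * p for a, p in zip(aug[i], aug[r])]
        r += 1
    return True

S = [[1, 2, 1], [-1, 1, 1], [2, 1, 0]]   # columns of Archetype A
print(in_span(S, [1, 8, 5]))   # w: True
print(in_span(S, [2, 4, 3]))   # y: False
```

Mostly science, it would seem, though choosing exact arithmetic over floating point is a design decision worth pondering.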

This example was built on vectors from the columns of the coefficient matrix of Archetype A. Study the determination that \(\vect{v}\in\spn{S}\) and see if you can connect it with some of the other properties of Archetype A.

Having analyzed Archetype A in Example SCAA, we will of course subject Archetype B to a similar investigation.

Begin with the finite set of three vectors of size \(3\) that are the columns of the coefficient matrix in Archetype B,

\begin{equation*} R=\{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3\} =\left\{ \colvector{-7\\5\\1},\,\colvector{-6\\5\\0},\,\colvector{-12\\7\\4} \right\} \end{equation*}

and consider the infinite set \(\spn{R}\text{.}\)

First, as an example, note that

\begin{equation*} \vect{x}=(2)\colvector{-7\\5\\1}+(4)\colvector{-6\\5\\0}+(-3)\colvector{-12\\7\\4}=\colvector{-2\\9\\-10} \end{equation*}

is in \(\spn{R}\text{,}\) since it is a linear combination of \(\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3\text{.}\) In other words, \(\vect{x}\in\spn{R}\text{.}\) Try some different values of \(\alpha_1,\,\alpha_2,\,\alpha_3\) yourself, and see what vectors you can create as elements of \(\spn{R}\text{.}\)

Now ask if a given vector is an element of \(\spn{R}\text{.}\) For example, is \(\vect{z}=\colvector{-33\\24\\5}\) in \(\spn{R}\text{?}\) Is \(\vect{z}\in\spn{R}\text{?}\)

To answer this question, we will look for scalars \(\alpha_1,\,\alpha_2,\,\alpha_3\) so that

\begin{align*} \alpha_1\vect{v}_1+\alpha_2\vect{v}_2+\alpha_3\vect{v}_3&=\vect{z}\\ \end{align*}

By Theorem SLSLC solutions to this vector equation are the solutions to the system of equations

\begin{align*} -7\alpha_1-6\alpha_2-12\alpha_3&=-33\\ 5\alpha_1+5\alpha_2+7\alpha_3&=24\\ \alpha_1+4\alpha_3&=5\\ \end{align*}

Building the augmented matrix for this linear system, and row-reducing, gives

\begin{align*} \begin{bmatrix} \leading{1}&0&0&-3\\ 0&\leading{1}&0&5\\ 0&0&\leading{1}&2 \end{bmatrix}\text{.} \end{align*}

This system has a unique solution, \(\alpha_1 = -3\text{,}\) \(\alpha_2 = 5\text{,}\) \(\alpha_3 = 2\text{,}\) telling us that

\begin{equation*} (-3)\vect{v}_1+(5)\vect{v}_2+(2)\vect{v}_3=\vect{z} \end{equation*}

so we are convinced that \(\vect{z}\) really is in \(\spn{R}\text{.}\) Notice that in this case we have only one way to answer the question affirmatively since the solution is unique.

Let us ask about another vector, say is \(\vect{x}=\colvector{-7\\8\\-3}\) in \(\spn{R}\text{?}\) Is \(\vect{x}\in\spn{R}\text{?}\)

We desire scalars \(\alpha_1,\,\alpha_2,\,\alpha_3\) so that

\begin{align*} \alpha_1\vect{v}_1+\alpha_2\vect{v}_2+\alpha_3\vect{v}_3&=\vect{x}\\ \end{align*}

By Theorem SLSLC solutions to this vector equation are the solutions to the system of equations

\begin{align*} -7\alpha_1-6\alpha_2-12\alpha_3&=-7\\ 5\alpha_1+5\alpha_2+7\alpha_3&=8\\ \alpha_1+4\alpha_3&=-3\\ \end{align*}

Building the augmented matrix for this linear system, and row-reducing, gives

\begin{align*} \begin{bmatrix} \leading{1} & 0 & 0 & 1 \\ 0 & \leading{1} & 0 & 2 \\ 0 & 0 & \leading{1} & -1 \end{bmatrix}\text{.} \end{align*}

This system has a unique solution, \(\alpha_1 = 1\text{,}\) \(\alpha_2 = 2\text{,}\) \(\alpha_3 = -1\text{,}\) telling us that

\begin{equation*} (1)\vect{v}_1+(2)\vect{v}_2+(-1)\vect{v}_3=\vect{x} \end{equation*}

so we are convinced that \(\vect{x}\) really is in \(\spn{R}\text{.}\) Notice that in this case we again have only one way to answer the question affirmatively since the solution is again unique.

We could continue to test other vectors for membership in \(\spn{R}\text{,}\) but there is no point. A question about membership in \(\spn{R}\) inevitably leads to a system of three equations in the three variables \(\alpha_1,\,\alpha_2,\,\alpha_3\) with a coefficient matrix whose columns are the vectors \(\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3\text{.}\) This particular coefficient matrix is nonsingular, so by Theorem NMUS, the system is guaranteed to have a solution. (This solution is unique, but that is not critical here.) So no matter which vector we might have chosen for \(\vect{z}\text{,}\) we would have been certain to discover that it was an element of \(\spn{R}\text{.}\) Stated differently, every vector of size 3 is in \(\spn{R}\text{,}\) or \(\spn{R}=\complex{3}\text{.}\)
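This reasoning is easy to confirm numerically. Assuming NumPy is available, the sketch below checks that the coefficient matrix of Archetype B is nonsingular (nonzero determinant) and recovers the unique solution for \(\vect{z}\text{:}\)

```python
import numpy as np

# Columns of the coefficient matrix of Archetype B, assembled as a 3x3 array.
B = np.array([[-7, -6, -12],
              [5, 5, 7],
              [1, 0, 4]], dtype=float)

# Nonsingular: the determinant is nonzero (-2, in fact), so B @ alpha = b
# has a unique solution for every right-hand side b; span(R) is all of 3-space.
assert abs(np.linalg.det(B)) > 1e-9

# Recover the unique solution for z = (-33, 24, 5): alpha = (-3, 5, 2).
alpha = np.linalg.solve(B, [-33, 24, 5])
print(alpha)
```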

Compare this example with Example SCAA, and see if you can connect \(\vect{z}\) with some aspects of the write-up for Archetype B.

A strength of Sage is the ability to create infinite sets, such as the span of a set of vectors, from finite descriptions. In other words, we can take a finite set with just a handful of vectors and Sage will create the set that is the span of these vectors, which is an infinite set. Here we will show you how to do this, and show how you can use the results. The key command is the vector space method .span().

Most of the above should be fairly self-explanatory, but a few comments are in order. The span, W, is created in the first compute cell with the .span() method, which accepts a list of vectors and needs to be employed as a method of a vector space. The information about W printed when we just input the span itself may be somewhat confusing, and as before, we need to learn some more theory to totally understand it all. For now you can see the number system (Rational Field) and the number of entries in each vector (degree 4). The dimension may be more complicated than you first suspect.

Sets are all about membership, and we see that we can easily check membership in a span. We know the vector x will be in the span W since we built it as a linear combination of v1 and v2. The vectors y and u are a bit more mysterious, but Sage can answer the membership question easily for both.

The last compute cell is something new. The symbol <= is meant here to be the “subset of” relation, i.e. a slight abuse of the mathematical symbol \(\subseteq\text{,}\) and then we are not surprised to learn that W really is a subset of V.

It is important to realize that the span is a construction that begins with a finite set, yet creates an infinite set. With a loop (the for command) and the .random_element() vector space method we can create many, but not all, of the elements of a span. In the examples below, you can edit the total number of random vectors produced, and if you create too many you may make it difficult (or impossible) to wade through them all.

Each example is designed to illustrate some aspect of the behavior of the span command and to provoke some questions. So put on your mathematician's hat, evaluate the compute cells to create some sample elements, and then study the output carefully looking for patterns and maybe even conjecture some explanations for the behavior. The puzzle gets just a bit harder for each new example. (We use the Sequence() command to get nicely-formatted line-by-line output of the list, and notice that we are only providing a portion of the output here. You will want to evaluate the computation of vecs and then evaluate the subsequent cell for maximum effect.)

With the notion of a span, we can expand our techniques for checking the consistency of a linear system. Theorem SLSLC tells us a system is consistent if and only if the vector of constants is a linear combination of the columns of the coefficient matrix. This is because Theorem SLSLC says that any solution to the system will provide a linear combination of the columns of the coefficient matrix that equals the vector of constants. So consistency of a system is equivalent to the membership of the vector of constants in the span of the columns of the coefficient matrix. Read that last sentence again carefully. We will see this idea again, but more formally, in Theorem CSCS.
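As an illustration of this equivalence (our own sketch; `consistent` is a hypothetical helper), consistency can also be phrased with ranks: adjoining the vector of constants as a new column does not raise the rank exactly when that vector is already in the column span. Applied to the system from the first example of this section:

```python
import numpy as np

# Columns of S from the opening example, as the coefficient matrix A.
A = np.array([[1, 2, 7, 1, -1],
              [1, 1, 3, 1, 0],
              [3, 2, 5, -1, 9],
              [1, -1, -5, 2, 0]], dtype=float)

def consistent(A, b):
    """LS(A, b) is consistent iff b is in the span of the columns of A,
    i.e. iff adjoining b as a new column does not raise the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(consistent(A, [-15, -6, 19, 5]))  # True: u was in the span
print(consistent(A, [3, 1, 2, -1]))     # False: v was not
```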

We will reprise Sage SLC, which is based on Archetype F. We again make use of the matrix method .columns() to get all of the columns into a list at once.

You could try to find an example of a vector of constants which would create an inconsistent system with this coefficient matrix. But there is no such thing. Here is why — the null space of coeff is trivial, just the zero vector.

So with a trivial null space, we know the matrix is nonsingular by Theorem NMTNS. Then a nonsingular coefficient matrix always creates a system with a unique solution by Theorem NMUS.

The system is consistent, as we have shown, so we can apply Theorem PSPHS. We can read Theorem PSPHS as saying any two different solutions of the system will differ by an element of the null space, and the only possibility for this null space vector is just the zero vector. In other words, any two solutions cannot be different, so the solution is unique.

Subsection SSNS Spanning Sets of Null Spaces

We saw in Example VFSAL that when a system of equations is homogeneous the solution set can be expressed in the form described by Theorem VFSLS where the vector \(\vect{c}\) is the zero vector. We can essentially ignore this vector, so that the remainder of the typical expression for a solution looks like an arbitrary linear combination, where the scalars are the free variables and the vectors are \(\vectorlist{u}{n-r}\text{.}\) Which sounds a lot like a span. This is the substance of the next theorem.

First, note that when \(n=r\text{,}\) the set in the span is empty, and the conclusion is true. So we assume for the remainder that \(n\gt r\text{.}\) (You might look at the discussion in Solution T20.1.)

Consider the homogeneous system with \(A\) as a coefficient matrix, \(\homosystem{A}\text{.}\) Its set of solutions, \(S\text{,}\) is by Definition NSM, the null space of \(A\text{,}\) \(\nsp{A}\text{.}\) Let \(B^{\prime}\) denote the result of row-reducing the augmented matrix of this homogeneous system. Since the system is homogeneous, the final column of the augmented matrix will be all zeros, and after any number of row operations (Definition RO), the column will still be all zeros. So \(B^{\prime}\) has a final column that is totally zeros.

Now apply Theorem VFSLS to \(B^{\prime}\text{,}\) after noting that our homogeneous system must be consistent (Theorem HSC). The vector \(\vect{c}\) has zeros for each entry that has an index in \(F\text{.}\) For entries with their index in \(D\text{,}\) the value is \(-\matrixentry{B^{\prime}}{k,{n+1}}\text{,}\) but for \(B^{\prime}\) any entry in the final column (index \(n+1\)) is zero. So \(\vect{c}=\zerovector\text{.}\) The vectors \(\vect{z}_j\text{,}\) \(1\leq j\leq n-r\) are identical to the vectors \(\vect{u}_j\text{,}\) \(1\leq j\leq n-r\) described in Theorem VFSLS. Putting it all together and applying Definition SSCV in the final step,

\begin{align*} \nsp{A} &=S\\ &=\setparts{ \vect{c}+\alpha_1\vect{u}_1+\alpha_2\vect{u}_2+\alpha_3\vect{u}_3+\cdots+\alpha_{n-r}\vect{u}_{n-r} }{ \alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_{n-r}\in\complexes }\\ &=\setparts{ \alpha_1\vect{u}_1+\alpha_2\vect{u}_2+\alpha_3\vect{u}_3+\cdots+\alpha_{n-r}\vect{u}_{n-r} }{ \alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_{n-r}\in\complexes }\\ &=\spn{\set{\vectorlist{z}{n-r}}}\text{.} \end{align*}

Notice that the hypotheses of Theorem VFSLS and Theorem SSNS are slightly different. In the former, \(B\) is the row-reduced version of an augmented matrix of a linear system, while in the latter, \(B\) is the row-reduced version of an arbitrary matrix. Understanding this subtlety now will avoid confusion later.

Find a set of vectors, \(S\text{,}\) so that the null space of the matrix \(A\) below is the span of \(S\text{,}\) that is, \(\spn{S}=\nsp{A}\text{.}\)

\begin{equation*} A= \begin{bmatrix} 1 & 3 & 3 & -1 & -5\\ 2 & 5 & 7 & 1 & 1\\ 1 & 1 & 5 & 1 & 5\\ -1 & -4 & -2 & 0 & 4 \end{bmatrix} \end{equation*}

The null space of \(A\) is the set of all solutions to the homogeneous system \(\homosystem{A}\text{.}\) If we find the vector form of the solutions to this homogeneous system (Theorem VFSLS) then the vectors \(\vect{u}_j\text{,}\) \(1\leq j\leq n-r\) in the linear combination are exactly the vectors \(\vect{z}_j\text{,}\) \(1\leq j\leq n-r\) described in Theorem SSNS. So we can mimic Example VFSAL to arrive at these vectors (rather than being a slave to the formulas in the statement of the theorem).

Begin by row-reducing \(A\text{.}\) The result is

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 6 & 0 & 4\\ 0 & \leading{1} & -1 & 0 & -2\\ 0 & 0 & 0 & \leading{1} & 3\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

With \(D=\set{1,\,2,\,4}\) and \(F=\set{3,\,5}\) we recognize that \(x_3\) and \(x_5\) are free variables and we can interpret each nonzero row as an expression for the dependent variables \(x_1\text{,}\) \(x_2\text{,}\) \(x_4\) (respectively) in the free variables \(x_3\) and \(x_5\text{.}\) With this we can write the vector form of a solution vector as

\begin{equation*} \colvector{x_1\\x_2\\x_3\\x_4\\x_5}= \colvector{-6x_3-4x_5\\x_3+2x_5\\x_3\\-3x_5\\x_5}= x_3\colvector{-6\\1\\1\\0\\0}+ x_5\colvector{-4\\2\\0\\-3\\1}\text{.} \end{equation*}

Then in the notation of Theorem SSNS

\begin{align*} \vect{z}_1&=\colvector{-6\\1\\1\\0\\0} & \vect{z}_2&=\colvector{-4\\2\\0\\-3\\1} & \nsp{A}&=\spn{\set{\vect{z}_1,\,\vect{z}_2}} =\spn{\set{\colvector{-6\\1\\1\\0\\0},\,\colvector{-4\\2\\0\\-3\\1}}}\text{.} \end{align*}
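It is worth confirming that these two vectors really are in the null space. A plain-Python check (the helper `matvec` is ours) multiplies \(A\) by each and lands on the zero vector:

```python
# The matrix A from the example, and the two proposed null space vectors.
A = [[1, 3, 3, -1, -5],
     [2, 5, 7, 1, 1],
     [1, 1, 5, 1, 5],
     [-1, -4, -2, 0, 4]]
z1 = [-6, 1, 1, 0, 0]
z2 = [-4, 2, 0, -3, 1]

def matvec(A, x):
    """Matrix-vector product, one dot product per row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

print(matvec(A, z1), matvec(A, z2))  # both [0, 0, 0, 0]
```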

Let us express the null space of \(A\) as the span of a set of vectors, applying Theorem SSNS as economically as possible, without reference to the underlying homogeneous system of equations (in contrast to Example SSNS).

\begin{equation*} A= \begin{bmatrix} 2 & 1 & 5 & 1 & 5 & 1 \\ 1 & 1 & 3 & 1 & 6 & -1 \\ -1 & 1 & -1 & 0 & 4 & -3 \\ -3 & 2 & -4 & -4 & -7 & 0 \\ 3 & -1 & 5 & 2 & 2 & 3 \end{bmatrix} \end{equation*}

Theorem SSNS creates vectors for the span by first row-reducing the matrix in question. The row-reduced version of \(A\) is

\begin{equation*} B= \begin{bmatrix} \leading{1} & 0 & 2 & 0 & -1 & 2 \\ 0 & \leading{1} & 1 & 0 & 3 & -1 \\ 0 & 0 & 0 & \leading{1} & 4 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

We will mechanically follow the prescription of Theorem SSNS. Here we go, in two big steps.

First, the non-pivot columns have indices \(F=\set{3,\,5,\,6}\text{,}\) so we will construct the \(n-r=6-3=3\) vectors with a pattern of zeros and ones dictated by the indices in \(F\text{.}\) This is the realization of the first two lines of the three-case definition of the vectors \(\vect{z}_j\text{,}\) \(1\leq j\leq n-r\text{.}\)

\begin{align*} \vect{z}_1&=\colvector{\\ \\ 1\\ \\0 \\ 0} & \vect{z}_2&=\colvector{\\ \\ 0\\ \\ 1\\ 0} & \vect{z}_3&=\colvector{\\ \\ 0\\ \\ 0\\ 1} \end{align*}

Each of these vectors arises due to the presence of a column that is not a pivot column. The remaining entries of each vector are the entries of the non-pivot column, negated, and distributed into the empty slots in order (these slots have indices in the set \(D\text{,}\) so also refer to pivot columns). This is the realization of the third line of the three-case definition of the vectors \(\vect{z}_j\text{,}\) \(1\leq j\leq n-r\text{.}\)

\begin{align*} \vect{z}_1&=\colvector{-2\\ -1\\ 1\\ 0 \\0 \\ 0} & \vect{z}_2&=\colvector{1\\ -3\\ 0\\ -4\\ 1\\ 0} & \vect{z}_3&=\colvector{-2\\ 1\\ 0\\ 2\\ 0\\ 1} \end{align*}

So, by Theorem SSNS, we have

\begin{equation*} \nsp{A} = \spn{\set{\vect{z}_1,\,\vect{z}_2,\,\vect{z}_3}} = \spn{\set{ \colvector{-2\\ -1\\ 1\\ 0 \\0 \\ 0},\, \colvector{1\\ -3\\ 0\\ -4\\ 1\\ 0},\, \colvector{-2\\ 1\\ 0\\ 2\\ 0\\ 1} }}\text{.} \end{equation*}

We know that the null space of \(A\) is the solution set of the homogeneous system \(\homosystem{A}\text{,}\) but nowhere in this application of Theorem SSNS have we found occasion to reference the variables or equations of this system. These details are all buried in the proof of Theorem SSNS.
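The two big steps above are entirely mechanical, so they are easy to express in code. Here is a hedged Python sketch of the recipe (index sets written 0-based, as the lists `D` and `F`), followed by a check that the original matrix sends each resulting vector to the zero vector:

```python
# The row-reduced matrix B, its pivot columns and non-pivot columns
# (0-based here, so D = {1, 2, 4} and F = {3, 5, 6} become these lists).
B = [[1, 0, 2, 0, -1, 2],
     [0, 1, 1, 0, 3, -1],
     [0, 0, 0, 1, 4, -2]]
n = 6
D = [0, 1, 3]
F = [c for c in range(n) if c not in D]  # [2, 4, 5]

# Theorem SSNS's recipe: a 1 in slot f, 0 in the other free slots, and the
# negated entries of column f (from the pivot rows) in the pivot slots.
Z = []
for f in F:
    z = [0] * n
    z[f] = 1
    for k, d in enumerate(D):
        z[d] = -B[k][f]
    Z.append(z)
print(Z)

# Each z really is in the null space of the original matrix A.
A = [[2, 1, 5, 1, 5, 1],
     [1, 1, 3, 1, 6, -1],
     [-1, 1, -1, 0, 4, -3],
     [-3, 2, -4, -4, -7, 0],
     [3, -1, 5, 2, 2, 3]]
for z in Z:
    assert all(sum(a * x for a, x in zip(row, z)) == 0 for row in A)
```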

We have seen that we can create a null space in Sage with the .right_kernel() method for matrices. We use the optional argument basis='pivot', so we get exactly the spanning set \(\set{\vectorlist{z}{n-r}}\) described in Theorem SSNS. This optional argument will override Sage's default spanning set, whose purpose we will explain fully in Sage SUTH0. Here is Example SSNS again, along with a few extras we will comment on afterwards.

We built the null space as nsp, and then asked for its .basis(). For now, a “basis” will give us a spanning set, and with more theory we will understand it better. This is a set of vectors that form a spanning set for the null space, and with the basis='pivot' argument we have asked for the spanning set in the format described in Theorem SSNS. The spanning set N is a list of vectors, which we have extracted and named as z1 and z2. The linear combination of these two vectors, named z, will also be in the null space since N is a spanning set for nsp. As verification, we have used the five entries of z in a linear combination of the columns of A, yielding the zero vector (with four entries) as we expect.

We can also just ask Sage if z is in nsp:

Now is an appropriate time to comment on how Sage displays a null space when we just ask about it alone. Just as with a span, the number system and the number of entries are easy to see. Again, dimension should wait for a bit. But you will notice now that the Basis matrix has been replaced by User basis matrix. This is a consequence of our request for something other than the default basis (the 'pivot' basis). As part of its standard information about a null space, or a span, Sage spits out the basis matrix. This is a matrix, whose rows are vectors in a spanning set. This matrix can be requested by itself, using the method .basis_matrix(). It is important to notice that this is very different than the output of .basis() which is a list of vectors. The two objects print very similarly, but even this is different — compare the organization of the brackets and parentheses. Finally the last command should print True for any span or null space Sage creates. If you rerun the commands below, be sure the null space nsp is defined from the code just above.

Here is an example that will simultaneously exercise the span construction and Theorem SSNS, while also pointing the way to the next section.

Begin with \(T\text{,}\) the set of four vectors of size \(3\) chosen as the four columns of the coefficient matrix in Archetype D. Check that the vector \(\vect{z}_2\) is in the null space of the coefficient matrix of Archetype D, \(\nsp{D}\text{.}\) In fact, it is one of the vectors provided by Theorem SSNS.

\begin{align*} T&=\set{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3,\,\vect{w}_4} =\set{ \colvector{2\\-3\\1},\, \colvector{1\\4\\1},\, \colvector{7\\-5\\4},\, \colvector{-7\\-6\\-5} } & \vect{z}_2&=\colvector{2\\3\\0\\1} \end{align*}

Now consider the infinite set \(W=\spn{T}\text{.}\)

Applying Theorem SLSLC, we can write the linear combination

\begin{equation*} 2\vect{w}_1+3\vect{w}_2+0\vect{w}_3+1\vect{w}_4=\zerovector \end{equation*}

which we can solve for \(\vect{w}_4\)

\begin{equation*} \vect{w}_4=(-2)\vect{w}_1+(-3)\vect{w}_2\text{.} \end{equation*}

This equation says that whenever we encounter the vector \(\vect{w}_4\text{,}\) we can replace it with a specific linear combination of the vectors \(\vect{w}_1\) and \(\vect{w}_2\text{.}\) So using \(\vect{w}_4\) in the set \(T\text{,}\) along with \(\vect{w}_1\) and \(\vect{w}_2\text{,}\) is excessive. An example of what we mean here can be illustrated by the computation

\begin{align*} 5\vect{w}_1&+(-4)\vect{w}_2+6\vect{w}_3+(-3)\vect{w}_4\\ &=5\vect{w}_1+(-4)\vect{w}_2+6\vect{w}_3+(-3)\left((-2)\vect{w}_1+(-3)\vect{w}_2\right)\\ &=5\vect{w}_1+(-4)\vect{w}_2+6\vect{w}_3+\left(6\vect{w}_1+9\vect{w}_2\right)\\ &=11\vect{w}_1+5\vect{w}_2+6\vect{w}_3\text{.} \end{align*}

So what began as a linear combination of the vectors \(\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3,\,\vect{w}_4\) has been reduced to a linear combination of the vectors \(\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3\text{.}\) A careful proof using our definition of set equality (Definition SE) would now allow us to conclude that this reduction is possible for any vector in \(W\text{,}\) so

\begin{equation*} W=\spn{\left\{\vect{w}_1,\,\vect{w}_2,\,\vect{w}_3\right\}}\text{.} \end{equation*}

So the set \(W\) has not changed, but we have described it as the span of a set of three vectors, rather than as the span of a set of four vectors. Furthermore, we can achieve yet another, similar, reduction.
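The collapse illustrated above is pure arithmetic, so it can be verified directly. In this Python sketch (the helper `combo` is ours), the four-vector combination and the reduced three-vector combination produce the same result:

```python
# The four columns of the coefficient matrix of Archetype D.
w1, w2, w3, w4 = [2, -3, 1], [1, 4, 1], [7, -5, 4], [-7, -6, -5]

def combo(coeffs, vecs):
    """Linear combination of size-3 vectors."""
    return [sum(c * v[k] for c, v in zip(coeffs, vecs)) for k in range(3)]

print(combo([-2, -3], [w1, w2]))  # equals w4, as claimed
# A combination using w4 collapses to one that does not use it:
print(combo([5, -4, 6, -3], [w1, w2, w3, w4]))  # [69, -43, 40]
print(combo([11, 5, 6], [w1, w2, w3]))          # [69, -43, 40] again
```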

Check that the vector

\begin{equation*} \vect{z}_1=\colvector{-3\\-1\\1\\0} \end{equation*}

is a solution to the homogeneous system \(\linearsystem{D}{\zerovector}\) (it is the vector \(\vect{z}_1\) provided by the description of the null space of the coefficient matrix \(D\) from Theorem SSNS). Applying Theorem SLSLC, we can write the linear combination,

\begin{equation*} (-3)\vect{w}_1+(-1)\vect{w}_2+1\vect{w}_3=\zerovector \end{equation*}

which we can solve for \(\vect{w}_3\text{,}\)

\begin{equation*} \vect{w}_3=3\vect{w}_1+1\vect{w}_2\text{.} \end{equation*}

This equation says that whenever we encounter the vector \(\vect{w}_3\text{,}\) we can replace it with a specific linear combination of the vectors \(\vect{w}_1\) and \(\vect{w}_2\text{.}\) So, as before, the vector \(\vect{w}_3\) is not needed in the description of \(W\text{,}\) provided we have \(\vect{w}_1\) and \(\vect{w}_2\) available. In particular, a careful proof (such as is done in Example RSC5) would show that

\begin{equation*} W=\spn{\left\{\vect{w}_1,\,\vect{w}_2\right\}}\text{.} \end{equation*}

So \(W\) began life as the span of a set of four vectors, and we have now shown (utilizing solutions to a homogeneous system) that \(W\) can also be described as the span of a set of just two vectors. Convince yourself that we cannot go any further. In other words, it is not possible to dismiss either \(\vect{w}_1\) or \(\vect{w}_2\) in a similar fashion and winnow the set down to just one vector.

What was it about the original set of four vectors that allowed us to declare certain vectors as surplus? And just which vectors were we able to dismiss? And why did we have to stop once we had two vectors remaining? The answers to these questions motivate “linear independence,” our next section and next definition, and so are worth considering carefully now.

We have deferred several mysteries and postponed some explanations of the terms Sage uses to describe certain objects. This is because Sage is a comprehensive tool that you can use throughout your mathematical career, and Sage assumes you already know a lot of linear algebra. Do not worry, you already know some linear algebra and will very soon know a whole lot more. And we now know enough to solve one mystery. How can we create all of the solutions to a linear system of equations when the solution set is infinite?

Theorem VFSLS described elements of a solution set as a single vector \(\vect{c}\text{,}\) plus linear combinations of vectors \(\vectorlist{u}{n-r}\text{.}\) The vectors \(\vect{u}_j\) are identical to the vectors \(\vect{z}_j\) in the description of the spanning set of a null space in Theorem SSNS, suitably adjusted from the setting of a general system to the relevant homogeneous system for a null space. We can get the single vector \(\vect{c}\) from the .solve_right() method of a coefficient matrix, once supplied with the vector of constants. The vectors \(\set{\vectorlist{z}{n-r}}\) come from Sage in the basis returned by the .right_kernel(basis='pivot') method applied to the coefficient matrix. Theorem PSPHS amplifies and generalizes this discussion, making it clear that the choice of the particular solution \(\vect{c}\) (Sage's choice, and your author's choice) is just a matter of convenience.

In our previous discussion, we used the system from Example ISSI. We will use it one more time.

So we can describe the infinitely many solutions with just five items: the specific solution, \(\vect{c}\text{,}\) and the four vectors of the spanning set. Boom!

As an exercise in understanding Theorem PSPHS, you might begin with soln and add a linear combination of the spanning set for the null space to create another solution (which you can check).
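The structure described by Theorem PSPHS can be sketched in plain Python. Here is a minimal illustration using a small hypothetical system (not the one from Example ISSI): a particular solution, playing the role of Sage's .solve_right() output, plus any linear combination of null space spanning vectors, playing the role of .right_kernel(basis='pivot'), is again a solution.

```python
# A small hypothetical consistent system A x = b (an assumption for
# illustration, not Example ISSI).
A = [[1, 2, 1],
     [2, 4, 2]]
b = [4, 8]

c = [4, 0, 0]     # a particular solution (the role of solve_right())
z1 = [-2, 1, 0]   # spanning vectors for the null space of A
z2 = [-1, 0, 1]   # (the role of right_kernel(basis='pivot'))

def matvec(M, x):
    """Return the matrix-vector product M x."""
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def add_scaled(x, alpha, y):
    """Return the vector x + alpha*y."""
    return [xi + alpha * yi for xi, yi in zip(x, y)]

# Any choice of scalars produces another solution (Theorem PSPHS).
soln = add_scaled(add_scaled(c, 3, z1), -5, z2)
print(matvec(A, soln) == b)  # True
```

Changing the scalars 3 and \(-5\) to any other values produces yet another solution, which is exactly the infinite solution set described with finitely many vectors.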

Reading Questions SS Reading Questions

1.

Let \(S\) be the set of three vectors below.

\begin{equation*} S=\set{\colvector{1\\2\\-1},\,\colvector{3\\-4\\2},\,\colvector{4\\-2\\1}}\text{.} \end{equation*}

Let \(W=\spn{S}\) be the span of \(S\text{.}\) Is the vector \(\colvector{-1\\8\\-4}\) in \(W\text{?}\) Give an explanation of the reason for your answer.

2.

Use \(S\) and \(W\) from the previous question. Is the vector \(\colvector{6\\5\\-1}\) in \(W\text{?}\) Give an explanation of the reason for your answer.

3.

For the matrix \(A\) below, find a set \(S\) so that \(\spn{S}=\nsp{A}\text{,}\) where \(\nsp{A}\) is the null space of \(A\text{.}\) (See Theorem SSNS.)

\begin{equation*} A= \begin{bmatrix} 1 & 3 & 1 & 9\\ 2 & 1 & -3 & 8\\ 1 & 1 & -1 & 5 \end{bmatrix} \end{equation*}

Exercises SS Exercises

C22.

For each archetype that is a system of equations, consider the corresponding homogeneous system of equations. Write elements of the solution set to these homogeneous systems in vector form, as guaranteed by Theorem VFSLS. Then write the null space of the coefficient matrix of each system as the span of a set of vectors, as described in Theorem SSNS.

Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J

Solution

The vector form of the solutions obtained in this manner will involve precisely the vectors described in Theorem SSNS as providing the null space of the coefficient matrix of the system as a span. These vectors occur in each archetype in a description of the null space. Studying Example VFSAL may be of some help.

C23.

Archetype K and Archetype L are defined as matrices. Use Theorem SSNS directly to find a set \(S\) so that \(\spn{S}\) is the null space of the matrix. Do not make any reference to the associated homogeneous system of equations in your solution.

Solution

Study Example NSDS to understand the correct approach to this question. The solution for each is listed in the Archetypes (Appendix A) themselves.

C40.

Suppose that \(S=\set{\colvector{2\\-1\\3\\4},\,\colvector{3\\2\\-2\\1}}\text{.}\) Let \(W=\spn{S}\) and let \(\vect{x}=\colvector{5\\8\\-12\\-5}\text{.}\) Is \(\vect{x}\in W\text{?}\) If so, provide an explicit linear combination that demonstrates this.

Solution

Rephrasing the question, we want to know if there are scalars \(\alpha_1\) and \(\alpha_2\) such that

\begin{equation*} \alpha_1\colvector{2\\-1\\3\\4}+\alpha_2\colvector{3\\2\\-2\\1}=\colvector{5\\8\\-12\\-5}\text{.} \end{equation*}

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

\begin{equation*} \begin{bmatrix} 2 & 3 & 5\\ -1 & 2 & 8\\ 3 & -2 & -12\\ 4 & 1 & -5 \end{bmatrix}\text{.} \end{equation*}

This matrix row-reduces to

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & -2\\ 0 & \leading{1} & 3\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix} \end{equation*}

From the form of this matrix, we can see that \(\alpha_1=-2\) and \(\alpha_2=3\) is an affirmative answer to our question. More convincingly,

\begin{equation*} (-2)\colvector{2\\-1\\3\\4}+(3)\colvector{3\\2\\-2\\1}=\colvector{5\\8\\-12\\-5}\text{.} \end{equation*}
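This final linear combination is easy to confirm with a few lines of Python (a quick check of our own, not part of the exercise):

```python
# The two vectors of S and the target vector x from Exercise C40.
u1 = [2, -1, 3, 4]
u2 = [3, 2, -2, 1]
x = [5, 8, -12, -5]

# Form the linear combination (-2)u1 + 3u2 and compare with x.
combo = [-2 * a + 3 * b for a, b in zip(u1, u2)]
print(combo == x)  # True
```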
C41.

Suppose that \(S=\set{\colvector{2\\-1\\3\\4},\,\colvector{3\\2\\-2\\1}}\text{.}\) Let \(W=\spn{S}\) and let \(\vect{y}=\colvector{5\\1\\3\\5}\text{.}\) Is \(\vect{y}\in W\text{?}\) If so, provide an explicit linear combination that demonstrates this.

Solution

Rephrasing the question, we want to know if there are scalars \(\alpha_1\) and \(\alpha_2\) such that

\begin{equation*} \alpha_1\colvector{2\\-1\\3\\4}+\alpha_2\colvector{3\\2\\-2\\1}=\colvector{5\\1\\3\\5}\text{.} \end{equation*}

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

\begin{equation*} \begin{bmatrix} 2 & 3 & 5\\ -1 & 2 & 1\\ 3 & -2 & 3\\ 4 & 1 & 5 \end{bmatrix}\text{.} \end{equation*}

This matrix row-reduces to

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0\\ 0 & \leading{1} & 0\\ 0 & 0 & \leading{1}\\ 0 & 0 & 0 \end{bmatrix} \end{equation*}

With a pivot column in the last column of this matrix (Theorem RCLS) we can see that the system of equations has no solution, so there are no values for \(\alpha_1\) and \(\alpha_2\) that will allow us to conclude that \(\vect{y}\) is in \(W\text{.}\) So \(\vect{y}\not\in W\text{.}\)

C42.

Suppose \(R=\set{\colvector{2\\-1\\3\\4\\0},\,\colvector{1\\1\\2\\2\\-1},\,\colvector{3\\-1\\0\\3\\-2}}\text{.}\) Is \(\vect{y}=\colvector{1\\-1\\-8\\-4\\-3}\) in \(\spn{R}\text{?}\)

Solution

Form a linear combination, with unknown scalars, of \(R\) that equals \(\vect{y}\)

\begin{equation*} a_1\colvector{2\\-1\\3\\4\\0}+ a_2\colvector{1\\1\\2\\2\\-1}+ a_3\colvector{3\\-1\\0\\3\\-2} = \colvector{1\\-1\\-8\\-4\\-3}\text{.} \end{equation*}

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \(\spn{R}\text{.}\) By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix

\begin{equation*} \begin{bmatrix} 2 & 1 & 3 & 1\\ -1 & 1 & -1 & -1\\ 3 & 2 & 0 & -8\\ 4 & 2 & 3 & -4\\ 0 & -1 & -2 & -3 \end{bmatrix}\text{.} \end{equation*}

Row-reducing the matrix yields

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & -2\\ 0 & \leading{1} & 0 & -1\\ 0 & 0 & \leading{1} & 2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

From this we see that the system of equations is consistent (Theorem RCLS), and has a unique solution. This solution will provide a linear combination of the vectors in \(R\) that equals \(\vect{y}\text{.}\) So \(\vect{y}\in\spn{R}\text{.}\)

C43.

Suppose \(R=\set{\colvector{2\\-1\\3\\4\\0},\,\colvector{1\\1\\2\\2\\-1},\,\colvector{3\\-1\\0\\3\\-2}}\text{.}\) Is \(\vect{z}=\colvector{1\\1\\5\\3\\1}\) in \(\spn{R}\text{?}\)

Solution

Form a linear combination, with unknown scalars, of \(R\) that equals \(\vect{z}\)

\begin{equation*} a_1\colvector{2\\-1\\3\\4\\0}+ a_2\colvector{1\\1\\2\\2\\-1}+ a_3\colvector{3\\-1\\0\\3\\-2} = \colvector{1\\1\\5\\3\\1}\text{.} \end{equation*}

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \(\spn{R}\text{.}\) By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

\begin{equation*} \begin{bmatrix} 2 & 1 & 3 & 1\\ -1 & 1 & -1 & 1\\ 3 & 2 & 0 & 5\\ 4 & 2 & 3 & 3\\ 0 & -1 & -2 & 1 \end{bmatrix}\text{.} \end{equation*}

Row-reducing the matrix yields

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 0 & 0\\ 0 & \leading{1} & 0 & 0\\ 0 & 0 & \leading{1} & 0\\ 0 & 0 & 0 & \leading{1}\\ 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}

With a pivot column in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars \(a_1,\,a_2,\,a_3\) that will create a linear combination of the vectors in \(R\) that equals \(\vect{z}\text{.}\) So \(\vect{z}\not\in\spn{R}\text{.}\)

C44.

Suppose that

\begin{equation*} S=\set{ \colvector{-1 \\ 2 \\ 1},\, \colvector{ 3 \\ 1 \\ 2},\, \colvector{ 1 \\ 5 \\ 4},\, \colvector{-6 \\ 5 \\ 1} }\text{.} \end{equation*}

Let \(W=\spn{S}\) and let \(\vect{y}=\colvector{-5\\3\\0}\text{.}\) Is \(\vect{y}\in W\text{?}\) If so, provide an explicit linear combination that demonstrates this.

Solution

Form a linear combination, with unknown scalars, of \(S\) that equals \(\vect{y}\)

\begin{equation*} a_1\colvector{-1 \\ 2 \\ 1}+ a_2\colvector{ 3 \\ 1 \\ 2}+ a_3\colvector{ 1 \\ 5 \\ 4}+ a_4\colvector{-6 \\ 5 \\ 1} = \colvector{-5\\3\\0}\text{.} \end{equation*}

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \(\spn{S}\text{.}\) By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix

\begin{equation*} \begin{bmatrix} -1 & 3 & 1 & -6 & -5 \\ 2 & 1 & 5 & 5 & 3 \\ 1 & 2 & 4 & 1 & 0 \end{bmatrix}\text{.} \end{equation*}

Row-reducing the matrix yields

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 2 & 3 & 2 \\ 0 & \leading{1} & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

From this we see that the system of equations is consistent (Theorem RCLS), and has infinitely many solutions. Any solution will provide a linear combination of the vectors in \(S\) that equals \(\vect{y}\text{.}\) So \(\vect{y}\in W\text{,}\) for example

\begin{equation*} (-10)\colvector{-1 \\ 2 \\ 1}+ (-2)\colvector{ 3 \\ 1 \\ 2}+ (3)\colvector{ 1 \\ 5 \\ 4}+ (2)\colvector{-6 \\ 5 \\ 1} = \colvector{-5\\3\\0}\text{.} \end{equation*}
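As with the earlier exercises, this combination can be verified with a short Python check (ours, not part of the original solution):

```python
# The four vectors of S, the scalars found above, and the target y.
vectors = [[-1, 2, 1], [3, 1, 2], [1, 5, 4], [-6, 5, 1]]
scalars = [-10, -2, 3, 2]
y = [-5, 3, 0]

# Component i of the linear combination sums scalar*vector over S.
combo = [sum(a * v[i] for a, v in zip(scalars, vectors)) for i in range(3)]
print(combo == y)  # True
```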
C45.

Suppose that

\begin{equation*} S=\set{ \colvector{-1 \\ 2 \\ 1},\, \colvector{ 3 \\ 1 \\ 2},\, \colvector{ 1 \\ 5 \\ 4},\, \colvector{-6 \\ 5 \\ 1} }\text{.} \end{equation*}

Let \(W=\spn{S}\) and let \(\vect{w}=\colvector{2\\1\\3}\text{.}\) Is \(\vect{w}\in W\text{?}\) If so, provide an explicit linear combination that demonstrates this.

Solution

Form a linear combination, with unknown scalars, of \(S\) that equals \(\vect{w}\)

\begin{equation*} a_1\colvector{-1 \\ 2 \\ 1}+ a_2\colvector{ 3 \\ 1 \\ 2}+ a_3\colvector{ 1 \\ 5 \\ 4}+ a_4\colvector{-6 \\ 5 \\ 1} = \colvector{2\\1\\3}\text{.} \end{equation*}

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \(\spn{S}\text{.}\) By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix

\begin{equation*} \begin{bmatrix} -1 & 3 & 1 & -6 & 2 \\ 2 & 1 & 5 & 5 & 1 \\ 1 & 2 & 4 & 1 & 3 \end{bmatrix}\text{.} \end{equation*}

Row-reducing the matrix yields

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 2 & 3 & 0 \\ 0 & \leading{1} & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & \leading{1} \end{bmatrix}\text{.} \end{equation*}

With a pivot column in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars \(a_1,\,a_2,\,a_3,\,a_4\) that will create a linear combination of the vectors in \(S\) that equals \(\vect{w}\text{.}\) So \(\vect{w}\not\in\spn{S}\text{.}\)

C50.

Let \(A\) be the matrix below.

  1. Find a set \(S\) so that \(\nsp{A}=\spn{S}\text{.}\)
  2. If \(\vect{z}=\colvector{3 \\ -5 \\ 1 \\ 2}\text{,}\) then show directly that \(\vect{z}\in\nsp{A}\text{.}\)
  3. Write \(\vect{z}\) as a linear combination of the vectors in \(S\text{.}\)
\begin{align*} A= \begin{bmatrix} 2 & 3 & 1 & 4 \\ 1 & 2 & 1 & 3 \\ -1 & 0 & 1 & 1 \end{bmatrix} \end{align*}
Solution

(1) Theorem SSNS provides formulas for a set \(S\) with this property, but first we must row-reduce \(A\)

\begin{equation*} A\rref \begin{bmatrix} \leading{1} & 0 & -1 & -1 \\ 0 & \leading{1} & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

\(x_3\) and \(x_4\) would be the free variables in the homogeneous system \(\homosystem{A}\) and Theorem SSNS provides the set \(S=\set{\vect{z}_1,\,\vect{z}_2}\) where

\begin{align*} \vect{z}_1 &= \colvector{1\\-1\\1\\0} & \vect{z}_2 &= \colvector{1\\-2\\0\\1} \end{align*}

(2) Simply employ the components of the vector \(\vect{z}\) as the variables in the homogeneous system \(\homosystem{A}\text{.}\) The three equations of this system evaluate as follows,

\begin{align*} 2(3) + 3(-5)+ 1(1) + 4(2)&=0 \\ 1(3) + 2(-5)+ 1(1) + 3(2)&= 0\\ -1(3) + 0(-5)+ 1(1) + 1(2)&=0 \end{align*}

Since each result is zero, \(\vect{z}\) qualifies for membership in \(\nsp{A}\text{.}\)

(3) By Theorem SSNS we know this must be possible (that is the moral of this exercise). Find scalars \(\alpha_1\) and \(\alpha_2\) so that

\begin{equation*} \alpha_1\vect{z}_1+\alpha_2\vect{z}_2 = \alpha_1\colvector{1\\-1\\1\\0}+\alpha_2\colvector{1\\-2\\0\\1} = \colvector{3 \\ -5 \\ 1 \\ 2} = \vect{z}\text{.} \end{equation*}

Theorem SLSLC allows us to convert this question into a question about a system of four equations in two variables. The augmented matrix of this system row-reduces to

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 1 \\ 0 & \leading{1} & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

A solution is \(\alpha_1=1\) and \(\alpha_2=2\text{.}\) (Notice too that this solution is unique!)
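Both parts (2) and (3) of this exercise can be double-checked numerically. A minimal Python check, using the matrix and vectors above:

```python
# The matrix A and vectors z, z1, z2 from Exercise C50.
A = [[2, 3, 1, 4],
     [1, 2, 1, 3],
     [-1, 0, 1, 1]]
z = [3, -5, 1, 2]
z1 = [1, -1, 1, 0]
z2 = [1, -2, 0, 1]

# Part 2: every equation of the homogeneous system evaluates to zero,
# so z is in the null space of A.
print(all(sum(a * x for a, x in zip(row, z)) == 0 for row in A))  # True

# Part 3: z = 1*z1 + 2*z2.
print([a + 2 * b for a, b in zip(z1, z2)] == z)  # True
```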

C60.

For the matrix \(A\) below, find a set of vectors \(S\) so that the span of \(S\) equals the null space of \(A\text{,}\) \(\spn{S}=\nsp{A}\text{.}\)

\begin{equation*} A= \begin{bmatrix} 1 & 1 & 6 & -8\\ 1 & -2 & 0 & 1\\ -2 & 1 & -6 & 7 \end{bmatrix} \end{equation*}
Solution

Theorem SSNS says that if we find the vector form of the solutions to the homogeneous system \(\homosystem{A}\text{,}\) then the fixed vectors (one per free variable) will have the desired property. Row-reduce \(A\text{,}\) viewing it as the augmented matrix of a homogeneous system with an invisible column of zeros as the last column

\begin{equation*} \begin{bmatrix} \leading{1} & 0 & 4 & -5\\ 0 & \leading{1} & 2 & -3\\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.} \end{equation*}

Moving to the vector form of the solutions (Theorem VFSLS), with free variables \(x_3\) and \(x_4\text{,}\) solutions to the consistent system (it is homogeneous, Theorem HSC) can be expressed as

\begin{equation*} \colvector{x_1\\x_2\\x_3\\x_4} = x_3\colvector{-4\\-2\\1\\0}+ x_4\colvector{5\\3\\0\\1} \end{equation*}

Then with \(S\) given by

\begin{equation*} S=\set{\colvector{-4\\-2\\1\\0},\,\colvector{5\\3\\0\\1}}\text{,} \end{equation*}

Theorem SSNS guarantees that

\begin{equation*} \nsp{A}=\spn{S}=\spn{\set{\colvector{-4\\-2\\1\\0},\,\colvector{5\\3\\0\\1}}}\text{.} \end{equation*}
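A quick sanity check (our own, beyond what the exercise asks): each vector of \(S\) should satisfy every equation of the homogeneous system.

```python
# The matrix A and the spanning set S from Exercise C60.
A = [[1, 1, 6, -8],
     [1, -2, 0, 1],
     [-2, 1, -6, 7]]
S = [[-4, -2, 1, 0], [5, 3, 0, 1]]

# Every row of A dotted with every vector of S should be zero.
for v in S:
    print(all(sum(a * x for a, x in zip(row, v)) == 0 for row in A))
# True, True
```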
M10.

Consider the set of all size \(2\) vectors in the Cartesian plane \(\real{2}\text{.}\)

  1. Give a geometric description of the span of a single vector.
  2. How can you tell if two vectors span the entire plane, without doing any row reduction or calculation?
Solution
  1. The span of a single vector \(\vect{v}\) is the set of all linear combinations of that vector. Thus, \(\spn{\set{\vect{v}}} = \setparts{\alpha\vect{v}}{\alpha\in\mathbb{R}}\text{.}\) This is the line through the origin and containing the (geometric) vector \(\vect{v}\text{.}\) Thus, if \(\vect{v}=\colvector{v_1\\v_2}\text{,}\) then the span of \(\vect{v}\) is the line through \((0,0)\) and \((v_1,v_2)\text{.}\)
  2. Two vectors will span the entire plane if they point in different directions, meaning that \(\vect{u}\) does not lie on the line through \(\vect{v}\) and vice-versa. That is, for vectors \(\vect{u}\) and \(\vect{v}\) in \(\real{2}\text{,}\) \(\spn{\set{\vect{u},\,\vect{v}}} = \real{2}\) if neither vector is a scalar multiple of the other.
M11.

Consider the set of all size \(3\) vectors in Cartesian 3-space \(\real{3}\text{.}\)

  1. Give a geometric description of the span of a single vector.
  2. Describe the possibilities for the span of two vectors.
  3. Describe the possibilities for the span of three vectors.
Solution
  1. The span of a single vector \(\vect{v}\) is the set of all linear combinations of that vector. Thus, \(\spn{\set{\vect{v}}} = \setparts{\alpha\vect{v}}{\alpha\in\mathbb{R}}\text{.}\) This is the line through the origin and containing the (geometric) vector \(\vect{v}\text{.}\) Thus, if \(\vect{v} = \colvector{v_1\\v_2\\v_3}\text{,}\) then the span of \(\vect{v}\) is the line through \((0,0,0)\) and \((v_1, v_2, v_3)\text{.}\)
  2. If the two vectors point in the same direction, then their span is the line through them. Recall that while two points determine a line, three points determine a plane. Two vectors will span a plane if they point in different directions, meaning that \(\vect{u}\) does not lie on the line through \(\vect{v}\) and vice-versa. The plane spanned by \(\vect{u} = \colvector{u_1\\u_2\\u_3}\) and \(\vect{v} = \colvector{v_1\\v_2\\v_3}\) is determined by the origin and the points \((u_1, u_2, u_3)\) and \((v_1, v_2, v_3)\text{.}\)
  3. If all three vectors lie on the same line, then the span is that line. If one is a linear combination of the other two, but they are not all on the same line, then they will lie in a plane. Otherwise, the span of the set of three vectors will be all of 3-space.
M12.

Let \(\vect{u} = \colvector{1\\3\\-2}\) and \(\vect{v} = \colvector{2\\-2\\1}\text{.}\)

  1. Find a vector \(\vect{w}_1\text{,}\) different from \(\vect{u}\) and \(\vect{v}\text{,}\) so that \(\spn{\set{\vect{u}, \vect{v}, \vect{w}_1}} = \spn{\set{\vect{u}, \vect{v}}}\text{.}\)
  2. Find a vector \(\vect{w}_2\) so that \(\spn{\set{\vect{u}, \vect{v}, \vect{w}_2}} \ne \spn{\set{\vect{u}, \vect{v}}}\text{.}\)
Solution
  1. If we can find a vector \(\vect{w}_1\) that is a linear combination of \(\vect{u}\) and \(\vect{v}\text{,}\) then \(\spn{\set{\vect{u}, \vect{v}, \vect{w}_1}}\) will be the same set as \(\spn{\set{\vect{u}, \vect{v}}}\text{.}\) Thus, \(\vect{w}_1\) can be any linear combination of \(\vect{u}\) and \(\vect{v}\text{.}\) One such example is \(\vect{w}_1 = 3\vect{u} - \vect{v} = \colvector{1\\11\\-7}\text{.}\)
  2. Now we are looking for a vector \(\vect{w}_2\) that cannot be written as a linear combination of \(\vect{u}\) and \(\vect{v}\text{.}\) How can we find such a vector? Any vector that matches two components but not the third of any element of \(\spn{\set{\vect{u}, \vect{v}}}\) will not be in the span. (Why?) One such example is \(\vect{w}_2 = \colvector{4\\-4\\1}\) (which is nearly \(2\vect{v}\text{,}\) but not quite).
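Both answers can be checked with a few lines of Python (our verification, not part of the solution). For part 2, matching the first two components of \(a\vect{u}+b\vect{v}\) to \(\vect{w}_2\) forces \(a=0\) and \(b=2\text{,}\) but then the third component comes out wrong:

```python
# The vectors u and v from Exercise M12.
u = [1, 3, -2]
v = [2, -2, 1]

# Part 1: w1 = 3u - v is a linear combination of u and v.
w1 = [3 * a - b for a, b in zip(u, v)]
print(w1)  # [1, 11, -7]

# Part 2: for w2 = [4, -4, 1], the first two components of a*u + b*v
# force a = 0 and b = 2, but then the third component is 2, not 1.
a, b = 0, 2
print([a * x + b * y for x, y in zip(u, v)])  # [4, -4, 2]
```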
M20.

In Example SCAD we began with the four columns of the coefficient matrix of Archetype D, and used these columns in a span construction. Then we methodically argued that we could remove the last column, then the third column, and create the same set by just doing a span construction with the first two columns. We claimed we could not go any further, and had removed as many vectors as possible. Provide a convincing argument for why a third vector cannot be removed.

M21.

In the spirit of Example SCAD, begin with the four columns of the coefficient matrix of Archetype C, and use these columns in a span construction to build the set \(S\text{.}\) Argue that \(S\) can be expressed as the span of just three of the columns of the coefficient matrix (saying exactly which three) and in the spirit of Exercise SS.M20 argue that no one of these three vectors can be removed and still have a span construction create \(S\text{.}\)

Solution

If the columns of the coefficient matrix from Archetype C are named \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4\) then we can discover the equation

\begin{equation*} (-2)\vect{u}_1+(-3)\vect{u}_2+\vect{u}_3+\vect{u}_4=\zerovector \end{equation*}

by building a homogeneous system of equations and viewing a solution to the system as scalars in a linear combination via Theorem SLSLC. This particular vector equation can be rearranged to read

\begin{equation*} \vect{u}_4=(2)\vect{u}_1+(3)\vect{u}_2+(-1)\vect{u}_3\text{.} \end{equation*}

This can be interpreted to mean that \(\vect{u}_4\) is unnecessary in \(\spn{\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4}}\text{,}\) so that

\begin{equation*} \spn{\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4}} = \spn{\set{\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3}}\text{.} \end{equation*}

If we try to repeat this process and find a linear combination of \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3\) that equals the zero vector, we will fail. The required homogeneous system of equations (via Theorem SLSLC) has only a trivial solution, which will not provide the kind of equation we need to remove one of the three remaining vectors.

T10.

Suppose that \(\vect{v}_1,\,\vect{v}_2\in\complex{m}\text{.}\) Prove that

\begin{equation*} \spn{\set{\vect{v}_1,\,\vect{v}_2}}=\spn{\set{\vect{v}_1,\,\vect{v}_2,\,5\vect{v}_1+3\vect{v}_2}} \end{equation*}
Solution

This is an equality of sets, so Definition SE applies.

First show that \(X=\spn{\set{\vect{v}_1,\,\vect{v}_2}}\subseteq \spn{\set{\vect{v}_1,\,\vect{v}_2,\,5\vect{v}_1+3\vect{v}_2}}=Y\text{.}\) Choose \(\vect{x}\in X\text{.}\) Then \(\vect{x}=a_1\vect{v}_1+a_2\vect{v}_2\) for some scalars \(a_1\) and \(a_2\text{.}\) Then,

\begin{equation*} \vect{x}=a_1\vect{v}_1+a_2\vect{v}_2=a_1\vect{v}_1+a_2\vect{v}_2+0(5\vect{v}_1+3\vect{v}_2) \end{equation*}

which qualifies \(\vect{x}\) for membership in \(Y\text{,}\) as it is a linear combination of \(\vect{v}_1,\,\vect{v}_2,\,5\vect{v}_1+3\vect{v}_2\text{.}\)

Now show the opposite inclusion

\begin{equation*} Y=\spn{\set{\vect{v}_1,\,\vect{v}_2,\,5\vect{v}_1+3\vect{v}_2}}\subseteq\spn{\set{\vect{v}_1,\,\vect{v}_2}}=X\text{.} \end{equation*}

Choose \(\vect{y}\in Y\text{.}\) Then there are scalars \(a_1,\,a_2,\,a_3\) such that

\begin{equation*} \vect{y}=a_1\vect{v}_1+a_2\vect{v}_2+a_3(5\vect{v}_1+3\vect{v}_2) \end{equation*}

Rearranging, we obtain

\begin{align*} \vect{y}&=a_1\vect{v}_1+a_2\vect{v}_2+a_3(5\vect{v}_1+3\vect{v}_2)\\ &=a_1\vect{v}_1+a_2\vect{v}_2+5a_3\vect{v}_1+3a_3\vect{v}_2&& \knowl{./knowl/property-DVAC.html}{\text{Property DVAC}}\\ &=a_1\vect{v}_1+5a_3\vect{v}_1+a_2\vect{v}_2+3a_3\vect{v}_2&& \knowl{./knowl/property-CC.html}{\text{Property CC}}\\ &=(a_1+5a_3)\vect{v}_1+(a_2+3a_3)\vect{v}_2&& \knowl{./knowl/property-DSAC.html}{\text{Property DSAC}}\text{.} \end{align*}

This is an expression for \(\vect{y}\) as a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\text{,}\) earning \(\vect{y}\) membership in \(X\text{.}\) Since \(X\) is a subset of \(Y\text{,}\) and vice versa, we see that \(X=Y\text{,}\) as desired.

T20.

Suppose that \(S\) is a set of vectors from \(\complex{m}\text{.}\) Prove that the zero vector, \(\zerovector\text{,}\) is an element of \(\spn{S}\text{.}\)

Solution

No matter what the elements of the set \(S\) are, we can choose the scalars in a linear combination to all be zero. Suppose that \(S=\set{\vectorlist{v}{p}}\text{.}\) Then compute

\begin{align*} 0\vect{v}_1+0\vect{v}_2+0\vect{v}_3+\cdots+0\vect{v}_p &=\zerovector+\zerovector+\zerovector+\cdots+\zerovector\\ &=\zerovector\text{.} \end{align*}

But what if we choose \(S\) to be the empty set? The convention is that the empty sum in Definition SSCV evaluates to “zero,” which in this case is the zero vector.

T21.

Suppose that \(S\) is a set of vectors from \(\complex{m}\) and \(\vect{x},\,\vect{y}\in\spn{S}\text{.}\) Prove that \(\vect{x}+\vect{y}\in\spn{S}\text{.}\)

T22.

Suppose that \(S\) is a set of vectors from \(\complex{m}\text{,}\) \(\alpha\in\complexes\text{,}\) and \(\vect{x}\in\spn{S}\text{.}\) Prove that \(\alpha\vect{x}\in\spn{S}\text{.}\)

T30.

For which sets \(S\) is \(\spn{S}\) a finite set? Give a proof for your answer.

Solution

If \(S\) is empty or \(S=\set{\zerovector}\) then \(\spn{S}=\set{\zerovector}\) and is finite. If \(S\) contains any nonzero vector, then all of the scalar multiples of that vector are in the span, and hence the span is infinite.