Section VR Vector Representations
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 is an instance of this phenomenon. We will make this vague idea more precise in this section.
Subsection VR Vector Representation
We begin by establishing an invertible linear transformation between any vector space $V$ of dimension $n$ and $\complex{n}$. This will allow us to “go back and forth” between the two vector spaces, no matter how abstract the definition of $V$ might be.
Definition VR Vector Representation
Suppose that $V$ is a vector space with a basis $B=\set{\vectorlist{v}{n}}$. Define a function $\ltdefn{\vectrepname{B}}{V}{\complex{n}}$ as follows. For $\vect{w}\in V$ define the column vector $\vectrep{B}{\vect{w}}\in\complex{n}$ by \begin{align*} \vect{w} &= \vectorentry{\vectrep{B}{\vect{w}}}{1}\vect{v}_1+ \vectorentry{\vectrep{B}{\vect{w}}}{2}\vect{v}_2+ \vectorentry{\vectrep{B}{\vect{w}}}{3}\vect{v}_3+ \cdots+ \vectorentry{\vectrep{B}{\vect{w}}}{n}\vect{v}_n \end{align*}
This definition looks more complicated than it really is, though the form above will be useful in proofs. Simply stated, given $\vect{w}\in V$, we write $\vect{w}$ as a linear combination of the basis elements of $B$. It is key to realize that Theorem VRRB guarantees that we can do this for every $\vect{w}$, and furthermore that this expression as a linear combination is unique. The resulting scalars are just the entries of the vector $\vectrep{B}{\vect{w}}$. This discussion should convince you that $\vectrepname{B}$ is “well-defined” as a function: we can determine a precise output for any input. Now we want to establish that $\vectrepname{B}$ is a function with an additional property: it is a linear transformation.
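The discussion above can be sketched numerically. The basis and vector here are hypothetical, not taken from the text: we coordinatize $w = 3 + 5x - 2x^2$ in $P_2$ relative to the assumed basis $B = \{1+x,\,1-x,\,x^2\}$, storing each polynomial by its coefficients. Finding $\vectrep{B}{\vect{w}}$ is then exactly solving a linear system.

```python
import numpy as np

# Hypothetical example: coordinatize w = 3 + 5x - 2x^2 in P_2
# relative to the basis B = {1 + x, 1 - x, x^2}.  Each polynomial
# is stored by its coefficients (constant, x, x^2), so rho_B(w)
# is the solution of M c = w, where the columns of M are the
# basis polynomials' coefficient vectors.
M = np.column_stack([[1, 1, 0],    # 1 + x
                     [1, -1, 0],   # 1 - x
                     [0, 0, 1]])   # x^2
w = np.array([3, 5, -2])
rho_B_w = np.linalg.solve(M, w)
print(rho_B_w)                        # [ 4. -1. -2.]
# Check: the scalars reassemble w, as Theorem VRRB guarantees,
# and they are unique since M is nonsingular.
assert np.allclose(M @ rho_B_w, w)
```

Indeed, $4(1+x) - 1(1-x) - 2x^2 = 3 + 5x - 2x^2$, so $\vectrep{B}{\vect{w}}$ has entries $4$, $-1$, $-2$.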
Theorem VRLT Vector Representation is a Linear Transformation
The function $\vectrepname{B}$ (Definition VR) is a linear transformation.
The proof of Theorem VRLT provides an alternate definition of vector representation relative to a basis $B$ that we could state as a corollary (Proof Technique LC): $\vectrepname{B}$ is the unique linear transformation that takes $B$ to the standard unit basis.
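This corollary can be checked in a small numerical instance (the basis below is hypothetical, with polynomials in $P_2$ stored by coefficients): applying $\vectrepname{B}$ to each basis vector of $B$ should produce the corresponding standard unit vector.

```python
import numpy as np

# Hypothetical basis of P_2, coefficient order: constant, x, x^2.
M = np.column_stack([[1, 1, 0],    # 1 + x
                     [1, -1, 0],   # 1 - x
                     [0, 0, 1]])   # x^2
# rho_B applied to the i-th basis vector means solving M c = v_i,
# and the solution is the i-th standard unit vector e_i.
for i in range(3):
    c = np.linalg.solve(M, M[:, i])
    assert np.allclose(c, np.eye(3)[:, i])
print("rho_B sends the basis B to the standard unit basis")
```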
Example VRC4 Vector representation in $\complex{4}$
Vector representations are most interesting for vector spaces that are not $\complex{m}$.
Example VRP2 Vector representations in $P_2$
Theorem VRI Vector Representation is Injective
The function $\vectrepname{B}$ (Definition VR) is an injective linear transformation.
Theorem VRS Vector Representation is Surjective
The function $\vectrepname{B}$ (Definition VR) is a surjective linear transformation.
We will have many occasions later to employ the inverse of vector representation, so we will record the fact that vector representation is an invertible linear transformation.
Theorem VRILT Vector Representation is an Invertible Linear Transformation
The function $\vectrepname{B}$ (Definition VR) is an invertible linear transformation.
Informally, we will refer to the application of $\vectrepname{B}$ as coordinatizing a vector, while the application of $\ltinverse{\vectrepname{B}}$ will be referred to as un-coordinatizing a vector.
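The two directions can be illustrated in one short sketch (same hypothetical flavor of example, with an assumed basis of $P_2$ stored by coefficients): un-coordinatizing a column vector is just forming the linear combination, while coordinatizing inverts it by solving the system.

```python
import numpy as np

# Hypothetical basis of P_2 (coefficients ordered constant, x, x^2).
M = np.column_stack([[1, 1, 0],    # 1 + x
                     [1, -1, 0],   # 1 - x
                     [0, 0, 1]])   # x^2
x = np.array([2.0, -3.0, 1.0])
w = M @ x                       # un-coordinatize: w = 2(1+x) - 3(1-x) + x^2
x_back = np.linalg.solve(M, w)  # coordinatize w again
# The round trip returns the original column vector, reflecting
# that the two maps are inverses (Theorem VRILT).
assert np.allclose(x, x_back)
```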
Sage VR Vector Representations
Subsection CVS Characterization of Vector Spaces
Limiting our attention to vector spaces with finite dimension, we now describe every possible vector space. All of them. Really.
Theorem CFDVS Characterization of Finite Dimensional Vector Spaces
Suppose that $V$ is a vector space with dimension $n$. Then $V$ is isomorphic to $\complex{n}$.
Theorem CFDVS is the first of several surprises in this chapter, though it might be a bit demoralizing too. It says that there really are not all that many different (finite dimensional) vector spaces, and none are really any more complicated than $\complex{n}$. Hmmm. The following examples should make this point.
Example TIVS Two isomorphic vector spaces
Example CVSR Crazy vector space revealed
Example ASC A subspace characterized
Theorem IFDVS Isomorphism of Finite Dimensional Vector Spaces
Suppose $U$ and $V$ are both finite-dimensional vector spaces. Then $U$ and $V$ are isomorphic if and only if $\dimension{U}=\dimension{V}$.
Example MIVS Multiple isomorphic vector spaces
Subsection CP Coordinatization Principle
With $\vectrepname{B}$ available as an invertible linear transformation, we can translate between vectors in a vector space $U$ of dimension $m$ and $\complex{m}$. Furthermore, as a linear transformation, $\vectrepname{B}$ respects the addition and scalar multiplication in $U$, while $\vectrepinvname{B}$ respects the addition and scalar multiplication in $\complex{m}$. Since our definitions of linear independence, spans, bases and dimension are all built up from linear combinations, we will finally be able to translate fundamental properties between abstract vector spaces ($U$) and concrete vector spaces ($\complex{m}$).
Theorem CLI Coordinatization and Linear Independence
Suppose that $U$ is a vector space with a basis $B$ of size $n$. Then \begin{align*} S=\set{\vectorlist{u}{k}} \end{align*} is a linearly independent subset of $U$ if and only if \begin{align*} R=\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}} \end{align*} is a linearly independent subset of $\complex{n}$.
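Theorem CLI turns an abstract independence question into a rank computation. A minimal sketch on made-up data: the polynomials $1+x$, $x+x^2$, $1+x^2$ are linearly independent in $P_2$ exactly when their coordinate vectors, relative to the assumed standard basis $\{1, x, x^2\}$, are linearly independent in $\complex{3}$.

```python
import numpy as np

# Columns are the coordinate vectors of the three polynomials
# relative to the standard basis {1, x, x^2}.
R = np.column_stack([[1, 1, 0],   # 1 + x
                     [0, 1, 1],   # x + x^2
                     [1, 0, 1]])  # 1 + x^2
# Independent in C^3 iff the coordinate matrix has full column rank;
# by Theorem CLI this settles independence of the polynomials too.
independent = np.linalg.matrix_rank(R) == R.shape[1]
print(independent)  # True
```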
Theorem CSS Coordinatization and Spanning Sets
Suppose that $U$ is a vector space with a basis $B$ of size $n$. Then \begin{align*} \vect{u}\in\spn{\set{\vectorlist{u}{k}}} \end{align*} if and only if \begin{align*} \vectrep{B}{\vect{u}}\in\spn{\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}}} \end{align*}
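Theorem CSS similarly reduces a span membership question to a consistency check on a linear system. A sketch on made-up data: $u = 3 + x$ lies in the span of $\{1+x,\,1-x\}$ inside $P_2$ if and only if its coordinate vector lies in the span of theirs in $\complex{3}$.

```python
import numpy as np

# Columns are the coordinate vectors of the spanning polynomials,
# relative to the standard basis {1, x, x^2}.
A = np.column_stack([[1, 1, 0],    # 1 + x
                     [1, -1, 0]])  # 1 - x
u = np.array([3.0, 1.0, 0.0])      # coordinates of 3 + x
# Least squares finds the best linear combination; a zero residual
# certifies that u's coordinate vector is in the span, and hence,
# by Theorem CSS, that u is in the span of the polynomials.
c, *_ = np.linalg.lstsq(A, u, rcond=None)
in_span = np.allclose(A @ c, u)
print(in_span)  # the scalars c give the combination when True
```

Here the residual is zero since $3 + x = 2(1+x) + 1(1-x)$.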
Here is a fairly simple example that illustrates a very, very important idea.
Example CP2 Coordinatizing in $P_2$
Example CP2 illustrates the broad notion that computations in abstract vector spaces can be reduced to computations in $\complex{m}$. You may have noticed this phenomenon as you worked through examples in Chapter VS or Chapter LT employing vector spaces of matrices or polynomials. These computations seemed to invariably result in systems of equations or the like from Chapter SLE, Chapter V and Chapter M. It is vector representation, $\vectrepname{B}$, that allows us to make this connection formal and precise.
Knowing that vector representation allows us to translate questions about linear combinations, linear independence and spans from general vector spaces to $\complex{m}$ allows us to prove a great many theorems about how to translate other properties. Rather than prove these theorems, each in the same style as the others, we will offer some general guidance about how best to employ Theorem VRLT, Theorem CLI and Theorem CSS. This comes in the form of a “principle”: a basic truth, but most definitely not a theorem (hence, no proof).
The Coordinatization Principle
Suppose that $U$ is a vector space with a basis $B$ of size $n$. Then any question about $U$, or its elements, which ultimately depends on the vector addition or scalar multiplication in $U$, or depends on linear independence or spanning, may be translated into the same question in $\complex{n}$ by application of the linear transformation $\vectrepname{B}$ to the relevant vectors. Once the question is answered in $\complex{n}$, the answer may be translated back to $U$ through application of the inverse linear transformation $\ltinverse{\vectrepname{B}}$ (if necessary).
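The principle in action, on made-up data: do four $2\times 2$ matrices form a basis of $M_{22}$? Coordinatize each matrix relative to the standard basis of $M_{22}$ (flatten it to a vector in $\complex{4}$); since $\dimension{M_{22}}=4$, the set is a basis exactly when the $4\times 4$ matrix of coordinate vectors is nonsingular.

```python
import numpy as np

# Four hypothetical 2x2 matrices to test as a basis of M_{22}.
mats = [np.array([[1, 0], [0, 0]]),
        np.array([[1, 1], [0, 0]]),
        np.array([[1, 1], [1, 0]]),
        np.array([[1, 1], [1, 1]])]
# Coordinatize: flatten each matrix to its coordinate vector in C^4
# relative to the standard basis of M_{22}, one per column.
C = np.column_stack([m.flatten() for m in mats])
# A basis of a 4-dimensional space is 4 independent vectors, so the
# question reduces to a rank computation in C^4.
is_basis = np.linalg.matrix_rank(C) == 4
print(is_basis)  # True
```

Once the rank computation answers the question in $\complex{4}$, no un-coordinatizing is needed: the four matrices themselves form the basis of $M_{22}$.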