From A First Course in Linear Algebra
Version 2.20
© 2004 Robert A. Beezer.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
This section is in draft form
Theorems & definitions are complete, needs examples
We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.
An upper, or lower, triangular matrix is exactly what it sounds like it should be, but here are the two relevant definitions.
Definition UTM
Upper Triangular Matrix
The square matrix $A$ of size $n$ is upper triangular if $[A]_{ij}=0$ whenever $i>j$.
Definition LTM
Lower Triangular Matrix
The square matrix $A$ of size $n$ is lower triangular if $[A]_{ij}=0$ whenever $i<j$.
Obviously, properties of lower triangular matrices will have analogues for upper triangular matrices. Rather than stating two very similar theorems, we will say that matrices are “triangular of the same type” as a convenient shorthand to cover both possibilities and then give a proof for just one type.
Theorem PTMT
Product of Triangular Matrices is Triangular
Suppose that $A$ and $B$ are square matrices of size $n$ that are triangular of the same type. Then $AB$ is also triangular of that type.
Proof   We prove this for lower triangular matrices and leave the proof for upper triangular matrices to you. Suppose that $A$ and $B$ are both lower triangular. We need only establish that certain entries of the product $AB$ are zero. Suppose that $i<j$. Then, since $[A]_{ik}=0$ whenever $i<k$ and $[B]_{kj}=0$ whenever $k<j$,
$$[AB]_{ij}=\sum_{k=1}^{n}[A]_{ik}[B]_{kj}=\sum_{k=1}^{j-1}[A]_{ik}\,0+\sum_{k=j}^{n}0\,[B]_{kj}=0.$$
Since $[AB]_{ij}=0$ whenever $i<j$, by Definition LTM, $AB$ is lower triangular.
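As a quick numerical illustration (a minimal sketch in Python with NumPy, our choice of tool rather than the text's), we can multiply two random lower triangular matrices and confirm that every entry above the diagonal of the product is zero:

    import numpy as np

    # Illustration of Theorem PTMT: the product of two lower triangular
    # matrices is again lower triangular.
    rng = np.random.default_rng(42)
    A = np.tril(rng.standard_normal((5, 5)))  # zero out entries above the diagonal
    B = np.tril(rng.standard_normal((5, 5)))
    P = A @ B
    # Entries strictly above the diagonal of the product should all be zero.
    print(np.allclose(np.triu(P, k=1), 0))    # True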
The inverse of a triangular matrix is triangular, of the same type.
Theorem ITMT
Inverse of a Triangular Matrix is Triangular
Suppose that $A$ is a nonsingular matrix of size $n$ that is triangular. Then the inverse of $A$, $A^{-1}$, is triangular of the same type. Furthermore, the diagonal entries of $A^{-1}$ are the reciprocals of the corresponding diagonal entries of $A$. More precisely, $[A^{-1}]_{ii}=\bigl([A]_{ii}\bigr)^{-1}$ for $1\le i\le n$.
Proof   We give the proof for the case when $A$ is lower triangular, and leave the case when $A$ is upper triangular for you. Consider the process for computing the inverse of a matrix that is outlined in the proof of Theorem CINM. We augment $A$ with the size $n$ identity matrix, $I_n$, and row-reduce the resulting $n\times 2n$ matrix to reduced row-echelon form via the algorithm in Theorem REMEF. The proof involves tracking the peculiarities of this process in the case of a lower triangular matrix. Let $M$ denote this matrix, $M=[\,A\mid I_n\,]$.
First, none of the diagonal elements of $A$ is zero. By repeated expansion about the first row, the determinant of a lower triangular matrix can be seen to be the product of the diagonal entries (Theorem DER). If just one of these diagonal elements were zero, then the determinant of $A$ would be zero and $A$ would be singular by Theorem SMZD. Slightly violating the exact algorithm for row reduction, we can form a matrix $M'$ that is row-equivalent to $M$ by multiplying row $i$ by the nonzero scalar $\bigl([A]_{ii}\bigr)^{-1}$, for $1\le i\le n$. This sets $[M']_{ii}=1$ and $[M']_{i,n+i}=\bigl([A]_{ii}\bigr)^{-1}$, and leaves every zero entry of $M$ unchanged.
Let $M_j$ denote the matrix obtained from $M'$ after converting column $j$ to a pivot column. We can convert column $j$ of $M_{j-1}$ into a pivot column with a set of row operations of the form $\alpha R_j+R_k$ with $j+1\le k\le n$. The key observation here is that we add multiples of row $j$ only to higher-numbered rows. This means that none of the entries in rows $1$ through $j$ is changed, and since row $j$ has zeros in columns $j+1$ through $n$, none of the entries in rows $j+1$ through $n$ is changed in columns $j+1$ through $n$. The first $n$ columns of $M'$ form a lower triangular matrix with 1’s on the diagonal. In its conversion to the identity matrix through this sequence of row operations, it remains lower triangular with 1’s on the diagonal.
What happens in columns $n+1$ through $2n$ of $M'$? These columns began in $M$ as the identity matrix, and in $M'$ each diagonal entry was scaled to the reciprocal of the corresponding diagonal entry of $A$. Notice that, trivially, these final $n$ columns of $M'$ form a lower triangular matrix. Just as we argued for the first $n$ columns, the row operations that convert $M'$ into $M_n$ will preserve the lower triangular form in the final $n$ columns and preserve the exact values of the diagonal entries. By Theorem CINM, the final $n$ columns of $M_n$ form the inverse of $A$, and this matrix has the necessary properties advertised in the conclusion of this theorem.
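The same kind of numerical spot-check (again Python with NumPy, our choice) confirms both conclusions of Theorem ITMT for a lower triangular matrix with nonzero diagonal:

    import numpy as np

    # Illustration of Theorem ITMT: the inverse of a nonsingular lower
    # triangular matrix is lower triangular, with reciprocal diagonal entries.
    rng = np.random.default_rng(7)
    A = np.tril(rng.standard_normal((5, 5)))
    np.fill_diagonal(A, [2.0, -1.0, 4.0, 0.5, 3.0])  # nonzero diagonal, so A is nonsingular
    Ainv = np.linalg.inv(A)
    print(np.allclose(np.triu(Ainv, k=1), 0))          # True: inverse is lower triangular
    print(np.allclose(np.diag(Ainv), 1 / np.diag(A)))  # True: diagonals are reciprocals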
Not every matrix is diagonalizable, but every linear transformation has a matrix representation that is an upper triangular matrix, and the basis that achieves this representation is especially pleasing. Here’s the theorem.
Theorem UTMR
Upper Triangular Matrix Representation
Suppose that $T\colon V\to V$ is a linear transformation. Then there is a basis $B$ for $V$ such that the matrix representation of $T$ relative to $B$, $M_{B,B}^{T}$, is an upper triangular matrix. Each diagonal entry is an eigenvalue of $T$, and if $\lambda$ is an eigenvalue of $T$, then $\lambda$ occurs $\alpha_{T}(\lambda)$ times on the diagonal.
Proof   We begin with a proof by induction (Technique I) of the first statement in the conclusion of the theorem. We use induction on the dimension of $V$ to show that if $T\colon V\to V$ is a linear transformation, then there is a basis $B$ for $V$ such that the matrix representation of $T$ relative to $B$, $M_{B,B}^{T}$, is an upper triangular matrix.
To start suppose that $\dim(V)=1$. Choose any nonzero vector $v\in V$ and realize that $V=\langle\{v\}\rangle$. Then we can determine $T$ uniquely by $T(v)=\beta v$ for some $\beta\in\mathbb{C}$ (Theorem LTDB). This description of $T$ also gives us a matrix representation relative to the basis $B=\{v\}$ as the $1\times 1$ matrix with lone entry equal to $\beta$. And this matrix representation is upper triangular (Definition UTM).
For the induction step let $\dim(V)=m$, and assume the theorem is true for every linear transformation defined on a vector space of dimension less than $m$. By Theorem EMHE (suitably converted to the setting of a linear transformation), $T$ has at least one eigenvalue, and we denote this eigenvalue as $\lambda$. (We will remark later about how critical this step is.) We now consider properties of the linear transformation $T-\lambda I_V\colon V\to V$.
Let $x$ be an eigenvector of $T$ for $\lambda$. By definition $x\ne 0$. Then
$$\bigl(T-\lambda I_V\bigr)(x)=T(x)-\lambda I_V(x)=T(x)-\lambda x=\lambda x-\lambda x=0.$$
So $T-\lambda I_V$ is not injective, as it has a nontrivial kernel (Theorem KILT). With an application of Theorem RPNDD we bound the rank of $T-\lambda I_V$,
$$r\bigl(T-\lambda I_V\bigr)=\dim(V)-n\bigl(T-\lambda I_V\bigr)\le m-1.$$
Define $W$ to be the subspace of $V$ that is the range of $T-\lambda I_V$, $W=\mathcal{R}\bigl(T-\lambda I_V\bigr)$. We define a new linear transformation $S$, on $W$,
$$S\colon W\to W,\qquad S(w)=T(w).$$
This does not look like we have accomplished much, since the action of $S$ is identical to the action of $T$. For our purposes this will be a good thing. What is different is the domain and codomain. $S$ is defined on $W$, a vector space with dimension less than $m$, and so is susceptible to our induction hypothesis. Verifying that $S$ is really a linear transformation is almost entirely routine, with one exception. Employing $T$ in our definition of $S$ raises the possibility that the outputs of $S$ will not be contained within $W$ (but instead will lie inside $V$, but outside $W$). To examine this possibility, suppose that $w\in W$.
$$S(w)=T(w)=T(w)-\lambda w+\lambda w=\bigl(T-\lambda I_V\bigr)(w)+\lambda w$$
Since $W$ is the range of $T-\lambda I_V$, $\bigl(T-\lambda I_V\bigr)(w)\in W$. And by Property SC, $\lambda w\in W$. Finally, applying Property AC we see by closure that the sum is in $W$ and so we conclude that $S(w)\in W$. This argument convinces us that it is legitimate to define $S$ as we did with $W$ as the codomain.
$S$ is a linear transformation defined on a vector space with dimension less than $m$, so we can apply the induction hypothesis and conclude that $W$ has a basis, $C=\{w_1,\,w_2,\,w_3,\,\ldots,\,w_r\}$, such that the matrix representation of $S$ relative to $C$, $M_{C,C}^{S}$, is an upper triangular matrix.
By Theorem DSFOS there exists a second subspace of $V$, which we will call $U$, so that $V$ is a direct sum of $W$ and $U$, $V=W\oplus U$. Choose a basis $D=\{u_1,\,u_2,\,u_3,\,\ldots,\,u_{m-r}\}$ for $U$. So $\dim(U)=m-r$ by Theorem DSD, and $B=C\cup D$ is a basis for $V$ by Theorem DSLI and Theorem G. $B$ is the basis we desire. What does a matrix representation of $T$ look like, relative to $B$?
Since the definitions of $T$ and $S$ agree on $W$, the first $r$ columns of $M_{B,B}^{T}$ will have the upper triangular matrix representation of $S$ in the first $r$ rows. The remaining $m-r$ rows of these first $r$ columns will be all zeros since the outputs of $T$ on the vectors of $C$ are all contained in $W$. The situation for $T$ on the vectors of $D$ is not quite as pretty, but it is close.
For $1\le j\le m-r$, consider
$$\rho_B\bigl(T(u_j)\bigr)=\rho_B\bigl(\bigl(T-\lambda I_V\bigr)(u_j)+\lambda I_V(u_j)\bigr)=\rho_B\bigl(w+\lambda u_j\bigr)=\rho_B\bigl(a_1w_1+a_2w_2+\cdots+a_rw_r+\lambda u_j\bigr)$$
where $w=\bigl(T-\lambda I_V\bigr)(u_j)\in W$. The resulting coordinate vector has the scalars $a_1,\,a_2,\,\ldots,\,a_r$ in its first $r$ entries, $\lambda$ in entry $r+j$, and zeros elsewhere.
In the final equality of this chain, we have rewritten an element of the range of $T-\lambda I_V$ as a linear combination of the basis vectors, $w_1,\,w_2,\,w_3,\,\ldots,\,w_r$, for the range of $T-\lambda I_V$, $W$, using the scalars $a_1,\,a_2,\,a_3,\,\ldots,\,a_r$. If we incorporate these column vectors into the matrix representation $M_{B,B}^{T}$ we find $m-r$ occurrences of $\lambda$ on the diagonal, and any nonzero entries lying only in the first $r$ rows. Together with the $r\times r$ upper triangular representation of $S$ in the upper left-hand corner, the entire matrix representation is now clearly upper triangular. This completes the induction step, so for any linear transformation there is a basis that creates an upper triangular matrix representation.
We have one more statement in the conclusion of the theorem to verify. The eigenvalues of $T$, and their multiplicities, can be computed with the techniques of Chapter E relative to any matrix representation (Theorem EER). We take this approach with our upper triangular matrix representation $M_{B,B}^{T}$. Let $d_i$ be the diagonal entry of $M_{B,B}^{T}$ in row $i$ and column $i$. Then the characteristic polynomial, computed as a determinant (Definition CP) with repeated expansions about the first column, is
$$p_{M_{B,B}^{T}}(x)=(d_1-x)(d_2-x)(d_3-x)\cdots(d_m-x).$$
The roots of the polynomial equation $p_{M_{B,B}^{T}}(x)=0$ are the eigenvalues of the linear transformation (Theorem EMRCP). So each diagonal entry is an eigenvalue, and the eigenvalue $\lambda$ is repeated on the diagonal exactly $\alpha_{T}(\lambda)$ times (Definition AME).
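The second half of this conclusion is easy to see in a small numerical example (Python with NumPy, our choice): for an upper triangular matrix the eigenvalues, counted with algebraic multiplicity, are exactly the diagonal entries.

    import numpy as np

    # An upper triangular matrix whose diagonal repeats the entry 2 twice.
    T = np.array([[2.0, 5.0, -1.0],
                  [0.0, 2.0,  3.0],
                  [0.0, 0.0,  7.0]])
    print(np.sort(np.linalg.eigvals(T)))  # [2. 2. 7.]
    print(np.sort(np.diag(T)))            # [2. 2. 7.]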
A key step in this proof was the construction of the subspace $W$ with dimension strictly less than that of $V$. This required an eigenvalue/eigenvector pair, which was guaranteed to us by Theorem EMHE. Digging deeper, the proof of Theorem EMHE requires that we can factor polynomials completely, into linear factors. This will not always happen if our set of scalars is the reals, $\mathbb{R}$. So this is our final explanation of our choice of the complex numbers, $\mathbb{C}$, as our set of scalars. In $\mathbb{C}$ polynomials factor completely, so every matrix has at least one eigenvalue, and an inductive argument will get us to upper triangular matrix representations.
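For a concrete instance of this failure over $\mathbb{R}$, the matrix of a 90-degree rotation of the plane has characteristic polynomial $x^2+1$, which has no real roots, yet over $\mathbb{C}$ it factors as $(x-i)(x+i)$. A one-line check (Python with NumPy, our choice):

    import numpy as np

    # A real matrix with no real eigenvalues: rotation by 90 degrees.
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    print(np.linalg.eigvals(R))  # [0.+1.j 0.-1.j], complex eigenvalues only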
In the case of linear transformations defined on $\mathbb{C}^n$, we can use the inner product (Definition IP) profitably to fine-tune the basis that yields an upper triangular matrix representation. Recall that the adjoint of the matrix $A$ (Definition A) is written as $A^{*}$.
Theorem OBUTR
Orthonormal Basis for Upper Triangular Representation
Suppose that $A$ is a square matrix. Then there is a unitary matrix $U$, and an upper triangular matrix $T$, such that
$$U^{*}AU=T$$
and $T$ has the eigenvalues of $A$ as the entries of the diagonal.
Proof   This theorem is a statement about matrices and similarity. We can convert it to a statement about linear transformations, matrix representations and bases (Theorem SCB). Suppose that $A$ is an $n\times n$ matrix, and define the linear transformation $S\colon\mathbb{C}^n\to\mathbb{C}^n$ by $S(x)=Ax$. Then Theorem UTMR gives us a basis $B$ for $\mathbb{C}^n$ such that a matrix representation of $S$ relative to $B$, $M_{B,B}^{S}$, is upper triangular.
Now convert the basis $B=\{x_1,\,x_2,\,x_3,\,\ldots,\,x_n\}$ into an orthogonal basis, $C=\{y_1,\,y_2,\,y_3,\,\ldots,\,y_n\}$, by an application of the Gram-Schmidt procedure (Theorem GSP). This is a messy business computationally, but here we have an excellent illustration of the power of the Gram-Schmidt procedure. We need only be sure that $B$ is linearly independent and spans $\mathbb{C}^n$, and then we know that $C$ is linearly independent, spans $\mathbb{C}^n$ and is also an orthogonal set. We will now consider the matrix representation of $S$ relative to $C$ (rather than $B$). The application of the Gram-Schmidt procedure creates each vector of $C$, say $y_j$, as the difference of $x_j$ and a linear combination of $y_1,\,y_2,\,\ldots,\,y_{j-1}$. We are not concerned here with the actual values of the scalars in this linear combination, so we will write
$$y_j=x_j-\sum_{k=1}^{j-1}a_{jk}y_k$$
where the $a_{jk}$ are shorthand for the scalars. The equation above is in a form useful for creating the basis $C$ from the basis $B$. To better understand the relationship between $B$ and $C$ convert it to read
$$x_j=y_j+\sum_{k=1}^{j-1}a_{jk}y_k.$$
In this form, we recognize that the change-of-basis matrix $C_{B,C}$ (Definition CBM), whose columns are the coordinate vectors $\rho_C(x_j)$, is an upper triangular matrix. By Theorem SCB we have
$$M_{C,C}^{S}=C_{B,C}\,M_{B,B}^{S}\,\bigl(C_{B,C}\bigr)^{-1}.$$
The inverse of an upper triangular matrix is upper triangular (Theorem ITMT), and the product of two upper triangular matrices is again upper triangular (Theorem PTMT). So $M_{C,C}^{S}$ is an upper triangular matrix.
Now, multiply each vector of $C$ by a nonzero scalar, so that the result has norm 1. In this way we create a new basis $X=\{z_1,\,z_2,\,z_3,\,\ldots,\,z_n\}$, where $z_j=\frac{1}{\Vert y_j\Vert}\,y_j$, which is an orthonormal set (Definition ONS). Note that the change-of-basis matrix $C_{C,X}$ is a diagonal matrix with nonzero diagonal entries equal to the norms of the vectors in $C$.
Now we can convert our results into the language of matrices. Let $E$ be the basis of $\mathbb{C}^n$ formed with the standard unit vectors (Definition SUV). Then the matrix representation of $S$ relative to $E$ is simply $A$, $M_{E,E}^{S}=A$. The change-of-basis matrix $C_{X,E}$ has columns that are simply the vectors in $X$, the orthonormal basis. As such, Theorem CUMOS tells us that $C_{X,E}$ is a unitary matrix, and by Definition UM has an inverse equal to its adjoint. Write $U=C_{X,E}$. We have
$$U^{*}AU=U^{-1}AU=\bigl(C_{X,E}\bigr)^{-1}M_{E,E}^{S}\,C_{X,E}=M_{X,X}^{S}=C_{C,X}\,M_{C,C}^{S}\,\bigl(C_{C,X}\bigr)^{-1}.$$
The inverse of a diagonal matrix is also a diagonal matrix, and so this final expression is the product of three upper triangular matrices, and so is again upper triangular (Theorem PTMT). Thus the desired upper triangular matrix, $T$, is the matrix representation of $S$ relative to the orthonormal basis $X$, $M_{X,X}^{S}$, and this matrix has the necessary properties advertised in the conclusion of this theorem.
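Computationally, the factorization promised by Theorem OBUTR is known as the Schur decomposition, and library routines will produce it directly. The sketch below uses SciPy's schur function (our choice of tool; the theorem itself says nothing about software) and checks each conclusion:

    import numpy as np
    from scipy.linalg import schur

    # Illustration of Theorem OBUTR via the Schur decomposition A = Z T Z^*,
    # where Z plays the role of the unitary matrix U in the theorem.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    T, Z = schur(A, output='complex')
    print(np.allclose(Z.conj().T @ Z, np.eye(4)))  # True: Z is unitary
    print(np.allclose(np.tril(T, k=-1), 0))        # True: T is upper triangular
    print(np.allclose(np.sort_complex(np.diag(T)),
                      np.sort_complex(np.linalg.eigvals(A))))  # True: diagonal = eigenvalues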
Normal matrices comprise a broad class of interesting matrices, many of which we have met already. But they are most interesting since they define exactly which matrices we can diagonalize via a unitary matrix. This is the upcoming Theorem OD. Here’s the definition.
Definition NRML
Normal Matrix
The square matrix $A$ is normal if $A^{*}A=AA^{*}$.
So a normal matrix commutes with its adjoint. Part of the beauty of this definition is that it includes many other types of matrices. A diagonal matrix will commute with its adjoint, since the adjoint is again diagonal and the entries are just conjugates of the entries of the original diagonal matrix. A Hermitian (self-adjoint) matrix (Definition HM) will trivially commute with its adjoint, since the two matrices are the same. A real, symmetric matrix is Hermitian, so these matrices are also normal. A unitary matrix (Definition UM) has its adjoint as its inverse, and inverses commute (Theorem OSIS), so unitary matrices are normal. Another class of normal matrices is the skew-symmetric matrices. However, these broad descriptions still do not capture all of the normal matrices, as the next example shows.
Example ANM
A normal matrix
Let
$$A=\begin{bmatrix}1&-1\\1&1\end{bmatrix}.$$
Then
$$AA^{*}=\begin{bmatrix}1&-1\\1&1\end{bmatrix}\begin{bmatrix}1&1\\-1&1\end{bmatrix}=\begin{bmatrix}2&0\\0&2\end{bmatrix}=\begin{bmatrix}1&1\\-1&1\end{bmatrix}\begin{bmatrix}1&-1\\1&1\end{bmatrix}=A^{*}A$$
so we see by Definition NRML that $A$ is normal. However, $A$ is not symmetric (hence, as a real matrix, not Hermitian), not unitary, and not skew-symmetric.
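A machine check of this example (Python with NumPy, our choice) confirms the computation; since $A$ is real, its adjoint is just its transpose:

    import numpy as np

    # Verifying Example ANM: A commutes with its adjoint, so A is normal.
    A = np.array([[1.0, -1.0],
                  [1.0,  1.0]])
    print(A @ A.T)                        # [[2. 0.] [0. 2.]]
    print(np.allclose(A @ A.T, A.T @ A))  # True: A is normal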
A diagonal matrix is very easy to work with in matrix multiplication (Example HPDM) and an orthonormal basis also has many advantages (Theorem COB). How about converting a matrix to a diagonal matrix through a similarity transformation using a unitary matrix (i.e. build a diagonal matrix representation relative to an orthonormal basis)? That’d be fantastic! When can we do this? We can always accomplish this feat when the matrix is normal, and normal matrices are the only ones that behave this way. Here’s the theorem.
Theorem OD
Orthonormal Diagonalization
Suppose that $A$ is a square matrix. Then there is a unitary matrix $U$ and a diagonal matrix $D$, with diagonal entries equal to the eigenvalues of $A$, such that $U^{*}AU=D$ if and only if $A$ is a normal matrix.
Proof   ($\Rightarrow$) Suppose there is a unitary matrix $U$ that diagonalizes $A$, resulting in $D$, i.e. $U^{*}AU=D$, or equivalently $A=UDU^{*}$. We check the normality of $A$, using the fact that a diagonal matrix commutes with its adjoint ($D^{*}D=DD^{*}$),
$$A^{*}A=\bigl(UDU^{*}\bigr)^{*}UDU^{*}=UD^{*}U^{*}UDU^{*}=UD^{*}DU^{*}=UDD^{*}U^{*}=UDU^{*}UD^{*}U^{*}=UDU^{*}\bigl(UDU^{*}\bigr)^{*}=AA^{*}$$
So by Definition NRML, $A$ is a normal matrix.
($\Leftarrow$) For the converse, suppose that $A$ is a normal matrix. Whether or not $A$ is normal, Theorem OBUTR provides a unitary matrix $U$ and an upper triangular matrix $T$, whose diagonal entries are the eigenvalues of $A$, and such that $U^{*}AU=T$. With the added condition that $A$ is normal, we will determine that the entries of $T$ above the diagonal must be all zero. Here we go. First we show that $T$ is normal.
$$T^{*}T=\bigl(U^{*}AU\bigr)^{*}U^{*}AU=U^{*}A^{*}UU^{*}AU=U^{*}A^{*}AU=U^{*}AA^{*}U=U^{*}AUU^{*}A^{*}U=U^{*}AU\bigl(U^{*}AU\bigr)^{*}=TT^{*}$$
So by Definition NRML, $T$ is a normal matrix.
We can translate the normality of $T$ into the entry-by-entry statement $[T^{*}T]_{dd}=[TT^{*}]_{dd}$. We now establish an equality we will use repeatedly. For $1\le d\le n$, since $T$ is upper triangular, $[T]_{kd}=0$ when $k>d$ and $[T]_{dk}=0$ when $k<d$, so
$$\sum_{k=1}^{d}\bigl|[T]_{kd}\bigr|^{2}=[T^{*}T]_{dd}=[TT^{*}]_{dd}=\sum_{k=d}^{n}\bigl|[T]_{dk}\bigr|^{2}.$$
To conclude, we use the above equality repeatedly, beginning with $d=1$, and discover, row by row, that the entries above the diagonal of $T$ are all zero. The key observation is that a sum of squares can only equal zero when each term of the sum is zero. For $d=1$ we have
$$\bigl|[T]_{11}\bigr|^{2}=\sum_{k=1}^{1}\bigl|[T]_{k1}\bigr|^{2}=\sum_{k=1}^{n}\bigl|[T]_{1k}\bigr|^{2}=\bigl|[T]_{11}\bigr|^{2}+\sum_{k=2}^{n}\bigl|[T]_{1k}\bigr|^{2}$$
which forces the conclusions
$$[T]_{12}=0\qquad[T]_{13}=0\qquad[T]_{14}=0\qquad\cdots\qquad[T]_{1n}=0.$$
For $d=2$ we use the same equality, but also incorporate the portion of the above conclusions that says $[T]_{12}=0$,
$$\bigl|[T]_{22}\bigr|^{2}=\bigl|[T]_{12}\bigr|^{2}+\bigl|[T]_{22}\bigr|^{2}=\sum_{k=1}^{2}\bigl|[T]_{k2}\bigr|^{2}=\sum_{k=2}^{n}\bigl|[T]_{2k}\bigr|^{2}=\bigl|[T]_{22}\bigr|^{2}+\sum_{k=3}^{n}\bigl|[T]_{2k}\bigr|^{2}$$
which forces the conclusions
$$[T]_{23}=0\qquad[T]_{24}=0\qquad[T]_{25}=0\qquad\cdots\qquad[T]_{2n}=0.$$
We can repeat this process for the subsequent values of $d=3,\,4,\,5,\,\ldots,\,n-1$. Notice that it is critical we do this in order, since we need to employ portions of each of the previous conclusions about rows having zero entries in order to successfully get the same conclusion for later rows. Eventually, we conclude that all of the nondiagonal entries of $T$ are zero, so the extra assumption of normality forces $T$ to be diagonal.
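We can watch this conclusion happen numerically (Python with SciPy, our choice): for the normal matrix of Example ANM, the upper triangular matrix produced by a Schur decomposition is already diagonal.

    import numpy as np
    from scipy.linalg import schur

    # Illustration of Theorem OD: Schur triangularization of a normal matrix
    # yields a diagonal matrix D with U^* A U = D.
    A = np.array([[1.0, -1.0],
                  [1.0,  1.0]])               # normal, but not Hermitian
    D, U = schur(A, output='complex')
    print(np.allclose(D, np.diag(np.diag(D))))  # True: D is diagonal
    print(np.allclose(U @ D @ U.conj().T, A))   # True: A = U D U^*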
We can rearrange the conclusion of this theorem to read $A=UDU^{*}$. Recall that a unitary matrix can be viewed as a geometry-preserving transformation (isometry), or more loosely as a rotation of sorts. Then a matrix-vector product, $Ax$, can be viewed instead as a sequence of three transformations. $U^{*}$ is unitary, so is a rotation. Since $D$ is diagonal, it just multiplies each entry of a vector by a scalar. Diagonal entries that are positive or negative, with absolute values bigger or smaller than 1, evoke descriptions like reflection, expansion and contraction. Generally we can say that $D$ “stretches” a vector in each component. Final multiplication by $U$ undoes (inverts) the rotation performed by $U^{*}$. So a normal matrix is a rotation-stretch-rotation transformation.
The orthonormal basis formed from the columns of $U$ can be viewed as a system of mutually perpendicular axes. The rotation by $U^{*}$ allows the transformation by $A$ to be replaced by the simple transformation $D$ along these axes, and then $U$ brings the result back to the original coordinate system. For this reason Theorem OD is known as the Principal Axis Theorem.
The columns of the unitary matrix in Theorem OD create an especially nice basis for use with the normal matrix. We record this observation as a theorem.
Theorem OBNM
Orthonormal Bases and Normal Matrices
Suppose that $A$ is a normal matrix of size $n$. Then there is an orthonormal basis of $\mathbb{C}^n$ composed of eigenvectors of $A$.
Proof   Let $U$ be the unitary matrix promised by Theorem OD and let $D$ be the resulting diagonal matrix. The desired set of vectors is formed by collecting the columns of $U$ into a set. Theorem CUMOS says this set of columns is orthonormal. Since $U$ is nonsingular (Theorem UMI), Theorem CNMB says the set is a basis.
Since $A$ is diagonalized by $U$, the diagonal entries of the matrix $D$ are the eigenvalues of $A$. An argument exactly like the second half of the proof of Theorem DC shows that each vector of the basis is an eigenvector of $A$.
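Continuing the computational illustration from Theorem OD (Python with SciPy, our choice), the columns of the unitary matrix do turn out to be an orthonormal set of eigenvectors:

    import numpy as np
    from scipy.linalg import schur

    # Illustration of Theorem OBNM: the columns of U are orthonormal
    # eigenvectors of the normal matrix A, with eigenvalues on D's diagonal.
    A = np.array([[1.0, -1.0],
                  [1.0,  1.0]])
    D, U = schur(A, output='complex')
    print(np.allclose(U.conj().T @ U, np.eye(2)))           # True: columns are orthonormal
    for j in range(2):
        print(np.allclose(A @ U[:, j], D[j, j] * U[:, j]))  # True: each column is an eigenvector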
In a vague way Theorem OBNM is an improvement on Theorem HMOE, which said that eigenvectors of a Hermitian matrix for different eigenvalues are always orthogonal. Hermitian matrices are normal and we see that we can find at least one basis where every pair of eigenvectors is orthogonal. Notice that this is not a generalization, since Theorem HMOE states a weak result which applies to many (but not all) pairs of eigenvectors, while Theorem OBNM is a seemingly stronger result, but only asserts that there is one collection of eigenvectors with the stronger property.