From A First Course in Linear Algebra
Version 2.00
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
This section contributed by Andy Zimmer.
The matrix trace is a function that sends square matrices to scalars. In some ways it is reminiscent of the determinant. And like the determinant, it has many useful and surprising properties.
Definition T
Trace
Suppose A is a square matrix of size n. Then the trace of A, t\left (A\right ), is the sum of the diagonal entries of A. Symbolically,

t\left (A\right ) = \sum_{i=1}^{n}{\left [A\right ]}_{ii}

(This definition contains Notation T.) △
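As a quick numerical illustration (not part of the original text), the following Python/NumPy snippet sums the diagonal entries directly, as in Definition T, and checks the result against NumPy's built-in `np.trace`:

```python
import numpy as np

# A 3x3 matrix; the trace is the sum of its diagonal entries.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Sum the diagonal entries directly, as in Definition T.
trace_by_hand = sum(A[i, i] for i in range(A.shape[0]))

# NumPy's built-in trace agrees: 1 + 5 + 9 = 15.
print(trace_by_hand, np.trace(A))  # 15.0 15.0
```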
The next three proofs make for excellent practice. In some books they would be left as exercises for the reader, as they are all “trivial” in the sense that they do not rely on anything but the definition of the matrix trace.
Theorem TL
Trace is Linear
Suppose A and B are square matrices of size n. Then t\left (A + B\right ) = t\left (A\right ) + t\left (B\right ). Furthermore, if α ∈ ℂ, then t\left (αA\right ) = αt\left (A\right ). □
Proof These properties are exactly those required for a linear transformation. To prove these results we just manipulate sums,

t\left (A + B\right ) = \sum_{i=1}^{n}{\left [A + B\right ]}_{ii} = \sum_{i=1}^{n}\left ({\left [A\right ]}_{ii} + {\left [B\right ]}_{ii}\right ) = \sum_{i=1}^{n}{\left [A\right ]}_{ii} + \sum_{i=1}^{n}{\left [B\right ]}_{ii} = t\left (A\right ) + t\left (B\right )

The second part is as straightforward as the first,

t\left (αA\right ) = \sum_{i=1}^{n}{\left [αA\right ]}_{ii} = \sum_{i=1}^{n}α{\left [A\right ]}_{ii} = α\sum_{i=1}^{n}{\left [A\right ]}_{ii} = αt\left (A\right )

■
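A numerical spot-check of Theorem TL with randomly generated matrices (an illustration, not a proof; not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
alpha = 2.5

# t(A + B) = t(A) + t(B)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# t(alpha * A) = alpha * t(A)
assert np.isclose(np.trace(alpha * A), alpha * np.trace(A))

print("linearity checks passed")
```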
Theorem TSRM
Trace is Symmetric with Respect to Multiplication
Suppose A and B are square matrices of size n. Then t\left (AB\right ) = t\left (BA\right ). □

Proof

t\left (AB\right ) = \sum_{i=1}^{n}{\left [AB\right ]}_{ii} = \sum_{i=1}^{n}\sum_{k=1}^{n}{\left [A\right ]}_{ik}{\left [B\right ]}_{ki} = \sum_{k=1}^{n}\sum_{i=1}^{n}{\left [B\right ]}_{ki}{\left [A\right ]}_{ik} = \sum_{k=1}^{n}{\left [BA\right ]}_{kk} = t\left (BA\right )

■
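Even for matrices that do not commute, the traces of AB and BA agree. A quick NumPy check (illustration only, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# AB and BA are generally different matrices...
assert not np.allclose(A @ B, B @ A)

# ...but their traces coincide (Theorem TSRM).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

print("t(AB) == t(BA) check passed")
```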
Theorem TIST
Trace is Invariant Under Similarity Transformations
Suppose A and S are square matrices of size n and S is invertible. Then t\left ({S}^{−1}AS\right ) = t\left (A\right ). □

Proof Invariant means constant under some operation. In this case the operation is a similarity transformation. A lengthy exercise (but possibly an educational one) would be to prove this result without referencing Theorem TSRM. But here we will,

t\left ({S}^{−1}AS\right ) = t\left (({S}^{−1}A)S\right ) = t\left (S({S}^{−1}A)\right ) = t\left ((S{S}^{−1})A\right ) = t\left (A\right )

where the second equality is Theorem TSRM. ■
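A numerical illustration of Theorem TIST (not part of the original text), using a random invertible matrix S:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))

# A random matrix is invertible with probability 1; verify anyway.
S = rng.standard_normal((n, n))
assert abs(np.linalg.det(S)) > 1e-8

# t(S^{-1} A S) = t(A)  (Theorem TIST)
similar = np.linalg.inv(S) @ A @ S
assert np.isclose(np.trace(similar), np.trace(A))

print("similarity invariance check passed")
```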
Now we could define the trace of a linear transformation as the trace of any matrix representation of the transformation. Would this definition be well-defined? That is, will two different representations of the same linear transformation always have the same trace? Why? (Think Theorem SCB.) We will now prove one of the most interesting and surprising results about the trace.
Theorem TSE
Trace is the Sum of the Eigenvalues
Suppose that A is a square matrix of size n with distinct eigenvalues {λ}_{1},{λ}_{2},{λ}_{3},\mathop{…},{λ}_{k}. Then

t\left (A\right ) = \sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ){λ}_{i}

□
Proof It is amazing that the eigenvalues would have anything to do with the sum of the diagonal entries. Our proof will rely on double counting. We will demonstrate two different ways of counting the same thing, thereby proving equality. Our object of interest is the coefficient of {x}^{n−1} in the characteristic polynomial of A (Definition CP), which will be denoted {α}_{n−1}. From the proof of Theorem NEM we have,

{p}_{A}\left (x\right ) = {(−1)}^{n}{(x − {λ}_{1})}^{{α}_{A}\left ({λ}_{1}\right )}{(x − {λ}_{2})}^{{α}_{A}\left ({λ}_{2}\right )}\mathop{…}{(x − {λ}_{k})}^{{α}_{A}\left ({λ}_{k}\right )}
First we want to prove that {α}_{n−1} is equal to {(−1)}^{n+1}\sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ){λ}_{i} and to do this we will use a straightforward counting argument. Induction can be used here as well (try it), but the intuitive approach is a much stronger technique. Let’s imagine creating each term one by one from the extended product. How do we do this? From each (x − {λ}_{i}) we pick either an x or a {λ}_{i}. But we are only interested in the terms that result in x to the power n − 1. As \sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ) = n, we have n factors of the form (x − {λ}_{i}). Then to get terms with {x}^{n−1} we need to pick x’s in every (x − {λ}_{i}), except one. Since we have n linear factors there are n ways to do this, namely each eigenvalue represented as many times as its algebraic multiplicity. Now we have to take into account the sign of each term. As we pick n − 1 x’s and one {λ}_{i} (which has a negative sign in the linear factor) we get a factor of − 1. Then we have to take into account the {(−1)}^{n} in the characteristic polynomial. Thus {α}_{n−1} is the sum of these terms,

{α}_{n−1} = {(−1)}^{n}\sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right )(−{λ}_{i}) = {(−1)}^{n+1}\sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ){λ}_{i}
Now we will show that {α}_{n−1} is also equal to {(−1)}^{n−1}t\left (A\right ). For this we will proceed by induction on the size of A. If A is a 1 × 1 square matrix then {p}_{A}\left (x\right ) =\mathop{ det} \left (A − x{I}_{n}\right ) = \left ({\left [A\right ]}_{11} − x\right ) and {(−1)}^{1−1}t\left (A\right ) = {\left [A\right ]}_{11}. With our base case in hand let’s assume A is a square matrix of size n. By Definition CP and expansion about row 1 (Theorem DER),

{p}_{A}\left (x\right ) =\mathop{ det} \left (A − x{I}_{n}\right ) = \sum_{i=1}^{n}{(−1)}^{1+i}{\left [A − x{I}_{n}\right ]}_{1i}\mathop{det} \left ((A − x{I}_{n})\left (1|i\right )\right )
First let’s consider the maximum degree of {\left [A − x{I}_{n}\right ]}_{1i}\mathop{ det} \left ((A − x{I}_{n})\left (1|i\right )\right ) when i\mathrel{≠}1. For polynomials, the degree of f, denoted d(f), is the highest power of x in the expression f(x). A well known result of this definition is: if f(x) = g(x)h(x) then d(f) = d(g) + d(h) (can you prove this?). Now {\left [A − x{I}_{n}\right ]}_{1i} has degree zero when i\mathrel{≠}1. Furthermore (A − x{I}_{n})\left (1|i\right ) has n − 1 rows, one of which has all of its entries of degree zero, since column i is removed. The other n − 2 rows have one entry with degree one and the remainder of degree zero. Then by Exercise T.T30, the maximum degree of {\left [A − x{I}_{n}\right ]}_{1i}\mathop{ det} \left ((A − x{I}_{n})\left (1|i\right )\right ) is n − 2. So these terms will not affect the coefficient of {x}^{n−1}. Now we are free to focus all of our attention on the term {\left [A − x{I}_{n}\right ]}_{11}\mathop{ det} \left ((A − x{I}_{n})\left (1|1\right )\right ). As A\left (1|1\right ) is an (n − 1) × (n − 1) matrix the induction hypothesis tells us that \mathop{ det} \left ((A − x{I}_{n})\left (1|1\right )\right ) has a coefficient of {(−1)}^{n−2}t\left (A\left (1|1\right )\right ) for {x}^{n−2}. We also note that the proof of Theorem NEM tells us that the leading coefficient of \mathop{ det} \left ((A − x{I}_{n})\left (1|1\right )\right ) is {(−1)}^{n−1}. Then,

{\left [A − x{I}_{n}\right ]}_{11}\mathop{ det} \left ((A − x{I}_{n})\left (1|1\right )\right ) = \left ({\left [A\right ]}_{11} − x\right )\left ({(−1)}^{n−1}{x}^{n−1} + {(−1)}^{n−2}t\left (A\left (1|1\right )\right ){x}^{n−2} + \mathop{…}\right )

Expanding the product shows {α}_{n−1} (the coefficient of {x}^{n−1}) to be

{α}_{n−1} = {(−1)}^{n−1}{\left [A\right ]}_{11} − {(−1)}^{n−2}t\left (A\left (1|1\right )\right ) = {(−1)}^{n−1}\left ({\left [A\right ]}_{11} + t\left (A\left (1|1\right )\right )\right ) = {(−1)}^{n−1}t\left (A\right )

With two expressions for {α}_{n−1}, we have our result,

{(−1)}^{n−1}t\left (A\right ) = {(−1)}^{n+1}\sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ){λ}_{i}\qquad ⇒\qquad t\left (A\right ) = \sum_{i=1}^{k}{α}_{A}\left ({λ}_{i}\right ){λ}_{i}

■
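Theorem TSE is easy to check numerically (an illustration, not part of the original text). NumPy's `eigvals` returns all n eigenvalues, each repeated according to its algebraic multiplicity, which matches the weighted sum in the theorem:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))

# np.linalg.eigvals returns all n eigenvalues, each repeated
# according to its algebraic multiplicity (possibly complex).
eigenvalues = np.linalg.eigvals(A)

# Their sum equals the trace (Theorem TSE); the imaginary
# parts cancel for a real matrix.
assert np.isclose(np.sum(eigenvalues), np.trace(A))

print("trace equals sum of eigenvalues")
```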
T10 Prove there are no square matrices A and B such that AB − BA = {I}_{n}.
Contributed by Andy Zimmer
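One observation worth checking numerically before attempting T10 (a hint, not a solution; not part of the original text): by Theorems TL and TSRM, the trace of any expression of the form AB − BA is zero, while t\left ({I}_{n}\right ) = n.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# By Theorems TL and TSRM, t(AB - BA) = t(AB) - t(BA) = 0 ...
assert np.isclose(np.trace(A @ B - B @ A), 0.0)

# ... while t(I_n) = n, which is nonzero.
assert np.trace(np.eye(n)) == n

print("AB - BA always has zero trace")
```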
T12 Assume A is a square matrix of size n. Prove t\left (A\right ) = t\left ({A}^{t}\right ).
Contributed by Andy Zimmer
T20 If {T}_{n} = \left \{M ∈ {M}_{nn}\ \vert \ t\left (M\right ) = 0\right \} then prove {T}_{n} is a subspace of {M}_{nn} and determine its dimension.
Contributed by Andy Zimmer
T30 Assume A is an n × n matrix with polynomial entries. Define md(A,i) to be the maximum degree of the entries in row i. Prove that d(\mathop{det} \left (A\right )) ≤ md(A, 1) + md(A, 2) + \mathop{\mathop{…}} + md(A,n). (Hint: If f(x) = h(x) + g(x), then d(f) ≤\text{max}\{d(h),d(g)\}.)
Contributed by Andy Zimmer Solution [2331]
T40 If A is a square matrix, the matrix exponential is defined as

{e}^{A} = \sum_{k=0}^{∞}\frac{{A}^{k}}{k!} = {I}_{n} + A + \frac{{A}^{2}}{2!} + \frac{{A}^{3}}{3!} + \mathop{…}

Prove that \mathop{ det} \left ({e}^{A}\right ) = {e}^{t\left (A\right )}. (You might want to give some thought to the convergence of the infinite sum as well.)
Contributed by Andy Zimmer
T30 Contributed by Andy Zimmer Statement [2329]
We will proceed by induction. If A is a square matrix of size 1, then clearly d(\mathop{det} \left (A\right )) ≤ md(A, 1). Now assume A is a square matrix of size n; then by Theorem DER,

\mathop{det} \left (A\right ) = \sum_{j=1}^{n}{(−1)}^{1+j}{\left [A\right ]}_{1,j}\mathop{det} \left (A\left (1|j\right )\right )
Let’s consider the degree of term j, {(−1)}^{1+j}{\left [A\right ]}_{ 1,j}\mathop{ det} \left (A\left (1|j\right )\right ). By definition of the function md, d({\left [A\right ]}_{1,j}) ≤ md(A, 1), since {\left [A\right ]}_{1,j} lies in row 1. We use our induction hypothesis to examine the other part of the product, which tells us that

d(\mathop{det} \left (A\left (1|j\right )\right )) ≤ md(A\left (1|j\right ), 1) + md(A\left (1|j\right ), 2) + \mathop{…} + md(A\left (1|j\right ),n − 1)
Furthermore, by definition of A\left (1|j\right ) (Definition SM), every entry in row i of A\left (1|j\right ) also appears in row i + 1 of A, so

md(A\left (1|j\right ),i) ≤ md(A,i + 1)\qquad 1 ≤ i ≤ n − 1

So,

d(\mathop{det} \left (A\left (1|j\right )\right )) ≤ md(A, 2) + md(A, 3) + \mathop{…} + md(A,n)

Then using the property that if f(x) = g(x)h(x) then d(f) = d(g) + d(h),

d({\left [A\right ]}_{1,j}\mathop{ det} \left (A\left (1|j\right )\right )) = d({\left [A\right ]}_{1,j}) + d(\mathop{det} \left (A\left (1|j\right )\right )) ≤ md(A, 1) + md(A, 2) + \mathop{…} + md(A,n)

As j is arbitrary, the degree of every term in the determinant expansion is so bounded. Finally, using the fact that if f(x) = g(x) + h(x) then d(f) ≤\text{max}\{d(h),d(g)\}, we have

d(\mathop{det} \left (A\right )) ≤ md(A, 1) + md(A, 2) + \mathop{…} + md(A,n)

■
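The bound in T30 can be illustrated symbolically (not part of the original text; assumes SymPy is available). The matrix below is an arbitrary example with polynomial entries:

```python
import sympy as sp

x = sp.symbols("x")

# A 3x3 matrix with polynomial entries (an arbitrary example).
A = sp.Matrix([[x**2 + 1, 3,       x],
               [2*x,      x**3,    1],
               [5,        x + 4,   x**2]])

# md(A, i): the maximum degree among the entries of row i.
md = [max(sp.degree(A[i, j], x) for j in range(A.cols))
      for i in range(A.rows)]

det_degree = sp.degree(sp.expand(A.det()), x)

# d(det(A)) <= md(A,1) + md(A,2) + ... + md(A,n)
assert det_degree <= sum(md)

print(det_degree, sum(md))  # 7 7: the bound is attained here
```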