Section MO  Matrix Operations

From A First Course in Linear Algebra
Version 2.90
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

In this section we will back up and start simple. First, a definition of a totally general set of matrices.

Definition VSM
Vector Space of m × n Matrices
The vector space {M}_{mn} is the set of all m × n matrices with entries from the set of complex numbers.

(This definition contains Notation VSM.)

Subsection MEASM: Matrix Equality, Addition, Scalar Multiplication

Just as we made, and used, a careful definition of equality for column vectors, so too, we have precise definitions for matrices.

Definition ME
Matrix Equality
The m × n matrices A and B are equal, written A = B, provided {\left [A\right ]}_{ij} ={ \left [B\right ]}_{ij} for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.

(This definition contains Notation ME.)

So equality of matrices translates to the equality of complex numbers, on an entry-by-entry basis. Notice that we now have yet another definition that uses the symbol “=” for shorthand. Whenever a theorem has a conclusion saying two matrices are equal (think about your objects), we will consider appealing to this definition as a way of formulating the top-level structure of the proof. We will now define two operations on the set {M}_{mn}. Again, we will overload a symbol (‘+’) and a convention (juxtaposition for scalar multiplication).

Definition MA
Matrix Addition
Given the m × n matrices A and B, define the sum of A and B as an m × n matrix, written A + B, according to

\eqalignno{ {\left [A + B\right ]}_{ij} & ={ \left [A\right ]}_{ij} +{ \left [B\right ]}_{ij} & &1 ≤ i ≤ m,\ 1 ≤ j ≤ n }

(This definition contains Notation MA.)

So matrix addition takes two matrices of the same size and combines them (in a natural way!) to create a new matrix of the same size. Perhaps this is the “obvious” thing to do, but it doesn’t relieve us from the obligation to state it carefully.

Example MA
Addition of two matrices in {M}_{23}
If

\eqalignno{ A = \left [\array{ 2&−3& 4\cr 1& 0 &−7 } \right ] & &B = \left [\array{ 6&2&−4\cr 3&5 & 2 } \right ] & & & & }

then

A+B = \left [\array{ 2&−3& 4\cr 1& 0 &−7 } \right ]+\left [\array{ 6&2&−4\cr 3&5 & 2 } \right ] = \left [\array{ 2 + 6&−3 + 2&4 + (−4)\cr 1 + 3 & 0 + 5 & −7 + 2 } \right ] = \left [\array{ 8&−1& 0\cr 4& 5 &−5 } \right ]

Our second operation takes two objects of different types, specifically a number and a matrix, and combines them to create another matrix. As with vectors, in this context we call a number a scalar in order to emphasize that it is not a matrix.

Definition MSM
Matrix Scalar Multiplication
Given the m × n matrix A and the scalar α ∈ ℂ, the scalar multiple of A is an m × n matrix, written αA and defined according to

\eqalignno{ {\left [αA\right ]}_{ij} & = α{\left [A\right ]}_{ij} & &1 ≤ i ≤ m,\ 1 ≤ j ≤ n }

(This definition contains Notation MSM.)

Notice again that we have yet another kind of multiplication, and it is again written by putting two symbols side-by-side. Computationally, matrix scalar multiplication is very easy.

Example MSM
Scalar multiplication in {M}_{32}
If

A = \left [\array{ 2 &8\cr −3 &5 \cr 0 &1 } \right ]

and α = 7, then

αA = 7\left [\array{ 2 &8\cr −3 &5 \cr 0 &1 } \right ] = \left [\array{ 7(2) &7(8) \cr 7(−3)&7(5) \cr 7(0) &7(1) } \right ] = \left [\array{ 14 &56\cr −21 &35 \cr 0 & 7 } \right ]

Subsection VSP: Vector Space Properties

With definitions of matrix addition and scalar multiplication we can now state, and prove, several properties of each operation, and some properties that involve their interplay. We now collect ten of them here for later reference.

Theorem VSPM
Vector Space Properties of Matrices
Suppose that {M}_{mn} is the set of all m × n matrices (Definition VSM) with addition and scalar multiplication as defined in Definition MA and Definition MSM. Then

  1. ACM Additive Closure, Matrices: If A, B ∈ {M}_{mn}, then A + B ∈ {M}_{mn}.
  2. SCM Scalar Closure, Matrices: If α ∈ ℂ and A ∈ {M}_{mn}, then αA ∈ {M}_{mn}.
  3. CM Commutativity, Matrices: If A, B ∈ {M}_{mn}, then A + B = B + A.
  4. AAM Additive Associativity, Matrices: If A, B, C ∈ {M}_{mn}, then A + (B + C) = (A + B) + C.
  5. ZM Zero Vector, Matrices: There is a matrix, O, called the zero matrix, such that A + O = A for all A ∈ {M}_{mn}.
  6. AIM Additive Inverses, Matrices: If A ∈ {M}_{mn}, then there exists a matrix −A ∈ {M}_{mn} so that A + (−A) = O.
  7. SMAM Scalar Multiplication Associativity, Matrices: If α, β ∈ ℂ and A ∈ {M}_{mn}, then α(βA) = (αβ)A.
  8. DMAM Distributivity across Matrix Addition, Matrices: If α ∈ ℂ and A, B ∈ {M}_{mn}, then α(A + B) = αA + αB.
  9. DSAM Distributivity across Scalar Addition, Matrices: If α, β ∈ ℂ and A ∈ {M}_{mn}, then (α + β)A = αA + βA.
  10. OM One, Matrices: If A ∈ {M}_{mn}, then 1A = A.

Proof   While some of these properties seem very obvious, they all require proof. However, the proofs are not very interesting, and border on tedious. We’ll prove one version of distributivity very carefully, and you can test your proof-building skills on some of the others. We’ll give our new notation for matrix entries a workout here. Compare the style of the proofs here with those given for vectors in Theorem VSPCV — while the objects here are more complicated, our notation makes the proofs cleaner.

To prove Property DSAM, (α + β)A = αA + βA, we need to establish the equality of two matrices (see Technique GS). Definition ME says we need to establish the equality of their entries, one-by-one. How do we do this, when we do not even know how many entries the two matrices might have? This is where Notation ME comes into play. Ready? Here we go.

For any i and j, 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [(α + β)A\right ]}_{ij} & = (α + β){\left [A\right ]}_{ij} & &\text{Definition MSM} \cr & = α{\left [A\right ]}_{ij} + β{\left [A\right ]}_{ij} & &\text{Distributivity in ℂ} \cr & ={ \left [αA\right ]}_{ij} +{ \left [βA\right ]}_{ij} & &\text{Definition MSM} \cr & ={ \left [αA + βA\right ]}_{ij} & &\text{Definition MA} }

There are several things to notice here. (1) Each equals sign is an equality of numbers. (2) The two ends of the equation, being true for any i and j, allow us to conclude the equality of the matrices by Definition ME. (3) There are several plus signs, and several instances of juxtaposition. Identify each one, and state exactly what operation is being represented by each.

For now, note the similarities between Theorem VSPM about matrices and Theorem VSPCV about vectors.

The zero matrix described in this theorem, O, is what you would expect — a matrix full of zeros.

Definition ZM
Zero Matrix
The m × n zero matrix is written as O = {O}_{m×n} and defined by {\left [O\right ]}_{ij} = 0, for all 1 ≤ i ≤ m, 1 ≤ j ≤ n.

(This definition contains Notation ZM.)

Subsection TSM: Transposes and Symmetric Matrices

We describe one more common operation we can perform on matrices. Informally, to transpose a matrix is to build a new matrix by swapping its rows and columns.

Definition TM
Transpose of a Matrix
Given an m × n matrix A, its transpose is the n × m matrix {A}^{t} given by

{\left [{A}^{t}\right ]}_{ij} ={ \left [A\right ]}_{ji},\quad 1 ≤ i ≤ n,\ 1 ≤ j ≤ m.

(This definition contains Notation TM.)

Example TM
Transpose of a 3 × 4 matrix
Suppose

D = \left [\array{ 3 &7& 2 &−3\cr −1 &4 & 2 & 8 \cr 0 &3&−2& 5 } \right ].

We could formulate the transpose, entry-by-entry, using the definition. But it is easier to just systematically rewrite rows as columns (or vice-versa). The form of the definition given will be more useful in proofs. So we have

{ D}^{t} = \left [\array{ 3 &−1& 0\cr 7 & 4 & 3 \cr 2 & 2 &−2\cr −3 & 8 & 5 } \right ]

It will sometimes happen that a matrix is equal to its transpose. In this case, we will call a matrix symmetric. These matrices occur naturally in certain situations, and also have some nice properties, so it is worth stating the definition carefully. Informally a matrix is symmetric if we can “flip” it about the main diagonal (upper-left corner, running down to the lower-right corner) and have it look unchanged.

Definition SYM
Symmetric Matrix
The matrix A is symmetric if A = {A}^{t}.

Example SYM
A symmetric 5 × 5 matrix
The matrix

E = \left [\array{ 2 & 3 &−9& 5 & 7\cr 3 & 1 & 6 &−2 &−3 \cr −9& 6 & 0 &−1& 9\cr 5 &−2 &−1 & 4 &−8 \cr 7 &−3& 9 &−8&−3 } \right ]

is symmetric.

You might have noticed that Definition SYM did not specify the size of the matrix A, as has been our custom. That’s because it wasn’t necessary. An alternative would have been to state the definition just for square matrices, but the next theorem shows that a symmetric matrix is automatically square, so that restriction would have been redundant. Before reading the next proof, we want to offer you some advice about how to become more proficient at constructing proofs. Perhaps you can apply this advice to the next theorem. Have a peek at Technique P now.

Theorem SMS
Symmetric Matrices are Square
Suppose that A is a symmetric matrix. Then A is square.

Proof   We start by specifying A’s size, without assuming it is square, since that is exactly what we are trying to prove, so we cannot also assume it. Suppose A is an m × n matrix. Because A is symmetric, we know by Definition SYM that A = {A}^{t}. So, in particular, Definition ME requires that A and {A}^{t} must have the same size. The size of {A}^{t} is n × m. Because A has m rows and {A}^{t} has n rows, we conclude that m = n, and hence A must be square by Definition SQM.

We finish this subsection with three easy theorems, but they illustrate the interplay of our three new operations, our new notation, and the techniques used to prove matrix equalities.

Theorem TMA
Transpose and Matrix Addition
Suppose that A and B are m × n matrices. Then {(A + B)}^{t} = {A}^{t} + {B}^{t}.

Proof   The statement to be proved is an equality of matrices, so we work entry-by-entry and use Definition ME. Think carefully about the objects involved here, and the many uses of the plus sign. For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [{(A + B)}^{t}\right ]}_{ij} & ={ \left [A + B\right ]}_{ji} & &\text{Definition TM} \cr & ={ \left [A\right ]}_{ji} +{ \left [B\right ]}_{ji} & &\text{Definition MA} \cr & ={ \left [{A}^{t}\right ]}_{ij} +{ \left [{B}^{t}\right ]}_{ij} & &\text{Definition TM} \cr & ={ \left [{A}^{t} + {B}^{t}\right ]}_{ij} & &\text{Definition MA} }

Since the matrices {(A + B)}^{t} and {A}^{t} + {B}^{t} agree at each entry, Definition ME tells us the two matrices are equal.

Theorem TMSM
Transpose and Matrix Scalar Multiplication
Suppose that α ∈ ℂ and A is an m × n matrix. Then {(αA)}^{t} = α{A}^{t}.

Proof   The statement to be proved is an equality of matrices, so we work entry-by-entry and use Definition ME. Notice that the desired equality is of n × m matrices, and think carefully about the objects involved here, plus the many uses of juxtaposition. For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [{(αA)}^{t}\right ]}_{ji} & ={ \left [αA\right ]}_{ij} & &\text{Definition TM} \cr & = α{\left [A\right ]}_{ij} & &\text{Definition MSM} \cr & = α{\left [{A}^{t}\right ]}_{ji} & &\text{Definition TM} \cr & ={ \left [α{A}^{t}\right ]}_{ji} & &\text{Definition MSM} }

Since the matrices {(αA)}^{t} and α{A}^{t} agree at each entry, Definition ME tells us the two matrices are equal.

Theorem TT
Transpose of a Transpose
Suppose that A is an m × n matrix. Then {\left ({A}^{t}\right )}^{t} = A.

Proof   We again want to prove an equality of matrices, so we work entry-by-entry and use Definition ME. For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [{\left ({A}^{t}\right )}^{t}\right ]}_{ij} & ={ \left [{A}^{t}\right ]}_{ji} & &\text{Definition TM} \cr & ={ \left [A\right ]}_{ij} & &\text{Definition TM} }

Since the matrices {\left ({A}^{t}\right )}^{t} and A agree at each entry, Definition ME tells us the two matrices are equal.

It is usually straightforward to coax the transpose of a matrix out of a computational device.  See: Computation TM.MMA, Computation TM.TI86, Computation TM.SAGE.

Subsection MCC: Matrices and Complex Conjugation

As we did with vectors (Definition CCCV), we can define what it means to take the conjugate of a matrix.

Definition CCM
Complex Conjugate of a Matrix
Suppose A is an m × n matrix. Then the conjugate of A, written \overline{A}, is an m × n matrix defined by

{ \left [\overline{A}\right ]}_{ij} = \overline{{\left [A\right ]}_{ij}}

(This definition contains Notation CCM.)

Example CCM
Complex conjugate of a matrix
If

A = \left [\array{ 2 − i & 3 &5 + 4i\cr −3 + 6i &2 − 3i & 0 } \right ]

then

\overline{A} = \left [\array{ 2 + i & 3 &5 − 4i\cr −3 − 6i &2 + 3i & 0 } \right ]

The interplay between the conjugate of a matrix and the two operations on matrices is what you might expect.

Theorem CRMA
Conjugation Respects Matrix Addition
Suppose that A and B are m × n matrices. Then \overline{A + B} = \overline{A} + \overline{B}.

Proof   For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [\overline{A + B}\right ]}_{ij} & = \overline{{\left [A + B\right ]}_{ij}} & &\text{Definition CCM} \cr & = \overline{{\left [A\right ]}_{ij} +{ \left [B\right ]}_{ij}} & &\text{Definition MA} \cr & = \overline{{\left [A\right ]}_{ij}} + \overline{{\left [B\right ]}_{ij}} & &\text{Theorem CCRA} \cr & ={ \left [\overline{A}\right ]}_{ij} +{ \left [\overline{B}\right ]}_{ij} & &\text{Definition CCM} \cr & ={ \left [\overline{A} + \overline{B}\right ]}_{ij} & &\text{Definition MA} }

Since the matrices \overline{A + B} and \overline{A} + \overline{B} are equal in each entry, Definition ME says that \overline{A + B} = \overline{A} + \overline{B}.

Theorem CRMSM
Conjugation Respects Matrix Scalar Multiplication
Suppose that α ∈ ℂ and A is an m × n matrix. Then \overline{αA} = \overline{α}\overline{A}.

Proof   For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [\overline{αA}\right ]}_{ij} & = \overline{{\left [αA\right ]}_{ij}} & &\text{Definition CCM} \cr & = \overline{α{\left [A\right ]}_{ij}} & &\text{Definition MSM} \cr & = \overline{α}\overline{{\left [A\right ]}_{ij}} & &\text{Theorem CCRM} \cr & = \overline{α}{\left [\overline{A}\right ]}_{ij} & &\text{Definition CCM} \cr & ={ \left [\overline{α}\overline{A}\right ]}_{ij} & &\text{Definition MSM} }

Since the matrices \overline{αA} and \overline{α}\overline{A} are equal in each entry, Definition ME says that \overline{αA} = \overline{α}\overline{A}.

Theorem CCM
Conjugate of the Conjugate of a Matrix
Suppose that A is an m × n matrix. Then \overline{\left (\overline{A}\right )} = A.

Proof   For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [\overline{\left (\overline{A}\right )}\right ]}_{ij} & = \overline{{\left [\overline{A}\right ]}_{ij}} & &\text{Definition CCM} \cr & = \overline{\overline{{\left [A\right ]}_{ij}}} & &\text{Definition CCM} \cr & ={ \left [A\right ]}_{ij} & &\text{Theorem CCT} }

Since the matrices \overline{\left (\overline{A}\right )} and A are equal in each entry, Definition ME says that \overline{\left (\overline{A}\right )} = A.

Finally, we will need the following result about matrix conjugation and transposes later.

Theorem MCT
Matrix Conjugation and Transposes
Suppose that A is an m × n matrix. Then \overline{\left ({A}^{t}\right )} ={ \left (\overline{A}\right )}^{t}.

Proof   For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [\overline{\left ({A}^{t}\right )}\right ]}_{ji} & = \overline{{\left [{A}^{t}\right ]}_{ji}} & &\text{Definition CCM} \cr & = \overline{{\left [A\right ]}_{ij}} & &\text{Definition TM} \cr & ={ \left [\overline{A}\right ]}_{ij} & &\text{Definition CCM} \cr & ={ \left [{\left (\overline{A}\right )}^{t}\right ]}_{ji} & &\text{Definition TM} }

Since the matrices \overline{\left ({A}^{t}\right )} and {\left (\overline{A}\right )}^{t} are equal in each entry, Definition ME says that \overline{\left ({A}^{t}\right )} ={ \left (\overline{A}\right )}^{t}.

Subsection AM: Adjoint of a Matrix

The combination of transposing and conjugating a matrix will be important in subsequent sections, such as Subsection MINM.UM and Section OD. We make a key definition here and prove some basic results in the same spirit as those above.

Definition A
Adjoint
If A is a matrix, then its adjoint is {A}^{∗} ={ \left (\overline{A}\right )}^{t}.

(This definition contains Notation A.)

You will see the adjoint written elsewhere variously as {A}^{H}, {A}^{∗} or {A}^{†}. Notice that Theorem MCT says it does not really matter if we conjugate and then transpose, or transpose and then conjugate.

Theorem AMA
Adjoint and Matrix Addition
Suppose A and B are matrices of the same size. Then {\left (A + B\right )}^{∗} = {A}^{∗} + {B}^{∗}.

Proof  

\eqalignno{ {\left (A + B\right )}^{∗} & ={ \left (\overline{A + B}\right )}^{t} & &\text{Definition A} \cr & ={ \left (\overline{A} + \overline{B}\right )}^{t} & &\text{Theorem CRMA} \cr & ={ \left (\overline{A}\right )}^{t} +{ \left (\overline{B}\right )}^{t} & &\text{Theorem TMA} \cr & = {A}^{∗} + {B}^{∗} & &\text{Definition A} }

Theorem AMSM
Adjoint and Matrix Scalar Multiplication
Suppose α ∈ ℂ is a scalar and A is a matrix. Then {\left (αA\right )}^{∗} = \overline{α}{A}^{∗}.

Proof  

\eqalignno{ {\left (αA\right )}^{∗} & ={ \left (\overline{αA}\right )}^{t} & &\text{Definition A} \cr & ={ \left (\overline{α}\overline{A}\right )}^{t} & &\text{Theorem CRMSM} \cr & = \overline{α}{\left (\overline{A}\right )}^{t} & &\text{Theorem TMSM} \cr & = \overline{α}{A}^{∗} & &\text{Definition A} }

Theorem AA
Adjoint of an Adjoint
Suppose that A is a matrix. Then {\left ({A}^{∗}\right )}^{∗} = A.

Proof  

\eqalignno{ {\left ({A}^{∗}\right )}^{∗} & ={ \left (\overline{\left ({A}^{∗}\right )}\right )}^{t} & &\text{Definition A} \cr & = \overline{\left ({\left ({A}^{∗}\right )}^{t}\right )} & &\text{Theorem MCT} \cr & = \overline{\left ({\left ({\left (\overline{A}\right )}^{t}\right )}^{t}\right )} & &\text{Definition A} \cr & = \overline{\left (\overline{A}\right )} & &\text{Theorem TT} \cr & = A & &\text{Theorem CCM} }

Take note of how the theorems in this section, while simple, build on earlier theorems and definitions and never descend to the level of entry-by-entry proofs based on Definition ME. In other words, the equal signs that appear in the previous proofs are equalities of matrices, not scalars (which is the opposite of a proof like that of Theorem TMA).

Subsection READ: Reading Questions

  1. Perform the following matrix computation.
    (6)\left [\array{ 2&−2& 8 &1\cr 4& 5 &−1 &3 \cr 7&−3& 0 &2} \right ]+(−2)\left [\array{ 2& 7 &1&2\cr 3&−1 &0 &5 \cr 1& 7 &3&3} \right ]
  2. Theorem VSPM reminds you of what previous theorem? How strong is the similarity?
  3. Compute the transpose of the matrix below.
    \left [\array{ 6 & 8 &4\cr −2 & 1 &0 \cr 9 &−5&6} \right ]

Subsection EXC: Exercises

C10 Let A = \left [\array{ 1&4&−3\cr 6&3 & 0 } \right ], B = \left [\array{ 3 & 2 &1\cr −2 &−6 &5 } \right ], and C = \left [\array{ 2 &4\cr 4 &0 \cr −2&2 } \right ]. Let α = 4 and β = 1∕2. Perform the following calculations:

  1. A + B
  2. A + C
  3. {B}^{t} + C
  4. A + {B}^{t}
  5. βC
  6. 4A − 3B
  7. {A}^{t} + αC
  8. A + B − {C}^{t}
  9. 4A + 2B − 5{C}^{t}

 
Contributed by Chris Black Solution [583]

C11 Solve the given matrix equation for x, or explain why no solution exists:

2\left [\array{ 1&2&3\cr 0&4 &2} \right ]−3\left [\array{ 1&1&2\cr 0&1 &x } \right ] = \left [\array{ −1&1& 0\cr 0 &5 &−2 } \right ]

 
Contributed by Chris Black Solution [584]

C12 Solve the given matrix equation for α, or explain why no solution exists:

α\left [\array{ 1&3& 4\cr 2&1 &−1 } \right ]+\left [\array{ 4&3&−6\cr 0&1 & 1 } \right ] = \left [\array{ 7&12& 6\cr 6& 4 &−2 } \right ]

 
Contributed by Chris Black Solution [584]

C13 Solve the given matrix equation for α, or explain why no solution exists:

α\left [\array{ 3&1\cr 2&0 \cr 1&4 } \right ]−\left [\array{ 4&1\cr 3&2 \cr 0&1 } \right ] = \left [\array{ 2& 1\cr 1&−2 \cr 2& 6} \right ]

 
Contributed by Chris Black Solution [585]

C14 Find α and β that solve the following equation:

α\left [\array{ 1&2\cr 4&1 } \right ]+β\left [\array{ 2&1\cr 3&1 } \right ] = \left [\array{ −1&4\cr 6 &1 } \right ]

 
Contributed by Chris Black Solution [586]

In Chapter V we defined the operations of vector addition and vector scalar multiplication in Definition CVA and Definition CVSM. These two operations formed the underpinnings of the remainder of the chapter. We have now defined similar operations for matrices in Definition MA and Definition MSM. You will have noticed the resulting similarities between Theorem VSPCV and Theorem VSPM.

In Exercises M20–M25, you will be asked to extend these similarities to other fundamental definitions and concepts we first saw in Chapter V. This sequence of problems was suggested by Martin Jackson.

M20 Suppose S = \left \{{B}_{1},\ {B}_{2},\ {B}_{3},\ …,\ {B}_{p}\right \} is a set of matrices from {M}_{mn}. Formulate appropriate definitions for the following terms and give an example of the use of each.

  1. A linear combination of elements of S.
  2. A relation of linear dependence on S, both trivial and non-trivial.
  3. S is a linearly independent set.
  4. \left \langle S\right \rangle .

 
Contributed by Robert Beezer

M21 Show that the set S is linearly independent in {M}_{2,2}.

S = \left \{\left [\array{ 1&0\cr 0&0 } \right ],\ \left [\array{ 0&1\cr 0&0 } \right ],\ \left [\array{ 0&0\cr 1&0 } \right ],\ \left [\array{ 0&0\cr 0&1 } \right ]\right \}

 
Contributed by Robert Beezer Solution [587]

M22 Determine if the set

S = \left \{\left [\array{ −2&3& 4\cr −1 &3 &−2 } \right ],\ \left [\array{ 4&−2&2\cr 0&−1 &1 } \right ],\ \left [\array{ −1&−2&−2\cr 2 & 2 & 2 } \right ],\ \left [\array{ −1&1& 0\cr −1 &0 &−2 } \right ],\ \left [\array{ −1& 2 &−2\cr 0 &−1 &−2 } \right ]\right \}

is linearly independent in {M}_{2,3}.

 
Contributed by Robert Beezer Solution [588]

M23 Determine if the matrix A is in the span of S. In other words, is A ∈\left \langle S\right \rangle ? If so, write A as a linear combination of the elements of S.

\eqalignno{ A & = \left [\array{ −13&24& 2\cr −8 &−2 &−20 } \right ] \cr S & = \left \{\left [\array{ −2&3& 4\cr −1 &3 &−2 } \right ],\ \left [\array{ 4&−2&2\cr 0&−1 &1 } \right ],\ \left [\array{ −1&−2&−2\cr 2 & 2 & 2 } \right ],\ \left [\array{ −1&1& 0\cr −1 &0 &−2 } \right ],\ \left [\array{ −1& 2 &−2\cr 0 &−1 &−2 } \right ]\right \} }

 
Contributed by Robert Beezer Solution [590]

M24 Suppose Y is the set of all 3 × 3 symmetric matrices (Definition SYM). Find a set T so that T is linearly independent and \left \langle T\right \rangle = Y .  
Contributed by Robert Beezer Solution [591]

M25 Define a subset of {M}_{3,3} by

{U}_{33} = \left \{\left .A ∈ {M}_{3,3}\right \vert {\left [A\right ]}_{ij} = 0\text{ whenever }i > j\right \}

Find a set R so that R is linearly independent and \left \langle R\right \rangle = {U}_{33}.  
Contributed by Robert Beezer

T13 Prove Property CM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.  
Contributed by Robert Beezer Solution [592]

T14 Prove Property AAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.  
Contributed by Robert Beezer

T17 Prove Property SMAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.  
Contributed by Robert Beezer

T18 Prove Property DMAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.  
Contributed by Robert Beezer

A matrix A is skew-symmetric if {A}^{t} = −A. Exercises T30–T37 employ this definition.

T30 Prove that a skew-symmetric matrix is square. (Hint: study the proof of Theorem SMS.)  
Contributed by Robert Beezer

T31 Prove that a skew-symmetric matrix must have zeros for its diagonal elements. In other words, if A is skew-symmetric of size n, then {\left [A\right ]}_{ii} = 0 for 1 ≤ i ≤ n. (Hint: carefully construct an example of a 3 × 3 skew-symmetric matrix before attempting a proof.)  
Contributed by Manley Perkel

T32 Prove that a matrix A is both skew-symmetric and symmetric if and only if A is the zero matrix. (Hint: one half of this proof is very easy, the other half takes a little more work.)  
Contributed by Manley Perkel

T33 Suppose A and B are both skew-symmetric matrices of the same size and α, β ∈ ℂ. Prove that αA + βB is a skew-symmetric matrix.  
Contributed by Manley Perkel

T34 Suppose A is a square matrix. Prove that A + {A}^{t} is a symmetric matrix.  
Contributed by Manley Perkel

T35 Suppose A is a square matrix. Prove that A − {A}^{t} is a skew-symmetric matrix.  
Contributed by Manley Perkel

T36 Suppose A is a square matrix. Prove that there is a symmetric matrix B and a skew-symmetric matrix C such that A = B + C. In other words, any square matrix can be decomposed into a symmetric matrix and a skew-symmetric matrix (Technique DC). (Hint: consider building a proof on Exercise MO.T34 and Exercise MO.T35.)  
Contributed by Manley Perkel

T37 Prove that the decomposition in Exercise MO.T36 is unique (see Technique U). (Hint: a proof can turn on Exercise MO.T31.)  
Contributed by Manley Perkel

Subsection SOL: Solutions

C10 Contributed by Chris Black Statement [575]

  1. A+B = \left [\array{ 4& 6 &−2\cr 4&−3 & 5 } \right ]
  2. A + C is undefined; A and C are not the same size.
  3. {B}^{t}+C = \left [\array{ 5 & 2\cr 6 &−6 \cr −1& 7} \right ]
  4. A + {B}^{t} is undefined; A and {B}^{t} are not the same size.
  5. βC = \left [\array{ 1 &2\cr 2 &0 \cr −1&1} \right ]
  6. 4A−3B = \left [\array{ −5&10&−15\cr 30 &30 &−15 } \right ]
  7. {A}^{t}+αC = \left [\array{ 9 &22\cr 20 & 3 \cr −11& 8} \right ]
  8. A+B−{C}^{t} = \left [\array{ 2& 2 &0\cr 0&−3 &3 } \right ]
  9. 4A+2B−5{C}^{t} = \left [\array{ 0&0&0\cr 0&0 &0 } \right ]

C11 Contributed by Chris Black Statement [576]
The given equation

\eqalignno{ \left [\array{ −1&1& 0\cr 0 &5 &−2 } \right ] & = 2\left [\array{ 1&2&3\cr 0&4 &2} \right ] − 3\left [\array{ 1&1&2\cr 0&1 &x } \right ] = \left [\array{ −1&1& 0\cr 0 &5 &4 − 3x } \right ] & & }

is valid only if 4 − 3x = −2. Thus, the only solution is x = 2.

C12 Contributed by Chris Black Statement [576]
The given equation

\eqalignno{ \left [\array{ 7&12& 6\cr 6& 4 &−2 } \right ] & = α\left [\array{ 1&3& 4\cr 2&1 &−1 } \right ] + \left [\array{ 4&3&−6\cr 0&1 & 1 } \right ] & & \cr & = \left [\array{ α &3α&4α\cr 2α & α &−α } \right ] + \left [\array{ 4&3&−6\cr 0&1 & 1 } \right ] & & \cr & = \left [\array{ 4 + α&3 + 3α&−6 + 4α\cr 2α & 1 + α & 1 − α } \right ] & & }

leads to the 6 equations in α:

\eqalignno{ 4 + α & = 7 & & \cr 3 + 3α & = 12 & & \cr − 6 + 4α & = 6 & & \cr 2α & = 6 & & \cr 1 + α & = 4 & & \cr 1 − α & = −2. & & }

The only value that solves all 6 equations is α = 3, which is the solution to the original matrix equation.

C13 Contributed by Chris Black Statement [576]
The given equation

\eqalignno{ \left [\array{ 2& 1\cr 1&−2 \cr 2& 6} \right ] & = α\left [\array{ 3&1\cr 2&0 \cr 1&4 } \right ] + \left [\array{ 4&1\cr 3&2 \cr 0&1 } \right ] = \left [\array{ 3α − 4& α − 1\cr 2α − 3 & −2 \cr α &4α − 1 } \right ] & & }

gives a system of six equations in α:

\eqalignno{ 3α − 4 & = 2 & & \cr α − 1 & = 1 & & \cr 2α − 3 & = 1 & & \cr − 2 & = −2 & & \cr α & = 2 & & \cr 4α − 1 & = 6. & & }

Solving each of these equations, we see that the first three and the fifth all lead to the solution α = 2, the fourth equation is true no matter what the value of α, but the last equation is only solved by α = 7∕4. Thus, the system has no solution, and the original matrix equation also has no solution.

C14 Contributed by Chris Black Statement [577]
The given equation

\eqalignno{ \left [\array{ −1&4\cr 6 &1 } \right ] & = α\left [\array{ 1&2\cr 4&1 } \right ] + β\left [\array{ 2&1\cr 3&1 } \right ] = \left [\array{ α + 2β &2α + β\cr 4α + 3β & α + β } \right ] & & }

gives a system of four equations in two variables

\eqalignno{ α + 2β & = −1 & & \cr 2α + β & = 4 & & \cr 4α + 3β & = 6 & & \cr α + β & = 1. & & }

Solving this linear system by row-reducing the augmented matrix shows that α = 3, β = −2 is the only solution.

M21 Contributed by Chris Black Statement [578]
Suppose there exist constants α, β, γ, and δ so that

\eqalignno{ α\left [\array{ 1&0\cr 0&0 } \right ] + β\left [\array{ 0&1\cr 0&0 } \right ] + γ\left [\array{ 0&0\cr 1&0 } \right ] + δ\left [\array{ 0&0\cr 0&1 } \right ] & = \left [\array{ 0&0\cr 0&0 } \right ]. & & }

Then,

\eqalignno{ \left [\array{ α&0\cr 0 &0 } \right ] + \left [\array{ 0&β\cr 0&0 } \right ] + \left [\array{ 0&0\cr γ &0 } \right ] + \left [\array{ 0&0\cr 0&δ } \right ] & = \left [\array{ 0&0\cr 0&0 } \right ] & & }

so that \left [\array{ α&β\cr γ &δ } \right ] = \left [\array{ 0&0\cr 0&0 } \right ]. The only solution is then α = 0, β = 0, γ = 0, and δ = 0, so that the set S is a linearly independent set of matrices.

M22 Contributed by Chris Black Statement [579]
Suppose that there exist constants {a}_{1}, {a}_{2}, {a}_{3}, {a}_{4}, and {a}_{5} so that

\eqalignno{ {a}_{1}\left [\array{ −2&3& 4\cr −1 &3 &−2 } \right ] + {a}_{2}\left [\array{ 4&−2&2\cr 0&−1 &1 } \right ] + {a}_{3}\left [\array{ −1&−2&−2\cr 2 & 2 & 2 } \right ] + {a}_{4}\left [\array{ −1&1& 0\cr −1 &0 &−2 } \right ] + {a}_{5}\left [\array{ −1& 2 &−2\cr 0 &−1 &−2 } \right ]& = \left [\array{ 0&0&0\cr 0&0 &0} \right ].&& }

Then, we have the matrix equality (Definition ME)

\eqalignno{ \left [\array{ −2{a}_{1} + 4{a}_{2} − {a}_{3} − {a}_{4} − {a}_{5}&3{a}_{1} − 2{a}_{2} − 2{a}_{3} + {a}_{4} + 2{a}_{5}& 4{a}_{1} + 2{a}_{2} − 2{a}_{3} − 2{a}_{5} \cr −{a}_{1} + 2{a}_{3} − {a}_{4} & 3{a}_{1} − {a}_{2} + 2{a}_{3} − {a}_{5} &−2{a}_{1} + {a}_{2} + 2{a}_{3} − 2{a}_{4} − 2{a}_{5} } \right ]&= \left [\array{ 0&0&0\cr 0&0 &0} \right ],&& }

which yields the linear system of equations

\eqalignno{ − 2{a}_{1} + 4{a}_{2} − {a}_{3} − {a}_{4} − {a}_{5} & = 0 & & \cr 3{a}_{1} − 2{a}_{2} − 2{a}_{3} + {a}_{4} + 2{a}_{5} & = 0 & & \cr 4{a}_{1} + 2{a}_{2} − 2{a}_{3} − 2{a}_{5} & = 0 & & \cr − {a}_{1} + 2{a}_{3} − {a}_{4} & = 0 & & \cr 3{a}_{1} − {a}_{2} + 2{a}_{3} − {a}_{5} & = 0 & & \cr − 2{a}_{1} + {a}_{2} + 2{a}_{3} − 2{a}_{4} − 2{a}_{5} & = 0. & & }

By row-reducing the associated 6 × 5 homogeneous system, we see that the only solution is {a}_{1} = {a}_{2} = {a}_{3} = {a}_{4} = {a}_{5} = 0, so these matrices are a linearly independent subset of {M}_{2,3}.

M23 Contributed by Chris Black Statement [579]
The matrix A is in the span of S, since

\eqalignno{ \left [\array{ −13&24& 2\cr −8 &−2 &−20 } \right ]& = 2\left [\array{ −2&3& 4\cr −1 &3 &−2 } \right ] − 2\left [\array{ 4&−2&2\cr 0&−1 &1 } \right ] − 3\left [\array{ −1&−2&−2\cr 2 & 2 & 2 } \right ] + 4\left [\array{ −1& 2 &−2\cr 0 &−1 &−2 } \right ]&& }

Note that if we were to write a complete linear combination of all of the matrices in S, then the fourth matrix would have a zero coefficient.

M24 Contributed by Chris Black Statement [580]
Since any symmetric matrix is of the form

\eqalignno{ \left [\array{ a&b&c\cr b&d &e \cr c&e&f } \right ]& = \left [\array{ a&0&0\cr 0&0 &0 \cr 0&0&0 } \right ] + \left [\array{ 0&b&0\cr b&0 &0 \cr 0&0&0} \right ] + \left [\array{ 0&0&c\cr 0&0 &0 \cr c&0&0} \right ] + \left [\array{ 0&0&0\cr 0&d &0 \cr 0&0&0 } \right ] + \left [\array{ 0&0&0\cr 0&0 &e \cr 0&e&0} \right ] + \left [\array{ 0&0&0\cr 0&0 &0 \cr 0&0&f} \right ],&& }

every symmetric matrix is a linear combination of the six linearly independent matrices in the set T below, so that \left \langle T\right \rangle = Y :

T = \left \{\left [\array{ 1&0&0\cr 0&0 &0 \cr 0&0&0} \right ],\left [\array{ 0&1&0\cr 1&0 &0 \cr 0&0&0} \right ],\left [\array{ 0&0&1\cr 0&0 &0 \cr 1&0&0} \right ],\left [\array{ 0&0&0\cr 0&1 &0 \cr 0&0&0} \right ],\left [\array{ 0&0&0\cr 0&0 &1 \cr 0&1&0} \right ],\left [\array{ 0&0&0\cr 0&0 &0 \cr 0&0&1} \right ]\right \}

(Something to think about: How do we know that these matrices are linearly independent?)

T13 Contributed by Robert Beezer Statement [580]
For all A, B ∈ {M}_{mn} and for all 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [A + B\right ]}_{ij} & ={ \left [A\right ]}_{ij} +{ \left [B\right ]}_{ij} & &\text{Definition MA} \cr & ={ \left [B\right ]}_{ij} +{ \left [A\right ]}_{ij} & &\text{Commutativity in ℂ} \cr & ={ \left [B + A\right ]}_{ij} & &\text{Definition MA} }

Since the matrices A + B and B + A agree at each entry, Definition ME tells us the two matrices are equal.