Section MO Matrix Operations
In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.
Subsection MEASM Matrix Equality, Addition, Scalar Multiplication
Definition VSM. Vector Space of \(m\times n\) Matrices.
The vector space \(M_{mn}\) is the set of all \(m\times n\) matrices with entries from the set of complex numbers.
Just as we made, and used, a careful definition of equality for column vectors, so too, we have precise definitions for matrices.
Definition ME. Matrix Equality.
The \(m\times n\) matrices \(A\) and \(B\) are equal, written \(A=B\) provided \(\matrixentry{A}{ij}=\matrixentry{B}{ij}\) for all \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{.}\)
So equality of matrices translates to the equality of complex numbers, on an entry-by-entry basis. Notice that we now have yet another definition that uses the symbol “=” for shorthand. Whenever a theorem has a conclusion saying two matrices are equal (think about your objects), we will consider appealing to this definition as a way of formulating the top-level structure of the proof.
We will now define two operations on the set \(M_{mn}\text{.}\) Again, we will overload a symbol (‘+’) and a convention (juxtaposition for scalar multiplication).
Definition MA. Matrix Addition.
Given the \(m\times n\) matrices \(A\) and \(B\text{,}\) define the sum of \(A\) and \(B\) as an \(m\times n\) matrix, written \(A+B\text{,}\) by \(\matrixentry{A+B}{ij}=\matrixentry{A}{ij}+\matrixentry{B}{ij}\text{,}\) for \(1\leq i\leq m,\,1\leq j\leq n\text{.}\)
So matrix addition takes two matrices of the same size and combines them (in a natural way!) to create a new matrix of the same size. Perhaps this is the “obvious” thing to do, but it does not relieve us from the obligation to state it carefully.
Example MA. Addition of two matrices in \(M_{23}\).
If
then
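With entries invented for illustration, such a computation looks like
\begin{equation*}
\begin{bmatrix} 2 & -3 & 4 \\ 1 & 0 & -7 \end{bmatrix} + \begin{bmatrix} 6 & 2 & -4 \\ 3 & 5 & 2 \end{bmatrix} = \begin{bmatrix} 8 & -1 & 0 \\ 4 & 5 & -5 \end{bmatrix}\text{.}
\end{equation*}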
Our second operation takes two objects of different types, specifically a number and a matrix, and combines them to create another matrix. As with vectors, in this context we call a number a scalar in order to emphasize that it is not a matrix.
Definition MSM. Matrix Scalar Multiplication.
Given the \(m\times n\) matrix \(A\) and the scalar \(\alpha\in\complexes\text{,}\) the scalar multiple of \(A\) is the \(m\times n\) matrix, written \(\alpha A\text{,}\) and defined by \(\matrixentry{\alpha A}{ij}=\alpha\matrixentry{A}{ij}\text{,}\) for \(1\leq i\leq m,\,1\leq j\leq n\text{.}\)
Notice again that we have yet another kind of multiplication, and it is again written by putting two symbols side-by-side. Computationally, matrix scalar multiplication is very easy.
Example MSM. Scalar multiplication in \(M_{32}\).
For the scalar \(\alpha=7\) and the matrix \(A\text{,}\)
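With a \(3\times 2\) matrix \(A\) invented for illustration, the computation looks like
\begin{equation*}
7\begin{bmatrix} 2 & 8 \\ -3 & 5 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 14 & 56 \\ -21 & 35 \\ 0 & 7 \end{bmatrix}\text{.}
\end{equation*}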
Sage MS. Matrix Spaces.
Sage defines our set \(M_{mn}\) as a “matrix space” with the command MatrixSpace(R, m, n), where R is a number system and m and n are the number of rows and number of columns, respectively. This object does not have much functionality defined in Sage. Its main purposes are to provide a parent for matrices, and to provide another way to create matrices. The two matrix operations just defined (addition and scalar multiplication) are implemented as you would expect. In the example below, we create two matrices in \(M_{23}\) from just a list of 6 entries, by coercing the list into a matrix by using the relevant matrix space as if it were a function. Then we can perform the basic operations of matrix addition (Definition MA) and matrix scalar multiplication (Definition MSM).
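A minimal sketch of these constructions, with entries invented for illustration:

    MS = MatrixSpace(QQ, 2, 3)
    A = MS([1, 2, 1, 4, 5, 4])    # coerce a list of 6 entries into a 2 x 3 matrix
    B = MS([1, -1, 0, 2, 3, -3])
    A + B                         # matrix addition, Definition MA
    5*A                           # matrix scalar multiplication, Definition MSM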
Coercion can make some interesting conveniences possible. Notice how the scalar 37 in the following expression is coerced to \(37\) times an identity matrix of the proper size.
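A sketch of this coercion, again with invented entries:

    MS = MatrixSpace(QQ, 2, 2)
    A = MS([1, 2, 3, 4])
    A + 37                        # the scalar 37 becomes 37*identity_matrix(2)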
This coercion only applies to sums with square matrices. You might try this again, but with a rectangular matrix, just to see what the error message says.
Subsection VSP Vector Space Properties
With definitions of matrix addition and scalar multiplication we can now state, and prove, several properties of each operation, and some properties that involve their interplay. We now collect ten of them here for later reference.
Theorem VSPM. Vector Space Properties of Matrices.
Suppose that \(M_{mn}\) is the set of all \(m\times n\) matrices (Definition VSM) with addition and scalar multiplication as defined in Definition MA and Definition MSM. Then
- ACM Additive Closure, Matrices
If \(A,\,B\in M_{mn}\text{,}\) then \(A+B\in M_{mn}\text{.}\)
- SCM Scalar Closure, Matrices
If \(\alpha\in\complexes\) and \(A\in M_{mn}\text{,}\) then \(\alpha A\in M_{mn}\text{.}\)
- CM Commutativity, Matrices
If \(A,\,B\in M_{mn}\text{,}\) then \(A+B=B+A\text{.}\)
- AAM Additive Associativity, Matrices
If \(A,\,B,\,C\in M_{mn}\text{,}\) then \(A+\left(B+C\right)=\left(A+B\right)+C\text{.}\)
- ZM Zero Matrix, Matrices
There is a matrix, \(\zeromatrix\text{,}\) called the zero matrix, such that \(A+\zeromatrix=A\) for all \(A\in M_{mn}\text{.}\)
- AIM Additive Inverses, Matrices
If \(A\in M_{mn}\text{,}\) then there exists a matrix \(-A\in M_{mn}\) so that \(A+(-A)=\zeromatrix\text{.}\)
- SMAM Scalar Multiplication Associativity, Matrices
If \(\alpha,\,\beta\in\complexes\) and \(A\in M_{mn}\text{,}\) then \(\alpha(\beta A)=(\alpha\beta)A\text{.}\)
- DMAM Distributivity across Matrix Addition, Matrices
If \(\alpha\in\complexes\) and \(A,\,B\in M_{mn}\text{,}\) then \(\alpha(A+B)=\alpha A+\alpha B\text{.}\)
- DSAM Distributivity across Scalar Addition, Matrices
If \(\alpha,\,\beta\in\complexes\) and \(A\in M_{mn}\text{,}\) then \((\alpha+\beta)A=\alpha A+\beta A\text{.}\)
- OM One, Matrices
If \(A\in M_{mn}\text{,}\) then \(1A=A\text{.}\)
Proof.
While some of these properties seem very obvious, they all require proof. However, the proofs are not very interesting, and border on tedious. We will prove one version of distributivity very carefully, and you can test your proof-building skills on some of the others. We will give our new notation for matrix entries a workout here. Compare the style of the proofs here with those given for vectors in Theorem VSPCV — while the objects here are more complicated, our notation makes the proofs cleaner.
To prove Property DSAM, \((\alpha+\beta)A=\alpha A+\beta A\text{,}\) we need to establish the equality of two matrices (see Proof Technique GS). Definition ME says we need to establish the equality of their entries, one-by-one. How do we do this, when we do not even know how many entries the two matrices might have? This is where the notation for matrix entries, given in Definition M, comes into play. Ready? Here we go.
For any \(i\) and \(j\text{,}\) \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
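\begin{align*}
\matrixentry{(\alpha+\beta)A}{ij} &= (\alpha+\beta)\matrixentry{A}{ij} && \text{Definition MSM}\\
&= \alpha\matrixentry{A}{ij}+\beta\matrixentry{A}{ij} && \text{distributivity in } \complexes\\
&= \matrixentry{\alpha A}{ij}+\matrixentry{\beta A}{ij} && \text{Definition MSM}\\
&= \matrixentry{\alpha A+\beta A}{ij} && \text{Definition MA}
\end{align*}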
There are several things to notice here. (1) Each equals sign is an equality of scalars (numbers). (2) The two ends of the equation, being true for any \(i\) and \(j\text{,}\) allow us to conclude the equality of the matrices by Definition ME. (3) There are several plus signs, and several instances of juxtaposition. Identify each one, and state exactly what operation is being represented by each.
For now, note the similarities between Theorem VSPM about matrices and Theorem VSPCV about vectors.
The zero matrix described in this theorem, \(\zeromatrix\text{,}\) is what you would expect — a matrix full of zeros.
Definition ZM. Zero Matrix.
The \(m\times n\) zero matrix is written as \(\zeromatrix=\zeromatrix_{m\times n}\) and defined by \(\matrixentry{\zeromatrix}{ij}=0\text{,}\) for all \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{.}\)
Subsection TSM Transposes and Symmetric Matrices
We describe one more common operation we can perform on matrices. Informally, to transpose a matrix is to build a new matrix by swapping its rows and columns.
Definition TM. Transpose of a Matrix.
Given an \(m\times n\) matrix \(A\text{,}\) its transpose is the \(n\times m\) matrix \(\transpose{A}\) given by \(\matrixentry{\transpose{A}}{ij}=\matrixentry{A}{ji}\text{,}\) for \(1\leq i\leq n,\,1\leq j\leq m\text{.}\)
Example TM. Transpose of a \(3\times 4\) matrix.
We could formulate the transpose, entry-by-entry, using the definition. But it is easier to just systematically rewrite rows as columns (or vice-versa). The form of the definition given will be more useful in proofs. So for the \(3\times 4\) matrix \(D\) we have
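With a \(3\times 4\) matrix \(D\) invented for illustration, the computation looks like
\begin{equation*}
D=\begin{bmatrix} 3 & 7 & 2 & -3 \\ -1 & 4 & 2 & 8 \\ 0 & 3 & -2 & 5 \end{bmatrix}
\qquad
\transpose{D}=\begin{bmatrix} 3 & -1 & 0 \\ 7 & 4 & 3 \\ 2 & 2 & -2 \\ -3 & 8 & 5 \end{bmatrix}\text{.}
\end{equation*}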
It will sometimes happen that a matrix is equal to its transpose. In this case, we will call a matrix symmetric. These matrices occur naturally in certain situations, and also have some nice properties, so it is worth stating the definition carefully. Informally a matrix is symmetric if we can “flip” it about the main diagonal (upper-left corner, running down to the lower-right corner) and have it look unchanged.
Definition SYM. Symmetric Matrix.
The matrix \(A\) is symmetric if \(A=\transpose{A}\text{.}\)
Example SYM. A symmetric \(5\times 5\) matrix.
The matrix \(E\) below is symmetric.
You might have noticed that Definition SYM did not specify the size of the matrix \(A\text{,}\) as has been our custom. That is because it was not necessary. An alternative would have been to state the definition just for square matrices, but this is the substance of the next proof.
Before reading the next proof, we want to offer you some advice about how to become more proficient at constructing proofs. Perhaps you can apply this advice to the next theorem. Have a peek at Proof Technique P now.
Theorem SMS. Symmetric Matrices are Square.
Suppose that \(A\) is a symmetric matrix. Then \(A\) is square.
Proof.
We start by specifying \(A\)'s size, without assuming it is square, since we are trying to prove that, so we cannot also assume it. Suppose \(A\) is an \(m\times n\) matrix. Because \(A\) is symmetric, we know by Definition SYM that \(A=\transpose{A}\text{.}\) So, in particular, Definition ME requires that \(A\) and \(\transpose{A}\) must have the same size. The size of \(\transpose{A}\) is \(n\times m\text{.}\) Because \(A\) has \(m\) rows and \(\transpose{A}\) has \(n\) rows, we conclude that \(m=n\text{,}\) and hence \(A\) must be square by Definition SQM.
We finish this section with three easy theorems, but they illustrate the interplay of our three new operations, our new notation, and the techniques used to prove matrix equalities.
Theorem TMA. Transpose and Matrix Addition.
Suppose that \(A\) and \(B\) are \(m\times n\) matrices. Then \(\transpose{(A+B)}=\transpose{A}+\transpose{B}\text{.}\)
Proof.
The statement to be proved is an equality of matrices, so we work entry-by-entry and use Definition ME. Think carefully about the objects involved here, and the many uses of the plus sign. Realize too that while \(A\) and \(B\) are \(m\times n\) matrices, the conclusion is a statement about the equality of two \(n\times m\) matrices! So we begin with a preparation for Definition ME. For \(1\leq i\leq n\text{,}\) \(1\leq j\leq m\text{,}\)
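\begin{align*}
\matrixentry{\transpose{(A+B)}}{ij} &= \matrixentry{A+B}{ji} && \text{Definition TM}\\
&= \matrixentry{A}{ji}+\matrixentry{B}{ji} && \text{Definition MA}\\
&= \matrixentry{\transpose{A}}{ij}+\matrixentry{\transpose{B}}{ij} && \text{Definition TM}\\
&= \matrixentry{\transpose{A}+\transpose{B}}{ij} && \text{Definition MA}
\end{align*}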
Since the matrices \(\transpose{(A+B)}\) and \(\transpose{A}+\transpose{B}\) agree at each entry, Definition ME tells us the two matrices are equal.
Theorem TMSM. Transpose and Matrix Scalar Multiplication.
Suppose that \(\alpha\in\complexes\) and \(A\) is an \(m\times n\) matrix. Then \(\transpose{(\alpha A)}=\alpha\transpose{A}\text{.}\)
Proof.
The statement to be proved is an equality of matrices, so we work entry-by-entry and use Definition ME. Notice that the desired equality is of \(n\times m\) matrices, and think carefully about the objects involved here, plus the many uses of juxtaposition. For \(1\leq i\leq n\text{,}\) \(1\leq j\leq m\text{,}\)
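\begin{align*}
\matrixentry{\transpose{(\alpha A)}}{ij} &= \matrixentry{\alpha A}{ji} && \text{Definition TM}\\
&= \alpha\matrixentry{A}{ji} && \text{Definition MSM}\\
&= \alpha\matrixentry{\transpose{A}}{ij} && \text{Definition TM}\\
&= \matrixentry{\alpha\transpose{A}}{ij} && \text{Definition MSM}
\end{align*}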
Since the matrices \(\transpose{(\alpha A)}\) and \(\alpha\transpose{A}\) agree at each entry, Definition ME tells us the two matrices are equal.
Theorem TT. Transpose of a Transpose.
Suppose that \(A\) is an \(m\times n\) matrix. Then \(\transpose{\left(\transpose{A}\right)}=A\text{.}\)
Proof.
We again want to prove an equality of matrices, so we work entry-by-entry and use Definition ME. For \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
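\begin{align*}
\matrixentry{\transpose{\left(\transpose{A}\right)}}{ij} &= \matrixentry{\transpose{A}}{ji} && \text{Definition TM}\\
&= \matrixentry{A}{ij} && \text{Definition TM}
\end{align*}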
Since the matrices \(\transpose{\left(\transpose{A}\right)}\) and \(A\) agree at each entry, Definition ME tells us the two matrices are equal.
Subsection TM Triangular Matrices
An upper, or lower, triangular matrix is exactly what it sounds like it should be, but here are the two relevant definitions.
Definition UTM. Upper Triangular Matrix.
The square matrix \(A\) is upper triangular if \(\matrixentry{A}{ij}=0\) whenever \(i\gt j\text{.}\)
Definition LTM. Lower Triangular Matrix.
The square matrix \(A\) is lower triangular if \(\matrixentry{A}{ij}=0\) whenever \(i\lt j\text{.}\)
So an upper triangular matrix has all its entries “below” the diagonal equal to zero, and a lower triangular matrix has all its entries “above” the diagonal equal to zero.
Example ULTM. Upper and lower triangular matrices.
The matrix \(U\) is upper triangular, while \(L\) is lower triangular.
Since a triangular matrix is square, we can ask when it is nonsingular (Definition NM). It turns out we can tell just by looking.
Theorem NTM. Nonsingular Triangular Matrices.
Suppose that \(A\) is a triangular matrix. Then \(A\) is nonsingular if and only if every diagonal entry is nonzero.
Proof.
We give a proof for an upper triangular matrix, since it is marginally simpler, so suppose \(A\) is an upper triangular matrix of size \(n\text{.}\)
(⇐)
For each column, its entry on the diagonal is nonzero, so we can form its reciprocal (Property MICN) and use this as the multiple in a row operation of the second type (Definition RO), creating a row-equivalent matrix where the diagonal entry is now \(1\text{.}\) This entry, together with repeated row operations of the third type, yields a row-equivalent matrix where the column is a pivot column. So the matrix is row-equivalent to a square matrix where every column is a pivot column; in other words, the identity matrix (Definition IM). By Theorem NMRRI we see that \(A\) is nonsingular.
(⇒)
We will establish the contrapositive (Proof Technique CP): if there is a zero diagonal entry then the matrix is singular. Suppose that the entry in row \(k\) and column \(k\) is zero. Then the first \(k\) columns of \(A\) are each vectors with \(k-1\) arbitrary entries, followed by \(n-k+1\) zeros. These \(k\) column vectors form a linearly dependent set, which can be established with a nontrivial relation of linear dependence obtained by an argument entirely similar to the proof of Theorem MVSLD. The one change is that the relevant coefficient matrix would be a \((k-1)\times k\) matrix formed from the first \(k\) columns and first \(k-1\) rows of \(A\text{.}\) With linearly dependent columns, Theorem NMLIC says \(A\) is singular.
Example NSTM. Nonsingular and singular triangular matrices.
We can now see immediately with Theorem NTM that the matrix \(U\) of Example ULTM is singular, and the matrix \(L\) of Example ULTM is nonsingular. Indeed, these two matrices could be very helpful as you study the proof of Theorem NTM.
Subsection MCC Matrices and Complex Conjugation
As we did with vectors (Definition CCCV), we can define what it means to take the conjugate of a matrix.
Definition CCM. Complex Conjugate of a Matrix.
Suppose \(A\) is an \(m\times n\) matrix. Then the conjugate of \(A\text{,}\) written \(\conjugate{A}\) is an \(m\times n\) matrix defined by \(\matrixentry{\conjugate{A}}{ij}=\conjugate{\matrixentry{A}{ij}}\) for \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{.}\)
Example CCM. Complex conjugate of a matrix.
For the \(2\times 3\) matrix \(A\text{,}\) we compute the conjugate of \(A\text{,}\) \(\conjugate{A}\text{.}\)
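With entries invented for illustration, such a computation looks like
\begin{equation*}
A=\begin{bmatrix} 2-i & 3 & 5+4i \\ -3+6i & 2-3i & 0 \end{bmatrix}
\qquad
\conjugate{A}=\begin{bmatrix} 2+i & 3 & 5-4i \\ -3-6i & 2+3i & 0 \end{bmatrix}\text{.}
\end{equation*}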
The interplay between the conjugate of a matrix and the two operations on matrices is what you might expect.
Theorem CRMA. Conjugation Respects Matrix Addition.
Suppose that \(A\) and \(B\) are \(m\times n\) matrices. Then \(\conjugate{A+B}=\conjugate{A}+\conjugate{B}\text{.}\)
Proof.
For \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
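\begin{align*}
\matrixentry{\conjugate{A+B}}{ij} &= \conjugate{\matrixentry{A+B}{ij}} && \text{Definition CCM}\\
&= \conjugate{\matrixentry{A}{ij}+\matrixentry{B}{ij}} && \text{Definition MA}\\
&= \conjugate{\matrixentry{A}{ij}}+\conjugate{\matrixentry{B}{ij}} && \text{conjugation respects addition in } \complexes\\
&= \matrixentry{\conjugate{A}}{ij}+\matrixentry{\conjugate{B}}{ij} && \text{Definition CCM}\\
&= \matrixentry{\conjugate{A}+\conjugate{B}}{ij} && \text{Definition MA}
\end{align*}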
Since the matrices \(\conjugate{A+B}\) and \(\conjugate{A}+\conjugate{B}\) are equal in each entry, Definition ME says that \(\conjugate{A+B}=\conjugate{A}+\conjugate{B}\text{.}\)
Theorem CRMSM. Conjugation Respects Matrix Scalar Multiplication.
Suppose that \(\alpha\in\complexes\) and \(A\) is an \(m\times n\) matrix. Then \(\conjugate{\alpha A}=\conjugate{\alpha}\conjugate{A}\text{.}\)
Proof.
For \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
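\begin{align*}
\matrixentry{\conjugate{\alpha A}}{ij} &= \conjugate{\matrixentry{\alpha A}{ij}} && \text{Definition CCM}\\
&= \conjugate{\alpha\matrixentry{A}{ij}} && \text{Definition MSM}\\
&= \conjugate{\alpha}\,\conjugate{\matrixentry{A}{ij}} && \text{conjugation respects multiplication in } \complexes\\
&= \conjugate{\alpha}\matrixentry{\conjugate{A}}{ij} && \text{Definition CCM}\\
&= \matrixentry{\conjugate{\alpha}\conjugate{A}}{ij} && \text{Definition MSM}
\end{align*}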
Since the matrices \(\conjugate{\alpha A}\) and \(\conjugate{\alpha}\conjugate{A}\) are equal in each entry, Definition ME says that \(\conjugate{\alpha A}=\conjugate{\alpha}\conjugate{A}\text{.}\)
Theorem CCM. Conjugate of the Conjugate of a Matrix.
Suppose that \(A\) is an \(m\times n\) matrix. Then \(\conjugate{\left(\conjugate{A}\right)}=A\text{.}\)
Proof.
For \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
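\begin{align*}
\matrixentry{\conjugate{\left(\conjugate{A}\right)}}{ij} &= \conjugate{\matrixentry{\conjugate{A}}{ij}} && \text{Definition CCM}\\
&= \conjugate{\conjugate{\matrixentry{A}{ij}}} && \text{Definition CCM}\\
&= \matrixentry{A}{ij} && \text{conjugating a complex number twice returns the original number}
\end{align*}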
Since the matrices \(\conjugate{\left(\conjugate{A}\right)}\) and \(A\) are equal in each entry, Definition ME says that \(\conjugate{\left(\conjugate{A}\right)}=A\text{.}\)
Finally, we will need the following result about matrix conjugation and transposes later.
Theorem MCT. Matrix Conjugation and Transposes.
Suppose that \(A\) is an \(m\times n\) matrix. Then \(\conjugate{\left(\transpose{A}\right)}=\transpose{\left(\conjugate{A}\right)}\text{.}\)
Proof.
For \(1\leq i\leq n\text{,}\) \(1\leq j\leq m\text{,}\)
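\begin{align*}
\matrixentry{\conjugate{\left(\transpose{A}\right)}}{ij} &= \conjugate{\matrixentry{\transpose{A}}{ij}} && \text{Definition CCM}\\
&= \conjugate{\matrixentry{A}{ji}} && \text{Definition TM}\\
&= \matrixentry{\conjugate{A}}{ji} && \text{Definition CCM}\\
&= \matrixentry{\transpose{\left(\conjugate{A}\right)}}{ij} && \text{Definition TM}
\end{align*}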
Since the matrices \(\conjugate{\left(\transpose{A}\right)}\) and \(\transpose{\left(\conjugate{A}\right)}\) are equal in each entry, Definition ME says that \(\conjugate{\left(\transpose{A}\right)}=\transpose{\left(\conjugate{A}\right)}\text{.}\)
Subsection AM Adjoint of a Matrix
The combination of transposing and conjugating a matrix will be important in subsequent sections, such as Subsection MINM.UM and Section OD. We make a key definition here and prove some basic results in the same spirit as those above.
Definition A. Adjoint.
If \(A\) is a matrix, then its adjoint is \(\adjoint{A}=\transpose{\left(\conjugate{A}\right)}\text{.}\)
You will see the adjoint written elsewhere variously as \(A^H\text{,}\) \(A^\ast\) or \(A^\dagger\text{.}\) Notice that Theorem MCT says it does not really matter if we conjugate and then transpose, or transpose and then conjugate.
Theorem AMA. Adjoint and Matrix Addition.
Suppose \(A\) and \(B\) are matrices of the same size. Then \(\adjoint{\left(A+B\right)}=\adjoint{A}+\adjoint{B}\text{.}\)
Proof.
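Working with whole matrices, rather than entries, we have
\begin{align*}
\adjoint{\left(A+B\right)} &= \transpose{\left(\conjugate{A+B}\right)} && \text{Definition A}\\
&= \transpose{\left(\conjugate{A}+\conjugate{B}\right)} && \text{Theorem CRMA}\\
&= \transpose{\left(\conjugate{A}\right)}+\transpose{\left(\conjugate{B}\right)} && \text{Theorem TMA}\\
&= \adjoint{A}+\adjoint{B} && \text{Definition A}
\end{align*}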
Theorem AMSM. Adjoint and Matrix Scalar Multiplication.
Suppose \(\alpha\in\complexes\) is a scalar and \(A\) is a matrix. Then \(\adjoint{\left(\alpha A\right)}=\conjugate{\alpha}\adjoint{A}\text{.}\)
Proof.
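Working with whole matrices again, we have
\begin{align*}
\adjoint{\left(\alpha A\right)} &= \transpose{\left(\conjugate{\alpha A}\right)} && \text{Definition A}\\
&= \transpose{\left(\conjugate{\alpha}\conjugate{A}\right)} && \text{Theorem CRMSM}\\
&= \conjugate{\alpha}\transpose{\left(\conjugate{A}\right)} && \text{Theorem TMSM}\\
&= \conjugate{\alpha}\adjoint{A} && \text{Definition A}
\end{align*}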
Theorem AA. Adjoint of an Adjoint.
Suppose that \(A\) is a matrix. Then \(\adjoint{\left(\adjoint{A}\right)}=A\text{.}\)
Proof.
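Once more at the level of matrices, we have
\begin{align*}
\adjoint{\left(\adjoint{A}\right)} &= \transpose{\left(\conjugate{\adjoint{A}}\right)} && \text{Definition A}\\
&= \transpose{\left(\conjugate{\transpose{\left(\conjugate{A}\right)}}\right)} && \text{Definition A}\\
&= \transpose{\left(\transpose{\left(\conjugate{\conjugate{A}}\right)}\right)} && \text{Theorem MCT}\\
&= \conjugate{\conjugate{A}} && \text{Theorem TT}\\
&= A && \text{Theorem CCM}
\end{align*}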
Take note of how the theorems in this section, while simple, build on earlier theorems and definitions and never descend to the level of entry-by-entry proofs based on Definition ME. In other words, the equal signs that appear in the previous proofs are equalities of matrices, not scalars (which is the opposite of a proof like that of Theorem TMA).
Sage MO. Matrix Operations.
Every operation in this section is implemented in Sage. The only real subtlety is determining if certain matrices are symmetric, which we will discuss below. In linear algebra, the term “adjoint” has two unrelated meanings, so you need to be careful when you see this term. In particular, in Sage it is used to mean something different. So our version of the adjoint is implemented as the matrix method .conjugate_transpose(). Here are some straightforward examples.
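A minimal sketch, with entries invented for illustration:

    A = matrix(QQbar, [[1 + I, 2, 3], [4, 5 - I, 6]])
    A.transpose()                 # Definition TM
    A.conjugate()                 # Definition CCM
    A.conjugate_transpose()       # our adjoint, Definition A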
With these constructions, we can test, or demonstrate, some of the theorems above. Of course, this does not make the theorems true, but is satisfying nonetheless. This can be an effective technique when you are learning new Sage commands or new linear algebra — if your computations are not consistent with theorems, then your understanding of the linear algebra may be flawed, or your understanding of Sage may be flawed, or Sage may have a bug! Note in the following how we use comparison (==) between matrices as the implementation of matrix equality (Definition ME).
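For instance, reusing the matrix A from the sketch above, and inventing a second matrix B of the same size:

    B = matrix(QQbar, [[2, 4 - I, 6], [8, 10, 12 + I]])
    alpha = QQbar(2 + I)
    (A + B).transpose() == A.transpose() + B.transpose()        # Theorem TMA
    (alpha*A).conjugate() == alpha.conjugate()*A.conjugate()    # Theorem CRMSM
    (A + B).conjugate_transpose() == A.conjugate_transpose() + B.conjugate_transpose()   # Theorem AMA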
The reverse is also possible: you can use theorems to convert, or express, Sage code in alternative, but mathematically equivalent, forms.
Here is the subtlety. With approximate numbers, such as those in RDF and CDF, it can be tricky to decide if two numbers are equal, or if a very small number is zero or not. In these situations Sage allows us to specify a “tolerance” — the largest number that can be effectively considered zero. Consider the following:
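A sketch of the phenomenon, with entries invented for illustration:

    B = matrix(RDF, [[1.0, 2.0], [2.0, 1.0]])
    B.is_symmetric()              # True, as expected
    A = matrix(RDF, [[1.0, 0.000000000001], [0.0, 1.0]])
    A.is_symmetric()              # also True, even though A is plainly not symmetric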
Clearly the last result is not correct. This is because \(0.000000000001 = 1.0\times 10^{-12}\) is “small enough” to be mistaken for the zero in the other corner of the matrix. However, Sage will let us set our own idea of when two numbers are equal, by setting a tolerance on the difference between two numbers that will allow them to be considered equal. The default tolerance is set at \(1.0\times 10^{-12}\text{.}\) Here we use Sage's syntax for scientific notation to specify the tolerance.
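Assuming the tol keyword of .is_symmetric() for matrices over RDF, a tolerance smaller than the offending entry produces the correct answer for the matrix A above:

    A.is_symmetric(tol=1.0e-13)   # False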
This is not a course in numerical linear algebra, even if that is a fascinating field of study. To concentrate on the main ideas of introductory linear algebra, whenever possible we will concentrate on number systems like the rational numbers or algebraic numbers where we can rely on exact results. If you are ever unsure if a number system is exact or not, just ask.
Reading Questions MO Reading Questions
1.
Perform the following matrix computation.
2.
Theorem VSPM reminds you of what previous theorem? How strong is the similarity?
3.
Compute the transpose of the matrix below.
Exercises MO Exercises
C10.
Let \(A = \begin{bmatrix} 1 & 4 & -3 \\ 6 & 3 & 0\end{bmatrix}\text{,}\) \(B = \begin{bmatrix} 3 & 2 & 1 \\ -2 & -6 & 5\end{bmatrix}\) and \(C = \begin{bmatrix} 2 & 4 \\ 4 & 0 \\ -2 & 2\end{bmatrix}\text{.}\) Let \(\alpha = 4\) and \(\beta = 1/2\text{.}\) Perform the following calculations: (1) \(A + B\text{,}\) (2) \(A + C\text{,}\) (3) \(\transpose{B} + C\text{,}\) (4) \(A + \transpose{B}\text{,}\) (5) \(\beta C\text{,}\) (6) \(4A - 3B\text{,}\) (7) \(\transpose{A} + \alpha C\text{,}\) (8) \(A + B - \transpose{C}\text{,}\) (9) \(4A + 2B - 5\transpose{C}\text{.}\)
- \(A + B = \begin{bmatrix} 4 & 6 & -2 \\ 4 & -3 & 5 \end{bmatrix}\text{.}\)
- \(A + C\) is undefined; \(A\) and \(C\) are not the same size.
- \(\transpose{B} + C = \begin{bmatrix} 5 & 2 \\ 6 & -6 \\ -1 & 7 \end{bmatrix}\text{.}\)
- \(A + \transpose{B}\) is undefined; \(A\) and \(\transpose{B}\) are not the same size.
- \(\beta C = \begin{bmatrix} 1 & 2 \\ 2 & 0 \\ -1 & 1 \end{bmatrix}\text{.}\)
- \(4A - 3B = \begin{bmatrix} -5 & 10 & -15\\ 30 & 30 & -15 \end{bmatrix}\text{.}\)
- \(\transpose{A} + \alpha C = \begin{bmatrix} 9 & 22 \\ 20 & 3\\ -11 & 8 \end{bmatrix}\text{.}\)
- \(A + B - \transpose{C} = \begin{bmatrix} 2 & 2 & 0\\ 0 & -3 & 3\end{bmatrix}\text{.}\)
- \(4A + 2B - 5\transpose{C} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\text{.}\)
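These computations are straightforward to verify with Sage; a quick sketch:

    A = matrix(QQ, [[1, 4, -3], [6, 3, 0]])
    B = matrix(QQ, [[3, 2, 1], [-2, -6, 5]])
    C = matrix(QQ, [[2, 4], [4, 0], [-2, 2]])
    A + B
    B.transpose() + C
    (1/2)*C
    4*A - 3*B
    A.transpose() + 4*C
    A + B - C.transpose()
    4*A + 2*B - 5*C.transpose()   # the 2 x 3 zero matrix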
C11.
Solve the given vector equation for \(x\text{,}\) or explain why no solution exists.
The given equation
is valid only if \(4 - 3x = -2\text{.}\) Thus, the only solution is \(x = 2\text{.}\)
C12.
Solve the given matrix equation for \(\alpha\text{,}\) or explain why no solution exists.
The given equation
leads to the 6 equations in \(\alpha\)
The only value that solves all 6 equations is \(\alpha = 3\text{,}\) which is the solution to the original matrix equation.
C13.
Solve the given matrix equation for \(\alpha\text{,}\) or explain why no solution exists.
The given equation
gives a system of six equations in \(\alpha\)
Solving each of these equations, we see that the first three and the fifth all lead to the solution \(\alpha = 2\text{,}\) the fourth equation is true no matter what the value of \(\alpha\text{,}\) but the last equation is only solved by \(\alpha = 7/4\text{.}\) Thus, the system has no solution, and the original matrix equation also has no solution.
C14.
Find \(\alpha\) and \(\beta\) that solve the following equation.
The given equation
gives a system of four equations in two variables
Solving this linear system by row-reducing the augmented matrix shows that \(\alpha = 3\text{,}\) \(\beta = -2\) is the only solution.
M10.
Describe matrices that are simultaneously both upper triangular and lower triangular.
In Chapter V we defined the operations of vector addition and vector scalar multiplication in Definition CVA and Definition CVSM. These two operations formed the underpinnings of the remainder of the chapter. We have now defined similar operations for matrices in Definition MA and Definition MSM. You will have noticed the resulting similarities between Theorem VSPCV and Theorem VSPM.
In Exercises M20–M25, you will be asked to extend these similarities to other fundamental definitions and concepts we first saw in Chapter V. This sequence of problems was suggested by Martin Jackson.
M20.
Suppose \(S=\set{B_1,\,B_2,\,B_3,\,\ldots,\,B_p}\) is a set of matrices from \(M_{mn}\text{.}\) Formulate appropriate definitions for the following terms and give an example of the use of each.
- A linear combination of elements of \(S\text{.}\)
- A relation of linear dependence on \(S\text{,}\) both trivial and nontrivial.
- \(S\) is a linearly independent set.
- \(\spn{S}\text{.}\)
M21.
Show that the set \(S\) is linearly independent in \(M_{22}\text{.}\)
Suppose there exist constants \(\alpha\text{,}\) \(\beta\text{,}\) \(\gamma\text{,}\) and \(\delta\) so that
Then
so that
The only solution is then \(\alpha = 0\text{,}\) \(\beta = 0\text{,}\) \(\gamma = 0\text{,}\) and \(\delta = 0\text{,}\) so that the set \(S\) is a linearly independent set of matrices.
M22.
Determine if the set \(S\) below is linearly independent in \(M_{23}\text{.}\)
Suppose that there exist constants \(a_1\text{,}\) \(a_2\text{,}\) \(a_3\text{,}\) \(a_4\text{,}\) and \(a_5\) so that
Then, we have the matrix equality (Definition ME)
which yields the linear system of equations
By row-reducing the associated \(6\times 5\) homogeneous system, we see that the only solution is \(a_1 = a_2 = a_3 = a_4 = a_5 = 0\text{,}\) so these matrices are a linearly independent subset of \(M_{23}\text{.}\)
M23.
Determine if the matrix \(A\) is in the span of \(S\text{.}\) In other words, is \(A\in\spn{S}\text{?}\) If so write \(A\) as a linear combination of the elements of \(S\text{.}\)
The matrix \(A\) is in the span of \(S\text{,}\) since
Note that if we were to write a complete linear combination of all of the matrices in \(S\text{,}\) then the fourth matrix would have a zero coefficient.
M24.
Suppose \(Y\) is the set of all \(3\times 3\) symmetric matrices (Definition SYM). Find a set \(T\) so that \(T\) is linearly independent and \(\spn{T}=Y\text{.}\)
Since any symmetric matrix is of the form
any symmetric matrix is a linear combination of the linearly independent vectors in set \(T\) below, so that \(\spn{T} = Y\)
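As a sketch of the intended construction: a generic \(3\times 3\) symmetric matrix decomposes as
\begin{equation*}
\begin{bmatrix} a & b & c \\ b & d & e \\ c & e & f \end{bmatrix}
= a\begin{bmatrix} 1&0&0\\0&0&0\\0&0&0 \end{bmatrix}
+ b\begin{bmatrix} 0&1&0\\1&0&0\\0&0&0 \end{bmatrix}
+ c\begin{bmatrix} 0&0&1\\0&0&0\\1&0&0 \end{bmatrix}
+ d\begin{bmatrix} 0&0&0\\0&1&0\\0&0&0 \end{bmatrix}
+ e\begin{bmatrix} 0&0&0\\0&0&1\\0&1&0 \end{bmatrix}
+ f\begin{bmatrix} 0&0&0\\0&0&0\\0&0&1 \end{bmatrix}\text{,}
\end{equation*}
so \(T\) may be taken to be the set of these six matrices.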
(Something to think about: How do we know that these matrices are linearly independent?)
M25.
Define a subset of \(M_{33}\) by
Find a set \(R\) so that \(R\) is linearly independent and \(\spn{R}=U_{33}\text{.}\)
T13.
Prove Property CM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.
For all \(A,\,B\in M_{mn}\) and for all \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)
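\begin{align*}
\matrixentry{A+B}{ij} &= \matrixentry{A}{ij}+\matrixentry{B}{ij} && \text{Definition MA}\\
&= \matrixentry{B}{ij}+\matrixentry{A}{ij} && \text{commutativity of addition in } \complexes\\
&= \matrixentry{B+A}{ij} && \text{Definition MA}
\end{align*}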
Since the entries of \(A+B\) and \(B+A\) are equal for every choice of \(i\) and \(j\text{,}\) Definition ME tells us the two matrices are equal.
T14.
Prove Property AAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.
T17.
Prove Property SMAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.
T18.
Prove Property DMAM of Theorem VSPM. Write your proof in the style of the proof of Property DSAM given in this section.
T25.
Prove that the transpose of an upper triangular matrix is a lower triangular matrix. Or vice-versa, your choice.
A matrix \(A\) is skew-symmetric if \(\transpose{A}=-A\text{.}\) Exercises T30–T37 employ this definition.
T30.
Prove that a skew-symmetric matrix is square. (Hint: study the proof of Theorem SMS.)
T31.
Prove that a skew-symmetric matrix must have zeros for its diagonal elements. In other words, if \(A\) is skew-symmetric of size \(n\text{,}\) then \(\matrixentry{A}{ii}=0\) for \(1\leq i\leq n\text{.}\) (Hint: carefully construct an example of a \(3\times 3\) skew-symmetric matrix before attempting a proof.)
T32.
Prove that a matrix \(A\) is both skew-symmetric and symmetric if and only if \(A\) is the zero matrix. (Hint: one half of this proof is very easy, the other half takes a little more work.)
T33.
Suppose \(A\) and \(B\) are both skew-symmetric matrices of the same size and \(\alpha,\,\beta\in\complexes\text{.}\) Prove that \(\alpha A + \beta B\) is a skew-symmetric matrix.
T34.
Suppose \(A\) is a square matrix. Prove that \(A+\transpose{A}\) is a symmetric matrix.
T35.
Suppose \(A\) is a square matrix. Prove that \(A-\transpose{A}\) is a skew-symmetric matrix.
T36.
Suppose \(A\) is a square matrix. Prove that there is a symmetric matrix \(B\) and a skew-symmetric matrix \(C\) such that \(A=B+C\text{.}\) In other words, any square matrix can be decomposed into a symmetric matrix and a skew-symmetric matrix (Proof Technique DC). (Hint: consider building a proof on Exercise MO.T34 and Exercise MO.T35.)
T37.
Prove that the decomposition in Exercise MO.T36 is unique (see Proof Technique U). (Hint: a proof can turn on Exercise MO.T31.)