Section MINM  Matrix Inverses and Nonsingular Matrices

From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

We saw in Theorem CINM that if a square matrix A is nonsingular, then there is a matrix B so that AB = {I}_{n}. In other words, B is halfway to being an inverse of A. We will see in this section that B automatically fulfills the second condition (BA = {I}_{n}). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We’ll make all these connections precise now. Not many examples or definitions in this section, just theorems.

Subsection NMI: Nonsingular Matrices are Invertible

We need a couple of technical results for starters. Some books would call these minor, but essential, results “lemmas.” We’ll just call ’em theorems.  See Technique LC for more on the distinction. 

The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices and the conclusion then says the same thing about each individual matrix in the product. This result has an analogy in the algebra of complex numbers: suppose $\alpha,\beta\in\mathbb{C}$; then $\alpha\beta\neq 0$ if and only if $\alpha\neq 0$ and $\beta\neq 0$. We can view this result as suggesting that the term “nonsingular” for matrices is like the term “nonzero” for scalars.

Theorem NPNT
Nonsingular Product has Nonsingular Terms
Suppose that A and B are square matrices of size n. The product AB is nonsingular if and only if A and B are both nonsingular.

Proof   (⇒) We’ll do this portion of the proof in two parts, each as a proof by contradiction (Technique CD). Assume that AB is nonsingular. Establishing that B is nonsingular is the easier part, so we will do it first, but in reality, we will need to know that B is nonsingular when we prove that A is nonsingular.

You can also think of this proof as being a study of four possible conclusions in the table below. One of the four rows must happen (the list is exhaustive). In the proof we learn that the first three rows lead to contradictions, and so are impossible. That leaves the fourth row as a certainty, which is our desired conclusion.

A            B            Case
Singular     Singular     1
Nonsingular  Singular     1
Singular     Nonsingular  2
Nonsingular  Nonsingular

Part 1. Suppose B is singular. Then there is a nonzero vector z that is a solution to $\mathcal{LS}(B,0)$. So

\begin{aligned}
(AB)z &= A(Bz) && \text{Theorem MMA}\\
      &= A0    && \text{Theorem SLEMM}\\
      &= 0     && \text{Theorem MMZM}
\end{aligned}

Because z is a nonzero solution to $\mathcal{LS}(AB,0)$, we conclude that AB is singular (Definition NM). This is a contradiction, so B is nonsingular, as desired.

Part 2. Suppose A is singular. Then there is a nonzero vector y that is a solution to $\mathcal{LS}(A,0)$. Now consider the linear system $\mathcal{LS}(B,y)$. Since we know B is nonsingular from Part 1, the system has a unique solution (Theorem NMUS), which we will denote as w. We first claim that w is not the zero vector. To the contrary, suppose that w = 0 (Technique CD). Then

\begin{aligned}
y &= Bw && \text{Theorem SLEMM}\\
  &= B0 && \text{Hypothesis}\\
  &= 0  && \text{Theorem MMZM}
\end{aligned}

contrary to $y$ being nonzero. So $w \neq 0$. The pieces are in place, so here we go,

\begin{aligned}
(AB)w &= A(Bw) && \text{Theorem MMA}\\
      &= Ay    && \text{Theorem SLEMM}\\
      &= 0     && \text{Theorem SLEMM}
\end{aligned}

So w is a nonzero solution to $\mathcal{LS}(AB,0)$, and thus we can say that AB is singular (Definition NM). This is a contradiction, so A is nonsingular, as desired.

(⇐) Now assume that both A and B are nonsingular. Suppose that $x \in \mathbb{C}^{n}$ is a solution to $\mathcal{LS}(AB,0)$. Then

\begin{aligned}
0 &= (AB)x && \text{Theorem SLEMM}\\
  &= A(Bx) && \text{Theorem MMA}
\end{aligned}

By Theorem SLEMM, Bx is a solution to $\mathcal{LS}(A,0)$, and by the definition of a nonsingular matrix (Definition NM), we conclude that Bx = 0. Now, by an entirely similar argument, the nonsingularity of B forces us to conclude that x = 0. So the only solution to $\mathcal{LS}(AB,0)$ is the zero vector and we conclude that AB is nonsingular by Definition NM.

This is a powerful result in the “forward” direction, because it allows us to begin with a hypothesis that something complicated (the matrix product AB) has the property of being nonsingular, and then conclude that the simpler constituents (A and B individually) also have the property of being nonsingular. If we had thought that the matrix product was an artificial construction, results like this would make us begin to think twice.

The contrapositive of this result is equally interesting. It says that A or B (or both) is a singular matrix if and only if the product AB is singular. Notice how the negation of the theorem’s conclusion (A and B both nonsingular) becomes the statement “at least one of A and B is singular.” (See Technique CP.)
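
For readers who like to experiment, here is a minimal numerical sketch of the contrapositive (this uses Python with NumPy, and the matrices are invented for the illustration; neither appears in the text): B is singular, so any nonzero vector in its null space is automatically a nonzero solution to $\mathcal{LS}(AB,0)$, no matter what A is.

    import numpy as np

    # A is nonsingular (determinant 1), B is singular (its second column is twice its first).
    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    B = np.array([[1.0, 2.0],
                  [3.0, 6.0]])

    # z is a nonzero solution of B z = 0, so it is also a nonzero solution of (AB) z = 0.
    z = np.array([2.0, -1.0])
    print(B @ z)        # [0. 0.]
    print((A @ B) @ z)  # [0. 0.]  -- AB is singular, as Theorem NPNT predicts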

Theorem OSIS
One-Sided Inverse is Sufficient
Suppose A and B are square matrices of size n such that AB = {I}_{n}. Then BA = {I}_{n}.

Proof   The matrix {I}_{n} is nonsingular (since it row-reduces easily to {I}_{n}, Theorem NMRRI). So A and B are nonsingular by Theorem NPNT; in particular, B is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix C so that BC = {I}_{n}. This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. B is nonsingular, so there must be a “right-inverse” for B, and we’re calling it C.

Now

\begin{aligned}
BA &= (BA)I_{n}  && \text{Theorem MMIM}\\
   &= (BA)(BC)   && \text{Theorem CINM}\\
   &= B(AB)C     && \text{Theorem MMA}\\
   &= BI_{n}C    && \text{Hypothesis}\\
   &= BC         && \text{Theorem MMIM}\\
   &= I_{n}      && \text{Theorem CINM}
\end{aligned}

which is the desired conclusion.

So Theorem OSIS tells us that if A is nonsingular, then the matrix B guaranteed by Theorem CINM will be both a “right-inverse” and a “left-inverse” for A, so A is invertible and {A}^{−1} = B.

So if you have a nonsingular matrix, A, you can use the procedure described in Theorem CINM to find an inverse for A. If A is singular, then the procedure in Theorem CINM will fail, as the first n columns of M will not row-reduce to the identity matrix. However, we can say a bit more. When A is singular, then A does not have an inverse (which is very different from saying that the procedure in Theorem CINM fails to find an inverse). This may feel like we are splitting hairs, but it’s important that we do not make unfounded assumptions. These observations motivate the next theorem.
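
To make the procedure of Theorem CINM concrete, here is a rough sketch in Python with NumPy (the language, the helper name inverse_via_row_reduction, and the numerical tolerance are all assumptions of this note, not part of the text): build the augmented matrix M = [A | I_n], row-reduce, and read the inverse from the last n columns when the first n columns become I_n; when a pivot cannot be found, A is singular and no inverse exists.

    import numpy as np

    def inverse_via_row_reduction(A, tol=1e-12):
        """Row-reduce M = [A | I] as in Theorem CINM; return the inverse, or None if A is singular."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        M = np.hstack([A, np.eye(n)])              # the augmented matrix M
        for j in range(n):
            p = j + np.argmax(np.abs(M[j:, j]))    # choose a pivot row (partial pivoting)
            if abs(M[p, j]) < tol:
                return None                        # left half cannot reach I_n: A is singular
            M[[j, p]] = M[[p, j]]                  # swap the pivot row into place
            M[j] /= M[j, j]                        # scale to get a leading 1
            for i in range(n):
                if i != j:
                    M[i] -= M[i, j] * M[j]         # clear the rest of column j
        return M[:, n:]                            # right half is the inverse

    A = np.array([[4.0, 10.0], [2.0, 6.0]])
    print(inverse_via_row_reduction(A))                          # [[ 1.5 -2.5] [-0.5  1. ]]
    print(inverse_via_row_reduction([[1.0, 2.0], [2.0, 4.0]]))   # None: a singular matrix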

Theorem NI
Nonsingularity is Invertibility
Suppose that A is a square matrix. Then A is nonsingular if and only if A is invertible.

Proof   (⇐) Suppose A is invertible, and suppose that x is any solution to the homogeneous system $\mathcal{LS}(A,0)$. Then

\begin{aligned}
x &= I_{n}x                  && \text{Theorem MMIM}\\
  &= \left(A^{-1}A\right)x   && \text{Definition MI}\\
  &= A^{-1}\left(Ax\right)   && \text{Theorem MMA}\\
  &= A^{-1}0                 && \text{Theorem SLEMM}\\
  &= 0                       && \text{Theorem MMZM}
\end{aligned}

So the only solution to $\mathcal{LS}(A,0)$ is the zero vector, so by Definition NM, A is nonsingular.

(⇒) Suppose now that A is nonsingular. By Theorem CINM we find B so that AB = {I}_{n}. Then Theorem OSIS tells us that BA = {I}_{n}. So B is A’s inverse, and by construction, A is invertible.

So for a square matrix, the properties of having an inverse and of having a trivial null space are one and the same. Can’t have one without the other.

Theorem NME3
Nonsingular Matrix Equivalences, Round 3
Suppose that A is a square matrix of size n. The following are equivalent.

  1. A is nonsingular.
  2. A row-reduces to the identity matrix.
  3. The null space of A contains only the zero vector, $\mathcal{N}(A) = \{0\}$.
  4. The linear system $\mathcal{LS}(A,b)$ has a unique solution for every possible choice of b.
  5. The columns of A are a linearly independent set.
  6. A is invertible.

Proof   We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI.

In the case that A is a nonsingular coefficient matrix of a system of equations, the inverse allows us to very quickly compute the unique solution, for any vector of constants.

Theorem SNCM
Solution with Nonsingular Coefficient Matrix
Suppose that A is nonsingular. Then the unique solution to $\mathcal{LS}(A,b)$ is $A^{-1}b$.

Proof   By Theorem NMUS we know already that $\mathcal{LS}(A,b)$ has a unique solution for every choice of b. We need to show that the expression stated is indeed a solution (the solution). That’s easy; just “plug it in” to the corresponding vector equation representation (Theorem SLEMM),

\begin{aligned}
A\left(A^{-1}b\right) &= \left(AA^{-1}\right)b && \text{Theorem MMA}\\
                      &= I_{n}b                && \text{Definition MI}\\
                      &= b                     && \text{Theorem MMIM}
\end{aligned}

Since Ax = b is true when we substitute $A^{-1}b$ for x, $A^{-1}b$ is a (the!) solution to $\mathcal{LS}(A,b)$.
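
A quick numerical illustration of Theorem SNCM (a sketch assuming Python with NumPy; the system is made up for the example and does not appear in the text): form $A^{-1}b$ and confirm that it satisfies Ax = b.

    import numpy as np

    # A made-up nonsingular system A x = b.
    A = np.array([[1.0, 2.0],
                  [3.0, 5.0]])
    b = np.array([5.0, 13.0])

    x = np.linalg.inv(A) @ b      # the unique solution A^{-1} b promised by Theorem SNCM
    print(x)                      # [1. 2.]
    print(np.allclose(A @ x, b))  # True: substituting the solution recovers b

In numerical practice one would usually call np.linalg.solve(A, b) rather than forming the inverse explicitly, but the computation above mirrors the statement of the theorem.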

Subsection UM: Unitary Matrices

Recall that the adjoint of a matrix is $A^{\ast} = \left(\overline{A}\right)^{t}$ (Definition A).

Definition UM
Unitary Matrices
Suppose that U is a square matrix of size n such that {U}^{∗}U = {I}_{ n}. Then we say U is unitary.

This condition may seem rather far-fetched at first glance. Would there be any matrix that behaved this way? Well, yes, here’s one.

Example UM3
Unitary matrix of size 3

U = \left[\begin{array}{ccc}
\frac{1+i}{\sqrt{5}} & \frac{3+2i}{\sqrt{55}} & \frac{2+2i}{\sqrt{22}} \\
\frac{1-i}{\sqrt{5}} & \frac{2+2i}{\sqrt{55}} & \frac{-3+i}{\sqrt{22}} \\
\frac{i}{\sqrt{5}} & \frac{3-5i}{\sqrt{55}} & -\frac{2}{\sqrt{22}}
\end{array}\right]

The computations get a bit tiresome, but if you work your way through the computation of {U}^{∗}U, you will arrive at the 3 × 3 identity matrix {I}_{3}.
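
If you would rather let software do the tiresome arithmetic, here is a sketch (assuming Python with NumPy, which is not part of the text) that builds the matrix U of this example and checks that $U^{\ast}U$ equals $I_3$ up to rounding error.

    import numpy as np

    s5, s55, s22 = np.sqrt(5), np.sqrt(55), np.sqrt(22)
    U = np.array([[(1 + 1j)/s5, (3 + 2j)/s55, (2 + 2j)/s22],
                  [(1 - 1j)/s5, (2 + 2j)/s55, (-3 + 1j)/s22],
                  [      1j/s5, (3 - 5j)/s55,      -2/s22]])

    # U.conj().T is the adjoint U^*, so this product should be (numerically) the identity.
    print(np.allclose(U.conj().T @ U, np.eye(3)))   # True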

Unitary matrices do not have to look quite so gruesome. Here’s a larger one that is a bit more pleasing.

Example UPM
Unitary permutation matrix
The matrix

P = \left[\begin{array}{ccccc}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0
\end{array}\right]

is unitary as can be easily checked. Notice that it is just a rearrangement of the columns of the 5 × 5 identity matrix, {I}_{5} (Definition IM).

An interesting exercise is to build another 5 × 5 unitary matrix, R, using a different rearrangement of the columns of {I}_{5}. Then form the product PR. This will be another unitary matrix (Exercise MINM.T10). If you were to build all 5! = 5 × 4 × 3 × 2 × 1 = 120 matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group, since the set, together with the one operation (matrix multiplication here), is closed, associative, has an identity ({I}_{5}), and has inverses (Theorem UMI). Notice though that the operation in this group is not commutative!
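
Here is a short computational sketch of this paragraph (assuming Python with NumPy; the particular rearrangement chosen for R is arbitrary): build P from Example UPM by rearranging the columns of $I_5$, build a second rearrangement R, and check that P, R, and the product PR all satisfy the unitary condition, which for real matrices reduces to $M^{t}M = I_5$.

    import numpy as np

    I5 = np.eye(5)
    # P from Example UPM: its columns are e_3, e_1, e_5, e_2, e_4, a rearrangement of the columns of I_5.
    P = I5[:, [2, 0, 4, 1, 3]]
    # R: a different (arbitrarily chosen) rearrangement of the columns of I_5.
    R = I5[:, [1, 2, 3, 4, 0]]

    for M in (P, R, P @ R):
        print(np.allclose(M.T @ M, I5))   # True each time (compare Exercise MINM.T10)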

If a matrix A has only real number entries (we say it is a real matrix) then the defining property of being unitary simplifies to $A^{t}A = I_{n}$. In this case we, and everybody else, call the matrix orthogonal, so you may often encounter this term in your other reading when the complex numbers are not under consideration.

Unitary matrices have easily computed inverses. They also have columns that form orthonormal sets. Here are the theorems that show us that unitary matrices are not as strange as they might initially appear.

Theorem UMI
Unitary Matrices are Invertible
Suppose that U is a unitary matrix of size n. Then U is nonsingular, and {U}^{−1} = {U}^{∗}.

Proof   By Definition UM, we know that {U}^{∗}U = {I}_{ n}. The matrix {I}_{n} is nonsingular (since it row-reduces easily to {I}_{n}, Theorem NMRRI). So by Theorem NPNT, U and {U}^{∗} are both nonsingular matrices.

The equation {U}^{∗}U = {I}_{ n} gets us halfway to an inverse of U, and Theorem OSIS tells us that then U{U}^{∗} = {I}_{ n} also. So U and {U}^{∗} are inverses of each other (Definition MI).

Theorem CUMOS
Columns of Unitary Matrices are Orthonormal Sets
Suppose that A is a square matrix of size n with columns $S = \{A_{1}, A_{2}, A_{3}, \ldots, A_{n}\}$. Then A is a unitary matrix if and only if S is an orthonormal set.

Proof   The proof revolves around recognizing that a typical entry of the product {A}^{∗}A is an inner product of columns of A. Here are the details to support this claim.

\begin{aligned}
\left[A^{\ast}A\right]_{ij} &= \sum_{k=1}^{n}\left[A^{\ast}\right]_{ik}\left[A\right]_{kj} && \text{Theorem EMP}\\
&= \sum_{k=1}^{n}\left[\left(\overline{A}\right)^{t}\right]_{ik}\left[A\right]_{kj} && \text{Theorem EMP}\\
&= \sum_{k=1}^{n}\left[\overline{A}\right]_{ki}\left[A\right]_{kj} && \text{Definition TM}\\
&= \sum_{k=1}^{n}\overline{\left[A\right]_{ki}}\left[A\right]_{kj} && \text{Definition CCM}\\
&= \sum_{k=1}^{n}\left[A\right]_{kj}\overline{\left[A\right]_{ki}} && \text{Property CMCN}\\
&= \sum_{k=1}^{n}\left[A_{j}\right]_{k}\overline{\left[A_{i}\right]_{k}} && \\
&= \left\langle A_{j},\,A_{i}\right\rangle && \text{Definition IP}
\end{aligned}

We now employ this equality in a chain of equivalences,

\begin{aligned}
& S = \{A_{1}, A_{2}, A_{3}, \ldots, A_{n}\}\ \text{is an orthonormal set} && \\
&\quad\Leftrightarrow\quad \left\langle A_{j},\,A_{i}\right\rangle = \begin{cases} 0 & \text{if $i \neq j$} \\ 1 & \text{if $i = j$} \end{cases} && \text{Definition ONS}\\
&\quad\Leftrightarrow\quad \left[A^{\ast}A\right]_{ij} = \begin{cases} 0 & \text{if $i \neq j$} \\ 1 & \text{if $i = j$} \end{cases} && \\
&\quad\Leftrightarrow\quad \left[A^{\ast}A\right]_{ij} = \left[I_{n}\right]_{ij},\ 1 \le i \le n,\ 1 \le j \le n && \text{Definition IM}\\
&\quad\Leftrightarrow\quad A^{\ast}A = I_{n} && \text{Definition ME}\\
&\quad\Leftrightarrow\quad A\ \text{is a unitary matrix} && \text{Definition UM}
\end{aligned}

Example OSMC
Orthonormal set from matrix columns
The matrix

U = \left[\begin{array}{ccc}
\frac{1+i}{\sqrt{5}} & \frac{3+2i}{\sqrt{55}} & \frac{2+2i}{\sqrt{22}} \\
\frac{1-i}{\sqrt{5}} & \frac{2+2i}{\sqrt{55}} & \frac{-3+i}{\sqrt{22}} \\
\frac{i}{\sqrt{5}} & \frac{3-5i}{\sqrt{55}} & -\frac{2}{\sqrt{22}}
\end{array}\right]

from Example UM3 is a unitary matrix. By Theorem CUMOS, its columns

\left\{
\left[\begin{array}{c} \frac{1+i}{\sqrt{5}} \\ \frac{1-i}{\sqrt{5}} \\ \frac{i}{\sqrt{5}} \end{array}\right],\
\left[\begin{array}{c} \frac{3+2i}{\sqrt{55}} \\ \frac{2+2i}{\sqrt{55}} \\ \frac{3-5i}{\sqrt{55}} \end{array}\right],\
\left[\begin{array}{c} \frac{2+2i}{\sqrt{22}} \\ \frac{-3+i}{\sqrt{22}} \\ -\frac{2}{\sqrt{22}} \end{array}\right]
\right\}

form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product {U}^{∗}U. Or, because the inner product is anti-commutative (Theorem IPAC) you only need check three inner products (see Exercise MINM.T12).
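
As a computational aside (a sketch assuming Python with NumPy, not part of the text), the entry in row i and column j of $U^{\ast}U$ is exactly the inner product $\langle A_{j}, A_{i}\rangle$ from the proof of Theorem CUMOS, so tabulating the nine inner products of the columns and comparing against $I_3$ performs the same check.

    import numpy as np

    s5, s55, s22 = np.sqrt(5), np.sqrt(55), np.sqrt(22)
    cols = [np.array([(1 + 1j)/s5, (1 - 1j)/s5, 1j/s5]),
            np.array([(3 + 2j)/s55, (2 + 2j)/s55, (3 - 5j)/s55]),
            np.array([(2 + 2j)/s22, (-3 + 1j)/s22, -2/s22])]

    # np.vdot conjugates its first argument, so vdot(A_i, A_j) matches <A_j, A_i> in the text's convention.
    gram = np.array([[np.vdot(ai, aj) for aj in cols] for ai in cols])
    print(np.allclose(gram, np.eye(3)))   # True: the columns form an orthonormal set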

When using vectors and matrices that only have real number entries, orthogonal matrices are those matrices with inverses that equal their transpose. Similarly, the inner product is the familiar dot product. Keep this special case in mind as you read the next theorem.

Theorem UMPIP
Unitary Matrices Preserve Inner Products
Suppose that U is a unitary matrix of size n and u and v are two vectors from {ℂ}^{n}. Then

\left\langle Uu,\,Uv\right\rangle = \left\langle u,\,v\right\rangle
\qquad\text{and}\qquad
\left\Vert Uv\right\Vert = \left\Vert v\right\Vert

Proof  

\begin{aligned}
\left\langle Uu,\,Uv\right\rangle &= (Uu)^{t}\overline{Uv} && \text{Theorem MMIP}\\
&= u^{t}U^{t}\overline{Uv} && \text{Theorem MMT}\\
&= u^{t}U^{t}\overline{U}\,\overline{v} && \text{Theorem MMCC}\\
&= u^{t}\left(\overline{\overline{U}}\right)^{t}\overline{U}\,\overline{v} && \text{Theorem CCT}\\
&= u^{t}\overline{\left(\overline{U}\right)^{t}}\,\overline{U}\,\overline{v} && \text{Theorem MCT}\\
&= u^{t}\overline{\left(\overline{U}\right)^{t}U}\,\overline{v} && \text{Theorem MMCC}\\
&= u^{t}\overline{U^{\ast}U}\,\overline{v} && \text{Definition A}\\
&= u^{t}\overline{I_{n}}\,\overline{v} && \text{Definition UM}\\
&= u^{t}I_{n}\overline{v} && \text{Definition IM}\\
&= u^{t}\overline{v} && \text{Theorem MMIM}\\
&= \left\langle u,\,v\right\rangle && \text{Theorem MMIP}
\end{aligned}

The second conclusion is just a specialization of the first conclusion.

\begin{aligned}
\left\Vert Uv\right\Vert &= \sqrt{\left\Vert Uv\right\Vert^{2}} && \\
&= \sqrt{\left\langle Uv,\,Uv\right\rangle} && \text{Theorem IPN}\\
&= \sqrt{\left\langle v,\,v\right\rangle} && \\
&= \sqrt{\left\Vert v\right\Vert^{2}} && \text{Theorem IPN}\\
&= \left\Vert v\right\Vert &&
\end{aligned}

Aside from the inherent interest in this theorem, it makes a bigger statement about unitary matrices. When we view vectors geometrically as directions or forces, then the norm equates to a notion of length. If we transform a vector by multiplication with a unitary matrix, then the length (norm) of that vector stays the same. If we consider column vectors with two or three slots containing only real numbers, then the inner product of two such vectors is just the dot product, and this quantity can be used to compute the angle between two vectors. When two vectors are multiplied (transformed) by the same unitary matrix, their dot product is unchanged and their individual lengths are unchanged. This results in the angle between the two vectors remaining unchanged as well.
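
A numerical spot-check of Theorem UMPIP (a sketch assuming Python with NumPy; the two vectors are arbitrary choices, not from the text): multiplying by the unitary matrix of Example UM3 leaves the inner product of the vectors and the norm of each vector unchanged.

    import numpy as np

    s5, s55, s22 = np.sqrt(5), np.sqrt(55), np.sqrt(22)
    U = np.array([[(1 + 1j)/s5, (3 + 2j)/s55, (2 + 2j)/s22],
                  [(1 - 1j)/s5, (2 + 2j)/s55, (-3 + 1j)/s22],
                  [      1j/s5, (3 - 5j)/s55,      -2/s22]])

    u = np.array([1.0 + 2j, -1j, 3.0])   # arbitrary vectors in C^3
    v = np.array([2.0, 1.0 - 1j, 1j])

    print(np.isclose(np.vdot(U @ u, U @ v), np.vdot(u, v)))      # True: inner product preserved
    print(np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v)))  # True: norm preserved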

A “unitary transformation” (a matrix-vector product with a unitary matrix) thus preserves geometrical relationships among vectors representing directions, forces, or other physical quantities. In the case of a two-slot vector with real entries, this is simply a rotation. These sorts of computations are exceedingly important in computer graphics such as games and real-time simulations, especially when increased realism is achieved by performing many such computations quickly. We will see unitary matrices again in subsequent sections (especially Theorem OD) and in each instance, consider the interpretation of the unitary matrix as a sort of geometry-preserving transformation. Some authors use the term isometry to highlight this behavior. We will speak loosely of a unitary matrix as being a sort of generalized rotation.

A final reminder: the terms “dot product,” “symmetric matrix” and “orthogonal matrix” used in reference to vectors or matrices with real number entries correspond to the terms “inner product,” “Hermitian matrix” and “unitary matrix” when we generalize to include complex number entries, so keep that in mind as you read elsewhere.

Subsection READ: Reading Questions

  1. Compute the inverse of the coefficient matrix of the system of equations below and use the inverse to solve the system.
    \begin{aligned}
    4x_{1} + 10x_{2} &= 12\\
    2x_{1} + 6x_{2} &= 4
    \end{aligned}
  2. In the reading questions for Section MISLE you were asked to find the inverse of the 3 × 3 matrix below.
    \left[\begin{array}{ccc} 2 & 3 & 1 \\ 1 & -2 & -3 \\ -2 & 4 & 6 \end{array}\right]

    Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem’s acronym.

  3. Is the matrix A unitary? Why?
    A = \left[\begin{array}{cc} \frac{1}{\sqrt{22}}(4+2i) & \frac{1}{\sqrt{374}}(5+3i) \\ \frac{1}{\sqrt{22}}(-1-i) & \frac{1}{\sqrt{374}}(12+14i) \end{array}\right]

Subsection EXC: Exercises

C20 Let $A = \left[\begin{array}{ccc} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 2 \end{array}\right]$ and $B = \left[\begin{array}{ccc} -1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{array}\right]$. Verify that AB is nonsingular.
Contributed by Chris Black

C40 Solve the system of equations below using the inverse of a matrix.

\begin{aligned}
x_{1} + x_{2} + 3x_{3} + x_{4} &= 5\\
-2x_{1} - x_{2} - 4x_{3} - x_{4} &= -7\\
x_{1} + 4x_{2} + 10x_{3} + 2x_{4} &= 9\\
-2x_{1} - 4x_{3} + 5x_{4} &= 9
\end{aligned}

 
Contributed by Robert Beezer Solution [740]

M10 Find values of x, y, and z so that the matrix $A = \left[\begin{array}{ccc} 1 & 2 & x \\ 3 & 0 & y \\ 1 & 1 & z \end{array}\right]$ is invertible.
Contributed by Chris Black Solution [741]

M11 Find values of x, y, and z so that the matrix $A = \left[\begin{array}{ccc} 1 & x & 1 \\ 1 & y & 4 \\ 0 & z & 5 \end{array}\right]$ is singular.
Contributed by Chris Black Solution [742]

M15 If A and B are n × n matrices, A is nonsingular, and B is singular, show directly that AB is singular, without using Theorem NPNT.  
Contributed by Chris Black Solution [743]

M20 Construct an example of a 4 × 4 unitary matrix.  
Contributed by Robert Beezer Solution [741]

M80 Matrix multiplication interacts nicely with many operations. But not always with transforming a matrix to reduced row-echelon form. Suppose that A is an m × n matrix and B is an n × p matrix. Let P be a matrix that is row-equivalent to A and in reduced row-echelon form, Q be a matrix that is row-equivalent to B and in reduced row-echelon form, and let R be a matrix that is row-equivalent to AB and in reduced row-echelon form. Is PQ = R? (In other words, with nonstandard notation, is \text{rref}(A)\text{rref}(B) = \text{rref}(AB)?)

Construct a counterexample to show that, in general, this statement is false. Then find a large class of matrices where if A and B are in the class, then the statement is true.  
Contributed by Mark Hamrick Solution [743]

T10 Suppose that Q and P are unitary matrices of size n. Prove that QP is a unitary matrix.  
Contributed by Robert Beezer

T11 Prove that Hermitian matrices (Definition HM) have real entries on the diagonal. More precisely, suppose that A is a Hermitian matrix of size n. Then $[A]_{ii} \in \mathbb{R}$, $1 \le i \le n$.
Contributed by Robert Beezer

T12 Suppose that we are checking if a square matrix of size n is unitary. Show that a straightforward application of Theorem CUMOS requires the computation of $n^{2}$ inner products when the matrix is unitary, and fewer when the matrix is not unitary. Then show that this maximum number of inner products can be reduced to $\frac{1}{2}n(n+1)$ in light of Theorem IPAC.
Contributed by Robert Beezer

Subsection SOL: Solutions

C40 Contributed by Robert Beezer Statement [737]
The coefficient matrix and vector of constants for the system are

\left[\begin{array}{cccc} 1 & 1 & 3 & 1 \\ -2 & -1 & -4 & -1 \\ 1 & 4 & 10 & 2 \\ -2 & 0 & -4 & 5 \end{array}\right]
\qquad
b = \left[\begin{array}{c} 5 \\ -7 \\ 9 \\ 9 \end{array}\right]

{A}^{−1} can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is

A^{-1}b = \left[\begin{array}{cccc} 38 & 18 & -5 & -2 \\ 96 & 47 & -12 & -5 \\ -39 & -19 & 5 & 2 \\ -16 & -8 & 2 & 1 \end{array}\right]\left[\begin{array}{c} 5 \\ -7 \\ 9 \\ 9 \end{array}\right] = \left[\begin{array}{c} 1 \\ -2 \\ 1 \\ 3 \end{array}\right]

M20 Contributed by Robert Beezer Statement [738]
The 4 × 4 identity matrix, {I}_{4}, would be one example (Definition IM). Any of the 23 other rearrangements of the columns of {I}_{4} would be a simple, but less trivial, example. See Example UPM.

M10 Contributed by Chris Black Statement [737]
There are an infinite number of possible answers. We want to find a vector $\left[\begin{array}{c} x \\ y \\ z \end{array}\right]$ so that the set

S = \left\{\left[\begin{array}{c} 1 \\ 3 \\ 1 \end{array}\right],\ \left[\begin{array}{c} 2 \\ 0 \\ 1 \end{array}\right],\ \left[\begin{array}{c} x \\ y \\ z \end{array}\right]\right\}

is a linearly independent set. We need a vector not in the span of the first two columns, which geometrically means that we need it to not be in the same plane as the first two columns of A. We can choose any values we want for x and y, and then choose a value of z that makes the three vectors independent.

I will (arbitrarily) choose x = 1, y = 1. Then, we have

\begin{aligned}
A = \left[\begin{array}{ccc} 1 & 2 & 1 \\ 3 & 0 & 1 \\ 1 & 1 & z \end{array}\right]
&\xrightarrow{\text{RREF}}
\left[\begin{array}{ccc} \boxed{1} & 0 & 2z-1 \\ 0 & \boxed{1} & 1-z \\ 0 & 0 & 4-6z \end{array}\right]
\end{aligned}

which is invertible if and only if $4 - 6z \neq 0$. Thus, we can choose any value as long as $z \neq \frac{2}{3}$, so we choose z = 0, and we have found a matrix $A = \left[\begin{array}{ccc} 1 & 2 & 1 \\ 3 & 0 & 1 \\ 1 & 1 & 0 \end{array}\right]$ that is invertible.

M11 Contributed by Chris Black Statement [738]
There are an infinite number of possible answers. We need the set of vectors

S = \left\{\left[\begin{array}{c} 1 \\ 1 \\ 0 \end{array}\right],\ \left[\begin{array}{c} x \\ y \\ z \end{array}\right],\ \left[\begin{array}{c} 1 \\ 4 \\ 5 \end{array}\right]\right\}

to be linearly dependent. One way to do this by inspection is to have $\left[\begin{array}{c} x \\ y \\ z \end{array}\right] = \left[\begin{array}{c} 1 \\ 4 \\ 5 \end{array}\right]$. Thus, if we let x = 1, y = 4, z = 5, then the matrix $A = \left[\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 4 & 4 \\ 0 & 5 & 5 \end{array}\right]$ is singular.

M15 Contributed by Chris Black Statement [738]
If B is singular, then there exists a nonzero vector x so that $x \in \mathcal{N}(B)$. Thus, Bx = 0, so A(Bx) = (AB)x = 0, and so $x \in \mathcal{N}(AB)$. Since the null space of AB is not trivial, AB is a singular matrix.

M80 Contributed by Robert Beezer Statement [738]
Take

\begin{aligned}
A & = \left[\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right] & B & = \left[\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right]
\end{aligned}

Then A is already in reduced row-echelon form, and by swapping rows, B row-reduces to A. So the product of the reduced row-echelon forms of A and B is AA = A ≠ O. However, the product AB is the 2 × 2 zero matrix, which is in reduced row-echelon form, and not equal to AA. When you get there, Theorem PEEF or Theorem EMDRO might shed some light on why we would not expect this statement to be true in general.

If A and B are nonsingular, then AB is nonsingular (Theorem NPNT), and all three matrices A, B and AB row-reduce to the identity matrix (Theorem NMRRI). By Theorem MMIM, the desired relationship is true.