8.6 Inverse of a Matrix

INTRODUCTION

The concept of the determinant of an n × n, or square, matrix will play an important role in this and the following section.

8.6.1 Finding the Inverse

In the real number system, if a is a nonzero number, then there exists a number b such that ab = ba = 1. The number b is called the multiplicative inverse of the number a and is denoted by a−1. For a square matrix A it is also important to know whether we can find another square matrix B of the same order such that AB = BA = I. We have the following definition.

DEFINITION 8.6.1 Inverse of a Matrix

Let A be an n × n matrix. If there exists an n × n matrix B such that

AB = BA = I, (1)

where I is the n × n identity, then the matrix A is said to be nonsingular or invertible. The matrix B is said to be the inverse of A.

For example, the matrix A = is nonsingular or invertible since the matrix B = is its inverse. To verify this, observe that

and

Unlike the real number system, where every nonzero number a has a multiplicative inverse, not every nonzero n × n matrix A has an inverse.

EXAMPLE 1 Matrix with No Inverse

The matrix A = has no multiplicative inverse. To see this, suppose B = . Then

Inspection of the last matrix shows that it is impossible to obtain the 2 × 2 identity matrix I, because there is no way of selecting b11, b12, b21, and b22 to get 1 as the entry in the second row and second column. We conclude that the matrix A = has no inverse.

An n × n matrix that has no inverse is called singular. If A is nonsingular, its inverse is denoted by B = A−1.

Important.

Note that the symbol −1 in the notation A−1 is not an exponent; in other words, A−1 is not a reciprocal. Also, if A is nonsingular, its inverse is unique.
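
Condition (1) is easy to check numerically. Below is a minimal Python sketch (the pair A, B is a hypothetical example, not one of the matrices in the text) that multiplies the pair both ways and confirms that each product is the identity:

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical nonsingular matrix and its inverse (entries chosen so the
# floating-point arithmetic is exact).
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[-2.0, 1.0], [1.5, -0.5]]
I = [[1.0, 0.0], [0.0, 1.0]]

# Both orders must give the identity, as in (1): AB = BA = I.
assert matmul(A, B) == I
assert matmul(B, A) == I
```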

Properties

The following theorem lists some properties of the inverse of a matrix.

THEOREM 8.6.1 Properties of the Inverse

Let A and B be nonsingular matrices. Then

(i) (A−1)−1 = A

(ii) (AB)−1 = B−1A−1

(iii) (AT)−1 = (A−1)T.

PROOF of (i):

This part of the theorem states that if A is nonsingular, then its inverse A−1 is also nonsingular and its inverse is A. To prove that A−1 is nonsingular, we have to show that a matrix B can be found such that A−1B = BA−1 = I. But since A is assumed to be nonsingular, we know from (1) that AA−1 = A−1A = I and, equivalently, A−1A = AA−1 = I. The last matrix equation indicates that the required matrix, the inverse of A−1, is B = A. Consequently, (A−1)−1 = A.

Theorem 8.6.1(ii) extends to any finite number of nonsingular matrices:

(A1A2 ⋯ Ak)−1 = Ak−1 ⋯ A2−1A1−1;

that is, the inverse of a product of nonsingular matrices is the product of the inverses in reverse order.
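
Property (ii) can be verified numerically. The Python sketch below uses exact rational arithmetic and two hypothetical 2 × 2 matrices; the helper `inv2` implements the standard 2 × 2 inverse formula that this section derives later as (4):

```python
from fractions import Fraction

def inv2(M):
    """2x2 inverse: adjoint divided by the determinant."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    """Product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

F = Fraction
A = [[F(2), F(1)], [F(5), F(3)]]   # hypothetical, det A = 1
B = [[F(1), F(4)], [F(1), F(5)]]   # hypothetical, det B = 1

# The inverse of the product is the product of the inverses in REVERSE order.
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))
assert inv2(matmul(A, B)) != matmul(inv2(A), inv2(B))
```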

In the discussion that follows we are going to consider two different ways of finding A−1 for a nonsingular matrix A. The first method utilizes determinants, whereas the second employs the elementary row operations introduced in Section 8.2.

Adjoint Method

Recall from (6) of Section 8.4 that the cofactor Cij of the entry aij of an n × n matrix A is Cij = (−1)i+jMij, where Mij is the minor of aij; that is, the determinant of the (n − 1) × (n − 1) submatrix obtained by deleting the ith row and the jth column of A.

DEFINITION 8.6.2 Adjoint Matrix

Let A be an n × n matrix. The matrix that is the transpose of the matrix of cofactors corresponding to the entries of A:

adj A = [C11 C21 ⋯ Cn1]
        [C12 C22 ⋯ Cn2]
        [ ⋮   ⋮       ⋮ ]
        [C1n C2n ⋯ Cnn]

is called the adjoint of A and is denoted by adj A.

The next theorem will give a compact formula for the inverse of a nonsingular matrix in terms of the adjoint of the matrix. However, because of the determinants involved, this method becomes unwieldy for n ≥ 4.

THEOREM 8.6.2 Finding the Inverse

Let A be an n × n matrix. If det A ≠ 0, then

A−1 = (1/det A) adj A. (2)

PROOF:

For brevity we prove the case when n = 3. Observe that

A (adj A) = [a11 a12 a13] [C11 C21 C31]   [det A    0       0  ]
            [a21 a22 a23] [C12 C22 C32] = [  0    det A     0  ]
            [a31 a32 a33] [C13 C23 C33]   [  0      0     det A]   (3)

since det A = ai1Ci1 + ai2Ci2 + ai3Ci3, for i = 1, 2, 3, are the expansions of det A by cofactors along the first, second, and third rows, and

ai1Cj1 + ai2Cj2 + ai3Cj3 = 0, for i ≠ j,

in view of Theorem 8.5.9. Thus, (3) is the same as

A (adj A) = (det A) I

or A((1/det A) adj A) = I. Similarly, it can be shown in exactly the same manner that ((1/det A) adj A)A = I. Hence, by definition, A−1 = (1/det A) adj A.

For future reference we observe in the case of a 2 × 2 nonsingular matrix

A = [a11 a12]
    [a21 a22]

that the cofactors are C11 = a22, C12 = −a21, C21 = −a12, and C22 = a11. In this case,

adj A = [C11 C21]   [ a22  −a12]
        [C12 C22] = [−a21   a11].

It follows from (2) that

A−1 = 1/(a11a22 − a12a21) [ a22  −a12]
                          [−a21   a11].   (4)
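
Formula (4) translates almost line for line into code. A minimal Python sketch, with exact rational arithmetic and a hypothetical matrix for the check:

```python
from fractions import Fraction

def inverse_2x2(A):
    """Inverse of a 2x2 matrix by formula (4): adjoint over determinant."""
    a11, a12 = A[0]
    a21, a22 = A[1]
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular")
    # adj A = [[a22, -a12], [-a21, a11]]
    return [[ a22 / det, -a12 / det],
            [-a21 / det,  a11 / det]]

F = Fraction
A = [[F(1), F(4)], [F(2), F(10)]]   # hypothetical; det A = 10 - 8 = 2
Ainv = inverse_2x2(A)
assert Ainv == [[F(5), F(-2)], [F(-1), F(1, 2)]]
```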

For a 3 × 3 nonsingular matrix

A = [a11 a12 a13]
    [a21 a22 a23]
    [a31 a32 a33]

the cofactors are C11 = a22a33 − a23a32, C12 = −(a21a33 − a23a31), C13 = a21a32 − a22a31, and so on. After the adjoint of A is formed, (2) gives

A−1 = (1/det A) [C11 C21 C31]
                [C12 C22 C32]
                [C13 C23 C33].   (5)
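
The adjoint method as a whole can be sketched in code. The Python fragment below builds the cofactors Cij = (−1)i+jMij, transposes them to form adj A, and divides by det A as in Theorem 8.6.2; the 3 × 3 matrix is a hypothetical example, and exact rational arithmetic is used so the check is not clouded by round-off.

```python
from fractions import Fraction

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adjoint(A):
    """Transpose of the matrix of cofactors C_ij = (-1)^(i+j) M_ij."""
    n = len(A)
    C = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
         for i in range(n)]
    return [[C[j][i] for j in range(n)] for i in range(n)]

def inverse_adjoint(A):
    """A^-1 = (1/det A) adj A, valid when det A != 0 (Theorem 8.6.2)."""
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[entry / d for entry in row] for row in adjoint(A)]

# Hypothetical 3x3 example with det A = 7.
F = Fraction
A = [[F(2), F(0), F(1)],
     [F(1), F(3), F(0)],
     [F(0), F(1), F(1)]]
Ainv = inverse_adjoint(A)

# Sanity check: A * A^-1 should be the 3x3 identity.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
```

Because it evaluates one (n − 1) × (n − 1) determinant per cofactor, this approach grows rapidly in cost, which is why the text turns to a different method for larger matrices.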

EXAMPLE 2 Inverse of a Matrix

Find the inverse of A = .

SOLUTION

Since det A = 10 − 8 = 2, it follows from (4) that

A−1 = .

Check:

EXAMPLE 3 Inverse of a Matrix

Find the inverse of A = .

SOLUTION

Since det A = 12, we can find A−1 from (5). The cofactors corresponding to the entries in A are

From (5) we then obtain

The reader is urged to verify that AA−1 = A−1A = I.

We are now in a position to prove a necessary and sufficient condition for an n × n matrix A to have an inverse.

THEOREM 8.6.3 Nonsingular Matrices and det A

An n × n matrix A is nonsingular if and only if det A ≠ 0.

PROOF:

We shall first prove the sufficiency. Assume det A ≠ 0. Then A is nonsingular, since A−1 can be found from Theorem 8.6.2.

To prove the necessity, we must assume that A is nonsingular and prove that det A ≠ 0. Now from Theorem 8.5.6, AA−1 = A−1A = I implies

(det A)(det A−1) = (det A−1)(det A) = det I.

But since det I = 1 (why?), the product (det A)(det A−1) = 1 ≠ 0 shows that we must have det A ≠ 0.

For emphasis we restate Theorem 8.6.3 in an alternative manner:

An n × n matrix A is singular if and only if det A = 0. (6)
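
Criterion (6) gives a one-line test for singularity. A small Python sketch (the first matrix is a hypothetical example with proportional rows; the second is the coefficient matrix that appears later in Example 7):

```python
def det2(A):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Hypothetical matrix with proportional rows, so det = 0: singular.
assert det2([[2, 3], [4, 6]]) == 0

# Nonsingular matrix (coefficient matrix of Example 7): inverse exists.
assert det2([[2, -9], [3, 6]]) == 39
```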

EXAMPLE 4 Using (6)

The 2 × 2 matrix A = has no inverse; that is, A is singular, because det A = 6 − 6 = 0.

Because of the number of determinants that must be evaluated, the foregoing method for calculating the inverse of a matrix is tedious when the order of the matrix is large. In the case of 3 × 3 or larger matrices, the next method is a particularly efficient means for finding A−1.

Row Operations Method

Although it would take us beyond the scope of this book to prove it, we shall nonetheless use the following result:

THEOREM 8.6.4 Finding the Inverse

If an n × n matrix A can be transformed into the n × n identity I by a sequence of elementary row operations, then A is nonsingular. The same sequence of operations that transforms A into the identity I will also transform I into A−1.

It is convenient to carry out these row operations on A and I simultaneously by means of an n × 2n matrix obtained by augmenting A with the identity I, written (A | I).

The procedure for finding A−1 is then to apply elementary row operations to (A | I) until it has the form (I | A−1); the matrix appearing to the right of the bar is the inverse.

EXAMPLE 5 Inverse by Elementary Row Operations

Find the inverse of A = .

SOLUTION

We shall use the same notation as we did in Section 8.2 when we reduced an augmented matrix to reduced row-echelon form:

Since I appears to the left of the vertical line, we conclude that the matrix to the right of the line is

A−1 = .

If row reduction of (A | I) leads to a matrix of the form (B | C), where the matrix B contains a row of zeros, then necessarily A is singular. Since further reduction of B always yields another matrix with a row of zeros, we can never transform A into I.
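
Theorem 8.6.4 and the singularity test above combine into one routine: row-reduce (A | I), then either read A−1 off the right half or report failure when a zero row appears on the left. The following Python sketch is a hypothetical implementation using exact rational arithmetic.

```python
from fractions import Fraction

def inverse_row_ops(A):
    """Invert A by row-reducing (A | I) to (I | A^-1), per Theorem 8.6.4.
    Returns None when A is singular (the left half acquires a zero row)."""
    n = len(A)
    # Augment A with the n x n identity: the n x 2n matrix (A | I).
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if j == i else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no pivot: A reduces to a matrix with a zero row
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]  # scale the pivot entry to 1
        for r in range(n):
            if r != col:  # eliminate the column entries above and below
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # right half of (I | A^-1)

# A singular matrix is detected rather than inverted.
assert inverse_row_ops([[1, 2], [2, 4]]) is None

# The coefficient matrix of Example 7; compare with formula (4).
Ainv = inverse_row_ops([[2, -9], [3, 6]])
assert Ainv == [[Fraction(6, 39), Fraction(9, 39)],
                [Fraction(-3, 39), Fraction(2, 39)]]
```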

EXAMPLE 6 A Singular Matrix

The matrix A = has no inverse, since

Since the matrix to the left of the vertical bar has a row of zeros, we can stop at this point and conclude that A is singular.

8.6.2 Using the Inverse to Solve Systems

A system of m linear equations in n variables x1, x2, . . ., xn,

a11x1 + a12x2 + ⋯ + a1nxn = b1
a21x1 + a22x2 + ⋯ + a2nxn = b2
  ⋮
am1x1 + am2x2 + ⋯ + amnxn = bm,   (7)

can be written compactly as a matrix equation AX = B, where

A = [a11 a12 ⋯ a1n]      X = [x1]      B = [b1]
    [a21 a22 ⋯ a2n]          [x2]          [b2]
    [ ⋮   ⋮       ⋮ ]          [⋮ ]          [⋮ ]
    [am1 am2 ⋯ amn]          [xn]          [bm]

Special Case

Let us suppose now that m = n in (7) so that the coefficient matrix A is n × n. In particular, if A is nonsingular, then the system AX = B can be solved by multiplying both sides of the equation by A−1. From A−1(AX) = A−1B, we get (A−1A)X = A−1B. Since A−1A = I and IX = X, we have

X = A−1B. (8)

EXAMPLE 7 Using (8) to Solve a System

Use the inverse of the coefficient matrix to solve the system

2x1 − 9x2 = 15

3x1 + 6x2 = 16.

SOLUTION

The given system can be written as

[2 −9] [x1]   [15]
[3  6] [x2] = [16].

Since det A = (2)(6) − (−9)(3) = 39 ≠ 0, the coefficient matrix is nonsingular. Consequently, from (4) we get

A−1 = 1/39 [ 6  9]
           [−3  2].

Using (8) it follows that

X = A−1B = 1/39 [ 6  9] [15] = 1/39 [234] = [  6 ]
                [−3  2] [16]        [−13]   [−1/3],

and so x1 = 6 and x2 = −1/3.
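
The arithmetic of Example 7 can be confirmed with exact fractions; a short Python sketch:

```python
from fractions import Fraction as F

A = [[F(2), F(-9)], [F(3), F(6)]]   # coefficient matrix
B = [F(15), F(16)]                  # input vector

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert detA == 39

# A^-1 by formula (4): adjoint over determinant.
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA,  A[0][0] / detA]]

# X = A^-1 B, as in (8).
X = [sum(Ainv[i][k] * B[k] for k in range(2)) for i in range(2)]
assert X == [F(6), F(-1, 3)]
```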

EXAMPLE 8 Using (8) to Solve a System

Use the inverse of the coefficient matrix to solve the system

SOLUTION

We found the inverse of the coefficient matrix

A =

in Example 5. Thus, (8) gives

Consequently, x1 = 19, x2 = 62, and x3 = −36.

Uniqueness

When det A ≠ 0, the solution of the system AX = B is unique. To see this, suppose that X1 and X2 are two different solution vectors. Then AX1 = B and AX2 = B imply AX1 = AX2. Since A is nonsingular, A−1 exists, and so A−1(AX1) = A−1(AX2) and (A−1A)X1 = (A−1A)X2. This gives IX1 = IX2, or X1 = X2, which contradicts our assumption that X1 and X2 were different solution vectors.

Homogeneous Systems

A homogeneous system of equations can be written AX = 0. Recall that a homogeneous system always possesses the trivial solution X = 0 and possibly an infinite number of solutions. In the next theorem we shall see that a homogeneous system of n equations in n variables possesses only the trivial solution when A is nonsingular.

THEOREM 8.6.5 Trivial Solution Only

A homogeneous system of n linear equations in n variables AX = 0 has only the trivial solution if and only if A is nonsingular.

PROOF:

We prove the sufficiency part of the theorem. Suppose A is nonsingular. Then by (8), we have the unique solution X = A−1 0 = 0.

The next theorem will answer the question: When does a homogeneous system of n linear equations in n variables possess a nontrivial solution? Bear in mind that if a homogeneous system has one nontrivial solution, it must have an infinite number of solutions.

THEOREM 8.6.6 Existence of Nontrivial Solutions

A homogeneous system of n linear equations in n variables AX = 0 has a nontrivial solution if and only if A is singular.

In view of Theorem 8.6.6, we can conclude that a homogeneous system of n linear equations in n variables AX = 0 possesses

  • only the trivial solution if and only if det A ≠ 0, and
  • a nontrivial solution if and only if det A = 0.
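
The two bullet points amount to a determinant test. A minimal Python sketch for the 2 × 2 case (the singular matrix is a hypothetical example; the nonsingular one is the coefficient matrix of Example 7):

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def homogeneous_solution_type(A):
    """Classify the solutions of AX = 0 for a 2x2 coefficient matrix A."""
    return "trivial only" if det2(A) != 0 else "nontrivial solutions"

assert homogeneous_solution_type([[2, -9], [3, 6]]) == "trivial only"
assert homogeneous_solution_type([[1, 2], [2, 4]]) == "nontrivial solutions"
```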

The last result will be put to use in Section 8.8.

REMARKS

(i) As a practical means of solving n linear equations in n variables, the use of an inverse matrix offers few advantages over the method of Section 8.2. However, in applications we sometimes need to solve a system AX = B several times; that is, we need to examine the solutions of the system corresponding to the same coefficient matrix A but different input vectors B. In this case, the single calculation of A−1 enables us to obtain these solutions quickly through the matrix multiplication A−1B.
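
Remark (i) in code: invert A once, and each additional input vector B then costs only a single matrix-vector product. The matrix and vectors below are hypothetical examples.

```python
from fractions import Fraction as F

# Hypothetical coefficient matrix; inverted once, up front.
A = [[F(1), F(1)], [F(1), F(2)]]     # det A = 1
Ainv = [[F(2), F(-1)], [F(-1), F(1)]]

def solve(B):
    """Per-input cost is one matrix-vector product: X = A^-1 B."""
    return [sum(Ainv[i][k] * B[k] for k in range(2)) for i in range(2)]

# Same coefficient matrix A, several different input vectors B.
assert solve([F(3), F(5)]) == [F(1), F(2)]
assert solve([F(0), F(1)]) == [F(-1), F(1)]
assert solve([F(2), F(2)]) == [F(2), F(0)]
```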

(ii) In Definition 8.6.1 we saw that if A is an n × n matrix and if there exists another n × n matrix B that commutes with A such that

AB = I and BA = I, (9)

then B is the inverse of A. Although matrix multiplication is in general not commutative, the condition in (9) can be relaxed somewhat in this sense: If we find an n × n matrix B for which AB = I, then it can be proved that BA = I as well, and so B is the inverse of A. As a consequence of this result, in the subsequent sections of this chapter if we wish to prove that a certain matrix B is the inverse of a given matrix A, it will suffice to show only that AB = I. We need not demonstrate that B commutes with A to give I.

8.6 Exercises Answers to selected odd-numbered problems begin on page ANS-18.

8.6.1 Finding the Inverse

In Problems 1 and 2, verify that the matrix B is the inverse of the matrix A.

In Problems 3–14, use Theorem 8.6.3 to determine whether the given matrix is singular or nonsingular. If it is nonsingular, use Theorem 8.6.2 to find the inverse.

In Problems 15−26, use Theorem 8.6.4 to find the inverse of the given matrix or show that no inverse exists.

In Problems 27 and 28, use the given matrices to find (AB)−1.

  29. If A−1 = , what is A?
  30. If A is nonsingular, then (AT)−1 = (A−1)T. Verify this for A = .
  31. Find a value of x such that the matrix A = is its own inverse.
  32. The rotation matrix

    M =

    played an important role in Problems 45–48 of Exercises 8.1. Find M−1. In the context of those earlier problems, what does M−1 represent?

  33. A nonsingular matrix A is said to be orthogonal if A−1 = AT.

    (a) Verify that the matrix in Problem 32 is orthogonal.

    (b) Verify that A = is an orthogonal matrix.

  34. Show that if A is an orthogonal matrix (see Problem 33), then det A = ±1.
  35. Suppose A and B are nonsingular matrices. Then show that AB is nonsingular.
  36. Suppose A and B are matrices and that either A or B is singular. Then show that AB is singular.
  37. Suppose A is a nonsingular matrix. Then show that
  38. Suppose . Then show that either or A is singular.
  39. Suppose A and B are matrices, A is nonsingular, and . Then show that .
  40. Suppose A and B are matrices, A is nonsingular, and . Then show that .
  41. Suppose A and B are nonsingular matrices. Is necessarily nonsingular?
  42. Suppose A is a nonsingular matrix. Then show that is nonsingular.
  43. Suppose A and B are nonzero matrices and Then show that both A and B are singular.
  44. Consider the 3 × 3 diagonal matrix

    A = .

    Determine conditions such that A is nonsingular. If A is nonsingular, find A−1. Generalize your results to an n × n diagonal matrix.

8.6.2 Using the Inverse to Solve Systems

In Problems 45−52, use an inverse matrix to solve the given system of equations.















In Problems 53 and 54, write the system in the form AX = B. Use X = A−1B to solve the system for each matrix B.

  53. 7x1 − 2x2 = b1

    3x1 − 2x2 = b2,



In Problems 55–58, without solving, determine whether the given homogeneous system of equations has only the trivial solution or a nontrivial solution.







  59. The system of equations for the currents i1, i2, and i3 in the network shown in FIGURE 8.6.1 is

    where Rk and Ek, k = 1, 2, 3, are constants.

    (a) Express the system as a matrix equation AX = B.

    (b) Show that the coefficient matrix A is nonsingular.

    (c) Use X = A−1B to solve for the currents.

    An electric network consists of three parallel circuits with the following components: battery E subscript 1 and resistor R subscript 1, battery E subscript 2, and resistor R subscript 2, battery E subscript 3 and resistor R subscript 3. The current i subscript 1 flows downward from the battery E subscript 1 through the resistor R subscript 1; the current i subscript 2 flows downward from the battery E subscript 2 through the resistor R subscript 2; the current i subscript 3 flows downward from the battery E subscript 3 through the resistor R subscript 3. The battery E subscript 2 and resistor R subscript 2 are in between the 2 nodes.

    FIGURE 8.6.1 Network in Problem 59

  60. Consider the square plate shown in FIGURE 8.6.2, with the temperatures as indicated on each side. Under some circumstances it can be shown that the approximate temperatures u1, u2, u3, and u4 at the points P1, P2, P3, and P4, respectively, are given by

    (a) Show that the above system can be written as the matrix equation

    (b) Solve the system in part (a) by finding the inverse of the coefficient matrix.

    A square plate is divided by 2 equidistant vertical dotted lines and 2 horizontal equidistant dotted lines into nine equal parts. The intersection of the two vertical lines with the two horizontal lines are indicated by four dots and labeled as follows: bottom left, P subscript 1; top left, P subscript 2; top right, P subscript 3; bottom right, P subscript 4. The top side of the big square is indicated by an arrow and is labeled u equals 200. The right side of the big square is indicated by an arrow and is labeled u equals 100. The bottom side of the big square is indicated by an arrow and is labeled u equals 100. The left side of the big square is indicated by an arrow and is labeled u equals 100.

    FIGURE 8.6.2 Plate in Problem 60