# Linear Algebra: #9 Invertible Matrices

Let f : **V** → **W** be a linear mapping, and let {**v**_1, . . . , **v**_n} ⊂ **V** and {**w**_1, . . . , **w**_m} ⊂ **W** be bases for **V** and **W**, respectively. Then, as we have seen, the mapping f can be uniquely described by specifying the values of f(**v**_j), for each j = 1, . . . , n. We have

$$f(\mathbf{v}_j) = \sum_{i=1}^{m} a_{ij}\,\mathbf{w}_i,$$

and the resulting matrix A = (a_{ij}) is the matrix describing f with respect to these *given bases*.
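As a concrete numerical sketch (using Python with NumPy; the specific map below is an invented illustration, not from the text): the j-th column of A is the coordinate vector of f(**v**_j). With the standard bases of R^3 and R^2 this becomes:

```python
import numpy as np

# A hypothetical linear map f : R^3 -> R^2 (illustration only).
def f(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z - y])

# Column j of the matrix A is f(e_j), the image of the j-th
# standard basis vector, expressed in the standard basis of R^2.
basis = np.eye(3)
A = np.column_stack([f(e) for e in basis])

# The matrix now reproduces the mapping on any vector.
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(A @ v, f(v))
```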

**A particular case**

This is the case that **V** = **W**. So we have the linear mapping f : **V** → **V**. But now we only need a single basis for **V**. That is, {**v**_1, . . . , **v**_n} ⊂ **V** is the only basis we need. Thus the matrix for f with respect to this single basis is determined by the specifications

$$f(\mathbf{v}_j) = \sum_{i=1}^{n} a_{ij}\,\mathbf{v}_i, \quad \text{for each } j = 1, \ldots, n.$$

**A trivial example**

For example, one particular case is the identity mapping f = id : **V** → **V**. Thus f(**v**) = **v**, for all **v** ∈ **V**. In this case it is obvious that the matrix of the mapping is the n × n identity matrix I_n.

**Regular matrices**

Let us now assume that A is some regular n × n matrix. As we have seen in theorem 23, there is an isomorphism f : **V** → **V** such that A is the matrix representing f with respect to the given basis of **V**. According to theorem 17, the inverse mapping f^(−1) is also linear, and we have f^(−1) ◦ f = id. So let f^(−1) be represented by the matrix B (again with respect to the same basis {**v**_1, . . . , **v**_n}). Then we must have the matrix equation

$$B \cdot A = I_n.$$

Or, put another way, in the multiplication system of matrix algebra we must have B = A^(−1). That is, the matrix A is *invertible*.
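A quick numerical check of this relationship (a NumPy sketch with an arbitrarily chosen regular matrix, not an example from the text):

```python
import numpy as np

# An invertible (regular) 3x3 matrix, chosen for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# B represents the inverse mapping f^{-1}.
B = np.linalg.inv(A)

# Composing the mappings gives the identity: B.A = I_n (and A.B = I_n).
assert np.allclose(B @ A, np.eye(3))
assert np.allclose(A @ B, np.eye(3))
```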

**Theorem 24** Every regular matrix is invertible.

**Definition**

The set of all regular n × n matrices over the field F is denoted GL(n, F).

**Theorem 25** GL(n, F) is a group under matrix multiplication. The identity element is the identity matrix.

*Proof*

We have already seen in an exercise that matrix multiplication is associative. The fact that the identity element in GL(n, F) is the identity matrix is clear. By definition, all members of GL(n, F) have an inverse. It only remains to see that GL(n, F) is closed under matrix multiplication. So let A, C ∈ GL(n, F). Then there exist A^(−1), C^(−1) ∈ GL(n, F), and C^(−1) · A^(−1) is itself an n × n matrix. But then

$$(C^{-1}A^{-1})(AC) = C^{-1}(A^{-1}A)C = C^{-1}I_nC = C^{-1}C = I_n.$$

Therefore, according to the definition of GL(n, F), we must also have AC ∈ GL(n, F).
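The closure computation in the proof can be verified numerically (a NumPy sketch over F = R; the random matrices are merely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regular 3x3 real matrices (random matrices are invertible with
# probability 1; we check the determinants to be sure).
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
assert abs(np.linalg.det(A)) > 1e-9 and abs(np.linalg.det(C)) > 1e-9

# Closure: AC is again invertible, and its inverse is C^{-1} A^{-1},
# exactly as in the proof of theorem 25.
AC_inv = np.linalg.inv(C) @ np.linalg.inv(A)
assert np.allclose(AC_inv @ (A @ C), np.eye(3))
```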

**Simplifying matrices using multiplication with regular matrices**

**Theorem 26** Let A be an m × n matrix. Then there exist regular matrices C ∈ GL(m, F) and D ∈ GL(n, F) such that the matrix A′ = CAD^(−1) consists simply of zeros, except possibly for a block in the upper left-hand corner, which is an identity matrix. That is,

$$A' = \begin{pmatrix} I_p & 0 \\ 0 & 0 \end{pmatrix}$$

for some 0 ≤ p ≤ min(m, n). (Note that A′ is also an m × n matrix. That is, it is not necessarily square.)

*Proof*

A is the representation of a linear mapping f : **V** → **W** with respect to bases {**v**_1, . . . , **v**_n} and {**w**_1, . . . , **w**_m} of **V** and **W**, respectively. The idea of the proof is now to find *new* bases {**x**_1, . . . , **x**_n} ⊂ **V** and {**y**_1, . . . , **y**_m} ⊂ **W**, such that the matrix of f with respect to these new bases is as simple as possible.

So to begin with, let us look at ker(f) ⊂ **V**. It is a subspace of **V**, so its dimension is at most n. In general, it might be less than n, so let us write dim(ker(f)) = n − p, for some integer 0 ≤ p ≤ n. We therefore choose a basis for ker(f), and we call it

{**x**_(p+1), . . . , **x**_n} ⊂ ker(f) ⊂ **V**.

Using the extension theorem (theorem 12), we extend this to a basis

{**x**_1, . . . , **x**_p, **x**_(p+1), . . . , **x**_n}

for **V**.
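This step (finding a kernel basis and extending it to a basis of **V**) can be sketched numerically. One convenient computational route, not the proof's own construction, is the singular value decomposition: the rows of Vᵀ are an orthonormal basis of R^n in which the last n − p rows span ker(A). The matrix below is an invented example:

```python
import numpy as np

# A hypothetical 2x4 matrix of rank 2, so dim ker(f) = n - p = 4 - 2 = 2.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

# SVD: A = U S V^T.  The rows of Vt form an orthonormal basis of R^4;
# the last n - p rows span ker(A), and the first p rows complete them
# to a full basis, mirroring the extension theorem.
U, s, Vt = np.linalg.svd(A)
p = int(np.sum(s > 1e-10))         # rank of A
kernel_basis = Vt[p:]              # x_{p+1}, ..., x_n
extension = Vt[:p]                 # x_1, ..., x_p

assert p == 2
assert np.allclose(A @ kernel_basis.T, 0)   # these vectors lie in ker(f)
assert abs(np.linalg.det(Vt)) > 1e-10       # all n vectors form a basis
```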

Now at this stage, we look at the images of the vectors {**x**_1, . . . , **x**_p} under f in **W**. We find that the set {f(**x**_1), . . . , f(**x**_p)} ⊂ **W** is linearly independent. To see this, let us assume that we have the vector equation

$$\mathbf{0} = \sum_{i=1}^{p} a_i f(\mathbf{x}_i) = f\left(\sum_{i=1}^{p} a_i \mathbf{x}_i\right)$$

for some choice of the scalars a_i. But that means that the summation in brackets on the right-hand side of the above equation lies in ker(f). However, {**x**_(p+1), . . . , **x**_n} is a basis for ker(f). Thus we have

$$\sum_{i=1}^{p} a_i \mathbf{x}_i = \sum_{j=p+1}^{n} b_j \mathbf{x}_j$$

for appropriate choices of scalars b_j. But {**x**_1, . . . , **x**_p, **x**_(p+1), . . . , **x**_n} is a basis for **V**. Thus it is itself linearly independent, and therefore we must have a_i = 0 and b_j = 0 for all possible i and j. In particular, since the a_i are all zero, the set {f(**x**_1), . . . , f(**x**_p)} ⊂ **W** is linearly independent.

To simplify the notation, let us call f(**x**_i) = **y**_i for each i = 1, . . . , p. Then we can again use the extension theorem to find a basis

{**y**_1, . . . , **y**_p, **y**_(p+1), . . . , **y**_m}

of **W**.

So now we define the isomorphism g : **V** → **V** by the rule

g(**x**_i) = **v**_i, for all i = 1, . . . , n.

Similarly, the isomorphism h : **W** → **W** is defined by the rule

h(**y**_j) = **w**_j, for all j = 1, . . . , m.

Let D be the matrix representing the mapping g with respect to the basis {**v**_1, . . . , **v**_n} of **V**, and let C be the matrix representing the mapping h with respect to the basis {**w**_1, . . . , **w**_m} of **W**.

Let us now look at the mapping

h ◦ f ◦ g^(−1) : **V** → **W**.

For the basis vector **v**_i ∈ **V**, we have

$$(h \circ f \circ g^{-1})(\mathbf{v}_i) = h(f(\mathbf{x}_i)) = \begin{cases} h(\mathbf{y}_i) = \mathbf{w}_i, & \text{for } i = 1, \ldots, p, \\ h(\mathbf{0}) = \mathbf{0}, & \text{for } i = p+1, \ldots, n. \end{cases}$$

This mapping must therefore be represented by a matrix in our simple form, consisting of only zeros, except possibly for a block in the upper left-hand corner which is an identity matrix. Furthermore, the rule that the composition of linear mappings is represented by the product of the respective matrices leads to the conclusion that the matrix A′ = CAD^(−1) must be of the desired form.
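Theorem 26 can also be checked numerically. The sketch below (NumPy, over F = R, with an invented rank-2 matrix) builds regular matrices C and D^(−1) from the SVD rather than from the proof's basis construction, and verifies that CAD^(−1) has the promised block form:

```python
import numpy as np

# An illustrative 3x4 matrix A of rank 2 (row 3 = row 1 + row 2).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

# SVD: A = U S V^T, so (E U^T) A V = E S, which becomes the block
# identity once E rescales the first p rows by 1/s_i.
U, s, Vt = np.linalg.svd(A)
p = int(np.sum(s > 1e-10))

E = np.eye(3)
E[:p, :p] = np.diag(1.0 / s[:p])
C = E @ U.T          # regular m x m matrix
D_inv = Vt.T         # regular n x n matrix (D = V^T)

A_prime = C @ A @ D_inv

# A' is zero except for an identity block in the upper left corner.
expected = np.zeros((3, 4))
expected[:p, :p] = np.eye(p)
assert np.allclose(A_prime, expected, atol=1e-10)
```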
