
Monday, June 9, 2014

Linear Algebra: #9 Invertible Matrices

Let f : V → W be a linear mapping, and let {v1, . . . , vn} ⊂ V and {w1, . . . , wm} ⊂ W be bases for V and W, respectively. Then, as we have seen, the mapping f can be uniquely described by specifying the values of f(vj), for each j = 1, . . . , n. We have
f(v_j) = \sum_{i=1}^{m} a_{ij} w_i, \qquad j = 1, \dots, n
The resulting m × n matrix A = (aij) is the matrix describing f with respect to these given bases.
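Concretely, the j-th column of A lists the coordinates of f(vj) with respect to {w1, . . . , wm}. Here is a minimal numerical sketch of that idea, using numpy over R with made-up images for the basis vectors:

```python
import numpy as np

# Hypothetical map f : R^2 -> R^3, specified by the images of the
# standard basis vectors v_1 = e_1 and v_2 = e_2.
f_v1 = np.array([1.0, 0.0, 2.0])   # f(v_1)
f_v2 = np.array([0.0, 3.0, 1.0])   # f(v_2)

# The matrix of f has these images as its columns: column j = f(v_j).
A = np.column_stack([f_v1, f_v2])

# Applying f to any vector is then just matrix multiplication.
v = np.array([2.0, -1.0])          # v = 2 v_1 - v_2
print(A @ v)                       # [ 2. -3.  3.] = 2 f(v_1) - f(v_2)
```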


A particular case 
This is the case that V = W. So we have the linear mapping f : V → V. But now we only need a single basis {v1, . . . , vn} ⊂ V, serving both as the domain basis and the codomain basis. Thus the matrix for f with respect to this single basis is determined by the specifications
f(v_j) = \sum_{i=1}^{n} a_{ij} v_i, \qquad j = 1, \dots, n

A trivial example
For example, one particular case is that we have the identity mapping
f = id : V → V
Thus f(v) = v, for all v ∈ V. In this case it is obvious that the matrix of the mapping is the n × n identity matrix In.


Regular matrices
Let us now assume that A is some regular n × n matrix. As we have seen in theorem 23, there is an isomorphism f : V → V, such that A is the matrix representing f with respect to the given basis of V. According to theorem 17, the inverse mapping f−1 is also linear, and we have f−1 ∘ f = id. So let f−1 be represented by the matrix B (again with respect to the same basis {v1, . . . , vn}). Then we must have the matrix equation

B · A = In

Or, put another way, in the language of matrix algebra we must have B = A−1. That is, the matrix A is invertible.
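This relationship is easy to check numerically. A quick sketch over R with numpy (the matrix entries are made-up example data):

```python
import numpy as np

# A regular (full-rank) 3x3 matrix over R; hypothetical example data.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# B represents the inverse mapping f^{-1}.
B = np.linalg.inv(A)

print(np.allclose(B @ A, np.eye(3)))   # True: B . A = I_n
print(np.allclose(A @ B, np.eye(3)))   # True: for square matrices the inverse is two-sided
```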


Theorem 24
Every regular matrix is invertible.

Definition
The set of all regular n×n matrices over the field F is denoted GL(n, F).


Theorem 25
GL(n, F) is a group under matrix multiplication. The identity element is the identity matrix.

Proof
We have already seen in an exercise that matrix multiplication is associative. The fact that the identity element in GL(n, F) is the identity matrix is clear. By theorem 24, all members of GL(n, F) have an inverse. It only remains to see that GL(n, F) is closed under matrix multiplication. So let A, C ∈ GL(n, F). Then there exist A−1, C−1 ∈ GL(n, F), and C−1 · A−1 is itself an n × n matrix. But then

(C−1 · A−1) · (A · C) = C−1 · (A−1 · A) · C = C−1 · In · C = C−1 · C = In

Therefore, according to the definition of GL(n, F), we must also have AC ∈ GL(n, F).
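The closure argument also shows that (AC)−1 = C−1 · A−1, which can be confirmed numerically. A sketch over R (random Gaussian matrices are regular with probability 1, so we skip an explicit rank check here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two (almost surely) regular matrices in GL(3, R).
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ C)                  # (AC)^{-1}
rhs = np.linalg.inv(C) @ np.linalg.inv(A)   # C^{-1} A^{-1}
print(np.allclose(lhs, rhs))                # True: the product AC is again invertible
```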


Simplifying matrices using multiplication with regular matrices


Theorem 26
Let A be an m × n matrix. Then there exist regular matrices C ∈ GL(m, F) and D ∈ GL(n, F) such that the matrix A' = CAD−1 consists simply of zeros, except possibly for a block in the upper left-hand corner, which is an identity matrix. That is

A' = \begin{pmatrix} I_p & 0 \\ 0 & 0 \end{pmatrix}, \qquad \text{for some } 0 \le p \le \min(m, n)

(Note that A' is also an m × n matrix. That is, it is not necessarily square.)

Proof
A is the representation of a linear mapping f : V → W, with respect to bases {v1, . . . , vn} and {w1, . . . , wm} of V and W, respectively. The idea of the proof is to now find new bases {x1, . . . , xn} ⊂ V and {y1, . . . , ym} ⊂ W, such that the matrix of f with respect to these new bases is as simple as possible.

So to begin with, let us look at ker(f) ⊂ V. It is a subspace of V, so its dimension is at most n. In general, it might be less than n, so let us write dim(ker(f)) = n − p, for some integer 0 ≤ p ≤ n. Therefore we choose a basis for ker(f), and we call it

{xp+1, . . . , xn} ⊂ ker(f) ⊂ V

Using the extension theorem (theorem 12), we extend this to a basis

{x1, . . . , xp, xp+1, . . . , xn}

for V.
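Numerically, this step (a basis of the kernel, extended to a basis of all of V) can be sketched over R with the singular value decomposition; the function name and the tolerance below are our own choices, not from the text:

```python
import numpy as np

def kernel_and_extension(A, tol=1e-10):
    """Return a basis x_1, ..., x_n of R^n (as columns of X) whose
    last n - p vectors span ker(A); tol decides the numerical rank p."""
    _, s, Vt = np.linalg.svd(A)
    p = int(np.sum(s > tol))   # p = n - dim ker(A)
    X = Vt.T                   # columns p+1, ..., n span ker(A)
    return X, p

# Hypothetical 2x3 example of rank 1, so dim ker(A) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
X, p = kernel_and_extension(A)
print(p)                                # 1
print(np.allclose(A @ X[:, p:], 0.0))   # True: the last columns lie in ker(A)
```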

Now at this stage, we look at the images of the vectors {x1, . . . , xp} under f in W. We find that the set {f(x1), . . . , f(xp)} ⊂ W is linearly independent. To see this, let us assume that we have the vector equation

0 = \sum_{i=1}^{p} a_i f(x_i) = f\!\left( \sum_{i=1}^{p} a_i x_i \right)

for some choice of the scalars ai. But that means that the bracketed sum on the right-hand side of the above equation lies in ker(f). However, {xp+1, . . . , xn} is a basis for ker(f). Thus we have

\sum_{i=1}^{p} a_i x_i = \sum_{j=p+1}^{n} b_j x_j

for appropriate choices of scalars bj. Moving everything to one side gives a vanishing linear combination of {x1, . . . , xp, xp+1, . . . , xn}. But this set is a basis for V, so it is itself linearly independent, and therefore we must have ai = 0 and bj = 0 for all possible i and j. In particular, since the ai's are all zero, the set {f(x1), . . . , f(xp)} ⊂ W is linearly independent.

To simplify the notation, let us call f(xi) = yi  for each i = 1, . . . , p. Then we can again use the extension theorem to find a basis

{y1, . . . , yp, yp+1, . . . , ym}

of W.

So now we define the isomorphism g : V → V by the rule
g(xi) = vi, for all i = 1, . . . , n.

Similarly, the isomorphism h : W → W is defined by the rule
h(yj) = wj, for all j = 1, . . . , m.

Let D be the matrix representing the mapping g with respect to the basis {v1, . . . , vn} of V, and also let C be the matrix representing the mapping h with respect to the basis {w1, . . . , wm} of W.

Let us now look at the mapping
h ∘ f ∘ g−1 : V → W.
For each basis vector vi ∈ V, we have

h \circ f \circ g^{-1}(v_i) = h(f(x_i)) = \begin{cases} h(y_i) = w_i, & i = 1, \dots, p \\ h(0) = 0, & i = p + 1, \dots, n \end{cases}

This mapping is therefore represented by a matrix in our simple form, consisting of only zeros, except possibly for an identity-matrix block in the upper left-hand corner. Furthermore, since the composition of linear mappings is represented by the product of the respective matrices, and g−1 is represented by D−1, the mapping h ∘ f ∘ g−1 is represented by the matrix CAD−1. Therefore the matrix A' = CAD−1 must be of the desired form.
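The proof is constructive, so the matrices C and D can actually be computed. A minimal sketch over R, again leaning on the SVD to supply the adapted bases {xi} and {yj} (the function name is ours, and this is one possible realization of the construction, not the only one):

```python
import numpy as np

def normal_form(A, tol=1e-10):
    """Return C, D, p with C @ A @ inv(D) equal to the block matrix
    [[I_p, 0], [0, 0]], mirroring the construction in theorem 26."""
    U, s, Vt = np.linalg.svd(A)
    p = int(np.sum(s > tol))     # p = rank(A), so dim ker(A) = n - p
    X = Vt.T.copy()              # columns x_1, ..., x_n; the last n - p span ker(A)
    X[:, :p] /= s[:p]            # rescale so that A x_i = y_i = u_i exactly
    Y = U                        # y_1, ..., y_p = A x_i, extended to a basis of W
    C = np.linalg.inv(Y)         # C represents h, with h(y_j) = w_j
    D = np.linalg.inv(X)         # D represents g, with g(x_i) = v_i
    return C, D, p

# Hypothetical 3x5 example of rank 2.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 5))
C, D, p = normal_form(A)
A_prime = C @ A @ np.linalg.inv(D)
print(p)                         # 2
print(np.round(A_prime, 8))      # I_2 block in the upper left, zeros elsewhere
```

Note that A_prime comes out as a 3 × 5 matrix, just as the theorem promises: the normal form has the same shape as A.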
