
Friday, June 13, 2014

Linear Algebra: #12 Elementary Matrices


These are n × n matrices, which we denote by Sij, Si(a), and Sij(c). They are defined so that when an arbitrary n × n matrix A is multiplied on the right by such an S (giving AS), the corresponding elementary column operation is performed on A; and when A is multiplied on the left (giving SA), the corresponding elementary row operation is performed. It is a simple matter to verify that the following matrices are the ones we are looking for.

$$
S_{ij} \;=\;
\begin{pmatrix}
1 &        &        &        &        &        &   \\
  & \ddots &        &        &        &        &   \\
  &        & 0      & \cdots & 1      &        &   \\
  &        & \vdots & \ddots & \vdots &        &   \\
  &        & 1      & \cdots & 0      &        &   \\
  &        &        &        &        & \ddots &   \\
  &        &        &        &        &        & 1
\end{pmatrix}
$$

Here, everything is zero except for the two elements at the positions ij and ji, which have the value 1. Also the diagonal elements are all 1 except for the elements at ii and jj, which are zero.
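
For example, in the 3 × 3 case with i = 1 and j = 2,

$$
S_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},
$$

and multiplying a matrix A on the left by S12 exchanges its first and second rows, while multiplying on the right exchanges its first and second columns.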

Then we have

$$
S_i(a) \;=\;
\begin{pmatrix}
1 &        &   &        &   \\
  & \ddots &   &        &   \\
  &        & a &        &   \\
  &        &   & \ddots &   \\
  &        &   &        & 1
\end{pmatrix}
$$

That is, Si(a) is a diagonal matrix, all of whose diagonal elements are 1 except for the single element at the position ii, which has the value a.
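
For example, with n = 3 and i = 2,

$$
S_2(a) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & 1 \end{pmatrix},
$$

so S2(a)A multiplies the second row of A by a, and AS2(a) multiplies the second column of A by a.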

Finally we have

$$
S_{ij}(c) \;=\;
\begin{pmatrix}
1 &        &        & c      &   \\
  & \ddots &        &        &   \\
  &        & \ddots &        &   \\
  &        &        & \ddots &   \\
  &        &        &        & 1
\end{pmatrix}
\qquad \text{($c$ in row $i$, column $j$)}
$$

So this is again just the n × n identity matrix, but this time we have replaced the zero in the ij-th position with the scalar c. It is an elementary exercise to see that:
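
For example, with n = 3, i = 1, and j = 3,

$$
S_{13}(c) = \begin{pmatrix} 1 & 0 & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
$$

Multiplying A on the left by S13(c) adds c times the third row of A to the first row, while multiplying on the right adds c times the first column of A to the third column.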


Theorem 32
Each of the n × n elementary matrices is regular; that is, they all belong to the group GL(n, F). (In fact, as the next theorem shows, the elementary matrices generate GL(n, F).) Furthermore, for every elementary matrix, the inverse matrix is again elementary.
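
Indeed, the inverses can be written down explicitly (recall that a ≠ 0 in Si(a)):

$$
S_{ij}^{-1} = S_{ij}, \qquad S_i(a)^{-1} = S_i(a^{-1}), \qquad S_{ij}(c)^{-1} = S_{ij}(-c).
$$

Each of these is itself an elementary matrix of the same type.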


Theorem 33
Every matrix in GL(n, F) can be represented as a product of elementary matrices.

Proof

Let A be an arbitrary regular matrix. We have already seen that A can be transformed into a matrix in step form by means of elementary row operations. That is, there is some sequence of elementary matrices S1, . . . , Sp such that the product

$$S_p \cdots S_1 A$$

is an n × n matrix in step form. However, since A is a regular matrix, the number of steps must be equal to n. That is, Sp · · · S1A must be an upper triangular matrix whose diagonal elements are all equal to 1.

$$
S_p \cdots S_1 A \;=\;
\begin{pmatrix}
1 & * & \cdots & * \\
0 & 1 & \cdots & * \\
\vdots &  & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
$$

But now it is obvious that the elements above the diagonal can all be reduced to zero by elementary row operations of the type Sij(c). These row operations can again be realized by multiplying on the left by some further sequence of elementary matrices Sp+1, . . . , Sq. This gives us the matrix equation

$$S_q \cdots S_{p+1} S_p \cdots S_1 A = I_n,$$

or

$$A = S_1^{-1} \cdots S_p^{-1} S_{p+1}^{-1} \cdots S_q^{-1}.$$

Since the inverse of each elementary matrix is itself elementary, we have thus expressed A as a product of elementary matrices.
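
To see the procedure at work, take the small illustrative example

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$

The row operations S21(−3) (subtract 3 times row 1 from row 2), S2(−1/2) (scale row 2 by −1/2), and S12(−2) (subtract 2 times row 2 from row 1) reduce A to I2, so that

$$S_{12}(-2)\, S_2(-\tfrac{1}{2})\, S_{21}(-3)\, A = I_2,
\qquad\text{hence}\qquad
A = S_{21}(3)\, S_2(-2)\, S_{12}(2).$$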

This proof also shows how we can go about programming a computer to calculate the inverse of an invertible matrix. Namely, through the process of Gauss elimination, we convert the given matrix into the identity matrix In. During this process, we keep multiplying together the elementary matrices which represent the respective row operations. In the end, we obtain the inverse matrix

$$A^{-1} = S_q \cdots S_{p+1} S_p \cdots S_1.$$
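
The following short NumPy sketch follows this recipe directly. (The function name is my own choice; a row swap Sij is included in case a pivot happens to be zero, and a small tolerance decides when a pivot counts as zero.)

import numpy as np

def inverse_via_elementary(A):
    """Invert a regular matrix by Gauss elimination, accumulating the
    elementary row operations into E, so that at the end E @ A = I_n
    and therefore E = A^{-1}."""
    A = np.array(A, dtype=float)
    n, m = A.shape
    if n != m:
        raise ValueError("matrix must be square")
    E = np.eye(n)   # running product S_q ... S_1 of elementary matrices
    M = A.copy()    # working copy, reduced step by step to I_n

    for i in range(n):
        # S_ij: swap a nonzero pivot into position ii if necessary
        p = next((r for r in range(i, n) if abs(M[r, i]) > 1e-12), None)
        if p is None:
            raise ValueError("matrix is singular")
        if p != i:
            S = np.eye(n)
            S[[i, p]] = S[[p, i]]
            M, E = S @ M, S @ E
        # S_i(a): scale row i so that the pivot becomes 1
        S = np.eye(n)
        S[i, i] = 1.0 / M[i, i]
        M, E = S @ M, S @ E
        # S_ji(c): clear the remaining entries of column i
        for j in range(n):
            if j != i and M[j, i] != 0.0:
                S = np.eye(n)
                S[j, i] = -M[j, i]
                M, E = S @ M, S @ E
    return E

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inverse_via_elementary(A))                               # [[-2.   1. ] [ 1.5 -0.5]]
print(np.allclose(A @ inverse_via_elementary(A), np.eye(2)))   # True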

We also note that this is the method which can be used to obtain the value of the determinant of a matrix. But first we must see how the determinant of a matrix is defined!
