# Linear Algebra: #12 Elementary Matrices

The elementary matrices come in three types, denoted S_{ij}, S_{i}(a), and S_{ij}(c). They are such that when any n × n matrix A is multiplied on the *right* by such an S, the given elementary column operation is performed on the matrix A. Furthermore, if the matrix A is multiplied on the *left* by such an elementary matrix, then the given row operation on the matrix is performed. It is a simple matter to verify that the following matrices are the ones we are looking for. First,

$$S_{ij} = \begin{pmatrix}
1 & & & & & & \\
 & \ddots & & & & & \\
 & & 0 & \cdots & 1 & & \\
 & & \vdots & \ddots & \vdots & & \\
 & & 1 & \cdots & 0 & & \\
 & & & & & \ddots & \\
 & & & & & & 1
\end{pmatrix}$$

Here, everything is zero except for the two elements at the positions ij and ji, which have the value 1. Also the diagonal elements are all 1, except for the elements at ii and jj, which are zero.

Then we have

$$S_{i}(a) = \begin{pmatrix}
1 & & & & \\
 & \ddots & & & \\
 & & a & & \\
 & & & \ddots & \\
 & & & & 1
\end{pmatrix}$$

That is, S_{i}(a) is a diagonal matrix, all of whose diagonal elements are 1 except for the single element at the position ii, which has the value a.

Finally we have

$$S_{ij}(c) = \begin{pmatrix}
1 & & c & & \\
 & \ddots & & & \\
 & & 1 & & \\
 & & & \ddots & \\
 & & & & 1
\end{pmatrix}$$

So this is again just the n × n identity matrix, but this time we have replaced the zero in the ij-th position with the scalar c. It is an elementary exercise to see that:

**Theorem 32** Each of the n × n elementary matrices is regular, and for every elementary matrix, the inverse matrix is again elementary. Thus we can go on to prove that the elementary matrices generate the group GL(n, F).
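To make this concrete, here is a minimal sketch in plain Python. The function names `S_swap`, `S_scale`, and `S_add` are my own labels for S_{ij}, S_{i}(a), and S_{ij}(c), and indices are 0-based; the assertions check that left multiplication performs the row operation, right multiplication performs the column operation, and each inverse is again elementary.

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def S_swap(n, i, j):
    """S_ij: identity with the ii, jj entries zero and ij, ji entries 1."""
    S = identity(n)
    S[i][i] = S[j][j] = 0.0
    S[i][j] = S[j][i] = 1.0
    return S

def S_scale(n, i, a):
    """S_i(a): identity with the ii entry replaced by a."""
    S = identity(n)
    S[i][i] = a
    return S

def S_add(n, i, j, c):
    """S_ij(c): identity with c in the ij-th position."""
    S = identity(n)
    S[i][j] = c
    return S

A = [[1.0, 2.0], [3.0, 4.0]]

# Left multiplication by S_12 swaps the two rows of A ...
assert matmul(S_swap(2, 0, 1), A) == [[3.0, 4.0], [1.0, 2.0]]
# ... while right multiplication swaps the two columns.
assert matmul(A, S_swap(2, 0, 1)) == [[2.0, 1.0], [4.0, 3.0]]

# The inverses are again elementary:
# S_ij^{-1} = S_ij, S_i(a)^{-1} = S_i(1/a), S_ij(c)^{-1} = S_ij(-c).
assert matmul(S_swap(2, 0, 1), S_swap(2, 0, 1)) == identity(2)
assert matmul(S_scale(2, 0, 4.0), S_scale(2, 0, 0.25)) == identity(2)
assert matmul(S_add(2, 0, 1, 3.0), S_add(2, 0, 1, -3.0)) == identity(2)
```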

**Theorem 33** Every matrix in GL(n, F) can be represented as a product of elementary matrices.

*Proof*

Let A be some arbitrary regular matrix. We have already seen that A can be transformed into a matrix in step form by means of elementary row operations. That is, there is some sequence of elementary matrices S_{1}, . . . , S_{p} such that the product

A^{∗} = S_{p} · · · S_{1} A

is an n × n matrix in step form. However, since A was a regular matrix, the number of steps must be equal to n. That is, A^{∗} must be an upper *triangular* matrix whose diagonal elements are all equal to 1.

But now it is obvious that the elements above the diagonal can all be reduced to zero by elementary row operations of the type S_{ij}(c). These row operations can again be realized by multiplying A^{∗} on the *left* by some further set of elementary matrices S_{p+1}, · · · , S_{q}. This gives us the matrix equation

S_{q} · · · S_{p+1} S_{p} · · · S_{1} A = I_{n},

or

A = S_{1}^{−1} · · · S_{p}^{−1} S_{p+1}^{−1} · · · S_{q}^{−1}.

Since the inverse of each elementary matrix is itself elementary, we have thus expressed A as a product of elementary matrices.
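As a concrete illustration of the proof (this worked example is not from the original text), consider the regular matrix

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}.$$

Here S_{21}(−\tfrac12) subtracts half of row 1 from row 2, S_{1}(\tfrac12) and S_{2}(2) normalize the diagonal, and S_{12}(−\tfrac12) clears the entry above the diagonal, so that

$$S_{12}(-\tfrac12)\, S_{2}(2)\, S_{1}(\tfrac12)\, S_{21}(-\tfrac12)\, A = I_{2},$$

and hence, inverting each factor,

$$A = S_{21}(\tfrac12)\, S_{1}(2)\, S_{2}(\tfrac12)\, S_{12}(\tfrac12)
= \begin{pmatrix} 1 & 0 \\ \tfrac12 & 1 \end{pmatrix}
\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & \tfrac12 \end{pmatrix}
\begin{pmatrix} 1 & \tfrac12 \\ 0 & 1 \end{pmatrix}.$$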

This proof also shows how we can go about programming a computer to calculate the inverse of an invertible matrix. Namely, through the process of Gaussian elimination, we convert the given matrix into the identity matrix I_{n}. During this process, we keep multiplying together the elementary matrices which represent the respective row operations. In the end, we obtain the inverse matrix

A^{−1} = S_{q} · · · S_{p+1} S_{p} · · · S_{1}.

We also note that this is the method which can be used to obtain the value of the determinant of the matrix. But first we must find out what the definition of the determinant is!
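The procedure just described can be sketched in plain Python as follows (the function name `invert` is my own, and the pivot search assumes the input is regular). Rather than literally multiplying the elementary matrices S_{1}, . . . , S_{q} together, the code applies each row operation simultaneously to a copy of I_{n}; by the end that copy holds the same accumulated product S_{q} · · · S_{1} = A^{−1}.

```python
def invert(A):
    """Invert a regular matrix A by Gauss-Jordan elimination.

    Every row operation applied to A is also applied to `inv`, which
    starts as the identity, so `inv` accumulates S_q ... S_1 = A^{-1}.
    """
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    inv = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        # S_ij: swap in a row with a nonzero pivot (exists since A is regular).
        pivot = next(r for r in range(col, n) if A[r][col] != 0.0)
        A[col], A[pivot] = A[pivot], A[col]
        inv[col], inv[pivot] = inv[pivot], inv[col]
        # S_i(a): scale the pivot row so the pivot becomes 1.
        a = A[col][col]
        A[col] = [x / a for x in A[col]]
        inv[col] = [x / a for x in inv[col]]
        # S_ij(c): clear the remaining entries of the pivot column.
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                c = A[r][col]
                A[r] = [x - c * y for x, y in zip(A[r], A[col])]
                inv[r] = [x - c * y for x, y in zip(inv[r], inv[col])]
    return inv

print(invert([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```

Keeping track of the scaling factors a and the row swaps during this same elimination is exactly what will later let us read off the determinant.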
