# Linear Algebra: #6 Linear Mappings and Matrices

Describing a rotation of ℜ² into itself — which should have been simple to describe — has brought with it long lists of coordinates which are difficult to think about. In three and more dimensions, things become even worse! Thus it is obvious that we need a more sensible system for describing these linear mappings. The usual system is to use *matrices*.

Now, the most obvious problem with our previous notation for vectors was that the lists of coordinates (x_1, . . . , x_n) run across the page, leaving hardly any room to describe symbolically what we want to do with the vector. The solution to this problem is to write vectors not as *horizontal* lists, but rather as *vertical* lists. We say that the horizontal lists are *row vectors*, and the vertical lists are *column vectors*. This is a great improvement! So whereas before, we wrote

$$\mathbf{v} = (x_1, \dots, x_n),$$

now we will write

$$\mathbf{v} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$

It is true that we use up lots of vertical space on the page in this way, but since
the rest of the writing is horizontal, we can afford to waste this vertical space. In
addition, we have a very nice system for writing down the coordinates of the vectors
after they have been mapped by a linear mapping.

To illustrate this system, consider the rotation of the plane through the angle φ, which was described in the last section. In terms of row vectors, we have (x_1, x_2) being rotated into the new vector (x_1 cos φ − x_2 sin φ, x_1 sin φ + x_2 cos φ). But if we change to the column vector notation, we have

$$A \cdot \mathbf{v} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1\cos\varphi - x_2\sin\varphi \\ x_1\sin\varphi + x_2\cos\varphi \end{pmatrix} = f(\mathbf{v}).$$

That is, matrix multiplication gives the result of the linear mapping.
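As a quick numerical sanity check, this rotation can be sketched in a few lines of Python. The function name `rotate` is ours, chosen only for illustration; it applies the 2×2 rotation matrix above to a column vector given as a pair of coordinates.

```python
import math

def rotate(v, phi):
    """Apply the 2x2 rotation matrix for angle phi to the vector v = (x1, x2)."""
    x1, x2 = v
    return (x1 * math.cos(phi) - x2 * math.sin(phi),
            x1 * math.sin(phi) + x2 * math.cos(phi))

# Rotating (1, 0) through a right angle should give (0, 1), up to rounding.
x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 10), round(y, 10))
```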

**Expressing f : V → W in terms of bases for both V and W**

The example we have been thinking about up till now (a rotation of ℜ²) is a linear mapping of ℜ² into itself. More generally, we have linear mappings from a vector space **V** to a different vector space **W** (although, of course, both **V** and **W** are vector spaces over the same field F).

So let {**v**_1, . . . , **v**_n} be a basis for **V** and let {**w**_1, . . . , **w**_m} be a basis for **W**. Finally, let f : **V** → **W** be a linear mapping. An arbitrary vector **v** ∈ **V** can be expressed in terms of the basis for **V** as

$$\mathbf{v} = a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n.$$

The question is now, what is f(**v**)? As we have seen, f(**v**) can be expressed in terms of the images f(**v**_j) of the basis vectors of **V**. Namely,

$$f(\mathbf{v}) = a_1 f(\mathbf{v}_1) + \cdots + a_n f(\mathbf{v}_n).$$

But then, each of these vectors f(**v**_j) in **W** can be expressed in terms of the basis vectors in **W**, say

$$f(\mathbf{v}_j) = c_{1j}\mathbf{w}_1 + \cdots + c_{mj}\mathbf{w}_m = \sum_{i=1}^{m} c_{ij}\mathbf{w}_i,$$

for appropriate choices of the “numbers” c_{ij} ∈ F. Therefore, putting this all together, we have

$$f(\mathbf{v}) = \sum_{j=1}^{n} a_j f(\mathbf{v}_j) = \sum_{i=1}^{m} \Big( \sum_{j=1}^{n} c_{ij} a_j \Big) \mathbf{w}_i.$$
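The double summation above is exactly a matrix–vector product: the i-th coordinate of f(**v**) in the **W**-basis is Σ_j c_{ij} a_j. A minimal Python sketch of this computation (the matrix entries and the helper name `apply_matrix` are made up for illustration):

```python
def apply_matrix(C, a):
    """W-coordinates of f(v): b_i = sum_j c_ij * a_j, for C an m x n matrix."""
    m, n = len(C), len(C[0])
    return [sum(C[i][j] * a[j] for j in range(n)) for i in range(m)]

# A hypothetical 2x3 matrix: f maps a 3-dimensional V into a 2-dimensional W.
C = [[1, 0, 2],
     [0, 3, 1]]
a = [1, 1, 1]              # v = v_1 + v_2 + v_3
print(apply_matrix(C, a))  # [3, 4]
```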

In the matrix notation, using column vectors relative to the two bases {**v**_1, . . . , **v**_n} and {**w**_1, . . . , **w**_m}, we can write this as

$$f(\mathbf{v}) = \begin{pmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & \ddots & \vdots \\ c_{m1} & \cdots & c_{mn} \end{pmatrix} \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}.$$

When looking at this m×n matrix which represents the linear mapping f : **V** → **W**, we can imagine that the matrix consists of n columns. The i-th column is then

$$\begin{pmatrix} c_{1i} \\ \vdots \\ c_{mi} \end{pmatrix}.$$

That is, it represents a vector in **W**, namely the vector **u**_i = c_{1i}**w**_1 + · · · + c_{mi}**w**_m. But what is this vector **u**_i? In the matrix notation, we have

$$\mathbf{u}_i = \begin{pmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & \ddots & \vdots \\ c_{m1} & \cdots & c_{mn} \end{pmatrix} \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix},$$

where the single non-zero element of this column matrix is a 1 in the i-th position from the top. But this column matrix is just the coordinate column of the basis vector **v**_i itself, so that

$$\mathbf{u}_i = f(\mathbf{v}_i).$$

That is, the columns of the matrix representing f : **V** → **W** are the images of the basis vectors of **V**.
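This fact is easy to check numerically: multiplying the matrix by the coordinate column of a basis vector picks out the corresponding column of the matrix. A short Python illustration (the matrix `C` is made up, and `apply_matrix` is our own helper, not from the text):

```python
def apply_matrix(C, a):
    """W-coordinates of f(v): b_i = sum_j c_ij * a_j."""
    return [sum(c * x for c, x in zip(row, a)) for row in C]

# A made-up 2x3 matrix representing some f from a 3-dim V to a 2-dim W.
C = [[1, 0, 2],
     [0, 3, 1]]

# The coordinate column of the basis vector v_2 is e_2 = (0, 1, 0)^T.
e2 = [0, 1, 0]
print(apply_matrix(C, e2))    # [0, 3]
print([row[1] for row in C])  # [0, 3] -- the second column of C
```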

**Two linear mappings, one after the other**

Things become more interesting when we think about the following situation. Let **V**, **W** and **X** be vector spaces over a common field F. Assume that f : **V** → **W** and g : **W** → **X** are linear. Then the composition g ◦ f : **V** → **X**, given by

$$(g \circ f)(\mathbf{v}) = g(f(\mathbf{v}))$$

for all **v** ∈ **V**, is clearly a linear mapping. One can write this as

$$\mathbf{V} \xrightarrow{\ f\ } \mathbf{W} \xrightarrow{\ g\ } \mathbf{X}.$$

Let {**v**_1, . . . , **v**_n} be a basis for **V**, {**w**_1, . . . , **w**_m} be a basis for **W**, and {**x**_1, . . . , **x**_r} be a basis for **X**. Assume that the linear mapping f is given by the m×n matrix A = (c_{ij}), so that

$$f(\mathbf{v}_j) = \sum_{i=1}^{m} c_{ij}\mathbf{w}_i,$$

and the linear mapping g is given by the r×m matrix B = (d_{ki}), so that

$$g(\mathbf{w}_i) = \sum_{k=1}^{r} d_{ki}\mathbf{x}_k.$$

Then

$$g(f(\mathbf{v}_j)) = g\Big(\sum_{i=1}^{m} c_{ij}\mathbf{w}_i\Big) = \sum_{i=1}^{m} c_{ij}\, g(\mathbf{w}_i) = \sum_{k=1}^{r} \Big(\sum_{i=1}^{m} d_{ki} c_{ij}\Big) \mathbf{x}_k.$$

There are so many summations here! How can we keep track of everything? The answer is to use the matrix notation. The composition of linear mappings is then simply represented by matrix *multiplication*. That is, if A is the matrix representing f and B is the matrix representing g, then we have

$$(BA)\mathbf{v} = (g \circ f)(\mathbf{v})$$

for all **v** ∈ **V**. So this is the reason we have defined matrix multiplication in this way. Recall that if A is an m × n matrix and B is an r × m matrix, then the product BA is an r × n matrix whose kj-th element is

$$\sum_{i=1}^{m} d_{ki} c_{ij}.$$
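The claim that applying f and then g agrees with applying the single matrix BA can be verified directly in Python. The matrices below are made up for illustration, and the helper names `matmul` and `apply` are ours:

```python
def matmul(B, A):
    """(BA)_kj = sum_i d_ki * c_ij, for B an r x m matrix and A an m x n matrix."""
    r, m, n = len(B), len(A), len(A[0])
    return [[sum(B[k][i] * A[i][j] for i in range(m)) for j in range(n)]
            for k in range(r)]

def apply(M, v):
    """Matrix-vector product: the coordinates of the image vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

A = [[1, 2], [0, 1], [1, 0]]   # f: 2-dim V -> 3-dim W  (m = 3, n = 2)
B = [[1, 0, 1], [2, 1, 0]]     # g: 3-dim W -> 2-dim X  (r = 2, m = 3)
v = [1, 1]

print(apply(B, apply(A, v)))   # g(f(v)):  [4, 7]
print(apply(matmul(B, A), v))  # (BA)v:    [4, 7]
```

Note the order: since f acts first, its matrix A stands on the right in the product BA, just as in the formula (BA)**v** = g(f(**v**)).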
