# Linear Algebra: #5 Linear Mappings

**Definition**

Let **V** and **W** be vector spaces, both over the field F. Let f : **V** → **W** be a mapping from the vector space **V** to the vector space **W**. The mapping f is called a *linear mapping* if

f(a**u** + b**v**) = af(**u**) + bf(**v**)

for all a, b ∈ F and all **u**, **v** ∈ **V**.

By choosing a and b to be either 0 or 1, we immediately see that a linear mapping always satisfies both f(a**v**) = af(**v**) and f(**u** + **v**) = f(**u**) + f(**v**), for all a ∈ F and all **u**, **v** ∈ **V**. Also, it is obvious that f(**0**) = **0** always.
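The definition is easy to test numerically. Below is a minimal sketch (not part of the original notes) using NumPy: the matrix A is an arbitrary illustration of a linear mapping f : ℝ³ → ℝ², and we spot-check the defining identity on random vectors and scalars.

```python
import numpy as np

# Hypothetical example matrix: every matrix A gives a linear mapping
# f(x) = A @ x from R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def f(x):
    return A @ x

rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    a, b = rng.standard_normal(2)
    # the defining property: f(a*u + b*v) = a*f(u) + b*f(v)
    assert np.allclose(f(a * u + b * v), a * f(u) + b * f(v))

# f(0) = 0 follows automatically from linearity
assert np.allclose(f(np.zeros(3)), np.zeros(2))
```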

**Definition**

Let f : **V** → **W** be a linear mapping. The *kernel of the mapping*, denoted by ker(f), is the set of vectors in **V** which are mapped by f to the zero vector in **W**.

**Theorem 15** If ker(f) = {**0**}, that is, if the zero vector in **V** is the only vector which is mapped to the zero vector in **W** under f, then f is an injection (*monomorphism*). The converse is of course trivial.

*Proof*

We must show that f is an injection; that is, if **u** and **v** are two vectors in **V** with the property that f(**u**) = f(**v**), then we must have **u** = **v**. But

f(**u**) = f(**v**) ⇒ **0** = f(**u**) − f(**v**) = f(**u** − **v**).

Thus the vector **u** − **v** is mapped by f to the zero vector. Therefore we must have **u** − **v** = **0**, or **u** = **v**.

Conversely, if f is an injection then, since f(**0**) = **0** always holds, no other vector can be mapped to the zero vector, and we must have ker(f) = {**0**}.
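For a mapping given by a matrix, the kernel can be computed explicitly, and Theorem 15 becomes a rank test: f is injective exactly when the kernel is {**0**}, i.e. when the matrix has full column rank. A sketch (the matrix is an arbitrary example, chosen so the kernel is nontrivial):

```python
import numpy as np

# Hypothetical example: f(x) = A @ x with a rank-1 matrix, so
# dim ker(f) = 3 - rank = 2 and f is NOT injective.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10
rank = int(np.sum(s > tol))
kernel_basis = Vt[rank:].T           # columns span ker(f)

assert kernel_basis.shape[1] == 2    # the kernel is 2-dimensional
assert np.allclose(A @ kernel_basis, 0)   # every kernel vector maps to 0
assert rank < A.shape[1]             # not full column rank => not injective
```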

**Theorem 16** Let f : **V** → **W** be a linear mapping and let A = {**w**₁, . . . , **w**ₘ} ⊂ **W** be linearly independent. Assume that m vectors are given in **V**, so that they form a set B = {**v**₁, . . . , **v**ₘ} ⊂ **V** with f(**v**ᵢ) = **w**ᵢ for all i. Then the set B is also linearly independent.

*Proof*

Let a₁, . . . , aₘ ∈ F be given such that a₁**v**₁ + · · · + aₘ**v**ₘ = **0**. But then

**0** = f(**0**) = f(a₁**v**₁ + · · · + aₘ**v**ₘ) = a₁f(**v**₁) + · · · + aₘf(**v**ₘ) = a₁**w**₁ + · · · + aₘ**w**ₘ.

Since A is linearly independent, it follows that all the aᵢ's must be zero. But that implies that the set B is linearly independent.
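A small numerical sketch of Theorem 16 (the matrix and vectors are hypothetical illustrations): if the images f(**v**ᵢ) are linearly independent, the **v**ᵢ themselves must be, which we can check via matrix rank.

```python
import numpy as np

# Hypothetical map f(x) = A @ x from R^3 to R^2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

B = np.array([[1.0, 0.0],      # columns are v_1, v_2 in R^3
              [0.0, 1.0],
              [0.0, 0.0]])
images = A @ B                 # columns are w_1 = f(v_1), w_2 = f(v_2)

# A = {w_1, w_2} is linearly independent ...
assert np.linalg.matrix_rank(images) == 2
# ... so, by Theorem 16, B = {v_1, v_2} must be linearly independent too.
assert np.linalg.matrix_rank(B) == 2
```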

**Remark**

If B = {**v**₁, . . . , **v**ₘ} ⊂ **V** is linearly independent, and f : **V** → **W** is linear, it still does not necessarily follow that {f(**v**₁), . . . , f(**v**ₘ)} is linearly independent in **W**. On the other hand, if f is an injection, then {f(**v**₁), . . . , f(**v**ₘ)} is linearly independent. This follows since, if a₁f(**v**₁) + · · · + aₘf(**v**ₘ) = **0**, then we have

**0** = a₁f(**v**₁) + · · · + aₘf(**v**ₘ) = f(a₁**v**₁ + · · · + aₘ**v**ₘ) = f(**0**).

But since f is an injection, we must have a₁**v**₁ + · · · + aₘ**v**ₘ = **0**. Thus aᵢ = 0 for all i.

On the other hand, what is the condition for f : **V** → **W** to be a surjection (*epimorphism*)? That is, f(**V**) = **W**. Or, put another way: for every **w** ∈ **W**, can we find some vector **v** ∈ **V** with f(**v**) = **w**? One way to think of this is to consider a basis B ⊂ **W**. For each **w** ∈ B, we take

f⁻¹(**w**) = {**v** ∈ **V** : f(**v**) = **w**}.

Then f is a surjection if f⁻¹(**w**) ≠ ∅ for all **w** ∈ B.
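For a matrix map this surjectivity criterion is again a rank condition: f(x) = A x is onto **W** exactly when rank(A) = dim(**W**), and then every basis vector of **W** has a nonempty preimage. A sketch with an arbitrary example matrix:

```python
import numpy as np

# Hypothetical map f(x) = A @ x from R^3 onto R^2
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

assert np.linalg.matrix_rank(A) == 2   # rank equals dim(W), so f is onto

# For each canonical basis vector w of W, exhibit some v with f(v) = w.
# Least squares gives one such v exactly, because f is onto.
for w in np.eye(2):
    v, *_ = np.linalg.lstsq(A, w, rcond=None)
    assert np.allclose(A @ v, w)       # the preimage f^{-1}(w) is nonempty
```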

**Definition**

A linear mapping which is a bijection (that is, both an injection and a surjection) is called an *isomorphism*. Often one writes **V** ≅ **W** to say that there exists an isomorphism from **V** to **W**.

**Theorem 17** Let f : **V** → **W** be an isomorphism. Then the inverse mapping f⁻¹ : **W** → **V** is also a linear mapping.

*Proof*

To see this, let a, b ∈ F and **x**, **y** ∈ **W** be arbitrary. Let f⁻¹(**x**) = **u** ∈ **V** and f⁻¹(**y**) = **v** ∈ **V**, say. Then

f(a**u** + b**v**) = f(af⁻¹(**x**) + bf⁻¹(**y**)) = af(f⁻¹(**x**)) + bf(f⁻¹(**y**)) = a**x** + b**y**.

Therefore, since f is a bijection, we must have

f⁻¹(a**x** + b**y**) = a**u** + b**v** = af⁻¹(**x**) + bf⁻¹(**y**).
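Concretely, if f is given by an invertible matrix A, then f⁻¹ is given by A⁻¹, and the linearity of f⁻¹ can be verified directly. A sketch with an arbitrary invertible 2×2 matrix:

```python
import numpy as np

# Hypothetical isomorphism f(x) = A @ x, with A invertible (det = 1)
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

rng = np.random.default_rng(1)
x, y = rng.standard_normal(2), rng.standard_normal(2)
a, b = 3.0, -2.0

# f^{-1}(a x + b y) = a f^{-1}(x) + b f^{-1}(y)
assert np.allclose(A_inv @ (a * x + b * y),
                   a * (A_inv @ x) + b * (A_inv @ y))
# and f^{-1} really inverts f
assert np.allclose(A_inv @ (A @ x), x)
```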

**Theorem 18** Let **V** and **W** be finite dimensional vector spaces over a field F, and let f : **V** → **W** be a linear mapping. Let B = {**v**₁, . . . , **v**ₙ} be a basis for **V**. Then f is uniquely determined by the n vectors {f(**v**₁), . . . , f(**v**ₙ)} in **W**.

*Proof*

Let **v** ∈ **V** be an arbitrary vector in **V**. Since B is a basis for **V**, we can uniquely write

**v** = a₁**v**₁ + · · · + aₙ**v**ₙ

with aᵢ ∈ F for each i. Then, since the mapping f is linear, we have

f(**v**) = f(a₁**v**₁ + · · · + aₙ**v**ₙ) = a₁f(**v**₁) + · · · + aₙf(**v**ₙ).

Therefore we see that if the values of f(**v**₁), . . . , f(**v**ₙ) are given, then the value of f(**v**) is uniquely determined, for each **v** ∈ **V**.

On the other hand, let A = {**u**₁, . . . , **u**ₙ} be a set of n arbitrarily given vectors in **W**. Then let a mapping f : **V** → **W** be defined by the rule

f(**v**) = a₁**u**₁ + · · · + aₙ**u**ₙ

for each arbitrarily given vector **v** ∈ **V**, where **v** = a₁**v**₁ + · · · + aₙ**v**ₙ. Clearly the mapping is uniquely determined, since **v** is uniquely determined as a linear combination of the basis vectors B. It is a trivial matter to verify that the mapping so defined is also linear. We have f(**v**ᵢ) = **u**ᵢ for all the basis vectors **v**ᵢ ∈ B.
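In matrix terms, Theorem 18 says: to build the unique linear map sending the canonical basis vector **e**ᵢ to a prescribed **u**ᵢ, put **u**ᵢ in the i-th column. A sketch with hypothetical image vectors:

```python
import numpy as np

# Hypothetical prescribed images u_1, u_2, u_3 of e_1, e_2, e_3 in R^2
u1, u2, u3 = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])
A = np.column_stack([u1, u2, u3])   # i-th column is u_i

def f(v):
    # v = a_1 e_1 + a_2 e_2 + a_3 e_3, so f(v) = a_1 u_1 + a_2 u_2 + a_3 u_3
    return A @ v

# f agrees with the prescribed values on the basis ...
for i, u in enumerate([u1, u2, u3]):
    assert np.allclose(f(np.eye(3)[i]), u)

# ... and its value anywhere else is forced by linearity
v = np.array([2.0, -1.0, 4.0])
assert np.allclose(f(v), 2 * u1 - 1 * u2 + 4 * u3)
```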

**Theorem 19** Let **V** and **W** be two finite dimensional vector spaces over a field F. Then we have

**V** ≅ **W** ⇔ dim(**V**) = dim(**W**).

*Proof*

“⇒” Let f : **V** → **W** be an isomorphism, and let B = {**v**₁, . . . , **v**ₙ} ⊂ **V** be a basis for **V**. Then, as shown in the Remark above, the set A = {f(**v**₁), . . . , f(**v**ₙ)} ⊂ **W** is linearly independent. Furthermore, since B is a basis of **V**, we have [B] = **V**; and since f is a surjection, [A] = **W** also. Therefore A is a basis of **W**, and it contains precisely n elements; thus dim(**V**) = dim(**W**).

“⇐” Take B = {**v**₁, . . . , **v**ₙ} ⊂ **V** to again be a basis of **V**, and let A = {**w**₁, . . . , **w**ₙ} ⊂ **W** be some basis of **W** (with n elements). Now define the mapping f : **V** → **W** by the rule f(**v**ᵢ) = **w**ᵢ for all i. By Theorem 18, a linear mapping f is thus uniquely determined. Since A and B are both bases, it follows that f must be a bijection.

This immediately gives us a complete classification of all finite-dimensional vector spaces. For let **V** be a vector space of dimension n over the field F. Then clearly Fⁿ is also a vector space of dimension n over F. The canonical basis is the set of vectors {**e**₁, . . . , **e**ₙ}, where

**e**ᵢ = (0, . . . , 0, 1, 0, . . . , 0), with the 1 in the i-th position,

for each i. Therefore, when thinking about **V**, we can think that it is “really” just Fⁿ. On the other hand, the central idea in the theory of linear algebra is that we can look at things using different possible bases (or “frames of reference” in physics). The space Fⁿ seems to have a preferred, fixed frame of reference, namely the canonical basis. Thus it is better to think about an abstract **V**, with various possible bases.
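The isomorphism **V** ≅ Fⁿ is realized by the coordinate map: fix a basis of **V** and send each vector to its coordinate vector. A sketch in ℝ², using a hypothetical non-canonical basis to emphasize that any basis works:

```python
import numpy as np

# Hypothetical non-canonical basis {b_1, b_2} of R^2, as matrix columns
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

def coords(v):
    # coordinate vector c with v = c_1 b_1 + c_2 b_2 (unique, since B is a basis)
    return np.linalg.solve(B, v)

v = np.array([3.0, 2.0])
c = coords(v)
assert np.allclose(B @ c, v)        # v is recovered from its coordinates

# the coordinate map is itself linear, i.e. an isomorphism R^2 -> R^2
w = np.array([-1.0, 4.0])
assert np.allclose(coords(2 * v + 3 * w), 2 * coords(v) + 3 * coords(w))
```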

**Examples**

For these examples, we will consider the 2-dimensional real vector space ℝ², together with its canonical basis B = {**e**₁, **e**₂} = {(1, 0), (0, 1)}.

- f₁ : ℝ² → ℝ² with f₁(**e**₁) = (−1, 0) and f₁(**e**₂) = (0, 1). This is a *reflection* of the 2-dimensional plane into itself, with the axis of reflection being the second coordinate axis; that is, the set of points (x₁, x₂) ∈ ℝ² with x₁ = 0.
- f₂ : ℝ² → ℝ² with f₂(**e**₁) = **e**₂ and f₂(**e**₂) = **e**₁. This is a *reflection* of the 2-dimensional plane into itself, with the axis of reflection being the diagonal axis x₁ = x₂.
- f₃ : ℝ² → ℝ² with f₃(**e**₁) = (cos φ, sin φ) and f₃(**e**₂) = (−sin φ, cos φ), for some real number φ ∈ ℝ. This is a *rotation* of the plane about its middle point, through an angle of φ.

(In analysis, we learn the formulas of trigonometry. In particular, we have

cos(θ + φ) = cos(θ)cos(φ) − sin(θ)sin(φ),
sin(θ + φ) = sin(θ)cos(φ) + cos(θ)sin(φ).

Taking θ = π/2, we note that cos(φ + π/2) = −sin(φ) and sin(φ + π/2) = cos(φ).)

For let **v** = (x₁, x₂) be some arbitrary point of the plane ℝ². Then we have

f₃(**v**) = f₃(x₁**e**₁ + x₂**e**₂) = x₁f₃(**e**₁) + x₂f₃(**e**₂) = (x₁ cos φ − x₂ sin φ, x₁ sin φ + x₂ cos φ).

Looking at this from the point of view of geometry, the question is: what happens to the vector **v** when it is rotated through the angle φ while preserving its length? Perhaps the best way to look at this is to think about **v** in polar coordinates. That is, given any two real numbers x₁ and x₂ then, assuming that they are not both zero, we find two unique real numbers r ≥ 0 and θ ∈ [0, 2π) such that

x₁ = r cos θ and x₂ = r sin θ,

where r = √(x₁² + x₂²). Then **v** = (r cos θ, r sin θ). So a rotation of **v** through the angle φ must bring it to the new vector (r cos(θ + φ), r sin(θ + φ)) which, if we remember the formulas for cosines and sines of sums, turns out to be

(r(cos(θ)cos(φ) − sin(θ)sin(φ)), r(sin(θ)cos(φ) + cos(θ)sin(φ))).

But then, remembering that x₁ = r cos θ and x₂ = r sin θ, we see that the rotation brings the vector **v** to the new vector

(x₁ cos φ − x₂ sin φ, x₁ sin φ + x₂ cos φ),

which was precisely the specification for f₃(**v**).
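The rotation example f₃ can be checked numerically: the matrix whose columns are f₃(**e**₁) and f₃(**e**₂) rotates an arbitrary point by φ while preserving its length. A sketch (the angle and test vector are arbitrary choices):

```python
import numpy as np

phi = 0.7
# columns are f_3(e_1) = (cos φ, sin φ) and f_3(e_2) = (−sin φ, cos φ)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

v = np.array([2.0, 1.0])
w = R @ v

# agrees with the formula derived above
assert np.allclose(w, [v[0] * np.cos(phi) - v[1] * np.sin(phi),
                       v[0] * np.sin(phi) + v[1] * np.cos(phi)])
# the length is preserved ...
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))
# ... and the polar angle increases by exactly phi
assert np.isclose(np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0]), phi)
```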