Linear Algebra: #5 Linear Mappings
Let V and W be vector spaces, both over the field F. Let f : V → W be a mapping from the vector space V to the vector space W. The mapping f is called a linear mapping if
f(au + bv) = af(u) + bf(v)
for all a, b ∈ F and all u, v ∈ V.
By choosing a and b appropriately (b = 0, or a = b = 1), we immediately see that a linear mapping always satisfies both f(av) = af(v) and f(u + v) = f(u) + f(v), for all a ∈ F and for all u, v ∈ V. Also, choosing a = b = 0 shows that f(0) = 0 always.
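As a quick numerical illustration (my own sketch, not part of the notes; the two maps below are chosen purely as examples), the linearity condition is easy to test on random samples:

```python
# Illustrative sketch only: numerically test the linearity identity
# f(a*u + b*v) == a*f(u) + b*f(v) on random samples in R^2.
import numpy as np

def f_reflect(x):
    # the map (x1, x2) -> (-x1, x2), which is linear
    return np.array([-x[0], x[1]])

def f_translate(x):
    # translation by (1, 0); not linear, since it moves the zero vector
    return np.array([x[0] + 1.0, x[1]])

def looks_linear(f, trials=100):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.normal(size=2)
        u, v = rng.normal(size=2), rng.normal(size=2)
        if not np.allclose(f(a * u + b * v), a * f(u) + b * f(v)):
            return False
    return True

print(looks_linear(f_reflect))    # True
print(looks_linear(f_translate))  # False
```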
Definition
Let f : V → W be a linear mapping. The kernel of the mapping, denoted by ker(f), is the set of vectors in V which are mapped by f into the zero vector in W; that is, ker(f) = {v ∈ V : f(v) = 0}.
Theorem 15
If ker(f) = {0}, that is, if the zero vector in V is the only vector which is mapped into the zero vector in W under f, then f is an injection (monomorphism). The converse is of course trivial.
Proof
Suppose ker(f) = {0}. We must show that if u and v are two vectors in V with f(u) = f(v), then u = v. But
f(u) = f(v) ⇒ 0 = f(u) − f(v) = f(u − v).
Thus the vector u − v is mapped by f to the zero vector; that is, u − v ∈ ker(f) = {0}. Therefore we must have u − v = 0, or u = v.
Conversely, since f(0) = 0 always holds, and since f is an injection, we must have ker(f) = {0}.
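As a concrete illustration (my own, with matrices chosen arbitrarily): if V = Fⁿ, W = Fᵐ and f(v) = Av for a matrix A, then ker(f) is the null space of A, and Theorem 15 says that f is injective exactly when that null space is trivial, i.e. when rank(A) equals the number of columns of A.

```python
# Sketch: injectivity of v -> A @ v via the rank of A.
import numpy as np

A_injective = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])     # rank 2 = #columns -> ker = {0}

A_not_injective = np.array([[1.0, 2.0],
                            [2.0, 4.0]]) # rank 1 -> nontrivial kernel

for A in (A_injective, A_not_injective):
    n = A.shape[1]
    print(np.linalg.matrix_rank(A) == n)  # True, then False

# e.g. (2, -1) lies in the kernel of the second matrix:
print(A_not_injective @ np.array([2.0, -1.0]))  # [0. 0.]
```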
Theorem 16
Let f : V → W be a linear mapping and let A = {w1, . . . , wm} ⊂ W be linearly independent. Assume that m vectors are given in V, so that they form a set B = {v1, . . . , vm} ⊂ V with f(vi) = wi , for all i. Then the set B is also linearly independent.
Proof
Let a1, . . . , am ∈ F be given such that a1v1 + · · · + amvm = 0. But then
0 = f(0) = f(a1v1 + · · · + amvm) = a1f(v1) + · · · + amf(vm) = a1w1 + · · · + amwm.
Since A is linearly independent, it follows that all the ai's must be zero. But that implies that the set B is linearly independent.
Remark
If B = {v1, . . . , vm} ⊂ V is linearly independent, and f : V → W is linear, it does not necessarily follow that {f(v1), . . . , f(vm)} is linearly independent in W. (For example, the zero mapping sends every vector, and hence every linearly independent set, to 0.) On the other hand, if f is an injection, then {f(v1), . . . , f(vm)} is linearly independent. This follows since, if a1f(v1) + · · · + amf(vm) = 0, then we have
0 = a1f(v1) + · · · + amf(vm) = f(a1v1 + · · · + amvm).
But f(0) = 0 and f is an injection, so we must have a1v1 + · · · + amvm = 0. Since B is linearly independent, it follows that ai = 0 for all i.
On the other hand, what is the condition for f : V → W to be a surjection (epimorphism)? That is, when is f(V) = W? Put another way: for every w ∈ W, can we find some vector v ∈ V with f(v) = w? One way to think of this is to consider a basis B ⊂ W. For each w ∈ B, we take
f⁻¹(w) = {v ∈ V : f(v) = w}.
Then f is a surjection if f⁻¹(w) ≠ ∅ for all w ∈ B, since the image f(V) is then a subspace of W containing the basis B, hence all of W.
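In matrix terms (again an illustrative sketch with made-up numbers, assuming f(v) = Av): f is a surjection exactly when every basis vector of W has a nonempty preimage, which amounts to rank(A) = dim(W).

```python
# Sketch: surjectivity of v -> A @ v via the rank of A.
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # a map R^3 -> R^2

m = A.shape[0]
print(np.linalg.matrix_rank(A) == m)     # True: f is a surjection

# a concrete preimage of w = e1 in R^2, found by least squares:
v, *_ = np.linalg.lstsq(A, np.array([1.0, 0.0]), rcond=None)
print(np.allclose(A @ v, [1.0, 0.0]))    # True: f(v) = e1
```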
Definition
A linear mapping which is a bijection (that is, an injection and a surjection) is called an isomorphism. Often one writes V ≅ W to say that there exists an isomorphism from V to W.
Theorem 17
Let f : V → W be an isomorphism. Then the inverse mapping f⁻¹ : W → V is also a linear mapping.
Proof
To see this, let a, b ∈ F and x, y ∈ W be arbitrary. Write f⁻¹(x) = u ∈ V and f⁻¹(y) = v ∈ V. Then
f(au + bv) = f(af⁻¹(x) + bf⁻¹(y)) = af(f⁻¹(x)) + bf(f⁻¹(y)) = ax + by.
Therefore, since f is a bijection, we must have
f⁻¹(ax + by) = au + bv = af⁻¹(x) + bf⁻¹(y).
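A quick numerical check of this theorem (illustrative numbers only): for an invertible matrix A, the inverse map x ↦ A⁻¹x satisfies the same linearity identity.

```python
# Sketch: the inverse of an isomorphism v -> A @ v is again linear.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # invertible, so v -> A v is an isomorphism
A_inv = np.linalg.inv(A)

rng = np.random.default_rng(1)
a, b = rng.normal(size=2)
x, y = rng.normal(size=2), rng.normal(size=2)

lhs = A_inv @ (a * x + b * y)
rhs = a * (A_inv @ x) + b * (A_inv @ y)
print(np.allclose(lhs, rhs))      # True
```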
Theorem 18
Let V and W be finite dimensional vector spaces over a field F, and let f : V → W be a linear mapping. Let B = {v1, . . . , vn} be a basis for V. Then f is uniquely determined by the n vectors {f(v1), . . . , f(vn)} in W.
Proof
Let v ∈ V be an arbitrary vector in V. Since B is a basis for V, we can uniquely write
v = a1v1 + · · · + anvn
with ai ∈ F, for each i. Then, since the mapping f is linear, we have
f(v) = f(a1v1 + · · · + anvn) = a1f(v1) + · · · + anf(vn).
Therefore we see that if the values of f(v1), . . . , f(vn) are given, then the value of f(v) is uniquely determined, for each v ∈ V.
On the other hand, let A = {u1, . . . , un} be a set of n arbitrarily given vectors in W. Then let a mapping f : V → W be defined by the rule
f(v) = a1u1 + · · · + anun
for each arbitrarily given vector v ∈ V, where v = a1v1 + · · · + anvn. The mapping is well defined, since each v is uniquely expressed as a linear combination of the basis vectors in B. It is a trivial matter to verify that the mapping so defined is also linear, and we have f(vi) = ui for all the basis vectors vi ∈ B.
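In coordinates this construction is very concrete. A sketch (the vectors ui below are arbitrary choices for illustration): taking V = Fⁿ with its canonical basis, the unique linear map with f(ei) = ui is given by the matrix whose i-th column is ui.

```python
# Sketch of Theorem 18 in coordinates: the unique linear map sending
# e_i to u_i is the matrix whose i-th column is u_i, and
# f(v) = a1*u1 + ... + an*un is just A @ v.
import numpy as np

u1 = np.array([1.0, 2.0])
u2 = np.array([3.0, 4.0])
u3 = np.array([5.0, 6.0])           # arbitrary images in W = R^2

A = np.column_stack([u1, u2, u3])   # columns are the f(e_i)

v = np.array([2.0, -1.0, 0.5])      # coefficients a1, a2, a3
print(A @ v)                        # equals 2*u1 - 1*u2 + 0.5*u3
print(2 * u1 - 1 * u2 + 0.5 * u3)   # same vector
```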
Theorem 19
Let V and W be two finite dimensional vector spaces over a field F. Then we have
V ≅ W ⇔ dim(V) = dim(W).
Proof
“⇒” Let f : V → W be an isomorphism, and let B = {v1, . . . , vn} ⊂ V be a basis for V. Then, as shown in our Remark above, the set A = {f(v1), . . . , f(vn)} ⊂ W is linearly independent. Furthermore, since f is a surjection and [B] = V, every w ∈ W can be written as w = f(a1v1 + · · · + anvn) = a1f(v1) + · · · + anf(vn); thus [A] = W also. Therefore A is a basis of W, and it contains precisely n elements; thus dim(V) = dim(W).
“⇐” Take B = {v1, . . . , vn} ⊂ V to again be a basis of V, and let A = {w1, . . . , wn} ⊂ W be some basis of W (with n elements). Now define the mapping f : V → W by the rule f(vi) = wi, for all i. By Theorem 18, a linear mapping f is thus uniquely determined. Since A and B are both bases, it follows that f must be a bijection.
This immediately gives us a complete classification of all finite-dimensional vector spaces. For let V be a vector space of dimension n over the field F. Then clearly Fⁿ is also a vector space of dimension n over F. The canonical basis is the set of vectors {e1, . . . , en}, where
ei = (0, . . . , 0, 1, 0, . . . , 0), with the 1 in the i-th position,
for each i. Therefore, when thinking about V, we can think that it is “really” just Fⁿ. On the other hand, the central idea in the theory of linear algebra is that we can look at things using different possible bases (or “frames of reference” in physics). The space Fⁿ seems to have a preferred, fixed frame of reference, namely the canonical basis. Thus it is better to think about an abstract V, with various possible bases.
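To make the isomorphism V ≅ Fⁿ concrete (a small sketch with made-up numbers): given a basis, the coordinates of a vector are found by solving the linear system whose columns are the basis vectors.

```python
# Sketch: coordinates of v with respect to a (non-canonical) basis
# {b1, b2} are the solution a of B @ a = v, where the columns of B
# are the basis vectors.
import numpy as np

B = np.column_stack([np.array([1.0, 1.0]),    # b1
                     np.array([1.0, -1.0])])  # b2

v = np.array([3.0, 1.0])
a = np.linalg.solve(B, v)   # coordinates of v in the basis {b1, b2}
print(a)                    # [2. 1.]: indeed v = 2*b1 + 1*b2
print(B @ a)                # recovers v
```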
Examples
For these examples, we will consider the 2-dimensional real vector space ℜ², together with its canonical basis B = {e1, e2} = {(1, 0), (0, 1)}.
- f1 : ℜ² → ℜ² with f1(e1) = (−1, 0) and f1(e2) = (0, 1). This is a reflection of the 2-dimensional plane into itself, with the axis of reflection being the second coordinate axis; that is, the set of points (x1, x2) ∈ ℜ² with x1 = 0.
- f2 : ℜ² → ℜ² with f2(e1) = e2 and f2(e2) = e1. This is a reflection of the 2-dimensional plane into itself, with the axis of reflection being the diagonal axis x1 = x2.
- f3 : ℜ² → ℜ² with f3(e1) = (cos φ, sin φ) and f3(e2) = (−sin φ, cos φ), for some real number φ ∈ ℜ. This is a rotation of the plane about its middle point, through an angle of φ.
(In analysis, we learn about the formulas of trigonometry. In particular we have
cos(θ + φ) = cos(θ) cos(φ) − sin(θ) sin(φ),
sin(θ + φ) = sin(θ) cos(φ) + cos(θ) sin(φ).
Taking θ = π/2, we note that cos(φ + π/2) = −sin(φ) and sin(φ + π/2) = cos(φ).)
For let v = (x1, x2) be some arbitrary point of the plane ℜ². Then we have
f3(v) = f3(x1e1 + x2e2) = x1f3(e1) + x2f3(e2) = (x1 cos φ − x2 sin φ, x1 sin φ + x2 cos φ).
Looking at this from the point of view of geometry, the question is, what happens to the vector v when it is rotated through the angle φ while preserving its length? Perhaps the best way to look at this is to think about v in polar coordinates. That is, given any two real numbers x1 and x2 then, assuming that they are not both zero, we find two unique real numbers r ≥ 0 and θ ∈ [0, 2π), such that
x1 = r cos θ and x2 = r sin θ,
where r = √(x1² + x2²). Then v = (r cos θ, r sin θ). So a rotation of v through the angle φ must bring it to the new vector (r cos(φ + θ), r sin(φ + θ)) which, if we remember the formulas for cosines and sines of sums, turns out to be
(r(cos(θ) cos(φ) − sin(θ) sin(φ)), r(sin(θ) cos(φ) + cos(θ) sin(φ))).
But then, remembering that x1 = r cos θ and x2 = r sin θ, we see that the rotation brings the vector v into the new vector
(x1 cos φ − x2 sin φ, x1 sin φ + x2 cos φ),
which was precisely the specification for f3(v).
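A numerical sanity check of this computation (illustrative values of φ, r and θ, chosen by me): rotating v = (r cos θ, r sin θ) through the angle φ agrees with applying the matrix of f3, and the length of v is preserved.

```python
# Sketch: the matrix of f3 (columns f3(e1), f3(e2)) rotates
# v = (r cos(theta), r sin(theta)) to (r cos(theta+phi), r sin(theta+phi)).
import numpy as np

phi = 0.7
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])   # columns are f3(e1), f3(e2)

r, theta = 2.0, 1.1
v = np.array([r * np.cos(theta), r * np.sin(theta)])

rotated = np.array([r * np.cos(theta + phi), r * np.sin(theta + phi)])
print(np.allclose(R @ v, rotated))            # True
print(np.isclose(np.linalg.norm(R @ v), r))   # True: length preserved
```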