
Thursday, June 26, 2014

Linear Algebra: #22 Dual Spaces


Again let V be a vector space over a field F (and, although it's not really necessary here, we continue to take F = ℜ or ℂ).

Definition
The dual space to V is the set of all linear mappings f : V → F. We denote the dual space by V*.

Examples
  • Let V = ℜn. Then let fi be the projection onto the i-th coordinate. That is, if ej is the j-th canonical basis vector, then

    fi(ej) = δij, where δij = 1 if i = j and δij = 0 otherwise (the Kronecker delta).
    So each fi is a member of V*, for i = 1, . . . , n, and as we will see, these dual vectors form a basis for the dual space.


  • More generally, let V be any finite dimensional vector space, with some basis {v1, . . . , vn}. Let fi : V → F be defined as follows. For an arbitrary vector v ∈ V there is a unique linear combination

    v = a1v1 + · · · + anvn

    Then let fi(v) = ai. Again, fi ∈ V*, and we will see that the n vectors f1, . . . , fn form a basis of the dual space.


  • Let C0([0, 1]) be the space of continuous functions f : [0, 1] → ℜ. As we have seen, this is a real vector space, and it is not finite dimensional. For each f ∈ C0([0, 1]) let

    Λ(f) = ∫₀¹ f(x) dx.
    This gives us a linear mapping Λ : C0([0, 1]) → ℜ. Thus Λ belongs to the dual space of C0([0, 1]).


  • Another vector in the dual space to C0([0, 1]) is given as follows. Let x ∈ [0, 1] be some fixed point. Then Γx : C0([0, 1]) → ℜ is defined by Γx(f) = f(x), for all f ∈ C0([0, 1]).


  • For this last example, let us assume that V is a vector space with a scalar product. (Thus F = ℜ or ℂ.) For each v ∈ V, let φv(u) = <v, u>. Then φv ∈ V*.
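The finite dimensional examples above can be checked numerically. The following is a minimal Python sketch using NumPy; the integral of the third example is replaced by a simple Riemann sum on a grid, and all names (f, Lam, Gam, phi_v, the test vectors) are illustrative choices, not part of the original text.

```python
import numpy as np

# Example 1: coordinate projections on R^3.
# f_i picks out the i-th coordinate, so f_i(e_j) is 1 when i = j, else 0.
e = np.eye(3)                      # canonical basis vectors as rows
f = lambda i, v: v[i]              # the projection functional f_i
for i in range(3):
    for j in range(3):
        assert f(i, e[j]) == (1.0 if i == j else 0.0)

# Example 3: the integration functional Lambda on C0([0, 1]),
# approximated here by a Riemann sum (an assumption of this sketch);
# linearity holds exactly for any fixed quadrature rule.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Lam = lambda g: g(x).sum() * dx
g1, g2 = np.sin, np.cos
lhs = Lam(lambda t: 2*g1(t) + 3*g2(t))
rhs = 2*Lam(g1) + 3*Lam(g2)
assert abs(lhs - rhs) < 1e-9       # Lambda(2g1 + 3g2) = 2*Lambda(g1) + 3*Lambda(g2)

# Example 4: the evaluation functional Gamma_x(g) = g(x0).
x0 = 0.25
Gam = lambda g: g(x0)
assert abs(Gam(lambda t: 2*g1(t) + 3*g2(t)) - (2*g1(x0) + 3*g2(x0))) < 1e-12

# Example 5: phi_v(u) = <v, u> via the standard dot product on R^3.
v = np.array([1.0, -2.0, 0.5])
phi_v = lambda u: np.dot(v, u)
u1, u2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
assert abs(phi_v(2*u1 + u2) - (2*phi_v(u1) + phi_v(u2))) < 1e-12
```

Each assertion checks exactly the linearity or duality property claimed in the corresponding example.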


Theorem 56
Let V be a finite dimensional vector space (over ℂ) and let V* be the dual space. For each v ∈ V, let φv : V → ℂ be given by φv(u) = <v, u>. Then given an orthonormal basis {v1, . . . , vn} of V, we have that {φv1, . . ., φvn} is a basis of V*. This is called the dual basis to {v1, . . . , vn}.

Proof
Let φ ∈ V* be an arbitrary linear mapping φ : V → ℂ. As always, we remember that φ is uniquely determined by its values on the basis vectors, which in this case are simply the complex numbers φ(v1), . . . , φ(vn). Say φ(vj) = cj ∈ ℂ, for each j. Now take some arbitrary vector v ∈ V. There is the unique expression

v = a1v1 + · · · + anvn, where aj = <vj, v> = φvj(v) for each j, since the basis is orthonormal. Hence

φ(v) = c1a1 + · · · + cnan = c1φv1(v) + · · · + cnφvn(v).

Therefore, φ = c1φv1 + · · · + cnφvn, and so {φv1, . . ., φvn} generates V*.

To show that {φv1, . . ., φvn} is linearly independent, let φ = c1φv1 + · · · + cnφvn be some linear combination, where cj ≠ 0, for at least one j. But then, since φvi(vj) = <vi, vj> = δij, we have φ(vj) = cj ≠ 0, and thus φ ≠ 0 in V*.
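The two halves of the proof can be illustrated numerically. Below is a sketch, assuming the standard dot product on ℜ3 and an orthonormal basis built from a rotation matrix; the particular matrix Q, the vector w and the test vector are illustrative choices.

```python
import numpy as np

# An orthonormal basis of R^3: the columns of a rotation matrix Q.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
basis = [Q[:, j] for j in range(3)]

# phi_{v_i}(u) = <v_i, u>; on an orthonormal basis, phi_{v_i}(v_j) = delta_ij.
phi = lambda vi, u: np.dot(vi, u)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(phi(basis[i], basis[j]) - expected) < 1e-12

# Any functional is c1*phi_{v1} + ... + cn*phi_{vn} with cj = phi(vj).
# Take an arbitrary functional, here represented as <w, .> for a fixed w.
w = np.array([2.0, -1.0, 3.0])
some_phi = lambda u: np.dot(w, u)
c = np.array([some_phi(vj) for vj in basis])        # cj = phi(vj)
v = np.array([0.3, 1.5, -2.0])                      # arbitrary test vector
recombined = sum(c[j] * phi(basis[j], v) for j in range(3))
assert abs(some_phi(v) - recombined) < 1e-12
```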

Corollary
dim(V*) = dim(V).

Corollary
More specifically, we have an isomorphism V ≅ V*, given by v ↦ φv for each v ∈ V.

But somehow, this isomorphism doesn’t seem to be very “natural”. It is defined in terms of some specific basis of V. What if V is not finite dimensional, so that we have no basis to work with? For this reason, we do not think of V and V* as being “really” just the same vector space. [In case we have a scalar product, then there is a “natural” mapping V → V*, where v ↦ φv, such that φv(u) = <v, u>, for all u ∈ V.]

On the other hand, let us look at the dual space of the dual space, (V*)*. (Perhaps this is a slightly mind-boggling concept at first sight!) We imagine that “really” we just have (V*)* = V. For let Φ ∈ (V*)*. That means, for each φ ∈ V* we have Φ(φ) being some complex number. On the other hand, we also have φ(v) being some complex number, for each v ∈ V. Can we uniquely identify each v ∈ V with some Φ ∈ (V*)*, in the sense that both always give the same complex numbers, for all possible φ ∈ V*?

Let us say that there exists a v ∈ V such that Φ(φ) = φ(v), for all φ ∈ V*. In fact, if we define Φv by Φv(φ) = φ(v), for each φ ∈ V*, then we certainly have a linear mapping V* → ℂ. On the other hand, given some arbitrary Φ ∈ (V*)*, do we have a unique v ∈ V such that Φ(φ) = φ(v), for all φ ∈ V*? At least in the case where V is finite dimensional, we can affirm that it is true by looking at the dual basis.
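The canonical embedding v ↦ Φv can be written down quite literally in code. A minimal sketch on ℜ3, where each functional is represented by a fixed vector via the dot product (an assumption of this finite dimensional picture; the names Phi, proj2 and the test vectors are illustrative):

```python
import numpy as np

# Canonical embedding V -> (V*)*: v goes to Phi_v with Phi_v(phi) = phi(v).
def Phi(v):
    return lambda phi: phi(v)

v = np.array([1.0, 2.0, -1.0])
Phi_v = Phi(v)

# Phi_v is linear in phi: Phi_v(a*phi1 + b*phi2) = a*Phi_v(phi1) + b*Phi_v(phi2).
w1, w2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, -1.0])
phi1 = lambda u: np.dot(w1, u)
phi2 = lambda u: np.dot(w2, u)
combo = lambda u: 2*phi1(u) + 3*phi2(u)
assert abs(Phi_v(combo) - (2*Phi_v(phi1) + 3*Phi_v(phi2))) < 1e-12

# Distinct vectors give distinct elements of (V*)*: the coordinate
# projections already separate them.
u = np.array([1.0, 2.0, 0.0])
proj2 = lambda t: t[2]                  # the projection onto the 3rd coordinate
assert Phi(v)(proj2) != Phi(u)(proj2)   # v and u differ in that coordinate
```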


Dual mappings 
Let V and W be two vector spaces (where we again assume that the field is ℂ). Assume that we have a linear mapping f : V → W. Then we can define a linear mapping f* : W* → V* in a natural way as follows. For each φ ∈ W*, let f*(φ) = φ ◦ f. So it is obvious that f*(φ) : V → ℂ is a linear mapping. Now assume that V and W have scalar products, giving us the mappings s : V → V* and t : W → W*. So we can draw a little “diagram” to describe the situation.

            f
    V ──────────→ W
    │             │
  s │             │ t
    ↓             ↓
    V* ←────────── W*
            f*
The mappings s and t are isomorphisms, so we can go around the diagram, using the mapping f adj = s−1 ◦ f* ◦ t : WV. This is the adjoint mapping to f. So we see that in the case V = W, we have that a self-adjoint mapping f : VV is such that f adj = f.
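In coordinates, with the standard dot products, this construction becomes very concrete: a linear map f is a matrix A, the pullback f* is composition with f, and f adj is just the transpose matrix. A sketch (the matrix A and the test vectors are illustrative choices):

```python
import numpy as np

# Sketch for V = R^3, W = R^2 with the standard dot product,
# where the linear map f is a 2x3 matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])
f = lambda v: A @ v

# The dual map f*: W* -> V* is the pullback f*(phi) = phi o f.
f_star = lambda phi: (lambda v: phi(f(v)))

# With s, t the dot-product isomorphisms, f_adj = s^{-1} o f* o t
# is the transpose matrix A^T in this setting.
f_adj = lambda w: A.T @ w

# Check the defining property: <f(v), w> = <v, f_adj(w)>.
v = np.array([1.0, -2.0, 0.5])
w = np.array([3.0, 1.0])
assert abs(np.dot(f(v), w) - np.dot(v, f_adj(w))) < 1e-12

# And f*(phi_w) agrees with the functional <f_adj(w), .> on V:
phi_w = lambda y: np.dot(w, y)
assert abs(f_star(phi_w)(v) - np.dot(f_adj(w), v)) < 1e-12
```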

Does this correspond with our earlier definition, namely that <u, f(v)> = <f(u), v> for all u and v ∈ V? To answer this question, look at the diagram, which now has the form

            f
    V ──────────→ V
    │             │
  s │             │ s
    ↓             ↓
    V* ←────────── V*
            f*
where s(v) ∈ V* is such that s(v)(u) = <v, u>, for all u ∈ V. Now f adj = s−1 ◦ f* ◦ s; that is, the condition f adj = f becomes s−1 ◦ f* ◦ s = f. Since s is an isomorphism, we can equally say that the condition is that f* ◦ s = s ◦ f. So let v be some arbitrary vector in V. We have s ◦ f(v) = f* ◦ s(v). However, remembering that this is an element of V*, we see that this means

(s ◦ f(v))(u) = (f* ◦ s)(v)(u), 

for all uV. But (s ◦ f(v))(u) = <f(v), u> and (f* ◦ s)(v)(u) = <v, f(u)>. Therefore we have

<f(v), u> = <v, f(u)>

for all v and u ∈ V, as expected.
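For real matrices with the standard dot product, this condition says precisely that the matrix of a self-adjoint map is symmetric. A short numerical check (the matrix A and the random test vectors are illustrative):

```python
import numpy as np

# A symmetric real matrix gives a self-adjoint map on R^3 with the
# dot product: <f(v), u> = <v, f(u)> for all u, v.
A = np.array([[2.0,  1.0,  0.0],
              [1.0,  3.0, -1.0],
              [0.0, -1.0,  1.0]])
assert np.array_equal(A, A.T)           # symmetric, hence self-adjoint

f = lambda x: A @ x
rng = np.random.default_rng(0)
for _ in range(5):
    v, u = rng.normal(size=3), rng.normal(size=3)
    assert abs(np.dot(f(v), u) - np.dot(v, f(u))) < 1e-9
```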




This is the last section for this series on Linear Algebra. But that is not to say that there is nothing more that you have to know about the subject. For example, when studying the theory of relativity you will encounter tensors, which are combinations of linear mappings and dual mappings. One speaks of “covariant” and “contravariant” tensors. That is, linear mappings and dual mappings.

But then, proceeding to the general theory of relativity, these tensors are used to describe differential geometry. That is, we no longer have a linear (that is, a vector) space. Instead, we imagine that space is curved, and in order to describe this curvature, we define a thing called the tangent vector space which you can think of as being a kind of linear approximation to the spacial structure near a given point. And so it goes on, leading to more and more complicated mathematical constructions, taking us away from the simple “linear” mathematics which we have seen in this semester.

After a few years of learning the mathematics of contemporary theoretical physics, perhaps you will begin to ask yourselves whether it really makes so much sense after all. Can it be that the physical world is best described by using all of the latest techniques which pure mathematicians happen to have been playing around with in the last few years — in algebraic topology, functional analysis, the theory of complex functions, and so on and so forth? Or, on the other hand, could it be that physics has been losing touch with reality, making constructions similar to the theory of epicycles of the medieval period, whose conclusions can never be verified using practical experiments in the real world?




IMPORTANT NOTE:
This series on Linear Algebra has been taken from the lecture notes prepared by Geoffrey Hemion. I used his notes when studying Linear Algebra for my physics course and it was really helpful. So, I thought that you could also benefit from his notes. The document can be found at his homepage.
