# Linear Algebra: #22 Dual Spaces

Let **V** be a vector space over a field F (and, although it's not really necessary here, we continue to take F = ℜ or ℂ).

**Definition**

The *dual space* to **V** is the set of all linear mappings f : **V** → F. We denote the dual space by **V**\*.

**Examples**

- Let V = ℜⁿ, and let fᵢ be the projection onto the i-th coordinate. That is, if **e**ⱼ is the j-th canonical basis vector, then

  fᵢ(**e**ⱼ) = 1 if i = j, and fᵢ(**e**ⱼ) = 0 otherwise.

  So each fᵢ is a member of **V**\*, for i = 1, …, n, and as we will see, these *dual vectors* form a basis for the dual space.
- More generally, let **V** be any finite dimensional vector space, with some basis {**v**₁, …, **v**ₙ}. Let fᵢ : **V** → F be defined as follows. For an arbitrary vector **v** ∈ **V** there is a unique linear combination

  **v** = a₁**v**₁ + ··· + aₙ**v**ₙ.

  Then let fᵢ(**v**) = aᵢ. Again, fᵢ ∈ **V**\*, and we will see that the n vectors f₁, …, fₙ form a basis of the dual space.
- Let C₀([0, 1]) be the space of continuous functions f : [0, 1] → ℜ. As we have seen, this is a real vector space, and it is not finite dimensional. For each f ∈ C₀([0, 1]) let

  Λ(f) = ∫₀¹ f(x) dx.

  This gives us a linear mapping Λ : C₀([0, 1]) → ℜ. Thus Λ belongs to the dual space of C₀([0, 1]).
- Another vector in the dual space to C₀([0, 1]) is given as follows. Let x ∈ [0, 1] be some fixed point. Then let Γₓ : C₀([0, 1]) → ℜ be defined by Γₓ(f) = f(x), for all f ∈ C₀([0, 1]).
- For this last example, let us assume that **V** is a vector space with a scalar product. (Thus F = ℜ or ℂ.) For each **v** ∈ **V**, let φᵥ(**u**) = <**v**, **u**>. Then φᵥ ∈ **V**\*.
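The first example above can be sketched in a few lines of Python (a minimal illustration; the names `f`, `e` and `table` are mine, not from the notes):

```python
# Coordinate functionals on R^3, as in the first example above.
# f(i) returns the dual vector f_i, the projection onto coordinate i.

def f(i):
    return lambda v: v[i]

# canonical basis vectors e_j of R^3
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# defining property: f_i(e_j) is 1 when i == j and 0 otherwise
table = [[f(i)(e[j]) for j in range(3)] for i in range(3)]
print(table)
```

Each `f(i)` is linear, and together the values f₁(**v**), …, fₙ(**v**) determine **v** completely, which is why these functionals can form a basis of the dual space.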

**Theorem 56**

Let **V** be a finite dimensional vector space (over ℂ) and let **V**\* be the dual space. For each **v** ∈ **V**, let φᵥ : **V** → ℂ be given by φᵥ(**u**) = <**v**, **u**>. Then given an orthonormal basis {**v**₁, …, **v**ₙ} of **V**, we have that {φ_v₁, …, φ_vₙ} is a basis of **V**\*. This is called the *dual basis* to {**v**₁, …, **v**ₙ}.

*Proof*

Let φ ∈ **V**\* be an arbitrary linear mapping φ : **V** → ℂ. But, as always, we remember that φ is uniquely determined by its values (which in this case are simply complex numbers) φ(**v**₁), …, φ(**v**ₙ). Say φ(**v**ⱼ) = cⱼ ∈ ℂ, for each j. Now take some arbitrary vector **v** ∈ **V**. There is the unique expression

**v** = a₁**v**₁ + ··· + aₙ**v**ₙ.

Since the basis is orthonormal, we have φ_vⱼ(**v**) = <**v**ⱼ, **v**> = aⱼ, and therefore

φ(**v**) = a₁c₁ + ··· + aₙcₙ = c₁φ_v₁(**v**) + ··· + cₙφ_vₙ(**v**).

Therefore φ = c₁φ_v₁ + ··· + cₙφ_vₙ, and so {φ_v₁, …, φ_vₙ} generates **V**\*.

To show that {φ_v₁, …, φ_vₙ} is linearly independent, let φ = c₁φ_v₁ + ··· + cₙφ_vₙ be some linear combination, where cⱼ ≠ 0 for at least one j. But then φ(**v**ⱼ) = cⱼ ≠ 0, and thus φ ≠ 0 in **V**\*.
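The proof can be checked numerically. Below is a small sketch in ℝ² (the particular basis and functional are my own choices for illustration): an orthonormal basis obtained by a rotation, an arbitrary functional φ, and the verification that φ = c₁φ_v₁ + c₂φ_v₂ with cⱼ = φ(**v**ⱼ).

```python
import math

# an orthonormal basis {v1, v2} of R^2, obtained by rotating the
# standard basis through an (arbitrarily chosen) angle t
t = 0.3
v1 = (math.cos(t), math.sin(t))
v2 = (-math.sin(t), math.cos(t))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def phi(u):           # an arbitrary element of the dual space
    return 2.0 * u[0] - 5.0 * u[1]

def phi_v(v):         # phi_v(u) = <v, u>, as in the theorem
    return lambda u: dot(v, u)

c1, c2 = phi(v1), phi(v2)   # the coefficients from the proof

u = (0.7, -1.2)             # any test vector
lhs = phi(u)
rhs = c1 * phi_v(v1)(u) + c2 * phi_v(v2)(u)
assert abs(lhs - rhs) < 1e-12
```

The assertion holds for every `u`, since orthonormality makes φ_vⱼ(**u**) exactly the j-th coordinate aⱼ of **u**.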

**Corollary**

dim(**V**\*) = dim(**V**).

**Corollary**

More specifically, we have an isomorphism **V** → **V**\*, such that **v** ↦ φᵥ for each **v** ∈ **V**.

But somehow, this isomorphism doesn't seem to be very “natural”. It is defined in terms of some specific basis of **V**. What if **V** is not finite dimensional, so that we have no basis to work with? For this reason, we do not think of **V** and **V**\* as being “really” just the same vector space. [In case we have a scalar product, then there is a “natural” mapping **V** → **V**\*, where **v** ↦ φᵥ, such that φᵥ(**u**) = <**v**, **u**>, for all **u** ∈ **V**.]

On the other hand, let us look at the dual space of the dual space, (**V**\*)\*. (Perhaps this is a slightly mind-boggling concept at first sight!) We imagine that “really” we just have (**V**\*)\* = **V**. For let Φ ∈ (**V**\*)\*. That means, for each φ ∈ **V**\*, we have Φ(φ) being some complex number. On the other hand, we also have φ(**v**) being some complex number, for each **v** ∈ **V**. Can we uniquely identify each **v** ∈ **V** with some Φ ∈ (**V**\*)\*, in the sense that both always give the same complex numbers, for all possible φ ∈ **V**\*?

Let us say that Φ corresponds to **v** ∈ **V** if Φ(φ) = φ(**v**), for all φ ∈ **V**\*. In fact, if we define Φᵥ by Φᵥ(φ) = φ(**v**), for each φ ∈ **V**\*, then we certainly have a linear mapping Φᵥ : **V**\* → ℂ. On the other hand, given some arbitrary Φ ∈ (**V**\*)\*, do we have a unique **v** ∈ **V** such that Φ(φ) = φ(**v**), for all φ ∈ **V**\*? At least in the case where **V** is finite dimensional, we can affirm that it is true by looking at the dual basis.
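The identification of **v** with Φᵥ is easy to sketch in code (a toy example in ℝ²; the specific vector and functionals are assumptions for illustration):

```python
# Each vector v determines an element Phi(v) of (V*)* by evaluation:
# Phi(v)(phi) = phi(v).

def Phi(v):
    return lambda phi: phi(v)

v = (3.0, 4.0)

# two sample functionals phi in V*
phi1 = lambda u: u[0] + u[1]
phi2 = lambda u: 2.0 * u[0] - u[1]

# Phi(v) applied to phi always gives the same number as phi applied to v
assert Phi(v)(phi1) == phi1(v)
assert Phi(v)(phi2) == phi2(v)
```

Note that this identification needs no basis and no scalar product, which is why (**V**\*)\* is regarded as “naturally” the same as **V** in the finite dimensional case.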

**Dual mappings**

Let **V** and **W** be two vector spaces (where we again assume that the field is ℂ). Assume that we have a linear mapping f : **V** → **W**. Then we can define a linear mapping f\* : **W**\* → **V**\* in a natural way as follows. For each φ ∈ **W**\*, let f\*(φ) = φ ◦ f. So it is obvious that f\*(φ) : **V** → ℂ is a linear mapping. Now assume that **V** and **W** have scalar products, giving us the mappings s : **V** → **V**\* and t : **W** → **W**\*. So we can draw a little “diagram” to describe the situation.

The mappings s and t are isomorphisms, so we can go around the diagram, using the mapping f^adj = s⁻¹ ◦ f\* ◦ t : **W** → **V**. This is the adjoint mapping to f. So we see that in the case **V** = **W**, a self-adjoint mapping f : **V** → **V** is one such that f^adj = f.
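A minimal sketch of the dual mapping f\*(φ) = φ ◦ f, for a concrete (and arbitrarily chosen) f : ℝ² → ℝ³:

```python
# A linear map f : R^2 -> R^3 and its dual f* : (R^3)* -> (R^2)*.

def f(v):
    x, y = v
    return (x + y, 2.0 * x, 3.0 * y)

def f_star(phi):          # f*(phi) = phi ∘ f
    return lambda v: phi(f(v))

phi = lambda w: w[0] + w[1] + w[2]   # an element of W* = (R^3)*

v = (1.0, 2.0)
assert f_star(phi)(v) == phi(f(v))   # the defining property

# f*(phi) is itself linear: check additivity on two vectors
a, b = (1.0, 0.0), (0.0, 1.0)
s = (a[0] + b[0], a[1] + b[1])
assert f_star(phi)(s) == f_star(phi)(a) + f_star(phi)(b)
```

Notice that f\* runs in the opposite direction to f, from **W**\* back to **V**\*, which is exactly why the diagram with s and t is needed to get the adjoint as a map **W** → **V**.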

Does this correspond with our earlier definition, namely that <**u**, f(**v**)> = <f(**u**), **v**> for all **u** and **v** ∈ **V**? To answer this question, look at the diagram again, now with **W** = **V** and t = s, where s(**v**) ∈ **V**\* is such that s(**v**)(**u**) = <**v**, **u**>, for all **u** ∈ **V**. Now f^adj = s⁻¹ ◦ f\* ◦ s; that is, the condition f^adj = f becomes s⁻¹ ◦ f\* ◦ s = f. Since s is an isomorphism, we can equally say that the condition is that f\* ◦ s = s ◦ f. So let **v** be some arbitrary vector in **V**. We have s ◦ f(**v**) = f\* ◦ s(**v**). However, remembering that this is an element of **V**\*, we see that this means

(s ◦ f(**v**))(**u**) = (f\* ◦ s)(**v**)(**u**),

for all **u** ∈ **V**. But (s ◦ f(**v**))(**u**) = <f(**v**), **u**> and (f\* ◦ s)(**v**)(**u**) = s(**v**)(f(**u**)) = <**v**, f(**u**)>. Therefore we have

<f(**v**), **u**> = <**v**, f(**u**)>

for all **v** and **u** ∈ **V**, as expected.
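This final identity is easy to verify numerically for a concrete self-adjoint map — here a real symmetric 2×2 matrix, chosen purely for illustration:

```python
# f given by a symmetric matrix A, so f is self-adjoint with respect
# to the standard real scalar product.

A = [[2.0, 1.0],
     [1.0, 3.0]]

def f(v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def dot(a, b):        # the real scalar product <a, b>
    return a[0] * b[0] + a[1] * b[1]

v, u = (1.0, -2.0), (0.5, 4.0)
assert dot(f(v), u) == dot(v, f(u))   # <f(v), u> = <v, f(u)>
```

For a complex scalar product the corresponding matrix condition would be A = A̅ᵀ (Hermitian) rather than plain symmetry.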

This is the last section for this series on Linear Algebra. But that is not to say that there is nothing more that you have to know about the subject. For example, when studying the theory of relativity you will encounter tensors, which are combinations of linear mappings and dual mappings. One speaks of “covariant” and “contravariant” tensors. That is, linear mappings and dual mappings.

But then, proceeding to the general theory of relativity, these tensors are used to describe differential geometry. That is, we no longer have a linear (that is, a vector) space. Instead, we imagine that space is curved, and in order to describe this curvature, we define a thing called the tangent vector space, which you can think of as being a kind of linear approximation to the spatial structure near a given point. And so it goes on, leading to more and more complicated mathematical constructions, taking us away from the simple “linear” mathematics which we have seen in this semester.

After a few years of learning the mathematics of contemporary theoretical physics, perhaps you will begin to ask yourselves whether it really makes so much sense after all. Can it be that the physical world is best described by using all of the latest techniques which pure mathematicians happen to have been playing around with in the last few years — in algebraic topology, functional analysis, the theory of complex functions, and so on and so forth? Or, on the other hand, could it be that physics has been losing touch with reality, making constructions similar to the theory of epicycles of the medieval period, whose conclusions can never be verified using practical experiments in the real world?

__IMPORTANT NOTE__:

This series on Linear Algebra has been taken from the lecture notes prepared by Geoffrey Hemion. I used his notes when studying Linear Algebra for my physics course, and they were really helpful. So I thought that you could also benefit from them. The document can be found at his homepage.
