# Linear Algebra: #11 Eigenvalues, Eigenspaces, Matrices which can be Diagonalized

**Definition**

Let f : **V** → **V** be a linear mapping of an n-dimensional vector space into itself. A subspace **U** ⊂ **V** is called *invariant* with respect to f if f(**U**) ⊂ **U**. That is, f(**u**) ∈ **U** for all **u** ∈ **U**.

**Theorem 28** Assume that the r-dimensional subspace **U** ⊂ **V** is invariant with respect to f : **V** → **V**. Let A be the matrix representing f with respect to a given basis {**v**₁, . . . , **v**ₙ} of **V**. Then A is similar to a matrix A′ which has the following block upper-triangular form

$$A' = \begin{pmatrix} A_1 & B \\ 0 & C \end{pmatrix},$$

where A₁ is an r × r block, B is r × (n − r), C is (n − r) × (n − r), and 0 denotes the (n − r) × r zero matrix.

*Proof* Let {**u**₁, . . . , **u**ᵣ} be a basis for the subspace **U**. Then extend this to a basis {**u**₁, . . . , **u**ᵣ, **u**ᵣ₊₁, . . . , **u**ₙ} of **V**. The matrix of f with respect to this new basis has the desired form.
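As a numerical sketch of Theorem 28 (using NumPy, with the matrix and the invariant subspace chosen purely for illustration): putting a basis of **U** first in the change-of-basis matrix makes the lower-left block of the transformed matrix vanish.

```python
import numpy as np

# f has matrix A in the standard basis (example matrix chosen for illustration).
A = np.array([[1., 2., 0.],
              [2., 1., 0.],
              [1., 0., 3.]])

# U = span{u1, u2} is invariant: A @ u1 = 3*u1 + u2 and A @ u2 = 3*u2.
u1 = np.array([1., 1., 0.])
u2 = np.array([0., 0., 1.])
u3 = np.array([1., 0., 0.])   # extends {u1, u2} to a basis of R^3

# Change-of-basis matrix P has the basis vectors as columns.
P = np.column_stack([u1, u2, u3])
A_prime = np.linalg.inv(P) @ A @ P   # the matrix of f in the new basis

# The lower-left block (row below r = 2, first r columns) is zero,
# so A is similar to a block upper-triangular matrix.
print(np.round(A_prime, 10))
```

Here r = 2, so the single row below the first two rows has zeros in its first two entries, exactly the form claimed by the theorem.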

**Definition**

Let **U**₁, . . . , **U**ₚ ⊂ **V** be *subspaces*. We say that **V** is the *direct sum of these subspaces* if **V** = **U**₁ + · · · + **U**ₚ, and furthermore, if **v** = **u**₁ + · · · + **u**ₚ with **u**ᵢ ∈ **U**ᵢ for each i, then this expression for **v** is unique. In other words, if **v** = **u**₁ + · · · + **u**ₚ = **u**′₁ + · · · + **u**′ₚ with **u**ᵢ, **u**′ᵢ ∈ **U**ᵢ for each i, then **u**ᵢ = **u**′ᵢ for each i. In this case, one writes **V** = **U**₁ ⊕ · · · ⊕ **U**ₚ.

This immediately gives the following result:

**Theorem 29** Let f : **V** → **V** be such that there exist subspaces **U**ᵢ ⊂ **V**, for i = 1, . . . , p, with **V** = **U**₁ ⊕ · · · ⊕ **U**ₚ, and such that each **U**ᵢ is invariant with respect to f. Then there exists a basis of **V** such that the matrix of f with respect to this basis has the following *block diagonal form*

$$\begin{pmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_p \end{pmatrix},$$

where each block Aᵢ is a square matrix, representing the restriction of f to the subspace **U**ᵢ.

*Proof* Choose the basis to be a union of bases for each of the **U**ᵢ.

A special case is when the invariant subspace is an eigenspace.
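Theorem 29 can be checked numerically as well. In this sketch (the matrix and the two invariant subspaces are again chosen only for illustration), R³ is the direct sum of a two-dimensional invariant subspace and a one-dimensional one, and the union of their bases yields a block diagonal matrix.

```python
import numpy as np

# Bases for U1 and U2 with R^3 = U1 ⊕ U2 (example chosen for illustration).
w1 = np.array([1., 1., 0.])
w2 = np.array([1., -1., 0.])   # {w1, w2} is a basis of U1
w3 = np.array([0., 0., 1.])    # {w3} is a basis of U2
P = np.column_stack([w1, w2, w3])

# A matrix leaving both U1 and U2 invariant:
# A @ w1 = 2*w1,  A @ w2 = w1 + 2*w2,  A @ w3 = 5*w3.
A = np.array([[2.5, -0.5, 0.],
              [0.5,  1.5, 0.],
              [0.,   0.,  5.]])

# In the basis {w1, w2, w3} the matrix of f is block diagonal:
# a 2x2 block A1 (the restriction of f to U1) and a 1x1 block A2 (for U2).
A_blocks = np.linalg.inv(P) @ A @ P
print(np.round(A_blocks, 10))
```

The off-diagonal blocks vanish precisely because f maps each **U**ᵢ back into itself.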

**Definition**

Assume that λ ∈ F is an eigenvalue of the mapping f : **V** → **V**. The set {**v** ∈ **V** : f(**v**) = λ**v**} is called the *eigenspace* of λ with respect to the mapping f. That is, the eigenspace is the set of all eigenvectors with eigenvalue λ, together with the zero vector **0**.

**Theorem 30** Each eigenspace is a subspace of **V**.

*Proof* Let **u**, **w** ∈ **V** be in the eigenspace of λ, and let a, b ∈ F be arbitrary scalars. Then we have

f(a**u** + b**w**) = af(**u**) + bf(**w**) = aλ**u** + bλ**w** = λ(a**u** + b**w**),

so a**u** + b**w** is again in the eigenspace.

Obviously, if λ₁ and λ₂ are two different eigenvalues (λ₁ ≠ λ₂), then the only common element of their eigenspaces is the zero vector **0**. Thus if **V** has a basis consisting of eigenvectors, then we have the situation of Theorem 29. One very particular case is that we have n different eigenvalues, where n is the dimension of **V**.
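The closure argument in the proof of Theorem 30 can be verified directly on a small example (the matrix, the eigenspace vectors, and the scalars below are all chosen just for illustration):

```python
import numpy as np

# Example matrix whose eigenspace for λ = 2 is two-dimensional.
A = np.array([[2., 0., 0.],
              [0., 2., 0.],
              [0., 0., 5.]])
lam = 2.0

u = np.array([1., 0., 0.])   # A @ u = 2*u
w = np.array([0., 1., 0.])   # A @ w = 2*w
a, b = 3.0, -1.5             # arbitrary scalars

v = a * u + b * w
# The linear combination is again an eigenvector for λ = 2, confirming
# that the eigenspace is closed under addition and scalar multiplication.
print(np.allclose(A @ v, lam * v))   # True
```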

**Theorem 31** Let λ₁, . . . , λₙ be eigenvalues of the linear mapping f : **V** → **V**, where λᵢ ≠ λⱼ for i ≠ j. Let **v**₁, . . . , **v**ₙ be eigenvectors for these eigenvalues. That is, **v**ᵢ ≠ **0** and f(**v**ᵢ) = λᵢ**v**ᵢ, for each i = 1, . . . , n. Then the set {**v**₁, . . . , **v**ₙ} is linearly independent.

_{n}*Proof*

Assume to the contrary that there exist a

_{1}, . . . , a

_{n}, not all zero, with

a

_{1}**v**+ · · · + a_{1}_{n}**v**=_{n}**0**.Assume further that as few of the a

_{i}as possible are non-zero. Let a

_{p}be the ﬁrst non-zero scalar. That is, a

_{i}= 0 for i < p, and a

_{p}≠ 0. Obviously some other a

_{k}is non-zero, for some k ≠ p, for otherwise we would have the equation 0 = a

_{p}

**v**, which would imply that

_{p}_{}

**v**=

_{p}**0**, contrary to the assumption that

**v**is an eigenvector. Therefore we have

_{p}But, remembering that λ

_{i}≠ λ

_{j}for i ≠ j, we see that the scalar term for

**v**is zero, yet all other non-zero scalar terms remain non-zero. Thus we have found a new sum with fewer non-zero scalars than in the original sum with the a

_{p}_{i}s. This is a contradiction.

Therefore, in this particular case, the given set of eigenvectors {

**v**, . . . ,

_{1}**v**} form a basis for

_{n}**V**. With respect to this basis, the matrix of the mapping is

*diagonal*, with the diagonal elements being the eigenvalues.
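This diagonalization can be carried out numerically. A minimal sketch, assuming a matrix with n distinct eigenvalues (the example matrix below is chosen for illustration): the eigenvectors form a basis by Theorem 31, and changing to that basis produces a diagonal matrix.

```python
import numpy as np

# A 3x3 example with three distinct eigenvalues 2, 3, 5 (upper triangular,
# so the eigenvalues can be read off the diagonal).
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [0., 0., 5.]])

eigenvalues, V = np.linalg.eig(A)   # columns of V are eigenvectors

# With n = 3 distinct eigenvalues, the eigenvectors are linearly
# independent (Theorem 31), so V is invertible and V^{-1} A V is diagonal.
D = np.linalg.inv(V) @ A @ V
print(np.round(D, 10))   # diagonal, with the eigenvalues on the diagonal
```

Note that with a repeated eigenvalue this can fail: `np.linalg.eig` still returns n columns, but they need not be linearly independent, and then `V` is not invertible.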
