Linear Algebra: #11 Eigenvalues, Eigenspaces, Matrices which can be Diagonalized
Let f : V → V be a linear mapping of an n-dimensional vector space into itself. A subspace U ⊂ V is called invariant with respect to f if f(U) ⊂ U. That is, f(u) ∈ U for all u ∈ U.
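As a quick numerical illustration, here is a minimal Python/NumPy sketch (the matrix and the subspace are invented for the example): for a matrix whose lower-left block is zero, the span of the first standard basis vectors is invariant.

import numpy as np

# Invented example: for this A, the subspace U = span{e1, e2} is
# invariant, since A u has no third component whenever u lies in U.
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 5.0, 4.0],
              [0.0, 0.0, 7.0]])

# Checking the basis vectors of U suffices, by linearity.
for u in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    assert np.allclose((A @ u)[2:], 0.0)   # f(u) stays in U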
Theorem 28
Assume that the r-dimensional subspace U ⊂ V is invariant with respect to f : V → V. Let A be the matrix representing f with respect to a given basis {v1, . . . , vn} of V. Then A is similar to a matrix A' which has the following form

A' = [ B  C ]
     [ 0  D ]

where B is an r × r matrix, C is an r × (n − r) matrix, D is an (n − r) × (n − r) matrix, and 0 denotes the (n − r) × r zero matrix.
Proof
Let {u1, . . . , ur} be a basis for the subspace U, and extend it to a basis {u1, . . . , ur, ur+1, . . . , un} of V. Since f(uj) ∈ U for each j ≤ r, each such f(uj) is a linear combination of u1, . . . , ur alone; thus the first r columns of the matrix of f with respect to this new basis have zeros below the r-th row, which is the desired form.
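The change of basis in this proof can be checked numerically. The following NumPy sketch uses an invented 3 × 3 matrix with a known invariant line: after conjugating by a basis matrix whose first column spans that line, the new matrix has zeros below the invariant block.

import numpy as np

# Invented example: u1 spans an invariant line of A (in fact A u1 = 4 u1).
A = np.array([[3.0, 1.0, 2.0],
              [1.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
u1 = np.array([1.0, 1.0, 0.0])
assert np.allclose(A @ u1, 4.0 * u1)

# Extend {u1} to a basis of R^3 and change basis: A' = P^{-1} A P.
P = np.column_stack([u1, [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
A_prime = np.linalg.inv(P) @ A @ P

# The block form of Theorem 28: zeros below the invariant block.
assert np.allclose(A_prime[1:, 0], 0.0)
print(np.round(A_prime, 3))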
Definition
Let U1, . . . , Up ⊂ V be subspaces. We say that V is the direct sum of these subspaces if every v ∈ V can be written as v = u1 + · · · + up with ui ∈ Ui for each i, and this expression for v is unique. In other words, if v = u1 + · · · + up = u1' + · · · + up' with ui, ui' ∈ Ui for each i, then ui = ui' for each i. In this case, one writes V = U1 ⊕ · · · ⊕ Up.
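A claimed direct sum decomposition can be verified numerically by stacking bases of the summands: the sum is direct and fills V exactly when the combined list of vectors is a basis, i.e. when the stacked matrix has full rank. The subspaces below are invented for illustration.

import numpy as np

# Invented example: U1 = span{(1,0,1)}, U2 = span{(0,1,0), (1,0,-1)}.
B = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 0.0, -1.0]])

# Full rank: the union of the bases is a basis, so R^3 = U1 (+) U2.
assert np.linalg.matrix_rank(B) == 3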
This immediately gives the following result:
Theorem 29
Let f : V → V be such that there exist subspaces Ui ⊂ V, for i = 1, . . . , p, with V = U1 ⊕ · · · ⊕ Up, and such that each Ui is invariant with respect to f. Then there exists a basis of V such that the matrix of f with respect to this basis has the following block form

A' = [ A1   0  · · ·   0 ]
     [  0  A2  · · ·   0 ]
     [  ·   ·  · · ·   · ]
     [  0   0  · · ·  Ap ]

where each block Ai is a square matrix representing the restriction of f to the subspace Ui, and all entries outside these diagonal blocks are zero.
Proof
Choose the basis to be a union of bases for each of the Ui. Since f(Ui) ⊂ Ui, the image of each basis vector of Ui is a linear combination of the basis vectors of Ui alone, which produces the block Ai and zeros elsewhere in those columns.
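The following sketch (invented numbers again) carries this construction out numerically: R^3 splits into two invariant subspaces of the matrix A below, and conjugating by the union of their bases produces the block diagonal form.

import numpy as np

# Invented example: U1 = span{(1,1,0), (1,-1,0)} and U2 = span{e3}
# are both invariant under A.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
P = np.column_stack([[1.0, 1.0, 0.0],
                     [1.0, -1.0, 0.0],
                     [0.0, 0.0, 1.0]])
A_prime = np.linalg.inv(P) @ A @ P

# Off-diagonal blocks vanish: A' = diag(A1, A2) with A1 of size 2 x 2.
assert np.allclose(A_prime[2, :2], 0.0) and np.allclose(A_prime[:2, 2], 0.0)
print(np.round(A_prime, 3))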
A special case is when the invariant subspace is an eigenspace.
Definition
Assume that λ ∈ F is an eigenvalue of the mapping f : V → V. The set {v ∈ V : f(v) = λv} is called the eigenspace of λ with respect to the mapping f. That is, the eigenspace is the set of all eigenvectors with eigenvalue λ, together with the zero vector 0.
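Numerically, the eigenspace of λ is the null space of A − λI, which can be computed from the singular value decomposition. A minimal NumPy sketch with an invented matrix:

import numpy as np

# Invented example: lambda = 2 is an eigenvalue of A with a
# two-dimensional eigenspace.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0

# The eigenspace is the null space of (A - lambda I); the rows of Vt
# belonging to (numerically) zero singular values span it.
_, s, Vt = np.linalg.svd(A - lam * np.eye(3))
rank = int(np.sum(s > 1e-10))
E = Vt[rank:].T                     # columns form a basis of the eigenspace
assert E.shape[1] == 2
assert np.allclose(A @ E, lam * E)  # every column is an eigenvector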
Theorem 30
Each eigenspace is a subspace of V.
Proof
Let u, w ∈ V be in the eigenspace of λ. Let a, b ∈ F be arbitrary scalars. Then we have
f(au + bw) = af(u) + bf(w) = aλu + bλw = λ(au + bw).

Hence au + bw also lies in the eigenspace of λ, so the eigenspace is closed under linear combinations and is therefore a subspace.
Obviously if λ1 and λ2 are two different (λ1 ≠ λ2) eigenvalues, then the only common element of the two eigenspaces is the zero vector 0. Thus if V is the direct sum of the eigenspaces of f, then we have the situation of Theorem 29. One very particular case is that in which there are n different eigenvalues, where n is the dimension of V.
Theorem 31
Let λ1, . . . , λn be eigenvalues of the linear mapping f : V → V, where λi ≠ λj for i ≠ j. Let v1, . . . , vn be eigenvectors for these eigenvalues. That is, vi ≠ 0 and f(vi) = λivi, for each i = 1, . . . , n. Then the set {v1, . . . , vn} is linearly independent.
Proof
Assume to the contrary that there exist a1, . . . , an, not all zero, with
a1v1 + · · · + anvn = 0.
Assume further that as few of the ai as possible are non-zero. Let ap be the first non-zero scalar. That is, ai = 0 for i < p, and ap ≠ 0. Obviously some other ak is non-zero, for some k ≠ p, for otherwise we would have the equation 0 = apvp, which would imply that vp = 0, contrary to the assumption that vp is an eigenvector. Applying f to both sides of the equation above gives 0 = apλpvp + · · · + anλnvn; subtracting λp times the original equation, we therefore have

0 = ap(λp − λp)vp + ap+1(λp+1 − λp)vp+1 + · · · + an(λn − λp)vn.
But, remembering that λi ≠ λj for i ≠ j, we see that the scalar coefficient of vp is zero, yet all the other non-zero coefficients remain non-zero. Thus we have found a new linear relation with fewer non-zero scalars than in the original one. This is a contradiction.
Therefore, in this particular case, the given set of eigenvectors {v1, . . . , vn} forms a basis for V. With respect to this basis, the matrix of the mapping is diagonal, with the diagonal elements being the eigenvalues.
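This conclusion is easy to check numerically: for a matrix with n distinct eigenvalues, the matrix P of eigenvectors is invertible, and P^{-1} A P is diagonal. A NumPy sketch with an invented 3 × 3 example:

import numpy as np

# Invented example: an upper triangular matrix with three distinct
# eigenvalues 4, 2, 1.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors

# Change to the basis of eigenvectors: the matrix becomes diagonal,
# with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
print(np.round(eigenvalues, 3))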