# Linear Algebra: #18 Orthogonal Bases

**V** is now assumed to be either Euclidean or unitary; that is, it is defined over either the real numbers ℜ or the complex numbers ℂ. In either case we have a scalar product

<·,·> : **V** × **V** → F (here, F = ℜ or ℂ).

As always, we assume that **V** is finite dimensional, and thus it has a basis {**v**₁, . . . , **v**ₙ}. Thinking about the canonical basis for ℜⁿ or ℂⁿ, and the inner product as our scalar product, we see that it would be nice if we had

- <**v**ⱼ, **v**ⱼ> = 1, for all j (that is, the basis vectors are *normalized*), and furthermore
- <**v**ⱼ, **v**ₖ> = 0, for all j ≠ k (that is, the basis vectors are an orthogonal set in **V**).

That is to say, {**v**₁, . . . , **v**ₙ} is an *orthonormal* basis of **V**. Unfortunately, most bases are not orthonormal. But this doesn't really matter: starting from any given basis, we can successively alter the vectors in it, gradually changing it into an orthonormal basis. This process is often called the *Gram-Schmidt orthonormalization process*. But first, to show you why orthonormal bases are good, we have the following theorem.

**Theorem 48** Let **V** have the orthonormal basis {**v**₁, . . . , **v**ₙ}, and let **x** ∈ **V** be arbitrary. Then

**x** = <**x**, **v**₁>**v**₁ + · · · + <**x**, **v**ₙ>**v**ₙ.

That is, the coefficients of **x**, with respect to the orthonormal basis, are simply the scalar products with the respective basis vectors.
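As a numerical illustration of Theorem 48 (a hypothetical sketch, not part of the original notes), the coefficients of a vector in ℜ² with respect to a rotated orthonormal basis can be recovered as plain dot products:

```python
import math

# Plain-Python sketch; the usual dot product on R^2 stands in for <.,.>.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# An orthonormal basis of R^2: the standard basis rotated by 45 degrees.
c = math.cos(math.pi / 4)
v1 = (c, c)
v2 = (-c, c)

x = (3.0, 1.0)

# By Theorem 48, x = <x, v1> v1 + <x, v2> v2.
a1, a2 = dot(x, v1), dot(x, v2)
rebuilt = tuple(a1 * p + a2 * q for p, q in zip(v1, v2))
print(rebuilt)  # recovers (3.0, 1.0) up to rounding
```

Note that no system of linear equations has to be solved to find the coefficients a1 and a2; this is the practical advantage of an orthonormal basis.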

*Proof* This follows simply because if **x** = ∑ⱼ aⱼ**v**ⱼ, with j = 1, . . . , n, then we have for each k

<**x**, **v**ₖ> = ∑ⱼ aⱼ<**v**ⱼ, **v**ₖ> = aₖ,

since the basis is orthonormal.

So now to the Gram-Schmidt process. To begin with, if a non-zero vector **v** ∈ **V** is not normalized (that is, its norm is not one), then it is easy to multiply it by a scalar, changing it into a vector with norm one. For we have <**v**, **v**> > 0. Therefore ||**v**|| = √<**v**, **v**> > 0, and we have

|| (1/||**v**||)·**v** || = (1/||**v**||)·||**v**|| = 1.

In other words, we simply multiply the vector by the inverse of its norm.
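In plain Python (a small sketch, with the usual dot product on ℜⁿ standing in for the scalar product), normalization looks like this:

```python
import math

def norm(v):
    # ||v|| = sqrt(<v, v>), with the usual dot product on R^n
    return math.sqrt(sum(x * x for x in v))

v = (3.0, 4.0)
u = tuple(x / norm(v) for x in v)  # multiply v by the inverse of its norm

print(norm(v))                     # 5.0
print(abs(norm(u) - 1.0) < 1e-12)  # True: u has norm one
```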

**Theorem 49** Every finite dimensional vector space **V** which has a scalar product has an orthonormal basis.

*Proof* The proof proceeds by constructing an orthonormal basis {**u**₁, . . . , **u**ₙ} from a given, arbitrary basis {**v**₁, . . . , **v**ₙ}. To describe the construction, we use induction on the dimension n. If n = 1 then there is almost nothing to prove: any non-zero vector is a basis for **V**, and as we have seen, it can be normalized by dividing by the norm. (That is, scalar multiplication with the inverse of the norm.)

So now assume that n ≥ 2, and furthermore assume that the Gram-Schmidt process can be carried out in any (n − 1)-dimensional space. Let **U** ⊂ **V** be the subspace spanned by the first n − 1 basis vectors {**v**₁, . . . , **v**ₙ₋₁}. Since **U** is only (n − 1)-dimensional, our assumption is that there exists an orthonormal basis {**u**₁, . . . , **u**ₙ₋₁} for **U**. Clearly, adding in **v**ₙ gives a new basis {**u**₁, . . . , **u**ₙ₋₁, **v**ₙ} for **V**.

[Since both {**v**₁, . . . , **v**ₙ₋₁} and {**u**₁, . . . , **u**ₙ₋₁} are bases for **U**, we can write each **v**ⱼ as a linear combination of the **u**ₖ's. Therefore {**u**₁, . . . , **u**ₙ₋₁, **v**ₙ} spans **V**, and since the dimension is n, it must be a basis.]

Unfortunately, this last vector, **v**ₙ, might disturb the nice orthonormal character of the other vectors. Therefore, we replace **v**ₙ with the new vector

**w** = **v**ₙ − <**v**ₙ, **u**₁>**u**₁ − · · · − <**v**ₙ, **u**ₙ₋₁>**u**ₙ₋₁.

(A linearly independent set remains linearly independent if one of the vectors has some linear combination of the *other* vectors added on to it; in particular, **w** ≠ **0**.) Taking the scalar product with each **u**ₖ and using orthonormality gives <**w**, **u**ₖ> = <**v**ₙ, **u**ₖ> − <**v**ₙ, **u**ₖ> = 0, so **w** is orthogonal to **U**. Setting **u**ₙ = **w**/||**w**|| then gives the orthonormal basis {**u**₁, . . . , **u**ₙ} of **V**.
