# Linear Algebra: #20 Characterizing Orthogonal, Unitary, and Hermitian Matrices

**20.1 Orthogonal matrices**

Let $V$ be an $n$-dimensional real vector space (that is, over the real numbers $\mathbb{R}$), and let $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ be an orthonormal basis for $V$. Let $f : V \to V$ be an orthogonal mapping, and let $A$ be its matrix with respect to the basis $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$. Then we say that $A$ is an *orthogonal matrix*.

**Theorem 50** The $n \times n$ matrix $A$ is orthogonal $\Leftrightarrow$ $A^{-1} = A^t$. (Recall that if $a_{ij}$ is the $ij$-th element of $A$, then the $ij$-th element of $A^t$ is $a_{ji}$. That is, everything is "flipped over" the main diagonal in $A$.)

*Proof*

For an orthogonal mapping $f$, we have $\langle \mathbf{u}, \mathbf{w} \rangle = \langle f(\mathbf{u}), f(\mathbf{w}) \rangle$ for all vectors $\mathbf{u}$ and $\mathbf{w}$; in particular, $\langle \mathbf{v}_j, \mathbf{v}_k \rangle = \langle f(\mathbf{v}_j), f(\mathbf{v}_k) \rangle$ for all $j$ and $k$. But in the matrix notation, the scalar product becomes the inner product. That is, if $\mathbf{u}$ and $\mathbf{w}$ are written as coordinate column vectors with respect to the orthonormal basis, then

$$\langle \mathbf{u}, \mathbf{w} \rangle = \mathbf{u}^t \mathbf{w}.$$

Since the basis is orthonormal, we have

$$\langle \mathbf{v}_j, \mathbf{v}_k \rangle = \delta_{jk} = \begin{cases} 1, & j = k, \\ 0, & j \neq k. \end{cases}$$

In other words, the matrix whose $jk$-th element is always $\langle \mathbf{v}_j, \mathbf{v}_k \rangle$ is the $n \times n$ identity matrix $I_n$. On the other hand, $\mathbf{v}_j$ is the $j$-th standard column vector in these coordinates, so

$$A\mathbf{v}_j = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{nj} \end{pmatrix}.$$

That is, we obtain the $j$-th column of the matrix $A$. Furthermore, since $\langle \mathbf{v}_j, \mathbf{v}_k \rangle = \langle f(\mathbf{v}_j), f(\mathbf{v}_k) \rangle$, we must have the matrix whose $jk$-th elements are $\langle f(\mathbf{v}_j), f(\mathbf{v}_k) \rangle$ being again the identity matrix. So

$$\langle f(\mathbf{v}_j), f(\mathbf{v}_k) \rangle = (A\mathbf{v}_j)^t (A\mathbf{v}_k) = \sum_{i=1}^n a_{ij} a_{ik}.$$

But now, if you think about it, you see that this is just the $jk$-th element of the matrix multiplication $A^t A$. All together, we have

$$A^t A = I_n.$$

Thus we conclude that $A^{-1} = A^t$. (Note: this was only the proof that $f$ orthogonal $\Rightarrow$ $A^{-1} = A^t$. The proof in the other direction, going backwards through our argument, is easy, and is left as an exercise for you.)
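As a quick numerical illustration (an added sketch, not part of the original notes), the $2 \times 2$ rotation matrix is the standard example of an orthogonal matrix, and we can verify the condition $A^t A = I_n$ of Theorem 50 directly with small hand-rolled matrix helpers:

```python
import math

def transpose(A):
    # Flip rows and columns: the ij-th entry becomes the ji-th.
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    # Plain triple-loop matrix product, fine for small matrices.
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Rotation by an angle t preserves the scalar product, so its matrix is orthogonal.
t = 0.7
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

# Theorem 50: A is orthogonal iff A^t A = I_n.
AtA = matmul(transpose(A), A)
assert all(abs(AtA[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```

The tolerance `1e-12` only accounts for floating-point rounding; algebraically the product is exactly the identity.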

**20.2 Unitary matrices**

**Theorem 51** The $n \times n$ matrix $A$ is unitary $\Leftrightarrow$ $A^{-1} = \overline{A}^t$. (The matrix $\overline{A}$ is obtained by taking the complex conjugates of all the elements of $A$.)

*Proof*

Entirely analogous with the case of orthogonal matrices. One must note, however, that the inner product in the complex case is

$$\langle \mathbf{u}, \mathbf{w} \rangle = \overline{\mathbf{u}}^t \mathbf{w} = \sum_{i=1}^n \overline{u_i}\, w_i,$$

so the calculation leads to $\overline{A}^t A = I_n$ rather than $A^t A = I_n$.
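To illustrate Theorem 51 numerically (an added sketch, not from the original notes), Python's built-in complex numbers suffice; we check $\overline{A}^t A = I_n$ for a small unitary matrix:

```python
def conj_transpose(A):
    # The matrix conj(A)^t: conjugate every entry, then transpose.
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# A small unitary matrix: its columns are orthonormal with respect to
# the complex inner product <u, w> = conj(u)^t w.
A = [[0 + 0j, 1j],
     [1j, 0 + 0j]]

# Theorem 51: A is unitary iff conj(A)^t A = I_n.
P = matmul(conj_transpose(A), A)
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```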

**20.3 Hermitian and symmetric matrices**

**Theorem 52** The $n \times n$ matrix $A$ is Hermitian $\Leftrightarrow$ $A = \overline{A}^t$.

*Proof*

This is again a matter of translating the condition $\langle \mathbf{v}_j, f(\mathbf{v}_k) \rangle = \langle f(\mathbf{v}_j), \mathbf{v}_k \rangle$ into matrix notation, where $f$ is the self-adjoint linear mapping which is represented by the matrix $A$, with respect to the orthonormal basis $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$. We have

$$\langle \mathbf{v}_j, f(\mathbf{v}_k) \rangle = \overline{\mathbf{v}_j}^t (A\mathbf{v}_k) = a_{jk}, \qquad \langle f(\mathbf{v}_j), \mathbf{v}_k \rangle = \overline{(A\mathbf{v}_j)}^t \mathbf{v}_k = \overline{a_{kj}},$$

so the condition says that $a_{jk} = \overline{a_{kj}}$ for all $j$ and $k$; that is, $A = \overline{A}^t$. In particular, we see that in the real case, self-adjoint matrices are symmetric.
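A quick check of Theorem 52 (an added sketch, not part of the original notes): for a Hermitian matrix the $jk$-th entry equals the complex conjugate of the $kj$-th, and in the real case this reduces to symmetry.

```python
def is_hermitian(A):
    # A = conj(A)^t: each entry equals the conjugate of its mirror
    # image across the main diagonal. Works for real entries too,
    # since float.conjugate() returns the float unchanged.
    n = len(A)
    return all(A[j][k] == A[k][j].conjugate()
               for j in range(n) for k in range(n))

# Hermitian: real diagonal, conjugate-mirrored off-diagonal entries.
H = [[2 + 0j, 1 + 1j],
     [1 - 1j, 3 + 0j]]
assert is_hermitian(H)

# In the real case, a_jk = conj(a_kj) is just a_jk = a_kj: symmetry.
S = [[1.0, 5.0],
     [5.0, 2.0]]
assert is_hermitian(S)

# A non-symmetric real matrix fails the condition.
N = [[1.0, 2.0],
     [3.0, 4.0]]
assert not is_hermitian(N)
```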
