# Linear Algebra: #15 Why is the Determinant Important?

*The transformation formula for integrals in higher-dimensional spaces*.

This is a theorem which is usually dealt with in the Analysis III lecture. Let G ⊂ ℜ^{n} be some open region, and let f : G → ℜ be a continuous function. Then the integral

∫_{G} f(**x**) d**x**

has some particular value (assuming, of course, that the integral converges). Now assume that we have a continuously differentiable injective mapping φ : G → ℜ^{n} and a continuous function F : φ(G) → ℜ. Then we have the formula

∫_{φ(G)} F(**y**) d**y** = ∫_{G} F(φ(**x**)) · |det(Dφ(**x**))| d**x**.

Here, Dφ(**x**) is the Jacobian matrix of φ at the point **x**.
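The formula can be checked numerically in a simple case. The following is a minimal sketch (my own example, not from the text), using the polar-coordinate map φ(r, t) = (r cos t, r sin t), whose Jacobian determinant is r; both computations approximate the integral of F(x, y) = x² + y² over the unit disk, whose exact value is π/2.

```python
import math

# Check the change-of-variables formula for the polar-coordinate map
# phi(r, t) = (r cos t, r sin t), with |det D phi| = r.
# We integrate F(x, y) = x^2 + y^2 over the unit disk phi(G),
# where G = (0, 1) x (0, 2*pi).

def integral_cartesian(n=400):
    # Midpoint rule on a grid over [-1, 1]^2, keeping points inside the disk.
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -1.0 + (i + 0.5) * h
            y = -1.0 + (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += (x * x + y * y) * h * h
    return total

def integral_polar(n=400):
    # Same integral written as int_G F(phi(r, t)) * |det D phi| dr dt.
    hr = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * hr
        total += (r * r) * r * hr   # F(phi) = r^2, Jacobian factor = r
    return total * 2.0 * math.pi    # the integrand does not depend on t

print(integral_cartesian())   # both approach pi/2, approximately 1.5708
print(integral_polar())
```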

This formula reflects the geometric idea that the determinant measures the change of the volume of n-dimensional space under the mapping φ.
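For a linear map in the plane this geometric idea can be verified directly. The sketch below (matrix A is my own example) computes the area of the image of the unit square by the shoelace formula, independently of the determinant, and finds the same number:

```python
# For a linear map phi with 2x2 matrix A, the unit square Q maps to a
# parallelogram phi(Q). Its area, computed independently by the shoelace
# formula, equals det(A) (the signed area, since this A preserves
# orientation and the square's corners are listed counterclockwise).

def shoelace_area(pts):
    # Signed area of a polygon given its vertices in order.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

A = [[2.0, 1.0],
     [1.0, 3.0]]   # any invertible 2x2 matrix will do

def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # corners of Q, counterclockwise
image = [apply(A, v) for v in square]       # corners of phi(Q)

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(shoelace_area(image), det_A)          # both are 5.0
```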

If φ is a linear mapping, then take Q ⊂ ℜ^{n} to be the unit cube: Q = {(x_{1}, . . . , x_{n}) : 0 ≤ x_{i} ≤ 1, ∀i}. Then the volume of Q, which we can denote by vol(Q), is simply 1. On the other hand, we have vol(φ(Q)) = det(A), where A is the matrix representing φ with respect to the canonical coordinates for ℜ^{n}. (A negative determinant, giving a negative signed volume, represents an orientation-reversing mapping.)

*The characteristic polynomial*.

Let f : **V** → **V** be a linear mapping, and let **v** be an eigenvector of f with f(**v**) = λ**v**. That means that (f − λ·id)(**v**) = **0**; therefore the mapping f − λ·id : **V** → **V** is singular. Now consider the matrix A representing f with respect to some particular basis of **V**. Since λI_{n} is the matrix representing the mapping λ·id, the difference A − λI_{n} must be a singular matrix. In particular, we have det(A − λI_{n}) = 0.

Another way of looking at this is to take a “variable” x, and then calculate (for example, using the Leibniz formula) the polynomial in x

P(x) = det(A − xI_{n}).

This polynomial is called the *characteristic polynomial* for the matrix A. Therefore we have the theorem:

**Theorem 41**

The zeros of the characteristic polynomial of A are the eigenvalues of the linear mapping f :**V**→**V**which A represents.
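As a concrete check of the theorem (the matrix here is my own example), we can find the zeros of the characteristic polynomial of a 2 × 2 matrix by the quadratic formula and confirm that A − λI is singular at each of them:

```python
import math

# For A = [[2, 1], [1, 2]] the characteristic polynomial is
# P(x) = det(A - x*I) = x^2 - 4x + 3, whose zeros 1 and 3
# should be exactly the eigenvalues of A.

a, b, c, d = 2.0, 1.0, 1.0, 2.0   # entries of A
# P(x) = (a - x)(d - x) - b*c = x^2 - (a + d)x + (a*d - b*c)
tr, det = a + d, a * d - b * c
disc = tr * tr - 4.0 * det
roots = [(tr - math.sqrt(disc)) / 2.0, (tr + math.sqrt(disc)) / 2.0]

for lam in roots:
    # A - lam*I must be singular, i.e. have determinant zero.
    singular_det = (a - lam) * (d - lam) - b * c
    print(lam, singular_det)   # lam = 1.0 and 3.0, each with determinant 0.0
```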

Obviously the degree of the polynomial is n for an n × n matrix A. So let us write the characteristic polynomial in the standard form

P(x) = c_{n}x^{n} + c_{n−1}x^{n−1} + · · · + c_{1}x + c_{0}.

The coefficients c_{0}, . . . , c_{n} are all elements of our field F.

Now the matrix A represents the mapping f with respect to a particular choice of basis for the vector space **V**. With respect to some other basis, f is represented by some other matrix A', which is similar to A. That is, there exists some C ∈ GL(n, F) with A' = C^{−1}AC. But we have

det(A' − xI_{n}) = det(C^{−1}AC − xC^{−1}I_{n}C) = det(C^{−1}(A − xI_{n})C) = det(C^{−1}) det(A − xI_{n}) det(C) = det(A − xI_{n}).

Therefore we have:

**Theorem 42**

The characteristic polynomial is invariant under a change of basis; that is, under a similarity transformation of the matrix.

In particular, each of the coefficients c_{i} of the characteristic polynomial P(x) = c_{n}x^{n} + c_{n−1}x^{n−1} + · · · + c_{1}x + c_{0} remains unchanged after a similarity transformation of the matrix A.
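In the 2 × 2 case this invariance is easy to check by hand, since there P(x) = x² − tr(M)x + det(M), so the coefficients are just the trace and the determinant. A small sketch (the matrices A and C are my own example):

```python
# The characteristic polynomial of a 2x2 matrix M is
# x^2 - tr(M)*x + det(M), so to compare the characteristic polynomials
# of A and A' = C^{-1} A C it suffices to compare traces and determinants.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1],
     [1, 3]]
C = [[1, 1],
     [0, 1]]
C_inv = [[1, -1],
         [0, 1]]   # inverse of C (easy to verify, since det(C) = 1)

A_prime = matmul(matmul(C_inv, A), C)

def trace(M):
    return M[0][0] + M[1][1]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(trace(A), det2(A))              # 5 5
print(trace(A_prime), det2(A_prime))  # 5 5 -- same characteristic polynomial
```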

What is the coefficient c_{n}? Looking at the Leibniz formula, we see that the term x^{n} can only occur in the product of the diagonal entries,

(a_{11} − x)(a_{22} − x) · · · (a_{nn} − x) = (−1)^{n}x^{n} + (−1)^{n−1}(a_{11} + a_{22} + · · · + a_{nn})x^{n−1} + · · · .

Therefore c_{n} = 1 if n is even, and c_{n} = −1 if n is odd. This is not particularly interesting.

So let us go one term lower and look at the coefficient c_{n−1}. Where does x^{n−1} occur in the Leibniz formula? Well, as we have just seen, there certainly is the term

(−1)^{n−1}(a_{11} + a_{22} + · · · + a_{nn})x^{n−1},

which comes from the product of the diagonal elements in the matrix A − xI_{n}. Do any other terms also involve the power x^{n−1}? Let us look at the Leibniz formula more carefully in this situation. We have

det(A − xI_{n}) = Σ_{σ ∈ S_{n}} sign(σ) · (a_{1σ(1)} − xδ_{1σ(1)})(a_{2σ(2)} − xδ_{2σ(2)}) · · · (a_{nσ(n)} − xδ_{nσ(n)}).

Here, δ_{ij} = 1 if i = j. Otherwise, δ_{ij} = 0. Now if σ is a *non-trivial* permutation — not just the identity mapping — then obviously we must have two *different* numbers i_{1} and i_{2}, with σ(i_{1}) ≠ i_{1} and also σ(i_{2}) ≠ i_{2}. Therefore we see that these further terms in the sum can contribute at most n − 2 powers of x. So we conclude that the (n − 1)-st coefficient is

c_{n−1} = (−1)^{n−1}(a_{11} + a_{22} + · · · + a_{nn}).
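The Leibniz expansion above can be carried out mechanically: multiply out the linear factors for each permutation and collect powers of x. The sketch below (my own check, with an arbitrary 3 × 3 integer matrix) confirms that c_{n} = (−1)^{n} and c_{n−1} = (−1)^{n−1} tr(A):

```python
import itertools

# Expand det(A - x*I) by the Leibniz formula, treating each factor
# (a_{i,sigma(i)} - x * delta_{i,sigma(i)}) as a polynomial in x.

def sign(perm):
    # Sign of a permutation, via counting inversions.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def char_poly(A):
    # Returns the coefficients [c_0, c_1, ..., c_n] of det(A - x*I).
    n = len(A)
    coeffs = [0] * (n + 1)
    for perm in itertools.permutations(range(n)):
        # Multiply the factors A[i][perm[i]] - x*(1 if perm[i] == i else 0),
        # each represented as a coefficient list [constant, x-coefficient].
        poly = [1]
        for i in range(n):
            factor = [A[i][perm[i]], -1] if perm[i] == i else [A[i][perm[i]]]
            new = [0] * (len(poly) + len(factor) - 1)
            for p, cp in enumerate(poly):
                for q, cq in enumerate(factor):
                    new[p + q] += cp * cq
            poly = new
        s = sign(perm)
        for k, ck in enumerate(poly):
            coeffs[k] += s * ck
    return coeffs

A = [[1, 2, 0],
     [3, 4, 5],
     [0, 1, 6]]          # n = 3, tr(A) = 11
c = char_poly(A)
print(c)                 # [-17, -23, 11, -1]
print(c[3], c[2])        # -1 = (-1)^3 and 11 = (-1)^2 * tr(A)
```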

**Definition**

Let A be an n × n matrix. The trace of A (in German, the *Spur* of A) is the *sum of the diagonal elements*:

tr(A) = a_{11} + a_{22} + · · · + a_{nn}.

**Theorem 43**

tr(A) remains unchanged under a similarity transformation. (This follows from Theorem 42, since tr(A) = (−1)^{n−1}c_{n−1} is determined by the characteristic polynomial.)

**An example**

Let f : ℜ^{2} → ℜ^{2} be a rotation through the angle θ. Then, with respect to the canonical basis of ℜ^{2}, the matrix of f is

A = ( cos θ  −sin θ ; sin θ  cos θ ),

where the semicolon separates the two rows, so that the characteristic polynomial is P(x) = det(A − xI_{2}) = x^{2} − 2x cos θ + 1. That is to say, if λ ∈ ℜ is an eigenvalue of f, then λ must be a zero of the characteristic polynomial. That is,

λ^{2} − 2λ cos θ + 1 = 0.

But, looking at the well-known formula for the roots of quadratic polynomials, we see that such a λ can only exist if |cos θ| = 1. That is, θ = 0 or π. This reflects the obvious geometric fact that a rotation through any angle other than 0 or π rotates any vector away from its original axis. In any case, the two possible values of θ give the two possible eigenvalues for f, namely +1 and −1.
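The quadratic-formula argument comes down to the sign of the discriminant of λ² − 2λ cos θ + 1. A quick numeric illustration (my own sketch):

```python
import math

# For the rotation matrix R = [[cos t, -sin t], [sin t, cos t]] we have
# tr(R) = 2 cos t and det(R) = 1, so the characteristic polynomial
# x^2 - 2x cos t + 1 has discriminant tr^2 - 4*det = 4(cos^2 t - 1) <= 0.
# Real eigenvalues therefore exist only when |cos t| = 1, i.e. t = 0 or pi.

def discriminant(t):
    return (2.0 * math.cos(t)) ** 2 - 4.0

for t in [0.0, math.pi / 2, math.pi]:
    print(t, discriminant(t))
# t = 0    -> discriminant 0: double root x = +1
# t = pi/2 -> discriminant -4: no real eigenvalues
# t = pi   -> discriminant 0 (up to rounding): double root x = -1
```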
