# Linear Algebra: #8 Systems of Linear Equations

For the moment we put aside *geometry*, and instead we will consider simple linear equations. In particular, we consider a system of m equations in n unknowns:

$$\begin{matrix}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m
\end{matrix}$$

We can also think about this as being a vector equation. That is, if

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \qquad
\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \qquad
\mathbf{b} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix},$$

then our system of linear equations is just the single vector equation

$$A\mathbf{x} = \mathbf{b}.$$

But what is the most obvious way to solve this system of equations? It is a simple matter to write down an algorithm, as follows. The numbers a_{ij} and b_{k} are given (as elements of F), and the problem is to find the numbers x_{i}.

1. Let i := 1 and j := 1.
2. If a_{ij} = 0, then: if a_{kj} = 0 for all i < k ≤ m, set j := j + 1 and repeat this step. Otherwise, find the smallest index k > i such that a_{kj} ≠ 0, and exchange the i-th equation with the k-th equation.
3. Multiply both sides of the (possibly new) i-th equation by a_{ij}^{−1}. Then, for each i < k ≤ m, subtract a_{kj} times the i-th equation from the k-th equation. Therefore, at this stage, after this operation has been carried out, we will have a_{kj} = 0 for all k > i.
4. Set i := i + 1. If i ≤ m and j ≤ n, then return to step 2.

So at this stage, we have transformed the system of linear equations into a system in step form.
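The elimination steps above can be sketched in code. This is an illustrative sketch only (the function name `to_step_form` and the 0-indexed rows are my own choices, not from the notes); using `fractions.Fraction` keeps the arithmetic exact, so the field F is effectively the rationals.

```python
# A sketch of the elimination algorithm above, run on the augmented
# rows (a_{i1}, ..., a_{in}, b_i), with 0-indexed rows and columns.
from fractions import Fraction

def to_step_form(A, b):
    """Transform the system (A | b) into step form, following steps 1-4."""
    m, n = len(A), len(A[0])
    rows = [[Fraction(x) for x in row] + [Fraction(bi)]
            for row, bi in zip(A, b)]
    i = j = 0
    while i < m and j < n:
        # Step 2: find a pivot in column j, at or below row i.
        pivot = next((k for k in range(i, m) if rows[k][j] != 0), None)
        if pivot is None:           # whole column is zero: move right
            j += 1
            continue
        rows[i], rows[pivot] = rows[pivot], rows[i]   # exchange equations
        # Step 3: normalise the pivot row, then clear the column below it.
        inv = Fraction(1) / rows[i][j]
        rows[i] = [inv * x for x in rows[i]]
        for k in range(i + 1, m):
            factor = rows[k][j]
            rows[k] = [x - factor * y for x, y in zip(rows[k], rows[i])]
        i += 1                      # Step 4
        j += 1
    return rows

# Example: x + y = 3, 2x + 4y = 10  becomes  x + y = 3, y = 2.
print(to_step_form([[1, 1], [2, 4]], [3, 10]))
```

Note that step 2's "exchange equations" appears here as a plain row swap, and the zero-column case simply moves one column to the right, exactly as in the algorithm.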

The next thing is to solve the system of equations in step form. The problem is that perhaps there is no solution, or perhaps there are many solutions. The easiest way to decide which case we have is to reorder the variables — that is, the various x_{i} — so that the steps start in the upper left-hand corner and they are all one unit wide. That is, things then look like this:

$$\begin{aligned}
x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
x_k + \cdots + a_{kn}x_n &= b_k \\
0 &= b_{k+1} \\
&\;\;\vdots \\
0 &= b_m
\end{aligned}$$

(Note that this reordering of the variables is like our ﬁrst elementary column operation for matrices.)

So now we observe that:

- If b_{l} ≠ 0 for some k+1 ≤ l ≤ m, then the system of equations has *no solution*.
- Otherwise, if k = n, then the system has precisely one single solution. It is obtained by working backwards through the equations. Namely, the last equation is simply x_{n} = b_{n}, so that is clear. But then, substitute b_{n} for x_{n} in the (n−1)-st equation, and we then have x_{n-1} = b_{n-1} − a_{n-1,n}b_{n}. By this method, we progress back to the first equation and obtain values for all the x_{j}, for 1 ≤ j ≤ n.
- Otherwise, k < n. In this case we can assign *arbitrary* values to the variables x_{k+1}, . . . , x_{n}, and then that fixes the value of x_{k}. But then, as before, we progressively obtain the values of x_{k-1}, x_{k-2}, and so on, back to x_{1}.

This whole procedure for solving systems of linear equations is called “*Gaussian Elimination*”.
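The three cases above can likewise be sketched as a back-substitution routine. Again this is only a sketch (the helper `back_substitute` and its arguments are my own); it assumes the augmented rows are already in the simple step form just described, with unit-wide steps, so that row i has a leading 1 in column i for each of the first k rows.

```python
from fractions import Fraction

def back_substitute(rows, n, free_values=None):
    """Solve an augmented system already in simple step form.
    free_values supplies the arbitrary values of x_{k+1}, ..., x_n."""
    m = len(rows)
    # k = the number of steps (rows with a non-zero coefficient part).
    k = sum(1 for r in rows if any(c != 0 for c in r[:-1]))
    # Case 1: an equation reading 0 = b_l with b_l != 0 means no solution.
    for l in range(k, m):
        if rows[l][-1] != 0:
            return None
    x = [Fraction(0)] * n
    # Case 3: assign the arbitrary values to the free variables.
    # (In case 2, k = n, there are none and the solution is unique.)
    for idx, val in enumerate(free_values or []):
        x[k + idx] = Fraction(val)
    # Work backwards through the equations, as described above.
    for i in range(k - 1, -1, -1):
        x[i] = rows[i][-1] - sum(rows[i][j] * x[j] for j in range(i + 1, n))
    return x

# x + y = 3, y = 2  gives  x = 1, y = 2.
rows = [[Fraction(1), Fraction(1), Fraction(3)],
        [Fraction(0), Fraction(1), Fraction(2)]]
print(back_substitute(rows, 2))
```

Returning `None` for the inconsistent case and accepting `free_values` for the underdetermined case mirrors the trichotomy of the bullet points directly.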

All of this can be looked at in terms of our matrix notation. Let us call the following m × (n+1) matrix the *augmented matrix* for our system of linear equations:

$$\left(\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & & & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{array}\right)$$

Then by means of elementary row and column operations, the matrix is transformed into a new matrix which is in *simple step form*.

**Finding the eigenvectors of linear mappings**

**Definition**

Let **V** be a vector space over a field F, and let f : **V** → **V** be a linear mapping of **V** into itself. An *eigenvector* of f is a non-zero vector **v** ∈ **V** (so we have **v** ≠ **0**) such that there exists some λ ∈ F with f(**v**) = λ**v**. The scalar λ is then called the *eigenvalue* associated with this eigenvector.

So if f is represented by the n × n matrix A (with respect to some given basis of **V**), then the problem of finding eigenvectors and eigenvalues is simply the problem of solving the equation

$$A\mathbf{v} = \lambda\mathbf{v}.$$

But here both λ and **v** are variables. So how should we go about things? Well, as we will see, it is necessary to look at the *characteristic polynomial* of the matrix, in order to find an eigenvalue λ. Then, once an eigenvalue is found, we can consider it to be a constant in our system of linear equations. And the equations become the *homogeneous* system

$$\begin{matrix}
(a_{11} - \lambda)v_1 + a_{12}v_2 + \cdots + a_{1n}v_n = 0 \\
a_{21}v_1 + (a_{22} - \lambda)v_2 + \cdots + a_{2n}v_n = 0 \\
\vdots \\
a_{n1}v_1 + a_{n2}v_2 + \cdots + (a_{nn} - \lambda)v_n = 0
\end{matrix}$$

(that is, all the b_{i} are zero; thus a homogeneous system with matrix A has the form A**v** = **0**), which can be easily solved to give us the (or one of the) eigenvector(s) whose eigenvalue is λ.

Now the n × n *identity matrix* is

$$E = \begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}.$$

Thus we see that an eigenvalue is any scalar λ ∈ F such that the vector equation (A − λE)**v** = **0** has a solution vector **v** ∈ **V** such that **v** ≠ **0**.
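For a 2 × 2 matrix this recipe can be carried out concretely: det(A − λE) = λ² − tr(A)·λ + det(A), so the eigenvalues come from the quadratic formula, and an eigenvector is any non-zero vector annihilated by the singular matrix A − λE. The following sketch assumes real eigenvalues, and the function name `eigen_2x2` is my own.

```python
import math

def eigen_2x2(A):
    """Eigenvalue/eigenvector pairs of a 2x2 matrix, via the
    characteristic polynomial λ² - tr(A)·λ + det(A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)      # assumes real eigenvalues
    pairs = []
    for lam in [(tr + disc) / 2, (tr - disc) / 2]:
        # (A - λE) is singular, so its two rows are proportional; any
        # non-zero vector orthogonal to a non-zero row is an eigenvector.
        rows = [(a - lam, b), (c, d - lam)]
        row = next((r for r in rows if r != (0, 0)), None)
        v = (-row[1], row[0]) if row else (1.0, 0.0)
        pairs.append((lam, v))
    return pairs

# For A = [[2, 1], [1, 2]] the characteristic polynomial is
# λ² - 4λ + 3, so the eigenvalues are 3 and 1.
print(eigen_2x2([[2, 1], [1, 2]]))
```

For larger matrices the same two steps apply, but finding the roots of the characteristic polynomial requires a numerical root-finder, and solving (A − λE)**v** = **0** is exactly the Gaussian elimination of the first half of this section.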

[Given any solution vector **v**, then clearly we can multiply it with any scalar κ ∈ F, and we have

$$(A - \lambda E)(\kappa\mathbf{v}) = \kappa(A - \lambda E)\mathbf{v} = \kappa\mathbf{0} = \mathbf{0}.$$

Therefore, as long as κ ≠ 0, we can say that κ**v** is also an eigenvector whose eigenvalue is λ.]
