
The Ultimate All-In-One Compendium to Learn Linear Algebra: Part 2

Fast-track course on linear algebra for scientists and engineers.

Note: I recommend reviewing Part 1 before moving forward.

Gaussian Elimination

Linear Independence

When solving equations with multiple variables, you must first verify that the system is determined to ensure it has a unique solution. A system is determined when the number of independent equations is equal to the number of variables in the system. So in the case in which you have n variables, you must have n independent equations.

\begin{array}{c} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dotsm+a_{1n}x_n = b_1 \\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dotsm+a_{2n}x_n = b_2 \\ \vdots \\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dotsm+a_{mn}x_n = b_m \end{array}

If the linear system contains more equations than variables (m>n) , it is an overdetermined system; the extra equations must be eliminated (which is only possible if they are redundant) to determine the system. If it has fewer equations than variables (m<n) , it is an underdetermined system, so some variables must be eliminated, for example by fixing their values, to determine the system. An independent equation is one that is not a linear combination of one or more of the remaining equations. Given any set of m vectors, a linear combination of these vectors is an expression of the form:

c_1\vec{a_1}+c_2\vec{a_2}+\dotsm+c_m\vec{a_m}

Where c_1,c_2,...,c_m are any scalars (can include zero).
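
For instance, a minimal NumPy sketch of a linear combination of two vectors (the vectors and scalars below are arbitrary values chosen for illustration):

import numpy as np

# Two example vectors and two arbitrary scalar coefficients
a1 = np.array([1.0, 2.0])
a2 = np.array([2.0, 4.0])
c1, c2 = 3.0, -1.0

# The linear combination c1*a1 + c2*a2
print(c1 * a1 + c2 * a2)   # [1. 2.]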

Consider a simple linear system in which there are two variables x_1,x_2 and two equations:

x_1+2x_2=3

2x_1+4x_2=6

We can rewrite the linear system in matrix form:

Matrix Form

\vec{x}=\braket{x_1,x_2}=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

\mathbf{A}=\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}

\vec{b}=\braket{b_1,b_2}=\begin{bmatrix} 3\\6 \end{bmatrix}

\mathbf{A}\vec{x}=\vec{b}

\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}

Try going back and forth between matrix form and written form of the linear system to prove to yourself that they are equivalent representations of the same expression. Let’s expand the matrix form and see what the result is:

\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} (1\times x_1+2\times x_2) \\ (2\times x_1+4\times x_2) \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}

\begin{align*} x_1+2x_2&=3 \\ 2x_1+4x_2&=6 \end{align*}
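
To see the equivalence numerically, here is a minimal NumPy sketch; the vector x = (1, 1) is just one particular solution of this system, chosen for the check:

import numpy as np

A = np.array([[1, 2],
              [2, 4]])
b = np.array([3, 6])

# x = (1, 1) satisfies both equations: 1 + 2(1) = 3 and 2 + 4(1) = 6
x = np.array([1, 1])

# The matrix-vector product reproduces the left-hand sides of the equations
print(A @ x)                      # [3 6]
print(np.array_equal(A @ x, b))   # True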

Upon closer examination of the linear system, we observe that the second equation is a linear combination of the first. In fact, it is simply the first equation multiplied by two:

\begin{align*} 2\times(x_1+2x_2)&=2\times(3) \\ 2x_1+4x_2&=6 \\ \end{align*}

In this case, one of the two equations is a linear combination of the other and is not independent. Therefore, the system is underdetermined, since we only have one independent equation. It can be difficult to determine linear independence in this way, especially when you work with larger matrices. An easier way to determine linear independence is to rewrite the linear system in the form of an augmented matrix and then compute the rank of the matrix. The augmented matrix determines the system completely, since it collects all of the coefficients and constants in a single matrix. The rank of the augmented matrix gives the number of linearly independent equations in the linear system.
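
For example, NumPy computes the rank directly with np.linalg.matrix_rank ; a minimal sketch using the coefficient matrix above:

import numpy as np

# Coefficient matrix of the example system
A = np.array([[1, 2],
              [2, 4]])

# Rank 1: only one linearly independent row, since row 2 is twice row 1
print(np.linalg.matrix_rank(A))   # 1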

Augmented Matrix

Given a linear system with a coefficient matrix \mathbf{A} , variable vector \vec{x} , and constant vector \vec{b} , the augmented matrix is \mathbf{A} with \vec{b} appended as an extra column.

\mathbf{A}\vec{x}=\vec{b}

\mathbf{A}=\begin{bmatrix} a_{11} & \dotsm & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \dotsm & a_{mn} \end{bmatrix}

\vec{x}=\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \text{ and } \vec{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}

\text{Augmented Matrix: } \mathbf{\tilde{A}} = \left[\begin{array}{ccc|c} a_{11} & \dotsm & a_{1n} & b_1 \\ \vdots & \ddots & \vdots & \vdots \\ a_{m1} & \dotsm & a_{mn} & b_m \end{array}\right]

The line inside the augmented matrix is optional; it serves as a reminder that the final column of the matrix comes from the vector \vec{b} while the remaining columns come from the matrix \mathbf{A} . The augmented matrix can also be written without the augment line, as shown below:

\text{Augmented Matrix: } \mathbf{\tilde{A}} = \begin{bmatrix} a_{11} & \dotsm & a_{1n} & b_1 \\ \vdots & \ddots & \vdots & \vdots \\ a_{m1} & \dotsm & a_{mn} & b_m \end{bmatrix}

Augmented matrix form is an elegant way of representing the linear system, since it can be easily manipulated to determine the number of linearly independent rows and to solve the system. To use the augmented matrix effectively, we must first discuss how to manipulate matrices.
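
In code, the augmented matrix is simply \mathbf{A} with \vec{b} appended as a column; a minimal NumPy sketch using the earlier example:

import numpy as np

A = np.array([[1, 2],
              [2, 4]])
b = np.array([3, 6])

# Append b to A as a final column to form the augmented matrix [A | b]
A_aug = np.column_stack((A, b))
print(A_aug)
# [[1 2 3]
#  [2 4 6]]

# One linearly independent equation, as found before
print(np.linalg.matrix_rank(A_aug))   # 1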

Matrix Manipulation

Similar to how we used Gaussian elimination in Part 1 to solve a linear system algebraically through symbolic manipulation, we can perform Gaussian elimination by manipulating the augmented matrix. Consider the same linear system from before:

\begin{align*}x_1 + x_2 &= 1 \\ 2x_1 - x_2 &=0 \end{align*}

\text{Matrix Form: } \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}

\text{Augmented Matrix Form: } \begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 0 \end{bmatrix}

Now that we have defined the linear system in augmented matrix form, we can manipulate the augmented matrix into an upper triangular form, eliminating a variable in the process, using the row operations described below:

Swap Rows

Swapping rows is the interchange of two rows. For example, let's swap rows 1 and 2 of the augmented matrix:

\begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 0 \end{bmatrix}\xrightarrow{R_1\leftrightarrow R_2}\begin{bmatrix} 2 & -1 & 0 \\ 1 & 1 & 1 \end{bmatrix}

Note: Only rows can be swapped, not columns!
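
A minimal NumPy sketch of the same row swap (fancy indexing is one common way to swap rows in place):

import numpy as np

# Augmented matrix of the example system
A_aug = np.array([[1.0,  1.0, 1.0],
                  [2.0, -1.0, 0.0]])

# Swap rows 1 and 2 (indices 0 and 1); the fancy-indexed right-hand
# side is a copy, so the in-place assignment is safe
A_aug[[0, 1]] = A_aug[[1, 0]]
print(A_aug)
# [[ 2. -1.  0.]
#  [ 1.  1.  1.]]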

Scalar Multiplication

Scalar multiplication is the multiplication of a row by a nonzero constant c . This means each element within that row is multiplied by c . Now, let's multiply row 1 by the scalar -1/2 :

\begin{bmatrix} 2 & -1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \xrightarrow{R_{1'}=-1/2R_1}\begin{bmatrix} -1 & 1/2 & 0 \\ 1 & 1 & 1 \end{bmatrix}
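
The same scaling as a minimal NumPy sketch, starting from the swapped matrix:

import numpy as np

# Augmented matrix after the row swap above
A_aug = np.array([[2.0, -1.0, 0.0],
                  [1.0,  1.0, 1.0]])

# Multiply row 1 by the nonzero scalar -1/2
A_aug[0] *= -0.5
print(A_aug)
# [[-1.   0.5  0. ]
#  [ 1.   1.   1. ]]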

Add Rows

Adding rows involves adding each element of one row to the element in the same column of another row. Now, add row 1 to row 2 to carry out the Gaussian elimination and remove the variable x_1 from row 2:

\begin{bmatrix} -1 & 1/2 & 0 \\ 1 & 1 & 1 \end{bmatrix}\xrightarrow{R_{2'}=R_1+R_2}\begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1\end{bmatrix}
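
And the same row addition as a minimal NumPy sketch, starting from the scaled matrix:

import numpy as np

# Augmented matrix after the swap and scaling steps above
A_aug = np.array([[-1.0, 0.5, 0.0],
                  [ 1.0, 1.0, 1.0]])

# Replace row 2 with row 1 + row 2, eliminating x_1 from row 2
A_aug[1] = A_aug[0] + A_aug[1]
print(A_aug)
# [[-1.   0.5  0. ]
#  [ 0.   1.5  1. ]]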

Now that we have completed the Gaussian elimination, we can see that x_2=2/3 , which is the same result we arrived at before:

\begin{matrix} x_1 & x_2 & b \\ \Bigg\lbrack\begin{matrix} -1 \\ 0\end{matrix} & \begin{matrix} 1/2 \\ 3/2\end{matrix} & \begin{matrix}0 \\ 1\end{matrix}\Bigg\rbrack \end{matrix}

By examining row 2, we can see that the equation gives us 3/2x_2=1 , which, when solved for x_2 , gives x_2=2/3 . Now we can work backwards and solve for x_1 in row 1, which gives the same answer of x_1=1/3 . You may be wondering what to do if you cannot complete Gaussian elimination through row operations. We don't want to go through all that work only to find in the end that the linear system has no solutions or has infinitely many solutions. To deal with this, the rank can be computed to count the number of linearly independent equations, and therefore we can tell whether the linear system is determined, underdetermined, or overdetermined.

Rank

The rank of a matrix \mathbf{A} is the maximum number of linearly independent row vectors of \mathbf{A} . It is computed by counting the nonzero rows of the matrix in row echelon form. Row echelon form (REF) is a form of the matrix in which no further variable eliminations can be performed with row operations. In this form, rows of zeros, if present, are the last rows, and, in each nonzero row, the leftmost nonzero entry is farther to the right than in the previous row. For example, a row echelon form of the augmented matrix from the previous example is:

REF\left(\begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 0 \end{bmatrix}\right)=\begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 0 \end{bmatrix}\xrightarrow{R_1\leftrightarrow R_2}\begin{bmatrix} 2 & -1 & 0 \\ 1 & 1 & 1 \end{bmatrix}\xrightarrow{R_{1'}=-1/2R_1}\begin{bmatrix} -1 & 1/2 & 0 \\ 1 & 1 & 1 \end{bmatrix}\xrightarrow{R_{2'}=R_1+R_2}\begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1\end{bmatrix}

\boxed{REF\left(\begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 0 \end{bmatrix}\right)=\begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1\end{bmatrix}}
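
For comparison, SymPy can compute an echelon form directly; a minimal sketch (REF is not unique, so SymPy's echelon_form may differ from the boxed result by row operations):

from sympy import Matrix

A_aug = Matrix([[1, 1, 1],
                [2, -1, 0]])

# A row echelon form; equivalent to the boxed result up to row operations
# (e.g. Matrix([[1, 1, 1], [0, -3, -2]]))
print(A_aug.echelon_form())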

Row-reduced echelon form (RREF) normalizes the matrix so that the first nonzero element of each row is 1 and that leading 1 is the only nonzero entry in its column (excluding the final augmented column). This is a special case of REF. The RREF of the augmented matrix is found by first eliminating the entry above the pivot in the second column, then scaling row 1 by -1 and row 2 by 2/3:

RREF\left(\begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1 \end{bmatrix}\right)= \begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1\end{bmatrix}\xrightarrow{R_{1'}=-1/3R_2+R_1}\begin{bmatrix} -1 & 0 & -1/3 \\ 0 & 3/2 & 1\end{bmatrix} \xrightarrow{\begin{align*}R_{1'}&=-1R_1\\R_{2'}&=2/3R_2\end{align*}}\begin{bmatrix} 1 & 0 & 1/3 \\ 0 & 1 & 2/3\end{bmatrix}

\boxed{RREF\left(\begin{bmatrix} -1 & 1/2 & 0 \\ 0 & 3/2 & 1 \end{bmatrix}\right)= \begin{bmatrix} 1 & 0 & 1/3 \\ 0 & 1 & 2/3\end{bmatrix}}
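
SymPy's rref method returns the RREF, which is unique, together with the pivot column indices; a minimal sketch verifying the boxed result:

from sympy import Matrix

A_aug = Matrix([[1, 1, 1],
                [2, -1, 0]])

# rref() returns the (unique) RREF and the pivot column indices
rref_matrix, pivot_cols = A_aug.rref()
print(rref_matrix)   # Matrix([[1, 0, 1/3], [0, 1, 2/3]])
print(pivot_cols)    # (0, 1)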

The system has no solution if the RREF contains an inconsistency, that is, a contradiction such as a row in which all the variable coefficients are zero but the final column is a nonzero constant (the bold row below asserts 0=1 ):

\text{Inconsistent Matrix: } \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 0 & 3 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1} \end{bmatrix}

Another example of an inconsistent matrix is one in which the same combination of variables is set equal to two different values (subtracting the bold rows below gives 0=2 ):

\text{Inconsistent Matrix: } \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & \mathbf{1} & 0 & \mathbf{3} \\ 0 & \mathbf{1} & 0 & \mathbf{1} \end{bmatrix}

Otherwise, if no such contradiction appears, we refer to the system as consistent. From the calculated rank, we can determine whether the system has a unique solution, no solution, or infinitely many solutions. Suppose we have an m\times n matrix \mathbf{A} with a rank of r :

Cases of rank:

Unique solution: r=n and the system is consistent.

No solution: the system is inconsistent (equivalently, r is less than the rank of the augmented matrix).

Infinitely many solutions: r<n and the system is consistent; any free variable can be set to an arbitrary value and all other equations can still be solved.
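
These cases can be checked numerically by comparing the rank of \mathbf{A} with the rank of the augmented matrix. Below is a minimal sketch; classify is a hypothetical helper name, and its logic follows the cases above:

import numpy as np

def classify(A, b):
    """Classify the linear system A x = b by comparing ranks."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]                                          # number of variables
    r = np.linalg.matrix_rank(A)                            # rank of A
    r_aug = np.linalg.matrix_rank(np.column_stack((A, b)))  # rank of [A | b]
    if r < r_aug:
        return "no solution (inconsistent)"
    if r == n:
        return "unique solution"
    return "infinitely many solutions"

print(classify([[1, 1], [2, -1]], [1, 0]))  # unique solution
print(classify([[1, 2], [2, 4]], [3, 6]))   # infinitely many solutions
print(classify([[1, 2], [2, 4]], [3, 7]))   # no solution (inconsistent)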

If the rank is equal to the number of variables and the system is consistent, then the system can always be solved using Gaussian elimination. Let’s review by solving a linear system using Gaussian elimination.
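
In practice, for a determined square system you would usually call a library solver rather than eliminate by hand; a minimal sketch using the system from the Matrix Manipulation section:

import numpy as np

# The system x1 + x2 = 1, 2*x1 - x2 = 0
A = np.array([[1.0,  1.0],
              [2.0, -1.0]])
b = np.array([1.0, 0.0])

# np.linalg.solve performs the elimination internally (via LU factorization)
print(np.linalg.solve(A, b))   # [0.33333333 0.66666667]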

Gaussian Elimination Example

Consider the following linear system:

\begin{align*} x_1 -x_2 +x_3&=0\\ -x_1 +x_2 -x_3&=0\\ 10x_2 +25x_3&=90\\ 20x_1+10x_2&=80\end{align*}

First, rewrite the system in matrix form.

\mathbf{A}\vec{x}=\vec{b}

\begin{bmatrix} 1 & -1 & 1\\ -1 & 1 & -1 \\ 0 & 10 & 25 \\ 20 & 10 & 0 \end{bmatrix}\begin{bmatrix} x_1\\x_2\\x_3 \end{bmatrix}=\begin{bmatrix} 0\\0\\90\\80 \end{bmatrix}

Now, write the augmented matrix for the system.

\mathbf{\tilde{A}}=\begin{bmatrix} 1 & -1 & 1 & 0 \\ -1 & 1 & -1 & 0 \\ 0 & 10 & 25 & 90 \\ 20 & 10 & 0 & 80 \end{bmatrix}

Next, using row operations, perform Gaussian elimination to find a row echelon form (REF).

REF\left(\mathbf{\tilde{A}}\right)=REF\left(\begin{bmatrix} 1 & -1 & 1 & 0 \\ -1 & 1 & -1 & 0 \\ 0 & 10 & 25 & 90 \\ 20 & 10 & 0 & 80 \end{bmatrix}\right)

\begin{bmatrix} 1 & -1 & 1 & 0 \\ -1 & 1 & -1 & 0 \\ 0 & 10 & 25 & 90 \\ 20 & 10 & 0 & 80 \end{bmatrix}\xrightarrow{R_{2'}=R_1+R_2}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 10 & 25 & 90 \\ 20 & 10 & 0 & 80 \end{bmatrix}\xrightarrow{R_2\leftrightarrow R_4}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 20 & 10 & 0 & 80 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\begin{bmatrix} 1 & -1 & 1 & 0 \\ 20 & 10 & 0 & 80 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{2'}=-20R_1+R_2}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 30 & -20 & 80 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_2\leftrightarrow R_3}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{3'}=-3R_2+R_3}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{2'}=1/10R_2}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 2.5 & 9 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 2.5 & 9 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{3'}=-1/95R_3}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 2.5 & 9 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\boxed{REF\left(\mathbf{\tilde{A}}\right)=\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 2.5 & 9 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}}

Calculate the reduced row echelon form (RREF).

RREF(\tilde{\mathbf{A}})=RREF(REF(\mathbf{\tilde{A}}))=\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 2.5 & 9 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{2'}=-2.5R_3+R_2}\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{1'}=-R_3+R_1}\begin{bmatrix} 1 & -1 & 0 & -2 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\xrightarrow{R_{1'}=R_2+R_1}\begin{bmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}

\boxed{RREF(\mathbf{\tilde{A}})=\begin{bmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}}

Now, solve for \vec{x} .

\boxed{\vec{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=\begin{bmatrix} 2 \\ 4 \\ 2 \end{bmatrix}}
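
As a sanity check, here is a minimal SymPy sketch that reduces the full augmented matrix from this example and recovers the boxed RREF and solution:

from sympy import Matrix

# Augmented matrix of the example system
A_aug = Matrix([[ 1, -1,  1,  0],
                [-1,  1, -1,  0],
                [ 0, 10, 25, 90],
                [20, 10,  0, 80]])

# The RREF matches the boxed result: x1 = 2, x2 = 4, x3 = 2
rref_matrix, pivots = A_aug.rref()
print(rref_matrix)
# Matrix([[1, 0, 0, 2], [0, 1, 0, 4], [0, 0, 1, 2], [0, 0, 0, 0]])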

Continue the lesson in Part 3 to learn about Gauss-Jordan elimination and matrix eigenvalue problems.
