
The Ultimate All-In-One Compendium to Learn Linear Algebra: Part 3

Fast-track course on linear algebra for scientists and engineers.

Note: I recommend reviewing Part 1 and Part 2 before proceeding.

Linear Systems

Gauss-Jordan Elimination

While Gaussian elimination can solve an augmented matrix of any dimensions, another powerful method, Gauss-Jordan elimination, can be used to solve systems with square coefficient matrices quickly. Whereas Gaussian elimination uses row operations to bring the augmented matrix to row echelon form and then back-substitutes, the approach shown here solves the matrix form of the linear system using the inverse of the coefficient matrix, which is itself computed by Gauss-Jordan elimination (row-reducing all the way to reduced row echelon form). Consider the following linear system:

\mathbf{A}\vec{x}=\vec{b}

If \mathbf{A} is a square matrix, meaning it has dimensions of n\times n , then we can solve the equation algebraically for \vec{x} using the following property:

\mathbf{A}\mathbf{A^{-1}} = \mathbf{A^{-1}}\mathbf{A}=\mathbf{I}

Where \mathbf{A^{-1}} is the inverse of the matrix \mathbf{A} . Since multiplying \mathbf{A} by \mathbf{A^{-1}} gives the identity matrix \mathbf{I} , we can multiply both sides of the equation on the left by the inverse matrix to eliminate \mathbf{A} , leaving \vec{x} on its own.

\mathbf{A^{-1}}\mathbf{A}\vec{x} = \mathbf{A^{-1}}\vec{b}

\mathbf{I}\vec{x}=\mathbf{A^{-1}}\vec{b}

\text{Recall the property: } \mathbf{I}\mathbf{A} = \mathbf{A}

\boxed{\vec{x}=\mathbf{A^{-1}}\vec{b}}

Note that not all matrices have an inverse, however, so this method is not always applicable. If \mathbf{A} has an inverse, it is referred to as a nonsingular matrix; a singular matrix is one that does not have an inverse. To determine whether a matrix is invertible (meaning it is nonsingular), we calculate the determinant of the matrix. If the determinant is nonzero, then \mathbf{A} is nonsingular.

Determinant of a Matrix

The determinant of a matrix is denoted by

D= \det(\mathbf{A})=\begin{vmatrix} a_{11} & a_{12} & \dotsm & a_{1n} \\ a_{21} & a_{22} & \dotsm & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dotsm & a_{nn} \end{vmatrix}

D=\sum_{k=1}^{n}(-1)^{j+k}a_{jk}M_{jk}

Where j is the index of the row along which we expand, k is the column index of the summation, a_{jk} is the element at index j,k , and M_{jk} (called a minor) is the determinant of the submatrix formed by deleting row j and column k of the matrix.

We usually encounter 2\times 2 or 3\times 3 determinants, but by following the same pattern shown below we can compute the determinant of any n\times n matrix.
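The cofactor-expansion formula above translates directly into a short recursive routine. The sketch below is pure Python (the function name `det` is our own choice) and always expands along the first row, i.e. j = 1 throughout.

```python
# Determinant via cofactor expansion along the first row,
# mirroring D = sum_k (-1)^(1+k) * a_1k * M_1k from the text.

def det(A):
    """Determinant of a square matrix given as a list of rows."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # Minor M_1k: delete row 0 and column k.
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

print(det([[2, 1], [6, 4]]))                   # 2
print(det([[5, 0, 6], [3, 1, 2], [0, 0, 1]]))  # 5
```

The recursion bottoms out at a 1×1 matrix, whose determinant is its single entry; each level strips one row and one column, exactly as the minors in the worked examples below do.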

Determinant of a 2\times 2 matrix

D_{2\times 2}=\begin{vmatrix} a & b \\ c & d \end{vmatrix}=\sum_{k=1}^{2}(-1)^{j+k}a_{jk}M_{jk} = (-1)^{1+1}a_{11}M_{11}+(-1)^{1+2}a_{12}M_{12}

M_{11} = \begin{vmatrix} \xcancel{\mathbf{a}} & \xcancel{b} \\ \xcancel{c} & d \end{vmatrix} = \begin{vmatrix}d\end{vmatrix}=d

M_{12} = \begin{vmatrix} \xcancel{a} & \xcancel{\mathbf{b}} \\ c & \xcancel{d} \end{vmatrix} = \begin{vmatrix}c\end{vmatrix}=c

(-1)^{1+1}a_{11}M_{11}+(-1)^{1+2}a_{12}M_{12}=a(d)-b(c)=ad-bc

\boxed{D_{2\times 2}=\begin{vmatrix} a & b \\ c & d \end{vmatrix}=ad-bc}

Example

\mathbf{A}=\begin{bmatrix} 2 & 1 \\ 6 & 4 \end{bmatrix}

\det(\mathbf{A})=\begin{vmatrix} 2 & 1 \\ 6 & 4 \end{vmatrix}=2\times4-1\times6=2

\boxed{\det(\mathbf{A})=2}

Determinant of a 3\times 3 matrix

D_{3\times 3}=\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=\sum_{k=1}^{3}(-1)^{j+k}a_{jk}M_{jk} = (-1)^{1+1}a_{11}M_{11}+(-1)^{1+2}a_{12}M_{12}+(-1)^{1+3}a_{13}M_{13}

M_{11} = \begin{vmatrix} \xcancel{\mathbf{a}} & \xcancel{b} & \xcancel{c} \\ \xcancel{d} & e & f \\ \xcancel{g} & h & i \end{vmatrix} = \begin{vmatrix}e&f\\h&i \end{vmatrix} =ei-fh

M_{12} = \begin{vmatrix} \xcancel{a} & \xcancel{\mathbf{b}} & \xcancel{c} \\ d & \xcancel{e} & f \\ g & \xcancel{h} & i \end{vmatrix} = \begin{vmatrix}d&f\\g&i \end{vmatrix} =di-fg

M_{13} = \begin{vmatrix} \xcancel{a} & \xcancel{b} & \xcancel{\mathbf{c}} \\ d & e & \xcancel{f} \\ g & h & \xcancel{i} \end{vmatrix} = \begin{vmatrix}d&e\\g&h \end{vmatrix} =dh-eg

(-1)^{1+1}a_{11}M_{11}+(-1)^{1+2}a_{12}M_{12}+(-1)^{1+3}a_{13}M_{13}=a(ei-fh)-b(di-fg)+c(dh-eg)

\boxed{D_{3\times3}=\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=a(ei-fh)-b(di-fg)+c(dh-eg)}

Example

\mathbf{A}=\begin{bmatrix} 5 & 0 & 6 \\ 3 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}

\det(\mathbf{A})=\begin{vmatrix} 5 & 0 & 6 \\ 3 & 1 & 2 \\ 0 & 0 & 1 \end{vmatrix}=5(1\times1-2\times0)-0(3\times1-2\times0)+6(3\times0-1\times0)=5-0+0=5

\boxed{\det(\mathbf{A})=5}
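As a quick sanity check, the boxed 3×3 rule can be applied directly to the entries of this example:

```python
# Entries of the example matrix, named as in the boxed 3x3 formula.
a, b, c = 5, 0, 6
d, e, f = 3, 1, 2
g, h, i = 0, 0, 1

# D = a(ei - fh) - b(di - fg) + c(dh - eg)
D = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
print(D)  # 5
```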

If the determinant is nonzero, then \mathbf{A} is nonsingular, so we can proceed to compute the inverse \mathbf{A^{-1}} .

Inverse of a Matrix

The inverse of a nonsingular n\times n matrix \mathbf{A} is a matrix \mathbf{A^{-1}} that satisfies the following rule:

\mathbf{A}\mathbf{A^{-1}}=\mathbf{A^{-1}}\mathbf{A}=\mathbf{I}

To find the inverse of \mathbf{A} , augment \mathbf{A} with the identity matrix \mathbf{I} , and apply row operations to the augmented matrix, transforming the left side into the identity matrix. The resulting matrix on the right side will be the inverse of \mathbf{A} .
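The augment-and-reduce procedure just described can be sketched in a few lines of Python. This is a minimal implementation (the helper name `inverse` is our own choice) that assumes the input matrix is nonsingular; exact fractions keep the entries identical to the hand computations below.

```python
# Gauss-Jordan inversion: form [A | I], reduce the left block to I,
# and read A^{-1} off the right block.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I] with exact arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: bring up a row with a nonzero entry in this column.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the diagonal entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half of [I | A^-1] is the inverse.
    return [row[n:] for row in M]

inv = inverse([[2, 1], [6, 4]])  # recovers the 2x2 result worked below
```

The pivot-swap step is what keeps the reduction going when a diagonal entry happens to be zero; for a singular matrix no such pivot exists and the procedure fails, consistent with the determinant test above.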

Inverse of a 2\times 2 matrix

\mathbf{A}=\begin{bmatrix} a & b \\ c & d \end{bmatrix}

\mathbf{I}=\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}

\left[\begin{array}{c|c}\mathbf{A} & \mathbf{I}\end{array}\right] = \left[\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right]\xrightarrow{\text{Row Operations}}\left[\begin{array}{cc|cc}1 & 0 & a' & b' \\ 0 & 1 & c' & d'\end{array}\right]

\boxed{\mathbf{A^{-1}} = \begin{bmatrix}a' & b' \\ c' & d'\end{bmatrix}}

Example

Using the example 2\times2 matrix from the determinant example above, we already know the determinant is nonzero.

\mathbf{A}=\begin{bmatrix} 2 & 1 \\ 6 & 4 \end{bmatrix}

\det(\mathbf{A})=2\stackrel{\checkmark}{\neq}0

\left[\begin{array}{c|c}\mathbf{A} & \mathbf{I}\end{array}\right]= \left[\begin{array}{cc|cc} 2 & 1 & 1 & 0\\ 6 & 4 & 0 & 1 \end{array}\right]

\left[\begin{array}{cc|cc}2 & 1 & 1 & 0 \\ 6 & 4 & 0 & 1\end{array}\right]\xrightarrow{R_{2'}=-3R_1+R_2}\left[\begin{array}{cc|cc}2 & 1 & 1 & 0 \\ 0 & 1 & -3 & 1\end{array}\right]\xrightarrow{R_{1'}=-R_2+R_1}\left[\begin{array}{cc|cc}2 & 0 & 4 & -1 \\ 0 & 1 & -3 & 1\end{array}\right]

\left[\begin{array}{cc|cc}2 & 0 & 4 & -1 \\ 0 & 1 & -3 & 1\end{array}\right]\xrightarrow{R_{1'}=1/2R_1}\left[\begin{array}{cc|cc}1 & 0 & 2 & -1/2 \\ 0 & 1 & -3 & 1\end{array}\right]

\boxed{\mathbf{A^{-1}}=\begin{bmatrix}2 & -1/2 \\ -3 & 1\end{bmatrix}}
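The boxed result can be confirmed by checking the defining property \mathbf{A}\mathbf{A^{-1}}=\mathbf{I} numerically:

```python
A = [[2, 1], [6, 4]]
A_inv = [[2.0, -0.5], [-3.0, 1.0]]  # the boxed inverse above

# Matrix product A @ A_inv, written out entrywise.
product = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)  # [[1.0, 0.0], [0.0, 1.0]]
```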

Inverse of a 3\times 3 matrix

\mathbf{A}=\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}

\mathbf{I}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

\left[\begin{array}{c|c}\mathbf{A} & \mathbf{I}\end{array}\right]=\left[\begin{array}{ccc|ccc}a & b & c & 1 & 0 & 0 \\ d & e & f & 0 & 1 & 0 \\ g & h & i & 0 & 0 & 1 \end{array}\right]\xrightarrow{\text{Row Operations}}\left[\begin{array}{ccc|ccc}1 & 0 & 0 & a' & b' & c' \\ 0 & 1 & 0 & d' & e' & f' \\ 0 & 0 & 1 & g' & h' & i' \end{array}\right]

\boxed{\mathbf{A^{-1}}=\begin{bmatrix} a' & b' & c' \\ d' & e' & f' \\ g' & h' & i' \end{bmatrix}}

Example

Using the example 3\times3 matrix from the determinant example above, we already know the determinant is nonzero.

\mathbf{A}=\begin{bmatrix} 5 & 0 & 6 \\ 3 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}

\det(\mathbf{A})=5\stackrel{\checkmark}{\neq}0

\left[\begin{array}{c|c}\mathbf{A} & \mathbf{I}\end{array}\right]= \left[\begin{array}{ccc|ccc} 5 & 0 & 6 & 1 & 0 & 0\\ 3 & 1 & 2 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]

\left[\begin{array}{ccc|ccc} 5 & 0 & 6 & 1 & 0 & 0\\ 3 & 1 & 2 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{2'}=-2R_3+R_2}\left[\begin{array}{ccc|ccc} 5 & 0 & 6 & 1 & 0 & 0\\ 3 & 1 & 0 & 0 & 1 & -2 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{1'}=-6R_3+R_1}\left[\begin{array}{ccc|ccc} 5 & 0 & 0 & 1 & 0 & -6\\ 3 & 1 & 0 & 0 & 1 & -2 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]

\left[\begin{array}{ccc|ccc} 5 & 0 & 0 & 1 & 0 & -6\\ 3 & 1 & 0 & 0 & 1 & -2 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{1'}=1/5R_1}\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1/5 & 0 & -6/5\\ 3 & 1 & 0 & 0 & 1 & -2 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{2'}=-3R_1+R_2}\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1/5 & 0 & -6/5\\ 0 & 1 & 0 & -3/5 & 1 & 8/5 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right]

\boxed{\mathbf{A^{-1}}=\begin{bmatrix} 1/5 & 0 & -6/5 \\ -3/5 & 1 & 8/5 \\ 0 & 0 & 1 \end{bmatrix}}

Gauss-Jordan Elimination Example

Consider the following linear system:

\begin{align*} 3x_1 - 2x_2 + x_3 &= 13 \\ -2x_1 + x_2 + 4x_3 &= 11 \\ x_1 + 4x_2 - 5x_3 &= -31 \end{align*}

First, rewrite the system in matrix form.

\mathbf{A}\vec{x}=\vec{b}

\begin{bmatrix} 3 & -2 & 1 \\ -2 & 1 & 4 \\ 1 & 4 & -5 \end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix} 13 \\ 11\\ -31 \end{bmatrix}

Now, compute the determinant to verify the matrix \mathbf{A} is nonsingular. If it is, then we can proceed to solve the equation for \vec{x} using Gauss-Jordan elimination.

\det(\mathbf{A})=\begin{vmatrix} 3 & -2 & 1 \\ -2 & 1 & 4 \\ 1 & 4 & -5 \end{vmatrix} = 3(1\times(-5)-4\times4)-(-2)((-2)\times(-5)-4\times1)+1((-2)\times4-1\times1)=-60

\boxed{\det(\mathbf{A})=-60\stackrel{\checkmark}{\neq}0}

\mathbf{A^{-1}}\mathbf{A}\vec{x}=\mathbf{A^{-1}}\vec{b}

\vec{x}=\mathbf{A^{-1}}\vec{b}

Compute the inverse matrix.

\left[\begin{array}{c|c}\mathbf{A}&\mathbf{I}\end{array}\right]=\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ -2 & 1 & 4 & 0 & 1 & 0 \\ 1 & 4 & -5 & 0 & 0 & 1 \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ -2 & 1 & 4 & 0 & 1 & 0 \\ 1 & 4 & -5 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{2'}=2/3R_1+R_2}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & -\frac{1}{3} & \frac{14}{3} & \frac{2}{3} & 1 & 0 \\[6pt] 1 & 4 & -5 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{2'}=-3R_2}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & -14 & -2 & -3 & 0 \\ 1 & 4 & -5 & 0 & 0 & 1 \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & -14 & -2 & -3 & 0 \\ 1 & 4 & -5 & 0 & 0 & 1 \end{array}\right]\xrightarrow{R_{3'}=-1/3R_1+R_3}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & 1 & -14 & -2 & -3 & 0 \\[6pt] 0 & \frac{14}{3} & -\frac{16}{3} & -\frac{1}{3} & 0 & 1 \end{array}\right]\xrightarrow{R_{3'}=3R_3}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & -14 & -2 & -3 & 0 \\ 0 & 14 & -16 & -1 & 0 & 3 \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & -14 & -2 & -3 & 0 \\ 0 & 14 & -16 & -1 & 0 & 3 \end{array}\right]\xrightarrow{R_{3'}=-14R_2+R_3}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & -14 & -2 & -3 & 0 \\ 0 & 0 & 180 & 27 & 42 & 3 \end{array}\right]\xrightarrow{R_{3'}=1/180R_3}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & 1 & -14 & -2 & -3 & 0 \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & 1 & -14 & -2 & -3 & 0 \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]\xrightarrow{R_{2'}=14R_3+R_2}\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & -2 & 1 & 1 & 0 & 0 \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]\xrightarrow{R_{1'}=2R_2+R_1} \left[\begin{array}{ccc|ccc} 3 & 0 & 1 & \frac{6}{5} & \frac{8}{15} & \frac{7}{15} \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]\xrightarrow{R_{1'}=-R_3+R_1}\left[\begin{array}{ccc|ccc} 3 & 0 & 0 & \frac{21}{20} & \frac{3}{10} & \frac{9}{20} \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]

\left[\begin{array}{ccc|ccc} 3 & 0 & 0 & \frac{21}{20} & \frac{3}{10} & \frac{9}{20} \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]\xrightarrow{R_{1'}=1/3R_1}\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{7}{20} & \frac{1}{10} & \frac{3}{20} \\[6pt] 0 & 1 & 0 & \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] 0 & 0 & 1 & \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{array}\right]

\boxed{\mathbf{A^{-1}}=\begin{bmatrix} \frac{7}{20} & \frac{1}{10} & \frac{3}{20} \\[6pt] \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{bmatrix}}

Now, solve for \vec{x} .

\vec{x}=\mathbf{A^{-1}}\vec{b}=\begin{bmatrix} \frac{7}{20} & \frac{1}{10} & \frac{3}{20} \\[6pt] \frac{1}{10} & \frac{4}{15} & \frac{7}{30} \\[6pt] \frac{3}{20} & \frac{7}{30} & \frac{1}{60} \end{bmatrix}\begin{bmatrix}13 \\ 11 \\ -31\end{bmatrix}=\begin{bmatrix} 13\times \frac{7}{20} + 11\times \frac{1}{10} - 31\times \frac{3}{20} \\[6pt] 13\times \frac{1}{10} + 11\times \frac{4}{15} - 31\times \frac{7}{30} \\[6pt] 13\times \frac{3}{20} + 11\times \frac{7}{30} - 31\times \frac{1}{60} \end{bmatrix}=\begin{bmatrix}1 \\ -3 \\ 4\end{bmatrix}

\boxed{\vec{x}=\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix}=\begin{bmatrix}1 \\ -3 \\ 4\end{bmatrix}}
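Substituting the solution back into the original system is a quick correctness check: \mathbf{A}\vec{x} should reproduce \vec{b} .

```python
A = [[3, -2, 1], [-2, 1, 4], [1, 4, -5]]
x = [1, -3, 4]  # the boxed solution

# Compute A @ x entrywise; this should equal b = (13, 11, -31).
b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(b)  # [13, 11, -31]
```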

Matrix Eigenvalue Problems

Matrix eigenvalue problems stand as a cornerstone in the realm of linear algebra with significant applications in the sciences and engineering disciplines. These problems serve as powerful tools for unraveling the inherent structures of linear transformations, paving the way for profound insights and developing essential skills for engineers and scientists aiming to solve real-world problems. The applications are broad and encompass fields such as signal processing, control systems, and structural analysis.

A matrix eigenvalue problem considers the following equation:

\mathbf{A}\vec{x}=\lambda\vec{x}

Where \mathbf{A} is a square matrix of dimensions n\times n , \lambda is an unknown scalar, and \vec{x} is an unknown vector.

It is immediately obvious that \vec{x}=\vec{0} satisfies the equation for any \mathbf{A} and \lambda , so this solution is not of interest; it is termed the trivial solution. Therefore, we impose the condition \vec{x}\neq\vec{0} . The \lambda solutions are called eigenvalues of \mathbf{A} , and the corresponding \vec{x} solutions are called eigenvectors of \mathbf{A} .

The first step in solving an eigenvalue problem is determining the eigenvalues of the system. To do this, write out the matrix equation as its corresponding system of linear equations. Let’s consider a 2\times 2 matrix for this example.

\mathbf{A}\vec{x}=\lambda\vec{x}

\begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\lambda\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

\begin{align*} a_{11}x_1+a_{12}x_2 &= \lambda x_1 \\ a_{21}x_1 + a_{22}x_2 &= \lambda x_2 \end{align*}

Now, move all terms to the left side of each equation and combine like terms.

\begin{align*}(a_{11}-\lambda)x_1 + a_{12}x_2 &= 0 \\ a_{21}x_1 + (a_{22}-\lambda)x_2 &= 0 \end{align*}

Notice that \lambda is subtracted from each entry along the diagonal of the matrix, so the subtracted term can be written as \lambda times the identity matrix. The system can then be expressed as the following matrix equation.

(\mathbf{A}-\lambda\mathbf{I})\vec{x}=\mathbf{0}

This system has a nontrivial solution if and only if the coefficient determinant is zero:

\det(\mathbf{A}-\lambda\mathbf{I})=0

Now, we can set up the equation to solve for the eigenvalues.

\det(\mathbf{A}-\lambda\mathbf{I})=\begin{vmatrix}(a_{11}-\lambda) & a_{12} \\ a_{21} & (a_{22}-\lambda) \end{vmatrix}=(a_{11}-\lambda)(a_{22}-\lambda)-a_{12}a_{21}=0

\lambda^2-(a_{11}+a_{22})\lambda+a_{11}a_{22}-a_{12}a_{21}=0

To find the eigenvalues we can use the quadratic formula.

\boxed{\lambda=\frac{(a_{11}+a_{22})\pm\sqrt{(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})}}{2}}
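The boxed formula is just the quadratic formula applied to the characteristic polynomial, whose coefficients are the trace a_{11}+a_{22} and the determinant a_{11}a_{22}-a_{12}a_{21} . A small sketch (the helper name `eigenvalues_2x2` is our own; real eigenvalues are assumed):

```python
import math

def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    # Discriminant of lambda^2 - trace*lambda + det = 0.
    disc = math.sqrt(trace ** 2 - 4 * det)  # assumes real eigenvalues
    return ((trace + disc) / 2, (trace - disc) / 2)

print(eigenvalues_2x2([[-5, 2], [2, -2]]))  # (-1.0, -6.0)
```

For a symmetric matrix like the example below, the discriminant is always nonnegative, so the real-eigenvalue assumption holds.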

Having found the eigenvalues, the final step is to solve for the eigenvectors by substituting the eigenvalues.

\lambda_1=\frac{(a_{11}+a_{22})+\sqrt{(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})}}{2}\\ \lambda_2=\frac{(a_{11}+a_{22})-\sqrt{(a_{11}+a_{22})^2-4(a_{11}a_{22}-a_{12}a_{21})}}{2}

First, substitute \lambda_1 .

\begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\lambda_1\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

\begin{align*} (a_{11}-\lambda_1)x_1 + a_{12}x_2 &= 0 \\ a_{21}x_1 + (a_{22}-\lambda_1)x_2 &= 0 \end{align*}

Then, choose a value for x_1 and solve for x_2 to obtain the eigenvector. (Any nonzero scalar multiple of an eigenvector is also an eigenvector, which is why we are free to choose x_1 .) Let’s work through an example eigenvalue problem.
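One concrete way to make the choice systematic: from the first equation (a_{11}-\lambda)x_1 + a_{12}x_2 = 0 , picking x_1 = a_{12} forces x_2 = \lambda - a_{11} . The helper below is our own sketch and assumes at least one off-diagonal entry is nonzero.

```python
def eigenvector_2x2(A, lam):
    (a11, a12), (a21, a22) = A
    if a12 != 0:
        # (a11 - lam)*a12 + a12*(lam - a11) = 0, so this pair solves row 1.
        return [a12, lam - a11]
    # Otherwise use the second equation: a21*x1 + (a22 - lam)*x2 = 0.
    return [lam - a22, a21]

print(eigenvector_2x2([[-5, 2], [2, -2]], -6))  # [2, -1]
```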

Matrix Eigenvalue Problem Example

Consider the following eigenvalue problem.

\mathbf{A}\vec{x}=\lambda\vec{x}

\begin{bmatrix}-5 & 2 \\ 2 & -2\end{bmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix}=\lambda\begin{bmatrix}x_1 \\ x_2\end{bmatrix}

Find the eigenvalues.

\det(\mathbf{A}-\lambda\mathbf{I})=\begin{vmatrix}(-5-\lambda) & 2 \\ 2 & (-2-\lambda)\end{vmatrix}=0

\lambda^2+7\lambda+6=0\to(\lambda+6)(\lambda+1)=0

\boxed{\lambda_1,\lambda_2=-6,-1}

Find the eigenvectors.

\lambda_1=-6

\begin{bmatrix}-5 & 2 \\ 2 & -2\end{bmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix}=-6\begin{bmatrix}x_1 \\ x_2\end{bmatrix}

\begin{align*}-5x_1+2x_2&=-6x_1\\ 2x_1-2x_2&=-6x_2\end{align*}

\begin{align*}x_1+2x_2&=0\\ 2x_1+4x_2&=0\end{align*}

\text{Select }x_1=2\text{: } x_2=-1

\vec{x_1}=\begin{bmatrix}2\\-1\end{bmatrix}

\lambda_2=-1

\begin{bmatrix}-5 & 2 \\ 2 & -2\end{bmatrix}\begin{bmatrix}x_1 \\ x_2\end{bmatrix}=-1\begin{bmatrix}x_1 \\ x_2\end{bmatrix}

\begin{align*}-5x_1+2x_2&=-x_1\\ 2x_1-2x_2&=-x_2\end{align*}

\begin{align*}-4x_1+2x_2&=0\\ 2x_1-x_2&=0\end{align*}

\text{Select }x_1=1\text{: } x_2=2

\vec{x_2}=\begin{bmatrix}1\\2\end{bmatrix}

\boxed{\vec{x_1},\vec{x_2}=\begin{bmatrix}2\\-1\end{bmatrix},\begin{bmatrix}1\\2\end{bmatrix}}
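Both eigenpairs can be verified against the defining equation \mathbf{A}\vec{x}=\lambda\vec{x} : for each pair, \mathbf{A}\vec{x} equals \lambda\vec{x} , i.e. (2,-1) scales by -6 and (1,2) scales by -1.

```python
A = [[-5, 2], [2, -2]]
pairs = [(-6, [2, -1]), (-1, [1, 2])]  # (eigenvalue, eigenvector)

for lam, v in pairs:
    # Compute A @ v entrywise and compare with lambda * v.
    Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    assert Av == [lam * x for x in v]
print("both eigenpairs satisfy A x = lambda x")
```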
