\( \newcommand{\mat}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vxy}[2]{\begin{bmatrix}#1 \\ #2\end{bmatrix}} \newcommand{\sm}[1]{\bigl[\begin{smallmatrix}#1\end{smallmatrix}\bigr]} \newcommand{\A}[0]{\mathbf{A}} \newcommand{\L}[0]{\mathbf{L}} \newcommand{\D}[0]{\mathbf{D}} \newcommand{\U}[0]{\mathbf{U}} \newcommand{\I}[0]{\mathbf{I}} \newcommand{\BB}[0]{\mathbf{B}} \newcommand{\CC}[0]{\mathbf{C}} \newcommand{\RR}[0]{\mathbf{R}} \newcommand{\TT}[0]{\mathbf{T}} \newcommand{\xx}[0]{\mathbf{x}} \newcommand{\bb}[0]{\mathbf{b}} \newcommand{\pp}[0]{\mathbf{p}} \newcommand{\qq}[0]{\mathbf{q}} \newcommand{\uu}[0]{\mathbf{u}} \newcommand{\vv}[0]{\mathbf{v}} \)

Linear Algebra, Part 7: Gauss-Jordan Elimination

posted March 7, 2014

In this post we'll look at how to use Gauss-Jordan elimination to find the inverse of a matrix.

In Part 4 of this series we looked at how to perform elimination to solve for a single solution vector \(\xx\) in \(\A\xx=\bb\). Now we'll look at an extension of Gaussian elimination, called Gauss-Jordan elimination, which we can use to find the entire matrix \(\A^{-1}\) if it exists.

Suppose we have

\[ \A = \mat{1 & 2 & 3 \\ 1 & 3 & 5 \\ 2 & 7 & 9} \]

and we need to find its inverse. In other words, we want to find the matrix \(\A^{-1}\) such that \(\A\A^{-1}=\I\). Written another way, we want to find the column vectors \(\A^{-1}_i\) in

\[ \A \mat{\A^{-1}_1 & \A^{-1}_2 & \A^{-1}_3} = \I. \]

This is equivalent to saying we need to solve three different systems:

\[ \begin{align*} \A\A^{-1}_1 &= \I_1 \\ \A\A^{-1}_2 &= \I_2 \\ \A\A^{-1}_3 &= \I_3 \end{align*} \]
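Before working this out by hand, note that this column-by-column view translates directly into a numerical sanity check. Here's a sketch using NumPy, where `np.linalg.solve` stands in for the elimination we're about to do manually:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 3, 5],
              [2, 7, 9]], dtype=float)
I = np.eye(3)

# Solve A x = I_i for each column I_i of the identity; stacking the
# three solution vectors as columns gives A^{-1}.
cols = [np.linalg.solve(A, I[:, i]) for i in range(3)]
A_inv = np.column_stack(cols)

print(A_inv)      # [[ 8/3, -1, -1/3], [-1/3, -1, 2/3], [-1/3, 1, -1/3]]
print(A @ A_inv)  # recovers the identity, up to floating-point error
```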

To solve the first system, we'd write the right-hand side \(\I_1\) next to \(\A\) and proceed with elimination on the augmented matrix:

\[ \mat{1 & 2 & 3 & 1 \\ 1 & 3 & 5 & 0 \\ 2 & 7 & 9 & 0}. \]

Once elimination and back-substitution have completed, the above would be transformed from \([\,\A \mid \I_1\,]\) to \([\,\I \mid \A^{-1}_1\,]\). Gauss-Jordan elimination solves all three systems simultaneously, transforming \([\,\A \mid \I\,]\) to \([\,\I \mid \A^{-1}\,]\). We just write all of \(\I\) next to \(\A\) and perform the elimination process on the augmented matrix:

\[ \mat{1 & 2 & 3 & 1 & 0 & 0 \\ 1 & 3 & 5 & 0 & 1 & 0 \\ 2 & 7 & 9 & 0 & 0 & 1} \]
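In code, forming this augmented matrix is just a matter of stacking \(\A\) and \(\I\) side by side; continuing the NumPy sketch from above:

```python
aug = np.hstack([A, np.eye(3)])  # the 3x6 augmented matrix [A | I]
```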

The elimination steps follow, one row operation at a time:

\[ \overset{R_2 \,-\, R_1}{\implies} \mat{1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 2 & 7 & 9 & 0 & 0 & 1} \overset{R_3 \,-\, 2R_1}{\implies} \mat{1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & 3 & 3 & -2 & 0 & 1} \] \[ \overset{R_3 \,-\, 3R_2}{\implies} \mat{1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 2 & -1 & 1 & 0 \\ 0 & 0 & -3 & 1 & -3 & 1} \overset{R_2 \,+\, \tfrac{2}{3}R_3}{\implies} \mat{1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 0 & -\tfrac{1}{3} & -1 & \tfrac{2}{3} \\ 0 & 0 & -3 & 1 & -3 & 1} \] \[ \overset{R_1 \,+\, R_3}{\implies} \mat{1 & 2 & 0 & 2 & -3 & 1 \\ 0 & 1 & 0 & -\tfrac{1}{3} & -1 & \tfrac{2}{3} \\ 0 & 0 & -3 & 1 & -3 & 1} \overset{R_1 \,-\, 2R_2}{\implies} \mat{1 & 0 & 0 & \tfrac{8}{3} & -1 & -\tfrac{1}{3} \\ 0 & 1 & 0 & -\tfrac{1}{3} & -1 & \tfrac{2}{3} \\ 0 & 0 & -3 & 1 & -3 & 1} \]

Lastly, we divide each row by its pivot on the left side (here only the third row actually changes, divided by \(-3\)) to yield \([\,\I \mid \A^{-1}\,]\):

\[ \implies \mat{1 & 0 & 0 & \tfrac{8}{3} & -1 & -\tfrac{1}{3} \\ 0 & 1 & 0 & -\tfrac{1}{3} & -1 & \tfrac{2}{3} \\ 0 & 0 & 1 & -\tfrac{1}{3} & 1 & -\tfrac{1}{3}} \]
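The whole procedure is mechanical enough to translate directly into code. Below is a minimal sketch of the algorithm in plain Python (the function name `gauss_jordan_inverse` is just for illustration): it builds \([\,\A \mid \I\,]\), eliminates above and below each pivot, and finally divides each row by its pivot:

```python
def gauss_jordan_inverse(A):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I].

    A minimal sketch: it only skips zero pivots rather than doing full
    partial pivoting, and assumes A is invertible.
    """
    n = len(A)
    # Build the augmented matrix [A | I] as a list of rows of length 2n.
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot; swap it up.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0.0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Zero out this column in every other row -- below *and* above,
        # which is what makes this Gauss-Jordan rather than plain Gaussian.
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col] / aug[col][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The left block is now diagonal; divide each row by its pivot and
    # return the right block, which is A^{-1}.
    return [[aug[i][j] / aug[i][i] for j in range(n, 2 * n)] for i in range(n)]

A = [[1, 2, 3],
     [1, 3, 5],
     [2, 7, 9]]
for row in gauss_jordan_inverse(A):
    print(row)
# [2.666..., -1.0, -0.333...]   i.e.   8/3, -1, -1/3
# [-0.333..., -1.0, 0.666...]         -1/3, -1,  2/3
# [-0.333..., 1.0, -0.333...]         -1/3,  1, -1/3
```

A production implementation would also choose the largest available pivot in each column (partial pivoting) to control rounding error; the sketch above only skips exact zeros.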

Afterword

In this post we've seen a technique for using Gaussian elimination as a building block to solve a more complex problem: rather than solving for a single vector, we solved for a whole matrix. In the next post in this series I'll cover \(\A=\L\D\U\) matrix factorization.