In this post we'll look at how to construct an \(\A=\L\D\U\) factorization of an invertible matrix.
There are numerous useful factorizations of matrices, but \(\A = \L\U\) (or \(\A=\L\D\U\)) is the first one we come to. We automatically get \(\U\) as a by-product of the elimination process: once elimination finishes, and before back-substitution is carried out, we have an upper-triangular matrix \(\U\). See Part 4 of this series for a worked example.
Since elimination yields \(\U\), we can view elimination as a function which takes a matrix \(\A\) and produces a new matrix \(\U\). That function performs the sequence of multiplications by the elimination matrices \(\E_{ij}\) mentioned in Part 5 of this series, one for each entry eliminated below the diagonal. So we have
\[ (\E_{n,n-1}\cdots\E_{31}\E_{21})\A = \U. \]
Manipulating that algebraically, we can invert the elimination steps so that \((\E_{n,n-1}\cdots\E_{31}\E_{21})^{-1}\U = \A\). Each \(\E_{ij}\) is lower triangular, so this inverted elimination process is lower triangular too. Writing it \(\L\), we have \(\A = \L\U\). Now we have the matrix factors \(\L\) and \(\U\).
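As a sketch of the idea, here's what elimination-as-a-function might look like in NumPy (not part of this series, just an illustration). It assumes every pivot is nonzero, so no row exchanges are needed; each subtraction of a multiple of the pivot row is one \(\E_{ij}\) step, and the multiplier it used lands in \(\L\):

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination without row exchanges.

    Returns (L, U) with A = L @ U. Assumes every pivot is
    nonzero, so no permutation is needed.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n - 1):            # column being eliminated
        for i in range(j + 1, n):     # rows below the pivot
            m = U[i, j] / U[j, j]     # multiplier for the E_ij step
            U[i, :] -= m * U[j, :]    # apply the elimination step
            L[i, j] = m               # its inverse is recorded in L
    return L, U

A = np.array([[1.0, 2.0], [3.0, 4.0]])
L, U = lu_no_pivot(A)
# L is unit lower triangular, U is upper triangular, and L @ U == A
```

Note that \(\L\) is assembled without any extra work: inverting an elimination step just means putting the multiplier back, which is why the multipliers drop straight into \(\L\)'s subdiagonal.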
In terms of the typical representation \(\A\xx = \bb\), we instead have
\[ \begin{align*} \A\xx &= \bb \\ \L\U\xx &= \bb \end{align*} \]
and
\[ \begin{align*} \xx &= \A^{-1}\bb \\ \xx &= \U^{-1}\L^{-1}\bb. \end{align*} \]
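In practice we never form \(\U^{-1}\) or \(\L^{-1}\) explicitly: \(\L\xx=\bb\)-shaped systems are solved directly by forward substitution, and \(\U\xx=\bb\)-shaped systems by back-substitution. A minimal sketch (the \(\L\), \(\U\), and \(\bb\) values here are my own example, not from the post):

```python
import numpy as np

# Factors of A = [[1, 2], [3, 4]] and an example right-hand side.
L = np.array([[1.0, 0.0], [3.0, 1.0]])
U = np.array([[1.0, 2.0], [0.0, -2.0]])
b = np.array([5.0, 11.0])

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L, top row first."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U, bottom row first."""
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# x = U^{-1} L^{-1} b, computed as two triangular solves.
x = back_sub(U, forward_sub(L, b))
# x == [1.0, 2.0], since [[1, 2], [3, 4]] @ [1, 2] == [5, 11]
```

The two triangular solves cost \(O(n^2)\) each, which is why factoring once and reusing \(\L\) and \(\U\) for many right-hand sides pays off.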
Lastly, we can factor the pivots from \(\U\) by dividing each row in \(\U\) by its pivot and then placing the pivots in a separate diagonal matrix \(\D\). For example:
\[ \begin{align*} \A &= \mat{1 & 2 \\ 3 & 4} \\ \L\U &= \mat{1 & 0 \\ 3 & 1}\mat{1 & 2 \\ 0 & -2} \\ \L\D\U &= \mat{1 & 0 \\ 3 & 1}\mat{1 & 0 \\ 0 & -2}\mat{1 & 2 \\ 0 & 1} \end{align*} \]
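The \(\D\) step above is mechanical enough to show in a couple of lines of NumPy (again just an illustration, not part of the series): collect the diagonal of \(\U\) into \(\D\), then divide each row of \(\U\) by its pivot to leave a unit-diagonal upper-triangular factor.

```python
import numpy as np

# U from eliminating A = [[1, 2], [3, 4]].
U_full = np.array([[1.0, 2.0], [0.0, -2.0]])

D = np.diag(np.diag(U_full))                   # pivots on the diagonal
U_unit = U_full / np.diag(U_full)[:, None]     # each row divided by its pivot

# D @ U_unit reproduces the original U, so A = L @ D @ U_unit.
```

Dividing each row by its pivot is left-multiplication by \(\D^{-1}\), so this is just the identity \(\U = \D(\D^{-1}\U)\).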
In this post we've looked at how elimination produces matrix factors \(\L\) and \(\U\). Next time I'll cover symmetric matrices and transposes.