
### Block matrix decompositions

• The main formula we need concerns the inverse of a $2\times 2$ block matrix:
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}$$

We take the scenic route, obtaining several other results that will be useful later. In fact, essentially the entire theory of recursive algorithms for autoregressive modeling follows from these block-matrix results. The development follows that of Appendix A in [KSH00]; we omit many extensions.

• Consider the matrix-vector equation
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$

Interpreting this as two equations, if we multiply the top equation by $-CA^{-1}$ and add it to the bottom, we eliminate $x_1$, obtaining
$$\left(D - CA^{-1}B\right) x_2 = b_2 - CA^{-1}b_1,$$

which is easily solved for $x_2$. The quantity $D - CA^{-1}B$ is called the Schur complement of $A$ and will be denoted $\Delta_A \triangleq D - CA^{-1}B$. To solve for $x_1$, we take our solution for $x_2$ and substitute it into the top equation, which has been left alone: $x_1 = A^{-1}\left(b_1 - B x_2\right)$.
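The elimination/substitution solve above can be sketched numerically. This is an illustrative example, not code from the source: the block names `A`, `B`, `C`, `D` and right-hand sides `b1`, `b2` follow the text, and the random well-conditioned blocks are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 2
A = rng.standard_normal((n1, n1)) + 4 * np.eye(n1)  # diagonally loaded so A is invertible
B = rng.standard_normal((n1, n2))
C = rng.standard_normal((n2, n1))
D = rng.standard_normal((n2, n2)) + 4 * np.eye(n2)
b1 = rng.standard_normal(n1)
b2 = rng.standard_normal(n2)

# Elimination: (D - C A^{-1} B) x2 = b2 - C A^{-1} b1
Ainv = np.linalg.inv(A)
schur_A = D - C @ Ainv @ B                       # Schur complement of A
x2 = np.linalg.solve(schur_A, b2 - C @ Ainv @ b1)

# Substitution into the (untouched) top equation: x1 = A^{-1}(b1 - B x2)
x1 = np.linalg.solve(A, b1 - B @ x2)

# Cross-check against a direct solve of the full block system
M = np.block([[A, B], [C, D]])
x_direct = np.linalg.solve(M, np.concatenate([b1, b2]))
print(np.allclose(np.concatenate([x1, x2]), x_direct))  # True
```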

• The elimination step is equivalent to multiplying both sides on the left by the matrix $E$, and the substitution step (using $x_2$ rather than $b_2$) is equivalent to multiplying on the right by $S$, where
$$E = \begin{bmatrix} I & 0 \\ -CA^{-1} & I \end{bmatrix}, \qquad S = \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}.$$

• According to the elimination and substitution steps, these matrices block-diagonalize the original matrix:
$$E \begin{bmatrix} A & B \\ C & D \end{bmatrix} S = \begin{bmatrix} A & 0 \\ 0 & \Delta_A \end{bmatrix}.$$

• Note that to reverse the action of adding a multiple of one equation to another, we subtract that same multiple; that is, $E^{-1}$ and $S^{-1}$ are obtained by flipping the sign of the off-diagonal block. Hence, we may write:
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ CA^{-1} & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & \Delta_A \end{bmatrix} \begin{bmatrix} I & A^{-1}B \\ 0 & I \end{bmatrix} \qquad (14)$$
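The block LDU factorization (14) is easy to verify numerically. The following sketch (random illustrative blocks, not from the source) builds the unit-triangular factors and the block-diagonal middle factor explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 4 * np.eye(3)
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 4 * np.eye(2)

Ainv = np.linalg.inv(A)
schur_A = D - C @ Ainv @ B                       # Delta_A
I3, I2 = np.eye(3), np.eye(2)
Z32, Z23 = np.zeros((3, 2)), np.zeros((2, 3))

L = np.block([[I3, Z32], [C @ Ainv, I2]])        # inverse of the elimination matrix E
Dmid = np.block([[A, Z32], [Z23, schur_A]])      # block-diagonal middle factor
U = np.block([[I3, Ainv @ B], [Z23, I2]])        # inverse of the substitution matrix S

M = np.block([[A, B], [C, D]])
print(np.allclose(L @ Dmid @ U, M))  # True
```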

• Alternatively, we could have done the elimination step by adding a multiple ($-BD^{-1}$) of the bottom equation to the top. This gives a second block decomposition:
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & BD^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \Delta_D & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} I & 0 \\ D^{-1}C & I \end{bmatrix} \qquad (15)$$

where $\Delta_D \triangleq A - BD^{-1}C$ is the Schur complement of $D$.

• From the block decompositions we get inversion formulas:
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B\,\Delta_A^{-1}\,CA^{-1} & -A^{-1}B\,\Delta_A^{-1} \\ -\Delta_A^{-1}\,CA^{-1} & \Delta_A^{-1} \end{bmatrix} \qquad (16)$$

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \Delta_D^{-1} & -\Delta_D^{-1}\,BD^{-1} \\ -D^{-1}C\,\Delta_D^{-1} & D^{-1} + D^{-1}C\,\Delta_D^{-1}\,BD^{-1} \end{bmatrix} \qquad (17)$$
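As a quick sanity check of formula (16), the block inverse assembled from $A^{-1}$ and $\Delta_A^{-1}$ can be compared against a direct inverse. The random example blocks are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 4 * np.eye(3)
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 4 * np.eye(2)

Ai = np.linalg.inv(A)
Si = np.linalg.inv(D - C @ Ai @ B)               # Delta_A^{-1}

# Assemble the four blocks of (16)
Minv_blocks = np.block([
    [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
    [-Si @ C @ Ai,              Si],
])

M = np.block([[A, B], [C, D]])
print(np.allclose(Minv_blocks, np.linalg.inv(M)))  # True
```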

• Matrix inversion lemma. An important result comes by equating block elements of the two inversion formulas, e.g., the upper-left blocks of (16) and (17):
$$\left(A - BD^{-1}C\right)^{-1} = A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} \qquad (18)$$
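A direct numerical check of (18), with randomly generated (diagonally loaded, hence invertible) matrices assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 5 * np.eye(4)
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 4))
D = rng.standard_normal((2, 2)) + 5 * np.eye(2)

Ai = np.linalg.inv(A)
# Left-hand side of (18): inverse of the Schur complement of D
lhs = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)
# Right-hand side: correction to A^{-1} via the Schur complement of A
rhs = Ai + Ai @ B @ np.linalg.inv(D - C @ Ai @ B) @ C @ Ai
print(np.allclose(lhs, rhs))  # True
```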

• The matrix inversion lemma arises naturally when considering any time-update step. Time updates to a correlation matrix $R$ involve appending a new data (block) column $x$ to the data matrix, so that $R \to R + xx^T$. Here (18) yields an efficient way of inverting the sum $R + xx^T$.
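The rank-one case of this time update can be sketched as follows. Taking $B = x$, $C = x^T$, and $D = -1$ in (18) gives the Sherman-Morrison formula, which updates $R^{-1}$ after $R \to R + xx^T$ in $O(n^2)$ operations instead of re-inverting in $O(n^3)$. The symmetric positive definite `R` below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
G = rng.standard_normal((n, n))
R = G @ G.T + np.eye(n)                 # symmetric positive definite "correlation" matrix
x = rng.standard_normal(n)

Rinv = np.linalg.inv(R)
v = Rinv @ x                            # R^{-1} x (R symmetric, so x^T R^{-1} = v^T)
# Sherman-Morrison: (R + x x^T)^{-1} = R^{-1} - R^{-1} x x^T R^{-1} / (1 + x^T R^{-1} x)
Rinv_new = Rinv - np.outer(v, v) / (1.0 + x @ v)

print(np.allclose(Rinv_new, np.linalg.inv(R + np.outer(x, x))))  # True
```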
