- The main formula we need concerns the inverse of a block matrix:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}$$

We take the scenic route, obtaining several other results that will be useful later. In fact, essentially the entire theory of recursive algorithms for autoregressive modeling comes from these block matrix results. The development follows that of Appendix A in [KSH00]; we omit many extensions.

- Consider the matrix-vector equation:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix}$$

Interpreting this as two equations, if we multiply the top equation by $-CA^{-1}$ and add it to the bottom, we eliminate $x$, obtaining:

$$(D - CA^{-1}B)\,y = v - CA^{-1}u$$

which is easily solved for $y$. The quantity $D - CA^{-1}B$ is called the *Schur complement* of $A$ and will be denoted $\Delta_A$. To solve for $x$, we take our solution for $y$ and substitute it into the top equation, which has been left alone: $x = A^{-1}(u - By)$.

- The elimination step is equivalent to multiplying (both sides)
on the left by a matrix $L$, and the substitution step (using
$A^{-1}(u - By)$ rather than $x$) is equivalent to multiplying on the right by a matrix $U$, where

$$L = \begin{bmatrix} I & 0 \\ -CA^{-1} & I \end{bmatrix}, \qquad U = \begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}$$

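The elimination-and-substitution recipe can be checked numerically. Below is a minimal pure-Python sketch using scalar (1×1) blocks and made-up values, so every block operation is ordinary arithmetic:

```python
# Solve [[A, B], [C, D]] [x; y] = [u; v] by block elimination,
# using hypothetical scalar (1x1) blocks for simplicity.
A, B, C, D = 4.0, 1.0, 2.0, 3.0
u, v = 9.0, 7.0

# Schur complement of A: Delta_A = D - C A^{-1} B
Delta_A = D - C * (1 / A) * B

# Eliminate x: (D - C A^{-1} B) y = v - C A^{-1} u
y = (v - C * (1 / A) * u) / Delta_A

# Back-substitute into the untouched top equation: x = A^{-1} (u - B y)
x = (1 / A) * (u - B * y)

print(x, y)  # prints 2.0 1.0
```

Plugging the solution back into both original equations confirms it satisfies the system.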
- According to the elimination and substitution steps, these
matrices
*block-diagonalize* the original matrix:

$$\begin{bmatrix} I & 0 \\ -CA^{-1} & I \end{bmatrix}\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & \Delta_A \end{bmatrix}$$

- Note that to reverse the action of adding a multiple of one equation
to another, we subtract that multiple. Hence, we may write:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & 0 \\ CA^{-1} & I \end{bmatrix}\begin{bmatrix} A & 0 \\ 0 & \Delta_A \end{bmatrix}\begin{bmatrix} I & A^{-1}B \\ 0 & I \end{bmatrix}$$

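This block factorization is easy to verify numerically. The sketch below again assumes scalar blocks (hypothetical values), so each factor is an ordinary 2×2 matrix:

```python
# Check the block factorization
#   [[A, B], [C, D]] = [[I, 0], [C A^{-1}, I]] [[A, 0], [0, Delta_A]] [[I, A^{-1} B], [0, I]]
# with scalar blocks; the outer factors are the inverses of L and U.
def matmul(X, Y):
    # Plain 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B, C, D = 4.0, 1.0, 2.0, 3.0
Delta_A = D - C * (1 / A) * B          # Schur complement of A

L_inv = [[1, 0], [C / A, 1]]           # inverse of the elimination matrix L
Diag  = [[A, 0], [0, Delta_A]]
U_inv = [[1, B / A], [0, 1]]           # inverse of the substitution matrix U

M = matmul(L_inv, matmul(Diag, U_inv))
print(M)  # prints [[4.0, 1.0], [2.0, 3.0]]
```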
- Alternatively, we could have done the elimination step by adding a
multiple ($-BD^{-1}$) of the bottom equation to the top. This
gives a second block decomposition:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I & BD^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} \Delta_D & 0 \\ 0 & D \end{bmatrix}\begin{bmatrix} I & 0 \\ D^{-1}C & I \end{bmatrix}$$

where $\Delta_D = A - BD^{-1}C$ is the Schur complement of $D$.

- From the block decompositions we get inversion formulas:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B\Delta_A^{-1}CA^{-1} & -A^{-1}B\Delta_A^{-1} \\ -\Delta_A^{-1}CA^{-1} & \Delta_A^{-1} \end{bmatrix} = \begin{bmatrix} \Delta_D^{-1} & -\Delta_D^{-1}BD^{-1} \\ -D^{-1}C\Delta_D^{-1} & D^{-1} + D^{-1}C\Delta_D^{-1}BD^{-1} \end{bmatrix}$$

**Matrix inversion lemma** An important result comes by equating block elements of the two inversion formulas, e.g., the upper-left blocks:

$$(A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B\,(D - CA^{-1}B)^{-1}\,CA^{-1}$$

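The lemma can be sanity-checked numerically. A minimal sketch with scalar (1×1) blocks and hypothetical values:

```python
# Numerical check of the matrix inversion lemma with scalar blocks:
#   (A - B D^{-1} C)^{-1} = A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1}
A, B, C, D = 4.0, 1.0, 2.0, 3.0

lhs = 1.0 / (A - B * (1.0 / D) * C)
rhs = 1.0 / A + (1.0 / A) * B * (1.0 / (D - C * (1.0 / A) * B)) * C * (1.0 / A)

print(lhs, rhs)  # both approximately 0.3
```

With matrix blocks the same identity holds; only the order of the (non-commuting) factors must be kept as written.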
- The matrix inversion lemma arises naturally when considering
any
*time-update* step. Time updates to a data matrix $A$ involve adding a block column to $A$, e.g., $\bar{A} = [A \;\; a]$. Hence $\bar{A}\bar{A}^T = AA^T + aa^T$. Here (18) gives an efficient way of inverting the sum $AA^T + aa^T$ given $(AA^T)^{-1}$.

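Such a rank-one time update can be carried out with the Sherman–Morrison form of the lemma. The pure-Python sketch below (hypothetical numbers, symmetric $R$ playing the role of $AA^T$) updates a stored inverse instead of re-inverting:

```python
# Rank-one time update via the matrix inversion lemma (Sherman-Morrison form):
#   (R + a a^T)^{-1} = R^{-1} - (R^{-1} a)(R^{-1} a)^T / (1 + a^T R^{-1} a)
# The outer-product form of the correction uses the symmetry of R.
def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (p, q), (r, s) = M
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

R = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical symmetric "old" matrix A A^T
a = [1.0, 2.0]                 # hypothetical new column a

Ri = inv2(R)
Ria = [Ri[0][0] * a[0] + Ri[0][1] * a[1],      # R^{-1} a
       Ri[1][0] * a[0] + Ri[1][1] * a[1]]
denom = 1.0 + a[0] * Ria[0] + a[1] * Ria[1]    # 1 + a^T R^{-1} a

# Updated inverse, touching only previously computed quantities
Rnew_inv = [[Ri[i][j] - Ria[i] * Ria[j] / denom for j in range(2)]
            for i in range(2)]

# Direct inverse of R + a a^T, for comparison
direct = inv2([[R[0][0] + a[0] * a[0], R[0][1] + a[0] * a[1]],
               [R[1][0] + a[1] * a[0], R[1][1] + a[1] * a[1]]])
```

For an $n \times n$ matrix, the update costs $O(n^2)$ operations instead of the $O(n^3)$ of a fresh inversion, which is what makes recursive algorithms efficient.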
Copyright © Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
