In the previous post I introduced an as-far-as-I-know novel method for performing progressive least squares optimisation with spherical basis functions. Here, I’ll go into more detail about how it works, and also derive my original, approximate method from the corrected version.

Many thanks to Peter-Pike Sloan for providing the first part of this derivation.

We’ll be dealing with spherical integrals for the sake of this post, but everything is equally applicable to hemispheres by restricting the integration domain. For example:

$$\int f(\omega) \, d\omega$$

will be used as shorthand for 'the integral over the sphere of the function $f(\omega)$', where $\omega$ is a direction vector. All integrals will be taken with respect to $\omega$.

$f(\omega)$ is taken to mean the value of the function we’re trying to fit in direction $\omega$; this value will usually be obtained using Monte Carlo sampling.

We’ll also assume fixed basis functions parameterised only by their direction, such that $B_i(\omega)$ is the value of the ith basis function in direction $\omega$. The basis functions will be evaluated by multiplying with a per-basis amplitude $b_i$ and summing, such that the result in direction $\omega$ is given by:

$$R(\omega) = \sum_i b_i \, B_i(\omega)$$

In the case of spherical Gaussians, $B_i(\omega) = e^{\lambda_i (\mu_i \cdot \omega - 1)}$ (where $\mu_i$ is the lobe axis and $\lambda_i$ its sharpness), and $b_i$ is the lobe amplitude, so $b_i \, B_i(\omega)$ is the value of the ith lobe evaluated in direction $\omega$.
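To make the notation concrete, here’s a minimal Python sketch of evaluating a set of spherical Gaussian basis functions and the reconstruction $R(\omega)$; the lobe axes, sharpnesses, and amplitudes are arbitrary example values of my own, not taken from the posts.

```python
import numpy as np

def sg_basis(omega, mu, lam):
    """Evaluate every basis function B_i(omega) = exp(lambda_i * (mu_i . omega - 1))."""
    return np.exp(lam * (mu @ omega - 1.0))

def reconstruct(omega, mu, lam, b):
    """Evaluate the fit R(omega) = sum_i b_i * B_i(omega)."""
    return b @ sg_basis(omega, mu, lam)

# Hypothetical example: two lobes on the +Z and +X axes.
mu = np.array([[0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])
lam = np.array([8.0, 8.0])
b = np.array([1.0, 0.5])

omega = np.array([0.0, 0.0, 1.0])
print(reconstruct(omega, mu, lam, b))
```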

Our goal is to minimise the squared difference between $R(\omega)$ and $f(\omega)$ so that the fit matches the original function as closely as possible. Mathematically, that can be expressed as:

$$\min_{b_1, \ldots, b_n} \int \left( \sum_i b_i \, B_i(\omega) - f(\omega) \right)^2 d\omega$$

To minimise, we differentiate with respect to each unknown amplitude $b_i$ and then set the derivative to 0.

Let $E = \int \left( \sum_j b_j \, B_j(\omega) - f(\omega) \right)^2 d\omega$. Therefore, $\frac{\partial E}{\partial b_i} = \int 2 \, B_i(\omega) \left( \sum_j b_j \, B_j(\omega) - f(\omega) \right) d\omega$ for each $b_i$.

Therefore, by setting $\frac{\partial E}{\partial b_i} = 0$,

$$\sum_j b_j \int B_i(\omega) \, B_j(\omega) \, d\omega = \int B_i(\omega) \, f(\omega) \, d\omega$$

or, in matrix form, $A\mathbf{b} = \mathbf{r}$, where $A_{ij} = \int B_i(\omega) \, B_j(\omega) \, d\omega$, $\mathbf{b}$ is the vector of amplitudes, and $r_i = \int B_i(\omega) \, f(\omega) \, d\omega$ are the raw moments.

At this step, we now have a method for producing a Monte Carlo estimate of the raw moments $\mathbf{r}$: as each sample comes in, multiply it by each basis function and add it to the estimate for each lobe. This is in fact what was done for the naïve projection used in The Order: 1886. To reconstruct the lobe amplitudes we need to multiply by the inverse of $A$:

$$\mathbf{b} = A^{-1} \, \mathbf{r}$$

This is a perfectly valid method of performing least squares without storing all of the samples at every step, although it can be noisier than if all samples were used to perform the fit. However, it does require a large matrix multiplication to reconstruct the amplitudes, which makes it unsuitable for progressive rendering.
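As a concrete (if unoptimised) sketch of that batch approach, the snippet below estimates both the raw moments $\mathbf{r}$ and the matrix $A$ from uniform Monte Carlo samples and then solves for the amplitudes. The target function and lobe parameters are placeholders of my own choosing; in practice $A$ (and its inverse) could be precomputed analytically, since the basis functions are fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n):
    """Uniformly distributed directions on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sg_basis(dirs, mu, lam):
    """B[s, i] = exp(lambda_i * (mu_i . omega_s - 1)) for every sample s and lobe i."""
    return np.exp(lam * (dirs @ mu.T - 1.0))

# Hypothetical setup: a placeholder target function and two example lobes.
def f(dirs):
    return np.maximum(dirs[:, 2], 0.0) ** 4

mu = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
lam = np.array([6.0, 6.0])

n = 100_000
dirs = uniform_sphere(n)
B = sg_basis(dirs, mu, lam)        # shape (n, lobes)
domega = 4.0 * np.pi / n           # Monte Carlo weight for uniform sphere sampling

r = B.T @ f(dirs) * domega         # r_i  ~= integral of B_i * f
A = B.T @ B * domega               # A_ij ~= integral of B_i * B_j
b = np.linalg.solve(A, r)          # amplitudes: b = A^{-1} r
print(b)
```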

In the ‘running average’ algorithm, we want to reconstruct the amplitudes as every sample comes in so that the results can be displayed at every iteration. There are therefore a few more steps we need to perform.

Let’s take the above equation and look at it for a single sample $\omega_s$:

$$\sum_j b_j \, B_i(\omega_s) \, B_j(\omega_s) = B_i(\omega_s) \, f(\omega_s)$$

We can rearrange this to solve for a single amplitude $b_i$, replacing the per-sample $B_i(\omega_s) \, B_i(\omega_s)$ factor on the left-hand side with its spherical integral:

$$b_i = \frac{B_i(\omega_s) \left( f(\omega_s) - \sum_{j \neq i} b_j \, B_j(\omega_s) \right)}{\int B_i(\omega) \, B_i(\omega) \, d\omega}$$

This is effectively the equation that is evaluated at each step of the ‘running average’ algorithm. Each per-sample estimate is accumulated and averaged to give a Monte Carlo estimate of the true value of $b_i$ for the function.
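Concretely, writing $b_i^{(s)}$ for the estimate produced by sample $s$, the accumulated average after $N$ samples can be maintained incrementally; this is just the standard incremental-mean identity, written out here for clarity:

$$\bar{b}_i^{(N)} = \bar{b}_i^{(N-1)} + \frac{1}{N} \left( b_i^{(N)} - \bar{b}_i^{(N-1)} \right) = \frac{1}{N} \sum_{s=1}^{N} b_i^{(s)}$$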

It’s worth noting that there’s an inherent inaccuracy here, since each successive $b_i$ estimate is based on previous estimates; an early high-variance estimate will propagate throughout the rest of the solve process. If the average values for the other amplitudes $b_j$ were known and exact then this method would also be exact; however, that would defeat the purpose! In practice, though, this inaccuracy tends to have a very small impact.

One effective way to combat this issue is to gradually increase the per-sample weights over time. Anecdotally, I can say that using an exponential weighting scheme provides a quality boost with very noisy input compared to uniform weighting, with only a very slight reduction in quality when the samples already represent the true function value. If the total sample count is unknown, as in progressive rendering, a weighting that depends only on the number of samples seen so far seems to work reasonably well.

In this equation, the integral in the denominator can be calculated using Monte Carlo integration in the same way that the numerator is. In fact, it turns out that computing both of them in lockstep improves the accuracy of the algorithm, since any sampling bias in the numerator will be partially balanced out by the bias in the denominator. However, it’s also true that the integral estimate may be wildly inaccurate at small sample counts; therefore, to balance that out, I recommend clamping the estimator for the integral to be at least the true integral. Alternatively, it’s possible to always use the precomputed true integral in the denominator and only estimate the numerator, although this results in slightly increased error.
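Putting the pieces above together, here’s a rough Python sketch of how the corrected ‘running average’ update could look, including the lockstep Monte Carlo estimate of the denominator and the clamp against the true integral. The lobe setup and target function are again placeholder values, and the exact arrangement (what gets accumulated, and in which order) is my own reading rather than a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_sphere(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Hypothetical setup: two example lobes and a placeholder target function.
mu = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
lam = np.array([6.0, 6.0])
def f(d):
    return max(d[2], 0.0) ** 4

# True integral of B_i * B_i over the sphere (closed form, see later in the post).
true_self_integral = 2.0 * np.pi * (1.0 - np.exp(-4.0 * lam)) / (2.0 * lam)

b = np.zeros(len(lam))        # running amplitude estimates
num_avg = np.zeros(len(lam))  # running average of the numerator estimates
den_avg = np.zeros(len(lam))  # running average of the denominator estimates

for s, d in enumerate(uniform_sphere(50_000)):
    w = np.exp(lam * (mu @ d - 1.0))   # B_i(omega_s) for every lobe
    estimate = b @ w                   # current fit evaluated at omega_s
    k = 1.0 / (s + 1.0)                # running-average weight for this sample

    for i in range(len(lam)):
        other = estimate - b[i] * w[i]                  # contribution of the other lobes
        num = 4.0 * np.pi * w[i] * (f(d) - other)       # sample estimate of the numerator
        den = 4.0 * np.pi * w[i] * w[i]                 # sample estimate of the B_i^2 integral

        num_avg[i] += (num - num_avg[i]) * k
        den_avg[i] += (den - den_avg[i]) * k

        # Clamp the denominator estimate to at least the true integral, as suggested above.
        b[i] = num_avg[i] / max(den_avg[i], true_self_integral[i])

print(b)
```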



My original algorithm was created by experimentation. I thought it would be worth going through why it worked and the approximations it made. Note that none of this is necessary to understand the corrected equation – it’s purely for curiosity and interest!

Effectively, at each step, it solved the following equation:

If we rearrange that to get it into a form vaguely resembling our proper solution above:

For the spherical integral of a spherical Gaussian basis function with itself, $\int B_i(\omega) \, B_i(\omega) \, d\omega = \int e^{2\lambda_i (\mu_i \cdot \omega - 1)} \, d\omega$, since $B_i(\omega) \, B_i(\omega) = e^{2\lambda_i (\mu_i \cdot \omega - 1)}$ and $\int e^{\lambda (\mu \cdot \omega - 1)} \, d\omega = \frac{2\pi}{\lambda} \left( 1 - e^{-2\lambda} \right)$. Therefore,

$$\int B_i(\omega) \, B_i(\omega) \, d\omega = \frac{2\pi}{2\lambda_i} \left( 1 - e^{-4\lambda_i} \right)$$
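As a quick sanity check on that closed form (not from the original post), it can be compared against simple numerical quadrature:

```python
import numpy as np

def sg_self_integral_closed_form(sharpness):
    # Integral over the sphere of exp(2 * lambda * (mu . omega - 1)).
    return 2.0 * np.pi * (1.0 - np.exp(-4.0 * sharpness)) / (2.0 * sharpness)

def sg_self_integral_quadrature(sharpness, steps=200_000):
    # Integrate over t = cos(theta) in [-1, 1]; the azimuthal integral contributes 2*pi.
    t = np.linspace(-1.0, 1.0, steps)
    integrand = np.exp(2.0 * sharpness * (t - 1.0))
    dt = t[1] - t[0]
    return 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dt

for lam in (2.0, 8.0, 32.0):
    print(lam, sg_self_integral_closed_form(lam), sg_self_integral_quadrature(lam))
```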

This is very close to our ‘correct’ equation above. In fact, it becomes equal when

We can rearrange that a little further:

Since we assume that, as the fit converges, $\sum_j b_j \, B_j(\omega_s) \to f(\omega_s)$, we’re left with:

In other words, using the original algorithm for a given sample, the error is mostly determined by how close $B_i(\omega_s) \, B_i(\omega_s)$ is to the true spherical integral $\int B_i(\omega) \, B_i(\omega) \, d\omega$. Since the influence of samples with higher basis weights is greater anyway, this turned out to be a reasonable approximation. However, given the option, I’d still recommend using the corrected algorithm!