In the last post we introduced the *moment generating function*. It’s defined as a power series, with its coefficients determined by the moments of a probability distribution.

The moment-generating function turned out to be equal to the expected value of the function $e^{tx}$:

$$M(t) = E\left[ e^{tx} \right].$$

We also mentioned that the moments determine the probability distribution just as the probability distribution determines the moments. So, if we know the moment-generating function we know the probability distribution, and vice versa. To get the moments from the generating function, we can either use cleverness to figure out the series expansion for $M(t)$ and use the fact that the $n^{th}$ coefficient is the $n^{th}$ moment divided by $n!$,

$$M(t) = \sum_{n=0}^{\infty} m_n \frac{t^n}{n!},$$

or we can directly compute the coefficient by a Taylor series expansion and get the result

$$m_n = \left. \frac{d^n M}{dt^n} \right|_{t=0},$$

i.e., the $n^{th}$ derivative of the moment generating function, evaluated at $t = 0$.
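To make this concrete, here's a quick check using Python's `sympy` library. The distribution is my own choice for illustration, not one from the post: the exponential distribution with rate 1, whose moment generating function is $M(t) = 1/(1-t)$. Differentiating $n$ times and evaluating at zero does indeed reproduce the moments.

```python
import sympy as sp

t = sp.symbols('t')

# MGF of the exponential distribution with rate 1 (a hypothetical example):
# M(t) = 1/(1 - t), valid for t < 1
M = 1 / (1 - t)

# The n-th moment is the n-th derivative of M(t), evaluated at t = 0
moments = [sp.diff(M, t, n).subs(t, 0) for n in range(5)]
print(moments)  # [1, 1, 2, 6, 24] -- the n-th moment of this distribution is n!
```

This also illustrates the series-coefficient route: $1/(1-t) = \sum_n t^n$, so the $n^{th}$ moment is $n!$ times the coefficient 1.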

We can compute the moment generating function for well-known probability distributions. For example, for the uniform distribution from 0 to 1 it turns out to be

$$M(t) = \frac{e^t - 1}{t}.$$

For the chi-square distribution with *n* degrees of freedom it’s

$$M(t) = (1 - 2t)^{-n/2}.$$

For the ubiquitous and all-important normal distribution, the moment generating function is

$$M(t) = e^{\mu t + \sigma^2 t^2 / 2},$$

where of course $\mu$ is the mean and $\sigma^2$ is the variance.
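These closed forms are easy to check numerically. Here's a sketch in Python (the sample sizes, the test point $t = 0.7$, and the normal parameters are arbitrary choices of mine): a Monte Carlo average of $e^{tx}$ should land close to $(e^t - 1)/t$ for the uniform and $e^{\mu t + \sigma^2 t^2/2}$ for the normal.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 0.7  # arbitrary point at which to compare

# Uniform(0,1): Monte Carlo estimate of E[e^{tx}] vs. the closed form (e^t - 1)/t
x = rng.uniform(0.0, 1.0, size=1_000_000)
mc_uniform = np.exp(t * x).mean()
exact_uniform = (np.exp(t) - 1) / t

# Normal(mu, sigma^2): estimate vs. the closed form e^{mu t + sigma^2 t^2 / 2}
mu, sigma = 1.5, 0.5
y = rng.normal(mu, sigma, size=1_000_000)
mc_normal = np.exp(t * y).mean()
exact_normal = np.exp(mu * t + sigma**2 * t**2 / 2)

print(mc_uniform, exact_uniform)  # the two agree to roughly three decimal places
print(mc_normal, exact_normal)
```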

If the random variable isn’t a random variable at all, but instead is just some constant $c$, then the moment generating function is simply

$$M(t) = e^{ct}.$$

From that we can deduce that $m_1 = c$, $m_2 = c^2$, $m_3 = c^3$, etc., just as a constant should behave.

What if we took two independent random variables, which may not follow the same distribution, and added them? Let the moment generating function for the distribution of the random variable $x$ be $M_x(t)$, and let that for the random variable $y$ be $M_y(t)$. Since the variables may not follow the same distribution, the generating functions $M_x(t)$ and $M_y(t)$ may not be the same. What is the moment generating function for the distribution of the sum $x + y$?

We can determine it by using the definition

$$M_{x+y}(t) = E\left[ e^{t(x+y)} \right].$$

This is of course

$$M_{x+y}(t) = E\left[ e^{tx} e^{ty} \right].$$

Because we assumed that *x* and *y* are *independent*, this is equal to

$$M_{x+y}(t) = E\left[ e^{tx} \right] E\left[ e^{ty} \right] = M_x(t) \, M_y(t).$$

We have the valuable result that *the moment generating function for the sum of two independent random variables is the product of their moment generating functions*.
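We can check this product rule by simulation. Here's a sketch (the particular distributions, a uniform and an exponential, are arbitrary choices): estimating all three moment generating functions from samples, the estimate for the sum matches the product of the individual estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 0.5, 1_000_000

# Two independent variables with different distributions
x = rng.uniform(0.0, 1.0, size=n)
y = rng.exponential(scale=0.5, size=n)

m_x = np.exp(t * x).mean()          # estimate of M_x(t)
m_y = np.exp(t * y).mean()          # estimate of M_y(t)
m_sum = np.exp(t * (x + y)).mean()  # estimate of M_{x+y}(t)

print(m_sum, m_x * m_y)  # product rule: the two estimates agree closely
```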

This motivates us to define yet another function, the *cumulant generating function*, which is the logarithm of the moment generating function

$$K(t) = \ln M(t).$$

Thus we have the useful result that *the cumulant generating function for the sum of two independent random variables is the sum of their cumulant generating functions*. If we know the cumulant generating function we can compute the moment generating function, so we can in turn determine the probability distribution. Put another way, if two variables have the same cumulant generating function then they follow the same probability distribution.

Just as the moment generating function determines the moments $m_n$, the cumulant generating function determines the cumulants $\kappa_n$

$$K(t) = \sum_{n=1}^{\infty} \kappa_n \frac{t^n}{n!}.$$

Note that the sum starts at $n = 1$, i.e., there’s no term which doesn’t contain a power of $t$.
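Here's the machinery in action, again with `sympy` and again using the rate-1 exponential distribution as a hypothetical example: expand $K(t) = \ln M(t)$ as a power series and read the cumulants off the coefficients.

```python
import sympy as sp

t = sp.symbols('t')

# MGF of the rate-1 exponential distribution (hypothetical example): M(t) = 1/(1 - t)
M = 1 / (1 - t)
K = sp.log(M)  # the cumulant generating function

# kappa_n is n! times the coefficient of t^n in the series for K(t);
# note the series has no constant term, since K(0) = ln M(0) = ln 1 = 0
expansion = K.series(t, 0, 5).removeO()
cumulants = [sp.factorial(n) * expansion.coeff(t, n) for n in range(1, 5)]
print(cumulants)  # [1, 1, 2, 6] -- kappa_n = (n-1)! for this distribution
```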

We can also determine the cumulant generating function for well-known distributions, just by taking the logarithm of the moment generating function. For a constant $c$ (a random variable that’s not random), it’s simply

$$K(t) = ct,$$

while for the normal distribution

$$K(t) = \mu t + \frac{\sigma^2 t^2}{2}.$$

### Averaging

Now suppose we have some random variable which follows an unknown probability distribution, and that we sample a large number *N* of data points with which we compute an average. The average is

$$\bar{x} = \frac{1}{N} \sum_{j=1}^{N} x_j.$$

Assume further that each sample value is independent of the others. The moment generating function for $\bar{x}$ is

$$M_{\bar{x}}(t) = E\left[ e^{t\bar{x}} \right] = E\left[ \prod_{j=1}^{N} e^{t x_j / N} \right].$$

Since all the values are independent, this is

$$M_{\bar{x}}(t) = \prod_{j=1}^{N} E\left[ e^{t x_j / N} \right].$$

Each term is the moment generating function for a single value at $t/N$, so we have

$$M_{\bar{x}}(t) = \left[ M_x(t/N) \right]^N.$$

Taking the logarithm of both sides, we see that the cumulant generating function for the average has a similar simplicity

$$K_{\bar{x}}(t) = N \, K_x(t/N).$$
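The relation $M_{\bar{x}}(t) = [M_x(t/N)]^N$ is easy to verify by simulation. A sketch for uniform(0,1) data (the choices $N = 10$, $t = 1$, and the number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
t, N, trials = 1.0, 10, 200_000

# Many independent averages of N uniform(0,1) values
xbar = rng.uniform(0.0, 1.0, size=(trials, N)).mean(axis=1)

# MGF of the average, estimated directly from the simulated averages
m_avg = np.exp(t * xbar).mean()

# The prediction [M_x(t/N)]^N, using the closed-form uniform MGF (e^s - 1)/s
s = t / N
m_pred = ((np.exp(s) - 1) / s) ** N

print(m_avg, m_pred)  # the estimates agree to a few decimal places
```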

Let’s use our series expansion for the cumulant generating function, with $\kappa_n$ the cumulants for a single data value, to get

$$K_{\bar{x}}(t) = N \sum_{n=1}^{\infty} \kappa_n \frac{(t/N)^n}{n!} = \kappa_1 t + \frac{\kappa_2}{N} \frac{t^2}{2!} + \frac{\kappa_3}{N^2} \frac{t^3}{3!} + \cdots$$

Hence the 1st cumulant *of the average* is $\kappa_1$, the same as the 1st cumulant of the data itself. The 2nd cumulant of the average is $\kappa_2 / N$, the 3rd cumulant of the average is $\kappa_3 / N^2$, etc.

Now consider what happens as $N$ grows larger and larger. Each successive term in the cumulant generating function has a higher power of $1/N$, so each successive term gets smaller and smaller. To zeroth order, as $N$ goes to infinity the cumulant generating function becomes

$$K_{\bar{x}}(t) = \kappa_1 t.$$

But that’s just the cumulant generating function for a random variable which is not a random variable, i.e., a constant. To get the lowest-order random behavior as $N$ grows, we have to include the 1st-order term

$$K_{\bar{x}}(t) = \kappa_1 t + \frac{\kappa_2}{N} \frac{t^2}{2}.$$

This is the lowest nontrivial order as $N$ approaches infinity, so it gives us the *asymptotic* cumulant generating function for the average of a large number of data points.

And what is the asymptotic distribution for the average? The asymptotic cumulant generating function has the same form as that of the normal distribution, with only two nonzero cumulants. We therefore have the *central limit theorem*: the average of a large number of data points asymptotically follows the normal distribution.

The 1st cumulant is the same as that of the raw data, and the 2nd cumulant is that of the raw data divided by $N$. But the 1st and 2nd cumulants are directly related to the mean and variance by

$$\kappa_1 = \mu,$$

$$\kappa_2 = \sigma^2.$$

Therefore the mean of the average is the same as the mean of the data, and the variance of the average is the variance of the data divided by $N$.
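A simulation makes the whole story tangible. Averaging $N = 100$ uniform(0,1) values (mean 1/2, variance 1/12) many times over, the averages should show the data's mean, the data's variance divided by $N$, and a roughly normal shape. The sample sizes here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 100, 50_000

# Raw data: uniform(0,1), with mean 1/2 and variance 1/12
averages = rng.uniform(0.0, 1.0, size=(trials, N)).mean(axis=1)

print(averages.mean())  # close to 0.5, the mean of the raw data
print(averages.var())   # close to (1/12)/N, the data's variance over N

# A crude normality check: about 68.3% of the averages should fall
# within one standard deviation of the mean if the CLT holds
sd = np.sqrt(1 / 12 / N)
frac = np.mean(np.abs(averages - 0.5) < sd)
print(frac)
```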

This “proof” of the central limit theorem has been more a sketch than a proof. A rigorous approach would use not the moment generating function but the *characteristic function*

$$\phi(t) = E\left[ e^{itx} \right].$$

This has the virtue that it always exists. It’s also equal to the complex conjugate of the Fourier transform of the probability density function (if *that* exists!). There are a lot more complications than I’ve outlined, but I hope this brief introduction has at least made the central limit theorem sensible and plausible.

## 3 responses so far

Deep Climate // June 15, 2009 at 7:14 pm

> The we have the useful result

I guess that’s a typo and should be “Thus …”

naught101 // June 16, 2009 at 12:48 pm

/me waits for the punchline

Ray Ladbury // June 17, 2009 at 1:30 am

Nice summary and quite elegant.

I often use the method of moments when I have limited data and want to get a feel for model dependence of the conclusion. For instance, for data between 0 and infinity, I’ll get a best fit to a lognormal, which is skewed right and a Weibull, which is skewed left. If the likelihood doesn’t distinguish between the two forms, I know I don’t have enough data to estimate skew. Is there maybe some more elegant way to look at this?