Open Mind

Glacier Mass Balance

February 3, 2009 · 53 Comments

Mauri Pelto was kind enough to provide a link to some data shown in a recent RealClimate post about glacier mass balance. The upshot of that post is that the scientific community is finally getting a good database of glacier mass balance measurements. Although only a small fraction of glaciers are measured, there are enough of them, and their geographic distribution is sufficient, to get a picture of the changes in glaciers worldwide. And that picture is quite clear.


In the RealClimate post a couple of graphs are shown, including this one of mass balance for a sample of 30 glaciers worldwide with mass balance measurements since at least 1980:

[Figure: annual glacier mass balance, average of the 30-glacier sample]

and this one of the cumulative mass balance over this time interval:

[Figure: cumulative glacier mass balance over the same interval]

The question arose in comments to the RealClimate post, is the decline statistically significant? I don’t generally like to give opinions without running numbers, but this is one case in which I was tempted because I’ve got a lot of experience with this, and visual inspection alone is sufficient to get a good idea: yes, the decline is significant. But now that I have numbers (and Mauri’s link includes an extra year of data), let’s see what they say.

The question really comes down to: is global average glacier volume declining? This is equivalent to asking, is the annual mass balance (the change from one year to the next) negative, i.e., are glaciers losing mass rather than gaining-and-losing in a plausibly random fashion?

One important aspect to understand is: should we analyze mass balance b_t (year-to-year change) or cumulative mass balance C_t (the sum of mass balance over time)? We can investigate this by looking at the autocorrelation function (ACF) of these variables. If the statistically relevant variable is cumulative balance, then annual mass balance is a difference series (since b_t = C_t - C_{t-1}), in which case the balance series b_t should show strong negative autocorrelation at lag 1; differencing a series whose fluctuations are white noise drives the lag-1 autocorrelation toward -0.5. But it doesn’t:

[Figure: autocorrelation function (ACF) of the annual mass balance series]
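For readers who want to check that logic themselves, here is a minimal sketch (Python/NumPy, using a synthetic stand-in series, not the actual data) of why differencing a trend-plus-white-noise series produces lag-1 autocorrelation near -0.5:

    import numpy as np

    def acf_lag1(x):
        # Sample autocorrelation at lag 1.
        d = np.asarray(x, dtype=float)
        d = d - d.mean()
        return (d[1:] * d[:-1]).sum() / (d * d).sum()

    rng = np.random.default_rng(42)
    # Synthetic C_t: if cumulative balance were trend plus white noise,
    # its differences b_t = C_t - C_{t-1} would behave like this:
    C = -400.0 * np.arange(30) + rng.normal(0.0, 300.0, 30)
    print(acf_lag1(np.diff(C)))   # typically near -0.5

The observed balance series shows nothing of the sort.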

However, we need to be a little cautious. The mass balance time series appears to show a trend. A linear fit is statistically significant:

[Figure: annual mass balance with linear trend fit]

However, a better model (significantly so) is not a linear trend but a step function, with a negative average balance from 1980 to 2001 and a different, more strongly negative average balance from 2002 to 2007:

[Figure: annual mass balance with step-function fit]
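For anyone who wants to reproduce this kind of comparison, a rough sketch (Python/NumPy; the array b is a placeholder where the 28 WGMS annual balances would go) of fitting both candidate models by least squares:

    import numpy as np

    years = np.arange(1980, 2008)
    b = np.zeros(years.size)   # placeholder for the annual balances (mm/yr)

    # Linear-trend model: b_t = a + c * (t - 1980).
    X = np.column_stack([np.ones(years.size), years - 1980])
    coef, rss_lin, *_ = np.linalg.lstsq(X, b, rcond=None)

    # Step-function model: the least-squares "step" is just the two segment means.
    pre, post = b[years <= 2001], b[years >= 2002]
    fit = np.where(years <= 2001, pre.mean(), post.mean())
    rss_step = ((b - fit) ** 2).sum()

Comparing the residual sums of squares (penalized for parameter count; more on that in the comments) is what decides between the models.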

Do the residuals from this model show strong negative autocorrelation at lag 1? The answer is again no:

[Figure: ACF of the residuals from the step-function model]

The evidence is that the physically relevant variable is indeed mass balance b_t (the year-to-year change), and cumulative mass balance C_t is a convenient way to visualize the accumulated change over time.

So the question becomes: are the values of b_t less than zero with statistical significance? The answer, again, is strongly suggested by visual inspection of the graphs, because 25 out of 28 values are negative, and the last 18 values are all negative. (Note: in the RealClimate graph only the last 17 are negative, but Mauri’s link includes an extra year of data.) When we run the numbers, we get an average over time of -399 mm/yr. The “standard error” of that estimate is 68 mm/yr, so the “t-statistic” is -5.9. Yes, that’s significant. No doubt.
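The computation is simple enough to sketch (Python/NumPy); it is just the ordinary one-sample t-statistic, which is appropriate here because the residuals show no substantial autocorrelation:

    import numpy as np

    def one_sample_t(x):
        # Mean, standard error of the mean, and t = mean / SE.
        x = np.asarray(x, dtype=float)
        mean = x.mean()
        se = x.std(ddof=1) / np.sqrt(x.size)
        return mean, se, mean / se

Fed the 28 annual balances, this returns the figures quoted above: roughly (-399, 68, -5.9).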

We can even test whether the averages during the two intervals indicated by the “step function” model (1980 to 2001, and 2002 to 2007) are significantly different. The data from 1980 to 2001 give an average of -273 mm/yr, which is significantly different from zero since the standard error is 53 mm/yr. Data from 2002 to 2007 average -861 mm/yr, again significantly different from zero in spite of having only 6 data points (the standard error is 136 mm/yr). And the difference between the two averages is also statistically significant.
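The two-interval comparison is essentially a two-sample t-statistic with unequal variances (a Welch-style test); a minimal sketch:

    import numpy as np

    def welch_t(x, y):
        # Difference of means in units of its combined standard error.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        se2 = x.var(ddof=1) / x.size + y.var(ddof=1) / y.size
        return (x.mean() - y.mean()) / np.sqrt(se2)

Using the segment figures above, the difference between -273 and -861 mm/yr, with standard errors 53 and 136 mm/yr, amounts to about 588/sqrt(53^2 + 136^2), i.e. roughly 4 combined standard errors.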

So: not only is glacier mass balance significantly negative, it’s significantly more so from 2002 to 2007 than it was from 1980 to 2001.

It’s also worth noting that the most negative mass balances are for 1998, the year of the strong El Niño, and from 2002 onward, years of extreme high temperatures (compared to the 20th century). That suggests that the reason for such consistent mass loss from glaciers is temperature increase. But nailing that down from a causal perspective is best left to the glaciologists who actually study such things.

Categories: Global Warming

53 responses so far

  • Kevin McKinney // February 3, 2009 at 3:25 pm

    Thanks for another informative post!

    An elementary question: what are the units abbreviated “mm”?

    [Response: They're millimeters. Mass balance is often expressed as "millimeters water equivalent," which is the depth of water corresponding to the mass loss of the glacier.]

  • chriscolose // February 3, 2009 at 3:29 pm

    Tamino, thanks for this post. Could you please direct me to where the data for mass balance is that you used?

    [Response: It's at the very last link at the bottom of this page. There are multiple data sets; in this post I've only looked at the average of 30 "representative" glaciers.]

  • Bob North // February 3, 2009 at 3:31 pm

    Tamino - I think I probably would have been willing to opine that the trend was significant just from visual observation of the first two graphs. That being said, does the data indicate anything about what we are looking at in terms of total % mass loss. In other words, since 1980, have we lost 1%, 5%, 10%, or 50% of the total glacial mass? I tried to go back to Real Climate to see if Dr. Pelto mentioned it in his post but the site appears to be down at the present time.

    Thanks.

    [Response: I don't know what the total glacial mass is, so I don't know the answer to your question.]

  • if0x // February 3, 2009 at 3:42 pm

    Whilst the two graphs pictured do, indeed, appear conclusive, I’m a bit perturbed about the lack of a zero point on the second (cumulative ice loss), because it means we don’t have a context within which to frame the decline.

    Obviously, year on year on year decline ultimately leads to complete glacier loss, but we don’t know, from the graphs, whether the decline shown represents 0.0001% or 80%. The point being that one of those options might give greater immediate concern than the other.

  • mauri pelto // February 3, 2009 at 4:47 pm

    Every glacier obviously has a unique thickness, so a given thickness loss represents a different percent of total glacier volume loss. We can accurately measure surface mass balance, but it is difficult to accurately assess total volume; we just have not invested in producing accurate 3-D maps of glaciers, nor is it cost-effective to do so. So it is better to stick with what you can accurately measure. The average thickness of the glaciers in the WGMS is less than 100 m, thus the volume loss of 12 m is at least 15% of the total volume. In the North Cascades it is close to 25% of the total volume. http://www.nichols.edu/departments/glacier/north%20cascade%20glacier%20mass%20balance.htm Just as we do not report sea level or lake level changes as a percent of depth or total volume, we do not report glacier mass balance changes as total volume change. Two of the glaciers I began measuring in 1984 are 100% gone now.

  • Climate Criminal // February 3, 2009 at 6:18 pm

    Since glaciers act as integrators of local temperature where they are located, their dwindling mass balance would certainly seem to be confirmatory evidence of climate change.

    Add to this the repeat photography of glaciers, and it’s perfectly clear that many glaciers are shrinking rapidly and quite a few have disappeared completely.

    Little wonder that denialists choose to manufacture misinformation about glaciers.

  • Kevin McKinney // February 3, 2009 at 7:17 pm

    Thanks, Tamino. I was actually thinking “It can’t possibly be millimeters, can it?” This joke’s on me…

  • Dave A // February 3, 2009 at 10:12 pm

    Tamino,

    There are thousands of glaciers across the world. How representative can the RC 30 be?

    For example, the WGMS said this in 2007

    “When looking at individual fluctuation series, one finds
    a high rate of variability and sometimes widely contrasting
    behaviour of neighbouring ice bodies”.

    [Response: The 30 are chosen to be truly representative. And when you look at the *statistics* of the hundreds of glaciers for which there are records, they paint the same picture as the representative sample. As Mauri Pelto points out in his RealClimate post, "The cumulative mass balance index, based on 30 glaciers with 30 years of record and for all glaciers is not appreciably different." He also mentions "In 2005 there were 442 glaciers examined, 26 advancing, 18 stationary and 398 retreating - implying that "only" 90% are retreating. In 2005, for the first time ever, no observed Swiss glaciers advanced."

    Attempts to dispute the message of glacier changes are futile: receding glaciers vastly outnumber those advancing, and the statistical significance is beyond question.]

  • Ed Davies // February 3, 2009 at 10:19 pm

    You mention the rather obvious connection between the 1998 ice loss and that year’s El Niño. The follow-on question, then, is: can you improve the statistical significance of the results by subtracting out some function of the Southern Oscillation Index or similar? I’m wondering if the shape of the step change would be clearer, for example.

  • Allen63 // February 4, 2009 at 12:01 am

    Very interesting regarding the step function aspect.

    My view has been that temperature (particularly from satellite measurements) over the same period follows a step function. It is not surprising that glacier mass follows temperature — and is, thus, a step function.

    I would expect a lag in glacier mass — it is interesting that the lag seems short. Putting it differently, equilibrium is reached quickly. Perhaps that implies a “surface” effect not involving a great percentage of mass.

    Still, it’s the step function aspect I think I see in temperature data — now implied by glaciers — that is interesting.

    Is the step function a “chance” occurrence — a confluence of mostly known factors? Or is there a major underlying climate driver (perhaps not well appreciated) that changes abruptly? Could the possible step function aspect be a clue that needs following up by the climate science community?

    One of the better value added posts.

  • Philippe Chantreau // February 4, 2009 at 12:36 am

    Funny that on these indicators of climate for which photos actually are useful, nobody at Watts’ seems eager to go take some.
    I expect to soon see arguments saying why taking pictures of glaciers is of limited interest and misleading, unlike taking pictures of thermometers…

  • Allen63 // February 4, 2009 at 1:31 am

    BTW, one source of the “step” could be the strong 1998 El Niño heating itself. It may have added the necessary energy — accompanied by a melt-back of “world ice” exposing more sea and land — thus, perhaps, changing albedo and resulting in a new equilibrium temperature.

    Perhaps the glaciers melt back until their altitude increases sufficiently. That is, the lower temperature at higher altitude offsets the temperature increase. So, equilibrium is reached quickly.

    Just thoughts.

  • kevin // February 4, 2009 at 2:38 am

    Allen63: I don’t believe the evidence presented here indicates that the glaciers are in equilibrium. I think what is shown is that after the “step,” they are further out of equilibrium than before. However, since they were losing mass both before and after the 2001/2002 step change, they have not been in equilibrium at any time during the period of observation. Am I wrong?

  • if0x // February 4, 2009 at 8:41 am

    Allen63: Presumably yes, the glaciers will retreat to a new equilibrium (vanishing completely could be considered an equilibrium, I guess, but not an especially helpful one).

    However, isn’t the point more that, aside from the whole ‘canary-in-a-coal-mine’ utility of glacier retreat as an indicator of rising temperatures, there are millions (if not billions?) of people (not to mention the rest of the ecosystem) dependent upon meltwater from glaciers to sustain them through the dry season?

    If you reduce the volume of ice available, then does it not also follow that there will be less seasonal melt, which in turn provides less water to feed into the river systems?

  • Adam Gallon // February 4, 2009 at 10:47 am

    Interesting data, but the question “Has it happened before?” comes to mind, especially in light of this article http://news.bbc.co.uk/1/hi/sci/tech/7580294.stm
    “The Roman coins found on the Schnidejoch are being seen as proof that the Romans used this route to cross the Alps from Italy to their territories in northern Europe. Interestingly, one of the Earth’s chillier periods coincides with the decline of the Roman empire”
    This article is very interesting too.
    http://archiv.ethlife.ethz.ch/e/articles/sciencelife/gruenealpen.html
    “Between 1900 and 2300 years ago the lower tips of the glaciers lay at least 300 metres higher than today.”
    http://archiv.ethlife.ethz.ch/images/al04_126-l.jpg
    Are we seeing part of what is a natural cycle, or a natural cycle with a human-induced component?
    If the latter’s the case, then what is the level of this component?
    The famous “Ötzi”, found high up in the mountains and dated to c. 3,300 BC, does appear to indicate a period of comparable warmth to that we’re currently experiencing.
    Latest analysis of his clothing suggests he had a pastoral existence and may have been herding sheep or goats in the mountains.
    Are we seeing some evidence of a c. 2,000-year climate cycle?

  • Allen63 // February 4, 2009 at 11:11 am

    Kevin,

    All I am saying is that, if their mass loss truly is a “step function”, that implies relatively long periods during which annual mass ups and downs oscillate around a new average (as graphically indicated to be happening — above). I used the word “equilibrium” to characterize that.

    In that sense, one would say that the net gain or loss in average mass has been very minimal since about 2002 — and that is shown on the graph.

    Even if a phenomenon is characterized by a step function, the steps may be up or down over time. So, the current “equilibrium” at the current “step” may be temporary (a few years — not decades).

    In the absence of knowing what exactly is causing the “steps” to happen, the next “step” could be up or down. That is why I am interested to know what caused the step loss in mass (as opposed to a steady loss in mass).

    As I posted above, the step loss in mass seems to parallel (what I perceive to be) the step gain in temperature over the same time period. That is, temperature jumped up in the 1998-2000 time frame — and has remained fairly flat since then.

    Again, it’s interesting to have the apparent step change in temperature confirmed by an apparent step change in glacier mass. Thus, I find the above “Open Mind” analysis to be a great value added post.

  • Allen63 // February 4, 2009 at 11:33 am

    ERROR :)

    I mistook “mass balance” for cumulative mass balance — it was late last night. So, the glacier “mass” is not in “equilibrium”. Rather, the “rate of loss of mass” is in “equilibrium”. So, when I wrote the above post, I was wrong about that aspect.

    I was also wrong in the direction my thoughts were heading on my original post — regarding mechanism. Parenthetically, I did not write out all my thoughts — but, I was thinking partly wrongly. A step change in rate of loss makes more physical sense than a step change in absolute mass (which is how I read the above — though it clearly does not say that).

    Still, the interesting thing to me is that the step function change in the rate of loss confirms the apparent step function change in the temperature.

  • Allen63 // February 4, 2009 at 1:41 pm

    ifox, kevin,

    As I posted, I was partly in error regarding “what” aspect of glacier activity was being “stepped”. Although the interesting correlation with temperature remains, for me.

    Yes, temperature has risen and glaciers have retreated — both showing the same “step function” behavior (in my view).

    I have not studied the consequences of glacier retreat.

    In general, I think what is needed is a global quantitative view of the consequences of all aspects of global temperature and CO2 change — wherein all the good and bad consequences are accurately and globally quantitatively weighted, and various mitigation strategies are likewise weighted — including quantitative impacts on other of society’s issues. While such efforts have begun, they are in their infancy, in my view.

  • dhogaza // February 4, 2009 at 4:06 pm

    Latest analysis of his clothing suggests he had a pastoral existence and may have been herding sheep or goats in the mountains.
    Are we seeing some evidence of a c. 2,000-year climate cycle?

    No straw is so thin as to escape grasping by the denialsphere …

  • Kevin McKinney // February 4, 2009 at 4:16 pm

    Adam wrote:

    “The famous “Ötzi”, found high up in the mountains and dated to c. 3,300 BC, does appear to indicate a period of comparable warmth to that we’re currently experiencing.
    Latest analysis of his clothing suggests he had a pastoral existence and may have been herding sheep or goats in the mountains.
    Are we seeing some evidence of a c. 2,000-year climate cycle?”

    Surely you jest. Even if Otzi was a pastoralist, there is still no way to determine whether where he died was pasture or glacier. A very, very slim evidentiary reed at best!

  • t_p_hamilton // February 4, 2009 at 4:53 pm

    Adam Gallon asked: “Are we seeing some evidence of a c. 2,000-year climate cycle?”

    No. Today is MUCH hotter than 2000 years ago. What does that tell you about the future of the Alpine glaciers? (all glaciers as a matter of fact)

  • Florifulgurator // February 4, 2009 at 5:17 pm

    @ Adam Gallon “Climate Cycle”: The ice has meanwhile receded quite some way from the spot where Ötzi was found.

  • luminous beauty // February 4, 2009 at 5:53 pm

    Adam,

    Some perspective:

    http://www.pnas.org/content/103/28/10536.abstract?ijkey=9e4fbaa67ac2f744d526542cf6652753d47cbfab&keytype2=tf_ipsecsha

  • mauri pelto // February 4, 2009 at 7:31 pm

    Kevin is right on. For the smaller glaciers, retreat is getting rid of their poorest performing sections (the most ablating); after doing so, like a company, they hope to improve the bottom line. That, despite this glacier budget-cutting so to speak, the glaciers are experiencing even more negative balances indicates they are not approaching equilibrium. Some may reach equilibrium eventually; others will be in a state of disequilibrium and disappear. http://www.nichols.edu/departments/glacier/diseqilibrium.html

  • Steve Bloom // February 4, 2009 at 9:07 pm

    Intact leather items were found with Otzi, which shows that they were buried in snow immediately and stayed that way for the entire intervening time. In other words, Otzi died on snow/ice cover which has just now melted.

    There is a known mid-Holocene warm period (lasting from about 8,000 to 5,000 years ago), probably approximately as warm as the present, which likely resulted in the sharp retreat or temporary disappearance of most of these glaciers. There’s not much argument about this warm period since unlike the ephemeral Medieval Warm Period, Roman Warm Period etc. it was caused by Milankovitch cycles.

  • David B. Benson // February 4, 2009 at 9:36 pm

    Adam Gallon // February 4, 2009 at 10:47 am — I encourage you to read W.F. Ruddiman’s “Plows, Plagues and Petroleum” to better understand the entire Holocene.

  • David B. Benson // February 5, 2009 at 2:24 am

    Steve Bloom // February 4, 2009 at 9:07 pm — In coastal British Columbia, the warm period was over by about 7000 years ago:

    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtml

  • Philippe Chantreau // February 5, 2009 at 5:39 am

    The reference to Otzi is quite irrelevant.

    The man may have lived in a pastoral society but was no herder himself. He was equipped with many indicators of high social status and of functions other than herding. There is evidence that he was murdered.

    His presence in the place where he was discovered was most likely the result of his traveling. He had on him types of foods that would be used for a long march. It is not likely that he died anywhere near where he lived, which would have been much lower in a valley.

    To my knowledge, there is no evidence that the local climate was warmer in his days. If that was the case, then it would have had to suddenly (from one year to the next) become much colder and remain that way for 4000 years, until the current warming allowed us to find his remains preserved in ice. How probable is that?

  • Polyaulax // February 5, 2009 at 8:26 am

    It is pretty obvious that Otzi died on a glacier: no ice, no Otzi. From memory, he died at the Hauslabjoch, over 3200 m a.s.l., and never a route of first choice, giving access only to extensive ice fields.

    The fact that organic artefacts spanning 6000 years are found at the Schnidejoch attests to its consistently icy condition, and perhaps the extra risk of crossing it. Why did people risk crossing it, at 300 m higher than the Rawilpass? Was it undefended? Part of a higher network of sub-passes enabling avoidance of the major routes? Were people guided there to be robbed? Were they lost? Was the extensive broad stony basin of the Rawilpass still an icefield, or its artefacts continually scavenged given the frequency of its use? So many … questions.

  • mauri pelto // February 5, 2009 at 5:35 pm

    Tamino, quantitatively how much better a fit is the step function versus the linear function? The BAMS State of the Climate 2008 report will want to know.
    I would love for you to look at the longer data set too. I wonder if a step function would be an even clearer choice there.

    [Response: Statistically it's only a little better; I spoke too strongly when I said "significantly better," because even though it's true in a pure statistical sense it gives too strong a suggestion of demonstrable superiority. Nonetheless, the step-function model is statistically better; both AICc (corrected Akaike Information Criterion) and BIC (Bayesian Information Criterion) prefer the step-function model over the linear model (AIC by only 0.083, BIC by only 0.062). Both tests take into account the number of degrees of freedom of the model, but the differences are close enough to zero that one can't distinguish between the models with certainty.

    And it's well to keep in mind Cox's maxim that "All models are wrong. Some models are useful."

    I'll definitely study the longer time series to see whether alternative (to linear) models might be demonstrably superior, and report the results soon.]
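    [Further response: for concreteness, here is a bare-bones sketch (Python/NumPy; rss would come from the least-squares fits) of how AIC, AICc and BIC fall out of a fit with Gaussian residuals:

        import numpy as np

        def info_criteria(rss, n, k):
            # k counts all fitted parameters, including the noise variance.
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = n * np.log(rss / n) + k * np.log(n)
            return aic, aicc, bic

    Both models here have k = 3 (two shape parameters plus the variance), so the comparison comes down to the residual sums of squares.]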

  • John Mashey // February 5, 2009 at 7:20 pm

    Cox’s maxim? Did you mean Box?

    [Response: D'oh! My mistake.]

  • John Mashey // February 5, 2009 at 7:56 pm

    Q: for Mauri Pelto and tamino
    (trying to understand whether the step function is “real” and, if so, what it means)

    About a year ago, we had some discussion on glaciers, with some useful comments by Mauri:

    http://climateprogress.org/2008/03/17/record-global-glacial-melt/#comment-9578

    including:
    “The response time is a crucial item, there is the initial response time 24 years for Aletsch and the overall response time, which is the time for a glacier to complete 2/3 of its adjustment to a climate change.”

    As I understand it: glaciers have different response times and “averaging” periods, i.e., they do their own plot-smoothing physically. Bigger glaciers respond slower and average over longer periods. Of course, precipitation and sometimes geometry matter as well.

    But, that raises the question: do we have good models for the distribution of estimated response times in the WGMS data? I looked, but didn’t easily find that.

    I ask, because it seems to me that it’s unclear that one can attach much significance to any specific year (like 1998) if the glaciers take a while to respond, with varying response times.

    Needless to say, I have not the slightest doubt that the glaciers are headed up the hill - the issue is understanding the more detailed behaviors.

  • David B. Benson // February 5, 2009 at 8:16 pm

    Tamino — corrected Akaike Information Criterion?

    [Response: It's a 2nd-order correction to AIC for small samples. You can find a little about it on Wikipedia.]

  • David B. Benson // February 6, 2009 at 12:08 am

    Tamino — Thank you. Am just finishing the code to use AICc.

  • mauri pelto // February 6, 2009 at 12:31 am

    John, if we are talking about terminus behavior, you are correct: response time must be included. Mass balance is the most sensitive climate parameter for glaciers because it is a direct response to local weather conditions for a year, nothing more, nothing less (Haeberli, 1995; Pelto and Hedlund, 2001). The change in glacier length is a smoothed and delayed response to the mass balance changes (Haeberli, 1995). This delay is the response time. WGMS is good in that it has a clear focus: collect and report the data. Determining response time is then a task for others. Each glacier is different. The Pelto and Hedlund, 2001 paper tests two ways to determine response time. http://www.nichols.edu/departments/glacier/terminus_behavior_and_response_t.htm is a visual version of that paper.

  • Philippe Chantreau // February 6, 2009 at 1:28 am

    Interesting questions indeed, Polyaulax. It may be that this was a route that those willing to take more risks would use in order to keep their movements unnoticed. Otzi was certainly not an ordinary sort of fellow, and perhaps something was at stake during this trip that required him to travel with discretion.

    Obviously that didn’t save his life. It surely is kinda fun to imagine who he was and what intrigue could have brought him to his premature death on the ice, away from the beaten path.

  • John Mashey // February 6, 2009 at 5:50 am

    Mauri: thanks, that helps; I didn’t understand how different the sensitivity was between mass balance and terminus length, so that cleared it up.

  • Gavin's Pussycat // February 9, 2009 at 5:29 am

    > (AIC by only 0.083, BIC by only 0.062).

    Hmm, doesn’t this suggest Akaike weights of close to 51%/49%? Or, both models are roughly equally plausible?

    exp(-1/2 Delta AIC) = 0.96 and 1.00 for the two models, and you should divide by their sum.

    Under these circumstances I would prefer the linear fit model without skip as it has a physical rationale, which the skip model does not. Right?

    BTW do you just use RSS (residual sum of squares) assuming uncorrelated gaussian errors, as explained in the wiki? How critical is this assumption?

    [Response: I'm not sure we're "on the same page." The step-function model is slightly, but definitely preferred. AIC and BIC aren't measures for the two different models -- AIC alone is a comparison of the two models, and BIC is another; both point to superiority of the step-change model. You really don't need to use both AIC and BIC (in fact ordinarily one would only apply one of them), I just did so for "completeness."]

  • Gavin's Pussycat // February 9, 2009 at 6:51 am

    BTW I suspect that what Wikipedia says about the $\chi^2$ version of AIC is plain wrong. What would make sense is

    $AIC_{\chi^2}=2K+n\ln\frac{\chi^2}{n}$,

    where

    $\chi^2=v^T Q^{-1}v$,

    v being the abstract vector of residuals, and Q the observations covariance matrix. This quantity is known to be chi-square distributed with n-K degrees of freedom. This applies for general correlated data; in fact, you can transform the data to make Q the unit matrix, reducing this to the special i.i.d. case.

    (Hope the LaTeX gets through)

  • Gavin's Pussycat // February 9, 2009 at 1:31 pm

    Tamino, I was talking only about AIC.

    exp(-0.5*0.000)=1.0000 (model 1)
    exp(-0.5*0.083)=0.9593 (model 2)

    Akaike weight 1 = 1.0000/(1.0000+0.9593)=0.51
    Akaike weight 2 = 0.9593/(1.0000+0.9593)=0.49

    You can do the same exercise separately for BIC, yielding very similar figures.

    From the literature I gather that the difference in AIC between two models (their “Delta”) should be something like 5 for the one to be clearly preferred over the other (at the 90%-plus level in this case).

    [Response: Now I see -- we're more on the same page.

    But it doesn't make sense to me for delta-AIC to have to be as big as 5. Consider two models for which the simpler model is a "subset" of the more complex (e.g. linear vs quadratic); in such a case we can directly apply an F-test. A delta-AIC anything bigger than zero means that (in the large-sample approximation) the more complex model *passes* the F-test (albeit not at an extremely high level of significance). ???

    And the wiki page states "The preferred model is the one with the lowest AIC value" with no reference to how *much* bigger. That's the way I learned it; maybe I'm out of date.]
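    [Further response: for reference, GP's weight arithmetic as a small sketch (Python/NumPy):

        import numpy as np

        def akaike_weights(aic):
            # Relative likelihoods exp(-delta_i / 2), normalized to sum to 1.
            aic = np.asarray(aic, dtype=float)
            rel = np.exp(-0.5 * (aic - aic.min()))
            return rel / rel.sum()

        print(akaike_weights([0.083, 0.0]))   # approximately [0.49, 0.51]

    which reproduces the 51%/49% split computed above.]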

  • Ray Ladbury // February 9, 2009 at 2:54 pm

    Tamino, FWIW, my experience with AIC is that for small sample sizes, sampling errors can cause the favored model to vary as more data are added. At some point, though, there starts to be a clear winner. If there isn’t a clear winner, an average with Akaike weights may give a better answer.
    Anyway, that is what I have read, and it seems to be borne out in the analyses I’ve done.

    Have you looked at how AIC changes as a function of time? I would guess we are right at the point where the step model is starting to be favored. Have you looked at the reference to Anderson and Burnham?

    [Response: I quite agree that for small samples (and this is a small sample), the fact that AIC is a random variable means it can "change sides" when more data become available (which is one of the reasons I computed both AIC and BIC). So in this case, I think it should be taken more as a "guide" than as a rule, and we definitely can't say that one model or the other is a sure-fire winner. It's especially problematic since the data since 2002 are *so* sparse. I look forward to the accumulation of more data (but of course that'll take years). And I'll take a look at Anderson and Burnham.]

  • Gavin's Pussycat // February 9, 2009 at 4:40 pm

    What Ray said…

    If you find Akaike weights like 51%/49%, what you should do depends on what you’re after.

    If you want to construct “as good as possible” predictions of whatever function of the model parameters, the optimal thing is averaging over models using the Akaike weights as weights.

    If, on the other hand, you want to know which model best describes “the truth”, you’re out of luck: like Chou en-Lai remarked about the French Revolution, it’s “too soon to tell” ;-)

    Tough but fascinating stuff. Needing it myself I have been reading everything I could lay my hands on.

    BTW it would be interesting to try piecewise linear with two pieces, and no skip.

  • David B. Benson // February 10, 2009 at 12:26 am

    Could also use

    http://en.wikipedia.org/wiki/Bayes_factor

    to determine which hypothesis the weight of the evidence favors. In that case, the decibans down have a translation into significance given by Harold Jeffreys.

  • Gavin's Pussycat // February 10, 2009 at 10:48 am

    David, interesting, but doesn’t look very operational (at least I fail to see it). How would you apply this to the case in question?

  • David B. Benson // February 10, 2009 at 7:42 pm

    Gavin’s Pussycat // February 10, 2009 at 10:48 am — Choose two hypotheses, H0 and H1. Suppose H0 is the linear fit and H1 is the step function from Tamino’s original posting in this thread. Assume the residuals are normally distributed (usually works well enough for the Bayes factor method). Then determine ln(P(E|H)) where E is the evidence, i.e., the data, and H is first H0 then H1.

    The ratio in decibans gives a numerical indication of how much better H1 is than H0 (or the reverse). Less than 5.0 decibans is considered insignificant (and I really, really like it when I can obtain at least 10.0 decibans).

  • Gavin's Pussycat // February 11, 2009 at 10:17 am

    David, perhaps I am dense, but how do you obtain P(E|H)? What is E concretely?

  • Ray Ladbury // February 11, 2009 at 5:10 pm

    David, I’m not sure how you handle the case where the number of parameters in the models is different–as is the case here. It would seem to me that you would have to use AIC or BIC in that case–and it would also seem to me that you could average AIC/BIC over the parameter space and get something equivalent to the Bayes Factor. However, that might still not get you what you want. For instance, suppose you have rather weak evidence (e.g. suspension data in a failure test–that is, the test ends before failure). There might be a broad range of parameters that are consistent with this result. Thus, you’d have high likelihood over those parameters. However, another model might favor a narrow range of parameters for the same data. It seems the density and the gradient could be important as well.

  • David B. Benson // February 11, 2009 at 9:36 pm

    Gavin’s Pussycat // February 11, 2009 at 10:17 am — The evidence E is the collection of values in the time series, E = [E1, E2, ... En]. Each hypothesis H predicts some value Vi, and the residuals Vi - Ei are assumed to be Gaussian, from which the variance is obtained. Then P(E|H) = P(E1|H)P(E2|H)…P(En|H), where each P(Ei|H) is the probability of the ith residual according to the Gaussian distribution N(0, variance). (A short code sketch of this recipe appears at the end of this comment.)

    Ray Ladbury // February 11, 2009 at 5:10 pm — The Bayes factor method does not take the number of parameters into account. Once one has an estimate for the variance then, of course, both AICc and BIC can be computed. These measures, according to the little bit of literature on information criteria that I have looked at, only provide a rank ordering. The values are not to be used to determine how much better one hypothesis is than another; the Bayes factor method does that task.

    It often happens that the model with more parameters provides the better fit. If the fit is enough better so that the variance is much the smallest, it will still pass the BIC test. (In Tamino’s example, that happens.) But I am often asked “how much better?” Then Sir Harold has provided me with nifty names to answer that question: “substantially”, “strongly”, “decisively”. Alas, all too often it is just “too close to call”, at which point I now point to BIC rank orderings to say, “well, according to BIC …” in a quite hesitant tone, especially when AICc gives the opposite ordering.

    This will, I hope, help in obtaining some better quality data to boost the difference between H0 and H1.
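    In code, that recipe is short (a sketch in Python/NumPy, assuming the Gaussian residual model described above):

        import numpy as np

        def log_evidence(residuals):
            # ln P(E|H) for Gaussian residuals; variance estimated from the fit.
            r = np.asarray(residuals, dtype=float)
            var = r.var()
            return -0.5 * r.size * np.log(2.0 * np.pi * var) - 0.5 * (r ** 2).sum() / var

        def decibans(log_p1, log_p0):
            # Weight of evidence for H1 over H0: 10 * log10 of the Bayes factor.
            return 10.0 * (log_p1 - log_p0) / np.log(10.0)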

  • Ray Ladbury // February 12, 2009 at 5:16 pm

    David Benson,
    Actually, the relative values of AICc and BIC for the different models are significant, and since AICc is an unbiased and efficient estimator for the K-L distance, and since the “goodness of fit” term is the likelihood, I think the difference in AICc ought to scale as a chi^2, but I’m not sure how many DOF it should have. I do know that I’ve seen differences in AIC used to quantify the degree to which one model is favored over another–e.g. along the same lines as the Bayes Factor.

    What you want to avoid is something like this:

    http://blogs.discovermagazine.com/cosmicvariance/2007/07/13/the-best-curve-fitting-ever/

  • David B. Benson // February 12, 2009 at 7:33 pm

    Ray Ladbury // February 12, 2009 at 5:16 pm wrote “I do know that I’ve seen differences in AIC used to quantify the degree to which one model is favored over another…”

    I would certainly appreciate a reference.

  • Ray Ladbury // February 13, 2009 at 1:32 am

    From : Burnham, K.P. and Anderson, D.R. 2002. Model Selection and Multimodel Inference. Springer Verlag. (second edition), p 70.

    AIC difference    support for equivalency of models
    0-2               substantial
    4-7               weak
    >10               none

    From: Raftery, A.E. 1995. Sociological Methodology. 25:111-163, p 70

    BIC can be used in a similar way, with similar scales (i.e. >10, the models have very different performance).

    I’m mainly familiar with Anderson and Burnham, but I’ll look for others.

  • David B. Benson // February 13, 2009 at 9:49 pm

    Ray Ladbury // February 13, 2009 at 1:32 am — Thank you!

  • steve woodruff // February 14, 2009 at 3:14 pm

    Please consult a statistician to learn why your positive lag-1 autocorrelation can support what is clear from the graphs - mass balance is generally decreasing over time (positive autocorrelation could also support a general increase over time; the statistician will explain why). You’ve mixed up year-to-year change with autocorrelation of year-to-year change - quite different things in general.

    In summary, all the studies outlined above are consistent and support a falling mass balance hypothesis.
