Open Mind

More on Glacier Mass Balance

February 6, 2009 · 20 Comments

I’ve taken a brief look at more of the data provided by Mauri Pelto, for the average mass balance of a larger number of glaciers than were included in the representative sample of 30 glaciers; this is a “quickie” report on what it shows.


This series starts in 1946, giving more than twice as long a time span as the previous data. However, the number of glaciers included in the average is highly variable. Many of the early years are an average over only 10 (or fewer) glaciers. Nonetheless, here’s a plot of the average over time:

[Figure: average glacier mass balance since 1946, with lowess smooth in red]

The red line is a lowess smooth of the data. Mass balance was positive in the early part of the record and negative in the later part; in fact, analysis shows that on average it has been negative for most of the time span.
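
(For anyone who wants to reproduce this kind of curve, here is a minimal sketch of a lowess smooth in Python. The years and mass_balance arrays and the smoothing fraction are placeholders, not the actual data or settings behind the plot above.)

    import numpy as np
    import statsmodels.api as sm

    # Placeholder series: one average mass balance value per year (not the real data).
    years = np.arange(1946, 2008)
    mass_balance = np.random.randn(years.size)

    # Lowess smooth; 'frac' sets the window width and is an illustrative choice only.
    smooth = sm.nonparametric.lowess(mass_balance, years, frac=0.3)
    # Column 0 of 'smooth' holds the years, column 1 the smoothed values.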

The earliest data includes very few glaciers, and I don’t have any information about the geographic distribution of those samples, so let’s take a look at the data since 1958, after which every year has at least 30 glaciers in the sample:

[Figure: average glacier mass balance, 1958–2007, with lowess smooth in red]

The lowess smooth indicates a distinct change in behavior around 2000. In fact, this is easily confirmed by analysis of the data. Until 2001 the data are consistent with a linear decrease in mass balance, but values from 2002 onward are too far below that model to be consistent with it.
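
(A rough sketch of the kind of check involved, not necessarily the exact procedure used here: fit a straight line to the values through 2001, then ask whether the 2002–2007 values fall below that fit’s 95% prediction interval. The arrays are placeholders.)

    import numpy as np
    import statsmodels.api as sm

    # Placeholder annual series for 1958-2007 (substitute the real averages).
    years = np.arange(1958, 2008)
    mb = np.random.randn(years.size)

    early = years <= 2001
    fit = sm.OLS(mb[early], sm.add_constant(years[early])).fit()   # trend through 2001

    # 95% prediction intervals for 2002-2007 under the pre-2002 linear model.
    pred = fit.get_prediction(sm.add_constant(years[~early])).summary_frame(alpha=0.05)
    print(mb[~early] < pred["obs_ci_lower"].values)   # True = below the interval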

In the previous post, I stated that a step-function model was a better fit than a simple linear model over the entire time span 1980 to 2007, but it was only slightly so. Using the longer time span, the difference is now considerable — in fact modeling mass balance as a linear function of time for the whole span from 1958 to 2007 is simply not plausible. A plausible model is a linear decrease until 2001, followed by a much lower average value from 2002 to 2007:

[Figure: mass balance 1958–2007 with the fitted model: linear decline through 2001, then a lower constant level for 2002–2007]
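
(For the comparison itself, here is a hedged sketch in Python: an all-linear model versus a linear-to-2001-plus-step model, compared with statsmodels’ built-in AIC and BIC. The break year is fixed at 2002 for simplicity; in a full analysis the break date is itself a parameter to be chosen. The variables continue the placeholder series from the sketch above.)

    import numpy as np
    import statsmodels.api as sm

    # years, mb: placeholder 1958-2007 series as in the earlier sketch.
    after = (years >= 2002).astype(float)       # 0 through 2001, 1 from 2002 on

    # Model 1: one straight line over the whole 1958-2007 span.
    m1 = sm.OLS(mb, sm.add_constant(years)).fit()

    # Model 2: linear trend through 2001, then a separate constant level afterwards.
    X2 = sm.add_constant(np.column_stack([years * (1.0 - after), after]))
    m2 = sm.OLS(mb, X2).fit()

    print("linear:      AIC", m1.aic, "BIC", m1.bic)
    print("linear+step: AIC", m2.aic, "BIC", m2.bic)   # lower is preferred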

In the previous analysis, a change in behavior about 2002 was clearly (but not strongly) indicated, with the step-function model preferred by both AICc (corrected Akaike Information Criterion) and BIC (Bayesian Information Criterion). There are two differences between this analysis and that. First, the change in behavior about 2002 is now strongly indicated. Second, the behavior prior to 2001 is no longer consistent with a constant value. Rather, it shows a statistically significant decline, indicating that average glacier mass balance was not just negative for most of the time span 1958 to 2001, it was getting more so. In both data sets, 1998 shows extreme negative mass balance.
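
(statsmodels reports plain AIC and BIC; the small-sample corrected AICc can be formed from AIC with the standard correction term, as in this minimal sketch. The counting convention for k just needs to be applied consistently to both models.)

    def aicc(aic, n, k):
        """Corrected AIC: plain AIC plus the small-sample penalty 2k(k+1)/(n-k-1)."""
        return aic + 2.0 * k * (k + 1) / (n - k - 1)

    # Hypothetical numbers: 50 annual values, 3 parameters vs. 5 parameters.
    print(aicc(120.0, 50, 3), aicc(117.0, 50, 5))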

Caveats are required because the sample is inhomogeneous, with the number of glaciers included in the yearly average changing substantially from year to year. And as I said before, I have no knowledge of the geographic distribution of the glaciers forming the average. So this information must be considered suggestive rather than conclusive. Nonetheless, a qualitative change in glacier mass balance is indicated by both the entire sample and the representative sample, right around 2002, which is suggestive of the effect of climate change and certainly merits further study. It also illustrates the great value, one might even say the necessity, of the subject of Mauri’s original post on RealClimate: the construction of a global glacier index.

Categories: Global Warming

20 responses so far ↓

  • John Mashey // February 6, 2009 at 5:36 pm

    Thanks. Great material.

    A presentation (or Tufte-esque) question:
    when I look at the version with the step function, I think:

    a) I feel pretty good about the slope of the 1958-2001 part, given the number of data elements.

    b) I feel far less confident of the 2002-2007 part, given the small number of elements and the variability.

    The question is: is there a good, economical way to show this graphically? I’m particularly thinking of audiences that are unaccustomed to error bars, and get confused by seeing similar, definite lines on charts where there is no visual indication of uncertainty.

    [Response: Hmmm... of course I thought of error bars, but then I read the part about "unaccustomed to error bars." Maybe a solid line to represent the linear trend/average value, and dashed lines to represent the 95% confidence interval (like this graph). Also, I always try to use black for data and some other color (red seems to be the most visually obvious) for fits/models/etc.]
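
    (A minimal matplotlib sketch of that suggestion, with placeholder arrays rather than the glacier data: black points for the data, a solid red line for the fit, dashed red lines for the 95% confidence band.)

        import numpy as np
        import statsmodels.api as sm
        import matplotlib.pyplot as plt

        years = np.arange(1958, 2008)
        mb = np.random.randn(years.size)                    # placeholder data

        X = sm.add_constant(years)
        band = sm.OLS(mb, X).fit().get_prediction(X).summary_frame(alpha=0.05)

        plt.plot(years, mb, "k.", label="data")             # black for data
        plt.plot(years, band["mean"], "r-", label="fit")    # red for the model
        plt.plot(years, band["mean_ci_lower"], "r--")       # dashed 95% CI
        plt.plot(years, band["mean_ci_upper"], "r--")
        plt.legend()
        plt.show()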

  • Bob North // February 6, 2009 at 6:58 pm

    Tamino - good and interesting post. As to what John is asking for, how about using a solid line for the 1958-2001 trend and a dashed or dotted line for the 2002-2007 part? I have used such an approach when developing contour maps, to graphically illustrate the lesser degree of certainty in areas with sparser control.

  • tortoise // February 6, 2009 at 7:25 pm

    I would vote for error bars. I think most reasonably intelligent readers can learn to interpret them pretty quickly even if they’re not already familiar with them, and I like having a visual indication of the range of plausible values. Tamino’s 95%CI dashed-line idea works, too. I would prefer either to simply noting somehow that we’re less confident about a certain part of the graph.

  • Frank O'Dwyer // February 6, 2009 at 8:16 pm

    John Mashey,

    “The question is: is there a good, economical way to show this graphically? ”

    I don’t know how easy it would be to do, but I’ve seen graphs done with a thicker line where the thickness represents the uncertainty, and with a fade/blur so that it is sharper in the centre and less saturated/focussed at the edges. This would be a pretty nice visual representation.
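
    (One rough way to get that effect with standard tools, sketched here with placeholder numbers: stack several uncertainty bands of increasing width and decreasing opacity, so the band fades toward its edges.)

        import numpy as np
        import matplotlib.pyplot as plt

        # Placeholder fitted trend and standard error; substitute real fit results.
        years = np.arange(1958, 2008)
        mu = -0.01 * (years - 1958)
        se = np.full(years.size, 0.15)

        # Wider bands drawn fainter: sharp near the centre, fading at the edges.
        for nsig, alpha in [(1, 0.35), (2, 0.20), (3, 0.10)]:
            plt.fill_between(years, mu - nsig * se, mu + nsig * se,
                             color="red", alpha=alpha, linewidth=0)
        plt.plot(years, mu, "r-")
        plt.show()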

  • Hank Roberts // February 6, 2009 at 8:36 pm

    http://www.wunderground.com/education/ricky/05.16.jpg

    http://www.wunderground.com/hurricane/2007/ipcc2007_1850-2005.png

    (gray fade for confidence)

    http://www.nexyad.net/HTML/Res/e-book-tutorial-statistics/MeanEstimationAmongCardinal.gif

    (upper and lower lines)

    Ironically, Google Image searches turn up far more charts on no-it-ain’t-so sites than science sites. Something about convincing people with pictures …

  • David B. Benson // February 6, 2009 at 11:08 pm

    Tamino — As I read it, the step model shown in the graph has four parameters:

    two for the linear trend part;
    one for the date of the step;
    one for the flat value after the step.

    Is this right?

    [Response:Yes.]
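
    (For concreteness, one hypothetical way to write that four-parameter model down; the break date is treated as a parameter, in practice chosen by scanning candidate years.)

        import numpy as np

        def step_model(t, slope, intercept, t_break, level_after):
            """Linear trend before t_break, a constant level from t_break on."""
            t = np.asarray(t, dtype=float)
            return np.where(t < t_break, intercept + slope * t, level_after)

        # Purely illustrative numbers, not fitted values.
        print(step_model([1990, 2000, 2005], -10.0, 19500.0, 2002, -700.0))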

  • John Mashey // February 6, 2009 at 11:19 pm

    Thanks to all so far. I have seen:

    1) The bare lines.
    2) Lines with standard error bars. [as in AR4 TS, Figure TS.18 on sea level]
    3) Lines with gray zones, as per the version of MBH in the TAR (Figure 5 of SPM).
    4) Several shades of gray, as in SRES (TAR TS, Figure 22).
    5) Explicit lines, as in examples given.
    6) Combinations, as in AR4’s Figure 6.10 [reconstructions for the last 1300 years], which includes (somewhat) the sort of visualization Frank discusses. Actually, that whole section (6.6, “The last 2,000 years”) has a variety of different approaches.
    7) I’ve seen cases where something started with error bars or gray zone, but lost them in some other publication.

    As per Tufte (and if you like this sort of discussion, and Tufte gives his course near you, GO. It’s worth it.), compelling visualizations are hard.

    This issue arises to me because:

    a) There’s a lot of ink/complexity in charts with an error-bar per point.

    b) Given a dark line in a grayed area, many people register the line, but not really the area. Then, given another line (as in a reconstruction), the tendency is to compare the two lines and ignore the uncertainty. I.e., people often treat lines as more precise than they are. Of course, people vary widely in their ability to tolerate ambiguity and uncertainty.

    c) I don’t know what’s ideal (although maybe somewhere there are cognitive psychologists, like I used to manage, who study this), but even if the ideal were known, it’s unclear whether standard graphing tools would do it or not, especially the variable shading approach.

    d) The target audience matters.

    e) However, a compelling chart propagates far beyond the original audience. This is especially true with the Web, but even in the old print days, we always knew that if we created a good graphic, it would get copied around, yielding free publicity…. which is why “no-it-aint-so” sites (some of whom are rooted in PR expertise) do what they do. Charts are *very* powerful … which is why “How to Lie with Charts” by Gerald Everett Jones is a useful book… (in self-defense).

    f) Anyway, a useful experiment to consider might be to occasionally show the same data several different ways and see what people think.

  • saltator // February 7, 2009 at 4:03 am

    “In both data sets, 1998 shows extreme negative mass balance.”

    Possibly as a result of the super El Nino in that year?

  • Pomatomus // February 7, 2009 at 4:57 am

    [edit]

    [Response: It looks like you're Richard Steckis (aka "saltator"), since you have the same IP address. Why are you posting as a "sock puppet"?]

  • Pomatomus // February 7, 2009 at 5:14 am

    saltator is my wordpress name. I was logged into wordpress at the time.

    Just still want to contribute. More sensibly this time.

  • Pomatomus // February 7, 2009 at 5:16 am

    I have also had problems with my employer being worried that people will associate my real name with them and mistake my comments as theirs.

  • Allen63 // February 7, 2009 at 10:07 am

    If one is “actively looking” for some evidence (however slight) of steps, then one might say that four steps exist.

    Start to c 1957, 1957 to c 1975, 1975 to c 1998, and 1998 to present.

    One might look for steps simply because: if they exist (even in weak form), that may say something about causes and effects. Consequently, the potential steps may have more or less “statistical significance,” but also a significance in how one might approach thinking about GW cause and effect. Obviously, it’s got me thinking.

    Anyhow, yours is a worthwhile value added analysis.

  • Sekerob // February 7, 2009 at 10:51 am

    The Calderone glacier is nearby to me, so I will be most pleased to re-visit and provide an update. It was 19°C here yesterday, the 2nd day in a row, in winter, 25 clicks from the ski slopes, with the various ski lifts in the region having idled since the year they were put up, before 1998! Remove the 1998 effect, offset by a good La Niña in the following 2 years, and you find unabated temperature rise. And RSS data for January continue to point that way.

    The Med is interconnected with ENSO, btw… at an interval of approximately 3.6 years, according to a number of studies I have seen referenced, and read.

  • dko // February 8, 2009 at 12:50 am

    Hey, there’s that 3.6-year cycle again!

    My speculation is that 3.6 years is the average life expectancy of a groundhog. I’ve been reviewing the prognostications of Punxsutawney Phil from 1886 to 2008 and the years he predicted six more weeks of winter were 0.15 C cooler than the years where he predicted an early spring (no shadow). (HadCRU data)

    I’ll leave the hard math stuff for Tamino.

  • Ray Ladbury // February 8, 2009 at 3:15 am

    Allen63,
    It is not a matter of “looking” for steps. Each step makes the model more complicated (i.e. it adds more parameters). The improvement a complicated model gives has to be exponential in the increase in parametric complexity.
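
    (Spelled out in AIC terms: since AIC = 2k - 2 ln L, adding dk parameters only pays off if the maximized likelihood improves by more than a factor of exp(dk). A quick illustration:)

        import numpy as np

        # Required likelihood ratio for AIC to favour a model with dk extra parameters.
        for dk in (1, 2, 4):
            print(dk, np.exp(dk))    # roughly 2.7, 7.4, and 55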

  • Sekerob // February 8, 2009 at 11:28 am

    So, dko are you inferring that Phil’s shadow tells him he’s about to pass away? This 6 week winter shadow tale goes way way back, long before Phil migrated to the US. Phil would be put to the stake here had he cast a shadow. 19C again… I’m on the shady shielded side of the Apennines… LoL

  • Tenney Naumer // February 8, 2009 at 7:02 pm

    Sekerob, can you point me to those studies connecting the ENSO with the Mediterranean?
    Appreciate it.

  • dko // February 8, 2009 at 8:18 pm

    Hmm…we’re working with sparse data, Sekerob, so I could only speculate at Phil’s degree of mortal self-awareness. He is not answering questions at the moment — so he may be in the field conducting research. Or back in his burrow asleep.

    He does have an official Web site (though, thankfully, no blog) where his fans claim 100% accuracy. You can imagine the pressure.

    Frankly, I suspect there has been a succession of Phils over the years, owing to losses from predation, vehicle traffic, and local varmint hunters. (The somber ceremony, deep in the burrow, must resemble the new Phantom assuming his father’s mantle in the Skull Cave.)

    At any rate, Phil’s pronouncements over the years have been better than TSI at predicting global temperature anomaly. I think some Phils have been better than others, though, giving rise to an approximate 3.6-year step function.

    For 2009, Phil saw his shadow, which portends a cool year with no super-El Nino and the UK paralyzed by a cm of snow. Meanwhile, Eli sees less ice at the poles, leaving one to wonder whether Phil is on the payroll of the Heartland Institute or has signed Senator Inhofe’s list. (Someone really should check.)

  • Eli Rabett // February 9, 2009 at 9:13 pm

    Allen raises an important point. Statistics says little about mechanism, and perforce, after finding a statistical result, one should delve further into mechanism.

    Statistically, there should be enough data to say something about the global distribution of the glacial mass balance, and that by itself might say more about the mechanism (for example, arctic vs. antarctic vs. tropical (there are a few of those) vs. mid-latitude, etc.).

  • Hank Roberts // February 15, 2009 at 6:00 am

    Glacier decline between 1963 and 2006 in the Cordillera Real, Bolivia

    The volume changes of 21 glaciers in the Cordillera Real have been determined between 1963 and 2006 using photogrammetric measurements. These data form the longest series of mass balances obtained with such accuracy in the tropical Andes. Our analysis reveals that temporal mass balance fluctuations are similar, revealing a common response to climate over the entire studied region. The mass of these glaciers has clearly been decreasing since 1975 without any significant acceleration of this trend over recent years. We have found a clear relationship between the average mass balance of these glaciers as a function of exposure and altitude. From this relationship, the ice volume loss of 376 glaciers has been assessed in this region. The results show that these glaciers lost 43% of their volume between 1963 and 2006, essentially over the 1975–2006 period and 48% of their surface area between 1975 and 2006.

    Received 6 October 2008; accepted 18 December 2008; published 11 February 2009.

    Citation: Soruco, A., C. Vincent, B. Francou, and J. F. Gonzalez (2009), Glacier decline between 1963 and 2006 in the Cordillera Real, Bolivia, Geophys. Res. Lett., 36, L03502, doi:10.1029/2008GL036238.
