Open Mind

On the Edge

January 16, 2007 · 8 Comments

One of the most important things to know about earth’s climate is a quantity called climate sensitivity. Actually there are two uses of this term. One is the temperature change caused by a change in climate forcing: if we increase the heat budget at earth’s surface by, say, one watt per square meter (1 W/m^2), how much will temperature rise? The other is the temperature change caused by a doubling of CO2 in the atmosphere. This can be called dT2x, and that’s the one I’d like to discuss.
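The two definitions are linked by the radiative forcing of a CO2 doubling. A widely used approximation (the simplified expression of Myhre et al. 1998, not something derived in this post) puts the forcing at 5.35 ln(C/C0) W/m^2, or about 3.7 W/m^2 for a doubling; multiply that by the per-W/m^2 sensitivity and you get dT2x. A minimal sketch in Python, with the per-W/m^2 value chosen purely for illustration:

```python
import math

# Simplified CO2 forcing formula (Myhre et al. 1998): F = 5.35 * ln(C / C0) W/m^2
def co2_forcing(c_ppmv, c0_ppmv):
    """Radiative forcing (W/m^2) from changing CO2 from c0_ppmv to c_ppmv."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

f_2x = co2_forcing(560.0, 280.0)   # forcing from doubling CO2
sensitivity_per_wm2 = 0.8          # illustrative value only, deg C per W/m^2
dT2x = sensitivity_per_wm2 * f_2x  # sensitivity to doubled CO2

print(f"Forcing from doubled CO2: {f_2x:.2f} W/m^2")  # ~3.7 W/m^2
print(f"Implied dT2x: {dT2x:.1f} C")                   # ~3.0 C for 0.8 C per W/m^2
```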

If there weren’t any feedbacks in the climate system, it would be straightforward to compute: dT2x = 1.2°C. But there are feedbacks. For example, there’s ice-albedo feedback. Ice and snow are highly reflective, bouncing much of the incoming solar energy right back into space. Global warming means less snow and ice, means less solar energy reflected back to space, means more solar energy absorbed by earth, means even more warming. This is an example of a positive feedback; “positive” doesn’t mean “good,” it simply means that the feedback tends to amplify a change. Negative feedbacks tend to suppress a change. Another example is water vapor feedback. Water vapor is also a greenhouse gas; warming means more water vapor in the air, means more greenhouse effect from water vapor, means even more warming — another positive feedback.

It turns out that most of the feedbacks in the climate system are positive, so man-made global warming from increased CO2 (and other greenhouse gases) tends to be amplified. We expect that the climate sensitivity to doubling CO2 (dT2x) is greater than 1.2°C. How much greater is a hotly debated topic.
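The usual textbook way to express this amplification is dT = dT0 / (1 − f), where dT0 is the no-feedback response and f is the combined feedback factor. Here’s a minimal sketch (the feedback factors are illustrative numbers, not model output):

```python
# Standard feedback amplification: dT = dT0 / (1 - f), where dT0 is the
# no-feedback response and f is the combined feedback factor (0 < f < 1 amplifies).
def amplified_response(dT0, f):
    """Equilibrium warming given no-feedback warming dT0 and net feedback factor f."""
    if f >= 1.0:
        raise ValueError("f >= 1 would mean a runaway feedback")
    return dT0 / (1.0 - f)

dT0 = 1.2  # no-feedback sensitivity to doubled CO2, deg C
for f in (0.0, 0.3, 0.5, 0.6):
    print(f"feedback factor {f:.1f} -> dT2x = {amplified_response(dT0, f):.1f} C")
# prints 1.2, 1.7, 2.4 and 3.0 C respectively
```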


The best estimates are based on computer model simulations which include things like changes in ice and snow cover and total atmospheric humidity. The evidence indicates that dT2x is probably between 1.5 and 4.5°C, with the “best estimate” being 3°C. One of the difficulties in estimating it is that computer models (global climate models, or GCMs) take a long time to run, even on the fastest computers. The Japanese have built one of the world’s fastest supercomputers, the “Earth Simulator,” specifically to run computer models of the whole planet. But a single run can take quite a long time.

And when we estimate quantities by computer simulation, it’s far better to use more than one computer run. Instead of running one simulation and computing the change, we can run dozens, even hundreds of simulations and compute the change for each. This gives us an ensemble of results, and the average over the entire ensemble gives us a considerably better estimate than any single model run. Also, the amount of variation in the results gives us some idea of how much the real-world outcome might differ, thanks to uncertain factors like chaotic dynamics.
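To make the ensemble idea concrete, here is a toy sketch. The “model” below is just a random-number stand-in for a real GCM run, but the bookkeeping (ensemble mean, spread, and the shrinking uncertainty of the mean) is the same:

```python
import random
import statistics

def toy_model_run(true_sensitivity=3.0, noise=0.8):
    """Stand-in for one expensive GCM run: the 'true' answer plus run-to-run scatter."""
    return random.gauss(true_sensitivity, noise)

random.seed(0)
ensemble = [toy_model_run() for _ in range(200)]

mean = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)
stderr = spread / len(ensemble) ** 0.5  # uncertainty of the ensemble mean

print(f"ensemble mean dT2x: {mean:.2f} C")
print(f"run-to-run spread:  {spread:.2f} C")
print(f"std. error of mean: {stderr:.2f} C  (shrinks as the ensemble grows)")
```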

What if we could run thousands of simulations for our ensemble? Wouldn’t that be great? Unfortunately, that’s beyond the capacity of even the fastest computers. But it’s not beyond the reach of a strategy called distributed computing. The idea is to share the workload among a large number of computers. In fact, certain tasks can be run on personal computers in private homes; you download the program and data, it runs as a “background process,” and it reports its results to a central computer via the internet.
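The real project farms work units out to volunteer PCs over the internet, as described above; as a loose single-machine analogy only, the same “split the runs across workers and gather the results” pattern looks like this with Python’s multiprocessing module, using a throwaway stand-in for the model:

```python
from multiprocessing import Pool

def model_run(params):
    """Placeholder for one simulation: here just a cheap function of its parameters."""
    entrainment, rain_threshold = params
    return 3.0 + 2.0 * entrainment - 1.5 * rain_threshold  # fake 'sensitivity'

if __name__ == "__main__":
    # Each tuple is one parameter combination handed out as a work unit.
    tasks = [(e / 10.0, r / 10.0) for e in range(5) for r in range(5)]
    with Pool(processes=4) as pool:            # 4 local workers instead of volunteer PCs
        results = pool.map(model_run, tasks)   # work is farmed out and gathered back
    print(f"{len(results)} runs completed; mean = {sum(results) / len(results):.2f}")
```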

This has been done, in a project called “climateprediction.net.” It has produced the only ensemble of climate simulations with more than 2,000 runs. The results were published in a paper titled “Uncertainty in predictions of the climate response to rising levels of greenhouse gases,” by Stainforth et al. (2005, Nature, vol. 433, p. 403).

For the most part, the experiment confirmed previous results: for most model runs dT2x clustered around 3.4°C, certainly no surprise. But one very interesting and unexpected result emerged. For a fraction (4.2%) of the model runs, the sensitivity dT2x was very high: more than 8°C. For some model runs, dT2x was as high as 11.5°C.

[Figure: histogram of climate sensitivity (dT2x) across the climateprediction.net ensemble, from Stainforth et al. (2005).]
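The headline numbers are simple bookkeeping once you have the per-run sensitivities: count the runs above a threshold and divide by the ensemble size. A sketch with synthetic numbers (not the actual Stainforth et al. data):

```python
import random

# Synthetic stand-in for the per-run sensitivities (NOT the actual Stainforth data):
# most runs near 3.4 C, plus a small tail of high-sensitivity runs.
random.seed(1)
sensitivities = [random.gauss(3.4, 1.0) for _ in range(1900)] + \
                [random.uniform(8.0, 11.5) for _ in range(100)]

high = [s for s in sensitivities if s > 8.0]
fraction_high = len(high) / len(sensitivities)

print(f"runs: {len(sensitivities)}")
print(f"fraction with dT2x > 8 C: {fraction_high:.1%}")
print(f"maximum dT2x in ensemble: {max(sensitivities):.1f} C")
```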

Climate models simulate certain processes by “parameterization.” Since the researchers had so many model runs, they were able to experiment with different parameter values, and they noted that changing some of the parameters affected the likelihood of getting a very high dT2x. From the paper:

If all perturbations to one parameter (the cloud-to-rain conversion threshold) are omitted, the red histogram in Fig. 2a is obtained, with a slightly increased fraction (4.9%) of model versions >8K. If perturbations to another parameter (the entrainment coefficient) are omitted, the blue histogram in Fig. 2a is obtained, with no model versions >8K.

Is climate sensitivity dT2x really as high as 11.5°C? It’s very unlikely. Even in this experiment it was a rare occurrence, and other research tends to place dT2x squarely in the IPCC range of 1.5 to 4.5°C. But although it’s very unlikely, it’s not impossible; it’s just way out on the edge of possibility.

But we’d better hope that doubling CO2 doesn’t create that much temperature change. A rise of 11.5°C is a rise of 20.7°F. That much warming would do more than dry up the American midwest and melt the Greenland and West Antarctic ice sheets. It would lead to a genuine disaster of epic proportions.
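(For the unit conversion: a temperature change of ΔT Celsius degrees is ΔT × 9/5 Fahrenheit degrees; the +32 offset doesn’t apply to differences.)

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *change* (not an absolute temperature) from C to F."""
    return delta_c * 9.0 / 5.0

print(delta_c_to_f(11.5))  # 20.7
```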

This is a worst-case scenario, and again, it’s way out on the edge of possibility — extremely unlikely. But we’re nearly sure to see CO2 levels reach double their pre-industrial levels by 2100, maybe even by 2050. So sooner or later, we’ll find out. Here’s hoping we don’t learn the hard way.


Categories: Global Warming

8 responses so far ↓

  • Glen Raphael // January 17, 2007 at 12:27 am | Reply

    I remember reading the claim (on some skeptic-friendly forum) that a small fraction of early simulation runs from ClimatePrediction.net actually produced results in which the temperature decreased with increased CO2, and that these results were discarded. I haven’t been able to verify the claim. Do you know anything about it?

    [Response: In fact it’s true. From Stainforth et al.:

… Six of these model versions show a significant cooling tendency in the doubled-CO2 phase. This cooling is also due to known limitations with the use of a simplified ocean (see Supplementary Information) so these simulations are excluded from the remaining analysis of sensitivity.]

  • Eli Rabett // January 17, 2007 at 2:41 am | Reply

    The issue becomes what is the probability assigned to each set of parameters that are used. You cannot assume that all parameters are equally probable a priori.

  • britandgrit // January 17, 2007 at 1:47 pm | Reply

    Hi tamino,

    As I understand it, the CO2 level won’t double for another 100 years or so at the current rate of increase. Do I have my numbers right on this?

    the Grit

    [Response: Basically, yes.

I should mention that discussion of the effect of doubling CO2 on the real world is generally about the consequences of reaching double pre-industrial levels. Pre-industrial was 280 ppmv; we're already about 36% above that, at a mean level of around 382 ppmv (it also fluctuates in an annual cycle). The current rate of increase is about 2.1 ppmv per year. At that rate we'll reach double pre-industrial (560 ppmv) a little before 2100.

But realistic forecasts expect the rate itself to increase, because it has been increasing (with a minor hiccup around 1992) for as long as it's been recorded at Mauna Loa. And that's just from industrial activity! If the Siberian permafrost all melts, it alone could double atmospheric CO2.]
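As a quick check of the dates in the response above, under the stated assumptions (382 ppmv in 2006, a constant 2.1 ppmv per year):

```python
def year_reaching(target_ppmv, current_ppmv=382.0, current_year=2006, rate=2.1):
    """Year CO2 reaches target_ppmv, assuming a constant growth rate in ppmv/yr."""
    return current_year + (target_ppmv - current_ppmv) / rate

print(f"double pre-industrial (560 ppmv): ~{year_reaching(560.0):.0f}")  # ~2091
```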

  • Chris O'Neill // January 17, 2007 at 4:39 pm | Reply

I think it’s also worth keeping in mind the CO2 level that represents half a doubling (i.e. the square root of 2 times the pre-industrial level), which is 396 ppmv. The ultimate temperature rise caused by this CO2 level is half the ultimate rise caused by a doubling of CO2. Given that the level is now (averaged over 2006) 382 ppmv and rising at 2.1 ppmv per annum, the half-doubling of CO2 is expected to occur by 2013. This is just around the corner industrially, so it’s pretty much inevitable. Let’s hope the ultimate temperature rise from this half-doubling of CO2 (probably around 1.5 °C) doesn’t do too much damage.
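A quick check of the figures in this comment, using the same assumptions (280 ppmv pre-industrial, 382 ppmv in 2006, 2.1 ppmv per year, dT2x = 3 °C). Because forcing is logarithmic in CO2, multiplying the concentration by √2 gives exactly half the forcing of a doubling, hence half the eventual warming:

```python
import math

pre_industrial = 280.0
half_doubling = pre_industrial * math.sqrt(2.0)            # ~396 ppmv

# Logarithmic forcing: sqrt(2) x CO2 gives half the forcing of 2 x CO2.
forcing_ratio = math.log(math.sqrt(2.0)) / math.log(2.0)   # exactly 0.5

year = 2006 + (half_doubling - 382.0) / 2.1                # ~2013 at 2.1 ppmv/yr
eventual_warming = forcing_ratio * 3.0                     # half of a 3 C doubling response

print(f"half-doubling level: {half_doubling:.0f} ppmv, reached ~{year:.0f}")
print(f"eventual warming: ~{eventual_warming:.1f} C")
```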

  • Gil Pearson // January 17, 2007 at 8:08 pm | Reply

    Tamino,

    I posted a slightly edited version of this question for Hans Erren yesterday. I don’t think he noticed it, but perhaps it is appropriate to ask you this question on this thread.

I am intrigued by the recent paper on ocean heat content (Lyman 2006). As I understand it, measurements of ocean heat from the Argo network will give us an excellent empirical way of determining the most probable CO2 climate sensitivity. Ocean heat content seems to be a direct indicator of the radiative imbalance without confounding surface effects such as weather. My problem is that I do not know how to translate the latest apparent trend of radiative imbalance, expressed in watts per square meter, into an inferred CO2 sensitivity. Perhaps you can help me here.

Hansen, in his 2005 paper, noted an accelerating trend in ocean heat that indicated a radiative imbalance of 0.85 W/m^2 at the end of the 1993 to 2003 period. He said that this implied a climate sensitivity of 2.7 degrees, close to the 3.0 degrees predicted by the models. He calculated that once “heat in the pipeline” was taken into account, the terrestrial temperature record indicated a similar result. Thus ocean heat was the “smoking gun” of AGW.

We now have two further years of data from Lyman, and if it is correct, the ocean cooled rather drastically from 2004 to 2005. There is still a net warming from 1993 to 2005; however, if the Lyman results are correct, the only trend you can ascribe to AGW is now 0.33 W/m^2.

My question: if we assume that the IPCC TAR forcing definitions are correct, what does all this imply about climate sensitivity? I think there is a logarithmic translation between energy imbalance in watts and equilibrium temperature. I realize that the Lyman data are new; however, assuming they are confirmed, and using Hansen’s own methods, wouldn’t the sensitivity to 2xCO2 scale to something like 1 to 1.5 degrees?

    Thanks and Regards, Gil Pearson

    [Response: Outstanding question. Thank you.

    I have only slight familiarity with this issue; I'll revisit the literature to determine how Hansen uses radiative imbalance to estimate climate sensitivity. This is an excellent topic for a post, so I'll probably answer with one some time in the next few days (believe it or not, I have a job!). Stay tuned, I'll give ya the best I can.

    Two caveats: 1. I've often said that I'm not a climate scientist -- I'm a mathematician with a strong background in physics and astronomy. 2. For readers who might be intimidated by the complexity of the question, don't be. I welcome both difficult, complicated questions (like this excellent one), and those that are so simple that you might be afraid to ask. Don't hesitate, and I guarantee that any comments which attempt to intimidate or ridicule basic questions will simply not appear.]

    [Response again: I've posted on the topic here.]
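While that post was in the works, here is one heavily simplified version of the energy-balance reasoning, just to show the shape of the calculation. The forcing and observed-warming numbers below are illustrative assumptions, not Hansen's actual inputs: at equilibrium dT_eq = lambda * F, while the warming observed so far reflects only the realized part, dT_obs = lambda * (F - N), where N is the ocean-measured imbalance.

```python
def sensitivity_2x(dT_obs, forcing, imbalance, f_2x=3.7):
    """Infer dT2x (deg C) from observed warming (deg C), net forcing and
    remaining radiative imbalance (both W/m^2)."""
    lam = dT_obs / (forcing - imbalance)  # realized warming per unit realized forcing
    return lam * f_2x                     # scale up to the forcing of doubled CO2

# Illustrative inputs: ~0.7 C observed warming, ~1.8 W/m^2 net forcing,
# 0.85 W/m^2 imbalance still "in the pipeline".
print(f"implied dT2x: {sensitivity_2x(0.7, 1.8, 0.85):.1f} C")  # roughly 2.7 C
```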

  • britandgrit // January 17, 2007 at 8:26 pm | Reply

    One more question if I may, where can I get the figures for how much mass of carbon we’re talking about?

    the Grit

    [Response: there's an excellent graph of the numbers here. It also gives URLs (not links, but you can cut-and-paste the URL) for the data sources.]

    [Response again: I realized that the graph linked to shows the carbon excess, relative to the average over the last 1000 years or so. You can take the "C mass in atm" numbers from that graph, and add about 480 Gt (480000 Mt, to use their units), to get the atmospheric total.]

  • Hank Roberts // January 18, 2007 at 2:50 am | Reply

    This may help on Lyman (found using Google Scholar)
    http://asd-www.larc.nasa.gov/ceres/STM/2006_10/0610261650Tak.pdf

  • Steve Bloom // January 18, 2007 at 3:33 am | Reply

    In response to Gil’s question, the problem with the Lyman et al results is very much the last two years of data (2004-5). If they’re correct about the cooling, then sea level must have dropped 3 mm in each of those years. At the same time, they agree that measured sea level has actually gone up 3 mm in each of those years. For both to be true, 6 mm of melting is required for each year. So far so good, but the problem is that 6 mm/yr is a whole lot of melting and is contradicted by measurements. Measured sources are maybe 2 mm/yr, and even throwing in various fudge factors it’s hard to get to much more than 3 mm/yr (probably over-generous since Josh Willis thinks it’s only 2 mm/yr). So, there’s a minimum 3 mm/yr discrepancy, which is very large indeed. In sum, it’s not really appropriate to try to draw a lot of conclusions from Lyman et al until this mass imbalance is understood.

There was extensive discussion of this matter in this RealClimate post, these two posts (here and here) over on RP Sr.’s blog, plus it may be instructive to look at the relevant abstracts (here, here, here and here) from the December AGU meeting. Finally, since it’s a little hard to find down in the comments of one of RP Sr.’s posts, here’s the link to the Lyman et al EGU poster containing the 2004-5 numbers I discussed above.
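The budget argument in the comment above is one line of bookkeeping: the steric (thermal-expansion) change plus the mass (melt) contribution must add up to the measured rise. With the numbers quoted:

```python
# Sea-level budget for 2004-2005, using the numbers quoted in the comment above (mm/yr).
measured_rise = 3.0    # altimetry
steric_change = -3.0   # implied by the reported ocean cooling
generous_melt = 3.0    # upper-end estimate of measured melt contributions

required_melt = measured_rise - steric_change   # 6 mm/yr needed to close the budget
discrepancy = required_melt - generous_melt     # at least 3 mm/yr unexplained

print(f"melt required: {required_melt:.0f} mm/yr, unexplained: {discrepancy:.0f} mm/yr")
```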
