

24 Mar 2006

Climate sensitivity: Plus ça change…

Filed under: — gavin @ 7:43 pm

Almost 30 years ago, Jule Charney made the first modern estimate of the range of climate sensitivity to a doubling of CO2. He took the average of two climate models (2ºC from Suki Manabe at GFDL, 4ºC from Jim Hansen at GISS) to get a mean of 3ºC, added half a degree of error margin on either side of the model range, and produced the canonical 1.5-4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report. Admittedly, this was not the most sophisticated calculation ever, but individual analyses based on various approaches have not generally been able to improve substantially on this rough estimate, and indeed, have often suggested that quite high numbers (>6ºC) were difficult to completely rule out. However, a new paper in GRL this week by Annan and Hargreaves combines a number of these independent estimates to come up with the strong statement that the most likely value is about 2.9ºC with a 95% probability that the value is less than 4.5ºC.

Before I get into what the new paper actually shows, a brief digression...

We have discussed climate sensitivity frequently in previous posts and we have often referred to the constraints on its range that can be derived from paleo-climates, particularly the last glacial maximum (LGM). I was recently asked to explain why we can use the paleo-climate record this way when it is clear that the greenhouse gas changes (and ice sheets and vegetation) in the past were feedbacks to the orbital forcing rather than imposed forcings. This could seem a bit confusing.

First, it probably needs to be made clearer that, generally speaking, radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales - the atmosphere and upper ocean on multi-decadal periods. This corresponds in scope (not coincidentally) to the atmospheric component of General Circulation Models (GCMs) coupled to (at least) a mixed-layer ocean. For this subsystem, many of the longer term feedbacks in the full climate system (such as ice sheets, vegetation response, the carbon cycle) and some of the shorter term bio-geophysical feedbacks (methane, dust and other aerosols) are explicitly excluded. Changes in these excluded features are therefore regarded as external forcings.

Why this subsystem? Well, historically it was the first configuration in which projections of future climate change could be usefully made. More importantly, this system has the very nice property that the global mean of an instantaneous forcing calculation (the difference in the radiation fluxes at the tropopause when you change greenhouse gases or aerosols or whatever) is a very good predictor of the eventual global mean response. It is this empirical property that makes radiative forcing and climate sensitivity such useful concepts. For instance, this allows us to compare the global effects of very different forcings in a consistent manner, without having to run the model to equilibrium every time.
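To make the bookkeeping concrete, here is a minimal sketch of how the construct is used in practice. The 3.7 W/m2 forcing for 2xCO2 and the 3ºC-per-doubling sensitivity are illustrative round numbers, not values from any particular model or from the papers discussed here:

    # Minimal sketch: equilibrium global mean warming estimated as forcing times a
    # sensitivity parameter, without re-running a model to equilibrium each time.
    F_2XCO2 = 3.7        # W/m2, approximate forcing for doubled CO2 (illustrative)
    SENSITIVITY = 3.0    # deg C of equilibrium warming per doubling (illustrative)

    def equilibrium_warming(forcing_wm2):
        """Estimate equilibrium global mean warming (deg C) for a given forcing."""
        return SENSITIVITY * forcing_wm2 / F_2XCO2

    # Compare very different forcings on a common scale (values are placeholders):
    for name, forcing_wm2 in [("2xCO2", 3.7), ("volcanic aerosols", -3.0)]:
        print(f"{name}: {forcing_wm2:+.1f} W/m2 -> {equilibrium_warming(forcing_wm2):+.1f} deg C")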

To see why a more expansive system may not be as useful, we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation caused by orbital variations. However, in the global mean, these changes sum to zero (or very close to it), and so the global mean sensitivity to global mean forcings is huge (or even undefined) and not very useful for understanding the eventual ice sheet growth or carbon cycle feedbacks. The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks, but that is only starting to be done in practice. Most discussions of climate sensitivity in the literature implicitly assume that these are fixed.

So in order to constrain the climate sensitivity from the paleo-data, we need to find a period during which our restricted subsystem is stable - i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume that the radiation is pretty much balanced. The last glacial maximum (LGM) fits this restriction very well, and so is frequently used as a constraint. This approach goes back at least to Lorius et al (1991) - when we first had reasonable estimates of the greenhouse gases from the ice cores - and extends to an upcoming paper by Schneider von Deimling et al, who test a multi-model ensemble (1000 members) against LGM data and conclude that models with sensitivities greater than about 4.3ºC can't match the data. In posts here, I too have used the LGM constraint to demonstrate why extremely low (< 1ºC) or extremely high (> 6ºC) sensitivities can probably be ruled out.
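As a back-of-the-envelope illustration of how this constraint works (round, illustrative numbers only: roughly a 5ºC global cooling at the LGM against roughly -7 W/m2 of combined ice sheet, greenhouse gas, vegetation and dust forcing, both with substantial uncertainty):

    \Delta T_{2\times} \;\approx\; \frac{\Delta T_{\mathrm{LGM}}}{\Delta F_{\mathrm{LGM}}} \times F_{2\times\mathrm{CO_2}} \;\approx\; \frac{5\ \mathrm{K}}{7\ \mathrm{W\,m^{-2}}} \times 3.7\ \mathrm{W\,m^{-2}} \;\approx\; 2.6\ \mathrm{K}

A sensitivity of 6ºC would then require either a much larger cooling or a much smaller forcing than the reconstructions comfortably allow, which is the sense in which the LGM rules out the extremes.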

In essence, I was using my informed prior beliefs to assess the likelihood of a new claim that climate sensitivity could be really high or low. My understanding of the paleo-climate record implied (to me) that the wide spread of results (from, for instance, the first reports of the climateprediction.net experiment) was a function of their methodology and not a possible feature of the real world. Specifically, if one test has a stronger constraint than another, it's natural to prefer the stronger constraint; or in other words, an experiment that produces looser constraints doesn't make previous experiments that produced stronger constraints invalid. This is an example of 'Bayesian inference'. A nice description of how Bayesian thinking is generally applied is available at James Annan's blog (here and here).

Of course, my application of Bayesian thinking was rather informal, and anything that can be done in such an arm-waving way is probably better done formally, since you get much better control over the uncertainties. This is exactly what Annan and Hargreaves have done. Bayes' theorem provides a simple formula for calculating how much each new bit of information improves (or not) your prior estimates, and this can be applied to the uncertain distribution of climate sensitivity.

A+H combine three independently determined constraints using Bayes' Theorem and come up with a new distribution that is the most likely given the different pieces of information. Specifically, they take constraints from the 20th Century (1 to 10ºC), the constraints from responses to volcanic eruptions (1.5 to 6ºC) and the LGM data (-0.6 to 6.1ºC - a widened range to account for extra paleo-climatic uncertainties) to come to a formal Bayesian conclusion that is much tighter than each of the individual estimates. They find that the mean value is close to 3ºC, with 95% limits at 1.7ºC and 4.9ºC, and a high probability that sensitivity is less than 4.5ºC. Unsurprisingly, it is the LGM data that makes very large sensitivities extremely unlikely. The paper is very clearly written and well worth reading for more of the details.
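For readers who want to see the mechanics, here is a toy sketch of the combination step. The Gaussian stand-ins below are not the likelihood functions Annan and Hargreaves actually use (theirs are skewed, with long tails at high sensitivity), so this will not reproduce their published numbers; it only illustrates how multiplying independent constraints and renormalising yields a tighter distribution than any single one:

    # Toy Bayesian combination on a grid of climate sensitivities (deg C).
    # The three Gaussians are crude stand-ins chosen so that mean +/- 2 sd
    # roughly spans the ranges quoted above; they are NOT the A+H likelihoods.
    import numpy as np

    S = np.linspace(0.0, 10.0, 2001)
    dx = S[1] - S[0]

    def gaussian(x, mean, sd):
        return np.exp(-0.5 * ((x - mean) / sd) ** 2)

    likelihoods = [
        gaussian(S, 5.5, 2.25),    # 20th century constraint: ~1 to 10
        gaussian(S, 3.75, 1.125),  # volcanic responses: ~1.5 to 6
        gaussian(S, 2.75, 1.675),  # LGM (widened): ~-0.6 to 6.1
    ]

    posterior = np.ones_like(S)
    for like in likelihoods:
        posterior *= like                  # multiply independent constraints
    posterior /= posterior.sum() * dx      # renormalise to a probability density

    mean = (S * posterior).sum() * dx
    cdf = np.cumsum(posterior) * dx
    lo, hi = S[np.searchsorted(cdf, 0.025)], S[np.searchsorted(cdf, 0.975)]
    print(f"toy combined estimate: mean ~ {mean:.1f}, 95% range ~ [{lo:.1f}, {hi:.1f}] deg C")

The point to notice is that the combined distribution is narrower than any of the individual constraints, which is the essence of the A+H result.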

The mathematics therefore demonstrates what the scientists basically thought all along. Plus ça change indeed...



125 Comments

  1. I could probably look this up, but what is 2x CO2? I mean, is that 2x the historical level or 2x the present level (or 2x some other level)? Why not simply refer to the actual ppm? Further, why are we stuck on 2x? If we don't have to wait for equilibration every time, surely it can't be too onerous to plot out the sensitivity from present levels to, say, 3x? Cognitive psychologists tell us that we deal better with discrete entities and integers, but the world (and particularly the world of probability) is more continuous... my soapbox argument against limiting discussion to values that are 'convenient'.

    [Response: It turns out not to matter (which is why it rarely gets mentioned). The forcing from CO2 is logarithmic at the concentrations we are discussing (~5.3 log(CO2/CO2_orig)). That means that any doubling (from 1x pre-industrial to 2x pre-industrial, or 1x present to 2x present) gives roughly the same forcing. Specifically, 280 to 560 ppm, or 380 to 760 ppm, are equivalent. 3xCO2 gives ~60% more warming than 2xCO2. It's always easier if people stick to a standard measure, and for good or bad we are stuck with 2xCO2 as the reference. - gavin]

    Comment by Steve Latham — 24 Mar 2006 @ 8:38 pm
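    A quick numerical check of the logarithmic relation quoted in the response above (a sketch only; the 5.35 coefficient is the Myhre et al. 1998 value, very close to the ~5.3 quoted):

      # Simplified CO2 forcing: any doubling gives the same ~3.7 W/m2,
      # and 3xCO2 gives ~ln(3)/ln(2) = 1.58x that, i.e. ~60% more.
      import math

      def co2_forcing(c, c0, alpha=5.35):
          return alpha * math.log(c / c0)   # W/m2 relative to concentration c0

      print(co2_forcing(560, 280))                          # pre-industrial doubling: ~3.7
      print(co2_forcing(760, 380))                          # present-day doubling: also ~3.7
      print(co2_forcing(840, 280) / co2_forcing(560, 280))  # 3x vs 2x: ~1.58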

  2. According to CDIAC's WWW site the average atmospheric CO2 mixing ratio at Mauna Loa was 374 ppm in 1976. Are you saying that Jule Charney's CO2 doubling (to 748) gives comparable climate sensitivity to Annan and Hargreaves' modern GCMs for a doubling to 748, or 2x today's CO2 of 375, which would be 750? I think that we need to be more explicit when talking about doubling CO2. If sustained, the recently reported accelerating rate of increase in atmospheric CO2 indicates we are likely to achieve that projected 3 degree C warming earlier than once thought.

    I guess I am surprised that with better understanding of the importance of water vapor feedback, sulfate aerosols, black carbon aerosols, more rapid than expected declines in sea ice and attendant decreases in albedo, effects of the deposition of soot and dust on snow and ice decreasing albedo, and a recognition of the importance of GHGs that were probably not considered 30 years ago, the sensitivity has changed so little over time.

    [Response: We have to distinguish the intrinsic sensitivity of the climate to a forcing (such as 2xCO2) from the actual transient response to a whole suite of forcings. While we generally use 2xCO2 as the standard for the intrinsic sensitivity, we could just as easily have used a 1% increase in total solar irradiance, or 2x CH4, or something. The answer in deg C/(W/m2) would be very similar. The aerosols don't affect the intrinsic sensitivity, but they are very important for the transient solutions, as are the other GHGs and volcanoes and solar etc. -gavin]

    Comment by Tom Huntington — 24 Mar 2006 @ 8:42 pm
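    A rough worked example of the point in the response above about expressing sensitivity per W/m2 rather than per doubling (all numbers below are round, illustrative values, not taken from the post):

      # Put two very different forcings on a common W/m2 scale and apply the
      # same intrinsic sensitivity to each. Illustrative round numbers only.
      S0 = 1366.0                              # total solar irradiance, W/m2 (approx.)
      ALBEDO = 0.3                             # planetary albedo (approx.)

      f_2xco2 = 5.35 * 0.693                   # ~3.7 W/m2 for doubled CO2
      f_solar = 0.01 * S0 * (1 - ALBEDO) / 4   # 1% TSI increase, averaged over the sphere: ~2.4 W/m2

      sens = 3.0 / f_2xco2                     # ~0.8 deg C per W/m2 if 2xCO2 gives 3 deg C
      print(f"2xCO2:  {f_2xco2:.1f} W/m2 -> {sens * f_2xco2:.1f} deg C")
      print(f"1% TSI: {f_solar:.1f} W/m2 -> {sens * f_solar:.1f} deg C")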

  3. I am a little unclear as to what exactly is included in terms of feedbacks. I would assume that a prediction of 3oC warming for 2x CO2 includes at least H2O feedbacks. Maybe it includes ice albedo feedbacks. It probably does not include methane from melting permafrost. Or I suppose it takes the view that it does not matter where the GHG's come from, such feedbacks of CO2 or methane only mean we get to 2x CO2 faster.

    In short, is it the conclusion that if one snapped one's fingers and doubled CO2, at the end of a few decades it would be 3oC+/-? What about the longer timeframe for ice sheet response and CO2 outgassing from the oceans etc?

    Comment by Coby — 24 Mar 2006 @ 9:08 pm

  4. Coby,

    I have concerns similar to those you expressed in 3. I recently reviewed an article presented at the 2006 AAAS symposium in St. Louis, MO. It seems that Mark Chandler, an atmospheric scientist at Columbia University, has similar concerns. He discussed a warming episode about 3 million years ago, in the middle Pliocene.

    Excerpts:

    "Ocean temperatures rose substantially during that warming episode - as much as 7 to 9 degrees Celsius (about 12 to 16 degrees Fahrenheit) in some areas of the North Atlantic. But scientists are puzzled. The carbon dioxide levels at that time "inferred from geochemical data" were roughly comparable to our own time, approaching 400 parts per million. Today's computer models do not predict the sort of temperature rises that occurred during the middle Pliocene, Chandler said." ...

    "You have to take some warning from the Pliocene," he said. Even in the absence of huge amounts of carbon dioxide as a forcing mechanism, he said, there still appear to be trigger points that, once passed, can produce rapid warming through feedbacks such as changes in sea ice and the reflectivity of the Earth's surface. ... Earl Lane

    Modern Lessons from Ancient Greenhouse Emissions
    http://www.aaas.org/news/releases/2006/0216greenhouse.shtml

    --
    http://groups.yahoo.com/group/ClimateArchive/
    --

    Comment by pat neuman — 24 Mar 2006 @ 10:38 pm

  5. Wouldn't it be important to compare where the Earth was with respect to Milankovich cycles and other forcings at the time?

    Comment by Steve Latham — 25 Mar 2006 @ 12:08 am

  6. Pliocene models, anyone?

    I'd love to know what they did take into account in attempting to model that period -- must include astronomical location, sun's behavior, best estimates about a lot of different conditions -- where the continents were, what the ocean circulation was doing, whether there had been a recent geological period that laid down a lot of methane hydrates available to be tipped by Pliocene warming into bubbling out rapidly. Maybe whether a prior era of subduction put a lot of carbon down into the magma for volcanos to belch out? The latter two I don't think we can model yet. But I don't know.

    I'm sure someone's addressed these in trying to model that period. Maybe our hosts can invite an author in?

    And those models didn't come up with the actual rapid warming that happened. Missed something, but what?

    Caution being -- we know it _did_ happen, then, so we know it _can_ happen, so it's conservative (overly so) to assume that what did happen won't happen.

    Put another way -- do other modelers include "whatever caused Pliocene rapid warming happening again" added to the upside error range for other models?

    Comment by Hank Roberts — 25 Mar 2006 @ 1:37 am

  7. Recommended reading:

    J.P. van der Sluijs, J.C.M. van Eijndhoven, B. Wynne, and S. Shackley, Anchoring Devices in Science For Policy: The Case of Consensus Around Climate Sensitivity, Social Studies of Science, vol 28, 2, April 1998, p. 291-323.
    http://www.chem.uu.nl/nws/www/general/personal/sluijs_a.htm

    Abstract:

    This paper adds a new dimension to the role of scientific knowledge in policy by emphasizing the multivalent character of scientific consensus. We show how the maintained consensus about the quantitative estimate of a central scientific concept in the anthropogenic climate-change field - namely, climate sensitivity - operates as an 'anchoring device' in 'science for policy'. In international assessments of the climate issue, the consensus-estimate of 1.5 degrees C to 4.5 degrees C for climate sensitivity has remained unchanged for two decades. Nevertheless, during these years climate scientific knowledge and analysis have changed dramatically. We identify several ways in which the scientists achieved flexibility in maintaining the same numbers for climate sensitivity while accommodating changing scientific ideas. We propose that the remarkable quantitative stability of the climate sensitivity range has helped to hold together a variety of different social worlds relating to climate change, by continually translating and adapting the meaning of the 'stable' range. But this emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows 'the same' concept to accommodate tacitly different local meanings. Thus the very multidimensionality of such scientific concepts is part of their technical imprecision (which is more than just analytical lack of resolution); it is also the source of their resilience and value in bridging (and perhaps reorganizing) the differentiated social worlds typical of modern policy issues. The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness.

    Comment by Roger Pielke, Jr. — 25 Mar 2006 @ 1:58 am

  8. "The forcing from CO2 is logarithmic at the concentrations we are discussing (~5.3 log(CO2/CO2_orig)."
    I've attemted to derive a formula for this. Is there a referernce available?

    [Response: IPCC TAR (or Myhre et al, 1998) -gavin]

    Comment by Graham Jackson — 25 Mar 2006 @ 5:19 am
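    For reference, the simplified expression from Myhre et al. (1998) that the response points to is

      \Delta F \;=\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},

    where C is the CO2 concentration and C_0 the reference concentration; setting C = 2 C_0 gives the familiar ~3.7 W/m2 for a doubling.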

  9. Are there some equivalent formulae for estimating the radiative forcing caused by water vapor feedback?

    Comment by Blair Dowden — 25 Mar 2006 @ 10:22 am

  10. #7 Roger, are you sure the article is not a repetition of Sokal's joke on "postmodern speech"? I mean, are you sure that the following sentences were not created by a computer program fed with some key words?
    "But this emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows 'the same' concept to accommodate tacitly different local meanings" or
    "The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness".
    HELP!!!

    [Response: Indeed it reads like Sokal's famous hoax. Climatologists would have dearly loved to narrow the uncertainty range of climate sensitivity, but until recently there has been not enough solid evidence to justify this. Neither is there reason to shift the range - most models (including large ensembles with systematic variation of uncertain parameters) give values smack in the middle of the traditional range, near 3 ºC. Finally, there is no good reason to widen the range, even though some studies have pointed to the possibility of higher climate sensitivity - but as we have discussed here, they did not provide positive evidence for a higher climate sensitivity, they merely showed that the data constraints used were weak. No doubt sociologists will have their theories on this, but my personal perception as a climatologist having worked on climate sensitivity is: the range has not changed over a long time simply because there was no clear physical evidence that would have justified such a change. -stefan]

    Comment by Georg Hoffmann — 25 Mar 2006 @ 10:29 am

  11. So how do we know that 2x pre-industrial or present CO2 (280 or 375 ppm respectively) leads to a 1.5 to 4.5 deg C rise in temperature? I mean, have we a precedent in the available records on climate to know this for certain?

    I have read for instance that by 2040 the Amazon will be drying out and has the potential by 2100 to have released some 35 Gigatonnes of additional CO2 into the atmosphere pushing up ppm levels to around 1000 ppm.

    Comment by pete best — 25 Mar 2006 @ 12:41 pm

  12. Why is CO2 more important than, say, deforestation or forest fires (as a "plant removal", not a CO2 producer) as far as climate changes are concerned?

    [Response: Errm, well, it's all a matter of putting greenhouse gases into the atmosphere. Most of the CO2 comes from fossil fuel burning; about 1/6 from deforestation I think - William]

    Comment by cp — 25 Mar 2006 @ 1:11 pm

  13. Following my own #6, I reread Gavin's original article a few more times and find it very helpful in understanding what hasn't been included in models (methane release for example). But that makes me suspect the sociology article can't be about scientists directly involved in doing climate modeling -- the modelers have to be explicit about the exact meaning of each concept taken into account, to be able to do their math.

    If the sociologist is describing people outside the field -- reading and interpreting the scientific publications -- then it's talking about politics (interpretations of reality, making alliances with people who disagree) -- which makes more sense.

    Were modelers specifically included or excluded, in that article? I suppose there are few enough modelers in the world that they will have talked about whether it's describing them (and whether they were interviewed, for that matter).

    Just speculating here, obviously.

    Comment by Hank Roberts — 25 Mar 2006 @ 1:23 pm

  14. I agree with Georg's (# 10) discomfort with Roger's posting # 7.

    Understanding the social and cognitive components of science is certainly important, but the abstract reads as if the possibility that we are discussing estimates of an objective quantity with an actual quantitative value is a matter of complete irrelevance.

    Some people don't believe in objective reality, but it's hard to see why one should refer to them in rational conversation.

    Comment by Michael Tobis — 25 Mar 2006 @ 2:18 pm

  15. "Social Studies of Science" was started in 1970 and claims to be the leading international journal devoted to studies of the relation between society and science. But sociologists often write as unclearly as the abstract indicates. I recommend ignoring this journal, these authors and this paper as irrelevant to climatology. The abstract suggests the authors are interested in a different topic and just happened to use climatogists as subjects.

    Comment by David B. Benson — 25 Mar 2006 @ 2:54 pm

  16. I don't know, guys -- I thought #7 wasn't so bad, although I'm not fond of the abstract's wordiness. RC is about conveying the science of global warming to the public. There are also many other outlets, including the mainstream media, trying to convey aspects of global warming to the public. Reading about how some researchers evaluate the communications among different groups may be insightful. I admit, though, that I haven't rushed to read the paper because I don't think the abstract says anything very surprising, it just says it in a dense way. In population genetics we have our own sort of benchmark comparisons, too, like effective population size and comparisons to one migrant per generation. Those concepts actually get abused (in the opinion of some people anyway) by people who want to oversimplify to make points; but without the abuse of the concepts these groups of people might not interact at all. I'm not sure which situation would be better.

    Comment by Steve Latham — 25 Mar 2006 @ 5:53 pm

  17. Re:#9
    Unfortunately, since e.g. even the sign of cloud effects is not known, this is one of the major aspects of the climate system that still needs to be sorted out.

    Comment by Armand MacMurray — 25 Mar 2006 @ 5:54 pm

  18. re 16. I think we'd be further ahead if more people read this (below) instead of that (7).

    Fiddling As The Earth Burns
    By Captain Paul Watson
    25 March, 2006
    http://www.countercurrents.org/cc-watson250306.htm

    For example, ... "In fact anytime anyone actually does something physically, they can expect to be condemned. The conservation movement seems to only value action on paper."

    Comment by pat neuman — 25 Mar 2006 @ 6:34 pm

  19. Question: I had the impression that recent results implied we were hitting a tipping point faster than we thought. I take it climate sensitivity is not the sole determinant of when a particular level of CO2 equivalent takes us past a tipping point.

    Or to put it another way: assume sensitivity is actually 2.9ºC. How good is this news in terms of how much we have to cut, and how soon?

    [Response: I'm not sure which "recent results" you are referring to. Certainly there has been lots of discussion in the media and in the scientific literature about the possibility of climate "tipping points". However, it is all pretty hypothetical, and there is no known particular level of CO2 that puts us past a tipping point, if such a thing really exists. Indeed, one of the findings in the recent paper by Overpeck et al. (this week's Science) is that even as the Greenland ice sheet melts faster than originally expected, it still won't provide sufficient meltwater forcing of the North Atlantic circulation (which is the feature of the climate system most commonly implicated in the discussion of "tipping points") to force any sort of threshold change. We'll no doubt do a more extensive post on this at some point. As for good news...I suppose it is good news that the most extreme climate sensitivity estimates are likely to be wrong. But the basic finding that we have been right all along is not particularly good grounds for, say, reneging on Kyoto. --eric]

    [Response: Concerning the North Atlantic circulation: I'm not sure whether Peck's article supports the conclusion that Eric draws. Nobody knows for sure how fast Greenland would melt, and Overpeck does not provide the answer to that either. The simple rule of thumb is: melting Greenland during 1,000 years will provide an average freshwater inflow of 0.1 Sv (1 Sverdrup is one million cubic meters per second). Then, nobody knows how much freshwater it takes to shut down the Atlantic circulation. A recent model intercomparison concluded:

    The proximity of the present-day climate to the Stommel bifurcation point, beyond which North Atlantic Deep Water formation cannot be sustained, varies from less than 0.1 Sv to over 0.5 Sv.

    One needs to remember that Greenland is not the only source of freshwater, and that warming itself also reduces density. -stefan]

    Comment by Gar Lipow — 26 Mar 2006 @ 12:18 am
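    The 0.1 Sv rule of thumb in the second response above can be checked with rough arithmetic (the Greenland ice volume used here, ~2.9 million km3, is an assumed approximate figure, not taken from the post):

      # Melt ~2.9 million km3 of ice uniformly over 1,000 years and express the
      # resulting freshwater flux in Sverdrups (1 Sv = 1e6 m3/s).
      ICE_VOLUME_M3 = 2.9e6 * 1e9              # km3 -> m3 (ignoring the ice/water density difference)
      SECONDS_PER_KYR = 1000 * 3.156e7

      flux_sv = ICE_VOLUME_M3 / SECONDS_PER_KYR / 1e6
      print(f"~{flux_sv:.2f} Sv")              # ~0.09 Sv, i.e. about 0.1 Sv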

  20. I did some research and think I answered part of my own question. To some extent this could mean that we are heading for the middle scenario - that is, the response to a given amount of carbon equivalent will be pretty close to what has been the mainstream middle prediction all along. The one variable (aside from the fact that 2.9 is still the middle - you can still go above or below) is the type of forcing as far as feedbacks go. If, for example, the recent increase in the speed of glacial melting is not noise but a real trend, icecap melting tends to provide faster feedback than other forcings and thus will give worse results. So in that case we are looking at something a bit worse than the middle case, but not a lot worse. Which would mean we still (barely) have time to stave off the worst. Have I understood correctly?

    [Response: Yes, you have understood correctly. The chief uncertainty is the details of the feedbacks. Nicely put! -eric]

    Comment by Gar Lipow — 26 Mar 2006 @ 12:52 am

  21. Re response on 19
    If the scientific opinion is that we are not near the "tipping point", why is this not published?
    Exaggerated reporting will give Science a bad name.
    We might be crying "wolf" too often, and when the real emergency is upon us nobody (including the politicians) will believe us.

    Comment by Graham Jackson — 26 Mar 2006 @ 3:41 am

  22. The great disasters of human-induced climate change all appear to be exaggerated in the press and pseudo-scientific press, according to realclimate.org, and I have no reason not to believe them.

    Take the BBC program Horizon, which claims (well, I think it does anyway) to promote a scientific view of certain issues. One such episode was on global dimming and one on the thermohaline system in the North Atlantic. The first program proposed that as humankind has mopped up its pollution, global warming may increase, as pollution seemed to be preventing a certain amount of sunlight from reaching the ground. RealClimate has commented on the episode, I believe, and has questioned some of its validity.

    The other episode of interest concerned the thermohaline system in the North Atlantic, which appears to be weakening. This weakening has been attributed to increased meltwater from Greenland, I believe, along with increased run-off from rivers in Russia. Again this has been commented on by RealClimate, and there appears to be no real evidence of thermohaline slowdown being caused by these means. One other factor here - increased evaporation at the equator, which has increased the salinity of tropical waters, along with increased precipitation at the poles - seems to be making the thermohaline system move faster, which in turn carries more heat to the poles and hence increases polar ice melting and hence possibly a greater chance of slowdown of the thermohaline system. This appears in the book The Weather Makers as fact from the Woods Hole Oceanographic Institution. http://www.whoi.edu/institutes/occi/currenttopics/abruptclimate_15misconceptions.html

    The only thing that appears to be really true about climate change and climate scientists is that increased CO2 will warm the world, but what the implications of this warming are, scientists leave to other people.

    Comment by pete best — 26 Mar 2006 @ 6:44 am

  23. Re #11,

    Pete, where did you hear that "By 2040 the Amazon will be drying out and has the potential by 2100 to have released some 35 Gigatonnes of additional CO2 into the atmosphere pushing up ppm levels to around 1000 ppm"?
    Sounds intriguing but I have not been able to track it down.
    Cheers

    [Response: There is a Hadley Centre / HadCM3 study on this, using a version of the GCM with vegetation model included - William]

    Comment by Ben Coombes — 26 Mar 2006 @ 7:06 am

  24. Re #22

    It is listed in the book The Weather Makers by Tim Flannery (a recent release), where he details on page 196 the possibility of the Amazon rain forest drying out and returning to the atmosphere huge quantities of CO2 released from the soil. It was also detailed some years ago here in the UK in an Equinox program on Channel 4.

    This is essentially a tipping point event or a GAIA revenge event you might say but in scientific terms it is a +ve feedback event of which there appears to be many such possibilities including the following:

    Thermohaline Slowdown - http://www.whoi.edu/institutes/occi/currenttopics/abruptclimate_15misconceptions.html#ocean_1

    Massive methane release, as detailed in the greatest mass extinction event of all time. Some 5C of warming appears to make the oceans less dense, allowing methane clathrates to be released from the ocean floor en masse.

    Comment by pete best — 26 Mar 2006 @ 7:21 am

  25. RE #23: William, you're saying the model run actually showed the potential for 1,000 ppm? With what underlying number? In any case this sounds like kind of an important result, unless it's somehow deemed very unlikely. Could we please have a link to the study?

    [Response: I don't know about the 1,000 but Alistair has provided a link to the Nature paper - William]

    Comment by Steve Bloom — 26 Mar 2006 @ 8:33 am

  26. Climate projections need to account for enhanced warming due to global warming feedbacks as discussed by James Zachos, professor of Earth sciences at the University of California, Santa Cruz at the annual meeting of the American Association for the Advancement of Science (AAAS) in St. Louis in Feb, 2006.

    Excerpts from: Ancient Climate Studies Suggest Earth On Fast Track To Global Warming, Santa Cruz CA (SPX), Feb 16, 2006

    Human activities are releasing greenhouse gases more than 30 times faster than the rate of emissions that triggered a period of extreme global warming in the Earth's past, according to an expert on ancient climates.

    Zachos ... is a leading expert on the episode of global warming known as the Paleocene-Eocene Thermal Maximum (PETM), when global temperatures shot up by 5 degrees Celsius (9 degrees Fahrenheit). This abrupt shift in the Earth's climate took place 55 million years ago at the end of the Paleocene epoch as the result of a massive release of carbon into the atmosphere in the form of two greenhouse gases: methane and carbon dioxide.

    "The rate at which the ocean is absorbing carbon will soon decrease," Zachos said.

    Higher ocean temperatures could also slowly release massive quantities of methane that now lie frozen in marine deposits. A greenhouse gas 20 times more potent than carbon dioxide, methane in the atmosphere would accelerate global warming even further.

    "Records of past climate change show that change starts slowly and then accelerates," he said. "The system crosses some kind of threshold."
    ...

    http://www.terradaily.com/reports/Ancient_Climate_Studies_Suggest_Earth_On_Fast_Track_To_Global_Warming.html

    Comment by pat neuman — 26 Mar 2006 @ 9:58 am

  27. IMHO the Annan/Hargreaves estimate is correct UNLESS the system wanders into an area where the response is very non-linear (for want of a better word). For example, something like a major release of methane from clathrates, collapse of an Antarctic ice shelf, etc.

    The method they use is perfectly valid for an interpolation, but much more risky for an extrapolation. They are extrapolating. Bridge builders who do this risk embarrassment.

    [Response: But both of your examples are outside the 'construct' that is being used (i.e. fixed boundary conditions). You are correct in thinking that this is not a prediction though, rather it is a 'best guess' of a particular system property that has some relevance for the future. -gavin]

    Comment by Eli Rabett — 26 Mar 2006 @ 10:51 am

  28. It would seem to me that unless the earth operates systemically (GAIA like) then climate change will not be as bad for life as it could have been.

    Comment by pete best — 26 Mar 2006 @ 11:06 am

  29. Re 23, 24 and others
    Below is the first paper to raise the problem of a biotic feedback. However, since Annan is "implicitly" ignoring feedbacks it seems unlikely that he is including these results in his analysis and so it is deeply flawed! There are many other feedbacks, most notably the ice-albedo effect of Arctic sea ice, which have already passed their tipping points. However, neither the scientists, whose incompetence it would illustrate, nor the politicians, whose impotence it would expose, wish to admit that. As the paper referenced by Pielke Jnr points out, "[The] emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows 'the same' concept to accommodate tacitly different local meanings."

    Hank posted a second quote that he could not understand. "The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness". What it means is that the scientists' agenda is faith in their models, whereas the politicians' agenda is to get re-elected. Their re-election is impossible if they act to limit carbon emissions. And we all, who post here, do not want to believe that the situation is as desperate as it is, and so we are happy to accept the reassurances from James, Gavin, and William.

    But the models are fatally flawed. Global warming is happening faster than the models predict. The models failed to reproduce the lapse rate, so the MSUs and radiosondes were blamed. Now, despite repeated attempts, it is still not possible to model rapid climate change, especially that at the end of the Younger Dryas. Even the start of the YD is only reproducible if unreasonable amounts of fresh water are included in the models. Moreover, it now seems unlikely that Lake Agassiz did flood the North Atlantic triggering the YD.

    I've already pointed out on RealClimate that the error in the models is in their treatment of radiation. Kirchhoff's law only applies when there is thermodynamic equilibrium, and that does not apply to the air near the surface of the Earth when the thermal radiation is constantly changing, driven by a diurnal solar cycle. Moreover the radiation from the greenhouse gases is calculated using Planck's function for blackbody radiation, but greenhouse gas molecules emit lines, not continuous radiation. These two errors nearly cancel each other out, and so it is not obvious that the models are in fact wrong. However, a schema for handling radiation, which dates from before the quantum mechanical operation of triatomic molecules was even considered, is inevitably flawed.

    Anyway here is the paper asked for:

    Cheers, Alastair.

    Letters to Nature
    Nature 408, 184-187 (9 November 2000) | doi:10.1038/35041539

    Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model
    Peter M. Cox, Richard A. Betts, Chris D. Jones, Steven A. Spall and Ian J. Totterdell

    Abstract

    The continued increase in the atmospheric concentration of carbon dioxide due to anthropogenic emissions is predicted to lead to significant changes in climate. About half of the current emissions are being absorbed by the ocean and by land ecosystems, but this absorption is sensitive to climate as well as to atmospheric carbon dioxide concentrations, creating a feedback loop. General circulation models have generally excluded the feedback between climate and the biosphere, using static vegetation distributions and CO2 concentrations from simple carbon-cycle models that do not include climate change. Here we present results from a fully coupled, three-dimensional carbon-climate model, indicating that carbon-cycle feedbacks could significantly accelerate climate change over the twenty-first century. We find that under a 'business as usual' scenario, the terrestrial biosphere acts as an overall carbon sink until about 2050, but turns into a source thereafter. By 2100, the ocean uptake rate of 5 Gt C yr-1 is balanced by the terrestrial carbon source, and atmospheric CO2 concentrations are 250 p.p.m.v. higher in our fully coupled simulation than in uncoupled carbon models, resulting in a global-mean warming of 5.5 K, as compared to 4 K without the carbon-cycle feedback.

    http://www.nature.com/nature/journal/v408/n6809/abs/408184a0.html;jsessionid=6FD9E0EEA1AB92BC821D10607313B78F

    Comment by Alastair McDonald — 26 Mar 2006 @ 11:28 am

  30. RE 29

    "I've already pointed out on RealClimate that the error in the models is in their treatment of radiation. Kirchhoff's law only applies when there is thermodynamic equilibrium, and that does not apply to the air near the surface of the Earth when the thermal radiation is constantly changing, driven by a diurnal solar cycle. Moreover the radiation from the greenhouse gases is calculated using Planck's function for blackbody radiation, but greenhouse gas molecules emit lines, not contiuous radiation. These two errors nearly cancel each other out, and so it is not obvious that the models are in fact wrong. However, a schema for handling radiation, which dates from before the quantum mechanical operation of triatomic molecules was even considered, is inevitabley flawed."

    Really do not know what you are on about here. Usually in large scale models so-called band approximations are made to simplify and speed up the radiation calculations. However these approaches can all be derived starting from a quantum mechanical view of gaseous absorption and emission. It is also standard practice to evaluate the accuracy of the band models (or correlated-k type approaches) against detailed line-by-line data and calculations. See the relevant part of a text such as "An Introduction to Atmospheric Radiation, 2nd edition, K.N. Liou, Academic Press" or any other text book on the subject.

    Comment by David Donovan — 26 Mar 2006 @ 3:32 pm

  31. Gavin, a minor but non-trivial point first. In your sixth last line, you've put the Annan and Hargreaves (A&H) estimate of the lower bound of the 95% confidence limits for climate sensitivity at 1.9ºC. On my reading, it is 1.7ºC. This figure is given three times in the paper (twice on p. 10 and again in Figure 1 at the end). Am I missing something?

    [Response: Whoops, my mistake. It is 1.7 and I've adjusted the post accordingly - gavin]

    The more important point I want to make is that A&H has only just been published, and the cut-off date for papers for AR4 has passed. It is unfortunate that this paper, whatever its scientific merit, cannot be considered by the intergovernmental panel until the succeeding assessment, which is due to be published around 2013. As you rightly say, this paper confirms what scientists 'basically thought all along' (or at least for the past thirty years). But since the last assessment report in 2001 there's been a change: things HAVEN'T remained the same.

    The change since 2001 was stressed by Sir Nicholas Stern, Head of the UK's Stern Review of the Economics of Climate Change in his keynote address at Oxford on 31 January (in the week before A&H was accepted by GRL). According to Sir Nicholas, 'Scientists have been refining their assessment of the probable degree of warming for a given level of carbon dioxide in the atmosphere', and 'ranges from 2004 estimates are substantially above those from 2001 - science is telling us that the warming effect is greater than we had previously thought.'

    The calculations of prospective warming in the OXONIA lecture and the accompanying discussion papers are based on the new climate sensitivity estimates by Murphy et al which were published in Nature, 12 August 2004, vol. 430, pps. 768-72. The 90% probability range in Murphy et al is 2.4 - 5.4ºC - i.e., 0.9ºC higher at both ends of the range than the 'canonical 1.5 - 4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report.'

    Sir Nicholas Stern stressed that things have changed, while A&H now argue that they have remained the same. Unfortunately, AR4 comes in between.

    The issue of Nature which carried the Murphy et al paper included a related article by Thomas Stocker, and a paper in the issue of Science published on the following day (Kerr, "Climate Change: Three Degrees of Consensus", vol. 305: 932-934) reported Gerald Meehl as polling the 14 models expected to be included in the IPCC assessment and finding a span of 2.6 - 4.0ºC in the 8 model results then available (Gerald Meehl and Thomas Stocker are the coordinating lead authors of the "Global Climate Projections" chapter of the AR4 scientific report, and James Murphy is a lead author of the same chapter).

    It is of interest to relate the range cited by Meehl to the profile of likelihood functions for climate sensitivity charted in Figure 1 appended to A&H. The lower and upper points of the IPCC modellers' range are, respectively, 0.9ºC above and 0.9ºC below A&H's 95% confidence range of 1.7 to 4.9ºC (as shown in the red solid line representing "combination of the three constraints"). On the face of it the range of the IPCC models is centrally within the A&H 90% range, but visual inspection of Figure 1 suggests that A&H find that there is about a 45% probability that climate sensitivity is below the lower end of the range quoted by Meehl in August 2004 (of course the IPCC draft report, which I have not seen, may include models with lower sensitivity than 2.6ºC).

    Note also that A & H qualify their findings as represented by the red solid line, as follows:

    "In fact our implied claim that climate sensitivity has as much as a 5% chance of exceeding 4.5ºC is not a position that we would care to defend with any vigour. since even if it is hard to rule it out, we are unaware of any significant evidence in favour of such a high value."

    [Response: Well, actually the final cut-off date for 'in press' was at the end of March, so A+H does count - whether this has influenced the text is unknown at this point. I don't think that Stern has any particular inside track to the IPCC draft and it shouldn't be presumed that he speaks with any authority on the subject. The point you make about the Murphy et al (or Frame et al or Forest et al or Stainforth et al) papers is exactly the opposite of the point I was trying to make. Specifically, just because one method does not constrain the high end sensitivities, that does not imply that there is no constraint. The existing (stronger) constraints (based on the LGM for instance) are not overridden by a method that is not as strong. A+H's conclusion that there is no positive evidence for extremely high sensitivities is completely correct. -gavin]

    Comment by Ian Castles — 26 Mar 2006 @ 5:17 pm

  32. #29, I totally agree that the models appear to be quite conservative in their long term projections. One particular case in point was this past winter's extremely warm periods; in fact, as I recall, Michael Mann wrote about North America's sea of red temperature anomalies in January as something which is supposed to happen "20 years" from now. I also must point out the failure of seasonal temperature forecasts, quite glaring again for this past winter. Alastair's suggestions should be taken quite seriously. I must also announce again, like a broken record, that running averages for March 2006 in the Canadian high Arctic are totally warm: +5 to 10 degrees C warmer, again more like a polar model projection 20 years from now due to polar amplification, as in a previous post on RC.

    Comment by wayne davidson — 26 Mar 2006 @ 5:35 pm

  33. Attempting to oversimplify (grin)
    -- the models state specific conditions, and give us trends and a sensitivity estimate that holds so long as only those specific conditions have any effect.
    -- The paleoclimate history gives us trends; as they come up with higher resolution info (ice cores, mud cores, etc) they give us sudden dramatic changes as unexplained fact.
    -- The theorists and field researchers give us every now and then a new big fact (such as the existence, stability, and likely evidence of past events involving sudden releases of methane, plankton feedbacks, solar flare episodes).

    After a while, the new big fact is well enough documented to fold into the next generation of models, but it shows up there as "curves A, B or C" -- the likely trends depending on whether or not such a big event happens to happen (a large volcano, for example).

    Big sudden events that occur as tipping points during slow trends aren't in the models yet, although we can expect they must be in the details somewhere.

    Someone knowledgeable beat up on that, will you? I'm trying to express the feeling as an ordinary reader that neither the modelers nor the 'skeptics' incorporate something everyone seems to actually believe -- that big surprises do happen. The modelers rule them out to be able to create a model that can actually be run; the skeptics either believe the dice always roll in their favor or that nothing can go wrong ... or something. And those who do expect the unexpected get called "alarmists."

    Comment by Hank Roberts — 26 Mar 2006 @ 5:41 pm

  34. Re 33 (Hank again)

    You are quite right. The 'unexpected' happened twice during the Younger Dryas, once at the start and once at the end. There is no evidence of a sudden change in CO2 then, and it has just been proved from ice core sampling that there was not a sudden methane release at the end to cause that rapid warming. What is evident from the dust during the cool phase and lack of dust during the warm phase is that the water vapour content of the air suddenly changed. H2O is the greenhouse gas which causes rapid climate change, not carbon dioxide or methane.

    If carbon dioxide melts the Arctic sea-ice the change in water vapour will be catastrophic, because it produces a positive feedback.

    James Annan's sensitivity is nonsense if it "implicitly" ignores other feedbacks.

    Cheers, Alastair.

    [Response: It is not 'James Annan's sensitivity' - it is the universally accepted sensitivity, and it does include Arctic sea ice and water vapour feedbacks. It does not include ice sheet, dynamic ocean, or vegetation feedbacks. - gavin]

    [Response: I suspect another common confusion here: the abrupt glacial climate events (you mention the Younger Dryas, but there's also the Dansgaard-Oeschger events and Heinrich events) are probably not big changes in global mean temperature, and therefore do not need to be forced by any global mean forcing like CO2, nor tell us anything about the climate sensitivity to such a global forcing. As far as we understand them, they are due to a redistribution of heat within the earth system, very likely due to ocean circulation changes. They change the heat transport between hemispheres and cause a kind of "see-saw": the south cools as the north heats up and vice versa, with little effect on the global mean. You can't compare that to global warming. -stefan]

    Comment by Alastair McDonald — 26 Mar 2006 @ 6:20 pm

  35. Alastair, Gavin's making more sense to me here, I think because he's giving more explicit detail on the models I'm asking about.

    I'm familiar as a lay reader with humidity-plus-methane arguments for the PETM, for example here (not at all sure these are more than opinions, since this was published as a letter not an article):

    http://www.nature.com/nature/journal/v432/n7016/abs/nature03115.html;jsessionid=D73B299BB40A74A2761F5C95CBF13113

    If you've published (or can refer to a publication) that tries to pull together models and fieldwork that gives support for this I'd welcome a pointer because I'm trying to understand how people get to agreement about these disparate arguments.

    Gavin's explanation of sensitivity above is the first clear explanation I've ever seen, making the point about what is -- and is not -- included in the many attempts to come up with a sensitivity estimate.

    I'm curious how and when (in a 'history or sociology' way) the unexpected events are taken into consideration in public discussion by the actual working scientists -- specifically.

    I realize the modelers are experts within the limits they have set out and are wise not to talk outside of them -- and the field and theory people likewise are experts within the areas of their own work.

    Maybe the current RC discussions are as good as it gets, so far.

    Comment by Hank Roberts — 26 Mar 2006 @ 7:29 pm

  36. I think what Alastair is alluding to is the fact that, say by 2050 when the Arctic Ocean will conceivably be ice-free in the summer, the atmosphere will have a much higher relative humidity than it has currently because of the open air-water interface, so this will have a magnifying effect beyond just the feedback from increased CO2.

    What I am interested to know is whether the models have been run with this scenario and what effect this has on Northern Hemisphere climate, since in mid-summer the Arctic can receive nearly as much insolation as the tropics.

    Comment by Anonymous Coward — 27 Mar 2006 @ 10:26 am

  37. Anony, Gavin answered your question before you asked it, above:
    "[Response: ... sensitivity ... does include ... water vapour feedbacks.]

    Comment by Hank Roberts — 27 Mar 2006 @ 12:21 pm

  38. Re 37

    It is not really true that Gavin has answered that question. He said that sensitivity includes water vapour and arctic sea ice, but I suspect that the changes in sea ice in the models are much less than we are seeing in practice. IIRC the sea ice extent last summer was similar to that modeled for 2040. Dr Coward's question was "Has an ice free arctic ocean been modelled?" That is a question I would also like answered, especially as when I last heard, it had not been.

    Gavin has said that the greenhouse effect of CO2 increases with the logarithm of the concentration. That is also the formula used for water vapour. However, they are both wrong. They are derived using the current radiation theory which is incorrect. The radiation is not absorbed throughout the full height of the atmosphere, passing through it rather like an electric current through a conductor.

    It is absorbed mainly in the bottom 30 metres. If the concentration doubles then the same amount of radiation is absorbed in the bottom 15 m. In other words the forcing is linear not logarithmic. This has a profound effect on the way water vapour behaves, because it increases sub-exponentially with temperature. If the forcing was logarithmic, then that would cancel out the exponential effect. But since it is linear, if the temperature rises then the water vapour will run away, because the higher temperature leads to more water vapour which causes more greenhouse warming which leads to higher temperatures. Eventually the clouds cut off the source of heat and the system stabilises. This can be seen on a small scale in the tropics. On a large scale it can cause El Ninos, and on an even larger scale in abrupt climate change.

    In the paper you cited, it showed how the water vapour was giving the extra boost to the temperature, but they could not explain it because their models were using the logarithmic relationship rather than the linear one. In other words that paper is further evidence that my ideas are correct. However I expect I will be told yet again that since I make such extravagant claims, it is no wonder no-one believes me :-(

    Cheers, Alastair.

    [Response: Many of the IPCC AR4 runs produce ice-free (in the summer) conditions by 2100 (depending on the scenarios), so there is nothing intrinsic to the GCMs that would prevent this. You are fundamentally wrong in your description of the radiation, as has been pointed out frequently here and elsewhere. -gavin]

    Comment by Alastair McDonald — 27 Mar 2006 @ 3:41 pm

  39. Re 34 Stefan, in your response you suggest that I am confused, but I am the one who has an answer to rapid climate change. Compare that with the conventional view, which is that the bursting of the dam holding back Lake Agassiz stopped the THC and so triggered the Younger Dryas. But as you point out there were other rapid climate events occurring in Dansgaard-Oeschger cycles. Were all of these triggered by Lake Agassiz dam bursts? Moreover, were the warming events at the end of those cycles caused by the Lake refilling? No, I don't think it is me who is confused. If you will pardon a pun, I don't think the current model for abrupt climate change holds water! Why, even its inventor, Wally Broecker, now says "I apologize for my previous sins." http://www.sciencemag.org/cgi/content/summary/297/5590/2202

    The answer lies in a short paper by Gildor and Tziperman (see below; there is also an informal description of it here: http://www.poptel.org.uk/nuj/mike/acc/#ACC_Tziperman). They too are slightly confused, trying to blame the 100 kyr cycles on sea ice. But their mechanism works beautifully for rapid climate change in the North Atlantic. The abrupt changes seen in the Greenland ice cores are due to sea-ice changes, and the slower changes are the growth or retreat of continental ice sheets. What G&T are missing is the linear effect of water vapour accelerating the ice albedo effect of changes in the size of the sea ice sheets. They also seem to be unaware that the Arctic sea ice still exists and that its abrupt collapse will trigger the next rapid warming!

    Cheers, Alastair.

    Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
    ISSN: 1364-503X (Paper) 1471-2962 (Online)
    Issue: Volume 361, Number 1810 / September 15, 2003

    Pages: 1935 - 1944
    DOI: 10.1098/rsta.2003.1244
    Sea-ice switches and abrupt climate change

    Hezi Gildor and Eli Tziperman

    Abstract:

    We propose that past abrupt climate changes were probably a result of rapid and extensive variations in sea-ice cover. We explain why this seems a perhaps more likely explanation than a purely thermohaline circulation mechanism. We emphasize that because of the significant influence of sea ice on the climate system, it seems that high priority should be given to developing ways for reconstructing high-resolution (in space and time) sea-ice extent for past climate-change events. If proxy data can confirm that sea ice was indeed the major player in past abrupt climate-change events, it seems less likely that such dramatic abrupt changes will occur due to global warming, when extensive sea-ice cover will not be present.

    [Response: Alastair, I did not mean to say you were confused, but often people mix up global mean changes with the abrupt regional changes seen e.g. in the Greenland ice cores. You do seem not quite up to date with current thinking on abrupt climate changes (now I'm referring to your polemic question "Were all of these triggered by Lake Agassiz dam bursts?" etc.), and you're quoting Broecker out of context here, which misleads your readers. Perhaps have a look at my review article in Nature?
    Concerning sea ice: it does give a strong positive feedback which is included in all climate models. It plays an important amplifying role in the our own theory of the Dansgaard-Oeschger and Heinrich events. But remember it is a feedback, sea ice does not just start changing by itself, so sea-ice alone can never be a full explanation of these abrupt climate events. -stefan]

    Comment by Alastair McDonald — 27 Mar 2006 @ 4:56 pm

  40. If the high climate sensitivity effect of the ice ages is a result of the hysteresis effect as proposed by Oerlemans and Van den Dool (1978), then the present observed sensitivity of 1K/2xCO2 cannot be much higher.

    J. Oerlemans and H.M. Van Den Dool. 1978: Energy Balance Climate Models: Stability Experiments with a Refined Albedo and Updated Coefficients for Infrared Emission. Journal of the Atmospheric Sciences: Vol. 35, No. 3, pp. 371-381.

    The analogy with the PETM is also not correct, because ocean bottom-water temperature was about 20 degrees higher than present, and Atlantic ocean circulation was going across Panama into the Pacific.

    [Response: 'Observed'? Hardly. An almost 30 year old paper though - that's pretty good - I'll look it up next time I'm near the library... - gavin]

    Comment by Hans Erren — 27 Mar 2006 @ 4:59 pm

  41. No need to go to the library; it's available online here now!

    Comment by Hans Erren — 27 Mar 2006 @ 6:34 pm

  42. WRT Gavin's response to 27, that is exactly my point: the confidence one can have in the Annan and Hargreaves limits decreases as you get outside the limits set by recent climate parameters, not because their work is wrong, but because there may be other significant factors which become important. I think you have to state and understand this explicitly.

    WRT Alastair McDonald's tosh, he would have to show me that the distribution of population among quantum states was not thermal in order to justify what he says. This is not the case at any location below the stratopause.

    Comment by Eli Rabett — 27 Mar 2006 @ 6:48 pm

  43. I'd welcome comments from Eli or Gavin on the content at the website Alastair points to:
    http://www.poptel.org.uk/nuj/mike/acc/#ACC_Tziperman

    Whether or not the content there supports Alastair's solution, the author does try to explain abrupt events to the general reader, and is writing at a level many people can follow. I wish Holderness were participating here and helping simplify discussion (assuming the facts are right!)

    Title is:
    Abrupt Climate Change: evidence, mechanisms and implications
    A report for the Royal Society and the Association of British Science Writers
    by Mike Holderness, March 2003

    Comment by Hank Roberts — 27 Mar 2006 @ 8:05 pm

  44. Gavin, Thanks for your comment on my 31. I'm glad to learn that I was wrong to conclude that A&H was too late for AR4, and that this paper "does count" (though it's unknown whether it has influenced the text of the Report). I'm also glad to know that you believe the A&H conclusion that there is no positive evidence for extremely high sensitivities is "completely correct": I certainly hadn't intended to suggest otherwise.

    I don't understand your comment that the point I made about the Murphy et al paper is exactly the opposite of yours. My only point on the paper was that the estimates of climate sensitivity therein had been relied upon in the Stern discussion papers. That's a simple statement of fact.

    Stern's temperature projections were presented as having been "taken straight from a combination of the IPCC and the Hadley Centre." I didn't presume that Sir Nicholas spoke with any other authority, and I certainly didn't endorse his alarmist conclusion, presented as a certainty, that under "business-as-usual ... we can see that we are headed for some pretty unpleasant increases of temperature [of 4 or 5ºC]."

    Curiously, the Stern documents define "business-as-usual" by reference to the IPCC A2 scenario, and calculate the required reductions in emissions against this benchmark. This is wrong for two reasons. First, the IPCC SRES (2000) states explicitly that "There is no business-as-usual scenario" (p. 27); and second, the population assumptions underlying A2 are totally unrealistic: the scenario assumes an end-century global population of 15.1 billion.

    In 2001 the International Institute for Applied Systems Analysis (IIASA), which produced the A2 population projection, put the 95% confidence limits for the world's end-century population at 4.3-14.4 billion. These estimates are published on IIASA's website. Thus the authors of the population projection themselves consider the A2 population numbers to be highly unlikely. It is surprising that the IPCC considered that this scenario was suitable for use in AR4, but didn't see the need to commission an equally (or more) likely low scenario with an end-century global population of 4.3 billion (40% lower than the lowest SRES scenario).

    [Response: Why do you find this strange? Do you think climate modellers have an infinite amount of time to spend on doing scenarios? An improbable event with very bad consequences (the high population scenario) is much more relevant for policy decisions than an improbable event in which impacts are less severe. The high end tells policy makers the risk they are taking on. The low end only says that if we're very lucky part of the problem could go away by itself, and we'll all be breathing a sigh of relief and dancing in the streets. Put it this way: If you're designing a nuclear reactor containment vessel, do you design it for the 5% chance that the pressure in a meltdown is 100 atmospheres or the 5% chance that the pressure in a meltdown is 10 atmospheres? As far as policy goes, there's very little usable information content in the low end. --raypierre]

    Comment by Ian Castles — 28 Mar 2006 @ 1:01 am

  45. Re 42 Eli wrote

    "WRT Alister McDonald's tosh, he would have to show me that the distribution of population among quantum states was not thermal in order to justify what he says. This is not the case at any location below the stratopause. "

    Can I just first say that we are discussing Earth Science here. Earth Science is fractal, by which I mean that for every rule there is an exception, and for every exception there is yet another exception. This recursive feature makes it very difficult to give an exhaustive explanation, and a simple one tends to be full of holes. Here is a simple explanation of how radiation in the atmosphere really works. I have tried to make it understandable at the expense of completeness.

    First, it is only atoms in solids and liquids which have the electronic quantum states that are distributed thermally. Greenhouse gas quantum states are molecular vibrations whose distribution depends mainly on the frequency of molecular collisions, which is controlled by the gas pressure, not its temperature.

    Above the stratopause, the air is too thin for the collisions with greenhouse molecules to have a dominant effect, and that region is not considered to be in local thermodynamic equilibrium (LTE), i.e. it is non-LTE. (I think this is what you were referring to, Eli, with your mention of the stratopause.)

    Below the stratopause, i.e. below the region of non-LTE, the pressure in any two adjacent layers of the atmosphere will be almost equal. So the radiation emitted by one layer towards the second will equal the radiation emitted by the second towards the first. In this case there is a radiative equilibrium known as LTE. (Originally this state was named 'radiative equilibrium' by Karl Schwarzschild when he first proposed it. At that time it was thought that gases radiated as blackbodies and, I presume, that was why it was renamed 'local thermodynamic equilibrium'.)

    The surface of the Earth radiates as a blackbody at its temperature, which is continually changing because it is being heated by the sun or cooling during the night. But the radiation from the air above the surface is fixed by the pressure of the air, which remains fairly constant at around one atmosphere. Thus the radiation entering the atmosphere from the surface of the Earth is not equal to that radiated by the air to the surface. In other words, there is a layer of the atmosphere near the surface which is also non-LTE, but in a different way from that above the stratopause.

    QED Eli.

    Just to expand on that a little - the distribution of population among the quantum states of the greenhouse gas molecules is actually set in two ways: by collisions with air molecules (i.e. both greenhouse gas molecules and non-greenhouse gas molecules) and by excitation as a result of absorption in that greenhouse gas's bands. This radiation can come from the surface of the Earth, or from the Sun (which can be ignored here), or from other greenhouse gas molecules of the same species. In the mid atmosphere, the excitation from collisions and from radiation from other greenhouse gas molecules is matched by the emission from the greenhouse gas molecules and their de-excitation by collisions - that is LTE.

    The collision of a greenhouse gas molecule with a non-greenhouse gas molecule can result in the greenhouse gas molecule becoming excited and the non-greenhouse gas molecule losing kinetic energy. Collision of an excited greenhouse gas molecule will normally result in the deactivation of the greenhouse gas molecule and an increase in the kinetic energy of the non-greenhouse gas molecule. This change in kinetic energy is called thermalisation, and it changes the temperature of the air. Thus when the Earth is radiating with a greater intensity than the back radiation from the air, the excess radiation will be absorbed by the air molecules and the air will warm. At night, when the surface is cooler than the back-radiation brightness temperature, the air will cool until it reaches the surface temperature. But note that the air does not cool at a rate based on its temperature. It cools at a rate based on the difference in temperature between the surface and the gas brightness temperature and on the greenhouse gas density. In other words, the heat flows from the air to the surface at a rate determined by the number of molecules of greenhouse gas and their emission rate. Increasing the concentration of a greenhouse gas will increase the back radiation linearly, not logarithmically.

    This is easy to calculate for CO2 but not for H2O, which may well be condensing (forming dew or frost!) or evaporating, so that its concentration is also changing.

    So, I'll stop here before it gets too complicated.

    Cheers, Alastair.

    [Response: Your idea of what LTE means is fundamentally flawed - it has nothing to do with whether a volume of air is radiatively cooling or not - see http://amsglossary.allenpress.com/glossary/search?id=local-thermodynamic-equilibrium1 . For further discussion of your ideas, please take it to sci.env - gavin]

    Comment by Alastair McDonald — 28 Mar 2006 @ 12:51 pm

  46. > end-century population at 4.3 ... billion [low end estimate]

    Today's population is about 6.5 billion -- and half the people in most of the world are under age 18!

    Is that low number a feedback result of possible climate change?

    Comment by Hank Roberts — 28 Mar 2006 @ 1:30 pm

  47. No Hank, it's not, and I can assure you that IIASA's population experts are fully aware of the age distribution of the world's population. This is from evidence given by Professor Nebojsa Nakicenovic, Coordinating Lead Author of the IPCC, to the UK House of Lords Committee that inquired into The Economics of Climate Change on 8 March 2005:

    'The scenarios reported in [the SRES] were done with knowledge about the future population roughly state of the art of the year 1996... In the early 1990s it was felt that the most likely or medium population projections for the world [in 2100] was about 12 billion people, the top range perhaps 18 or so. By the time we were writing this (SRES) report the medium was 10 billion people. The highest range about 15 billion, the lowest about six billion people... The two main organisations that do population projections for the world are IIASA [and] ... the United Nations. These scenarios are done very, very seriously and also reviewed with many, many groups and, as it happens, the two projections do tend to coincide. Ever since the (SRES) report was published, the population projections have been revised [downward]... [T]he medium population is no longer 10 billion but about eight billion people; the higher is now in the range of about 12 and the lower is in the range of four instead of six.'

    Thus the projection characterised by the Stern Review as 'business-as-usual' (15 billion) is 25% higher than the high estimate given by Professor Nakicenovic - and Nakicenovic's low estimate is 43% below the lowest projection used in the IPCC scenarios. This doesn't mean that the 4 billion is assessed by the experts as having a high probability - but it is more probable than the IPCC projection that Stern has adopted as 'business-as-usual'.

    [Response: The reduction in mid-range population estimates is great news, because even at the mid level projections, the prospects for keeping CO2 from doubling looked pretty bleak. Now the problem begins to look more tractable, meaning there's all the more reason for governments to buckle down and take serious action. I hope you're carrying this good news to the Australian government. --raypierre]

    Comment by Ian Castles — 28 Mar 2006 @ 3:03 pm

  48. Thanks Ray. On your comment on 44: if you agree that a 15 billion end-century population is highly unlikely, don't you think that the IPCC might have said so? Then the British Government's chief adviser on the economics of climate change may not have made the mistake of asserting that urgent action was necessary to avoid what he portrayed as a 'business-as-usual' outcome. What if the assessed probability of a 15 billion population was 1 in 1,000? Or 1 in 1,000,000?

    [Response: And I'm glad you agree that the rosy low-end scenario is unlikely. --raypierre]

    Re your comment on 45, I don't know why you think the prospects for keeping CO2 from doubling are bleak. On the limited information available to me, they seem quite promising - but it certainly would have been helpful in making judgments on this point if the IPCC had modelled a low-medium population projection (as in the A1 and B1 scenarios) which made more moderate assumptions about growth in output and energy use. The Australian Government proposed that the IPCC consider such a scenario in its submissions to the scoping meetings in April and September 2003, but the IPCC decided in its wisdom that the SRES scenarios as they stood were suitable for use in AR4. I hadn't realised that an element in the IPCC's failure to take up the suggestion may have been that climate scientists had too many other demands on their time.

    [Response: I wasn't part of the group that made the decisions on what scenarios to use, but it is indeed true that running simulations of scenarios is a great drag on the climate science community, so there is every reason to try to focus on the ones that will be most informative. To be sure, the SRES scenarios might not be optimal, but tweaking the way scenarios are constructed is really pretty inconsequential compared to the physical uncertainties climate scientists are trying to grapple with. The whole issue of the way scenarios are chosen is vastly overblown, as Gavin nicely explained in his post here on Lawson's anti-IPCC diatribe. ]

    Comment by Ian Castles — 28 Mar 2006 @ 5:53 pm

  49. Hmmm...Ian

    So that's a plausible 4.3 billion low-end population 'projection', is it?

    And a plausible 14.4 billion high-end 'projection'?

    Would you acknowledge that either is probable, or do you think that the truth may lie somewhere in between?
    Considering that the calculations that resulted in these two figures have been "done very, very seriously and also reviewed with many, many groups", there appears to be a very similar spread between the poles (with significant room for many possible outcomes) as I've seen in some other peer-reviewed work I've glanced through. Only that work was using another metric that relied on actual physical principles, and not just on an economist's trust that future populations may develop in such a globally PC manner (which I believe is a concept you have previously derided) that those in the developing countries no longer need to produce progeny in large numbers simply as a means of providing some sort of security for themselves...now what was that publication called again?

    [Response: It would be a good idea to distinguish between demographics and economics. Demographics (population modelling) is a somewhat more tractable problem than economics. Nonetheless, as you note, it's an imprecise science. I don't hold this against demographers. There is simply no very reliable way to project fertility, and in particular the rapidity of the "demographic transition" whereby people tend to opt for lower fertility as the societies they participate in get richer. The unexpected speed of the demographic transition is one of the reasons for the downward shift in population projections -- though against that one must also keep in mind that the US population is growing unusually rapidly for a developed country. Given the high per capita carbon emissions in the US, this is not a good thing. There are many possible futures, and IPCC is not in the business of predicting "the" future, particularly given that the future will depend in large measure on the actions taken in response to the information provided by IPCC. IPCC only provides a conditional picture of what various futures might look like. It's the ironic fate of prophets that if they're listened to, and disaster is averted as a result, they rarely get a pat on the back. Instead, the reaction tends to be, "See, it was all alarmism; the disaster never happened." --raypierre]

    Comment by Hugh — 28 Mar 2006 @ 6:20 pm

  50. Would I acknowledge that either is probable, or do I think that the truth may lie somewhere in between? Of course I think that the truth will probably lie somewhere in between: that's the whole point of defining a 90% or 95% probability range. I'm not a demographer, and on the probabilities of different future global populations I'm ready to accept the findings of the two leading producers of demographic projections.

    I wasn't part of the group that decided on the scenarios either, and I certainly wouldn't have been in favour of producing as many as 35 if I had been. As many economists have pointed out, the scenarios should have been designed so as to provide a more transparent connection between driving forces and emissions outcomes.

    I find it ironic that on a thread which is discussing a new paper on the modelling of one of the main sources of uncertainty in climate projections (climate sensitivity), I am being criticised for trying to bring some rigour into the other main source of uncertainty: the future profile of emissions. In the end the improvement of climate projections depends upon reducing both sources of uncertainty and arriving at joint probability profiles.

    In my opinion, some previous studies by scientists to estimate joint probabilities have misinterpreted the SRES. For example, Wigley and Raper (Science, 2001, 293: 452) charted the 'frequency of occurrence of different 1990 to 2100 radiative forcings under the SRES emissions scenarios.' But the conclusions on probabilities derived from this exercise are negated by the following explanation from 15 SRES authors:

    'The fact that 17 out of the 40 SRES scenarios explore alternative technological development pathways under a high growth ... scenario family A1 does not constitute a statement that such scenarios should be considered as more likely than others ..., nor that a similar large number of technological bifurcation scenarios would not be possible in any of the other three scenario families' (Nakicenovic et al., 2003, "IPCC SRES Revisited: A Response", Energy & Environment, 14 (2-3): 195).

    Thus Wigley and Raper erred, I believe, in weighting the scenarios equally. This effectively gave a much greater weight to the high emission (A1) scenarios compared with scenarios in the other families that the SRES authors elected not to explore in the same detail, and led to a significant upward bias in the probability distribution. It's also relevant that the SRES authors specifically stated that the high growth scenarios were 'Highly unlikely' (ibid., p. 196).

    [Response: I agree absolutely that a lot could be done to improve the process of generating scenarios. It's far from optimal. Your efforts at doing this are certainly most welcome. My beef is with those (like Lawson, and even some of the Economist editorial staff) who make out of the imperfection of the scenario generation a full-blown indictment of the whole IPCC enterprise. I am not a fan of Wigley and Raper's equal weighting of the scenarios. For that matter, I don't think any weighting of the scenarios is appropriate, because there is really no reliable basis to assign probabilities (yes I know, I'm about to hear from the Bayesians again...). I think that in a case like this, aggregating the different forecasts is inappropriate, and throws away too much information. The information is in the full spread of the forecasts, and as Judge Posner notes, too much attention has been paid to the mid-range and not enough to the extremes of what is possible. --raypierre]
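
    As a rough illustration of the weighting issue discussed in the comment and response above, here is a small Python sketch. The cumulative-emissions numbers in it are made up purely for illustration (they are not the actual SRES totals); the point is only how the choice of weighting shifts the implied distribution when one family contains many more variants than the others.

      import numpy as np

      # Hypothetical cumulative 1990-2100 CO2 emissions (GtC) for a toy scenario set.
      # These are illustrative placeholders, NOT the real SRES values.
      families = {
          "A1": [2100, 1900, 1700, 1500, 1450, 1400, 1350, 1300],  # many A1 variants
          "A2": [1850],
          "B1": [980],
          "B2": [1160],
      }

      # Weighting 1: every scenario counts equally (the approach criticised above);
      # the family with the most variants dominates the distribution.
      equal_per_scenario = np.mean([e for runs in families.values() for e in runs])

      # Weighting 2: every family counts equally; variants share their family's weight.
      equal_per_family = np.mean([np.mean(runs) for runs in families.values()])

      print(f"Mean cumulative emissions, equal weight per scenario: {equal_per_scenario:.0f} GtC")
      print(f"Mean cumulative emissions, equal weight per family:   {equal_per_family:.0f} GtC")

    With these made-up numbers the per-scenario weighting comes out noticeably higher than the per-family weighting, which is the upward bias being described; neither weighting, of course, says anything about which future is actually more likely.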

    Comment by Ian Castles — 28 Mar 2006 @ 10:18 pm

  51. RE 50 and raypierre's reply:

    It's late, and let's see if I can be coherent here:

    Scenarios are used as management tools or outcomes in adaptive management schemes.

    Combined with chosen indicators, one chooses among the scenario trajectories to gauge likely outcomes given the values from a range of indicators. Or, alternatively, one can use a trajectory to manage indicators.

    The scenarios, in and of themselves, sitting there, are valueless and lack probability assignments until one takes the values of certain indicators and then analyzes them as to which scenario trajectory they fit into. One then follows the management strategy to manipulate the conditions that create indicator values in order to align with an agreed-upon scenario (such as the track to 3ºC warming).

    The scenario trajectories could also be what you get to (or follow, or want to achieve) when you have, say, q human population growth with r GDP growth and s carbon sequestration and t arable land in production and u urban growth and v climate variability and...you get the idea.

    Perhaps the IPCC or another body can include an adaptive management executive summary to help us understand how scenarios work (and maybe lessen some criticism of Ian's attempt to tighten up emissions profiles).

    Maybe that summary will help me write more cogent comments about scenario management too.

    Best,

    D

    Comment by Dano — 29 Mar 2006 @ 12:27 am

  52. If I may rephrase what I think Hugh was hinting at in comment #49, it is this: The SRES scenarios are imperfect, viewed as predictions, but the state of economic models is so dismal (and so unlikely to improve) that we are fooling ourselves if we think that there is some pool of economic "rigor" that can be brought to bear on the emissions forecasts that will significantly improve the situation. Given that state of affairs, one is more or less stuck with driving models with a range of emissions curves lightly constrained by the net pool of fossil fuels and some generous assumption about the amount of demand there will be for them. After all, what do you get out of the scenarios at the end? Answer (basically): a curve of CO2 emissions. How many different ways are there to draw a smooth curve between now and 2100 that goes up monotonically (or maybe goes down a bit toward the end, if we're lucky and policy makers pay any attention to the science)? Once you've decided how much it goes up, and integrated the net emissions to make sure you haven't burned more coal than exists, the details of where you put the wiggles don't make much difference to the climate forecast. I'd argue that there's a big enough library of curves in the SRES scenarios already that one gets a perfectly adequate feel for the range of possible future climates. To think that economics is really going to tell us which curve is the right one is a fata morgana. There may be some room for improvement in the population projections, though -- not because the state of the art has improved, but because there is new data on how rapidly the demographic transition can take place.

    Comment by raypierre — 29 Mar 2006 @ 12:50 am

  53. #33-34 I rather think that modellers must be judged by a very high standard, beyond human peers. The peer of GCMs and long-range climate models is the future; it is the only way to judge them efficient. I only know of NASA GISS as being quite accurate; the rest are not impressive. I find it curious that adaptation by trial in order to eliminate errors has not improved most models; it is a mystery to me that successful projections are not commonly achieved. This is not a trivial matter, either sensitivity is greater than expected, or there are fundamental mis-applications generating unfortunate failures.

    Comment by wayne davidson — 29 Mar 2006 @ 1:17 am

  54. Ian, re my #46 -- no offense meant by my dumb question; you've got a mix of professional scientists and avid amateur readers here (and it's the Internet so some write-only contributors are inevitable). This subject in its branches is seriously fascinating stuff; the stretch between writing for scientists and writing for the average reading level is enormous. Many thanks for doing it.

    Comment by Hank Roberts — 29 Mar 2006 @ 3:46 am

  55. I agree generally with Raypierre's comment on my 50 and also with Dano's 51. But I don't agree that it is only the population component of the projections that can be improved. To my mind, the scenarios 'industry' has not done a good job of explaining what the scenarios mean in concrete physical terms - and if this were done better, it would quickly become apparent that some scenarios are scarcely conceivable.

    For example, the A1FI scenario assumes that by 2100 the average consumption of electricity per head for the world as a whole will be five times greater than the average consumption in the RICH (OECD90) countries in 1990; that the efficiency of energy use will have greatly increased; and that 70% of the total primary energy supply will still be being met from fossil fuels. Does anyone really believe that all of these things could happen? I believe that the model-builders could be asked to give detailed illustrative specifications of what different scenarios assume: for example, under A1FI, what will be the size of the average house in India, what proportion of dwellings will have centralised air conditioning in Nigeria, and what will be the capacity of coal-fired power stations in Japan, Sri Lanka and Poland?

    That is one end of the exercise: the long-term projections. But it's also possible to start off from the present and look ahead through the period for which plans are already under way. I don't believe that it is true that medium-term economic models have failed so dismally. The projections of energy use and CO2 emissions for 2020 which were made by the Commission of the World Energy Council in 1993 (published in 'Energy for Tomorrow's World') still seem remarkably good 13 years later, with only 14 more years to go. Dr. Pachauri, the present Chair of the IPCC, was one of the fifty experts that produced that report.

    The International Energy Agency has been modelling energy production and use up until 2030 in considerable detail, for several years. One can monitor the changes, and overall the projections haven't changed much. In 2003 the IEA produced a 500-page report, linked to its quantitative forecasts of demand and supply of energy in different regions, on the financing of global energy infrastructure up to 2030. It is not difficult to realise from such studies that the prospective growth in energy use in some of the IPCC scenarios is already completely out of the question.

    However sceptical one is about the possibilities of forecasting energy demand and emissions in the longer term (and most economists are properly sceptical), it is important to try and narrow the range of uncertainty. Figure 9.15 in the TAR main scientific report (at http://www.grida.no/climate/ipcc_tar/wg1/fig9-15.htm) seems to show that the mean temperature increase from 1990 to 2100 varies between the six illustrative scenarios shown in that Figure over a range which is at least comparable in magnitude with the "likely" range identified by A&H for climate sensitivity. Measured against cumulative CO2 emissions from 1990 to 2100, the full set of 35 scenarios stretches from, respectively, 16% above to 21% below the highest and lowest of the six illustrative scenarios shown in Fig. 9.15. And this full set still does not include any scenarios with an end-century population lower than 7 billion, not to mention a number of other reasons why emissions could plausibly be lower than the lower end of the SRES range.

    Raypierre asks 'How many different ways are there to draw a smooth curve between now and 2100 that goes up monotonically (or maybe goes down a bit toward the end ...)' Answer: there are many such ways and one needs to start with some of the SRES scenarios. The lowest SRES scenario (in terms of the forcing pattern at the end of the century) is B1T MESSAGE. This scenario assumes that global CO2 emissions will increase by 42% between 2000 and 2030 - about the same increase as in the IEA's Alternative Scenario, which makes moderate, specified assumptions about what countries will do under the heading of 'climate policy' (much of which they may well have decided to do anyway, for reasons of energy security, curbing air pollution etc.)

    Yet, on my understanding, B1T MESSAGE stabilises CO2-equivalent concentrations at well below a doubling of the pre-industrial level of about 270 ppm. So why is it being constantly asserted that emissions must start falling within a decade or so if such a stabilisation is to be achieved? I do not believe that the IPCC 'no climate change policy' scenarios have been properly reconciled with the stabilisation scenarios. This is unfinished business.

    [Response: Ian, what do you think about the issue of the inertia arising from the long capital life of energy systems? Without policies rewarding carbon-free energy production going into place quite soon, the coal-fired power plants likely to be built in the next decade have the potential to lock us into increasing emissions for the next 50-60 years, given the long capital life of such power plants. That would seem to argue that even if we could tolerate some delay in the date of the drop in emissions, we can't tolerate much delay in implementing economic disincentives to coal burning and tar sand oil production. I'm interested in what the energy forecasting models have to say about this issue, but don't know where to start looking in order to find out. --raypierre]

    [Response: On the matter of how many ways there are to draw the curve, I was suggesting (for the sake of discussion) that perhaps one could just replace the whole SRES process with a two or three parameter family of curves, and dispense entirely with any pretense that these are supposed to be actual forecasts. From the standpoint of the way most of the modelling community makes use of the scenarios, this would do just about as well as SRES, or would if it weren't for the need for sulfate aerosol emissions (which may not be all that critical in the out years). One would still want to lightly constrain the curves by things like total fossil fuel availability, plausible ranges of population growth, and plausible ranges of per capita energy usage or per capita carbon emissions. By "plausible ranges" I mean reasoning to the effect that we could estimate the year 2100 Chinese per capita (or maybe per-GDP) carbon emission to be something between the present US level and the present French level. From my standpoint, I'd say that SRES has too many scenarios rather than too few. The right way for policy makers to use IPCC results to make decisions is to take sensitivity coefficients from IPCC WGI and use those in parameterized climate models run by economists themselves. For that matter, there's nothing to keep economists from doing this right now. It's what Nordhaus does. --raypierre]

    Comment by Ian Castles — 29 Mar 2006 @ 4:05 am

  56. Re #53 I entirely agree with your view "This is not a trivial matter, either sensitivity is greater than expected, or there are fundamental mis-applications generating unfortunate failures." The radiation scheme that I am proposing does have a sensitivity greater than expected and highlights a fundamental mis-application. However, Gavin does not want me to discuss it here :-( See his response to #45.

    However, perhaps he will allow me to correct his assertion that LTE "has nothing to do with whether a volume of air is radiatively cooling or not." Air in LTE obeys Kirchhoff's Law - see http://amsglossary.allenpress.com/glossary/search?id=local-thermodynamic-equilibrium1 . If that is the case, then the input radiation equals the output radiation, and the Law of Conservation of Energy dictates that the temperature of the air cannot change.

    The corollary to that is that if the air temperature near the surface is changing, then it cannot be in LTE. In other words, the boundary layer, where the air temperature follows the diurnal cycle, is not in LTE.

    But my views have been singled out to be banned from appearing here. So, Adieu. I will no longer outstay my welcome :-)

    Cheers, Alastair.

    [Response: Alastair, your views on most subjects are most welcome here. However, insisting that everyone has this particular point wrong (except you) and not listening to anyone who points this out is a bit of a waste of time. One last try: you are confusing LTE with the concept of the large-scale steady state. This is just not the case. LTE is a statement about how evenly energies (and thus temperature) are spread over an air volume; it is not a statement about whether that volume is in large-scale equilibrium with the larger system. LTE means that the radiating temperature of the volume is the same as the bulk temperature, and it is valid over >99.99% of the atmosphere. Thus endeth the lesson. - gavin]

    [Response:And if the theoretical argument isn't good enough for you, consider that radiative transfer schemes based on LTE have been verified against field observations of measured fluxes literally hundreds of times -- and millions of times if you consider that such schemes are the very basis of all remote sensing in infrared and longer wavelength channels. Alastair, please don't go, but please do stop bringing up the exact same spurious claims about radiative transfer over and over again. --raypierre]
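
    For readers following this exchange, here is a minimal numerical sketch of what LTE buys you in a radiative transfer calculation: each layer is assumed to emit according to the Planck function at its own bulk temperature, and the radiance is marched upward with the standard relation dI/dtau = B(T) - I. The layer temperatures and optical depths below are purely illustrative and are not taken from any real sounding or radiation code.

      import numpy as np

      h = 6.626e-34   # Planck constant, J s
      c = 2.998e8     # speed of light, m/s
      kB = 1.381e-23  # Boltzmann constant, J/K

      def planck(nu, T):
          """Planck spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
          return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

      # Frequency near the centre of the CO2 15-micron band (~667 cm^-1).
      nu = 667e2 * c

      # March a beam upward through a stack of isothermal layers. Under LTE each
      # layer both absorbs and emits at its own temperature:
      #   I_out = I_in * exp(-dtau) + B(T_layer) * (1 - exp(-dtau))
      layer_temps = [288.0, 270.0, 250.0, 230.0, 220.0]  # illustrative temperatures, K
      dtau = 0.8                                          # illustrative optical depth per layer

      I = planck(nu, 288.0)   # start from surface (blackbody) emission at 288 K
      for T in layer_temps:
          I = I * np.exp(-dtau) + planck(nu, T) * (1.0 - np.exp(-dtau))

      print(f"Radiance leaving the column: {I:.3e} W m^-2 sr^-1 Hz^-1")
      print(f"Blackbody radiance at 220 K: {planck(nu, 220.0):.3e}")
      print(f"Blackbody radiance at 288 K: {planck(nu, 288.0):.3e}")

    With a few optically thick layers the emerging radiance ends up close to the Planck emission of the coldest, uppermost layers rather than the warm surface, which is the sense in which the radiating temperature of an LTE layer tracks its local bulk temperature.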

    Comment by Alastair McDonald — 29 Mar 2006 @ 5:00 am

  57. Re: current 49-55

    Yes, taking a leaf from Hank's book (thank you), after a night's sleep and a day at work I return to find that my comment caused a little more angst than perhaps I intended. I'd like to take this opportunity to apologise if I appeared as a hit-and-run 'writer'.
    Although...I feel I do have a point, even if it takes raypierre to make it more eloquently than I ever could. Roger Pielke Jnr may not approve of this, but as I work in the field of the public perceptions of flood risk I can only say that I get frustrated when 'I perceive' that people are tending to make themselves comfortable on the left tail of any distribution curve they come across, before refusing to consider the slightest attempt at an ascent up the gradual slope above them.
    Perhaps I have just laid myself open to cries of *alarmist*; however, my point is that *extremes happen anyway*, and as I see it (and perhaps this wasn't the best thread to make this point) it is inappropriate to skew conversation in any forum which is considered a teaching and learning resource (which RC without doubt is) completely toward the moderate and benign.
    I am aware of uncertainty, I am also aware of extremes, and my personal position is that one should not be used to camouflage the other.
    I apologise again but also thank subsequent contributors for clarifying the SRES issue for me. I shall now respectfully doff my cap, touch my forelock and retire to the rear of the church in order to re-take my pew...on the right.

    Comment by Hugh — 29 Mar 2006 @ 3:43 pm

  58. Re 56.

    Alastair, please see the following article.
    http://en.wikipedia.org/wiki/Thermodynamic_equilibrium

    Cheers,

    Comment by David donovan — 29 Mar 2006 @ 5:09 pm

  59. Raypierre, Thanks for your responses to my 65. On the first, I think we are at cross purposes. The purpose of the IPCC emissions scenarios was (is) to project what future emissions levels under various plausible assumptions might be, ASSUMING that there are no policies explicitly to combat climate change. My point is that, IN FACT, every one of the 35 IPCC scenarios assumes that CO2 emissions WILL go on increasing at least up until 2030 - and that IN FACT the models used by the IPCC to produce the temperature projection in the TAR find that under several of those scenarios (in the B1 family) CO2-equivalent concentrations are stabilised at or below twice the pre-industrial level.

    I agree with your inertia point, but I'm not addressing the policy question: as a first step in framing policies explicitly to combat climate change, there is a need to assess what might happen in the absence of such policies. Many jurisdictions which have announced targets to reduce GHGs by 60% or more by 2050 (e.g., the UK, California, South Australia) assert that there is a need for such reductions in order to stabilise at 2 x CO2 or below. On my reading of the TAR, this is simply untrue. If I am wrong on this TECHNICAL question, I'm ready to be told why. The POLICY question of whether and what rewards should be provided for clean energy is to my mind an entirely separate matter.

    On your question of where to start looking on the energy forecasting/technology issues, I'd recommend the IEA World Energy Outlook 2004, especially the discussion of the IEA Alternative scenario. The World Energy Outlook 2005 would also be useful, but it is focused especially on the Middle East.

    I've a lot of sympathy for your suggestion of just having hypothetical emissions curves rather than fully-fledged scenarios. In fact, David Henderson and I made a proposal along these lines to the convenor of the discussions on the scenarios at the IPCC meeting in Amsterdam in 2003, Dr. Richard Moss. The problem is that if the scenarios are to be used to guide (say) energy policies, you have to know what energy use, and from what sources, underlay the CO2 emissions at different points on your hypothetical curve. The need for the underlying socio-economic assumptions is even more obvious if the purpose is to project future numbers at risk of hunger, malaria, etc. The whole reason for commissioning the SRES in the first place was to have projections that would be useful for purposes other than as input to climate models.

    With respect, I think that your suggestion that (e.g.) China's per capita emissions in 2100 be modelled as an average of the PRESENT US and French levels embodies a fundamental fallacy. The UK's per capita emissions now are lower than they were a century ago, but its real income is five times as great. Conversely, China's per capita income is now higher than the UK's was a century ago, but its per capita emissions are only a quarter as great. In order to model prospective emissions a century hence, it's essential to take account of technological change: I think that it's surprising that even the lowest of the IPCC scenarios project significant fossil fuel CO2 emissions at the end of the century.

    I'll reply separately to your point about Nordhaus and the economic modellers.

    [Response: Ian, the chances of a decreasing CO2 emission trend by 2030 are extremely slim if not absolutely zero (barring some absolute economic catastrophe), so I think the SRES scenarios are on pretty safe ground there. And the climate changes through 2030 are mostly going to be a catch-up exercise - so very little change can be expected climatically based on conceivable emission pathways to that date. On the technical point regarding what is necessary for eventual stabilisation, back-of-the-envelope calculations do indeed suggest that a 60-70% reduction in emissions would be necessary - the exact number will depend on carbon cycle feedbacks to the new climate regime. This point is simply related to the carbon cycle and has nothing to do with any specific scenario of course. The basic argument goes like this: current emissions are around 7 Gt/yr, of which 60% remains in the atmosphere. The sinks for this anthropogenic CO2 depend, for the most part, on the atmospheric concentration, which implies that the CO2 levels today are sufficient to 'push' about 3 Gt/yr into the ocean and biosphere. Therefore a reduction to 3 Gt/yr global emissions would be much closer to a stable state than the current emission level. This is not a controversial number. - gavin]
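
    The back-of-the-envelope argument above can be restated in a few lines of Python, using only the round numbers quoted in the response (about 7 Gt/yr of emissions, of which roughly 60% stays in the atmosphere):

      emissions = 7.0            # GtC per year, approximate current fossil emissions (as quoted)
      airborne_fraction = 0.6    # fraction remaining in the atmosphere (as quoted)

      sink_uptake = emissions * (1.0 - airborne_fraction)   # flux into ocean and biosphere
      atmospheric_growth = emissions * airborne_fraction    # flux accumulating in the air

      print(f"Sink uptake:        ~{sink_uptake:.1f} GtC/yr")
      print(f"Atmospheric growth: ~{atmospheric_growth:.1f} GtC/yr")

      # If the sinks, which scale with the elevated concentration, kept taking up
      # roughly this much, cutting emissions to about that level would bring the
      # atmospheric growth rate near zero:
      reduction_needed = 1.0 - sink_uptake / emissions
      print(f"Implied cut from today's emissions: ~{reduction_needed:.0%}")

    That gives a cut of roughly 60%, consistent with the 60-70% figure in the response; the exact number depends on how the sinks respond as climate and concentrations change.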

    Comment by Ian Castles — 29 Mar 2006 @ 6:48 pm

  60. RE 55:

    Ian, your argumentation omits the feedback loops that come with implementation.

    There is no a priori necessity, say, 20 years after a projection to stick with that projection, unless you are managing to that projection. So, for instance, your "[f]or example, the A1FI scenario assumes that by 2100 the average consumption of electricity per head for the world..." would be modified in an adaptive management program as more information became available; presumably, management strategies would attempt to reduce the amplitude of this scenario, and successful implementation would change the trajectory.

    But this gets back to Ian's observation that "[t]o my mind, the scenarios 'industry' has not done a good job of explaining what the scenarios mean in concrete physical terms".

    Regardless, I agree with raypierre that there are too many scenarios. There is no way decision-makers can decide in that sort of environment.

    Best,

    D

    Comment by Dano — 29 Mar 2006 @ 9:43 pm

  61. Thanks Gavin for your comments, but with genuine respect I don't think that what you've said meets the point I'm trying to make. I'll try putting it a different way.

    The projected level of CO2 emissions in 2100 under the IPCC B1T MESSAGE scenario, which, like all of the IPCC scenarios, excludes the impact of policies that explicitly address climate change (e.g. the Kyoto Protocol), is 2.68 GtC. As you've said that 3 Gt/yr is not a controversial number, I assume that you agree that the IPCC's emissions modelling experts believe that annual CO2 emissions may be at a sustainable level in 2100, without climate change policies.

    One can of course debate whether the assumptions underlying that projection are reasonable, and I've already said that I don't believe that the scenarios 'industry' has done a good job in this respect. But I can't see the point of the IPCC going through an elaborate four-year study imagining possible futures if the Panel then turns round and says that this particular future isn't imaginable after all.

    The next step is to look at the time profile of global fossil fuel CO2 emissions under this scenario. As I've said, B1T MESSAGE projects a growth of 42% between 2000 and 2030. It happens that this looks to be in the ballpark as a PREDICTION at this stage but THIS IS NOT NECESSARY TO MY ARGUMENT.

    EITHER the IPCC was wrong in finding that if (I emphasise 'if') emissions rose by 40%+ in the first decades of the century it would still be possible (by following the B1T MESSAGE profile during the rest of the century) to achieve stabilisation at a CO2-equivalent concentration lower than 2 X CO2-equivalent; OR I am wrong in my interpretation of the IPCC's findings as outlined in the SRES and Chapter 9 and Appendix II of the WGI Contribution to the TAR (in which case please tell me where I'm wrong); OR many governments (and others) are wrong in claiming that emissions must start to fall within a decade or so, and be below the 2000 level by large percentages by mid-century. I don't see any other alternative.

    If the governments are right, the world has already failed. As you say, growth in CO2 emissions for several decades is inevitable. But if the IPCC is right, this growth does not at all exclude achieving stabilisation via (for example) the B1T MESSAGE emissions trajectory. The achievability or otherwise of this trajectory is a separate issue which will require the probing of assumptions - it can't be settled by back-of-the-envelope figuring.

    The probability range for climate sensitivity in the A&H paper refers, as I understand it, to the increase in temperature at equilibrium resulting from a doubling in CO2-equivalent atmospheric concentrations. Am I right in my understanding that if these concentrations are stabilised at or below twice the pre-industrial level, the frequency distribution of the probability of different climate sensitivities in that paper can be used to assess the prospective increase in global mean temperature at equilibrium COMPARED WITH THE PRE-INDUSTRIAL LEVEL? In other words, the increase which has already occurred and the 'catch-up exercise' between now and 2030 are included within the increase at equilibrium that is estimated to result from the near-doubling in CO2. If that is the case, I don't understand the relevance to this discussion of your 'catch-up exercise' point. The catch-up increases are already taken into account in the IPCC temperature projections.

    [Response: I think you may be confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised. The more CO2 that is emitted in the meantime, the higher the eventual stable level will be. The B1 scenario looks like it would stabilise at around 2xCO2 in the 22nd Century which is a relatively optimistic outcome (given no specific climate related reductions in emissions). But don't kid yourself that this would be a minor climate change. The EU target of trying to avoid a 2 deg C change over the pre-industrial level is extremely unlikely to be met under such a scenario, and most efforts appear to be directed towards stabilisation at significantly less than 2xCO2, hence the emphasis on slowing growth rates now. -gavin]

    [Response: I can even go Gavin one better. You can compute the CO2 response to your favored emission scenario yourself, using Dave Archer's online version of the ISAM model here. I'll leave the land carbon emissions the same as in IS92b, and assume that we reach 8 Gt per year in 2010 (an approximate linear extrapolation from the 7 Gt per year global emissions in 2002, from CDIAC). Then, if I assume we at least manage to stabilize the emissions thereafter at 8 Gt per year, atmospheric CO2 hits about 550ppm in 2100, but it's not in equilibrium at that point -- it's still rising and will continue to do so into the indefinite future, until the coal runs out. So, in order to stabilize CO2 at less than doubling, it is certain we have to reduce emissions after 2010. Suppose we reduce emissions by 50% from 2015 onward. In that case, we only hit 440 ppm in 2100, which is short of doubling, but the important thing is that the CO2 is not STABILIZED at 440ppm; in 2100 it is still rising at a good clip, and without further reductions in the out years it will double and continue increasing beyond that. If we reduce CO2 emissions beyond 2010 by 75% (down to 2 Gt per year), then CO2 does indeed stabilize at a value short of doubling (about 400ppm). ISAM isn't a state-of-the-art carbon accumulation model, but it's not bad. By these calculations, the statement that it would require a 60% reduction of emissions in the near term seems spot on. If you have some other scenario in mind that you think would stabilize CO2 without such an emissions reduction, you can try it out in ISAM and see what happens. I'm curious about what you actually had in mind. Were you perhaps confusing a target of keeping CO2 below doubling at 2100 with a target of achieving stabilization by 2100? Those are two very different things. --raypierre]
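
    The flavour of those ISAM experiments can be reproduced, very crudely, with a single-reservoir sketch in Python. This is emphatically not ISAM: it assumes a single effective uptake timescale for the whole ocean-biosphere system (a number chosen here purely for illustration), so treat the output as qualitative only.

      GTC_PER_PPM = 2.12   # GtC in the atmosphere per ppm of CO2
      C0 = 280.0           # pre-industrial CO2, ppm
      TAU = 80.0           # assumed effective uptake timescale, years (illustrative)

      def run(emissions_for_year, c_start=380.0, years=range(2006, 2101)):
          """Step a one-box atmosphere forward under a given emissions schedule."""
          c = c_start
          for yr in years:
              uptake = (c - C0) / TAU                              # ppm/yr absorbed by sinks
              c += emissions_for_year(yr) / GTC_PER_PPM - uptake   # ppm/yr added minus absorbed
          return c

      flat_8gt = lambda yr: 8.0                          # hold emissions at 8 GtC/yr
      deep_cut = lambda yr: 8.0 if yr < 2015 else 2.0    # cut to 2 GtC/yr after 2015

      print(f"CO2 in 2100, flat 8 GtC/yr:        {run(flat_8gt):.0f} ppm (and still rising)")
      print(f"CO2 in 2100, 2 GtC/yr after 2015:  {run(deep_cut):.0f} ppm (roughly stabilised)")

    Even this toy version gives the same qualitative answer as the response above: holding emissions flat leaves CO2 still climbing well past 500 ppm in 2100, while a deep cut lets the concentration settle short of doubling. Anyone wanting real numbers should use the online ISAM model linked in the response.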

    Comment by Ian Castles — 29 Mar 2006 @ 10:13 pm

  62. Forgive me for leading us farther astray from the topic, but I'm finding the dialog with Ian very instructive, and if not here, where else would we pursue it? If I understand Ian correctly, he is suggesting that naturally occurring technological progress will reduce the per capita carbon emission in the future to a point where the problem of global warming will largely solve itself. Evidently, many nations of the world must not believe this, otherwise they'd have nothing to lose by signing on to mandatory carbon controls beyond Kyoto. I don't see any clamor to do this, not by the US, not by China or India, and not by Australia either. Perhaps these nations are misguided.

    However, I question the assumption that technological innovation will substantially decrease per capita carbon emissions in the future, in the absence of aggressive internalization of the environmental costs of coal burning and tar sand mining, as implemented by either cap and trade schemes or carbon taxes. Without such measures, where would the incentive be to invest in new technology? Coal is abundant and cheap, and is likely to remain so. There are no market signals in place (beyond the limited Kyoto action) that would cause any free enterprise to consider anything other than coal burning in pulverized coal plants, perhaps with some scrubbers to cut down on the more obvious pollution. In fact world per capita CO2 emissions leveled off at about 1.1 tonnes per person per year in about 1970 and have stuck there ever since. So where is the incentive to do things any differently in the future? The improvement from the 19th to the early 20th centuries was driven by cost, and the desire to get rid of the more obvious forms of pollution (like the London smogs). Most of those incentives are gone now, except to a limited extent in China which is scheduled to reduce its coal related pollution somewhat.

    Comment by raypierre — 29 Mar 2006 @ 11:57 pm

  63. Thanks Raypierre. I'm finding this interesting too. There's no mystery about the profile of emissions I'm talking about: it's the B1T MESSAGE scenario in the SRES, NOT the B1 IMAGE scenario which has a much higher growth of emissions. Thanks for the reference to ISAM and I'll do my best to feed B1T MESSAGE into it and report what I come up with.

    The reason that I remain mystified is that your response to 59 seemed to me to mean that if CO2 emissions in 2100 amounted to 3 GtC or less, they would be absorbed by sinks and the atmospheric concentration of CO2 would not be increasing at that point. Is that correct, or is it the case, as you seem to imply, that the atmospheric concentrations would still be rising at 'a good clip'? Even if ISAM tells me the latter, I think that I will still have trouble understanding the mechanism: where is the increased CO2 concentration coming from, if not from emissions? (I understand the point that there will still be a TEMPERATURE increase in the pipeline).

    Now to your 67. You are not understanding me quite correctly, because it's not me that's canvassing the possibility that 'naturally occurring technological progress' will reduce the carbon emission in the future to the level where the problem will solve itself: it's the IPCC itself, which approved the Special Report on Emissions Scenarios at the WGIII plenary session at Katmandu, Nepal, in March 2000. The arguments in your second paragraph are effectively questioning the findings of that Report. In Box 4-9 (pp. 216-220) of the SRES there is an analysis of estimated cost levels and the output of energy from 22 energy technologies in 2050 and 2100, for each of the six IPCC marker scenarios. Of course the technological assumptions underlying some of the scenarios may be over-optimistic, but that would have to be demonstrated on the basis of evidence rather than the casual empiricism exhibited in your examples of London a century ago and China today. (It's important to note that the IPCC experts excluded all technologies that had not already been demonstrated on a prototype scale, which seems to me to be a very conservative assumption for projections extending a century into the future).

    [Response: Yes, the technologies exist, but I ask again, what is the economic incentive to use them when coal is so cheap? --raypierre]

    Comment by Ian Castles — 30 Mar 2006 @ 3:17 am

  64. Raypierre, I downloaded the ISAM link but wasn't able to work out how to enter an emissions profile such as that of B1T MESSAGE. But I can hardly believe that the projected concentrations (including at equilibrium), forcings and temperature increases under this scenario haven't been modelled already. Dr Tom Wigley said in his presentation to the IPCC Expert Meeting on Emissions Scenarios in Amsterdam on 10 January 2003 that the projected level of CO2 concentrations under this scenario was 480 ppm in 2100. Regrettably, the IPCC has not published a report on this meeting. However, the projected emissions of the main GHGs under the B1T MESSAGE scenario are shown, in the same detail as was used to model the six illustrative scenarios in Appendix II of the WGI Contribution to the TAR, in http format at http://www.grida.no/climate/ipcc/emission/164.htm , and there is a link to the same data in Excel at http://www.grida.no/climate/ipcc/emission/data/allscen.htm . Would it be possible for someone more familiar with the technology than I am to calculate some rough-and-ready projections of key climatic data from these emissions numbers?

    Comment by Ian Castles — 30 Mar 2006 @ 7:56 am

  65. I would like some feedback from the climate scientists regarding my thoughts on climate change. From what I have seen of the current climate models (or simulations), they are remarkably accurate.

    I certainly disagree with a line of argument put forward by the contrarians that you cannot make predictions. They love to quote some physicist who said something to the effect of "making predictions is very hard, especially about the future". If this were the case, then we might as well throw away engineering. It is a stupid argument that is in line with the rest of their repulsive opinions.

    My thoughts follows:
    I have been thinking about the weather in terms of basic thermodynamics, in order to imagine what a worst case ultimate climate equilibrium would look like.

    The way I figure it: weather involves changes in the phase of water (solid - liquid - vapor) and the movement of air. These changes require work to drive them. The work that is available to drive the weather is derived from the temperature differences within the weather system. For example, in your car, the amount of work that is available, before any other efficiency losses, is set by the difference between the temperature of combustion and the temperature of that same gas at the end of its expansion in the cylinder.

    If global warming leads to reduced temperature differentials then what will emerge will be a hotter climate with much less rain and much less wind, so there will be less humid air being forced over mountains. That would mean widespread desert.
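
    To make the heat-engine intuition concrete, the Carnot limit caps the fraction of heat that can be converted into work by the temperatures of the warm source and the cold sink. A minimal sketch, with rough illustrative temperatures that are not taken from any model:

      def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
          """Maximum fraction of heat convertible to work between two reservoirs."""
          return 1.0 - t_cold_k / t_hot_k

      # Roughly tropics vs. high latitudes today:
      print(f"300 K vs 250 K: {carnot_efficiency(300.0, 250.0):.1%} available as work")
      # A hypothetical warmer world with a smaller equator-to-pole contrast:
      print(f"303 K vs 263 K: {carnot_efficiency(303.0, 263.0):.1%} available as work")

    A smaller temperature contrast does mean less work is available in this idealised sense; whether the real atmospheric circulation and hydrological cycle respond that way is the question put to the modellers below.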

    My questions to the climate scientists are:
    Do the climate models predict ultimately (when equilibrium is reached) reduced temperature differentials, including latitude and altitude?
    What was the climate like after the Permian extinction, when CO2 levels were very high?

    Comment by Lawrence McLean — 30 Mar 2006 @ 9:55 am

  66. Lawrence - Permian models, maps, discussion that answer your question, several in the top ten hits here. Note that there's no ultimate equilibrium, the continents drift, orbit/axis change ....
    http://www.google.com/search?q=Permian+climate&start=0
    Also 'Permian' in the search box at top of this page will lead you to other topics here, one in particular, on that question.

    Ian, Raypierre, your conversation is fascinating, more! more!

    Comment by Hank Roberts — 30 Mar 2006 @ 11:57 am

  67. Continuing the discussion with Ian...

    To put new data into the online ISAM model, you just click on one of the IPCC scenarios and then you get an editable table, where you can put in the numbers you want. Since this is made for use in class, the time resolution for the data entry is a little limited, to make things easier for the students; however, you can do pretty well by putting in SRES data interpolated to the time points allowed in the web interface.

    I did this for the B1T MESSAGE scenario, which I hadn't looked at closely before (as I said, there are a whole lot of scenarios there). This scenario basically assumes we manage to hold fossil fuel emissions steady at about 9 Gt per year between 2015 and 2025, whereafter they drop precipitously to 5 Gt in 2050 and 2.68 Gt in 2100. This indeed does result in stabilization at 460ppm in 2100, largely because the sharp drop to 2.68 brings emissions down to where they are balanced by oceanic uptake of the accumulated atmospheric burden. It thus does appear that if we can at least keep emissions flat up to 2025, we can put off the need for really strong reductions until after 2025. Keep in mind also that the MESSAGE scenario includes an assumption of 1.5 Gt additional reduction by 2050 from land carbon sequestration. Reducing the resulting net emissions to 3.5 Gt in 2050 represents a 56.25% reduction from an 8 Gt baseline. It seems excessive to raise such a hue and cry (as if it were a real scandal) if some politicians in some speeches set their target at 60% reduction rather than 56.25%. For politicians, I think that's doing pretty well! For that matter, how much reduction you need in 2050 depends somewhat on how much you think emissions will increase before then. We're really quibbling about differences around the margin here. If everybody managed to adhere to something like the MESSAGE scenario, you'd get no complaint from me. Note also that stabilizing at 460ppm still gives you a pretty hefty climate change once equilibrium is reached, so if it's possible to do better by more aggressive emissions reductions, that would be highly desirable.
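
    The percentage arithmetic in the paragraph above checks out:

# A drop from an 8 GtC/yr baseline to 3.5 GtC/yr net emissions in 2050:
baseline, net_2050 = 8.0, 3.5
print(100.0 * (baseline - net_2050) / baseline)   # 56.25 (percent reduction)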

    Comment by raypierre — 30 Mar 2006 @ 12:48 pm

  68. Re #58 Thanks David for that link, but you don't say whether you think it agrees or disagrees with what I have to say. It is mainly concerned with LTE in a glass of water containing an ice cube. That is rather dissimilar to a planetary atmosphere containing a mix of gases which includes greenhouse gases in trace amounts and a condensible greenhouse gas.

    It does say:

    "It is important to note that this local equilibrium applies only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist."

    That does not fit with my understanding. As I understand it, LTE exists when the Planckian temperature equals the Maxwellian temperature, i.e. when the kinetic energy of the massive particles matches the emission of photons. See: http://scienceworld.wolfram.com/physics/LocalThermodynamicEquilibrium.html

    However in the glass, the water and ice will radiate more or less as blackbodies unlike the gases in the atmosphere.

    Cheers, Alastair.

    Comment by Alastair McDonald — 30 Mar 2006 @ 3:28 pm

  69. I don't think it is said nearly enough in conversations about IPCC scenarios: the world does not come to a stop in 2100. Be careful not to look at the climate in 2100 under your favorite scenario and think "that's manageable" without understanding that there is usually more change in the pipe.

    I know raypierre touched on that by making the distinction between CO2 levels in 2100 vs a CO2 equilibrium in 2100; I just wanted to frame the point more prominently.

    Is a 2ºC change over the 21st century ok? Well, what if there is another 1ºC in the pipe? This is harder, especially when we DO NOT KNOW what the threshold is for massive releases of methane from ocean sediment.

    I know Michael Tobis makes this point from time to time, but I agree and it is worth repeating: the idea of focusing on 2100 is really very arbitrary, even if I understand the need for some line in the sand and the difficulty of caring about 1000 years from now.

    Comment by Coby — 30 Mar 2006 @ 4:25 pm

  70. With reference to comments 31 and 44, I think that this paper was indeed published in time to be cited in the latest draft of the AR4, but apparently it wasn't in time to have much influence, if this article is to be believed.

    [Response: I'm pretty sure that can't have been the current draft because they were not finalised at that point. Let's hope not anyway! -gavin]

    Comment by James Annan — 30 Mar 2006 @ 6:34 pm

  71. That was a month ago; have they been continuing to edit the draft since then? For the public, I think dealing with the "low probability" (James, you call it a less than 5% chance) of higher warming has to take into account the number of people at risk.

    If you compare 2 degrees C to a small asteroid, what level of warming compares to a large one?
    The answer is counterintuitive for risk perception, and I think the IPCC needs to deal with the low-probability, high-consequence case. A 4% risk of, what, 6 degrees C of warming, to pick a number wildly.

    Compare your odds of dying from an asteroid impact, from The Planetary Society's page:
    http://www.planetary.org/explore/topics/near_earth_objects/threat.html

    "... in any given year, there is only a 1 in 500 million chance that you will die from a Tunguska-like impact. Over a human lifetime, which we round up to an even 100 years for simplicity, it would seem there is only a 1 in 5 million chance that a Tunguska-like impact will result in your untimely death. A 1 in 5 million chance may be small enough that most people would give it little practical concern.

    "What about the comparative hazard from much less frequent global-scale impacts? If we assume that such events occur only once every million years but are so devastating to the climate that the ultimate result is the death of one-quarter of the world's population, this translates to an annual chance of 1 in 4 million that you will die from a large cosmic impact even if you happen to be far removed from the impact site. Integrated over a century, our simple metric for a human lifetime, the chance becomes 1 in 40,000 that a large cosmic impact will be the cause of your death. Such a probability is in the realm that most people consider a practical concern."

    -----
    Now, the odds we'd get a serious excursion into warming from something unexpected (a methane burp after a major subduction earthquake series, a Yellowstone-caldera explosion) are low. One in four million per year?

    Of course, if we got the warming _and_ the asteroid, they'd cancel out (wry grin).

    I realize this is all outside what the modelers model. But it's what the worriers worry about, eh?

    Comment by Hank Roberts — 30 Mar 2006 @ 7:19 pm

  72. Re 70 et al. Did you include the forcing from the reduction in Arctic sea ice in your sensitivity calculations? Or were they based on the effect of CO2 on the level of the atmosphere which emits at the effective temperature?

    Cheers, Alastair.

    Comment by Alastair McDonald — 30 Mar 2006 @ 7:48 pm

  73. Re 71 Hank, if CO2 levels continue to rise, then the effects of global warming will get worse. How bad does it have to get before we take action? When we do decide, how much worse will it get then?

    Cheers, Alastair.

    Comment by Alastair McDonald — 30 Mar 2006 @ 8:20 pm

  74. 65 Continued...

    Hank,
    I realise that there is no absolute equilibrium; however, what I mean is: what will conditions be like once the system has had time to adjust? Periods of climate stability do exist. In any case, thank you for the suggestion; I thought there might have been a ready answer to my two questions.

    Cheers

    Comment by Lawrence McLean — 30 Mar 2006 @ 8:48 pm

  75. Gavin (70),

    Maybe you don't realise quite what an outlier we currently are in the probabilistic climate prediction community :-)

    It will be interesting to see what gets written in the months and years to come, but anyone who's hoping for a handbrake turn in the next few days is likely to be disappointed. Indeed, it would be risky of the IPCC authors to overturn several years of accumulated work based on one short GRL paper that they've barely had time to read. Who knows, maybe someone will demonstrate why we are wrong... OTOH, plenty of people have seen the paper now and I've yet to hear anyone in the field still maintaining their belief (in a significant chance of a sensitivity substantially greater than 4.5C) in the light of our argument.

    [Response: But as I said, your paper essentially formalises what most people felt already - some of those were likely reviewers of the first draft and so one might expect more of a consensus in the final version. We shall see. - gavin]

    Comment by James Annan — 31 Mar 2006 @ 1:17 am

  76. Aren't people saying that besides sensitivity to CO2, other causes for sudden climate excursions need to be mentioned, to cover worst case risk as explained to the public? I don't see them as ignoring your models, but as saying the models and CO2 sensitivity aren't guaranteed to cover all risks.

    Comment by Hank Roberts — 31 Mar 2006 @ 2:19 am

  77. Raypierre, In your response to my 63, you ask 'what is the economic incentive to use [alternative technologies to coal] when coal is so cheap.' I find it ironic that I am being asked this question, when the IPCC issued a press statement in December 2003 (its only press statement in more than two years) which strongly criticised me and my co-author David Henderson for 'questioning the scenarios developed by the IPCC'. The statement noted that these had been published in a report 'based on an assessment of peer reviewed literature ... and subject to the review and acceptance procedures followed by the IPCC.'

    The IPCC defines 'scenario' as a 'PLAUSIBLE description of how the future may develop'. I think your question implies that some of the IPCC scenarios are implausible - for example, there are three (A1T AIM, A1T MESSAGE and B1T MESSAGE) which assume that coal's contribution to total energy supply will be less than 3% at the end of the century. The reasons why the SRES Writing Team found these scenarios to be plausible are outlined in the Report and in the peer-reviewed literature cited in the lists of references. Another valuable contribution to the literature on this subject, not cited in the SRES, is Chakravorty et al., 1997, 'Endogenous substitution among energy resources and global warming', Journal of Political Economy, 105(6): 1201-34.

    Your 67 is totally off-beam because the emissions numbers you cite bear little relationship to those in the B1T MESSAGE scenario (or any other IPCC scenario). You say that fossil fuel emissions under this scenario 'drop precipitously to 5 GtC in 2050': they don't - these emissions are projected at 8.48 GtC in 2050. According to the table on p. 526 of the SRES (for which I provided links to html and Excel versions), global fossil fuel CO2 emissions in this scenario at mid-century are projected to be 42% above the level in the Kyoto base year.

    I don't know where your numbers come from - perhaps some other scenario done by the MESSAGE modellers at IIASA? You ask me to keep in mind that the MESSAGE scenario 'includes an assumption of 1.5 GtC additional reduction by 2050 from land carbon sequestration'. No, this is not true of B1T MESSAGE: the reduction from land use changes in 2050 is 0.67 GtC, not 1.5 GtC.

    You go on to say that 'Reducing the resulting net emissions to 3.5 GtC in 2050 represents a 56.25% reduction from an 8 GtC baseline', and that 'It seems excessive to raise such a hue and cry (as if it were a real scandal) if some politicians in some speeches set their target at 60% reduction rather than 56.25%.' Well, for a start the projected net CO2 emissions in 2050 in B1T MESSAGE are 7.81 GtC, not 3.5 GtC, and the reduction from an 8 GtC baseline is therefore 2.4%, not 56.25%. And the 60% reductions to which I referred are formal targets set and announced by national and state governments, not 'some politicians in some speeches.'

    I reject your statement that 'we're really quibbling about differences at the margin': the differences are in fact very large and the targets are, at least in most cases, expressed in relation to 2000 or current levels (NOT from the peak levels that may be achieved as your comments imply).

    I'm glad of your confirmation that the CO2 emissions of 2.68 GtC under B1T MESSAGE in 2100 would be balanced by oceanic uptake of the accumulated atmospheric burden, so that concentrations would be stabilised at that level. Gavin said in response to my 61 that I might be 'confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised', but I take your comment to mean that I was not confused on this point.

    You say that 'stabilizing at 460ppm still gives you a pretty hefty climate change once equilibrium is reached, so if it's possible to do better by more aggressive emissions reductions, that would be highly desirable.' In my view, the question of how aggressive emissions reductions should be is properly a matter for decision by governments, not for scientists. But scientists have an important role in helping to ensure that government decisions are made on a fully-informed basis.

    In my 51 I said that "I do not believe that the IPCC 'no climate change policy' scenarios have been properly reconciled with the stabilisation scenarios. This is unfinished business." I reiterate that statement. If anything, the comments that have been made confirm my view that there is no substance to claims that CO2 emissions must start falling within a decade or so if 2 X CO2 is to be avoided. I know some argue that the scale of the emissions reduction task must be exaggerated in order to put pressure on governments to act. I disagree strongly with that view. It's the responsibility of scientists to tell it like it is.

    [Response: Ian, if your aim was to completely obfuscate any original point you were trying to make, you have succeeded. If you wanted to set up a strawman argument ('global emissions must fall within a decade') to knock down, then you may have succeeded as well. The fact remains that decisions being made in the next decade will determine not this decade's emissions but those of the next 20 or 30 years. Unrestrained growth of emissions over that time scale makes stabilisation at levels consistent with a climate change of (say) < 2 deg C over pre-industrial levels extremely difficult. No-one I know thinks that the scale of the emission reduction task is being exaggerated - if anything I think it's being understated to make it look more achievable. A cut of 60% in emissions? Easy? I don't think so. - gavin]

    Comment by Ian Castles — 31 Mar 2006 @ 3:07 am

  78. Re 68.

    I was basically reinforcing the points made by Gavin and Raypierre in their response to #56 (see also my comment #30). You appeared to argue that LTE does not apply in the lower atmosphere, while it indeed does. This does not mean that the emission is treated as a smooth, featureless 'blackbody curve'. The existence of LTE only means that the distribution of gas molecules over particular quantum vibrational and rotational energy states can be accurately calculated using Boltzmann statistics, and that Kirchhoff's law applies (emission coefficient = absorption coefficient at all wavelengths).

    In general the IR absorption coefficient of a gas (i.e. its spectrum) is a function of wavelength, its molecular structure, and the population of its various quantum states (which in LTE is determined by temperature). The line shapes themselves are functions of temperature and pressure (the higher T and P, the broader they become). If one looks at an IR emission spectrum (looking down from space) one sees something like http://www.atmos.umd.edu/~owen/CHPI/IMAGES/emisss.html (note the spectra shown there are at low spectral resolution; at higher resolution individual lines may be resolved). If the absorption coefficient in the atmosphere were constant with wavelength, one would measure a smooth blackbody curve like those depicted in the figure. However, since gases do indeed possess specific absorption spectra, we see this reflected in the observations.

    In practice, if one is interested in using measured spectra to deduce things like gas concentrations in the atmosphere, one must in general perform so-called 'line-by-line' radiative transfer calculations. These calculations are performed directly using detailed data on line positions and strengths and are carried out on a fine spectral grid. Depending on which spectral region one is interested in, thousands of lines may have to be treated. Line-by-line calculations are generally too time-consuming for use in weather and climate models, so so-called band models are used instead. Here, in effect, low-spectral-resolution calculations using effective absorption coefficients are carried out. The accuracy of these band models (several methods exist for constructing them) is tested against observations and line-by-line calculations, and they have generally been found to do a pretty good job for their intended purposes.
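
    A toy illustration of why band-averaging needs care (not a real radiative transfer code; the "spectrum" below is a single invented line on an arbitrary grid):

import numpy as np

# Monochromatic (line-by-line) transmission is exp(-k(nu)*u); averaging that over
# a band is not the same as using the band-mean absorption coefficient, which is
# one reason band models must be constructed and validated carefully.
nu = np.linspace(0.0, 1.0, 2001)                  # arbitrary spectral coordinate
k = 0.5 + 5.0 * np.exp(-((nu - 0.5) / 0.02)**2)   # weak background plus one strong "line"
u = 1.0                                           # absorber amount (arbitrary units)

t_line_by_line = np.mean(np.exp(-k * u))          # mean of monochromatic transmissions
t_band_mean_k = np.exp(-np.mean(k) * u)           # transmission from band-mean k
print(t_line_by_line, t_band_mean_k)              # the two differ noticeably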

    Hope this helps

    Comment by David Donovan — 31 Mar 2006 @ 5:26 am

  79. Thanks for your comment on my 77 Gavin. The Stern discussion paper issued by the British Government states that 'any of these stabilisation levels [450, 500 or 550 ppm equivalent CO2] would require global emissions to peak in the next decade or two and then fall sharply.' This claim was based on calculations by the Hadley Centre.

    Under the IPCC's B1T scenario, global fossil fuel CO2 emissions are projected to be higher in 50 years time than they are now, but the rapid decline after that means that the CO2 burden in 2100 under this scenario is estimated at 480 ppm (Dr. T M L Wigley at the IPCC Expert Meeting in Amsterdam, January 2003). You thought that I might be 'confusing reaching a level of emissions that will (eventually) lead to a stabilisation, and the actual level at which CO2 may be stabilised', but it is clear from Raypierre's comments in 67 that in this case the eventual stabilisation would not be higher than 480 ppm. I conclude that the UK Government view that emissions must peak in a decade or so and then fall sharply in order to stabilise CO2 concentrations at 450, 500 or 550 ppm CO2 equivalent is incorrect. I'm grateful for all of the comments I've had.

    Comment by Ian Castles — 31 Mar 2006 @ 2:37 pm

  80. Ian --

    I pointed you towards Archer's online model in the hopes that you would dial in the kind of scenario you wanted and get a feel for the behavior of the system yourself. Since your computer skills don't seem to be up to filling in some numbers on a web form and clicking on a button, I did a hasty job of this myself; my time isn't unlimited, and if you want other people to do your work for you, you've got to be patient and not rant if people don't do things exactly the way you want the first time. Nonetheless, I'm trying to provide some numbers that will help me (and you) understand your argument.

    I banged in some numbers that had an emissions reduction (land sequestration included) of about 60% in 2050, and showed that this yielded stabilization at a value somewhat short of doubling CO2. I rather like this scenario, but it's true that it isn't B1T Message -- when doing the interpolation to transfer the numbers from the SRES web site to Archer's model I accidentally shifted a few columns and wound up creating a scenario with earlier reductions than B1T Message. The CO2 stabilization value I quoted was correct for this scenario; also the 2.67 Gt/year target value I quoted (and used) for 2100 is the correct one for B1T Message, including land sequestration. That's a pretty aggressive reduction, I think you'll agree.

    In response to your comment, I had a closer look at B1T Message, and ran the correct numbers for this scenario through a proper interpolation before putting them in Archer's model. My description of the scenario given in my previous post is basically right, except that the aggressive reductions (relative to 2010 base values) don't begin until after 2050. B1T Message has a net emission reduction (including sequestration) of 13% in 2050, increasing sharply to 70% in 2100. This scenario, put into the carbon cycle model, just barely stabilizes CO2 by 2100, at a value of 480ppm. However, this is only CO2, and this concentration yields a radiative forcing just 0.8 W/m**2 shy of the radiative forcing corresponding to a doubling of CO2; you have all the other anthropogenic greenhouse gases to add in as well. But yes, it does seem that if one's only ambition is to live with a climate stabilized at that yielded by a doubling of CO2, there is a scenario where one can postpone the date of the 60% emission drop past 2050, and make up for that with sharp reductions going to 2100. This still leaves you with a climate that is very likely warmer than anything we've seen for the past 10 million years or so.
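
    As a rough cross-check of the "0.8 W/m**2 shy of doubling" figure, one can use the standard simplified CO2 forcing expression dF = 5.35 ln(C/C0) (Myhre et al. 1998) with a pre-industrial reference of roughly 278 ppm (my assumption, not a number from the discussion above):

import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    # Simplified CO2 radiative forcing in W/m**2 (Myhre et al. 1998)
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(2 * 278.0))                       # ~3.7 W/m**2 for doubled CO2
print(co2_forcing(480.0))                           # ~2.9 W/m**2 at 480 ppm
print(co2_forcing(2 * 278.0) - co2_forcing(480.0))  # ~0.8 W/m**2 difference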

    Even this would be a worthy goal -- it would at least mean we would never have to face a 4xCO2 or 8XCO2 climate. If the nations of the world could at least agree to do this, it would be a good start. One could justifiably argue that to allow the world emissions to conform to B1T Message, the developed countries should be subjected to more stringent emissions reductions, to make up for their greater past emissions and the benefits reaped thereof by their economies. Note that even though the B1T Message scenario doesn't call for sharp reductions in emissions until after 2050, it does require emissions to be kept below 9Gt through 2025 and to begin to decrease after that. Are you really saying that that will happen without some kind of carbon tax or equivalent action? I have yet to see any evidence of that.

    Finally is your main complaint that some politicians may have said that a 60% reduction by 2050 would be necessary to stabilize at 2XCO2? There are many different variations on what might have been meant by a claim of that sort. I would be very grateful if you could provide some specific citations to where this specific argument has been made, so we can all base our judgement on what was actually said. I'll reiterate what Gavin said, and what I said earlier: even achieving the modest goals of B1T Message is no mean feat, and (given the long capital life of energy systems) is very likely to require the right market signals to be put in place very soon.

    Now, Ian, I do assume you are really interested in all this, and are not just using this discussion as a ruse to harvest sound-bites that can be quoted out of context on those other blogs you like to post to. If that happens, I will be both disillusioned and disappointed.

    Comment by raypierre — 31 Mar 2006 @ 11:44 pm

  81. Dano, In your 60, you say that my argumentation 'omits the implementation from feedback loops', and that in an adaptive management world the successful implementation of management strategies would reduce the amplitude of a scenario in which (to take my illustration) electricity output per head for the world as a whole was five times as great as that of the rich OECD90 countries in the Kyoto base year.

    I think that you may have misunderstood my argument. I wasn't arguing that the level of electricity use projected in A1FI could ever happen, with or without adaptive management strategies: on the contrary, my illustrations were designed to show that it couldn't. It follows, all other things being equal, that the projected emissions in the A1FI scenario couldn't happen either, and neither could the IPCC's projections of high increases in temperature. Of course all other things wouldn't be equal, but this doesn't affect the in-principle point that if the assumed level of electricity consumption isn't realised, for whatever reason, the upper end of the IPCC's temperature range is overstated.

    Comment by Ian Castles — 1 Apr 2006 @ 12:08 am

  82. Raypierre, Thank you for the detailed information in your post 80. I can assure you that I AM genuinely interested in all this. I did go to the model you suggested, but when I realised how few data points there were (e.g., no decadal values in the second half of the century) I couldn't see the point of trying to replicate the number I already had from Tom Wigley's presentation at Amsterdam. You have now arrived at the same figure (480 ppm). I note this is for CO2 alone.

    Some of your comments imply that I am advocating this emissions scenario. That's not so. I've tried to explain that I believe that the severity of emissions reductions, and the balance between adaptation and mitigation, is a matter for governments to decide, based on the best available information. We've now established, I think, that an important statement in the Stern Review papers is incorrect: it is stated in these papers, on the basis of advice from the Hadley Centre, that achievement of stabilisation at 550 ppm CO2 equivalent 'would require global emissions to peak in the next decade or two and then fall sharply.' That appears not to be so.

    I don't conclude from this that 550 ppm CO2 equivalent is an appropriate stabilisation target. Perhaps in the light of this information, and the advice of scientists that the climate would very likely still be warmer than for the past 10 million years or so, governments may want to work to a more stringent target. As I see it that's for them to decide, on an 'eyes-open' basis.

    You ask whether I am really saying that emissions will begin to decrease after 2025 without a carbon tax. No, it's the IPCC's model-builders that have found that this MAY happen. Judgments on whether or not it WILL happen could only be made by examining the realism of the assumptions. I've said that I believe that the growth assumptions in the B1 (and A1) scenarios are too high, but it doesn't necessarily follow that the assumptions are unreasonable overall. I'd make the same comment on your statement that a global 2.68 GtC represents an 'aggressive' reduction, and the issue about the need for the 'right market signals' to be in place if the profile of emissions in B1T MESSAGE is to be achieved. If this means climate-related policies such as carbon taxes, I can only repeat that the SRES modellers were specifically precluded from assuming these. This is true of ALL of the scenarios.

    Finally, I'm not 'complaining' about what some politicians may have said: I don't blame politicians or governments for saying that a 60% reduction in global GHGs is required by 2050 in order to achieve stabilisation at 2 X CO2 if, for example, Stern has said the same thing based on Hadley Centre advice. The UK House of Lords Committee said that they 'were concerned that UK energy and climate policy appears to rest on a very debatable model of the energy-economic system and on dubious assumptions about the cost of meeting the long run 60% target' (para. 94). I know that the target was set in 2003 and relates to the period 2000 to 2050. I'll try and dig up some more information on the UK and on other jurisdictions that have set targets to meet your request. I agree that industrialised countries would have to work to above-average reductions in order to achieve any given global percentage reduction. Please bear in mind that my time isn't unlimited either.

    Comment by Ian Castles — 1 Apr 2006 @ 2:42 am

  83. Gavin, will you provide citations for the three papers/reports by Jule Charney, Suki Manabe and Jim Hansen mentioned in the first sentence of the post. Thanks

    [Response: It's a reference to the Charney report:

    National Academy of Sciences, Climate Research Board (1979). Carbon Dioxide and Climate: A Scientific Assessment (Jule G. Charney, Chair). Washington, DC: National Academy of Sciences.

    -gavin]

    Comment by Dan Hughes — 1 Apr 2006 @ 12:35 pm

  84. Thank you for the info. The NAS report seems to be no longer available from NAS, so I cannot get citations to the Manabe and Hansen peer-reviewed papers. I have been tracking down and working with some of the original climate models as reported in the peer-reviewed literature; Manabe, Moller, Sellers, Budyko, Paltridge, North, Saltzman, Lorenz, Ramanathan, along with several others ... . These papers add up to a lot of material so it is possible that I've missed what I'm looking for, but I have not seen climate model results for doubling CO2 published prior to 1976. Maybe my focus on papers relating to models has caused me to miss this specific information.

    Will you provide the exact citations for the Manabe and Hansen papers mentioned in the post?

    Thanks again.

    [Response: Manabe, Syukuro, and Richard T. Wetherald (1975). "The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model." J. Atmospheric Sciences 32: 3-15.
    Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner 1984. Climate sensitivity: Analysis of feedback mechanisms. In Climate Processes and Climate Sensitivity (J.E. Hansen and T. Takahashi, Eds.). AGU Geophysical Monograph 29, pp. 130-163. American Geophysical Union. Washington, D.C.. However, note that the simulations reported in 1984 were actually done prior to the 1979 report and were discussed in there. You are probably best off trying to track a copy of the NAS report from your local library. -gavin]

    Comment by Dan Hughes — 3 Apr 2006 @ 11:52 am

  85. Has the very recent info from the Aura satellite study surprised any of the modelers? Quoting a snippet from EurekaAlert with the cite:

    " ... Su et al. collected simultaneous observations of upper tropospheric water vapor and cloud ice from the Microwave Limb Sounder on the Aura satellite. Their observations show that upper tropospheric water vapor increases as cloud ice water content increases. Additionally, when sea surface temperature exceeds around 300 Kelvin [30 degrees Celsius; 80 degrees Fahrenheit], upper tropospheric cloud ice associated with tropical deep convection increases sharply with increasing sea surface temperature. This moistening leads to an enhanced positive water vapor feedback, about three times larger than that in the absence of convection.

    Title: Enhanced positive water vapor feedback associated with tropical deep convection: New evidence from Aura MLS

    Authors: Hui Su: Skillstorm Government Services, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA; William G. Read, Jonathan H. Jiang, Joe W. Waters, Dong L. Wu, and Eric J. Fetzer: Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.

    Source: Geophysical Research Letters (GRL) paper 10.1029/2005GL025505, 2006

    I gather this tropical convection study fills a gap in observations; curious what the climatologists are learning from it, if it's not too soon to comment. If there's another thread for it, I've not found it.

    Comment by Hank Roberts — 3 Apr 2006 @ 6:10 pm

  86. Thanks to Ian for the additional information in #82. I think we are converging on some kind of understanding here.

    I haven't read the Stern Review myself, but based on your quote I'd agree that it is technically incompatible with an analysis of B1T MESSAGE based on simplified carbon models like ISAM. I do think that policy documents should strive to be as accurate as possible in their scientific arguments, but I think it is equally important to ask whether the nature of the mis-statement (if indeed it is a mis-statement) does any real harm, or sends the policy in a completely wrong direction. In this case, I think the answer is clearly in the negative. Based on results from ISAM, and taking into account the assumed land sequestration, a more correct statement would have been "The global net carbon emissions must essentially stop increasing in the next decade or two, and begin decreasing sharply in the following two or three decades." This statement conveys precisely the same sense of urgency, and the difference in policy implied is very little. It calls for a little less near-term action, but the difference between the policies you'd put in place to keep emissions from growing and policies that would go a little farther and begin the decrease earlier, is not so very great. Still, assuming that the Stern review was working off an ISAM-type analysis, I would have preferred my version of the statement. For developed countries, I myself would advocate a nearer-term start of reduction, but not on the basis of the precise statement you quote.

    Now, I don't want to be too hasty in blaming the Stern commission, since my discussion above is based on the ISAM model, which is a pretty crude carbon cycle model. I suspect Wigley's talk was based on the same model. The Hadley Center group has been doing some very sophisticated things with modeling ocean carbon uptake and land carbon feedbacks, and they may have provided some not-yet-published information to the Stern group which does indeed suggest that an earlier start of the reduction would be necessary. It would be very interesting to know precisely what information they provided to the Stern commission, and whether the commission interpreted it correctly.

    I'll comment later on the matter of whether the SRES scenarios should be considered forecasts. Briefly, they are storylines, not forecasts, and though SRES may have been told not to consider carbon taxes and suchlike policies, some of these scenarios are extremely unlikely to happen without policy intervention. B1T Message basically assumes everybody sees the light and goes green -- spending their money on opera subscriptions rather than Hummers -- without any nudging. It could happen, but I wouldn't bet on it. Policy measures would have the intent of making B1T Message a more likely future and A2 a less likely future.

    Comment by raypierre — 4 Apr 2006 @ 10:48 am

  87. Ray,

    You appear to be at least tending towards, if not quite repeating, the same regrettable error of regarding the high-growth scenarios (e.g. A2) as "business as usual" and asserting that climate change-related policy action is required to push us onto a lower emissions path. As Ian points out, the SRES itself explicitly disavows such an interpretation. Choosing the highest emissions pathway one can get away with is fine for investigating model behaviour, but it's not much of a basis for a plausible forecast. I noted with some disappointment that some of the recent work on ice sheets even used a 1% pa compound CO2 increase, backdated to 1990! That's already 20ppm too high for the present day, a gap that will grow rapidly over the next few decades. Yet this was presented as a forecast for 2100, unless we "do something". This sort of behaviour really gives the sceptics a stick to beat us over the head with.

    Of course one can also note that there is no such thing as "no policy" anyway. Different policies will lead to different outcomes, and the SRES considers a range of futures in which AGW mitigation is not considered. The concept of (environmental) policies that do not explicitly consider GHG emissions in any way seems a bizarre one to me, but it seems to be how they approached things, and no doubt they have some justification for this. It certainly doesn't describe how the real world is working, though. I predict (hey, a real forecast) that the next set of scenarios will have generally lower emissions than the current set - they will have to start lower, just to match reality in 2000-2010.

    Comment by James Annan — 5 Apr 2006 @ 11:15 pm

  88. Hmmm, James (re: #87), would you be referring to Overpeck et al. (2006)? After running their model with a 1% per year increase from 1990, and then predicting doom and gloom from melting ice by 2100, they had the nerve to write in their Supplementary Material section: "For example, it is highly likely that the ice sheet changes described in this paper could be avoided if humans were to significantly reduce emissions early in the current century." As you point out, CO2 concentrations are already much lower than their modeled values (BTW, by my calculations CO2 levels are currently 30ppm below a 1% per year increase starting at 355 in 1990). So instead of giving us a stern warning, Overpeck should be praising our remarkable achievements!

    And James, you are perfectly right... as long as folks continue to use a 1% per year CO2 increase in their models and then attach dates to events in the future, we "skeptics" will continue to bash the results (e.g. here). At the very least, remove the DATES from the events. I would have a lot less problem with Overpeck et al. reporting that global temperature will rise to a level high enough to significantly melt ice sheets when CO2 reaches a concentration of 1060ppm (the 1%/yr value for 2100 starting in 1990 at 355ppm), or whatever other value they want, than I do when they say the year 2100. Sure, attaching dates to things makes for really scary headlines and attempts to prompt legislative action, but it is not based in reality.
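
    For reference, the compounding quoted above works out as follows (the ~380 ppm observed value for 2005 is my round number from the Mauna Loa record):

# 1% per year compounding, starting from 355 ppm in 1990.
c0 = 355.0
print(c0 * 1.01 ** (2100 - 1990))   # ~1061 ppm by 2100, i.e. the "1060 ppm" figure
print(c0 * 1.01 ** (2005 - 1990))   # ~412 ppm by 2005 under that assumption,
                                    # versus roughly 380 ppm actually observed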

    Comment by Chip Knappenberger — 6 Apr 2006 @ 10:27 am

  89. Re: #88, "So instead of giving us stern warning, Overpeck should be praising our remarkable achievements!"

    What "achievements"? Us North Americans have done little, if anything, to cut our greenhouse gas emissions. American per capita GHG emissions has increased 13% since 1990, while Canadians have increased their GHG emissions 24% since 1990. The Europeans have at least something to be proud of, since they have cut their emissions. However, we all have to do our part by cutting GHG emissions drastically to avoid catastrophe.

    Comment by Stephen Berg — 6 Apr 2006 @ 1:23 pm

  90. Stephen,

    Well, we (the world) have already achieved an atmospheric CO2 concentration that is 7% below where Overpeck et al.'s scenario said we should be in 2005. If you want to cast our emissions trends in an unfavorable light (as you have in #89) then press upon Overpeck and other modelers to use a scenario that is along the lines of the pathway that you think we should be on (or the one that we are actually on).

    My point is that, simply based upon Overpeck's scenario (a 1%/yr increase in CO2 concentrations since 1990), it *appears* as if we have achieved a great deal by doing nothing at all. I am agreeing with James Annan's sentiments expressed in #87--why start out with something that is already wrong? At the very least, as I mentioned in #88, remove the dates. In other words, tie events to CO2 concentrations and not specific years. If Overpeck et al. had done that, then there would be no grounds for this exchange between you and me.

    Comment by Chip Knappenberger — 6 Apr 2006 @ 2:14 pm

  91. Does "1%" refer to anthropogenic CO2 increase, or to an overall measured atmospheric level total?

    March 2005 NOAA (averaged from 60 sites worldwide) said:

    "The increase in 2002 was 2.43 ppm; the increase in 2003 was 2.30 ppm." [The increase returned to the long-term average of 1.5 ppm per year in 2004, "indicating that the temporary fluctuation was probably due to changes in the natural processes that remove CO2 from the atmosphere."]
    www.publicaffairs.noaa.gov/releases2005/mar05/noaa05-035.html

    Reuters Tue Mar 14, 2006 11:25 AM reported:

    The average annual increase in absolute amounts of CO2 in the atmosphere over the past decade has been 1.9 ppm, slightly higher than the 1.8 ppm of 2004 ....[according to] National Oceanic and Atmospheric Administration, cited by the British Broadcasting Corporation, ... carbon dioxide had grown last year [2005] by 2.6 ppm.

    NOTE -- I'm not saying these numbers are comparable.
    1.5 ppm from NOAA in March for 2004
    1.8 ppm from Reuters quoting BBC quoting NOAA, also for 2004.

    I haven't found the data at NOAA, can someone point to the actual numbers, what stations they're comparing, to make sense of this mismatch? A simple table current through 2005 for Mauna Loa alone would be nice.

    ---> Does the "1%" rate of increase IPCC assumed include feedback added to the historical annual rate of 1.5 ppm? When would such feedback increases be expected to show up above natural variation?

    Can a statistician tell us -- please -- how many more annual observations will be needed to tell us if the rate is increasing (assume a 5% significance level, one-tailed test)?

    Comment by Hank Roberts — 6 Apr 2006 @ 2:22 pm

  92. The Mauna Loa data is here:
    http://cdiac.ornl.gov/ftp/trends/co2/maunaloa.co2

    Cheers, Alastair.

    Comment by Alastair McDonald — 6 Apr 2006 @ 3:03 pm

  93. Re: #90, "Well, we (the world) have already achieved an atmospheric CO2 concentration that is 7% below where Overpeck et al.'s scenario said we should be in 2005.

    ...

    My point is, that simply based upon Overpeck's scenario (a 1%/yr increase in CO2 concentrations since 1990), it *appears* as if we have achieved a great deal by doing nothing at all."

    A 1% per year increase in CO2 would result in an ever-growing annual increment in concentration, not a steady one - roughly 3 ppm/yr at today's levels, rising towards 6 ppm/yr as concentrations approach a doubling. As for the "7% below" comment, that may be as a result of European reductions, not North American action, since our GHG emissions have risen 15% per capita.

    Perhaps Overpeck et al. should have used a ppm/yr rate increase instead of a percentage increase. However, it's nothing we should be squabbling over. It is only a tactic famously used by skeptics to prevent or at least delay the necessary action.

    Comment by Stephen Berg — 6 Apr 2006 @ 4:15 pm

  94. Re: Future increases in CO2 levels

    I understand that future CO2 emission trends are based largely on projected emissions from fossil fuel burning and ongoing deforestation. Something I have read a lot about and which really worries me is whether this ignores potentially vast additional emissions - not just possible natural feedbacks, but emissions from the destruction of carbon sinks.

    What comes to mind are

    a) the massive release of carbon from drained peat swamps on Borneo (and perhaps elsewhere). I understand that the region was a carbon sink with no big fires until the late 1990s, then Suharto came along and decided to drain the swamps and within just two years the peat turned into a massive source of CO2 emissions (see here, for example: www.nature.com/news/2004/041108/pf/432144a_pf.html ). Incidentally, the years of the largest recent peat fires were also the years with annual CO2 increases above 2ppm.

    b) the possibility of a collapse of the Amazon as a viable ecosystem, with increasing fires, loss of vegetation and emission of vast stores of carbon. Although the Hadley Centre suggest that this could result from climate change, it is also quite likely with 'business as usual' deforestation.

    Am I correct to think that, unless people safeguard those carbon sinks (i.e. re-flood the peat swamps and protect rainforests), there will be little hope of stabilising CO2 levels, even if the (essential) efforts were made to cut fossil fuel emissions? How significant are those emissions in the greater scale of things?

    Comment by Almuth Ernsting — 6 Apr 2006 @ 6:27 pm

  95. Alastair, that Mauna Loa data ends with 2004. Any word about 2005, yet?

    Comment by Hank Roberts — 6 Apr 2006 @ 6:33 pm

  96. Here is more recent CO2 data, including graphics:
    http://www.cmdl.noaa.gov/ccgg/trends/
    1995 2.01
    1996 1.19
    1997 1.98
    1998 2.95
    1999 0.90
    2000 1.78
    2001 1.60
    2002 2.55
    2003 2.31
    2004 1.54
    2005 2.53
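
    A rough answer to the statistics question in 91, using the annual growth increments listed above, might look like the sketch below (ordinary least squares with a one-tailed test on the slope; a more careful analysis would account for ENSO-driven year-to-year variability):

import numpy as np
from scipy import stats

years = np.arange(1995, 2006)
growth = np.array([2.01, 1.19, 1.98, 2.95, 0.90, 1.78,
                   1.60, 2.55, 2.31, 1.54, 2.53])   # ppm/yr, from the list above

res = stats.linregress(years, growth)
one_tailed_p = res.pvalue / 2 if res.slope > 0 else 1 - res.pvalue / 2
print(res.slope, one_tailed_p)   # slope in ppm/yr per year; a small p would indicate a rising rate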

    Comment by Hank Roberts — 6 Apr 2006 @ 6:48 pm

  98. I would like to return to some of the points made in the original posting, and a few of the early responses.

    I have read the Annan and Hargreaves (hereafter A&H) paper several times and have had the opportunity to discuss some of the issues with James. There are, I believe, two central conclusions that should be drawn from their work.

    First, not all PDFs for the climate sensitivity are equally credible, and PDFs such as those generated (for example) by Bayesian computation using a single broad data set (e.g., Andronova and Schlesinger 2001 and others primarily based on the last two centuries of forcing and warming) should not be considered as credible or "likely" as those, like A&H, that use additional lines of evidence.

    Second, however, and this is what I will attempt to argue here, there are additional considerations regarding the Bayesian methods used by A&H which suggest that their own calculations are not robust, and are not inherently more credible than PDFs with, in the jargon, "longer tails." The devil is in the details. Some of the reasons relate to the issues raised by Hank Roberts in #33 and other comments, having to do with the interpretation of past climate dynamics as evidence for constraints on future climate dynamics.

    To start with, let me lay out what I think are the crucial claims made in the paper. My interpretation of their argument is that it goes like this:

    1. Bayesian methods are the proper method for estimating the PDF for the climate sensitivity.

    2. Using Bayesian methods, if there exist three fully independent and equally weighted lines of evidence, each of which produces a PDF for a parameter like the climate sensitivity, the appropriate "posterior" PDF is that which you obtain by multiplying the three PDFs together and renormalising (see the numerical sketch after this list).

    3. In the case of climate sensitivity, there exist three such lines of evidence, which give the posterior probability we report.

    4. There are if anything additional independent lines of evidence; thus the PDF they produced from just three is conservative, and should be sufficiently convincing to us to justify our actions.
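
    As promised above, here is a minimal numerical sketch of the multiplication step in point 2 (the three constraint curves are invented for illustration and are not the actual constraints used by A&H):

import numpy as np

S = np.linspace(0.1, 10.0, 1000)   # climate sensitivity grid (degrees C)
dS = S[1] - S[0]

def constraint(s, mode, spread):
    # A generic unimodal likelihood curve peaked at `mode`; purely illustrative.
    return np.exp(-0.5 * ((np.log(s) - np.log(mode)) / spread) ** 2)

curves = [constraint(S, 3.0, 0.5), constraint(S, 2.7, 0.4), constraint(S, 3.2, 0.6)]

posterior = np.prod(curves, axis=0)   # pointwise product of the independent constraints
posterior /= posterior.sum() * dS     # renormalise to a PDF

cdf = np.cumsum(posterior) * dS
print(S[np.searchsorted(cdf, 0.95)])  # 95th percentile of the combined estimate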

    There are three obvious criticisms of this line of argument, which primarily address (1) and (3) above. The importance of (4) I leave for another time.

    First, whether the three lines of evidence are actually independent is not obvious. The authors discuss this at some length, but based on my moderate familiarity with the methods used in each of the three cases, I suspect that there are any number of interdependencies. I believe it is reasonable to claim, as A&H do, that the independence is "close enough" to justify the "as if" methodology. But, and crucially, in the absence of an independence assumption, the method of integrating three PDFs becomes indeterminate. Thus there is no straightforward sensitivity analysis that can be done to show that the interdependencies are irrelevant, because there is no method for such a calculation. Thus the authors have no way of strongly refuting the claim that it's also reasonable to assume that the three lines of evidence are not independent, and thus that the conclusion is not robust.

    Secondly, and this is the point that Hank Roberts raises: the method they use depends on the assumption that the past will be like the future; thus it can at best be used to robustly justify the claim that "if the mechanisms that govern the climate sensitivity in all three of the 'cases' are effectively the same as each other and as those governing it in the (e.g., doubled CO2) future, then it would be reasonable to act as if this PDF will continue to be our best representation of the climate sensitivity going forward." I think there are ample reasons to question this assumption as well, and again, once one entertains this notion, the appropriate "modification" of the calculated PDF becomes indeterminate.

    Third, a closely related notion is that the apparently 'objective' method of multiplying PDFs to get a posterior distribution cannot be substituted for a more subjective analysis that weights the evidential strength of the component PDFs on the basis of the reasoning that went into them. That is to say, in this case, that it's worth asking what is the causal story that leads us to treat a certain line of evidence as a constraint? Prima facie, for example, one might think that the response to a perturbation like Mt. Pinatubo - in which a small, globally distributed negative forcing is created for a very short time - isn't a very strong constraint on the response of the system to a very heterogeneous and high amplitude positive forcing signal.

    I admit I am out of my depth here, and there may be many reasons for considering the volcanic signals as a strong constraint on future climate sensitivity. But the key point is methodological - once one admits that some lines of evidence are stronger than others, the calculation of a posterior distribution becomes indeterminate. One can come up with arbitrary methods for "weighting" alternative PDFs in such a calculation, but there is no single correct method, and no obvious method for demonstrating sensitivity to the assumption that the lines are equal and "full" constraints.

    The key point, in my opinion, is that what Bayesian methods do is suggest what it is reasonable to believe, and thus to act as if you believe, and the A&H method has nothing to say to many of those who have other reasons for assigning a higher probability to high climate sensitivities. They do effectively make the argument that those who would choose to base policy recommendations on the output of any one of the included constraints, without considering other lines of evidence that would tend to narrow it, are making a mistake. But the claim that one must therefore treat the three PDFs as fully independent and that the multiplicative algorithm is robust, I believe, is not justified.

    The significance of methods such as climateprediction.net is another discussion, but suffice it to say for now that one reason that we build mechanistic models rather than only statistical models is to give evidence about how a system will behave pushed outside the domain of experience. The credibility of such models is necessarily a matter of domain-specific arguments.

    Comment by Paul Baer — 6 Apr 2006 @ 9:35 pm

  99. Hi Paul,

    Thanks for your comments. I'll offer a brief reply to some of them.

    1) Although I do agree that Bayesian methods are at least a good method to approach the problem, I think it's more accurate to say that our paper is not so much aimed at promoting this idea as merely pointing out that if you are going to do it (as many people have been in recent years), there are better and worse ways to go about it :-)

    2) I have to say I'm not overly impressed with some of what I've read in the climate science literature regarding such ideas as "probability of probability", or in other words "how confident are you that your pdf (or assumptions) are the 'correct' ones". This seems to be at least in part a category error regarding the nature of the probability that we are estimating - there isn't really such a thing as the 'correct' probability, only the strength of a belief. Kandlikar et al provide strong arguments that precision and confidence have to go pretty much hand-in-hand - it doesn't make much sense to give a precise probabilistic estimate that you have little faith in, and (but perhaps a little more debatably) vice versa. I don't have great confidence that our 3 underlying constraints are exactly 'right' (inasmuch as that assessment even has a meaning) but I think they are tending towards the generous side, and the overall result is robust to substantial changes in them, so I'm therefore more confident about the more precise posterior. Uncertainty in a particular estimate can be accounted for by increasing its error, and in fact we explicitly did this for the LGM. So I reject the notion of some constraints being more equal than others, aside from their actual shape. I'm not insisting that the concept is wholly without merit, merely saying that it is probably an unnecessary complication.

    Independence is a bit more of a tricky one, but note that accounting for dependence could result in a stronger result as well as a weaker one. While our paper was in review, yet another estimate was generated by Forster and Gregory using ERBE data which also points clearly towards low sensitivity. Had this result been available to us earlier, it would have been very tempting to use it in some way (eg here) as their analysis doesn't involve a climate model at all (merely a linear regression) and thus its independence from other analyses would be hard to challenge. So I am quite confident that the results are robust. Given that "beyond reasonable doubt" in legal terms is often stated to mean about 95% confidence or higher, I think we could reasonably expect to convict S of being less than 4.5C.

    Of course, what matters is not really what I think, but how other researchers receive the work - having written a peer-reviewed paper merely means that a couple of referees didn't see any obvious errors. I look forward to seeing what others have to say over the next few months and years - we've actually had very little feedback from anyone, least of all IPCC authors :-) So far, I'm not aware of anyone producing an alternative analysis (in the light of our arguments) to justify the belief that S > 4.5C is credible at more than the 5% level, and certainly not S >> 4.5C (I'm trying to avoid such constructions as "the probability of S > 4.5C", cos it's not "the probability", it is their probability).

    I'm not sure what "other reasons" one can have for assigning high probability to high sensitivity, other than it being our (rationally defensible) belief. Are you really saying we should pretend to believe that climate sensitivity is high (with high probability), even if we don't actually believe it? Why?

    James

    Ref:

    Kandlikar M., Risbey J., Dessai S, (2005) Representing and communicating deep uncertainty in climate-change assessments. Comptes Rendus Geoscience 337(4): 443-455. (not on-line, AFAIK)

    Comment by James Annan — 7 Apr 2006 @ 2:08 am

  100. Hi James -

    Thanks for the thoughtful reply. Here's some quick thoughts:

    I think that the point raised by Kandlikar et al. is quite apropos, about the connection between precision and confidence. The argument I'm making, actually, is that our confidence in our understanding of how the feedbacks in the climate system will evolve in the coming decades - summarized by the climate sensitivity - is in fact low, and that the appropriate conclusion is that the PDF should be considered to be broad.

    Quite specifically, I am indeed arguing, as you suggest no one has yet (and I'm happy to be first :^), that in light of your analysis S>4.5 is still credible at greater than 5%. Because if, as you argue, it is reasonable to believe that it is in fact 2.5%, then our lack of understanding of the process can justify a widening of the credible tail that includes S=5ºC at 2.5% and S=4.5 at 5% or 6% or 7% or what have you. There is no reason not to operate on your calculated PDF just as you operate on the components, widening it subjectively, and arguably good reason to do so.

    I don't disagree that the fact that we have three types of constraints on historical climate sensitivity is some evidence for predicting its future behavior (and let's be clear, probabilistic prediction is what we're trying to do here). I just don't think it's all that strong evidence. We know enough about the complexity of the system to be able to say that even if we knew exactly what the climate sensitivity "was" in each of the three constraining periods - that is to say, if we had a reasonably precise model of the feedbacks and forcings operative in each period - it wouldn't give us a particularly high confidence that it would be very similar in process and consequence in the future. Certainly I think it's reasonable to say "there's at least a 5% chance that the feedbacks will work very differently as we warm the planet towards temperatures unprecedented in recent millennia."

    In short, then, yes, I believe that we ought to act as if the climate sensitivity has a higher probability of being high than we believe may be the best estimate of its 'historical' value.

    In fact, given significant evidence about the state dependence of the climate sensitivity, I'm not even sure that what is estimated by the three constraints is meaningfully a single, unique parameter :^)

    Comment by Paul Baer — 7 Apr 2006 @ 4:33 am

  101. Re 87 and following

    As far as I understand the modelling assumptions, they always assume a 1% increase in CO2 equivalents, that is, in all the greenhouse gases, not CO2 alone. So when comparing to the measurements, one should compare to the total of greenhouse gases, not to CO2 alone.

    Comment by Urs Neu — 7 Apr 2006 @ 5:09 am

  102. Urs (101),

    Yes that's very true - however, in reality the other GHGs actually aren't increasing to make up the difference (methane is currently flat and could even be decreasing slightly, as I've mentioned before).

    Paul (100),

    Write it up and publish it then - good luck :-)

    Comment by James Annan — 7 Apr 2006 @ 7:55 am

  103. >write it up and publish it
    No, we're citizens asking you, or advertising for other experts who are willing to talk to us (well, I'm not a Japanese citizen, but talk to us anyhow), about questions you perhaps don't think are useful. But these questions are about surprises.

    Take examples from outside your immediate area, James (at least I think they are). These questions are addressed to any working climatologist who is reading this. I hope many are, and that more will be encouraged to respond to all this.

    Assuming 'sensitivity' is an applicable term: didn't we have very low estimates for the sensitivity of the Greenland glaciers, and the Antarctic ice shelves, to warming, for example? Would such estimates have changed after the icequakes got noisy and the Ross collapsed?

    Don't we now have very low estimates for the sensitivity of the very deep ocean to atmospheric warming? Would finding warming in the deep ocean (as I believe some Japanese research vessels reported a few years ago) change our expectation?

    Comment by Hank Roberts — 7 Apr 2006 @ 10:28 am

  104. Another example -- five years old, and I don't know where the journal publications are! -- describing drilling evidence of sudden warming caused by events like major seismic activity rather than predictable astronomical changes. I don't know what to make of this, but -- do we expect the unexpected when calculating risks?

    http://www.sciencedaily.com/releases/2001/11/011120045859.htm
    QUOTE

    November 23, 2001
    Global Warming Periods More Common Than Thought, Deep-Sea Drilling Off Japan Now Demonstrates

    CHAPEL HILL -- Core samples from a deep-sea drilling expedition in the western Pacific clearly show multiple episodes of warming that date back as far as 135 million years, according to one of the project's lead scientists. Analysis of the samples indicates warming events on Earth were more common than researchers previously believed.

    The expedition aboard the scientific drill ship "JOIDES Resolution," which ended in late October, also revealed that vast areas of the Pacific Ocean were low in oxygen for periods of up to a million years each, said Dr. Timothy Bralower. A marine geologist, Bralower is professor and chair of geological sciences at the University of North Carolina at Chapel Hill.

    "These ocean-wide anoxic events were some of the most radical environmental changes experienced by Earth in the last several hundred million years," he said.
    ... Drilling took place on Shatsky Rise, an underwater plateau more than 1,000 miles east of Japan. Its purpose was to better document and understand past global warming.

    In geologic time, episodes of warming began almost instantaneously -- over a span of about a thousand years, Bralower said.

    "Warming bursts may have been triggered by large volcanic eruptions or submarine landslides that released carbon dioxide and methane, both greenhouse gases," he said. "Besides reducing the ocean's oxygen-carrying capacity, warming also increased the water's corrosive characteristics and dissolved shells of surface-dwelling organisms before they could settle to the bottom."

    In some especially striking layers of black, carbon-rich mud, only the remains of algae and bacteria were left, he said.

    "The sheer number of cores that reveal the critical warming events found on this expedition -- three from the 125-million-year event and 10 for the 55-million-year Paleocene event -- exceeds the number of cores recovered for these time intervals by all previous ocean drilling expeditions combined," Bralower said.
    --------
    END QUOTE
    ----------

    Ok, enough from me. I'm not saying you all should have modeled this sort of event, I don't see how you can. I'm asking whether we can expect the estimates of possible warming to include some possibility of such events. Major seismicity/undersea landslide/methane release. Or even Antarctic icecap melting and releasing methane hydrates, if there are any buried under grounded thick ice -- and do we know if such exist nowadays?

    Forget the asteroids -- how about Earth's hiccups interfering with smooth predictable change? Risk?

    Comment by Hank Roberts — 7 Apr 2006 @ 7:32 pm

  105. Hank,

    Paul already writes stuff in Science, so I think my suggestion is a reasonable one if he thinks an argument along those lines will stand up to peer review. It would certainly be amusing if all those who have spent the last few years trying to generate estimates of climate sensitivity were to decide, in the light of our result, that it's actually not possible to do this after all, and that we just have to assert S > 4.5C at the 10% level regardless. Time will tell.

    As for things like methane eruptions - this is outside the realm of what we were estimating. There's not any sign of increased methane yet, and the recent RC article clearly played down the risk, but it would be hard to rule out the possibility of it. However, note that if people can come up with 100 scary catastrophe theories, then even if we can rule them all out at the 99% level (which is well-nigh impossible to achieve on a formal basis), they still "win" overall, and can claim that the end of the world is nigh. IMO this says more about the power of imagination than it does about reality.
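
    A back-of-envelope way to see why they "win": under the (artificial) assumption that 100 such scenarios are independent, and that each is only ruled out at the 99% level, the combined chance that at least one of them occurs is still large. A minimal sketch, with made-up numbers:

        # Sketch only: 100 hypothetical, independent catastrophe scenarios,
        # each assigned a 1% probability (i.e. "ruled out at the 99% level").
        p_each = 0.01
        n_scenarios = 100
        p_none = (1 - p_each) ** n_scenarios          # chance that none of them happens
        print(f"P(at least one) = {1 - p_none:.2f}")  # roughly 0.63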

    Comment by James Annan — 8 Apr 2006 @ 12:52 am

  106. Hello,
    I am building a model of the Earth's mean temperature with a new method, and I am looking for data (time series):
    - recent (e.g. 1900-2005) Earth mean temperature
    - recent marine benthic oxygen isotope values
    - recent dust (aerosol) values
    - recent Na values
    I have the ancient values from Vostok and Dome C, but I need recent ones as well...
    Can someone tell me where I can find these?
    Florent

    Comment by Florent Dieterlen — 8 Apr 2006 @ 6:10 am

  107. >outside the realm of what we were estimating

    That is a good clear answer to my question, thank you -- I thought perhaps your procedure relied on a "bottom line" outcome of what we know from prior climate change -- including known and unknown details.

    In addition to methane outbursts, what else did you choose to rule out of the realm used to make your estimates?

    I guess I'm puzzled about why the half of the total CO2 that humans released in the past 30 years is not like a methane outburst. I can understand why the first half of the CO2 people released over 11,000 years (per Dr. Ruddiman) would give a sensitivity range comparable to how past climate behaved in the absence of any sudden spike like a methane outburst.

    But I wonder -- if you next do include methane outbursts from the past climate record in the realm of calculation -- would the human CO2 release of the past 30 years compare to an outburst event? Would you get a different value for sensitivity, if outbursts were in the realm considered?

    Comment by Hank Roberts — 8 Apr 2006 @ 11:33 am

  108. Hi James - your suggestion to write it up and submit it is obviously correct. In the meanwhile, however, I hope that you and others in this forum can help me explore the arguments. I think that progress in this area regarding the use of Bayesian and related models of uncertainty depends on generating active discussion. Typically, scientific "results" are generated in collective settings (think of a lab group, at one scale); I'm hopeful that realclimate can help facilitate a "virtual collective" on this topic. (If there's another discussion forum that would be more appropriate, I'm happy to move the discussion there.)

    In any case: I'm working on a much longer argument, but as I reread the comments of yours to which I'm responding, I was struck by this: "Independence is a bit more of a tricky one, but note that accounting for dependence could result in a stronger result as well as a weaker one."

    As I understand the method, I can't see how that could be true. Isn't multiplication of the PDFs the method that produces the greatest narrowing of uncertainty, based on the assumption of complete independence? How, mathematically and conceptually, could greater narrowing of uncertainty come from asserting any positive degree of dependence?

    --Paul

    Comment by Paul Baer — 9 Apr 2006 @ 12:57 am

  109. Hank,

    The point is that simply from the POV of looking at how the physical climate system (ocean + atmosphere including sea-ice) responds to forcing, we would consider methane as an additional forcing. Any methane would (to first order) have the same effect as an equivalent CO2 concentration. I thought Gavin explained it pretty well in the article itself.

    Comment by James Annan — 9 Apr 2006 @ 4:03 am

  110. Paul,

    Sure, I have no objection to discussion, and here is as good as anywhere. I didn't mean to sound snarky in my previous comment.

    Independence.

    I'll start off by clarifying exactly what "independence" actually means in this situation. The first thing some people think when I assert that the different observations are independent is "oh no they aren't - they are measuring the same thing (sensitivity = S), so a high value for one observation would lead us to expect a high value for other observations". This is true, but it is the independence of the errors that matters. One way of looking at it is to ask the question "if we knew S, would knowing the value of an observation X1 change our expectation of a presently unknown observation X2?" If the answer is no, the observational errors are independent.

    Here is an example of how combining observations with strongly dependent errors can give a result which is much more accurate than an assumption of independence would lead you to expect:

    Assume you have an apple of unknown weight A, and weigh it on a balance against some old weights with limited accuracy. The weights that balance the apple add up to an indicated X, but the marked weight has an error of some unknown value e.

    So we have

    A=X+e

    and now know the weight of the apple with some limited accuracy.

    Now we take the apple, and these exact weights, and put them all on the same side of the balance. On the other side, we use some well-calibrated weights and obtain a value of Y (which has an error which is small enough to be negligible).

    So we have
    A+X+e = Y

    which again gives us the weight of the apple with the same magnitude of error e, ie: A=Y-X-e

    Combining these estimates under an assumption of independent errors, we'd get an overall value of A=Y/2 with an error e/sqrt(2). But note that if you simply add the equations, you get
    A+A+X+e=Y+X+e
    ie 2A=Y
    and we get A = Y/2 with no error at all! The reason is that although the second estimate has an error of the same magnitude, and the error is highly dependent on the error of the first measurement, the two errors are negatively correlated. So they cancel better than they would if they were independent.
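
    A minimal numerical sketch of that apple example (the weight and error scale below are made up; this only illustrates the algebra above):

        import numpy as np

        rng = np.random.default_rng(0)
        A = 150.0                        # true apple weight (grams, invented)
        E = 5.0                          # st dev of the unknown error e of the old weights

        e = rng.normal(0.0, E, 100000)   # many hypothetical realisations of the error
        X = A - e                        # first weighing: A = X + e
        Y = A + X + e                    # second weighing: A + X + e = Y (calibrated side exact)

        est1 = X                         # estimate of A from the first weighing
        est2 = Y - X                     # estimate of A from the second weighing
        combined = 0.5 * (est1 + est2)   # i.e. A = Y/2

        print(est1.std(), est2.std())    # each has a spread of roughly E
        print(combined.std())            # essentially zero: the errors cancel exactly
        # (an independence assumption would have predicted a residual spread of E/sqrt(2))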

    Now I'm not claiming that this is the case in our work and that any dependencies are negatively rather than positively correlated - merely pointing it out as a possibility in reply to those who claim that since we ignored dependency, we have necessarily overestimated the tightness of the result. In any case, even though one could argue for some possibility of dependence in some cases, I really can't see what hint of a possibility there is of a dependency between the ERBE data set examined by Forster and Gregory, the proxy records contained in sediment cores which inform us about paleoclimate, and the magnitude of the seasonal cycle for example. And I note that such an assumption (independence) is entirely routine in the absence of strong arguments (certainly, in the absence of any meaningful argument at all) to the contrary.

    In summary (for now) I'd like to re-emphasise just what most of these previous estimates were doing. They start off from an extremely pessimistic prior - a uniform prior on [0,20] might sound like just "ignorance" but in fact it represents the prior belief that 10 < S < 20 is ten times as likely as 2.5 < S < 3.5, for example (and S > 5 is 15 times as likely). Then, using a very limited subset of the available evidence, we cannot rule out the high values with 100% certainty, so even though the agreement with observations is poor and the likelihood of these high values is very low compared to the well-fitting values close to 3, the posterior probability integrated across this range is not quite negligible. If we start off from a prior that does not assign such a strong belief to high S in the first place, or (equivalently) use some more evidence out of the mountain that points to a moderate value, then this problem simply goes away. As was noted on RC some time ago (and also here), these estimates which assigned significant probability to high S never did actually amount to any genuine evidence for this. All we have really done is to formalise those arguments.
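
    To make the arithmetic of that uniform prior explicit, here is a rough sketch (the likelihood shapes below are invented stand-ins peaking near S = 3, not the actual constraints used in the paper):

        import numpy as np

        S = np.linspace(0.005, 20.0, 4000)      # grid of sensitivity values (deg C)
        dS = S[1] - S[0]

        prior = np.ones_like(S)                 # uniform prior on [0, 20]
        prior /= prior.sum() * dS

        def prob(pdf, lo, hi):
            """Probability mass of a gridded pdf between lo and hi."""
            m = (S >= lo) & (S <= hi)
            return pdf[m].sum() * dS

        # The prior ratios mentioned above:
        print(prob(prior, 10, 20) / prob(prior, 2.5, 3.5))   # about 10
        print(prob(prior, 5, 20) / prob(prior, 2.5, 3.5))    # about 15

        # Hypothetical likelihoods standing in for two independent constraints:
        like1 = np.exp(-0.5 * ((np.log(S) - np.log(3.0)) / 0.5) ** 2)
        like2 = np.exp(-0.5 * ((np.log(S) - np.log(3.0)) / 0.4) ** 2)

        post1 = prior * like1
        post1 /= post1.sum() * dS
        post2 = post1 * like2                   # multiply in the second constraint
        post2 /= post2.sum() * dS

        print(prob(post1, 4.5, 20))   # residual tail above 4.5 with one constraint
        print(prob(post2, 4.5, 20))   # noticeably smaller once the evidence is combined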

    It's also worth noting that Chris Forest has already generated more moderate estimates over several years (most recently in Jan 2006), by using an old "expert prior" (which originates in the 80s I think) together with the recent warming trend. It is perhaps unfortunate that these estimates didn't receive as much attention as the more exciting results in the same papers which he generated from a uniform prior. It may be a little questionable how much belief we should place in expert priors from 20 years ago - on the other hand, the overall result IMO is pretty much the same as if we took an intelligent look at other data such as the paleoclimate record.

    Comment by James Annan — 9 Apr 2006 @ 4:15 am

  111. Hi James - thanks for the prompt and thorough reply. I hope it has some value to you to spend some time replying, as well as to me and others.

    Your point about dependence is interesting - I've seen something like that apples-and-scales argument before. But my question was very specifically about the Bayesian calculation you were doing.

    I'm still interested in an answer to this question: doesn't the multiplication algorithm simply assume complete independence, and isn't it thus true that in this particular problem structure, dependence could only weaken the conclusion?

    My concern here really is that there's a methodological leap that justifies the multiplicative assumption without any ability to test either the legitimacy of the assumption or the sensitivity of the result to alternative assumptions. In particular, I don't think that widening the spread of the component PDFs is a methodological substitute for a modification of the multiplicative algorithm, because (as you noted in an email to me) if you combine by multiplication any two distributions with the same centroid, the result is narrower than the narrowest of the originals.

    Although I haven't yet really spent much time thinking about it, I suspect there are any number of dependencies in the three major constraints that you use (leaving the ERBE data set for another time) - for example, in the methods that are used to estimate radiative forcing in each era.

    More generally, one of the intuitions I'm pursuing is that if you have one estimate of a value with a spread of X, and a second estimate that has a similar centroid but a spread of 2X, it's at least possible that your revised distribution will be between the two, rather than narrower than X. I believe that in fact the actual "substance" of the experiments is as important to the "joint" PDF that emerges as the shape of the originals. I'll have to see if I can come up with some good examples.
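
    As one concrete (if idealised) check of the Gaussian case, here is a sketch with two same-centroid distributions of spread X and 2X (arbitrary, made-up units): multiplying them gives something narrower than the narrower input, while averaging them lands in between.

        import numpy as np

        x = np.linspace(-30.0, 30.0, 12001)
        dx = x[1] - x[0]

        def gauss(mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

        narrow = gauss(0.0, 1.0)            # spread X (arbitrary units)
        wide = gauss(0.0, 2.0)              # same centroid, spread 2X

        product = narrow * wide             # Bayesian combination assuming independence
        product /= product.sum() * dx
        mixture = 0.5 * (narrow + wide)     # simple averaging of the two pdfs

        def spread(pdf):
            mean = (x * pdf).sum() * dx
            return np.sqrt(((x - mean) ** 2 * pdf).sum() * dx)

        print(spread(product))              # ~0.89: narrower than the narrowest input
        print(spread(mixture))              # ~1.58: in between the two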

    Thanks again for engaging in the discussion.

    By the way, I'm not sure if I said this in one of my earlier postings, but I actually agree with your fundamental claim that alternative lines of evidence suggest that very high values of S are very unlikely, although I really do think that values of ~5º can't be ruled out with high confidence. But my concerns in this debate are significantly methodological. I don't remember if you read chapter 3 of my dissertation, but in part I'm really interested in the question of what PDFs mean, and what kind of inferences can be drawn from them. Fundamentally, I believe that Bayesian methods are normative rather than "objective," and thus that the action-informing power of their conclusions requires additional levels of justification.

    --pb

    Comment by Paul Baer — 9 Apr 2006 @ 12:12 pm

  112. Hi Paul,

    Yes it's always useful to rehearse and clarify arguments. OTOH I am fully aware of the limitations of debate in changing minds!

    The example I gave is exactly what you said you were seeking - a case where errors are negatively correlated and thus an assumption of independence is pessimistic. The first data point gives rise to a likelihood N(X,E), where E is the (estimated) st dev of the unknown actual error e. The second measurement gives N(Y-X,E). Both of these are gaussian; they each individually give rise to pdfs of that shape under the assumption of a uniform prior. Combining them under an assumption of independence via Bayes gives us N(Y/2,E/sqrt(2)), which is a tighter gaussian, but still with significant uncertainty. But if we account for the dependence we get N(Y/2,0). I could have allowed for the case in which Y has an error too, which would have conveyed the same underlying message but required more ascii-unfriendly algebra. There is no fundamental difference in principle between this simple example and all the other more detailed stuff in real cases.

    It is interesting that you bring up forcing errors. It is well known that the 20th century warming tells us that a larger-than-expected forcing from sulphate aerosols implies a higher sensitivity (or else we would not have warmed so much). However, if the forcing from a volcanic eruption is higher than expected, this tells us that sensitivity is lower than we currently estimate, or else we would have seen a bigger cold perturbation at that time! This is precisely a case where plausibly codependent errors are negatively correlated.

    Your suggestion about averaging pdfs makes some sense when the pdfs are equally plausible analyses of the same information. In fact our 20th century constraint can be viewed in that light (an average of the various published analyses), although we didn't actually perform this operation, merely chose a convenient distribution that seemed roughly similar. But if you ask two people the time, and one of them has a watch that says 9:45 am (as mine does) while the other says "well, it's daylight, I guess that means it's between 6am and 6pm", would you really average their resulting implied pdfs (one a narrow gaussian around 9:45, the other uniform on [6am,6pm]) and act as if there was a 25% probability of it being afternoon?
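
    For what it's worth, a tiny sketch of that watch example (time in hours since midnight; the watch's spread is invented), contrasting averaging the two pdfs with multiplying them:

        import numpy as np

        t = np.linspace(0.0, 24.0, 24 * 600 + 1)                  # time of day in hours
        dt = t[1] - t[0]

        watch = np.exp(-0.5 * ((t - 9.75) / 0.05) ** 2)           # narrow gaussian around 9:45 am
        watch /= watch.sum() * dt
        daylight = np.where((t >= 6.0) & (t <= 18.0), 1.0, 0.0)   # uniform on [6am, 6pm]
        daylight /= daylight.sum() * dt

        average = 0.5 * (watch + daylight)                        # averaging the two pdfs
        product = watch * daylight                                # combining them via Bayes
        product /= product.sum() * dt

        afternoon = t >= 12.0
        print(average[afternoon].sum() * dt)                      # about 0.25
        print(product[afternoon].sum() * dt)                      # essentially zero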

    Comment by James Annan — 9 Apr 2006 @ 8:46 pm

  113. I think my rather inarticulate question is understood -- and I'm confident Paul Baer understands why I'm asking better than I do. You're both so articulate I get lost, though.

    Am I asking if surprises are likely hidden by the "implicit assumption" Gavin describes, as James points out, above?

    "... The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks but that is only starting to be done in practice. Most discussions of the climate sensitivity in the literature implicitly assume that these are fixed."

    I understand that assumption has been needed until now for modeling -- and I wonder what risks of underestimating there are in making that assumption.

    Comment by Hank Roberts — 9 Apr 2006 @ 9:41 pm

  114. Is "Almost 30 years ago" too modest? My 1968 Encyclopedia Britannica says: Recent calculations indicate that a doubling of the carbon dioxide concentration in the atmosphere would cause an average rise in the earth's surface temperature of about 3.6 C (6.5F). (page 184 of volume 18, article on Pollution by Edward R Hermann, Assoc Prof of Environmental Health, North Western University)

    [Response: Is there a proper reference with that comment? - gavin]

    Comment by Ian K — 9 Apr 2006 @ 11:01 pm

  115. Ian's quoting the first line of your article. But estimates of the amount of warming go back to Arrhenius, as single numbers. This is about figuring an over-or-under likelihood around such a number.

    Comment by Hank Roberts — 10 Apr 2006 @ 11:05 am

  116. Gavin, I suppose one would have to ask Prof Hermann (or his descendants) if one were to dig deeper. The article is really concerned with the direct and immediate effects of pollutants. A fuller extract from the part of the article directed to CO2 reads:
    Although the doubling time for fuel consumption in the world is currently 20 years, statistically significant evidence of a build up in atmospheric carbon dioxide concentrations has not been established, even though the burning of carbonaceous matter has produced great quantities of carbon dioxide. Measurements during the last century, however, indicate that worldwide they may be increasing. Concern has been expressed by some scientists about such an occurrence, since carbon dioxide is an excellent absorber of infrared radiant energy. Recent calculations indicate that a doubling of the carbon dioxide concentration in the atmosphere would cause an average rise in the earth's surface temperature of about 3.6 C (6.5F). A temperature shift of this magnitude would have far-reaching hydrological and meteorological effects: the polar ice masses would be reduced and the ocean levels would rise. Although the carbon dioxide theory has plausibly explained the climatic oscillations of geologic time, accompanied by the coming and going of glacial periods, the present annual production of carbon dioxide by fuel combustion is only enough to raise the global atmospheric concentration by 1 or 2 parts per million, approximately less than 0.0002% if not counterbalanced by plant photosynthesis. Since the carbon dioxide concentration of the atmosphere is about 300 parts per million (0.03%), the production over a few years would appear to be insignificant. Furthermore, the available sinks of marine and terrestrial plant life capable of reducing carbon dioxide seem entirely adequate to maintain the ecological balance for centuries unless other factors come into play. The problem of air pollution with carbon dioxide therefore does not seem to be alarmingly great. Quantitatively, however, knowledge is lacking.

    [Response: Interesting. At the time he wrote, though, there was enough evidence that CO2 levels were rising (from Mauna Loa published in Keeling (1960), Callendar (1958)) and that the ocean would not absorb most of the emissions (Revelle and Suess, 1957), so he should have known a little better. I think, though, that the suspiciously precise 3.6 deg C change is an error. As far as I can tell, the only credible estimate that had been made at that point was the one by Manabe and Wetherald (1967), and they had a sensitivity of ~2 deg C. If you convert that to Fahrenheit, you get 3.6 deg F, so I think it likely that there was a unit mix-up at some point. -gavin]

    Comment by Ian K — 10 Apr 2006 @ 5:55 pm

  117. Hank (113),

    I think we can agree that any major surprises on the global scale, if they were going to happen, would be hidden by our analysis (and other similar attempts). However, in order to be "likely hidden", they'd have to be "likely" in the first place (at least under one plausible reading of your comment), which seems (very) unlikely to me :-)

    Hope that..um...makes things clearer?

    Comment by James Annan — 10 Apr 2006 @ 9:42 pm

  118. I understand the problem, at least superficially (wry grin). I'm still hunting for the journal publication detailing the Japanese deep-sea cores drilled in 2001 (mentioned in #104 as indicating many sudden warming events not otherwise known). I can't expect you to consider such events likely if the work hasn't been published, eh?

    Comment by Hank Roberts — 10 Apr 2006 @ 9:58 pm

  119. James, Read "Abrupt Climate Change - Inevitable Surprises" by the Committee on Abrupt Climate Change, NAS, 2002 at http://www.nap.edu/catalog/10136.html

    We are not going to get any warning of abrupt climate change. If we did, it would not be abrupt! The Permian-Triassic (P-T) mass extinction, the Paleocene-Eocene Thermal Maximum (PETM) and its minor extinction, and the end of the Younger Dryas may all have been scary events, but they did happen. There are some unconvincing ideas about why the Younger Dryas began, but none for why it ended, nor can the rapid warmings which followed the other Dansgaard-Oeschger events be explained. In other words, all the evidence is that when global temperatures rise, they do so abruptly, not smoothly as one would expect.

    Your application of Bayesian logic to PDFs (probability distribution functions) is really just, in effect, a matter of averaging averages. As you admit, it will hide the little evidence there is for abrupt change. Can't you see that the complacency which this breeds is extremely dangerous?

    Cheers, Alastair.

    Comment by Alastair McDonald — 11 Apr 2006 @ 4:24 am

  120. Thank you for your response, Gavin. A mix-up in the units may well be the reason for the apparent prescience as to sensitivity. Alternatively, it could be that this entry was actually written even earlier than 1967 (which might also give some excuse for the author's ignorance of evidence for atmospheric buildup of CO2) and therefore may appear in earlier copies of the Encyclopedia Britannica. Hey, has anyone out there got a copy of EB dated between 1963 and 1967, say, who could check the entry on *Pollution*?

    Comment by Ian K — 12 Apr 2006 @ 3:48 pm

  121. Re #89, Stephen Berg's statement that 'American per capita GHG emissions has increased 13% since 1990' is incorrect. US per capita emissions have DECREASED slightly since 1990. The 13% figure relates to the TOTAL increase.

    Re 120, I have a 1963 edition of the EB and there's no entry on 'Pollution.'

    Comment by Ian Castles — 12 Apr 2006 @ 5:54 pm

  122. Thanks Ian. How times have changed since 1963! I wonder when Britannica first considered pollution an issue. Was it previously subsumed under another heading, perhaps?

    Comment by Ian K — 13 Apr 2006 @ 2:44 am

  123. Re #122. Ian K, the Index volume of the 1963 EB has three entries under 'pollution'. The first is 'Pollution (Hinduism)', and refers the reader to the article on 'Caste'. The next entry says 'Pollution: see Refuse disposal; Sewage disposal; Water supply and purification.' And the last of the entries says 'Pollution, Air: see Air: Pollution', which in turn lists references to 'cancer', 'legislation', 'refuse disposal', 'smog' and 'ventilation.'

    It's of interest that there is an entry in the 1963 EB for GREENHOUSE, which is devoted entirely to 'structures used to grow plants'. The next index entry is 'Greenhouse effect (astron.)', and refers the reader to articles on 'Mars' and 'Soldering'. The article on Mars refers to a 'greenhouse effect' which 'is produced by the blanketing effect of water vapour and carbon dioxide in the Martian atmosphere, and is consistent with the theory that the surface of the planet is covered with dust.' This effect was identified by John Strong and William M Sinton from radiometric measures with the 200-in. telescope at Palomar in 1954. I can't find any reference to 'greenhouse effect' in the article on soldering, and so far as I can see there is no reference anywhere in EB 1963 to greenhouse gases or to a greenhouse effect with reference to planet Earth.

    Comment by Ian Castles — 13 Apr 2006 @ 4:07 am

  124. Thanks again, Ian: it's good to think that these ancient tomes of ours have some uses. It goes to show that we live in a very different world now. Sorry to lead you into an historical cul de sac.

    Comment by Ian K — 13 Apr 2006 @ 4:02 pm

  125. Today in Nature, apparent corroboration from the proxy record:
    Climate sensitivity constrained by temperature reconstructions over the past seven centuries
    Gabriele C. Hegerl, Thomas J. Crowley, William T. Hyde and David J. Frame

    The magnitude and impact of future global warming depends on the sensitivity of the climate system to changes in greenhouse gas concentrations. The commonly accepted range for the equilibrium global mean temperature change in response to a doubling of the atmospheric carbon dioxide concentration [1], termed climate sensitivity, is 1.5-4.5 K (ref. 2). A number of observational studies [3-10], however, find a substantial probability of significantly higher sensitivities, yielding upper limits on climate sensitivity of 7.7 K to above 9 K (refs 3-8). Here we demonstrate that such observational estimates of climate sensitivity can be tightened if reconstructions of Northern Hemisphere temperature over the past several centuries are considered. We use large-ensemble energy balance modelling and simulate the temperature response to past solar, volcanic and greenhouse gas forcing to determine which climate sensitivities yield simulations that are in agreement with proxy reconstructions. After accounting for the uncertainty in reconstructions and estimates of past external forcing, we find an independent estimate of climate sensitivity that is very similar to those from instrumental data. If the latter are combined with the result from all proxy reconstructions, then the 5-95 per cent range shrinks to 1.5-6.2 K, thus substantially reducing the probability of very high climate sensitivity.

    Looking forward to RC comments.

    [Response: You might well find http://julesandjames.blogspot.com/2006/04/hegerl-et-al-on-climate-sensitivity.html interesting - William]

    Comment by Tom Fiddaman — 20 Apr 2006 @ 4:59 pm
