Open Mind

Breaking Records

June 26, 2009 · 149 Comments

According to temperature data from GISS, the hottest year on record is 2005, but according to data from HadCRU (the HadCRUT3v data set), the hottest year is 1998. You might wonder whether there’s any significance to the fact that, eleven years later, the HadCRU data set hasn’t yet set a new record. HadCRU data shows a much stronger influence from the very strong 1998 el Nino than does GISS data; hence the HadCRU 1998 record is considerably more extreme than the GISS 1998 record (it was the record at the time). How long should we expect it to take to break a record anyway? How long does a record have to remain unbroken before we have statistically significant evidence that global warming might have peaked in 1998?


The gory mathematical details are given at the end of this post. But for the HadCRUT3v data set, the 1998 record was a whopping 2.6 standard deviations above the trend line. That’s a lot! Already we should expect it to take a while to break that record.

Using the formulae outlined at the end of this post, we can compute the probability that the record won’t be broken until any later year n, given a steady warming rate of 0.017 deg.C/yr. The probability is shown in the left-hand graph, with the “Survival function” (not the actual survival function, but the probability of the record not being broken until year n or later) shown in the right-hand graph:

[Figure cru98: probability of the HadCRU 1998 record first being broken in year n (left) and of the record surviving to year n or later (right)]

We see that the most likely single year in which to break the record is year 10 (2008), although there’s still considerable probability that the record will last longer than that. In fact, there’s a 6.9% chance the record will last 14 years — until 2012 — even assuming, as we have done, that global temperature is a steady increase plus random noise. Hence the “95% confidence limit” (the standard in scientific research) is 14 years; only if the record lasts beyond 2012 do we have statistically significant evidence of any change in the global warming pattern.
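
For readers who want to check numbers like these themselves, here is a minimal Monte Carlo sketch of the same setup: a steady 0.017 deg.C/yr trend plus normally distributed white noise, with a year-zero record 2.6 standard deviations above the trend line. The noise standard deviation used here (0.1 deg.C) is an assumed round number, so the output is illustrative; the exact figures quoted above depend on the value actually estimated from the HadCRUT3v residuals.

    import numpy as np

    rng = np.random.default_rng(0)

    beta = 0.017          # trend slope, deg.C per year (as in the post)
    sigma = 0.10          # noise standard deviation, deg.C (assumed for illustration)
    eps0 = 2.6 * sigma    # year-zero record: 2.6 standard deviations above the trend line
    n_sims = 50_000       # number of simulated "histories"
    max_years = 60        # stop looking after this many years; the record almost always falls sooner

    waits = np.full(n_sims, max_years + 1)   # sentinel: record not broken within max_years
    for i in range(n_sims):
        for n in range(1, max_years + 1):
            # Year n sits beta*n higher on the trend line; add fresh noise and see
            # whether it tops the year-zero record value.
            if beta * n + rng.normal(0.0, sigma) > eps0:
                waits[i] = n
                break

    print("most common year for the record to fall:", np.bincount(waits).argmax())
    print("fraction of runs where the record survives 11 or more years:", (waits >= 11).mean())

With these assumed values the simulation agrees with the general picture above: a record this far above the trend typically stands for something like a decade, and an 11-year wait is entirely unremarkable.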

The HadCRU record lasts so long because temperature in 1998 was so far above the trend line. What about GISS? In this case, the 1998 value is only 2.2 standard deviations above the trend, so it’s easier to break the record. Still, the 1998 record shouldn’t last beyond 2010; it should be broken by then. And in fact the GISS 1998 record WAS broken, in 2005. It was also tied in 2007.

[Figure giss98: the same probabilities for the GISS 1998 record]

The new GISS record is year 2005, but that’s only 1.2 standard deviations above the trend, so it shouldn’t take as long to break. In fact it shouldn’t last beyond 7 years, so if we don’t break it by 2012, only THEN should we wonder why the record hasn’t been exceeded. Of course, that assumes we don’t have some unforeseen event like a massive volcanic eruption, which cools the planet and alters the underlying trend.

[Figure giss05: the same probabilities for the GISS 2005 record]

Lots of denialists — lots of them — use the 1998 record in the HadCRU data to claim that “global warming stopped in 1998” or “the globe has cooled since 1998.” Lots of other analyses show how foolish such claims are, but this particular one shows with crystal clarity: the fact that HadCRU data hasn’t yet exceeded its 1998 value is nothing more than what is to be expected. Anyone who tells you different is selling something.

But, a cleverly crafted yet fundamentally flawed “sales pitch” is all they’ve got.

Probability of breaking the record in year n

Let’s suppose that annual average global temperature is the combination of a steady, linear trend at a rate of 0.017 deg.C/yr, and normally distributed white noise. This is actually a pretty good approximation; we know the noise isn’t white noise but for annual averages it is at least approximately so, and in fact the noise approximately follows the normal distribution. Then our simple model of annual average temperature is

x_t = \alpha + \beta t + \varepsilon_t,

where \alpha is the intercept of the trend line, \beta is its slope (about 0.017 deg.C/yr), and \varepsilon_t is random (normally distributed white) noise with mean value zero and standard deviation \sigma.

Now suppose that in some particular year, let’s call it “year zero,” the noise term \varepsilon is big enough to set a new record for global annual average temperature. Since t=0, the record temperature is

x_0 = \alpha + \varepsilon_0.

What’s the chance of breaking the record the following year? To break the record we require x_1 > x_0, or

x_1 = \alpha + \beta + \varepsilon_1 > \alpha + \varepsilon_0 = x_0.

This is the same as requiring

\varepsilon_1 > \varepsilon_0 - \beta.

If the noise follows the probability density function f(\varepsilon), with cumulative distribution function F(\varepsilon), then that probability is just

Probability = 1 - F(\varepsilon_0-\beta).

We can even use the normal cdf \Phi(z) to compute the noise cdf as

F(\varepsilon) = \Phi(\varepsilon/\sigma).
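
As a quick numerical illustration (with an assumed sigma of 0.1 deg.C and a record excursion of 2.6 standard deviations above the trend, roughly the HadCRUT3v 1998 case described above), the chance of topping such a record the very next year is small:

    from scipy.stats import norm

    beta, sigma = 0.017, 0.10     # trend slope from the post; noise standard deviation assumed
    eps0 = 2.6 * sigma            # record excursion above the trend line

    # Probability of breaking the record in year 1:  1 - F(eps0 - beta) = 1 - Phi((eps0 - beta)/sigma)
    p1 = 1.0 - norm.cdf((eps0 - beta) / sigma)
    print(p1)                     # well under one percent for these assumed values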

What’s the chance we don’t break the record until the 2nd year after it’s set? For that to happen, first we have to NOT break the record the following year, which has probability

Probability(not 1st year) = F(\varepsilon_0 - \beta).

Then we have to break the record the 2nd-following year. This means x_2 > x_0, or

x_2 = \alpha + 2 \beta + \varepsilon_2 > \alpha + \varepsilon_0 = x_0.

This is the same as

\varepsilon_2 > \varepsilon_0 - 2 \beta,

and the probability of that happening is

 1 - F(\varepsilon_0 - 2 \beta).

Hence the probability of not breaking the record in year 1 and breaking it in year 2 is the product of these probabilities, namely

Probability(year 2) = F(\varepsilon_0 - \beta) [ 1 - F(\varepsilon_0 - 2 \beta) ].

By similar reasoning, the chance we won’t break the record until year 3 is the probability of NOT breaking it in year 1, times the probability of NOT breaking it in year 2, times the probability of breaking it in year 3, which is

Probability (year 3) = F(\varepsilon_0 - \beta) F(\varepsilon_0 - 2 \beta) [ 1 - F(\varepsilon_0 - 3 \beta) ].

You can probably see a pattern developing; the chance that we won’t break the record until year n is

Probability (year n) = F(\varepsilon_0 - \beta) F(\varepsilon_0 - 2\beta) F(\varepsilon_0 - 3\beta) ... F(\varepsilon_0 - (n-1)\beta) [1 - F(\varepsilon_0 - n\beta)].
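
For anyone who wants to play with this formula, here is a short sketch of it in code. The slope is the 0.017 deg.C/yr used above; the noise standard deviation (0.1 deg.C) and the size of the record excursion (2.6 sigma, the HadCRUT3v 1998 case) are assumed illustrative values, so the printed probabilities will only approximate the graphs in the post.

    from scipy.stats import norm

    beta = 0.017                     # trend slope, deg.C per year
    sigma = 0.10                     # noise standard deviation, deg.C (assumed)
    eps0 = 2.6 * sigma               # size of the record excursion above the trend line

    def F(x):
        """Noise cdf: F(eps) = Phi(eps / sigma)."""
        return norm.cdf(x / sigma)

    def prob_first_broken_in_year(n):
        """P(record first broken in year n) = F(eps0 - beta) ... F(eps0 - (n-1) beta) [1 - F(eps0 - n beta)]."""
        p = 1.0
        for k in range(1, n):
            p *= F(eps0 - k * beta)              # record survives year k
        return p * (1.0 - F(eps0 - n * beta))    # then falls in year n

    def prob_survives_n_years(n):
        """P(record still standing after year n), the 'survival function' plotted in the post."""
        p = 1.0
        for k in range(1, n + 1):
            p *= F(eps0 - k * beta)
        return p

    for n in range(1, 21):
        print(n, round(prob_first_broken_in_year(n), 3), round(prob_survives_n_years(n), 3))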

Categories: Global Warming

149 responses so far

  • Deep Climate // June 26, 2009 at 9:20 pm | Reply

    Great post and very understandable, thanks. I hope Edward Wegman reads it.

    In the last equation for probability (year n), is there any simplification possible of the product of F1 through Fn-1 that represents the cumulative probability of not breaking the record for years 1 through n-1?

  • Timothy Chase // June 27, 2009 at 12:42 am | Reply

    It has been a while for me, but what I would expect is a modified form of the law of exponential decay. With pure exponential decay, the survival probability would be expressed in terms of the probability of “surviving” the initial year (p): the probability of surviving through year n is p^n, and likewise the probability of “decaying” in a particular year n is (p^(n-1))(1-p). Exponential decay would likewise involve white noise, as there would be no correlation between a given year and the succeeding year.

    So as I see it, the difference between that pure exponential decay and the formula you have given is the n=1,2,3,4,… βs where β is simply the slope of the trendline and represents the constant march year after year along that trendline.

    And looking back over your explanation it appears that this is exactly what you have done. Is that it? My apologies. It has been a while for me.

    [Response: Essentially yes, bearing in mind that the \beta term affects the argument of the cumulative distribution function.]

  • Bob Tisdale // June 27, 2009 at 1:14 am | Reply

    Tamino: FYI, the Hadley Centre changed SST data sources in 1998. The following quote is from the Hadley Centre:
    http://hadobs.metoffice.com/hadsst2/

    “Brief description of the data
    “The SST data are taken from the International Comprehensive Ocean-Atmosphere Data Set, ICOADS, from 1850 to 1997 and from the NCEP-GTS from 1998 to the present.”

    And now a quote from ICOADS:
    http://icoads.noaa.gov/products.html

    “ICOADS Data
    “The total period of record is currently 1784-May 2007 (Release 2.4), such that the observations and products are drawn from two separate archives (Project Status). ICOADS is supplemented by NCEP Real-time data (1991-date; limited products, NOT FULLY CONSISTENT WITH ICOADS).” [Emphasis added.]

    This change in data suppliers created an upward step change in their data with respect to the SST datasets that did not swap suppliers at that time (ERSST.v2, ERSST.v3b, OI.v2).
    http://i34.tinypic.com/2zswhac.jpg
    http://i33.tinypic.com/2j11y6r.jpg
    http://i37.tinypic.com/ighm9s.jpg

    And GISS has used OI.v2 SST data since December 1981.

    Regards

  • michel lecar // June 27, 2009 at 8:50 am | Reply

    Problem with HADCRU is, they will not reveal either their raw data or their algorithms.

    So I can’t really see the sense of debating their stuff. Still less using it in any public policy debate. It is not reproducible, its not subject to external scrutiny. It could be right or wrong, who knows? Its not science. At the moment it is at the level of ‘trust me, I’m a climate science expert’. No, if you want to be taken seriously, show us the workings.

    Unlike GISS, to Hansen’s great credit. There may be things wrong with the GISS algorithms and raw data, but GISS is setting an example of reproducibility and verifiability which HADCRU needs to follow. If they don’t, get their stuff out of IPCC and get it out of all policy discussions.

  • Barton Paul Levenson // June 27, 2009 at 12:35 pm | Reply

    Tamino,

    could you express an equation like

    e1 > e0 – beta

    as

    e1 > (e0 – beta)

    to make it clearer for us computer science types? Remember that in many programming languages,

    (e1 > e0) – beta

    would be evaluated differently from

    e1 > (e0 – beta)

    and might give a different answer depending on the implementation. Remove all ambiguity!

    [Response: I sympathize with your dilemma, but including the parentheses would be bad form mathematically although clearer for programmers. It's not incorrect, but mathematicians would wonder why I included the unnecessary parentheses. Nothing personal, really! -- but I think I'll conform to standard mathematical style.]

  • george // June 27, 2009 at 3:03 pm | Reply

    I wonder about the value of the whole “record” thing.

    As we saw recently when NASA’s small error (and adjustment) “changed” the rankings for the continental US (though not with statistical significance), some people actually misuse/abuse the rankings.

    Fox news and others were reporting that 1934 had suddenly become the “hottest year on record” with the implication that it was for the entire globe, when in fact the result was for the continental US AND the difference between 1998 and 1934 was STILL not significant (neither before nor after the adjustment).

    Also, as pointed out above, the fact that the hadCRUT temp for 1998 is 2.6 std deviations above the trend may not be entirely due to nature (unless you consider the data set switch or possible errors in the calculation of the global temp “natural”).

    So, scientifically speaking, it’s a little hard to gage what a “record” temperature actually means.

    Unfortunately, the general (unscientific) public has no such problem assessing records. A record is a record and only steroids can change that.

    Perhaps worst of all, if you set up the expectation that the record should be broken within a certain time period, with the idea that if it is NOT broken then global warming becomes suspect, I think you may be asking for trouble: if the record does not fall and the hadcrut 1998 temp was actually in error, it will be very hard to convince people that the fact that the record has not been broken in 14 years (or whatever) is really meaningless.

    I think this may be another case where the public gets confused by a descriptive tool that is less than optimal and may actually be counterproductive.

    I think statements like the following are a better indicator (to both the public and to scientists) of what is happening than the “record.”

    “The ten warmest years [of the instrumental record since 1880] all occur within the 12-year period 1997-2008.” (NASA GISS)

  • MikeN // June 27, 2009 at 3:14 pm | Reply

    >the fact that HadCRU data hasn’t yet exceeded its 1998 value is nothing more than what is to be expected.

    That doesn’t look true. 95% confidence isn’t the same as 50% confidence. ‘What’s expected’ is the 50% confidence level.

    [Response: Your statistical naivete is showing.]

  • dhogaza // June 27, 2009 at 3:35 pm | Reply

    BPL, as someone who made his living writing high-end compilers for a variety of languages and processors during my 20s and early 30s, offhand I can’t think of any mainstream language which gives comparison operators like “>” equal or higher precedence than arithmetic operators like “-”.

  • Timothy Chase // June 27, 2009 at 4:29 pm | Reply

    dhogaza wrote:

    … I can’t think of any mainstream language which gives comparison operators like “>” equal or higher precedence than arithmetic operators like “-”.

    Another point: the (a>b) will be a boolean, and as such its treatment from one computer language to another will be ambiguous since true will be 1 in some languages but -1 in others — that is assuming the language isn’t strongly typed to begin with, in which case subtracting a numerical value from a boolean would be strictly verboten anyway.

  • Timothy Chase // June 27, 2009 at 4:51 pm | Reply

    RE a>b-c

    Anyway I am glad this came up.

    For five years I was doing VB6, and although (a>b) was looked down on a bit due to its ambiguity it was a nice shorthand that simplified code. So I could definitely see where BPL was coming from. At the same time I had vaguely noticed the notational convention employed in math.

    Nice to think about — as it involved some connections.

  • MikeN // June 27, 2009 at 5:07 pm | Reply

    >Response: Your statistical naivete is showing.]

    Oh, you want to use expected value instead? Looking at your chart, that still doesn’t give you a number higher than 10.

    [Response: The expected value in statistics isn't what we "expect" to get, it's the average value of repeated identical experiments as the number of repetitions grows unboundedly. It's not even the single most likely value (that's the mode). And the likelihood of getting that value is often quite small -- including in this case, for which the most likely value has less than 13% probability of occurring. In fact for a continuous (rather than discrete) random variable, the probability of getting exactly the "expected value" is equal to zero.

    As for the idea that what we "expect" is the 50% confidence limits, that's utter nonsense -- we expect the result to be outside 50% confidence limits as often as it's within them.

    What we "expect" is that most of the time (95% of the time being the de facto scientific standard) it will be within a given set of confidence limits (95% confidence limits). Only when that fails to happen do we have any statistical evidence that our hypothesis is mistaken. Even that's not proof; we "expect" to be outside the 95% confidence limits for no other reason than random fluctuation, 5% of the time.

    The level of naivete you've exhibited about statistics is astounding, but hardly surprising. It's the obstinacy with which you cling to your ignorance that's truly embarrassing. If you simply admit it, we'll respect your wisdom; if not...]
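
    A small numerical illustration of the mode/expected-value distinction, using the waiting-time distribution derived at the end of the post (the 0.017 deg.C/yr slope is from the post; sigma = 0.1 deg.C is assumed here, so the numbers are illustrative rather than the post’s exact figures):

    from scipy.stats import norm

    beta, sigma = 0.017, 0.10                 # slope from the post; sigma assumed
    eps0 = 2.6 * sigma                        # a HadCRU-1998-sized record excursion

    def p_first_broken(n):                    # P(record first falls in year n), from the post's formula
        p = 1.0
        for k in range(1, n):
            p *= norm.cdf((eps0 - k * beta) / sigma)
        return p * (1.0 - norm.cdf((eps0 - n * beta) / sigma))

    probs = [p_first_broken(n) for n in range(1, 101)]        # tail beyond 100 years is negligible
    mode = 1 + probs.index(max(probs))                        # single most likely waiting time
    mean = sum(n * p for n, p in enumerate(probs, start=1))   # the "expected value"
    print(mode, round(max(probs), 3), round(mean, 1))         # mode, its (small) probability, mean

    With these assumed values the most likely year and the expected value come out close but not equal, and the probability of landing on the single most likely year is only on the order of ten percent.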

  • george // June 27, 2009 at 6:45 pm | Reply

    There are broken records and then there are broken records.

    The former might mean something but the latter almost never do.

  • michel lecar // June 27, 2009 at 7:22 pm | Reply

    But you are not answering the question.

    If the HADCRU originating data and algorithm has not been revealed, how do we know that the various trends and levels are not an artifact of the way its been compiled? So, why do we think any tests of significance are testing movements in temperature, as opposed to movements in the index?

    What you are showing is that there are significant movements in the HADCRU index. How do you know this corresponds to movements in temperature?

    One is sure they have done their best. But unless we can verify what that best amounted to, its a waste of time thinking much about what their work, taken at face value, shows.

    [Response: You're just flapping your lips in an attempt to smear HadCRU. The close match of HadCRU, GISS, NCDC, and other data sets is plenty of confirmation that they're on the right track, and the results of HadCRU data are independently recoverable from other data sets. Do yourself a favor and give it up.]

  • Deep Climate // June 27, 2009 at 8:20 pm | Reply

    But let’s not think about:
    x + 2 = 2x

    (x + 2) == (2 * x)

    =:>)

  • Hank Roberts // June 27, 2009 at 8:46 pm | Reply

    MikeB, Tamino is sincere about admitting ignorance and is a very good teacher.

    Don’t be thin-skinned; he’s much less caustic than either of my statistics teachers tended to be with me; it goes with the territory.

    http://catb.org/~esr/faqs/smart-questions.html#rtfm

    —-excerpt—–
    “… Often, the person telling you to do a search … thinks (a) the information you need is easy to find, and (b) you will learn more if you seek out the information than if you have it spoon-fed to you.

    You shouldn’t be offended by this; by hacker standards, your respondent is showing you a rough kind of respect simply by not ignoring you. You should instead be thankful for this grandmotherly kindness…. the direct, cut-through-the-bullshit communications style that is natural to people who are more concerned about solving problems than making others feel warm and fuzzy…. Get over it. It’s normal. In fact, it’s healthy and appropriate.

    Community standards do not maintain themselves: They’re maintained by people actively applying them, visibly, in public. …
    Remember: When that hacker tells you that you’ve screwed up, and (no matter how gruffly) tells you not to do it again, he’s acting out of concern for (1) you and (2) his community. It would be much easier for him to ignore you …
    —-end excerpt—-

  • Riccardo // June 27, 2009 at 10:04 pm | Reply

    Tamino,
    in the claim that 1998 is 2.6 standard deviations (SD) above the trend line, the SD is calculated from the stated error of the measurements or from the residuals in a given period of time?

    [Response: You can't use the stated measurement error because that includes only measurement error, not the natural variation which we're really interested in. I estimated the deviation in two ways: first, by fitting a lowess smooth to the entire data set and basing it on the residuals from that fit, and second, by fitting a line to the 1975-present data and basing it on the residuals from that fit. Both estimates put 1998 2.6 standard deviations above the trend.]

  • Lazar // June 28, 2009 at 12:04 am | Reply

    michel.,

    they will not reveal either their raw data or their algorithms

    the algorithms are described in relevant papers, the main one for hadcrut3 is available free at the hadley or cru websites, there’s a list of surface stations used in crutem3 at cru and their data can be obtained free from ghcn or the relevant national weather service, and finally sst measurements are available free from noaa icoads

    Its not science

    … and you’re in a better position to judge that than the referees and the probably hundreds of scientists who use hadcrut?… and you think people here will believe you?
    it takes helluvalota work to ask the right questions… let alone make substantial criticism…
    have you read those references re climate sensitivity?

  • Ray Ladbury // June 28, 2009 at 12:08 am | Reply

    Michel, an old saying: A man with one watch always knows what time it is–even if the watch is broken. A man with two watches is never sure, but at least he’ll know if one of them is broken. HADCRU is not the only watch we have.

  • Ray Ladbury // June 28, 2009 at 12:12 am | Reply

    Mike N.,
    First, confidence and probability are different entities. Second:
    mean=expected value=1st moment
    mode=most probable value
    median=point where the cumulative probability is 0.5

    Dude, go learn some probability. You’ll get a lot more out of Tamino’s posts.

  • dhogaza // June 28, 2009 at 12:24 am | Reply

    Dude, go learn some probability. You’ll get a lot more out of Tamino’s posts.

    Let’s have him over for poker, first …

  • Glen Raphael // June 28, 2009 at 3:31 am | Reply

    So: when HadCRU still hasn’t exceeded 1998’s level as of the end of 2012, *then* you will be convinced there’s something wrong with your model? Good to know!

    (Actually, if Bob is correct that 1998 was bumped to a higher level due to a one-time change in data sources, then the jump wasn’t really 2.6 standard deviations after all. In which case the numbers are heavily padded in your favor. Still, easily falsifiable short-term predictions are pretty rare in the climate debate, and this one seems pretty likely to bite you, so good on you for making it!)

    [Response: You're mistaken. If the 1998 figure is too high, there's still the same probability of exceeding the given *numerical* value whether it's a genuine temperature record or not -- except that the inflated 1998 value makes the estimated trend rate too high, so it causes an overestimate of the likelihood of exceeding the given numerical value. Hence the numbers are "padded" *against* breaking the record.

    As for "pretty likely to bite me," you offer exactly the evidence for that I expected: none.

    If the HadCRU data don't exceed the 1998 value by 2012, that's evidence but not proof of a difference between model and reality. If there's a known cause for such an observation (Pinatubo-scale or larger volcano next month), all bets are off. The GISS data have ALREADY exceeded the 1998 record.

    It's generally only denialists who are desperately seeking some single event or measure that justifies saying "global warming is wrong." Sane and honest climate researchers acknowledge that climate is a lot more complicated than that, we have to aggregate all the evidence; it's not a single record-setting year, it's the combination of hundreds, even thousands, of evidences that combine to make an overwhelming case for global warming. Perhaps that's just too subtle for you; it certainly is for Bob Carter.

    The POINT of this post is that not only is the 1998 HadCRU record not "proof" against global warming, it isn't even evidence.]

  • michel lecar // June 28, 2009 at 6:46 am | Reply

    Simple question, and maybe I am wrong about this, if so, I’ll own up to it. Where exactly does one find the data and the algorithm in a form that one can run it, and generate the series?

    Steven Haxby quotes the following reply from Defra on this subject

    Although I accept that you are understandably concerned over this issue relating to scientific practice, the CRU is an independent organisation which receives no DECC funding for developing the CRU land dataset and therefore DECC does not have any proprietary rights to this data. It is up to Professor Jones, as the dataset’s owner, to release this data. So far, in response to various freedom of information requests, he has released only the names of the meteorological stations used to compile his dataset, but the station data for many of these (though admittedly not all) can in fact be obtained at the Goddard Institute for Space Studies (GISS) website at http://data.giss.nasa.gov/gistemp/station_data/ [my emphasis]

    So it really does not sound, does it, as though Defra thinks all the data is available in a form which will let one first verify that the algorithm applied to it will generate the time series, and then after that move to asking whether the algorithm is appropriate? Is this wrong, and is there a full data set and a code listing someplace where we can get it and look at it?

    The reply (by Andrew Glyn) goes on to say:

    the HadCRU global temperature graph was one of four that were cited by the Intergovernmental Panel on Climate Change (IPCC) in their 2007 Assessment as evidence of the warming that has occurred since the end of the nineteenth century. One of these graphs was produced by GISS who do make available on their website all the station names and the associated temperature data along with their calculation computer code. The graph produced by GISS is very similar to HadCRU (and the other two independently produced graphs) but is calculated by a different method to that developed by Professor Jones. The close similarity of these graphs indicates that there are no concerns over the integrity of the HadCRU global temperature graph.

    Which is actually not far from the suggestion I made, if you look at it in a different light. It amounts to saying don’t use the stuff. Use other series where the underpinnings have been placed in the public domain. Because this one adds nothing and has to be verified by referring to them.

    But that of course is not the way its publicized.

  • Ray Ladbury // June 28, 2009 at 11:03 am | Reply

    I’ll bring chips. Mike N., bring lots of money.

  • Lazar // June 28, 2009 at 1:38 pm | Reply

    michel,

    in a form that one can run it

    … what do you want the code for michel… you ain’t gonna run it… it’s a talking point… have you read the references re climate sensitivity?… you can read the algorithm descriptions in the papers and generate your own code… if you get different results you can let us know…

    it really does not sound, does it, as though Defra thinks all the data is available

    … no michel… they said available at giss… giss (ghcn) is not the only source…

    It amounts to saying don’t use the stuff

    nope … allaying concerns and admitting their validity are two different things…

  • george // June 28, 2009 at 1:52 pm | Reply

    If the standard deviation of the residuals over the last few decades is about 0.1C, aren’t all the global temperature anomaly values from that period that are within about 0.1C of one another essentially equal? (from the standpoint of making comparisons between different years for the purpose of assessing what is happening to the climate.)

    After all, if the global temperature over a fairly long period (say 30 years) was found to merely fluctuate about a mean value with a standard deviation of 0.1C, this would mean that there was effectively no trend (up or down), i.e., no real change in the global temperature that was meaningful from the standpoint of climate.

    [Response: Well said.]

    Along similar lines, wouldn’t a la nina year that had the very same temperature (within measurement uncertainty) as an el nino year really be the more significant of the two with regard to climate? (because the temperature sans the noise would be greater in the case of the la nina year )

    Unless I am mistaken, Hansen et al refer to this very issue in a 2001 paper:

    There are inherent uncertainties in the long-term temperature change at least of the order of 0.1°C for both the U.S. mean and the global mean. Nevertheless, it is clear that the post-1930s cooling was much larger in the United States than in the global mean. The U.S. mean temperature has now reached a level comparable to that of the 1930s, while the global temperature is now far above the levels earlier in the century. The successive periods of global warming (1900-1940), cooling (1940-1965), and warming (1965-2000) in the 20th century show distinctive patterns of temperature change suggestive of roles for both climate forcings and dynamical variability. The U.S. was warm in 2000 but cooler than the warmest years in the 1930s and 1990s. Global temperature was moderately high in 2000 despite a lingering La Niña in the Pacific Ocean.

    Presumably, the 0.1C uncertainty that Hansen refers to is not due to “measurement error”, but instead to natural variability.

    That paper seems to take a slightly different approach than the current ranking system for representing the relative magnitude of the temperatures in different years using qualifiers like “comparable”, “far above”, “moderately high” to compare global temperatures from different time periods instead of actually ranking the years.

    Finally, for estimating how long it will take to break a certain record (eg, the 1998 hadcrut record), I would note that one can get a pretty good estimate (not far off from the one obtained with the fancy math) with a fairly crude method. I know it will probably make most mathematicians cringe, but here goes:

    Consider a temperature that lies 2.6 std deviations — or about 0.26C (assuming 0.1C is the std dev of residuals) — above the trend line. Assuming that the temp drops back down about the same amount after el nino passes, that the trend continues upward at 0.017C per year, and that there are no more subsequent el ninos or significant volcanic activity, it would take about 15 years for the temperature to just equal the value it had when the record was set.

    For the case of a record lying 2.2 std deviations above the trend line with the same assumptions, this method gives 13 years to just equal the record, and for a record lying 1.2 std deviations above the trend line, about 7 years. So the crude method’s not half bad.

    Mathematicians in the crowd may now hurl eggs and tomatoes (and insults, of course)

    [Response: The reasoning is essentially sound; no cringing here.]
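
    The crude estimate above really is one line of arithmetic: the time for the trend alone to climb k standard deviations is roughly k*sigma/beta, with sigma = 0.1 deg.C taken as the assumed round number used in the comment:

    # Back-of-the-envelope waiting times: years for a 0.017 deg.C/yr trend to make up
    # a record excursion of k standard deviations (sigma = 0.1 deg.C assumed).
    beta, sigma = 0.017, 0.10
    for k in (2.6, 2.2, 1.2):
        print(k, round(k * sigma / beta, 1))   # roughly 15, 13 and 7 years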

  • Glen Raphael // June 28, 2009 at 2:42 pm | Reply

    As for “pretty likely to bite me,” you offer exactly the evidence for that I expected: none.

    I don’t actually need to provide evidence when I can just wait three years and say “aha!”. :-)

    But since you asked so nicely, I think making that 3-year prediction is a mistake – and would still think that even if I believed your model were accurate.

    Why? Because the chance that we’ll break the 1998 record in the next three years has to be a lot lower than 95% given that we haven’t broken it yet. Let’s take this into a less politically-charged domain: human life expectancy. If I’m a caucasian male living in the US, my life expectancy at birth was 75 years. But if I manage to live to age 70 my life expectancy at that point isn’t 5 years, it’s 15.

    Similarly, your chart calculates the “life expectancy” of the 1998 HadCRU record from its birth, not from the present day, 11 years in, with the latest temperatures below the trend. If you think it’s likely to get back to the 1998 level inside 3 years, you’re predicting a heck of a steep climb. Jumps of that sort do happen – one happened in 1998 – but they don’t happen very often.

    [Response: You're correct that a proper estimate of the probability of breaking the record must account for the fact that it hasn't happened yet. My point was NOT to predict when it'll be broken (perhaps I was unclear about that), but to show that an 11-year span of unbroken record for such a large excursion is nowhere near out of the ordinary.]

  • luminous beauty // June 28, 2009 at 2:43 pm | Reply

    Where exactly does one find the data and the algorithm in a form that one can run it, and generate the series?

    Surface station data

    The algorithms are in the supporting literature.

    Computer coding isn’t the algorithm. The algorithm is the math that is encoded in the program. If one knows the math, one can write code for it.

  • Glen Raphael // June 28, 2009 at 2:51 pm | Reply

    As for the effect of the “padding”, I accept your correction. (a fake jump would also increase the apparent standard deviation, but the effect on the linear trend would dominate that. So you were right; I was wrong.)

  • dhogaza // June 28, 2009 at 4:43 pm | Reply

    Similarly, your chart calculates the “life expectancy” of the 1998 HadCRU record from its birth, not from the present day, 11 years in, with the latest temperatures below the trend. If you think it’s likely to get back to the 1998 level inside 3 years, you’re predicting a heck of a steep climb. Jumps of that sort do happen – one happened in 1998 – but they don’t happen very often.

    True, not very often, but they have a name – El Niño. You’re suggesting that the fact that it’s been 11 years since we’ve had an amped-up El Niño event is evidence that we’re not likely to have one in the next three years.

    Some might argue that it’s evidence of the opposite, i.e. that we’re past due.

  • Michael hauber // June 28, 2009 at 11:50 pm | Reply

    Ah but what about the chance of a record next year if we get an el nino? I think there is a high chance of a decent el nino – as we are currently on the border of el nino conditions with a clear trend towards el nino.

    With a coolish start to this year, and a typical lag between ENSO and temperature we should probably see most of the heat from this event next year.

  • David B. Benson // June 29, 2009 at 12:32 am | Reply

    dhogaza // June 28, 2009 at 4:43 pm — Are El Nino events Poisson distributed? If so, all one can know is the average frequency of occurrence. Doesn’t matter how long it has been since the last one.

  • Steve Bloom // June 29, 2009 at 12:32 am | Reply

    IIRC Jim Hansen “promised” Obama that there would be an el nino-influenced record year by the end of his first term.

  • dhogaza // June 29, 2009 at 4:08 am | Reply

    Are El Nino events Poisson distributed? If so, all one can know is the average frequency of occurrence. Doesn’t matter how long it has been since the last one.

    I don’t know the answer to this question. Yet, I know – if I’m to believe the best data available – that this recent extended La Niña event is the warmest on record.

    So now that we’re entering El Niño conditions, it’s going to be warmer.

    Correct me if I’m wrong, but isn’t a Poisson distribution in some sense a characterization of random swings absent an underlying signal?

    If so, all one can know is the average frequency of occurrence. Doesn’t matter how long it has been since the last one.

    Again, correct me if I’m wrong, but that depends on there being no underlying trend, that the distribution is truly random.

    Well, I guess the simple answer, is that those who are qualified don’t believe that the Poisson distribution explains what we’re seeing today.

    We do, after all, have an established warming signal that is statistically significant.

    But I’m just a dumbshit BS Mathematics/compiler writer guy.

    Maybe you’re right, and climate science will fail on this as well as other criteria.

    Write it up … publish it … become famous.

  • Rattus Norvegicus // June 29, 2009 at 4:42 am | Reply

    FWIW, NCDC is predicting an ENSO+ condition by August. Therefore I predict a new record in 2010.

  • Timothy Chase // June 29, 2009 at 5:14 am | Reply

    David B. Benson asked:

    dhogaza // June 28, 2009 at 4:43 pm — Are El Nino events Poisson distributed? If so, all one can know is the average frequency of occurrence. Doesn’t matter how long it has been since the last one.

    I found the following which states that they are too regular to be Poisson:

    2.2 Estimating the Interval Distribution

    The next question about the timing of El Nino events concerns the distribution of the intervals between events. It is widely recognized that El Nino events occur at intervals too regular to be consistent with a Poisson process. For example, the largest gap between successive events in Table 1 is 8 years. If 50 events are uniformly distributed over an interval of 187 years, then the results of Fisher (1929) on the distribution of the largest gap under a uniform distribution of events show that the probability of the largest gap exceeding 8 years is approximately .998.

    Solow, A.R., 1995. An exploratory analysis of a record of El Niño events, 1800-1987. Journal of American Statistical Assoc., 90(429): 72-77.

    DePreSys (the approach by Hadley where they create an ensemble of runs initialized with real world data from consecutive days) also has an El Nino showing up in the next few years, which would seem to match up with Hansen’s view.

    There is also some talk about the North Pacific Gyre Oscillation having ~10 month lead time on ENSO. Not sure how much stock to put in it. However, a presentation on this lead received the 2008 Pices best presentation award handed out by the North Pacific Marine Science Organization. Top of the page here:

    http://www.pices.int/publications/presentations/PICES_17/Best_17/Best_2008.aspx

  • Matt Andrews // June 29, 2009 at 5:25 am | Reply

    I’ve read somewhere that GISS includes Arctic areas in its data, whereas HadCRU does not.

    If so, that would be entirely consistent with GISS having already broken the 1998 record, since the Arctic region has experienced much more rapid warming than anywhere else.

    If HadCRU does include Arctic region data, my apologies; I’ll need to go and find where that mistaken impression came from.

    And yes, looks like another El Nino is brewing.

  • Timothy Chase // June 29, 2009 at 6:18 am | Reply

    More on Predicting the Next El Niño…

    John Mashey wrote a post on August 14, 2007 at 6:24 am that included the following:

    http://aos.princeton.edu/WWWPUBLIC/gphlder/bams_predict200.pdf
    “How predictable is El Nino” says it isn’t.

    The paper is still there and this is a bit of an oversimplification — at least with respect to timing…

    From the conclusion:

    Thus far, attempts to forecast El Niño have not been very successful (Landsea and Knaff 2000, Barnston et al 1999). However, the factors that cause the irregularity of the Southern Oscillation – random atmospheric disturbances whose influence depends on the phase of the oscillation – are such that the predictability of specific El Niño events is inevitably limited. That is especially true of the intensity of El Niño. For example, the occurrence of an event in 1997 was predictable on the basis of information about the phase of the Southern Oscillation, but the amplitude of the event could not have been anticipated because it depended on the appearance of several wind bursts in rapid succession.

    How Predictable Is El Niño? (Nov 2002)
    A.V. Fedorov, S.L. Harper, S.G. Philander, B. Winter, and A. Wittenberg
    Atmospheric and Oceanic Sciences Program, Department of Geosciences, Princeton University
    Sayre Hall, P.O. Box CN710, Princeton, NJ 08544, USA
    http://aos.princeton.edu/WWWPUBLIC/gphlder/bams_predict200.pdf

  • Timothy Chase // June 29, 2009 at 7:01 am | Reply

    The abstract to a paper from 2008 — which seems to lend further support to DePreSys and its use of initialization:

    El Nino-Southern Oscillation (ENSO) is by far the most energetic, and at present also the most predictable, short-term fluctuation in the Earth’s climate system, though the limits of its predictability are still a subject of considerable debate. As a result of over two-decades of intensive observational, theoretical and modeling efforts, ENSO’s basic dynamics is now well understood and its prediction has become a routine practice at application centers all over the world. The predictability of ENSO largely stems from the ocean-atmosphere interaction in the tropical Pacific and the low-dimensional nature of this coupled system. Present ENSO forecast models, in spite of their vast differences in complexity, exhibit comparable predictive skills, which seem to have hit a plateau at moderate level. However, mounting evidence suggests that there is still room for improvement. In particular, better model initialization and data assimilation, better simulation of surface heat and freshwater fluxes, and better representation of the relevant processes outside of the tropical Pacific, could all lead to improved ENSO forecasts.

    Chen, D. and M. A. Cane, 2008: El Nino prediction and predictability. Journal of Computational Physics, 227(7): 3625-3640.

  • Timothy Chase // June 29, 2009 at 7:05 am | Reply

    A couple more papers worth checking out:

    141. Mu, M., W. Duan, and Bin Wang, 2007: Season-dependent dynamics of nonlinear optimal error growth and ENSO predictability in a theoretical model, J. Geophys. Res., 112, D10113.
    http://www.soest.hawaii.edu/MET/Faculty/bwang/bw/list.html

    Cheng, Y., Y. Tang, X. Zhou, P. Jackson, and D. Chen, 2009: Further Analysis of Singular Vector and ENSO Predictability from 1876-2003—Part I: Singular Vector and the Control Factors, Climate Dynamics.

  • michel lecar // June 29, 2009 at 7:06 am | Reply

    I still do not get it.

    We have some raw data, only some of which, according to DEFRA, is available. Then we have some computer code which allegedly implements an algorithm against all of this data. The code is not available, the algorithm is, in some journal articles.

    How are we supposed to verify that the code implements the algorithm correctly, and that when applied to the totality of the data it outputs the charts which the supplier furnishes?

    I don’t have a clue whether its legally right or wrong for this stuff to be kept out of the public domain, but I’m dead certain that if they want us to put money on it, they have to publish the raw data, the algorithm, and the code.

    Lets say I am running a company, it could be Enron or Lehman. I publish accounts. People, investors, express worries, and want to know what the raw data was. I explain to them that some though not all of the data is available to them if they get my Hong Kong filings. The principles of my accounts are covered in a couple of articles by my Controller in the Journal of Accounting. As to exactly how I prepared my accounts, in the light of these articles, that is commercially confidential.

    So its all OK, you can buy my stock for your pension fund. Come off it!

    Same thing with the Met Office. If Kirsty Wark on Newsnight was correct, they are refusing to reveal how they did their long range forecast. But this forecast is 30 years out and to a grid of 30km. It is unprecedented, Nobel prize stuff, if it is valid. No-one has yet done this to such fine grained level so far out. It is purporting to be a forecast which Humberside Councils can rely on for planning purposes. What it forecasts is a truly catastrophic future for many communities, one which will require massive investment if lives are not to be lost, not least right there in Humberside.

    Has it been peer reviewed? Can we see the code? Apparently not. But we are being asked to believe and invest on the basis of what? Some well meaning guys with a large computer someplace?

    It makes absolutely no sense. Couple this with the continuing refusal of our government to draw the conclusions which follow from the forecasts it claims to believe, and one is simply baffled. We seem to have predictions of catastrophe which cannot be verified by outsiders, which the government claims to accept, and which it then resolutely refuses to act on, on a scale commensurate with the forecast catastrophe. It all makes no sense at all.

    [Response: What really makes no sense is your reluctance to act based on complaints about HadCRU. They're only one of *many* global temperature estimates, only one of *many* modeling centers for climate projections ... and the global community of estimates and of projections are all saying the same thing: disaster headed our way. As for transparency, if you hate HadCRU so much go to GISS -- for their temperature estimates all the data, procedures, even the computer code is freely available, and their GCM model is also free for the download.

    So you don't like HadCRU ... why won't you listen to the thousands of others saying the same thing?]

  • Ray Ladbury // June 29, 2009 at 12:20 pm | Reply

    David Benson,
    Solar particle events also tend to look Poisson distributed, but a colleague, Mike Xapsos has shown them to exhibit SOC. It was an interesting study.

  • J // June 29, 2009 at 1:50 pm | Reply

    Here’s one current ENSO forecast discussion from NOAA:
    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_advisory/

  • Timothy Chase // June 29, 2009 at 2:49 pm | Reply

    Matt Andrews wrote:

    And yes, looks like another El Nino is brewing.

    Well, I ran across the predicted indices, but before giving you those I need a definition.

    Here is one:

    North America’s operational definitions for El Niño and La Niña, based on the index, are:
    El Niño: A phenomenon in the equatorial Pacific Ocean characterized by a positive sea surface temperature departure from normal (for the 1971-2000 base period) in the Niño 3.4 region greater than or equal in magnitude to 0.5 degrees C (0.9 degrees Fahrenheit), averaged over three consecutive months.

    North American Countries Reach Consensus on El Niño Definition
    http://www.nws.noaa.gov/ost/climate/STIP/ElNinoDef.htm

    By that measure, currently the dynamical models are predicting a stronger El Niño than the statistical models, but nearly everyone is predicting an El Niño before the end of the year. NASA’s GMAO predicts an El Niño maxing out at a little below an anomaly of 1.75°C. (By comparison, the 1998 El Niño maxed out at about 2.8°C but was short-lived.)

    Please see:

    The following graph and table show forecasts made by dynamical and statistical models for SST in the Nino 3.4 region for nine overlapping 3-month periods. Note that the expected skills of the models, based on historical performance, are not equal to one another…

    Summary of ENSO Model Forecasts
    17 June 2009
    http://iri.columbia.edu/climate/ENSO/currentinfo/SST_table.html

    If you look further down that page, they show the predictions for the last 21 months…

  • Timothy Chase // June 29, 2009 at 3:20 pm | Reply

    Matt Andrews wrote:

    I’ve read somewhere that GISS includes Arctic areas in its data, whereas HadCRU does not.

    If so, that would be entirely consistent with GISS having already broken the 1998 record, since the Arctic region has experienced much more rapid warming than anywhere else.

    If HadCRU does include Arctic region data, my apologies; I’ll need to go and find where that mistaken impression came from.

    I would have to do some digging to get you the material, but as I remember, HadCRU uses pretty much the same data as GISS. This includes the Arctic. The difference is largely one of methodology. A bit like RSS and UAH, although much less sinister. ;-)

    Temperature anomalies are strongly correlated over vast distances, I believe Hansen said somewhere in the neighborhood of a thousand km. Coverage in the Arctic and sub-Arctic is sparse. NASA is comfortable with making use of those long-distance correlations to fill in the blanks. The people at HadCRU, not so much. They still fill in the blanks, but only over shorter distances. Therefore NASA has “better coverage” of the Arctic, but essentially because they fill in the blanks over greater distances, not a larger dataset.

  • dhogaza // June 29, 2009 at 4:03 pm | Reply

    I’ve read somewhere that GISS includes Arctic areas in its data, whereas HadCRU does not.

    As Timothy says, they use the same data. GISS extrapolates the relatively sparse weather station data for the arctic to compute estimated warming for the arctic as a whole.

  • dhogaza // June 29, 2009 at 4:04 pm | Reply

    Oh, oops, should’ve read the rest of Timothy’s post, as he said the same thing …

  • dhogaza // June 29, 2009 at 4:05 pm | Reply

    Then we have some computer code which allegedly implements an algorithm against all of this data. The code is not available, the algorithm is, in some journal articles.

    How are we supposed to verify that the code implements the algorithm correctly, and that when applied to the totality of the data it outputs the charts which the supplier furnishes?

    Write your own program to implement the algorithm.

  • Timothy Chase // June 29, 2009 at 4:16 pm | Reply

    PS to the comment on what the models are predicting regarding El Nino…

    First a couple relevant posts:

    NASA: 2007 Second Warmest Year Ever, with Record Warmth Likely by 2010
    December 11th, 2007
    http://climateprogress.org/2007/12/11/nasa-hansen-2007-second-warmest-year-ever-warmest-year-likely-by-2010/

    Impure Speculation
    August 14, 2007
    http://tamino.wordpress.com/2007/08/14/impure-speculation/

    I am simply a philosophy major turned computer programmer and therefore can claim no expertise in this area. However, if the models are right, it looks like DePreSys and Hansen (both back in 2007) will have more or less hit the bull’s eye on this one. And it also looks like the denialists will be in search of a new chestnut early 2011 — as they always wait until after the last moment on things of this nature.

    However, I would like to leave you with one more note of caution. If you will remember my predictions on Arctic Sea Ice Extent from last year (or actually less than half a year ago), clearly I am not clairvoyant.

  • george // June 29, 2009 at 4:50 pm | Reply

    Are El Nino events Poisson distributed? If so, all one can know is the average frequency of occurrence. Doesn’t matter how long it has been since the last one.

    Some climate scientists (Trenberth et al) believe that El nino is something like a “heat release valve” for the tropics. If that is the case, there would be something to the idea that if an el nino has not occurred in a while, one is “overdue” (and it would not be a Poisson process)

    According to Trenberth:

    “Our view is that El Niño is a fundamental way in which the tropics get rid of heat. If you continue to pour heat into the tropics–which is what the sun is always doing–the weather systems and the ocean currents, under their normal variations within the annual cycle, are not sufficient to get rid of all the heat. Something has to happen to get the heat out of the tropics, and the something which happens is El Niño.”
    The authors support their theory with analyses of global oceanic and atmospheric heat budgets during an El Niño and a La Niña event in the late 1980s. If El Niño does serve as a release valve for tropical heat, then overall global warming could lead to more frequent events, as we have seen in the last two decades…

    from El Niño and global warming: What’s the connection?

    This reminds me more than a little of theories about earthquakes along the San Andreas fault where the view is that energy “builds up” over time (due to pacific and n American plate movement) and is released during earthquakes.

    In some places (like Parkfield) it happens fairly regularly. USGS has actually produced “predictions” of the likelihood of an earthquake within a given time period on a given section of the San Andreas faultline (or nearby faults) based on this idea.

  • Timothy Chase // June 29, 2009 at 5:02 pm | Reply

    Rattus Norvegicus wrote on June 29, 2009 at 4:42 am (more than ten hours before my post on June 29, 2009 at 2:49 pm):

    FWIW, NCDC is predicting an ENSO+ condition by August. Therefore I predict a new record in 2010.

    That looks quite similar to what the other models are predicting.

  • Jim Eager // June 29, 2009 at 6:57 pm | Reply

    dhogaza wrote: “Write your own program to implement the algorithm.”

    Exactly. Running the exact same data through the exact same algorithms using the exact same code will give you the exact same errors, if there are any, so you would not even know if there were any errors or not.

    But at least it would keep some trolls busy for a while so they wouldn’t be cluttering up threads.

  • B Buckner // June 29, 2009 at 8:08 pm | Reply

    Tim Chase – you can reference Tamino’s handy Climate Data Links at the top of the page to get the original info, but NASA uses satellite data for air temps over the ocean, whereas HadCRU uses ship measurements of sea surface temperatures, correlated to changes in air temperatures.

  • Timothy Chase // June 29, 2009 at 8:30 pm | Reply

    J gave us a link to the following:

    There continues to be considerable spread in the model forecasts for the Niño-3.4 region (Fig. 5). All statistical models predict ENSO-neutral conditions will continue for the remainder of 2009. However, most dynamical models, including the NCEP Climate Forecast System, predict the onset of El Niño during June – August 2009. Current observations, recent trends, and the dynamical model forecasts indicate that conditions are favorable for a transition from ENSO-neutral to El Niño conditions during June – August 2009.

    El Niño/Southern Oscillation (ENSO)
    Diagnostic Discussion
    issued by
    Climate Prediction Center/NCEP
    4 June 2009
    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_advisory/ensodisc.html

    … with the chart:
    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_advisory/figure5.gif

    … differs from what I found, as the above describes the statistical models as having ENSO-neutral conditions for the rest of the year, whereas four of the statistical models I saw have El Nino conditions late this year, with only one dynamical model showing ENSO-neutral.

    However, the difference appears to lie in the dates on the charts. The chart showing four statistical models with El Nino conditions by the end of the year is from June whereas the chart showing all statistical models and two dynamic models ENSO-neutral is from May of this year.

    Looks like a month can make something of a difference. It will be interesting to see this unfold. Incidentally, J’s source has more exposition — explaining the actual causal mechanisms which appear to be in play.

  • Timothy Chase // June 29, 2009 at 8:44 pm | Reply

    george wrote:

    This reminds me more than a little of theories about earthquakes along the San Andreas fault where the view is that energy “builds up” over time (due to pacific and n American plate movement) and is released during earthquakes.

    Up in the Pacific Northwest we tend to have earthquakes on roughly the same scale as the one that produced the Boxing Day Tsunami. No — seriously. They typically come in sets of three or four with 300 years between each ~9.0. Occasionally there will be a fifth superquake to the set. Last one happened in January of 1701 if I remember correctly — and resulted in a recorded Tsunami several hours later in Japan.

    But that was number four to that set — and we appear to have skipped having a fifth. (back in after our 6.8 in 2001 I told people on occasion about this and said, “The next one should happen…,” the look down at my watch, “… right about now,” giving them a deadpan look in the eye just as I finished, then smile and tell them we had already had our fourth and there usually wasn’t a fifth to a set. Next set will probably begin in about 700 years.

    Anyway, I like your analogy. Seems like it might fit in with the explanation Tamino has given before regarding ENSO.

  • george // June 29, 2009 at 8:45 pm | Reply

    Glenn Raphael said

    So: when HadCRU still hasn’t exceeded 1998’s level as of the end of 2012, *then* you will be convinced there’s something wrong with your model? Good to know!
    easily falsifiable short-term predictions are pretty rare in the climate debate, and this one seems pretty likely to bite you, so good on you for making it!)

    Tamino did not make a “prediction” related to the 1998 hadcrut record.

    But it’s not even clear that the conclusion that “there’s something wrong with the model” necessarily follows from a non-breaking of the hadcrut record within the expected time.

    As Tamino pointed out, the 1998 record was already broken for at least one data source (NASA GISS in 2005)

    How should one interpret such a “failure” (to break the record) in one case (with one data source) and a “pass” in the other?

    It would seem wise to be especially careful about drawing conclusions about predictions that appear to have been “falsified” (presumably meaning “rejected at the 95% confidence level”) using one data source and not using another.

    How about averaging data sources together and then doing the same tests for “rejection at 95%” in an attempt to address this non-agreement between data sources?

    What does it mean when a hypothesis (or prediction) is “rejected at 95% confidence” for the average of different data sets (e.g., GISS, HadCRUT) but not for one or more data sources that went into the average?

    Perhaps this is like Schroedinger’s Cat?? (simultaneously dead and alive)

    Perhaps it would make the most sense in such cases to look at the differences between HadCRUT and GISS to find the reason for the discrepancy rather than to first conclude that there is “something wrong with the model”.

    PS: the latter question is not merely of academic interest since the claim has been made that
    “the IPCC 2C/century projection was falsified using the averaged data, and all main data reporting services except GISS.”

  • Timothy Chase // June 29, 2009 at 8:55 pm | Reply

    In the above where I wrote:

    (back in after our 6.8 in 2001 I told people…

    … it should have read:

    Back in 2001 just after our 6.8 I told people…

    Maybe I should use Microsoft Word’s grammar check.
    *
    The raven said, “Never again. Never, ever, ever again.”

  • Timothy Chase // June 29, 2009 at 9:06 pm | Reply

    B Buckner wrote:

    Tim Chase – you can reference Tamino’s handy Climate Data Links at the top of the page to get the original info, but NASA uses satellite data for air temps over the ocean, whereas HadCRU uses ship measurements of sea surface temperatures, correlated to changes in air temperatures.

    You are right.

    Please see:

    A global temperature index, as described by Hansen et al. (1996), is obtained by combining the meteorological station measurements with sea surface temperatures based in early years on ship measurements and in recent decades on satellite measurements. Uses of this data should credit the original sources, specifically the British HadISST group (Rayner and others) and the NOAA satellite analysis group (Reynolds, Smith and others). (See references.)

    GISS (Goddard Institute for Space Studies) Surface Temperature Analysis: Current Analysis Method, paragraph 4
    http://data.giss.nasa.gov/gistemp/

    Looks like some more digging is in order — to see whether the differences in methodology are as I and dhogaza remembered, now that we know the data is different in the case of the ocean.

  • David B. Benson // June 29, 2009 at 11:42 pm | Reply

    Ray Ladbury // June 29, 2009 at 12:20 pm — SOC?

    ================
    Ok, if El Nino events can be predicted with even modest skill then not Poisson. Thanks for the info.

    I’ll point out that there is a North Pacific Rossby wave which shows up in lots of records with a period of 3.6–3.8 years and so is highly predictable. But it is a rather small blip. What makes it interesting is that the North Pacific is a resonant basin for this Rossby wave and there is enough wind(?) energy to keep it repeating with (for climate) great regularity.

  • michel lecar // June 30, 2009 at 6:51 am | Reply

    “So you don’t like HadCRU … why won’t you listen to the thousands of others saying the same thing?”

    It is not that I either dislike them, or do not listen to others. I do, but it’s not with the others that I am concerned right now.

    It is that I think these guys are behaving improperly, and that it should be rectified. They are refusing to show their working, while claiming to be producing results which should influence public policy to the tune of billions, maybe trillions, of dollars. And affect public spending priorities for a couple of generations.

    Simple test question for you all. Do you, or do you not, think that both the UK Met Office and HADCRU should be obliged to release both code and input data as a condition of continued government funding and having their outputs used in public policy debates? Yes or No.

  • Ray Ladbury // June 30, 2009 at 9:50 am | Reply

    David–SOC–self-organized criticality. Don’t know if El Nino events follow this, but they might. Poisson applies in a lot of places you wouldn’t think it would.

  • Curious // June 30, 2009 at 11:17 am | Reply

    Thanks, Tamino!!

    I’ve had some skeptics asking me this question repeatedly as if my ignorance of the answer meant that GW science was all false (instead of showing that all of us need more maths if we are so interested in the details). I get a bit lost with the formulae, but I get the general concept of the standard deviation from the detected trend (and I can ask some friend if I need to get through the numbers in detail). Thanks!!

  • Deech56 // June 30, 2009 at 12:32 pm | Reply

    RE: michel lecar // June 30, 2009 at 6:51 am

    Arrgh! The critical scientific question is: What is happening to surface temperatures? The critical scientific challenge is to extract information from whatever data we have available and to analyze the information. The critical scientific test is independent verification of results. Our host has provided a great deal of education on this topic.

    OK, back to <ignore>.

  • dhogaza // June 30, 2009 at 1:15 pm | Reply

    Simple test question for you all. Do you, or do you not, think that both the UK Met Office and HADCRU should be obliged to release both code and input data as a condition of continued government funding and having their outputs used in public policy debates? Yes or No.

    I couldn’t care less. If they are meeting the terms of their grant agreement and making the source of their funding happy then it’s none of your business.

    The way to deal with this is the “RSS way”. After poking holes in UAH’s work, rather than spend their life whining and nitpicking and screaming “fraud” across the denialsphere, they made their own product.

    A longer description of the “RSS way” is … “this is how science works”. If you don’t like the 2nd law of thermodynamics, make a perpetual-motion machine that works rather than whine about those mean physics.

  • george // June 30, 2009 at 1:29 pm | Reply

    michel lecar asks:

    “Do you, or do you not, think that both the UK Met Office and HADCRU should be obliged to release both code and input data…”

    From what I gather, HadCRUT was developed by CRU and the Hadley Centre (of the UK Met Office) acting in conjunction, so it seems that the release by one should be sufficient.

    But to answer the question:

    I’m not familiar with the specifics of this particular “release debate”, but assuming that the above claim is correct — that CRU and Hadley have not released their code and input data — I would say that I would advocate such release, if for no other reason than to figure out precisely why GISS and hadcrut differ on the issue mentioned above (and others as well).

    If it’s due to actual errors either in the algorithm or input data used to calculate global temperature anomalies (on the part of either GISS or Hadley), that might be made apparent.

    Even if the discrepancy is not due to “errors” per se, a close inspection may make it obvious that one way of doing things is the more accurate/reliable of the two.

    Such a release might put an end to the “debate” about which data source is more accurate (who knows, perhaps both should be changed) and obviate the “need” felt by some to average data sets together to get the “best” result.

    I think the latter is questionable at best. Averaging might reduce the impact of errors in one or possibly both data sets, but it’s not always the “best” (or even a good) thing to do. Among other things, it also reduces useful information — not just noise.

  • Ray Ladbury // June 30, 2009 at 3:17 pm | Reply

    Michel,

    Yawn!!

  • t_p_hamilton // June 30, 2009 at 3:29 pm | Reply

    michel asked:”Simple test question for you all. Do you, or do you not, think that both the UK Met Office and HADCRU should be obliged to release both code and input data as a condition of continued government funding and having their outputs used in public policy debates? Yes or No.”

    If not required to by the government – No.

    If you ask why not, I ask why. Publishing the algorithm allows INDEPENDENT checking (by writing your own program), the input data is a compilation that others can check by going to the same sources, also an INDEPENDENT check.

    These are good things. If it is too hard for “skeptics” to do, perhaps their understanding is inadequate. In that case using HADCRU’s code would be of no benefit, and actually be a detriment because any idiot can do GIGO.

  • Timothy Chase // June 30, 2009 at 4:14 pm | Reply

    michel lecar wrote:

    Simple test question for you all. Do you, or do you not, think that both the UK Met Office and HADCRU should be obliged to release both code and input data as a condition of continued government funding and having their outputs used in public policy debates? Yes or No.

    I would prefer that they make their raw data available. Methodology? I ran across a pdf on it last night.

    Please see:

    Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones, 2006: Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. J. Geophysical Research 111, D12106, doi:10.1029/2005JD006548
    http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT3_accepted.pdf

    Accessible from:

    Temperature
    http://www.cru.uea.ac.uk/cru/data/temperature/

    Their code? Bad idea if you want an independent check of its validity. Better to re-write the code yourself and see if you come up with the same results.

    But then again, that has been explained to you ad nauseam above, hasn’t it? And like a troll you pretend as if no one has ever responded to one of your queries if you find their response “inconvenient.”

    Should their funding be cut off or their data disregarded if they don’t disclose all of their raw data, methodology (haven’t they?), or code?

    Why should it be, if there are other data products, such as GISS, which are arriving at virtually identical conclusions? Products that you compare favorably to HadCRU due to their openness?

    But then again, this has been explained to you ad nauseam above, hasn’t it? And like a troll, you have chosen to ignore it because you find it inconvenient.

    Such people will also bold long passages of their own writing at length rather than one or two words, or at most a sentence — like civilized people. Cyber-yelling — to get attention.

    There is an expression that I haven’t used before, but on this particular occasion rises to the surface.

    It begins, “If it walks like a duck…”

  • Lazar // June 30, 2009 at 4:56 pm | Reply

    michel stop repeating stuff which you know is false… ‘not available at giss / in ghcn’ does not equate to ‘not available anywhere’… the uk climate change report was peer reviewed… you can check their code by coding the algorithm yourself…

    … have you read those references on climate sensitivity?

  • Petro // June 30, 2009 at 6:27 pm | Reply

    Simple test question for you all. Do you, or do you not, think that michel should be obliged to release both code and input data as a condition of his (scientific) IQ?

  • jacobl // July 2, 2009 at 3:47 pm | Reply

    Would it be fair to call some sceptics “lazy”?
    Given that the data is available, as are the algorithms, I get the feeling that some would rather complain about not having a transparent turnkey program instead of trying to create one and learn something in the process.
    Thanks for your thoughts.

  • michel lecar // July 2, 2009 at 5:05 pm | Reply

    So the response to the question is that some think yes, they should release the code + data as a condition of having their work used in a public policy context. Congratulations, you are calling the question on its merits, regardless of how convenient or inconvenient the answer is for the agencies involved.

    Your point of view is also the point of view of Hansen and GISS, and one has to acknowledge and respect their example of doing the right thing. And demand that others follow it.

    Some of you think that no, they should not, and anyone having any doubt about it should be obliged to recode himself to see if he gets the same results. You are not explicit about what he should do if not. Punt? Nor are you explicit about what it would prove if so. That he has made the same errors?

    This is basically dishonest. You think that in this one particular case, unlike, for instance, the case of voting machines, the output of the code should be accepted without those paying to have it written being able to do a code review. In a matter of this importance, the idea that one should rely for verification on recoding and seeing if the results are the same is totally absurd. Do you think we should follow the same method for voting machines? Or only if they deliver the right kind of majorities?

    Some of you finally take the tactic the Old Comrades took after the Nazi-Soviet Pact, the invasion of Hungary and Prague Spring. You refuse to discuss the issue, because you are aware that if you say, yes they should publish the code, you will be confronted with the fact that they have not and will not, and so you will have to criticize fellow believers, and this is a no-no. Or, if you say no they should not, you are also smart enough to be aware that you will be trying to defend the indefensible. And so, like Ray Ladbury, you say “Yawn!!”.

    jacobl. my work ethic, or ability to recode, is not the issue. The issue is what standards of transparency we should require if organizations produce computer generated data, and expect us to take it at face value into public policy discussions. The more globally important the consequences are, the more absurd it becomes to answer this difficulty with the suggestion, code it yourself, maybe you are too lazy. The problem is not me refusing to recode. The problem is them refusing to publish so we can assess it.

    This is about people publishing their workings if they want to have their work used in a public policy context. If they will not, it should be banned from use.

    [Response: What this is not about: global warming.]

  • MikeN // July 2, 2009 at 7:10 pm | Reply

    So what happens if someone tries to recode and fails? He is attacked for poor coding. How would we know if the first guy did it wrong?

  • Ray Ladbury // July 2, 2009 at 8:40 pm | Reply

    Michel, you forgot to count those of us who don’t give a flying [edit].

    A real scientist will gather his own data from sources that already exist, analyze it and publish the result.

  • t_p_hamilton // July 2, 2009 at 9:39 pm | Reply

    michel says”Some of you think that no, they should not, and anyone having any doubt about it should be obliged to recode himself to see if he gets the same results. You are not explicit what he should do if not. Punt?”

    Yes. If they don’t have the ability to reproduce the results, they don’t have the understanding to make a scientific contribution. In lieu of actually writing a complete code, perhaps the “skeptic” could show intimate familiarity with coding and models, in order to make requests that the original group would not think would be misused by GIGO.

    “Nor are you explicit about what it would prove if so. That he has made the same errors?”

    The same coding errors, or the same error in a theoretical derivation? You clearly have no experience in this area.

    MikeN also shows his lack of experience:”So what happens if someone tries to recode and fails? He is attacked for poor coding.”

    Maybe because he is not competent. Has that thought ever crossed your mind?

    ” How would we know if the first guy did it wrong?”

    I suggest choosing smarter “skeptics”. Here’s how:

    Send them through graduate school and post-docs in climatology-related fields, let them learn these codes, data sets and theories inside and out. Put them in a position where their reputation is adversely affected by errors, and see what happens.

    Let us know how that works out for you.

  • t_p_hamilton // July 2, 2009 at 9:56 pm | Reply

    michel says:”jacobl. my work ethic, or ability to recode, is not the issue. The issue is what standards of transparency we should require if organizations produce computer generated data, and expect us to take it at face value into public policy discussions.”

    Transparency for people who have no idea about what they are looking at? Somebody could take a publicly released code and data sets, produce garbage, and enter it into the public discussion. That is not a good thing. The public at large has neither the ability nor the inclination to discriminate error from correctness.

    I have a crazy idea – let the best in their field decide which theories have the best support. Elitist, I know. Just like the best engineers should design automobiles, not open their process to be “transparent” for contributions from yahoos. The doctor’s medical decisions should not have “alternative medicine” practitioners come in and kibitz.

    I say, yes, you should take at face value the reports of scientists in a field as a whole if you are not an expert in the field. If their colleagues can’t figure out what they did wrong, you sure as hell couldn’t.

  • Lazar // July 2, 2009 at 10:18 pm | Reply

    michel,

    what he should do

    … contact the authors…

    what it would prove if so. That he has made the same errors?

    proof is poor logic… and if they looked at the hadcrut code and found no errors… what would it prove… that they missed some?… same argument michel…

    voting machines

    inappropriate comparison… fraud is possible absent redundancy… you don’t have redundancy with voting machines… do different people using different algorithms on different data get similar enough results… the differences between hadcrut, noaa and gistemp are too small to change climate policy…

  • Lazar // July 2, 2009 at 10:29 pm | Reply

    … the argument that no trust can be placed on results from closed source code… is completely bogus… you have the published algorithm… you have redundancy with gistemp, noaa, five satellite products…

  • MikeN // July 2, 2009 at 10:48 pm | Reply

    [edit]

    [Response: Personal insults against Dr. Rahmstorf will not be tolerated here.]

  • TCO // July 2, 2009 at 11:05 pm | Reply

    It’s true that there is SOME value to results based on secret code. And replication attempts or even variation studies can be done absent the code.

    But having the code is a lot better. For one important thing, Lazar, methods descriptions are VERY well known to be overly brief or have mistakes in them, often.

    Having the exact code helps drive things faster.

  • Riccardo // July 2, 2009 at 11:48 pm | Reply

    Errors in the code may be taken for granted. Even in the processing of the raw data themselves, errors are sometimes found. So, having the source code is useless for scientists.
    It might be nice for the laymen (like me) to play around with the “big thing”, no more.

    On the contrary, it’s _essential_ to have the original data, which we have. Then scientists can play around with those numbers and maybe come up with a different and better processing/analysis.

    We have GISS and HadCRUT, RSS and UAH, different analyses but basically the same results. Clearly the errors in the codes are minor and the small difference we see is due to the different approach. If there’s anyone around who has a better idea, they are certainly welcome.

  • Lazar // July 3, 2009 at 12:31 am | Reply

    tco…

    It’s true that there is SOME value

    yep… was responding to michel repeatedly asserting zero value… did not intend a disingenuous argument tho it may seem that way… agreed that open source is better… for the reason below…

    Having the exact code helps drive things faster.

  • Gavin's Pussycat // July 3, 2009 at 12:45 pm | Reply

    Michel, I have a test question for you.

    Do you or do you not, think that if Hadley Centre were to release their source code in useable form, that
    1) skeptics (you?) would actually try to build and use it, and
    2) would not find some other lame excuse for ignoring its inconvenient output?

    Thank you for your time.

    • michel lecar // July 4, 2009 at 11:24 am | Reply

      I think that if they were to release it, people, not necessarily me – it depends for me on health, ability and workload, and Fortran is not really my thing, if that is what they use – would go through it and either find errors and omissions or not.

      If you look at what happened when GISS released it all, there was a flurry of comments and investigations, and some oddities were found, but I don’t think people as you put it ‘found some other lame excuse’. A fairly reasonable argument took place and understanding was raised. People still disagree about the GISS series, but they can no longer argue that dreadful things are being done in secret to generate it. The reputation of GISS was also raised. I cannot see any argument, from the GISS experience, for not putting it in the public domain, and it shows a great many for doing it.

      But the issue does not turn on what people would do with it. The issue is whether we should accept in the public domain scientific data whose provenance cannot be verified. No, we should not. Well, matters of national security apart perhaps.

      Lazar asks how much extra documentation they should do. None. None whatever. If it is incomprehensible without extra documentation, that is something that they should reveal too. My experience is that if it is not documented enough for an outsider to review it, it is not fit for purpose. But the last thing you want is some special efforts to clean it up before release. Release it, then clean it up, if it needs it, and then release the new version.

      GP, you make the usual assumption of malicious motives for when others think something inconvenient, like that the code should be released. My motives are fairly straightforward, but they’ve nothing to do with the merits of release. They should just get it over with.

      I think Mann should release the algorithm behind MBH 98 as well. And lots of others should release code. It’s the price of participating in public policy debates.

      [Response: The algorithm behind MBH98 is widely available, in fact there's a website which gives complete data and code to reproduce the whole thing. And the latest paleo reconstruction from Mann et al. included publicly available code and data.]

  • Lazar // July 3, 2009 at 1:19 pm | Reply

    tco…

    how human readable should the released code be which can entail work rewriting… or it is possible to release functioning code which is undecipherable… so, in terms of naming, commenting, formatting, program structure… how much effort should scientists make…and how much ability should be assumed of the end user…?

  • MikeN // July 3, 2009 at 2:41 pm | Reply

    Personal insult? Dr Rahmstorf has admitted on RC the caption is wrong, and that he changed the smoothing method to generate the graph in the Copenhagen Report.

    How was this error supposed to be found, when code is not revealed?

    This idea that people should do their own analysis and publish. What journal is going to accept ‘I ran the same test and got something else?’

  • TCO // July 3, 2009 at 4:29 pm | Reply

    Lazar: It’s like asking how exactly you should write up the methods. I think a similar judgement applies. Best practice would be fully commented, clear code, with a flow diagram. Minimum would be the code itself, however kludged together. The thing about requiring well-written and public code is that it’s not just helping the replicators. It will actually help the article writers, similarly to how publishing forces one to clarify one’s thoughts.

    Sorry, I’m not a code expert so can’t offer more.

    I think there are probably some good guides on sharing code from other fields that have more experience with it, though.

  • dhogaza // July 3, 2009 at 5:31 pm | Reply

    What journal is going to accept ‘I ran the same test and got something else?’

    Ever hear about the debunking of cold fusion?

  • Ray Ladbury // July 3, 2009 at 5:45 pm | Reply

    MikeN says, “What journal is going to accept ‘I ran the same test and got something else?’”

    Oh, Mikey, you’re so close. Now work with me. Why do you think they wouldn’t accept it?

  • t_p_hamilton // July 3, 2009 at 7:06 pm | Reply

    MikeN asks:”Personal insult? Dr Rahmstorf has admitted on RC the caption is wrong, and that he changed the smoothing method to generate the graph in the Copenhagen Report.

    How was this error supposed to be found, when code is not revealed?”

    This is an example where an “error was found” without “releasing code”! MikeN asks, how could it be found without releasing code. Maybe he should listen to the person who found that N=15 is better than N=11 for a clue how to find this “error”: “[Response: Almost correct: we chose M=15. In hindsight, the averaging period of 11 years that we used in the 2007 Science paper was too short to determine a robust climate trend. The 2-sigma error of an 11-year trend is about +/- 0.2 ºC, i.e. as large as the trend itself. Therefore, an 11-year trend is still strongly affected by interannual variability (i.e. weather). You can tell from the fact that adding just one cool year - 2008 - significantly changes the trend line, even though 2008 is entirely within the normal range of natural variability around the trend line and thus should not affect any statistically robust trend estimate. -stefan]”

    Standard operating procedure for science.

    Standard operating procedure for “skeptics” – OMG somebody found a typo in a figure caption! Science can’t be trusted!
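
    A quick arithmetic check on the response quoted above: assuming independent annual noise with an illustrative standard deviation of 0.1 deg C (a stand-in value, not a number taken from the thread), the 2-sigma uncertainty of an ordinary least-squares trend over an 11-year window comes out near the +/- 0.2 deg C per decade that Stefan cites, and it shrinks as the window lengthens. A minimal sketch in Python:

    import numpy as np

    def trend_2sigma_per_decade(n_years, sigma=0.1):
        """2-sigma uncertainty (deg C per decade) of an OLS trend fitted to
        n_years of annual means, assuming independent noise of std dev sigma."""
        t = np.arange(n_years)
        se_slope = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))  # deg C per year
        return 2.0 * se_slope * 10.0

    for n in (11, 15, 25):
        print(n, "years:", round(trend_2sigma_per_decade(n), 2), "deg C per decade")

    Real annual noise is somewhat autocorrelated, so the true uncertainties are a little larger than this white-noise sketch suggests.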

  • Gavin's Pussycat // July 3, 2009 at 7:07 pm | Reply

    How was this error supposed to be found, when code is not revealed?

    Actually it was found precisely that way… by JeanS, who used the R version of SSA smoothing, while Stefan used (undisclosed, disclosing trivial code being silly) Matlab code.

    You don’t need the code for replication.

    This idea that people should do their own analysis and publish. What journal is going to accept ‘I ran the same test and got something else?’

    The way it works in real science is that the scientist admits the mistake and takes steps to correct it as Stefan did. And no, no more than a blog comment — a correct one — was needed to trigger this action.

    BTW about the mistake, it is completely without consequence. The smoothed temperature curves lie solidly within the IPCC grey band, for 11, 14 or 15 year smoothing, and with or without roughness minimization. As for any smoothing technique, only a fool would give significance to what the curve does at the data edges. But Tamino has been over that many times so I’ll stop here.

  • TCO // July 3, 2009 at 7:28 pm | Reply

    I think the skeptics got a bit more flesh than normal this time. And that Rahmstorf is great at getting grants but weak at rigor. Love to see Tammy eviscerate him a little.

  • Gavin's Pussycat // July 3, 2009 at 9:26 pm | Reply

    > I think the skeptics got a bit more flesh than normal this time.

    They got the lizard’s tail.

  • MikeN // July 3, 2009 at 10:59 pm | Reply

    JeanS says he got Rahmstorf’s code from a 3rd party, and that’s how he was able to pin down the error.

  • Mark // July 4, 2009 at 11:18 am | Reply

    Why does it HAVE to be Hadley Centre code?

    The NASA model source code is out there. It gets the same results as the Met Office does.

    Nobody has used the NASA code to work with, so why should it be expected that the Met Office code gets used if it became available? Current evidence is that it would not.

  • Ray Ladbury // July 4, 2009 at 12:08 pm | Reply

    I have given my reasons previously why I oppose releasing code–it is bound to find its way into other analyses and thereby compromise independent investigation. By all means, archive the code for every publication, but don’t release it. If subsequent analyses differ substantially, I guarantee that the original investigators will be on the code like white on rice.

    This is yet another nonissue. Anybody who thinks their eyes will only be the second pair to look at the code is delusional.

  • Gavin's Pussycat // July 4, 2009 at 5:35 pm | Reply

    JeanS says he got Rahmstorf’s code from a 3rd party, and that’s how he was able to pin down the error.

    No, he got code for the SSA method from a third party (Grinsted), as did Rahmstorf. It’s a standard method. I know, I have been using it too.

  • Timothy Chase // July 4, 2009 at 7:44 pm | Reply

    Lazar wrote:

    how human readable should the released code be which can entail work rewriting… or it is possible to release functioning code which is undecipherable… so, in terms of naming, commenting, formatting, program structure… how much effort should scientists make…and how much ability should be assumed of the end user…?

    When you think about it, making the code itself readable by others, with the “naming, commenting, formatting, program structure”, would slow down the process of scientific investigation to such an extent that it would more than likely make it much more difficult for people to perform the sort of investigation that acts as a double-check on work in the same field, and as such would impede the progress of the very science it was “intended” to insure. Likewise, as Ray Ladbury points out, by making it easier for individuals to simply copy one another’s code, and as a result the mistakes in that code, it would more than likely make scientific investigation more prone to error.

    But then again the stated intention isn’t necessarily the actual intention. At best, the intention would seem to be to make it easier for non-experts who know little or nothing of the actual practice of science to criticize the work of experts in a particular scientific field — and more than likely only when it suits them — such as when they view what is being discovered as being at odds with their libertarian ideology.

  • MikeN // July 4, 2009 at 9:56 pm | Reply

    >it would more than likely make scientific investigation more prone to error.

    Well, the copiers would be revealing their code as well. As it is, algorithms get copied. Different scientists don’t go out and collect their own temperatures.

  • MikeN // July 4, 2009 at 10:12 pm | Reply

    This should probably go to open thread since it is not on topic.

    Anyways, here is Jean S’s take on the subject.

    When the Science-paper appeared and Lucia and others criticized it, a problem they faced was that Rahmstorf was not giving any code or data for replication. An answer David Stockwell essentially got for his critique was that he was doing replication somehow “wrong”. In other words, that there were no problems with the paper itself, but in people’s replication.

    Last autumn, when I decided to take a closer look, I contacted Rahmstorf through two different contacts (another one being a scientist and another one a journalist). Neither of them received neither code nor any data! Rahmstorf essentially claimed that he did not own a copyright, so he could not pass either the code or data!!! I was then lucky enough to obtain, a way I do not want to disclose, the smoothing routines used by Rahmstorf. Using those it was easy to replicate the results from the Science paper. I then replicated and updated the result using the preliminary 2008 data for a Finnish TV-documentary, the graph is here:
    http://ohjelmat.yle.fi/files/o…..mstorf.jpg

  • dhogaza // July 4, 2009 at 11:18 pm | Reply

    Rahmstorf essentially claimed that he did not own a copyright, so he could not pass either the code or data!!!

    Why the “!!!”? If the code doesn’t explicitly allow for free distribution, then a user cannot legally pass it on to another party.

    You do that with my photos without either following my terms-of-use or my explicit permissions, and I may just sue your ass.

    I was then lucky enough to obtain, a way I do not want to disclose, the smoothing routines used by Rahmstorf.

    Does not want to disclose why? I can guess: someone’s code got distributed without the author’s explicit permission.

  • Deech56 // July 5, 2009 at 12:56 am | Reply

    What dhogaza said (on July 4, 2009 at 11:18 pm). This, and MikeN’s explanation, is entirely consistent with what Gavin’s Pussycat wrote. The terms and conditions that allow for a transfer of material (whether physical or intellectual property) are explicit regarding distribution to a 3rd party. You just don’t do that and preserve credibility.

    Science is a collaborative effort, and adhering to these types of agreements is extremely important. It seems that some do not understand this.

  • Timothy Chase // July 5, 2009 at 2:21 am | Reply

    MikeN wrote:

    Well, the copiers would be revealing their code as well. As it is, algorithms get copied.

    Algorithms are much more easily understood. They are much more easily described, perhaps involving only a few hundred English sentences rather than thousands of lines of code.

    If there are mistakes in the algorithm, which after all is written in English, they are much easier to identify and discuss than mistakes in code, which exists partway between English and binary machine code, bridging the two. But without at least a description of the algorithm you won’t actually know what sort of data was acquired.

    Likewise, once you are able to describe the algorithm you are able to compare it to another. In this way you can see how one set of investigators improved upon the methods of others.

    If one were to simply repeat the very same steps there would be no originality, and a given investigation would in no way represent a step forward. Under those circumstances the work of the investigators would more than likely not be worth publishing and certainly wouldn’t advance their careers.

    The fact that you believe algorithms simply get copied and investigators simply repeat one-another strongly suggests that you know nothing of the practice of science.

    Banks employed double-entry bookkeeping — two different ways of calculating the same totals — to catch errors. I myself wrote calculations in both spreadsheet formulas and VBA in order to catch errors that would crop up — including those that result from VBA and Excel competing for the same processor time.

    To use an analogy that sometimes gets employed in a different context, if you have one watch you always “know” what time it is even if your watch is wrong. If you have two watches you will at least know when one of the two watches is no longer keeping proper time even if you won’t know which one. And if you have three watches you will know which watch is no longer properly keeping time.
    *
    MikeN wrote:

    Different scientists don’t go out and collect their own temperatures.

    Would you rather they did? Just curious.

  • dhogaza // July 5, 2009 at 3:16 am | Reply

    The terms and conditions that allow for a transfer of material (whether physical or intellectual property) are explicit regarding the distribution to a 3rd party.

    Oh, it’s better than that. One’s creative output is *automatically* protected by copyright law, even if the creator doesn’t say “this is copyrighted”.

    That’s the law.

  • Hank Roberts // July 5, 2009 at 4:43 am | Reply

    > I was then lucky enough to obtain, a way I
    > do not want to disclose, the smoothing routines
    which someone tells me were the ones that were
    > used by Rahmstorf.
    and I have faith that they told me the truth because
    _________________.

  • Gavin's Pussycat // July 5, 2009 at 8:50 am | Reply

    MikeN,

    your link doesn’t work.

    Yours is a strange story. The correct way to get the SSA package used by Rahmstorf is to ask Aslak Grinsted, who wrote it. It worked for Nicholas Nierenberg as he tells on his blog; it would work for you.

    For some reason (support issues) Aslak does not want to publish the code, but he gives it to anyone mailing him. And colleagues give it to colleagues, admonishing not to redistribute, and promptly forgetting… that’s how I got my copy.

    But blaming Rahmstorf for keeping it secret as also Steve McI does, is just weird… it isn’t his code to keep secret. The routine assumption of bad faith gets tiresome quickly.

    And BTW the argument of R07 or CPH09 is not dependent on the smoothing method or parameters chosen; the appearance of endpoint artefacts means nothing for the argument. Try a moving average with triangular weighting (which McI claims to be the same, may well be true), or Savitzky-Golay with triangular weights, which I have used in other contexts, obtaining also very similar behaviour to SSA.

  • Gavin's Pussycat // July 5, 2009 at 9:04 am | Reply

    OK I found the link you were trying to give.

    http://ohjelmat.yle.fi/files/ohjelmat/u3219/liite5_paivitetty_rahmstorf.jpg

  • dhogaza // July 5, 2009 at 3:36 pm | Reply

    But blaming Rahmstorf for keeping it secret as also Steve McI does, is just weird… it isn’t his code to keep secret. The routine assumption of bad faith gets tiresome quickly.

    As expected. Rahmstorf is being accused of bad faith because he’s honored the distribution agreement of the copyright holder.

    These people are scum.

  • george // July 6, 2009 at 2:45 pm | Reply

    “The routine assumption of bad faith gets tiresome quickly. “

    With the emphasis on “the routine”:

    Claim or imply that a scientist is acting unscientifically, unethically, or even fraudulently — e.g., hiding data, code, smoothing algorithm, etc., with the implication that he/she is doing so because they are trying to support their own (“church of global warming”) ideology.

    “Contact” said scientist with the hope (unstated of course) that the attempt will actually “fail” to meet one’s (unreasonable) preconditions — which rarely allow for copyright, privacy and other agreements.

    Point to the “failure” of the scientist to fulfill all of one’s requests as “proof” that one’s original claim (that the scientist is actually being dishonest or fraudulent) is true.

    The above “routine” is nothing if not entirely predictable.

  • Gavin's Pussycat // July 6, 2009 at 8:47 pm | Reply

    george, I have no proof that that happened. All I have is a weird story.

    We shouldn’t too easily assume bad faith either, where dumb will do as an explanation. And yes, also smart folks can be dumb.

  • george // July 7, 2009 at 7:26 am | Reply

    “We shouldn’t too easily assume bad faith either, where dumb will do as an explanation. And yes, also smart folks can be dumb.”

    I guess it all comes down to what is meant by “too easily”. I’ve seen the “routine” repeated all too often (at CA, for example).

    The problem with giving some of these people the “benefit of ignorance” in this case is that it is the very excuse that they themselves fall back on when they are shown to be wrong. “Oh, I’m so sorry. I didn’t know that Gavin was out of the office (on Christmas day)”

    It’s a rather convenient excuse and not a particularly credible one when it is done for the N’th time.

  • Gavin's Pussycat // July 7, 2009 at 8:56 am | Reply

    About Fig 3 of the Synthesis Report still, if anybody had wanted to do a meaningful replication (instead of a gotcha), that would have been easy: polynomial fit requires no code from either Rahmstorf or Grinsted. Just try linear, quadratic, cubic… of course the result would be an unspectacular confirmation of what was uncontroversial all along.

    A good student project.
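
    For anyone tempted by the student project, here is a minimal sketch. It assumes you have already prepared a plain two-column text file of year and annual-mean anomaly from GISS or HadCRUT; the file name below is hypothetical.

    import numpy as np

    # Hypothetical input file: two columns, year and annual-mean anomaly in deg C.
    years, anom = np.loadtxt("annual_anomaly.txt", unpack=True)

    for degree in (1, 2, 3):                      # linear, quadratic, cubic
        coeffs = np.polyfit(years, anom, degree)  # least-squares polynomial fit
        fitted = np.polyval(coeffs, years)
        print("degree", degree, "-> fitted value in", int(years[-1]),
              "is", round(float(fitted[-1]), 3), "deg C")

    The exercise, as described above, is simply to see whether the fitted curves stay within the IPCC uncertainty band.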

  • MikeN // July 7, 2009 at 5:06 pm | Reply

    That doesn’t replicate it; it gives you a graph that is significantly different. Which was the whole point of the criticisms.

  • Gavin's Pussycat // July 7, 2009 at 7:33 pm | Reply

    MikeN, yes, and the criticisms are beside the point scientifically. You replicate what matters: the position of the smoothed curve within the IPCC uncertainty band. That’s what matters in science, getting similar results with a variety of methods, showing the result to be robust.

    Pixel-perfect replication using the same code is the stuff of “gotcha”. Auditing, not science.

  • Mark // July 7, 2009 at 9:19 pm | Reply

    MikeN, it does replicate. That’s the whole point of this thread.

  • Hank Roberts // July 7, 2009 at 10:55 pm | Reply

    Read it again, MikeN:

    “You replicate what matters: the position of the smoothed curve within the IPCC uncertainty band. ”

    “Close enough” counts in a variety of situations: hand grenades, horseshoes, nuclear weapons, and uncertainty bands.

  • Timothy Chase // July 10, 2009 at 5:08 am | Reply

    It’s official:

    NOAA scientists today announced the arrival of El Niño, a climate phenomenon with a significant influence on global weather, ocean conditions and marine fisheries. El Niño, the periodic warming of central and eastern tropical Pacific waters, occurs on average every two to five years and typically lasts about 12 months.

    NOAA expects this El Niño to continue developing during the next several months, with further strengthening possible. The event is expected to last through winter 2009-10.

    El Niño Arrives; Expected to Persist through Winter 2009-10
    July 9, 2009
    http://www.noaanews.noaa.gov/stories2009/20090709_elnino.html


  • Timothy Chase // July 10, 2009 at 5:13 am | Reply

    A delayed monsoon…

    Much of India has been reeling under a sweltering heat wave that has claimed more than 100 lives as temperatures rose to 46C last week. Water shortages and powercuts made life even more unbearable as the monsoon stalled in its progress northwards. But on Tuesday morning, Delhi and much of the north woke up to enjoy showers of cool rain as the monsoon suddenly accelerated and covered about 500km overnight.

    From The Times
    July 4, 2009
    Weather eye: El Niño effects ripple out across the world
    http://www.timesonline.co.uk/tol/news/weather/article6631810.ece

  • Mark // July 12, 2009 at 11:44 pm | Reply

    I’m curious, Tamino, what you think of this:

    http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

    Kyle Swanson claims to have detected a break in the trend, associated with the 1998 El Nino (though he does say “only time will tell if it’s real”). Are you convinced?

    [Response: I saw the post, which is fascinating, as a result of which I'm planning to post on that very topic.]

  • george // July 13, 2009 at 3:50 pm | Reply

    Mark,
    The appearance of the temperature development since 1998 also strikes me as similar to the impulse response of an “under-damped” system

    (though I’m not a climate scientist and realize that superficial appearances can be deceiving and would also be interested in seeing what Tamino makes of this from a mathematical standpoint.)

    Swanson says in the real climate piece

    We hypothesize that the established pre-1998 trend is the true forced warming signal, and that the climate system effectively overshot this signal in response to the 1997/98 El Niño. This overshoot is in the process of radiatively dissipating, and the climate will return to its earlier defined, greenhouse gas-forced warming signal.

    It would seem that if the climate system actually behaves as described, the elapsed time before the “record” gets broken would then depend not only on how much the impulse (el nino or whatever) forcing the system exceeded the trend line (by how many std deviations) and the actual magnitude of the trend, but also on the details of the “damping”, which would determine how long it takes for the “impulse” from el nino to “die out”.

    If the system is “underdamped”, it will take longer to return to the underlying trend (if there is one) than if it is “critically damped” or “overdamped”.

  • MikeN // July 13, 2009 at 4:03 pm | Reply

    46C is not that unusual there.

  • george // July 17, 2009 at 3:13 pm | Reply

    Tamino:

    I can now see after your post on the Swanson paper that its primary focus is not on what I referred to in my last comment above (the response of an oscillatory system to a one-time “forcing”, either an impulse or a step) but instead on the possible source (chaos) of apparent “episode shifts” in the global temperature record.

    What sent me off on the tangent was their reference to “overshoot” and “radiatively dissipating”, which immediately made me think of a damped harmonic oscillator.

    I am curious if you have ever tried modeling climate “noise” — from El Nino in particular — that is based on the impulse (or step) forced “damped oscillator” case.

    It seems that the damped oscillator case has some of the basic qualities of the AR1 model (and ARMA as well) — exponential decay of the noise from a particular “random” event (like el nino) — but with the one significant difference of a kind of “ringing” effect for the damped oscillator (in the “under-damped” case in particular).

    In other words, for the oscillator, the noise (impulse) from one instant in time (eg, the large el nino in 1997-98) does not just decay, but may also “oscillate” as it decays in amplitude.

    Actually, as I am sure you are well aware, again depending on the details of the damping, you can get a similar type of “ringing” response for an oscillatory system driven by a step forcing, as seen here.

    If I am not mistaken, Swanson is actually proposing such “step forcing” from time to time, showing up in the temperature record not as perfect steps but as a step with an “overshoot” and then decay to the new level (or possibly an overshoot followed by an undershoot, or even a decaying overshoot/undershoot oscillation about the new level).
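
    In discrete time the same question can be phrased as AR(1) versus AR(2): a shock to an AR(1) process decays as a plain exponential, while a shock to an AR(2) process whose characteristic roots are complex decays with exactly the kind of “ringing” described above. The coefficients below are purely illustrative and are not fitted to any temperature series.

    import numpy as np

    def ar_impulse(coeffs, n=30):
        """Response of x[t] = sum_k coeffs[k-1] * x[t-k] + e[t] to a single
        unit shock at t = 0, with all later shocks set to zero."""
        x = np.zeros(n)
        x[0] = 1.0
        for t in range(1, n):
            for k, c in enumerate(coeffs, start=1):
                if t - k >= 0:
                    x[t] += c * x[t - k]
        return x

    print(np.round(ar_impulse([0.6]), 2)[:8])        # AR(1): pure exponential decay
    print(np.round(ar_impulse([1.2, -0.7]), 2)[:8])  # AR(2), complex roots: decaying oscillation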

  • David B. Benson // July 17, 2009 at 11:16 pm | Reply

    george // July 17, 2009 at 3:13 pm — I checked the response function from
    http://tamino.wordpress.com/2008/10/19/volcanic-lull/
    with various plausible forcing functions. In no case is there any ringing; indeed the transient response is too brief.

    Certainly one could posit a harmonic oscillator; finding a plausible physical reason for it in the oceans seems to be the harder task. And incidentally, the only one I know about has a period of close to 3.6 years.

  • george // July 18, 2009 at 3:13 pm | Reply

    David:
    The underlying assumption made by Tamino for that two box model example that you linked to is that the system itself is not oscillatory. It basically assumes that the climate system is like a big heat capacitor.

    So it should not be surprising that you see no ringing from any forcing (volcanic or otherwise).

    The systems you see ringing in are inherently oscillatory.

    For example, the electrical circuit analog to tamino’s assumed model is the RC circuit — ie, circuit with just a capacitor and a resistance (but with no inductance).

    The equation that governs the climate model assumed for tamino’s post (described here) is basically the same one that governs a simple RC circuit (no inductance) with an applied voltage.

    When you apply a step voltage to an RC circuit , the voltage on the capacitor ramps up much as tamino shows temperature doing in the graphs on his post that you linked to.

    That’s the RC circuit response to an applied voltage: system approaches final voltage with no ringing (ie, no overshoot followed by undershoot)

    You only might see ringing if you add inductance to the equation — ie, make it an RLC circuit — because then the system becomes a damped harmonic oscillator (http://en.wikipedia.org/wiki/Harmonic_oscillator).

    I have no idea whether there is any ringing in the climate system and certainly don’t know enough about how the system as a whole and el nino in particular work to speculate about a “mechanism” (speculation which would be more than a little premature, given the fact that there might not even be any such ringing behavior, at any rate).

    I am just curious if Tamino (or perhaps someone else) has actually explored this at all.

    It’s so obvious that my guess is someone must have done so. Who knows, maybe it was discarded as a possibility long ago.

  • David B. Benson // July 18, 2009 at 7:24 pm | Reply

    george // July 18, 2009 at 3:13 pm — Thanks, but it is not just a series RC circuit because there are two time constants. Which was the point of the thread, I think.

    Anyway, nothing simple is going to model (almost all) the ocean oscillations well. These can be described at best as quasi-periodic.

  • David B. Benson // July 18, 2009 at 11:37 pm | Reply

    Another example illustrating the difficulties in studying ocean dynamics:
    http://blogs.nature.com/climatefeedback/2009/07/_indian_ocean_gatekeeper_to_cl.html

    And another, in which a bit of progress is well-explained:
    http://www.sciencedaily.com/releases/2009/07/090716113358.htm

  • george // July 19, 2009 at 12:26 am | Reply

    David:

    Perhaps I was a bit sloppy and what I really should have said was this:

    “The underlying assumption made by Tamino for that two box model example that you linked to is that the system itself is not oscillatory. It basically assumes that the climate system is like two heat capacitors — one very large and one smaller — with some leakage between them.”

    Unless someone can convince me otherwise (by showing me oscillatory solutions to Tamino’s coupled equations for his 2 box system [for non-oscillatory forcing, of course]) I will continue to doubt that oscillatory behavior (“ringing”) can arise from that 2 box system in response to an impulse forcing (eg, from el nino or a volcano).
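
    One quick way to check this numerically is to write down a generic two-box energy-balance system and look at the eigenvalues of its coefficient matrix. The parameterization below is an assumption for illustration, not necessarily the exact equations in Tamino’s post; for positive heat capacities and positive feedback and coupling coefficients the eigenvalues come out real and negative, so the free response is a sum of decaying exponentials with no ringing.

    import numpy as np

    # Assumed two-box form (illustrative, not necessarily Tamino's exact equations):
    #   C1 dT1/dt = -lam*T1 - kap*(T1 - T2) + F(t)
    #   C2 dT2/dt =            kap*(T1 - T2)
    C1, C2 = 7.0, 100.0     # heat capacities of the fast and slow boxes (illustrative)
    lam, kap = 1.1, 0.7     # radiative-feedback and box-coupling coefficients (illustrative)

    A = np.array([[-(lam + kap) / C1,  kap / C1],
                  [          kap / C2, -kap / C2]])

    eigvals = np.linalg.eigvals(A)
    print(eigvals)                      # both real and negative
    print(np.iscomplex(eigvals).any())  # False: no oscillatory (ringing) modes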

    As far as your statement that “nothing simple is going to model (almost all) the ocean oscillations well. These can be described at best as quasi-periodic”, I have no dispute with that.

  • David B. Benson // July 19, 2009 at 10:11 pm | Reply

    george // July 19, 2009 at 12:26 am — Well stated. We are in agreement.

  • george // July 20, 2009 at 3:04 pm | Reply

    David:

    We are in agreement about your latter point “nothing simple is going to model (almost all) the ocean oscillations well. These can be described at best as quasi-periodic”.

    Perhaps it is due to lack of clarity on my part, but I don’t believe you have really addressed my original question to Tamino above, which was about possible ringing in the global temperature in response to an impulse like the el nino of 1998:

    I am curious if you have ever tried modeling climate “noise” — from El Nino in particular — that is based on the impulse (or step) forced “damped oscillator” case.

    Perhaps I was not being clear, but what I was really asking about is whether modeling the climate “noise” with an exponentially decaying oscillation helps in any way to “mimic” the observed temperature development after an el nino, for example.

    To answer that question, you have to focus on the actual temperature record (not the response of some model to forcing, which can only show you whether that model itself produces ringing — and as I indicated above, I don’t believe tamino’s 2 box model even supports oscillatory behavior in response to an impulse or other non-oscillatory forcing).

    I think it is important to note here that what I am asking is different from the question of whether el nino itself is “oscillatory” (or even “quasi-periodic”). In fact, in my comments above, I asked about possible ringing after el nino assuming that el nino is essentially a random impulse forcing (ie, no oscillatory behavior to el nino itself). While that assumption about el nino (random impulse) may not be accurate, it is nonetheless the same one that Tamino and others make in their AR1 and ARMA noise models.

    Finally, as I said above, I think focusing on whether there is a “plausible mechanism” for ringing before one even knows if there is any evidence for it is more than a little premature.

    Anyway, I think this dead horse has probably been beaten enough. As I indicated above, I really have no idea whether any of this has any validity at all. Was just basically curious, that’s all.

  • David B. Benson // July 20, 2009 at 10:08 pm | Reply

    george // July 20, 2009 at 3:04 pm — Aha.

    Yes, the two box model helps to explain the climate response to forcings, in particular volcanic aerosols:
    http://tamino.wordpress.com/2008/10/19/volcanic-lull/

    It is not completely clear to me how to treat ENSO, but I found a paper which I briefly described on the “Warming Interrupted” thread. I think I can say something about ringing, but it makes most sense to use that thread.

  • Steve L // July 30, 2009 at 1:51 am | Reply

    Sorry for not reading all of the comments. Maybe somebody has already noted this gambler’s fallacy thing:
    Okay, we rolled double sixes in 1998. Now, starting in 1999 and going forward, you expect the probability of going longer and longer without rolling double sixes again to keep shrinking. However, if you’re not really paying attention and just rolling away having a good time, and then you notice that you haven’t rolled double sixes for a while, you can’t calculate the likelihood of not rolling another double six as if you had started the calculation in 1999. That is, it’s very unlikely to flip a coin and get 20 heads in a row, but after 19 heads the next one is still 50:50. And I guess I’m saying that 2012 is a prediction starting from before 1999. If you were to consider it again starting in 2008, the probability of breaking the record before 2008 is irrelevant, since it didn’t happen, so don’t you get some date later than 2012 if you start in 2008?
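
    The conditioning point can be checked directly with the simple model from the post: a steady 0.017 deg C/yr trend plus Gaussian white noise, with the 1998 value 2.6 standard deviations above the trend line. The noise standard deviation of 0.1 deg C used below is an illustrative stand-in, not a number quoted in the post, so the percentages will not exactly match Tamino’s; the point is how conditioning on the record surviving through 2008 pushes the distribution later.

    import numpy as np
    from scipy.stats import norm

    rate, sigma, record = 0.017, 0.1, 2.6  # trend (deg C/yr), assumed noise sd (deg C), 1998 record (in sd units)
    years = np.arange(1, 31)               # years after 1998

    # Probability that year 1998+n does NOT exceed the 1998 record, each year taken independently
    p_not = norm.cdf(record - rate * years / sigma)
    survival = np.cumprod(p_not)           # P(record still standing after year n)

    p_by_2012 = 1.0 - survival[13]                           # unconditional
    p_by_2012_given_2008 = 1.0 - survival[13] / survival[9]  # given it survived through 2008
    print("P(broken by 2012)                         =", round(float(p_by_2012), 2))
    print("P(broken by 2012 | unbroken through 2008) =", round(float(p_by_2012_given_2008), 2))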

  • sentient // July 30, 2009 at 12:57 pm | Reply

    In some ways, I applaud the sense of urgency that accompanies the perceived need to do something to affect climate change. The need is there in more ways than you presently know. But the means could be another matter entirely. The Akkadian Empire under Sargon (2,300-2,200 BC), mankind’s first empire ever, succumbed to climate change that happened rather suddenly. A 300 year long period of drought struck this nascent civilization and toppled what turned out to be only a 100 year empire. The Old Kingdom of Egypt and the Harappans of the Indus Valley suffered a similar fate 4,200 years ago, succumbing to an abrupt drought that ended those civilizations, with Egyptians “forced to commit unheard of atrocities such as eating their own children and violating the sacred sanctity of their own dead (Fekri Hassan, 2001)”. The Mayans had pretty much the same luck with three periods of extreme drought at 810, 860 and 910 AD. Sadly just two years after the last drought which saw 95% of the Mayan population gone, wet years returned to the Yucatan. A reconstruction from fossil algae in sediments from Drought Lake in North Dakota of the past 2000 years found that dry conditions were far and away the rule in the High Plains, with the Dust Bowl conditions of the 1930’s one of the lesser dry spikes found in the record. Half of the warming that brought us out of the last ice age (the Wisconsin) occurred in less than a decade.

    There were 24 Dansgaard-Oeschger oscillations between this interglacial, the Holocene, the interglacial in which all of human civilization has occurred, and the last one, the Eemian, in which the first fossils of Homo sapiens are to be found. D-O oscillations average 1,500 years, and have the same characteristic sawtooth temperature shape that the major ice-age/interglacials do, a sudden, dramatic, reliable, and seemingly unavoidable rise of between 8-10C on average, taking from only a few years to mere decades then a shaky period of warmth (less than interglacial warmth), followed by a steep descent back into ice age conditions. Each D-O oscillation is slightly colder than the previous one through about seven oscillations; then there is an especially long, cold interval, followed by an especially large, abrupt warming up to 16C. During the latter parts of the especially cold intervals, armadas of icebergs are rafted across the North Atlantic (Heinrich events) their passage recorded reliably by the deep ocean sediment cores which capture the telltale signature of these events in dropstones and detritus melted out of them. We know with absolute certainty that these events happen, with evidence of D-O oscillations extending back some 680 million years. We do not know yet precisely what causes them. What we do know is that the past 6 interglacials (dating back to the Mid Pleistocene Transition) have lasted roughly half of a precessional cycle, or 11,500 years, which just happens to be the current age of the Holocene. What we know is that N65 latitude insolation values are very close now to what they were at the close of the Eemian. What we also know is that GHGs seem to have played only a spectator role to all of these natural transitions, with temperature changes leading GHG concentrations by a considerable margin of time (800-1,300 years). What we do not know is if anthropogenic sourced GHGs can trigger a climate change event. What we do know is that earth’s climate is bimodal, cold (90%) and warm (10%), with the transition times (such as at the end of an interglacial) well known from proxy records to be quite sensitive to forcings we do not yet understand, and the forcings we have identified seemingly incapable of producing the responses we see in the paleoclimate record. Including the recent paleoclimate record.

    The climb out from the Last Glacial Maximum of the Wisconsin ice age (called Termination 1, with sea level bottoming out about 121 meters, ~397 feet, below present) into the Holocene is studded with the Younger Dryas, a 1,300-year, near-instantaneous return to ice-age conditions. “Briefly, the data indicate that cooling into the Younger Dryas occurred in a few prominent decade(s)-long steps, whereas warming at the end of it occurred primarily in one especially large step of about 8°C in about 10 years and was accompanied by a doubling of snow accumulation in 3 years; most of the accumulation-rate change occurred in 1 year (National Research Council, 2002)”. Almost as suddenly, we came out of it: “Taylor et al. (1997) found that most of the change in most indicators occurred in one step over about 5 years at the end of the Younger Dryas, although additional steps of similar length but much smaller magnitude preceded and followed the main step, spanning a total of about 50 years (NRC, 2002)”.

    Termination 1 began with what is referred to as Meltwater Pulse 1a (MWP-1a), centered at about 14,680 years ago, which resulted in a 24 meter rise (about 78 feet) in sea level believed to have occurred at the rate of 4.5 cm (about 2 inches) a year. It was followed around 12,260 years ago by MWP-1b, with a 28 meter (about 92 feet) rise nearer 5 cm per year. Recent model results put current sea level rise at 32 cm per century. With natural rises clocked at 5 cm/yr (or 500 cm per century), we (meaning us) have a lot of hard work ahead of us if we hope to trump Mother Nature’s most recent finest result.
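
    To put those rates on a common footing, here is a minimal sketch in Python using only the figures quoted above (the 32 cm/century value is the model estimate mentioned; treat the output as purely illustrative):

```python
# Unit conversion for the sea-level-rise rates quoted above (illustrative only).

mwp1a_rise_m = 24.0              # quoted rise for Meltwater Pulse 1a, meters
mwp1a_rate_cm_per_yr = 4.5       # quoted MWP-1a rate, cm per year
modern_rate_cm_per_cent = 32.0   # quoted recent model estimate, cm per century

# Put both rates in cm per century for a direct comparison.
mwp1a_rate_cm_per_cent = mwp1a_rate_cm_per_yr * 100.0

print(f"MWP-1a rate:  {mwp1a_rate_cm_per_cent:.0f} cm/century")
print(f"Modern rate:  {modern_rate_cm_per_cent:.0f} cm/century")
print(f"Ratio:        {mwp1a_rate_cm_per_cent / modern_rate_cm_per_cent:.0f}x")

# Implied duration of the 24 m pulse at the quoted rate.
print(f"MWP-1a duration: {mwp1a_rise_m * 100.0 / mwp1a_rate_cm_per_yr:.0f} years")
```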

    Between 6,000 and 7,000 years ago, a period known to geologists and paleoclimatologists as the Holocene Climate Optimum, sea levels peaked about 6 meters (about 20 feet) higher than today, and during the Eemian Optimum some 20 meters (about 60 feet) higher than today (some say 70 meters). During the seven post-MPT ice ages, sea levels dropped some 100 or more meters below present, the water tied up in the miles-thick ice sheets that spread in North America as far south as Kansas. These are just some of the facts of the abrupt climate changes which we, as Homo sapiens, have experienced. General Circulation Models, of which the IPCC references 23, have yet to reproduce a single known abrupt paleoclimate change when fed with the proxy data. The latest GCMs produce predictions based on a variety of input data and complex equations which few of us would understand. But for all the complexity and investment, they are just predictions.

    Belief in, and acting as a result of, such predictions has opened up what may be the first chapter in faith-based science (W. should be so proud). Understanding the history of climate change provides a factual understanding of far more alarming climate changes that have actually happened, with sea level changes and temperature shifts that dramatically overshadow any faith-based prediction you have yet heard.

    What might be quite ironic is that if GHG-predicted global warming is in fact real, and, at half of a precessional cycle, we are near to the cliff of the next natural shift to an ice age, we may find ourselves needing to generate as much GHGs as possible to ease our transition into the next ice age. So as I said at the beginning, doing something about climate change is not necessarily a bad thing. Doing the right thing might actually be quite another. The ice ages and associated interglacials are well known to be paced by the eccentricity, obliquity and precession cycles in earth’s rickety orbit. These we will do nothing about. D-O oscillations show strong evidence of being tied to the 1,500-year cycle of solar output, something we cannot change.

    So be ever thoughtful of both facts and predictions before leaping to a conclusion. It was in fact a LEAP that terminated the last interglacial: the cold Late Eemian Aridity Pulse, which lasted 468 years and ended with a precipitous drop into the Wisconsin ice age. And yes, we were indeed there. We had been on the stage as our stone-age selves for about the same length of time during that interglacial as our civilizations have been during this one.

    Meanwhile, enjoy the interglacial!

  • Mark // July 30, 2009 at 3:16 pm | Reply

    What in all that long and boring discourse says that physics is wrong?

    Because running the physics shows that AGW is real and we are going to get in trouble over it.

    “So as I said at the beginning, doing something about climate change is not necessarily a bad thing. ”

    And the death of the United States would be VERY beneficial to China and Russia.

    Does this mean that they should be looking forward to all-out nuclear war???

  • dhogaza // July 30, 2009 at 4:54 pm | Reply

    What in all that long and boring discourse says that physics is wrong?

    Honestly, sentient’s post reminds me of the Congressional record of a filibuster where a Senator reads a textbook into the record in order to fill time and hold the floor…

  • Hank Roberts // July 30, 2009 at 6:51 pm | Reply

    Manabe, S., & Stouffer, R. J. (1995). Simulation of abrupt climate change induced by freshwater input to the North Atlantic Ocean. Nature. (Cited by 218.)

    http://scholar.google.com/scholar?cites=16087350614408640761&hl=en

  • Gary // July 30, 2009 at 8:18 pm | Reply

    The trend you are using is only 1.7 degrees per century. That is at the very low end of IPCC and other projections. I would love to see your analysis for some of the mid-range or severe projections (3-5 degrees).

    No one is very worried about an increase of 1.7 degrees per century.

  • David B. Benson // July 30, 2009 at 9:24 pm | Reply

    sentient // July 30, 2009 at 12:57 pm — Unfortunately, much of what you posted is simply wrong. I’ll offer but one correction: read
    http://en.wikipedia.org/wiki/Orbital_forcing
    to understand that the current interglacial was scheduled to be about as long as that during MIS 11 (that is, barring AGW).

  • Ian Forrester // July 30, 2009 at 10:31 pm | Reply

    Gary, you seem to be confused. The 1.7C per century is a measured trend for right now. It is expected that as feedbacks become more evident that rate will increase.

    The IPCC rate you quote of 3-5C is for a doubling of CO2 concentration.

  • Gary // July 31, 2009 at 12:43 am | Reply

    Point taken. But they actually say that will be the total change by 2100, so the rate of change will be lower than that in the first half of the century and larger in the second half. But since CO2 has increased since 1998, we still should be using a rate of change larger than the past linear trend. If global warming is caused by CO2, then the rate of increase should be increasing and we should be using a number larger than the past trend. If the 0.017 is all natural variation, then you are assuming from the start that CO2 is not affecting the trend. If it is affecting the trend, the higher concentration of CO2 should already have increased it.

  • Matt Andrews // July 31, 2009 at 1:59 am | Reply

    Just a few quick responses to that long post from “sentient”:

    What we also know is that GHGs seem to have played only a spectator role to all of these natural transitions, with temperature changes leading GHG concentrations by a considerable margin of time (800-1,300 years).

    I’m sorry, but “a spectator role” is not an accurate representation of the broad scientific consensus at all. Orbital forcings are not sufficient to explain the full warming at the start of interglacials; the most probable model is that orbital forcings generated some initial warming, which (after a lag of 800+ years, as you say) caused increases in GHGs, which in turn forced temperatures higher, and a series of positive feedbacks (including raised GHGs) brought the climate to a new equilibrium at a higher temperature (the interglacial).

    What we do not know is if anthropogenic sourced GHGs can trigger a climate change event.

    They are strongly likely to have been the primary cause of the significant warming in the last 40 years, and to have been a major factor in the entire warming since the late 19th century. That warming on a global scale is unprecedented in the last million+ years, which probably qualifies it as a “climate change event”. There is substantial evidence that GHGs can modify climate; I doubt you’d have much scientific support for the suggestion that anthropogenic GHGs would be somehow different.

    Belief in, and acting as a result of, such predictions [GCM models] has opened up what may be the first chapter in faith-based science (W. should be so proud).

    This is so astonishingly ignorant that I don’t really know where to begin. What, exactly, do you suggest should be the basis for projecting future climate? Double-blind tests? Given the unfortunate limitation that we only have one planet to experiment with, computer models are the only possible approach.

    Current climate models are highly successful at replicating past climate events, and are based on fundamental physical laws. They’re not perfect, but they are a pretty good basis for projection at this point.

    we are near to the cliff of the next natural shift to an ice age, we may find ourselves needing to generate as much GHGs as possible to ease our transition into the next ice age.

    No, we aren’t. The next orbital forcing that would be likely to produce an ice age is due approximately 30,000 years from now. And even that is now unlikely given our disruption of the climate system.

    Overall, it’s great that you’re interested in climate science, and devoting energy to understanding it; however, you really should devote more time to researching it, and perhaps less time to expounding about it, until you’ve acquired a sounder knowledge of the field.

  • Hank Roberts // July 31, 2009 at 5:31 am | Reply

    > read somewhere

    Here, most likely:

    http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/#comment-12359

    “… The GISS/HadCRU difference seems to be because HadCRU omits the arctic while GISS estimates it …”

    Details further down in that thread.

  • Mark // July 31, 2009 at 8:21 am | Reply

    “But since CO2 has increased since 1998 we still should be using a rate of change larger than the past linear trend.”

    Express the change as a % of the CO2 already out there.

    For each % increase, there is a linear increase in temperature.

    But to increase the CO2 by 10% takes more CO2 the fifth time you do it than it does the first time.
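
    A minimal sketch of that point, using the commonly cited logarithmic forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al., 1998); the 280 ppm baseline and the 10% step size are illustrative assumptions, not values from this thread:

```python
import math

# Logarithmic CO2 forcing approximation (Myhre et al., 1998):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
# Equal *percentage* increases in CO2 give equal forcing increments,
# but each step costs more CO2 in absolute (ppm) terms than the last.

C0 = 280.0     # illustrative baseline concentration, ppm
step = 0.10    # 10% increase per step

c = C0
for i in range(1, 6):
    c_next = c * (1.0 + step)
    added_ppm = c_next - c
    forcing = 5.35 * math.log(c_next / C0)
    print(f"step {i}: {c:6.1f} -> {c_next:6.1f} ppm "
          f"(+{added_ppm:4.1f} ppm), total forcing {forcing:.2f} W/m^2")
    c = c_next
```

    Each 10% step adds the same increment of forcing (about 0.51 W/m² under this approximation), while the ppm added per step keeps growing.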

  • Mark // July 31, 2009 at 8:27 am | Reply

    “No one is very worried about an increase of 1.7 degrees per century.”

    No one is worried about the RISK of 1.7 per century.

    But since we have centuries of coal left, instead of reaching the danger point in 50 years at 3.4 degrees per century, we will reach the same danger in 100 years at 1.7.

  • Paul K // August 14, 2009 at 3:20 am | Reply

    I hope you are still taking questions on this thread, because I have one. It seems to me that taking calendar years is a rather arbitrary way to take climate measurements, when many of the datasets have monthly anomaly information. If you were asking the question, “what is the hottest twelve month period on record?” then you could get a very different time period as the answer.

    For example, it appears that the hottest 12-month period in the GISS record is from Jan 05 to Dec 05, but very close is the 12 months from Sep 06 to Aug 07. Lagging these two periods significantly is the period from Oct 97 to Sep 98.

    My Q: Do the other temperature records show these same periods as the three hottest, and does the Oct 97 to Sep 98 period remain as the hottest 12 month period in those records?
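
    One way to attack that kind of question is to compute rolling 12-month means directly from a monthly anomaly series. A minimal sketch, assuming you substitute real monthly anomalies (from GISS, HadCRU, etc.) for the synthetic placeholder used here:

```python
import numpy as np

# Placeholder monthly anomaly series (30 years of synthetic data);
# replace with actual monthly anomalies from the record of interest.
rng = np.random.default_rng(0)
monthly_anomalies = rng.normal(loc=0.5, scale=0.15, size=360)

# Rolling 12-month means: element k is the mean of months k .. k+11.
window = 12
rolling = np.convolve(monthly_anomalies, np.ones(window) / window, mode="valid")

hottest_start = int(np.argmax(rolling))
print(f"Hottest 12-month window starts at month index {hottest_start}; "
      f"mean anomaly {rolling[hottest_start]:.3f} deg C")
```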

  • Deep Climate // August 14, 2009 at 4:34 am | Reply

    I do know that NOAA had 2005 as the warmest year, while HadCRU has 1998 as the warmest year. I haven’t checked rolling 12-month periods though.

    Also, for what it’s worth, if you were to take an average of the three, 1998 would come out ahead, just because it is so much higher than the other years in the HadCRU record.

    I believe the divergence between HadCRU and NASA/GISS has been discussed elsewhere on this blog; it is thought to result largely from the omission of Arctic estimates in the HadCRU series.

  • Sekerob // August 14, 2009 at 10:50 am | Reply

    The GISTEMP 7-month rolling mean for 2009 makes the southern hemisphere, by far the largest part of it ocean, the warmest on record in modern times. Now, is that a signal? (GISTEMP ‘58-’09: I have not looked further back than ‘58, since the object of the chart was to match the MLO CO2 data-capture period.) Cooling? Well, the sum of all 8 collections I track shows nothing even hinting at it. I don’t have much faith in RSS/UAH as being a good echo, with lag, of what lies ahead; they spiked up by 0.4C in July, as some will have seen and posted about. Christy/R. Spencer must be cringing.

  • Deep Climate // August 14, 2009 at 4:50 pm | Reply

    It should also be mentioned that 2007 is very warm in GISS (just slightly above 1998, I believe), so calendar years are close enough for retrospective analysis of trends and variation about the trend.

  • Sekerob // August 14, 2009 at 8:16 pm | Reply

    DC, can you point me to the data set where the full-year global mean shows 2007 a nose-hair warmer than 1998? I’ve seen it in charts or seen it mentioned before, but was it global or USA?

    Here’s my ref: http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt for ocean+land

  • Sekerob // August 14, 2009 at 8:18 pm | Reply

    Do note that, whilst one set shows 2007 warmer than 1998, 2005 is still warmer, concurrent with HadCRUT3v:

    http://data.giss.nasa.gov/gistemp/graphs/Fig.A.txt
