Open Mind

Bjorn Lomborg: How did you get those numbers?

October 14, 2008 · 24 Comments

Some of you might wonder why I make so many posts about the impact of noise on trend analysis: not only can noise lead to mistaken conclusions about temperature trends, it can also be abused by those who deliberately wish to mislead readers. The reason is that this is still a common tactic used by denialists to confuse and confound the public.

Case in point: Bjorn Lomborg has written an article for the British newspaper The Guardian in which he attempts to persuade readers that climate data indicate conditions are much better than expected. Here’s an excerpt:

The most obvious point about global warming is that the planet is heating up. It has warmed about 1C (1.8F) over the past century, and is predicted by the United Nations’ climate panel (IPCC) to warm between 1.6-3.8C (2.9-6.8F) during this century, mainly owing to increased CO2. An average of all 38 available standard runs from the IPCC shows that models expect a temperature increase in this decade of about 0.2C.

But this is not at all what we have seen. And this is true for all surface temperature measures, and even more so for both satellite measures. Temperatures in this decade have not been worse than expected; in fact, they have not even been increasing. They have actually decreased by between 0.01 and 0.1C per decade. On the most important indicator of global warming, temperature development, we ought to hear that the data are actually much better than expected.

Lomborg doesn’t say exactly what he means by “this decade,” nor does he state exactly what data set(s) lead him to his conclusions. But it’s a pretty safe bet that by “both satellite measures” he refers to the RSS and UAH estimates of TLT (lower-troposphere temperature). The most natural meaning of “this decade” is — well, this decade, i.e., the 2000’s. So I computed the trend and its uncertainty (in deg.C/decade) for three data sets: NASA GISS, RSS TLT, and UAH TLT, using data from 2000 to the present. To estimate the uncertainties, I modelled the noise as an ARMA(1,1) process. Here are the results:

Data   Rate    Uncertainty
GISS   +0.11   ±0.28
RSS    +0.03   ±0.40
UAH    +0.05   ±0.42

All three of these show warming during “this decade,” although for none of them is the result statistically significant.
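
The uncertainties above come from modelling the noise as an ARMA(1,1) process. As a rough illustration of why the noise model matters, here is a minimal pure-Python sketch using the simpler AR(1) correction, which inflates the white-noise standard error by sqrt((1+rho)/(1−rho)); the synthetic data, function name, and AR(1) shortcut are mine, not the post’s actual computation:

```python
import math
import random

def trend_with_uncertainty(y):
    """OLS slope per time step, plus two standard errors:
    one assuming white noise, and one inflated by the AR(1)
    factor sqrt((1 + rho) / (1 - rho)) to allow for autocorrelation."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_white = math.sqrt(s2 / sxx)
    # lag-1 autocorrelation of the residuals
    rho = sum(resid[i] * resid[i + 1] for i in range(n - 1)) / sum(r * r for r in resid)
    inflation = math.sqrt((1 + rho) / (1 - rho))
    return slope, se_white, se_white * inflation

# Demo: a 0.002 deg/step trend buried in AR(1) noise (phi = 0.6),
# about 9 years of synthetic "monthly anomalies."
random.seed(42)
noise, x = [], 0.0
for _ in range(108):
    x = 0.6 * x + random.gauss(0.0, 0.1)
    noise.append(x)
y = [0.002 * i + e for i, e in enumerate(noise)]
slope, se_white, se_adj = trend_with_uncertainty(y)
```

With positively autocorrelated noise the corrected error bar comes out noticeably wider than the naive white-noise one, which is exactly why a “trend” that looks significant under a white-noise assumption can fail to be significant under a realistic noise model.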

Maybe by “this decade” he’s referring to the last 10 years. Let’s make the same calculation for the same data sets, using the last 10 years of data, from October 1998 through September 2008:

Data   Rate    Uncertainty
GISS   +0.18   ±0.23
RSS    +0.10   ±0.34
UAH    +0.11   ±0.35

Once again all three data sets indicate warming but none of the results is statistically significant.

Maybe, by “this decade,” he’s referring to data since 1998? But that would be the worst cherry-picking possible, extending “this decade” to more than 10 years ago just so he could get the most out of the giant El Niño of 1998. We all know that would be cheating, right? Let’s make the same calculation using data from January 1998 to the present:

Data   Rate    Uncertainty
GISS   +0.10   ±0.22
RSS    -0.07   ±0.38
UAH    -0.05   ±0.38

With this start date one can finally obtain negative trend rates, but only for 2 of the 3 data sets. And again, none of the results is statistically significant. Even allowing this dreadfully dishonest cherry-picked start date, the most favorable case for Lomborg’s claim indicates that the trend rate could be as high as +0.31 deg.C/decade. That’s right: based on these data the trend rate could well be 50% higher than the IPCC “projection” of “about” 0.2 deg.C/decade.
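
The “+0.31” and “50% higher” figures follow directly from the numbers in the table above, for RSS (the data set most favorable to Lomborg’s claim); spelling out the arithmetic:

```python
# RSS TLT trend since Jan 1998 and its quoted uncertainty (deg C/decade)
rate, half_width = -0.07, 0.38
upper = rate + half_width   # upper end of the interval: +0.31
ratio = upper / 0.2         # 1.55, i.e. about 50% above the IPCC's ~0.2
```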

How, then, does Lomborg arrive at his figures? Only God and Bjorn can be sure. But here’s my guess: he took the results from 1998, computed the uncertainty based on assuming that the data are a linear trend plus white noise (which we know, without doubt, is a terribly mistaken assumption), then computed his “range” by using only +/- one sigma. That would, in fact, give a range from -0.01 to -0.1 deg.C/decade.
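
To see what that guess implies, treat Lomborg’s quoted range as a ±1-sigma interval and back out the numbers; this is purely illustrative, since we don’t actually know his computation:

```python
lo, hi = -0.10, -0.01   # Lomborg's quoted range, deg C/decade
center = (lo + hi) / 2  # implied central estimate: about -0.055
sigma = (hi - lo) / 2   # implied 1-sigma half-width: about 0.045
# Even keeping this (much too small) white-noise sigma, a conventional
# 2-sigma interval already spans zero:
lo2, hi2 = center - 2 * sigma, center + 2 * sigma  # about -0.145 to +0.035
```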

What do you have to do to make this happen?

  1. Start at the beginning of 1998, more than 10 years ago, but call it “this decade.”
  2. Compute the probable error using a white-noise assumption, which is known without doubt to be wrong.
  3. Compute a confidence interval using only +/- one sigma, when we know that a normal random variable has about a 32% chance of falling outside the +/- 1 sigma range.
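
The “about a 32%” figure is just standard normal arithmetic, P(|Z| > 1) = 2(1 − Φ(1)); a quick check using the error function:

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_outside_1sigma = 2.0 * (1.0 - phi(1.0))  # about 0.317
p_outside_2sigma = 2.0 * (1.0 - phi(2.0))  # about 0.046
```

So a ±1-sigma “confidence interval” will miss the true value nearly a third of the time even when every other assumption is correct.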

One of these might be considered an honest mistake. If you’re woefully ignorant of statistics, you may not know that the noise in global temperature isn’t white.

But the others are outright dishonest. Computing a confidence interval using only 1 sigma is bound to be wrong nearly one third of the time. And starting with the beginning of 1998, referring to “this decade” as starting more than 10 years ago, is cherry-picking taken to the extreme.

I’ve previously said “Those who point to 10-year ‘trends,’ or 7-year ‘trends,’ to claim that global warming has come to a halt, or even slowed, are fooling themselves.” I may have been mistaken; is Lomborg fooling himself, or does he know exactly what he’s doing?

So, Mr. Lomborg, we’re all very curious: how did you get those numbers?

  • Categories: Global Warming

    24 responses so far

    • Mark Hadfield // October 14, 2008 at 11:17 pm

      But surely he wouldn’t have made such a claim without citing a peer-reviewed paper in which the methods and results are described in detail?

    • David B. Benson // October 14, 2008 at 11:22 pm

      Tamino — I am sure that God is confused and questioning as well…

    • Brian D // October 15, 2008 at 12:49 am

      Mark Hadfield: You’d imagine so, but there aren’t any citations. There are quotes from the Associated Press and the BBC, but that’s it.

      Tamino, have you considered sending this to the Guardian as a rebuttal (perhaps through an intermediary, if you’d prefer to remain anonymous as usual)? Lomborg, if he is fooling himself, isn’t going to be convinced by mere facts, but at least points like this can prevent others from being fooled themselves.

    • Ray Ladbury // October 15, 2008 at 1:13 am

      I’m SHOCKED!!! SHOCKED!!! to find Lomborg playing fast and loose with statistics. (where are my winnings)

      [Response: I think this is the beginning of a beautiful friendship.]

    • François GM // October 15, 2008 at 1:47 am

      What are you talking about ?? Confidence intervals are estimate intervals of a true endpoint, which could be human population parameters, distance to the moon or temperature trends. You don’t apply confidence intervals on the true endpoint but on proxies or measures to estimate the true endpoint. A - 0.1 temp trend from 2001 to 2008 is a -0.1 temp trend from 2001 to 2008 - period. There are NO confidence intervals. It is the true final endpoint, if you define it that way. You may argue that a short-term trend from 2001- 2008 is not a proper TRUE endpoint. Fine. You may then apply statistics (but which ones ??? good luck) to determine the confidence intervals to estimate the TRUE endpoint. But tell us - what is the TRUE endpoint ? Temperature trend from 1980 to 2030 ? From 2000 to 2099 ? Up to you to cherry pick but PLEASE DO NOT apply confidence intervals on the true endpoint.

      [Response: My opinion: there's no hope for you.]

    • Geoff Larsen // October 15, 2008 at 2:01 am


      I suggest Lomborg’s figures refer to since Jan01 & he may have got them from here (to Jul 08-scroll down to charts): -

      or possibly from here (to Aug08-again scroll down to chart):-

      [Response: I doubt it. He'd have done a better (but not necessarily correct) job.]

    • george // October 15, 2008 at 2:12 am

      It is a well-established fact that Lomborg does not hold up well to close scrutiny, so it is no real surprise that he is again mistaken.

      And as far as knowing just how Lomborg got his numbers, I, for one would prefer to remain ignorant of Lomborg’s own ignorance.

      I seriously doubt there is anything worthwhile to be learned from it — other than perhaps how NOT to do statistics.

      I actually find it more than a little depressing that someone like Lomborg keeps getting public exposure for his opinions, which seem to require almost continual debunking by real scientists.

      Harvard biologist Edward O Wilson included Lomborg among what he termed “the parasite load on scholars who earn success through the slow process of peer review and approval.”

      Based on what I have seen (e.g., Lomborg’s book and the responses of several scientists, in Scientific American and elsewhere, to specific claims made in the book), I have to say that I believe that to be a fairly accurate assessment.

    • Bob North // October 15, 2008 at 2:32 am

      Tamino -
      Another possibility you didn’t examine is that Lomborg is referring to the period from Jan 2001 to the present (i.e., part of the first decade of the 21st century). This would probably result in a much lower “trend” but I don’t know what it would do to the uncertainty values. Using the period from Jan 2001 would be consistent with his statement that IPCC expected “about 0.2C” for this decade, since the IPCC did expect about 0.2C per decade for the first two decades of this century.

    • Duane Johnson // October 15, 2008 at 3:43 am

      None of the decade choices Tamino offers shows warming as large as 0.2C per decade. The degree to which natural variation can explain away this fact depends upon the statistical model that is used, as well as the method of measurement and the method of processing the measurements to arrive at the result. Since there hasn’t been a significant volcanic eruption in the period in question, and Tamino’s natural variation includes such effects, the odds are getting slim that the first two decades of the 21st century will in fact have warming as large as 0.2C/decade, whether you start in Jan 2000 or Jan 2001 or 1998. If a significant volcano should occur, the odds are slimmer still.

      I’m pleased to see that Lomborg is questioning IPCC projections. In the past, he has been content to mostly accept their conclusions on temperature while exposing the absurdity of the economic conclusions (e.g. Stern report).

      [Response: The statistical analysis is both sound and robust, but it appears we have yet another person who will deny it mainly because he doesn't like the outcome.

      I'm no expert in economics, but Stern's credentials are vastly superior to Lomborg's, and what I hear from those who do know the subject is that Lomborg's economics is every bit as bogus as his statistics.]

    • Philip Machanick // October 15, 2008 at 4:32 am

      I suspect some on this page are making the mistake of accepting the claim that Lomborg is a “statistician”. I have yet to track down anything to support this claim. His primary qualifications are in political science (in which he has published at least 2 papers, which makes him hardly more of an authority than me) and though he has worked as a statistics associate prof, I strongly suspect it was in a role of teaching stats to social sciences, as he was working in a political science department at the time. He certainly has not published anything of significance in statistics (or for that matter on climate change, but he has at least written books and copious newspaper articles in that field).

      The man is a certifiable bogon and I don’t know why anyone pays attention to him. Just shows how gullible the commercial media are.

    • Former Skeptic // October 15, 2008 at 5:16 am

      That was good analysis on Lomborg’s cherry picking.

      I’m actually more interested in Bjorn’s other comment:

      “…Likewise, and arguably much more importantly, the heat content of the world’s oceans has been dropping for the past four years where we have measurements. Whereas energy in terms of temperature can disappear relatively easily from the light atmosphere, it is unclear where the heat from global warming should have gone – and certainly this is again much better than expected.”

      I assume BL is taking information from Pielke Sr. where Ol’ Roger makes the rather brazen claim that “…global warming has actually halted, for now.” (see

      If you click on the relevant link in RPSr.’s post, it brings you to another post that details the source, i.e. OHC data from Josh Willis. I’m pretty sure that there’s something wrong with using only 7.5 yrs of upper ocean heat content data to conclude that GW has stopped since 2004. But then again, making such huge leaps of logic has not stopped RPSr in the past (re: the butterfly effect debacle, upper troposphere T, his crazed insistence that UHI seriously contaminates the overall surface T increase…)

      PS: A bit off topic, but could this be the moment that Pielke Sr. jumped the shark?

    • Patrick Hadley // October 15, 2008 at 12:27 pm

      One of the good features of this site is the way that Tamino always uses clear illustrations to demonstrate his points.

      Why has Tamino not simply shown us graphs of the data from the last decade so that we can see for ourselves what the trend has been?

      Surely this is a post that is crying out for some graphs.

    • void{} // October 15, 2008 at 1:45 pm

      Former Skeptic // October 15, 2008 at 5:16 am

      Where can I find info re:

      “The butterfly effect debacle, …”


    • Ray Ladbury // October 15, 2008 at 2:47 pm

      Patrick Hadley, your use of the word “trend” and decade is pretty much inconsistent unless you are interested in trends in the noise. Now you’ve been around here long enough to know this, so one wonders what the point of your post is.

    • george // October 15, 2008 at 3:03 pm

      Patrick asks

      Why has Tamino not simply shown us graphs of the data from the last decade so that we can see for ourselves what the trend has been?

      There is a significant irony in that statement.

      An important (if not central) theme in Tamino’s posts (as I see it, anyway) is that your eyes can fool you.

      While graphs are certainly useful in many cases, they can also lead you astray.

      “Eyeballing” is simply not a reliable way of determining trends.

      That is particularly true for short spans of time, for which any “underlying” trend (due to CO2 increases, for example) can easily be buried in the noise.

      Statistics — including the error bar on the calculated trend — is the only way that you can reliably approach the issue.

      Unfortunately, for short time spans, it is simply impossible to say what the “actual” trend is with precision because the error bar is large relative to the calculated trend and encompasses a fairly broad range of possible trend values.

      That’s precisely why Tamino says above that

      the most favorable case for Lomborg’s claim indicates that the trend rate could be as high as +0.31 deg.C/decade. That’s right, based on these data the trend rate could well be 50% higher than the IPCC “projection” of “about” 0.2 deg.C/decade.

      [Response: A good summary of the situation. Thanks.]

    • Former Skeptic // October 15, 2008 at 3:50 pm

      Hi void{}:

      The relevant thread is here:

      IMO it’s best read with a steaming cup of java. Ike Solem’s post (#53) is perhaps the best one explaining RPsr’s confusion. Hope that helps!

    • Patrick Hadley // October 15, 2008 at 4:29 pm

      Nobody is going to argue that graphs are on their own sufficient, but the fact that Tamino seems to have decided that his argument might not be helped by showing us the data on a graph is rather telling.

      Perhaps if our eyeballs were shown the data points gradually trending south our brains might not be so receptive to an argument that there is really a trend in the other direction.

      [Response: You want graphs? Try this.

      Clearly your brain is very receptive to excuses to deny the truth.]

    • Thomas Huxley // October 15, 2008 at 5:13 pm

      Danish biologist Kåre Fog exposes Lomborg’s errors in great depth.

    • David B. Benson // October 15, 2008 at 7:26 pm

      Patrick Hadley // October 15, 2008 at 12:27 pm — After reading Tamino’s ‘this’, consider the five and ten year averages from the HadCRUTv3 global surface temperature product:

    • Gavin's Pussycat // October 15, 2008 at 7:30 pm

      Patrick, you can create your own graphs — both those that you like and those that you hate — here.

    • Cthulhu // October 15, 2008 at 8:53 pm

      I’ve noticed that the 10 year window is moving past the 97/98 el nino peak now. Either they’ll switch to using “last 11 years” or “6 years”

      September 1998 - September 2008 with trend

      And using UAH for the benefit of paranoids.

    • Steve Bloom // October 15, 2008 at 9:30 pm

      Former Skeptic wrote: “(C)ould this be the moment that Pielke Sr. jumped the shark?”

      No, but there has been a fairly steady slide downhill since the start of his blog. I suspect the underlying motivation is his perception that the larger climate science world has failed to acknowledge his peculiar (note not denialist as such) views. I suspect there was some major professional disappointment (loss of funding for his RCM?) around the time he began the blog, which was also roughly the same time he retired. He went off the reservation quite abruptly, the details of which are documented in the first six months or so of his blog.

    • John Mashey // October 15, 2008 at 10:49 pm

      Regarding Lomborg being a statistician:

      I have a copy of The Skeptical Environmentalist, and both American & British versions of Cool It!

      I can’t recall actually seeing any real *statistical* analysis, i.e., the sort of stuff that tamino does so well, or folks from my old place of work like Tukey & Kruskal.

      Can anyone point at any real statistical analyses in those books? Perhaps it’s there and I just missed it.
      [Note: I don't count just selecting & graphing data as statistical analysis.]

    • Ray Ladbury // October 16, 2008 at 12:25 am

      Hi John, Lomborg’s statistics and analysis always bring to mind some quotes by Andrew Lang:

      “He uses statistics as a drunken man uses lampposts - for support rather than illumination”


      “He missed an invaluable opportunity to hold his tongue.”
