# Open Mind

#### September 21, 2007 · 18 Comments

What’s really happening to global temperature these days?

Let’s take a look at, say, the entire GISTEMP monthly temperature data set:

It’s natural to separate this record into four episodes of temperature change:

You can argue about where exactly the transitions occur, but the most recent episode, the modern global warming era, starts just about 1975, and runs to the present day. So that’s what we should look at to understand what’s happening “these days.” Temperature for this period of time, together with a linear fit to the data (we can call that the linear model of temperature change), looks like this:

The trend line rises at 0.018 deg.C/yr. Temperature does increase, but it also deviates strongly from the linear model. We can even identify the cause of some of these deviations. The strong, long-lasting El Niño of 1997-1998 caused a warm spell, while the eruption of Mt. Pinatubo caused a couple of years of very cool temperatures in 1992-1994:

Now suppose we look at isolated decades. We can look for a trend from 1975-1985, or from 1976-1986, etc. all the way up to 1997-2007. For each decade we can estimate the trend, and its likely error range. If we estimate the error ranges using the “usual” assumption that the random part of the temperature data is white noise, then we can plot the results here (I’ve also plotted the rate 0.018 +/- 0.002 deg.C/yr for the entire modern era 1975-2007 as a red circle):

The decade 1987-1997 (mid-year 1992) has the lowest rate (slightly negative, but not with statistical significance) of any decade. More important, its error range indicates that the true rate is between -0.011 and +0.008 deg.C/yr, but the overall modern-era rate is 0.018, well outside that range. The error range is a “realm of possibility” — the true rate is extremely likely (95%) to fall into that range — so if a given value (say the modern-era rate of 0.018) falls outside that range, then we can say with confidence that during that decade the true rate was below the modern-era rate. In other words, we reject the linear-increase model during this decade.
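The decade-by-decade calculation described above is just an ordinary least-squares fit on successive ten-year windows. Here is a minimal sketch, run on a synthetic monthly series (the modern-era trend of 0.018 deg.C/yr plus white noise) rather than the actual GISTEMP data:

```python
# Sketch of the decadal-trend calculation: OLS slope and its standard
# error on sliding ten-year windows, assuming white-noise residuals.
# The series here is synthetic, not the real GISTEMP anomalies.
import numpy as np

def trend_with_white_noise_error(t, y):
    """OLS slope and its standard error, assuming white-noise residuals."""
    n = len(t)
    tbar, ybar = t.mean(), y.mean()
    slope = np.sum((t - tbar) * (y - ybar)) / np.sum((t - tbar) ** 2)
    resid = y - (ybar + slope * (t - tbar))
    s2 = np.sum(resid ** 2) / (n - 2)           # residual variance
    se = np.sqrt(s2 / np.sum((t - tbar) ** 2))  # standard error of the slope
    return slope, se

rng = np.random.default_rng(0)
t = 1975 + np.arange(12 * 33) / 12.0             # monthly times, 1975-2007
y = 0.018 * (t - 1975) + rng.normal(0, 0.1, t.size)

# Slide a ten-year window across the record: 1975-1985, 1976-1986, ...
for start in range(1975, 1998):
    mask = (t >= start) & (t < start + 10)
    slope, se = trend_with_white_noise_error(t[mask], y[mask])
    # roughly a 95% error range, analogous to those plotted above:
    lo, hi = slope - 2 * se, slope + 2 * se
```

With genuinely white noise the 2-sigma range above is honest; with real temperature data it comes out too narrow, which is the point of the red-noise correction discussed later in the post.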

Other decades also show error ranges that don’t include the overall average; for decades centered on 1982, 1985, 1990, 1991, 1992, and 1993 the modern-era rate is above the realm of possibility, while for decades centered on 1987, 1997, and 1998 the modern-era rate is below the realm of possibility. These decades (compare 1987-1997 with 1992-2002!) show dramatically changing behavior during the modern era.

The swings in decadal rate are caused by random events like El Niño and the Mt. Pinatubo eruption. They can change the rate of temperature increase dramatically, and they are (as far as we know) random, so they’re part of the noise. But they don’t behave like white noise. For white noise, any two moments are unrelated; the noise component at any given time is completely independent of the noise component at all other times. But for El Niño noise or Pinatubo noise, the deviations tend to persist for months to years, so the random value at one moment is likely to be close to the random value at very nearby moments of time: nearby moments are correlated.

We call this red noise, and it increases the uncertainty in our estimates of trend rates. We can compensate for the red-noise character of temperature data to compute corrected error ranges. Plotting the decadal rates together with the red-noise estimate of error ranges gives this:

Now we see that in fact, all decades are in accord with the modern-era rate; every one of them gives an error range for the rate that includes the modern-era value. From this I conclude that there is no statistically significant evidence that temperature from 1975 to the present deviates from a linear trend plus red noise.
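One common way to make such a correction, sketched below, is to treat the residual noise as an AR(1) process: estimate the lag-1 autocorrelation $r_1$ and inflate the white-noise error bars by $\sqrt{\nu}$, where $\nu = (1+r_1)/(1-r_1)$. (This is a simplified sketch; as noted in the comments, the plot itself used a more rigorous method based on the correlations at all lags.)

```python
# AR(1) red-noise correction (simplified sketch): estimate the lag-1
# autocorrelation r1 of the residuals and inflate the white-noise error
# bar by sqrt(nu), where nu = (1 + r1) / (1 - r1) is the number of data
# points per effective degree of freedom.
import numpy as np

def ar1_inflation(resid):
    """Error-bar inflation factor sqrt(nu) for AR(1)-like residuals."""
    r = resid - resid.mean()
    r1 = np.sum(r[:-1] * r[1:]) / np.sum(r ** 2)  # lag-1 autocorrelation
    r1 = max(r1, 0.0)          # don't shrink error bars for anticorrelation
    nu = (1 + r1) / (1 - r1)
    return np.sqrt(nu)

rng = np.random.default_rng(1)
# Strongly autocorrelated (red) residuals widen the error bars a lot...
red = np.zeros(500)
for i in range(1, 500):
    red[i] = 0.7 * red[i - 1] + rng.normal()
# ...while white-noise residuals leave them essentially unchanged.
white = rng.normal(size=500)
```

For residuals with lag-1 autocorrelation 0.7, the factor $\sqrt{\nu}$ is roughly 2.4, so a trend that looked significant under the white-noise assumption can easily fall back inside the error range.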

I have in the past given the impression that global warming has recently accelerated. Under certain circumstances it has, but those are limited to consideration of special time intervals. For the modern global warming era (1975-present) I see no hard evidence of acceleration. Therefore I will retract the impression, if not the actual statement, that global warming is accelerating; to me it looks like steady warming at 0.018 +/- 0.004 deg.C/yr, plus red noise.

Categories: Global Warming · climate change

### 18 responses so far ↓

• Tamino, what are you doing?

This post presents data, reduces it using several models, describes some of the underlying principles of those models, makes observations and suggests tentative conclusions to be drawn from them.

This is nuts. Don’t you know that the purpose of examining GISS data is to find some accounting anomaly and use it to hint darkly at its authors’ motives?

As it is, I now understand the topic better than I did before. I will have to return to the fever swamp for a paranoia booster, and try to forget anything that I’ve learned.

• Don Fontaine

Tamino,
You don’t give the equations for either the white noise or the red noise error bars. The white noise error bars all appear to be the same size. Is this really just one calculated value that is assigned to each data point on the graph? Is this one value the standard deviation of the mean of the plotted moving averages?

The red noise error bars vary in size among the differently centered moving averages (data points). Do you have a link for calculating such things? Red noise (perhaps I’m confusing it with “pink noise”) has some constant that sets how strong the correlation between nearby points is. Is there such a constant for the second plot?

[Response: Error limits for linear regression assuming white noise can be found in almost any textbook dealing with linear regression. The error bars are not all the same size; they’re different for each data point.

Regarding red noise, it is sometimes compensated for by assuming that the noise is an AR(1) process. In this case the correlation between two successive points $r_1$ determines a constant factor $\nu = (1+r_1) / (1-r_1)$ which is the “number of data points per degree of freedom.” A more rigorous way (used for the 2nd plot) is to define the number of data points per degree of freedom using all the correlations $r_k$ at all lags $k$. In either case, red noise effectively increases the size of the error bars by a factor $\sqrt{\nu}$.]

• nanny_govt_sucks

I wonder if you might address Mike B’s criticisms here: http://www.climateaudit.org/?p=2086#comment-140515

• […] Tamino: Cheaper by the Decade offers a good post about recent global temperature trends. […]

• luminous beauty

nanny,

The only reasonable response is that only someone who doesn’t get the basic idea of science would consider an observed and quantifiable physical explanation for modifying the mathematical model of a physical phenomenon to be hand waving.

• Darrel

Tamino, thanks for a very informative website you’re keeping up. I have a few questions about this analysis. Is it a fair assumption that all points must have red noise, i.e. that the errors on all points are equally related to the errors of the nearby points? I would think that this correlation between errors must be stronger for events like El Niño and Pinatubo than for times when no such strong and long-lasting event took place. This would mean that points outside of the regions indicated on your fourth graph as influenced by El Niño and Pinatubo have error bars closer to the white noise level than the red noise level. Which in turn could influence your final conclusion about whether the accelerating trend in the rate of warming is statistically significant.
There seems to be a very weak oscillating signal in the temperatures; could this have to do with the solar cycle? I.e. if you subtract the likely effect of El Niño and Pinatubo, is there a significant oscillation left in the signal that matches the solar cycle? At first sight it doesn’t look like the temperatures match the phase of the solar cycle, but that could also be due to a lag time in response.

[Response: It is safe to assume that the noise is equally red at all time intervals; at least I know of no hard evidence otherwise. The appearance of a “weak oscillating signal” is not supported by the numbers; the strength of this “signal” is too weak to pass significance tests. Also, the “period” doesn’t match the solar cycle.]

• John Cross

Nanny, while we are on the subject of responses, I wonder if you might address the criticisms that were posed to you on this site.

Regards,
John

• Hi tamino,

I appreciate your clarification; and hope that no-one reads the lack of acceleration in global temperatures to mean that we are and we all will be fine.

It seems to me that acceleration exists in the way global temperatures impact ecosystems, and evidence of that affects public perceptions. In other words, global warming can occur at a steady rate, yet the increasing fragility of our environment can accelerate, to turn the title of your post on its head.

There are ecosystems that will be knocked off balance by a particular temperature, others that are affected by a change in range of temperatures, and still others that cannot keep up with a rate of change of temperature.

• Hank Roberts

> lack of acceleration in global temperatures

The deep ocean warming is only just beginning to be detected, right?

• Marko

Tamino,

Which GISTEMP dataset are you using?

Based on the trend, it looks like you are using GISTEMP dset=2.

Have you compared the different trends between datasets 0, 1, and 2?

Also, what does the trend look like after eliminating the UHI-tainted stations?

Since the accuracy of the UHI adjustments has been challenged, isn’t it a good idea to also examine the trend of rural-only stations?

• nanny_govt_sucks

Give it a rest John. I’ll get back to you when I have time. You do want a detailed response, don’t you?

• Heretic

Inel, it is probably worse than that. The response of ecosystems is a big wild card. Human influence is already putting a great deal of stress on them, so they are weakened and less able to adapt to climate change. Marine ecosystems might have surprises in store for us, especially if sea-ice-dependent species at the base of the food webs suffer.

• John Cross

Nanny, no problem, but don’t you think it is a little hypocritical to demand that Tamino answer something right away when you yourself have been saying for months now that you would answer?

It’s almost as if you have no real intention of answering (perhaps because that would take actual research).

Regards,
John

• nanny_govt_sucks

Where did I demand anything, John? Where did I demand anything “right away”? Perhaps you should relax and have a beer with the folks on the “e” thread.

• What timescale did you use for the red noise?

[Response: It isn’t necessary to use a “time scale” for the red noise. That’s one way (assume the noise is AR(1) and estimate a time scale), but there are better ways which don’t depend on the noise process being AR(1) (and in this case, it’s not).]

Since there’s no prior reason for choosing the dates of the inflection points, I can’t see this as anything but an exercise in cherry-picking.

The best fit model depends on the start date. If it looks like a linear trend since the last kink, that just means you picked the right place to put the kink.

The system has a few major forcings which we know and one (solar) which is probably relatively small, plus internal dynamics. The right way to extrapolate is to fit response curves to all the known forcings.

I can’t imagine any system dynamics that would be piecewise linear in global temperature, but if there were such a thing, all we could say would be that we are overdue for a kink.

For good or ill, you are seeing faces in automobile grilles, though. The resemblance to piecewise linearity is, on physical grounds, surely coincidental, and on statistical grounds, an imposed bias. There’s no kink coming and there are no kinks in the past.

I can just as easily postulate an exponential curve with a superimposed bump around 1940. It turns out that isn’t a useful decomposition either, but it’s just as valid as yours.

• Michael Tobis // Sep 26th 2007 at 2:54 am

quote I can just as easily postulate an exponential curve with a superimposed bump around 1940. It turns out that isn’t a useful decomposition either unquote

On the contrary — an extremely percipient choice. You then can explain the bump.

JF

• Don Fontaine

Michael Tobis,
There is an algorithm for finding the kink. I don’t know if Tamino used it (he probably used it or something even better).
It goes like this: choose the time point that looks like the kink point, i.e. make an initial guess. Call it tk1, the first kink-point guess. Then divide the data at the kink point, do a (linear) fit to each portion, and see where the two lines intersect. Call this tk2, the new kink point. Repeat; if this converges, then you have a reasonable estimate of the kink point.
This and similar procedures aren’t cherry picking.
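The iteration described above can be sketched in a few lines. This is a toy version on synthetic data (the function name and the piecewise series are my own invention, not anything from the post):

```python
# Toy kink-finding iteration: guess a breakpoint, fit a line to each
# side, move the breakpoint to where the two fitted lines intersect,
# and repeat until it stops moving.
import numpy as np

def find_kink(t, y, guess, n_iter=50):
    tk = float(guess)
    for _ in range(n_iter):
        left, right = t < tk, t >= tk
        b0, a0 = np.polyfit(t[left], y[left], 1)   # slope, intercept
        b1, a1 = np.polyfit(t[right], y[right], 1)
        if b0 == b1:                                # parallel lines: no kink
            break
        tk_new = (a1 - a0) / (b0 - b1)              # intersection of the lines
        if abs(tk_new - tk) < 1e-6:
            return tk_new
        tk = tk_new
    return tk

# Synthetic example: flat until 1975, then warming at 0.018 deg.C/yr,
# plus white noise. Even a rough starting guess should converge.
rng = np.random.default_rng(2)
t = 1940 + np.arange(12 * 67) / 12.0
y = np.where(t < 1975, 0.0, 0.018 * (t - 1975)) + rng.normal(0, 0.05, t.size)
```

On this synthetic series a starting guess of 1970 homes in on a kink near 1975, which is the sense in which the procedure estimates rather than cherry-picks the transition date.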