Open Mind

Don’t Get Fooled, Again

September 12, 2008 · 60 Comments

In the last post we discussed MA (moving average) noise processes, and even combined them with AR (autoregressive) noise processes to define ARMA (autoregressive moving average) processes. I mentioned that global average temperature behaves approximately as a trend plus ARMA(1,1) noise, i.e., a 1st-order AR, 1st-order MA process.

Let’s put some of this to practical use; let’s create some artificial data, the sum of a steady trend at a rate of 0.018 deg.C/yr (about the rate of global average temperature), and pure ARMA(1,1) noise with AR parameter \phi = 0.8493, MA parameter \theta = -0.4123, and white-noise standard deviation \sigma_w = 0.1147. With these parameters, it’ll have just about the same structure as GISS monthly temperature data since 1975.
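For readers who want to try this at home, here is a minimal sketch (in Python with numpy) of how a series like this can be generated; the parameter values are the ones just quoted, but this is only an illustration, not the code actually used for the figures below.

import numpy as np

# Parameters quoted above
phi, theta, sigma_w = 0.8493, -0.4123, 0.1147
trend = 0.018 / 12                        # deg.C per month
n, burn = 1200, 240                       # 100 years of monthly data, plus a burn-in

rng = np.random.default_rng(42)
w = rng.normal(0.0, sigma_w, n + burn)    # Gaussian white noise with standard deviation sigma_w

x = np.zeros(n + burn)
for t in range(1, n + burn):
    # ARMA(1,1): x[t] = phi*x[t-1] + w[t] + theta*w[t-1]
    x[t] = phi * x[t - 1] + w[t] + theta * w[t - 1]

series = trend * np.arange(n) + x[burn:]  # steady trend plus ARMA(1,1) noise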


I generated 100 years (1200 months) of such data, and here it is:

I didn’t generate lots and lots of 100-year sequences until I got one that satisfied me. I only generated one, and that’s it.

The 100 years of data shows the upward trend quite clearly. In fact I fit a trend line to the data, and the estimated trend is equal to the actual trend, 0.018 deg.C/yr. Of course, with 100 years of data it’s easy to get an accurate trend estimate even in the presence of noise like this. But suppose we only had 33 years of data:

The trend is still evident, but now the noise is much more visible. We still get the right trend rate from linear regression (at least, to two significant digits), but as it happens that’s a lucky accident. Now let’s take a look at 33 years of GISS data, from 1975 to 2008:

It certainly does look similar. This similarity of GISS monthly temperature data to trend+ARMA(1,1) artificial data is quite striking. In fact the GISS data is very well modeled (not perfectly, but strikingly well) by this trend+noise process.

Some have made a big deal of the fact that the linear regression trend for temperature data has been nearly flat for the last 10 years. The GISS data still shows a slight upward trend over the last 10 years, but the trend rate is smaller than the 30-year trend rate, and HadCRU data don’t show an upward trend at all over the last decade. Could such an appearance really be only the effect of noise, or is it a sign that global warming has come to a halt? We can look at the artificial data, to see whether it shows any similar episodes of a decade or more with little or no apparent trend. When we do so, we should bear in mind that the artificial data still does have a rising trend; it’s constructed as a perfectly steady upward trend plus pure noise.

It so happens that the 2nd decade of the artificial data, from years 10 to 20, shows an apparent negative trend:

The apparent downtrend is slight, but it’s there. And we already know that it’s only apparent, not real, because the artificial data have a steady upward trend by construction.

Some have even made a big deal of apparent downward trends in global temperature for periods as brief as 7 years (or even shorter). What does the artificial data show? Here’s a 7-year period from the artificial data which shows a very large apparent downward trend:

But again, we know this trend is only an appearance. In reality, the trend behind the artificial data has continued to climb ever upward at a steady rate, by construction; only the noise gives the appearance of a decline.

We can even find a longer period — a full 14 years — for which the artificial data show an apparent downtrend, from years 41 to 55:

This illustrates just how strongly noise of this nature can mask the underlying real trend. And we know, without doubt, that in the artificial data the underlying trend is still there. It never stops, it keeps on climbing; that’s how the artificial data was made.
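If you want to hunt for such episodes yourself, one simple way is to slide a window of a given length along the series and fit a straight line to each window. A minimal sketch, assuming numpy and a monthly series like the one generated above (again an illustration, not the code used for this post):

import numpy as np

def window_trends(series, window):
    """OLS trend (deg.C/yr) over every window of the given length (in months)."""
    t = np.arange(window) / 12.0                     # time in years
    return np.array([np.polyfit(t, series[i:i + window], 1)[0]
                     for i in range(len(series) - window + 1)])

# e.g. the fraction of 10-year (120-month) windows with an apparently negative trend:
# print((window_trends(series, 120) < 0).mean())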

Those who point to 10-year “trends,” or 7-year “trends,” to claim that global warming has come to a halt, or even slowed, are fooling themselves. Statistics doesn’t support such a claim, and as this example shows, it’s really easy for noise to create such a false impression even when we know, without doubt, that the underlying trend hasn’t changed. This is a theme I’ve emphasized often, but it bears repeating. Such claims come only from those who are fooling themselves. Don’t let ‘em fool you.

Categories: Global Warming

60 responses so far ↓

  • Magnus W // September 12, 2008 at 8:02 am

    Just wanted to say, great post!

  • Patrick Hadley // September 12, 2008 at 9:40 am

    You made a “bet” back in January that the annual GISS would twice average 0.735 before it twice fell below 0.277455+.018173(t-1991)-0.1918. For t = 2008 that gives us 0.395. After 8 months of 2008 GISS is averaging 0.3725, so you need an average of 0.439 over the next four months to avoid going one point down in the first innings. It could be a close thing.
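    (A quick re-check of that arithmetic, as a small Python sketch; the figures are the ones quoted above:)

    # threshold for 2008 from the formula quoted above
    threshold_2008 = 0.277455 + 0.018173 * (2008 - 1991) - 0.1918     # about 0.395
    # average needed over the remaining 4 months, given 0.3725 over the first 8
    needed_last_4 = (threshold_2008 * 12 - 0.3725 * 8) / 4            # about 0.439
    print(round(threshold_2008, 3), round(needed_last_4, 3))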

    You may have seen Lucia’s Blackboard where she has used a number of statistical tests to examine the hypothesis that temperatures since 2001 have been rising at an underlying rate of 2C per century. She finds the hypothesis rejected at the 95% level.

    [Response: The last I heard (but I don't keep current with Lucia's blog), she uses an AR(1) model to compensate for autocorrelation in trend analysis. But as this post and especially this graph show, that model is inadequate.]

  • Luis Dias // September 12, 2008 at 2:02 pm

    At last, a good post. Concise and explanatory. Thanks.

  • swade016 // September 12, 2008 at 2:05 pm

    My question is, outside of the example, how do you know the 100 year trend isn’t a smaller subset of some larger construction? (I’m not questioning whether it *really* is, just how do you, as an analyst, *know* that what you’re looking at is the real trend, and not noise from a larger set?)

    In other words, how would one know that the 14-year *trend* is a subset of the 100-year, supposing they didn’t have access to the 100-year data?

    As an example, we have managers who are in love with safety data and they get all hot and bothered when we have multiple months without a “recordable incident”. I think they’re just mistaking noise for trends, but since they only have two years of data, it looks really important to them.

    [Response: Of course in this example I "know" because the data are artificial, and are constructed in very precise fashion. But if these were observed data we wouldn't know. It would be entirely possible that the 14-year "flatline" actually was due to a change in the underlying trend. Even if we compensated for autocorrelation in trend analysis, to be rigorous we would have to take into account the fact that we have data for *many* decades/14-year time spans/7-year time spans, so we have multiple chances to exceed the 95% confidence limits computed for a single test and we need to adjust the critical values of the test statistics accordingly. We'd also have to take into account the uncertainty in our estimates of the autocorrelation structure -- we don't know the model with certainty, and we can't estimate the parameters with arbitrary precision. These factors are very rarely accounted for, I haven't even done it myself. All of which makes it very tricky business to make pronouncements about trends, and trend changes, when working with short time spans.

    My point in this post is that we genuinely wouldn't know, so those who claim to know that the "flatline" in the last decade of HadCRU data, or the "almost flatline" in the last decade of GISS data, is indicative of a demonstrable change in the global warming pattern, do so outside the bounds of sound statistics.]

  • Bernard J. // September 12, 2008 at 2:24 pm

    Elegant and concise demonstration of the weakness in the ‘cooling’ claim, Tamino.

    Kudos.

  • Greg // September 12, 2008 at 3:43 pm

    And of course, it follows from your post that a 50-year trend could be wrong on 500-year timescales. Or a 500-year trend could be wrong on 5000 year timescales. All that you have reiterated is that we can’t be absolutely sure of a trend based on a segment of noisy data. I think everyone is aware of that. I think, given that we have maybe 80 years (?) of reliable *global* temperature measurements, on a planet that is 4 billion years old, you’ll find that denialists are the ones keenest to point this out.

    We can only hypothesis test on the accurate data that we have. If you take the past 10 years, the warming hypothesis isn’t supported. If you take the last 25-30, it definitely is. But a key point is that your example trend is linear, while the one we are trying to establish may be changing. You’ve shown it is hard enough to detect a linear trend amongst noisy data, how do we detect a change in trend amid that noise?

    [Response: If we take the past 10 years, the warming hypothesis isn't confirmed but neither is it denied. There's just not enough data, given the behavior of the noise, to say very much at all. And that's the point of this post: that there's no evidence yet of departure from a continuation of the warming trend. Those who say there is have no sound statistical basis for that claim.

    And don't make the mistake of thinking that no trend can be determined with confidence at all. The 100-year time series shown in the first graph has a strong upward trend, and there's more than enough data to say so with certainty. Even the 33-year data (both artificial and from GISS) can be said to show an upward trend with certainty. The trend in actual temperature is almost certainly not linear, but over the last 33 years or so it cannot be shown to be different from linear. The example of a 50-year trend being "contradicted" by a 500-year trend isn't pertinent; in that case the 50-year linear trend would be very real, and if it departed from the 500-year linear trend it would not contradict the reality of the 50-year trend, it would simply show that the long-term trend is not linear.

    Eventually the nonlinear character of the temperature trend will make itself apparent. But the only way we'll detect that with confidence is to acquire more data.]

  • JohnV // September 12, 2008 at 3:48 pm

    Very illuminating series of posts. I have 1 question and 1.5 requests:

    Question: How did you determine the appropriate ARMA parameters (theta, phi, and sigma)? Is there an algorithm for objectively determining the best parameters from a time series?

    Request: Would you consider a Monte Carlo analysis of 90-month trends from this artificial data? I’m curious about the distribution of 90-month trends about the mean.

    Sub-Request: The distribution of 90-month trends calculated using both ordinary least squares (OLS) and Cochrane-Orcutt (C-O) would also be very interesting.

    Thanks.

    [Response: 90 months seems to be a peculiar choice.

    I modeled the autocorrelation structure as \rho_j = \lambda \phi^j, the case for an ARMA(1,1) model. I estimated the parameters \lambda and \phi from the estimated autocorrelations of the residuals from a linear trend fit to GISS data from 1975 to the present. Given those parameters, there's a unique set of values for the corresponding ARMA(1,1) model (the \phi in the autocorrelation formula happens to equal the AR parameter \phi of the ARMA(1,1) model).

    A Monte Carlo analysis would probably yield interesting results. But I'm somewhat overloaded with work at the moment (work-work and private research), so I'm having a bit of trouble keeping up with the blog. So I can't commit to further projects at this time. I encourage you to "go for it."]
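    (A minimal sketch of the estimation step described in the response above, in Python with numpy; it illustrates the idea and is not necessarily the exact routine used for the post:)

    import numpy as np

    def acf(x, lag):
        """Sample autocorrelation of x at the given lag."""
        x = np.asarray(x, float) - np.mean(x)
        return np.sum(x[lag:] * x[:-lag]) / np.sum(x * x)

    def fit_rho_model(resid):
        """Fit rho_j = lam * phi**j from the lag-1 and lag-2 sample autocorrelations.

        resid: residuals from an OLS linear-trend fit to the monthly data.
        """
        r1, r2 = acf(resid, 1), acf(resid, 2)
        phi = r2 / r1        # since rho_2 / rho_1 = phi
        lam = r1 / phi       # since rho_1 = lam * phi
        return lam, phi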

  • JohnV // September 12, 2008 at 4:07 pm

    As a follow-up to my own comment, I attempted to calculate the distribution of 90-month trends. I have only used OLS at this point.

    *If* I did the calculations correctly, I found a standard deviation of 90-month OLS trends using this ARMA(1,1) model to be ~0.11C/decade.

    I’d appreciate if someone could confirm this result.

  • John V // September 12, 2008 at 4:27 pm

    Tamino, thanks for the reply.
    90 months is indeed peculiar, but it was not *my* choice. :)

  • Dano // September 12, 2008 at 4:34 pm

    I think, given that we have maybe 80 years (?) of reliable *global* temperature measurements, on a planet that is 4 billion years old, you’ll find that denialists are the ones keenest to point this out.

    This is where our society’s educational system is remiss in teaching folks about scale, and at what scales processes work, and where scale is out of context with conditions.

    F’r instance, the comparison of 80 years to 4B is the wrong scale, for so many reasons it is worthless to point out. Comparing recent trends to 5000 ya is better, but in context does not compare well due to anthropogenic influence on land and in the atm.

    Ecologically, though, 5000 years is appropriate, as we find that we can tease out that new species can occur in this period. In fact, I did field work in CA on two occurrences like this, with Jeffery pine and Pondo pine and with oracle oak and interior live-black oak (which are hybridizing now and may speciate, but we can’t tell, etc).

    At any rate, natural processes as a rule of thumb change at a stable rate at the millennial scale.

    We should be very concerned that anthropogenic change is happening at the decadal scale, as the vast majority of natural processes (and certainly the ones we depend upon for ecosystem services) do not change into new stable states on these scales.

    Those who do not share this concern should not be speaking to the issue, in my view. And they should not be contributing to discussions of policy options, as they are too ignorant to do so.

    Best,

    D

  • apolytongp // September 12, 2008 at 7:57 pm

    I think it’s kinda funny how “my side” advances both the idea of long term persistence or Hurst or what have you (basically the possibility of 100 year long excursions), while also touting 10 year deviations. I think the two are sorta non self-consistent denialist insights.

    BTW, I tried looking at Lucia once, but it was a rat hole. Mosh-pit directed me to one thread, but then Lucia was adjusting her approach later on. It was a real mess to even try to understand the overall insight since it was not summarized as a document, but just read out as a chronological series of working experiments and commentary. Plus there’s all the thread comments as well as head posts.

    I do think it’s a bit bizarre to look at the extreme wigglyness of temp data that has been observed over last several years and then make a statement that the last 10 years is a statistically significant change. Impression I got also was that when pinned down Lucia and her crew fell back on either of two fallacies:

    *arguments of the semantics or she-saideitness of the IPCC (the IPCC claimed significance that can not be meaningfully claimed for short times, therefore we can infer a failed prediction). Kind of a silly way of looking at things if you ask me.

    *failure of opponent to disprove, being equated with own having proved (excluded middle fallacy).

    (I admit that this is very much a “quick glance and impression”. And given my frustration with the sheer mass of unorganized work, I did not bother really checking it hard. So I could be wrong.)

  • David B. Benson // September 12, 2008 at 10:19 pm

    Greg // September 12, 2008 at 3:43 pm — Actually, there are several proxies which do quite well for paleoclimate temperatures; marine corings can be used (see Tamino’s thread on The Stack) and for the last 800,000 years or so, ice cores.

    Restricting just to the mid and late Holocene, from 10,428 ybp to 100 ybp = 1850 CE, in central Greenland there is but one major trend; that due to orbital forcing. Then there is a puzzling bump up centered around 3,300 ybp. In addition the minor ups and downs of MWP and LIA are observed.

    IMO, none of that uping and downing is just noise. W.F. Ruddiman’s early anthropocene hypothesis certainly explains at least some of it.

    In the GISP2 ice core data there are noisy-looking minor ups and downs at all ‘periods’ greater than the minimum resolvable, about 22–25 years. None of that seems to be of the least interest except possibly those with ‘periods’ in the range 45–90 years, which might be due to ocean oscillations. (I stress that nothing here is actually periodic or even pseudoperiodic; maybe I can coin ‘quasi-periodic’ for what is there.)

    In any case, there certainly is more paleodata than good ideas of what further to do with it, at least for me.

  • George Tobin // September 13, 2008 at 1:29 am

    I can understand that a flat period does not change the existence of an upward trend if for no other reason than the beginning of the data series is lower than the current plateau. However, the slope of the plotted averages or temp anomalies does decrease, doesn’t it?

    I think lucia was looking at an apparent prediction of a trend of 0.2 degrees per decade and trying to test it. A decade seems short, but claims made by some that a period of less than 30 or 40 years of net flatline would still be “consistent with” IPCC models are unsatisfying given the degree of certainty ascribed by many people to inferences drawn from that modeling.

    There is a hell of a difference between an “underlying trend” of less than 2 degrees per century versus 3-4 per century. The mere fact that the trend remains upward is not very dispositive, scientifically or policy-wise.

  • Duane Johnson // September 13, 2008 at 3:50 pm

    Tamino,
    In your reply to John V above, you didn’t mention your method of establishing the white-noise standard deviation. Could you comment on that for completeness?

    [Response: There are 3 unknowns in the model, \phi,\theta,\sigma_w. One can derive formulae for the autocorrelations, and for the ARMA(1,1) series standard deviation, in terms of those 3 unknown parameters, and those equations can be solved to express the unknowns in terms of the estimated std.dev. and autocorrelations. The formulae are a little complicated (but only a little); maybe I'll do a post about it.]
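    (For reference, the textbook ARMA(1,1) relations behind this, written for the convention x_t = \phi x_{t-1} + w_t + \theta w_{t-1} used elsewhere in the thread:

    \sigma^2 = \sigma_w^2 (1 + 2\phi\theta + \theta^2) / (1 - \phi^2)
    \rho_1 = (\phi + \theta)(1 + \phi\theta) / (1 + 2\phi\theta + \theta^2)
    \rho_j = \phi^{j-1} \rho_1 for j \ge 1, so \lambda = \rho_1 / \phi

    Given the estimated standard deviation of the detrended series and its lag-1 and lag-2 autocorrelations, these three equations can be solved, numerically if need be, for \phi, \theta and \sigma_w.)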

  • george // September 13, 2008 at 4:11 pm

    John V says above:

    I attempted to calculate the distribution of 90-month trends. I have only used OLS at this point.

    *If* I did the calculations correctly, I found a standard deviation of 90-month OLS trends using this ARMA(1,1) model to be ~0.11C/decade.

    Tamino said in an earlier post

    I’ll also use the exact formula for the impact of autocorrelation on the probable error in an estimated trend rate from OLS (see Lee & Lund 2004, Biometrika, 91, 240).

    Results? For GISS data
    …using only post-2001 observations. GISS data indicate 0.0024 \pm 0.0334 deg.C/yr

    Assuming the 0.0334 deg.C/yr (0.334C/decade) is a 2-sigma error, that leaves a difference between the two error values

    0.11 C/decade vs 0.167 C/decade

    Admittedly, not large, but a difference nonetheless.

    Whence the difference?

    Perhaps it is due to a difference in the models used? I understand Tamino did not use AR(1) for that previous analysis, but it is not clear whether what he did use was ARMA(1,1). In the earlier post, he said

    I’ll estimate the trend from ordinary least squares (OLS), but I won’t compensate OLS using a simplified estimate of the impact of AR(1) autoregression, or Cochrane-Orcutt estimation, both of which assume an AR(1) model

    Or is the difference perhaps due to the data used?

    Both are GISS data, but is it monthly data in both cases?

    Did both analyses have the same starting point?
    Does “post 2001” indicate “after the end of 2001” (ie, “post Dec 31, 2001”) or “post Jan 1, 2001”?

    That post was also dated March 26, 2008 (5+ months ago) so that would also presumably make a difference in the calculated error (more data, smaller error), though just how much is the question.

    John V:

    Though some know the reason for the “peculiar” 90 month choice, not all do, so I’ll spell it out:

    Those who have claimed to have “falsified” IPCC projections at the 95% confidence level have based that claim on analysis performed on data since the beginning of 2001.

    Not that the difference between the errors 0.11C/decade and 0.165C/decade would make any difference when it came to deciding whether the calculated trend range (trend + error) for GISS includes the mean of the AR4 projected trends. It would not make a difference (ie, the mean projection would still fall within the trend range — for the GISS data, at least)

    [Response: I'm not sure that John V has done the process correctly. I took my artificial ARMA(1,1) data (with no trend) and computed the trend rate from OLS for each independent 7.5-year (90-month) block. I got a standard deviation more like 0.2 deg.C/decade; although the sample size is quite small, it's much more in line with the theoretical calculation. But I'm not sure John V has done it wrong, either.]

  • Buddenbrook // September 14, 2008 at 11:49 am

    Excuse the stupid question, but why do you think it is a 7 or 10 year trend and not a longer, more permanent one?

    Last year it was no doubt a 6 or 9 year trend, as 2008 was supposed to be very warm…

    It’s been close to as “warm” as it was in 1988.

  • JohnV // September 15, 2008 at 1:11 am

    george, Tamino:
    I made a mistake in my estimate of the standard deviation of 90-month trends from the ARMA(1,1) model. I was calculating the white noise incorrectly.

    Using a proper estimate for the white noise, I get a standard deviation of 90-month trends of 0.197 C/decade. That’s based on ~1000 years of artificial data.
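    (A sketch of that kind of Monte Carlo check, in Python with numpy; an illustration only, not JohnV’s actual calculation:)

    import numpy as np

    phi, theta, sigma_w = 0.8493, -0.4123, 0.1147
    rng = np.random.default_rng(1)

    def arma11(n, burn=240):
        """One realization of ARMA(1,1) noise with the parameters above (no trend)."""
        w = rng.normal(0.0, sigma_w, n + burn)
        x = np.zeros(n + burn)
        for t in range(1, n + burn):
            x[t] = phi * x[t - 1] + w[t] + theta * w[t - 1]
        return x[burn:]

    t_dec = np.arange(90) / 120.0        # 90 months, measured in decades
    slopes = [np.polyfit(t_dec, arma11(90), 1)[0] for _ in range(5000)]
    print(np.std(slopes))                # standard deviation of 90-month OLS trends, deg.C/decade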

  • Cthulhu // September 15, 2008 at 2:41 am

    This is pretty cool. I tried it out, but I have discovered I am doing something wrong and haven’t yet figured out what.

    You can see in the image below what’s wrong. Sorry about the lack of an x-axis, it’s monthly hadcrut3 from 1970-2008, after that it’s the ARMA(1,1) noise with 0.0015C/month trend. The transition is clearly visible because the amplitude of the noise+trend is too low.

    http://img.photobucket.com/albums/v235/ononelk782/ohno2.jpg

    I am using:

    temp(t) = 0.8493 * temp(t-1) + whiteNoise(t-1) * -0.4123 + whiteNoise(t)

    where whiteNoise(n) = random(-0.1147, 0.1147 )

    And plotting:
    plot(t) = temp(t) + 0.0015t

    I suspect I have used the parameters incorrectly. If anyone can see an obvious error I have made here that would be great.

    [Response: Have you tried using GISS instead of HadCRU?]

  • lucia // September 15, 2008 at 3:10 am

    Tamino–

    Not to shock everyone… but basically, I like the idea of this method.

    If you could write the post on how you obtained the parameters, I’d find that interesting. (If a post is too much effort– maybe you can send me email. It reads like it’s just three equations with three unknowns? But do you correct for the bias in the lag1 and lag2 autocorrelations somehow?)

    Having said I basically like the method generally– I have some reservations about your specific parameters. The major reservation relates to the fact that the period you used to estimate the parameters includes major stratospheric volcanic eruptions which are known to induce variability above and beyond what we get when there are no volcanic eruptions.

    I’ll be posting some numbers later to explain why I think the specific parameters don’t seem to apply to periods when the forcing due to stratospheric aerosols has become relatively constant. (We are in one of those periods.)

    However, if you describe how to get the magnitude of parameters, I want to estimate the parameters using two other specific periods of time. (JohnV should know which they are. :) )

    Obviously, I’d also apply this same method to measurements by other groups, but right now, estimating the parameters using data since 1975 strikes me as not quite right if we are trying to answer questions about whether the current flat-ish temperature trend is consistent with a 2C/century trend (or whatever trend one might wish to test).

    [Response: It is just three equations in three unknowns. Others have also expressed an interest in more details, so I guess I'll do a post about it.

    It seems to me that the volcanic eruptions are part of the "random noise" in global temperature, so their impact should be included in specifying the character of the random noise. Also, if I recall correctly the parameters are nearly the same when one uses a longer time span, but in that case the detrending becomes more complicated because the trend is not linear. In any case, it's necessary to use at least on the order of 30 years of data to estimate the model parameters, even if they'll then be applied to a shorter time span for trend analysis, or the uncertainty in the parameter estimates becomes a problem.

    The parameters are distinctly different if one uses HadCRU data rather than GISS.]

  • Hank Roberts // September 15, 2008 at 4:14 am

    George Tobin
    > the slope …?
    Not significant. That’s the point of the thread.

  • george // September 15, 2008 at 3:44 pm

    Cthulhu said

    whiteNoise(n) = random(-0.1147, 0.1147 )

    I wonder: Does your “whiteNoise(n)” generator yield values that have the proper statistical characteristics for this case?

    I am used to seeing numbers within parens for a random number generator as min and max values (ie to specify the range of output values)

    Perhaps I have misinterpreted your use of “random(-0.1147, 0.1147 )” — and if I have, please forgive me and simply ignore the rest of this comment — but are those numbers that you give in parens “min” and “max” values input to the random generator that you have used to generate the white noise value?

    If they are, that might be at least part of the reason your generated series appears to have smaller deviation about the trend line (ie, lower noise) than the actual temperature series.

    Of course, in the case of inputing -0.1147 and 0.1147 as min and max, the generator will never return a white noise value that is less than -0.1147 or greater than +0.1147 (unless you have used Microsoft Excel and there is a bug, which, as we all know, never happens, so you may safely ignore the last part of this sentence… well, not the very last part, but the second (or is it third?) to last part…)

    On the other hand, for the case specified by Tamino above (in which the standard deviation of the white noise is actually 0.1147), a correctly functioning white noise generator for the case given will sometimes give values greater than 0.1147 and sometimes less than -0.1147, ie, more than one standard deviation away from the mean.

    [Response: Ironic that this and the next comment appeared in the moderation queue at the same time.]

  • Cthulhu // September 15, 2008 at 3:52 pm

    I am using GISS now, but it turned out the main problem was that I generated the white noise incorrectly. I was using a random number between -0.1147 and 0.1147, instead of 0.1147 being the standard deviation of the white noise.
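    (For anyone else who hits the same snag: a uniform draw on [-a, a] has standard deviation a/sqrt(3), about 0.066 here, which is why the simulated noise looked too quiet. A minimal sketch of the difference, assuming numpy:)

    import numpy as np

    rng = np.random.default_rng()
    a = 0.1147
    uniform_draws = rng.uniform(-a, a, 100000)    # standard deviation is a/sqrt(3), roughly 0.066
    gaussian_draws = rng.normal(0.0, a, 100000)   # standard deviation is a = 0.1147, as intended
    print(uniform_draws.std(), gaussian_draws.std())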

  • george // September 15, 2008 at 4:11 pm

    Lucia says:

    The major reservation relates to the fact that the period you used to estimate the parameters includes major stratospheric volcanic eruptions which are known to induce variability above and beyond what we get when there are no volcanic eruptions.

    I’m curious.

    What (precisely) qualifies as a “major stratospheric volcanic eruption”?

    Or asked another way: which (magnitude and type) volcanic eruptions should one include in the noise and which not?

    In general, just how should one choose the “correct” period to estimate the parameters?

    What other cooling effects (besides volcanoes) are things to be avoided when we select the period? La Nina?

    Should we also choose periods that do not include (or are not “bracketed by”) short term warming effects like El Nino because we know that “major El Ninos are known to induce variability above and beyond what we get when there are no major El Ninos”?

    Where does this end?

    Or, as the atheist asks of Saint Peter at the Gates of Heaven “Where’s the cutoff?”

  • apolytongp // September 15, 2008 at 5:17 pm

    (Honest) stepping aside from even caring about AGW or the arguments on it:

    I think there is sort of a D&A insight in what Lucia is talking about. Essentially, what she wants to think about is what is the inherent variability of the system regardless of volcano forcing. One could imagine coming up with some sort of feel for how much the system swings on its own and in response to expected forcings. Something like this (conceptually):

    CO2: large, long duration impact
    Solar: very small, less than previously believed.
    Volcano: large, short duration
    Pollution aerosols: large impact
    Inherent variability (regardless of factors above and occasionally in addition or opposition to forcings): large swings of ~ 10 year time frame.

    In a model world, one could imagine doing OFATS, and full factorials to understand how each of these forcings works as well as the level of inherent system variability (perhaps due to ocean/wind patterns like El Nino).

    Of course, we don’t have a manipulatible earth. Still, imagine that we have a period of time where CO2 has risen (and/or there is “temp in the pipe” from previous CO2), and we want to compare inherent variability and CO2 impact as possible effects. It seems reasonable that we should “correct” for what vulcanism occurred. This would not help us know how future variability could occur (since there the vulcanism would be an unknown), but would help us to analyze the already occurred period in question.

  • apolytongp // September 15, 2008 at 5:27 pm

    george:

    You are one of my favorite posters on the other side, so it pains me to have to clash with you. And also that you seem to be asking some obvious questions, raising trivial issues.

    1. The vulcanism is another variable forcing. We use whatever is the best estimate (perhaps the one Mann used in MBH98?) for the temp sensitivity of response to vulcanism. Arguing about the specific number for that forcing is different from ignoring it altogether (first and second order effects). And it’s probably not about the “level of a major volcano”…but about some measure of the entire period’s vulcanism (as done in D&A portion of MBH for example).

    2. No (wrt El Nino/Nina). The whole point is to consider inherent variability of the system versus forced changes. This is (intuitively, not that I have done the math) why I have a bit of a problem with the long drawn out endeavor on Lucia’s site or by Pielke to disprove AGW by the recent temp history. The reason is that we know that temp has a tendency to dance around a fair amount on its own, even if the long term trend is CO2 rising.

    P.s. Note that the tendency to want to talk about recent 10 year temp decline as invalidating AGW is not consistent with the other denialist tendency to posit very long term inherent variability.

  • apolytongp // September 15, 2008 at 5:28 pm

    I mean, “no” don’t exclude Nina and Nino. Keep them in, they are perhaps the best understood, large inherent variability from the system itself.

  • lucia // September 15, 2008 at 7:26 pm

    George-

    What (precisely) qualifies as a “major stratospheric volcanic eruption”?

    Or asked another way: which (magnitude and type) volcanic eruptions should one include in the noise and which not?

    I count as “major” from the point of view of inducing variability those eruptions that either Alan Robock or GISS include as important when estimating external forcing for temperature variations. The post discussing that is here.

    The fact that models do predict the known variability induced by these eruptions is considered one of the points in favor of the models’ accuracy.

    So, if the effect of the volcanos is deterministic, and we know when they occurred, I think we must recognize that if we treat GMST as a trend plus noise, then the “noise” process must either account for variability induced by eruptions like Pinatubo explicitly (as an exogenous variable), or we must try to treat periods with no eruptions differently from those with eruptions when estimating the properties of “weather noise” that might apply to a particular period.

    What’s the specific criterion dictating when a time period is unaffected by variations in stratospheric aerosols due to volcanos? Well, we can all argue about that.

    That said, I don’t think the probability of a downturn or flat periods since 2001 is properly estimated by fitting the ARMA(1,1) process to a period where the major plunges are due to Pinatubo, Fuego, and El Chicon rather than weather processes like El Nino, La Nina, the PDO, AMO or other oscillations.

    Obviously, people can disagree. But, basically, the disagreement has to do with the volcanic eruptions– not how to fit the ARMA(1,1). (We could also argue about whether ARMA(1,1) is suitable– but I’m happier with an attempt at a fit than with making no attempt.)

    So, as I said before, over-all I like this approach by Tamino, but the volcanic eruptions are an issue for me.

    Tamino-
    On the volcano issue– we are going to disagree on this.

    I would agree with you that the volcano noise is “just noise” if what we were doing was trying to create USDA climate maps to help people select plants that might be suitable in their yards going forward. In that event, we don’t know when the next volcanic eruption will be, and so we would simply assume they erupt at a rate similar to the past.

    However, I think we must exclude the volcanic eruptions when we estimate the variability during a period where we know there were no volcanic eruptions. So, with regard to the question you, I and all the addicted climate bloggers ask, the issue of volcanos represents a roadblock to any sort of agreement about whether or not 2C/century falls inside the 95% confidence intervals consistent with recent data.

    This is less important in terms of fundamentals, but important in terms of details: GISS hasn’t been falsifying with OLS in any case. It’s HadCrut and NOAA that have been. GISS seems to have more “weather noise” in measuring GMST, for whatever reason.

    I thought you would suggest we need 30 years. :) I’m going to do 1914-1944. (I’ll explain that choice later.)

  • Georg Hoffmann // September 16, 2008 at 10:16 am

    Lucia
    I don’t think what you write is correct. Volcanic eruptions have a specific noise spectrum, as do other types of forcing noise and internal variability. There is no complete distinction between phases with and without volcanoes. Even smaller eruptions have their contributions to the global forcing.
    There is a paper submitted by Ammann and Naveau building typical volcano noise for future scenario runs. This includes in particular a careful treatment of the extreme value statistics due to big eruptions.
    Georg

  • Bart Verheggen // September 16, 2008 at 10:52 am

    I don’t see why for a reference period volcanoes should be excluded, but El Nino, La Nina or other known factors that influence the global temperatures shouldn’t. The fact that the former is an external forcing, whereas the latter isn’t, doesn’t seem relevant to me. The issue, if any, is that the reference period should have been influenced by perturbations of the global temperature to the same extent as the period under investigation (i.e. the last decade).

    Lucia’s point is that if the variability of the temperature in the reference period is influenced by volcanoes, but the variability in the last decade is not, then they cannot be properly compared. Fair enough, but the exact same could be said for the El Nino/La Nina cycle, and of course other factors that exhibit a relatively fast and strong influence on temperature (eg aerosols). A period that is relatively free of ENSO fluctuations will have a very different variability in temperature than a period with strong El Nino/La Nina activity. A perfect match between the reference period and the period under investigation is not possible, but by taking a long enough timeframe of the reference, the error may be minimized, and if you decide to exclude (or correct for) known interferences to try and make a better comparison, then all the large interferences should be excluded, regardless of whether they are internal or external to the weather system. The benchmark is their effect on the temperature variability.

  • apolytongp // September 16, 2008 at 2:19 pm

    If you are trying to understand internal climate variability versus CO2, then you shouldn’t compensate for its occurrence (the internal climate variability, the El Nino). Similarly, if you are trying to understand internal climate variability versus CO2 and the level of vulcanism is known, the vulcanism should be compensated for. (Although I agree with Georg, that we should not see it as a digital situation, but as a level of vulcanism.)

    P.s. I’m actually not a Lucia champion. Haven’t followed her work, since it is not well written down. It is just sort of an evolving set of trial analyses…and I don’t care to read through hundreds of pages of that sort of chronological discovery path. My basic impression is that climate IS highly variable. Thus it is silly for Lucia-types to think that the last 10 years invalidates a long, slow trend. (Just as the stock market has grown steadily for 100 years, but there can be times when it is down for 10.) Also, I find it strange to say “aha” with the last 10 years of down temp (seemingly ignoring high inherent variability…and then posit that centennial scale MWP excursions can occur.)

  • Gavin's Pussycat // September 16, 2008 at 2:32 pm

    apolytongp:
    > P.s. Note that the tendency to want to talk about recent 10 year temp decline as invalidating AGW is not
    > consistent with the other denialist tendency to posit very long term inherent variability.

    Very observant! I got involved with a local letter-to-the-editor writer claiming

    (1) that the drop of 0.77K in temp anomaly between January 2007 and May 2008 (UAH data) proved that global warming had come to the end of the road and was reversing, and
    (2) that the observed global warming trend is just part of a long, 60 year, natural cycle.

    He was completely serious about both claims, not even noticing their contradiction — no back-of-the-envelope estimates in his alternative universe. That’s when I gave up on him.

  • Ray Ladbury // September 16, 2008 at 2:46 pm

    apolytongp, There are two ways to approach looking at CO2 forcing. In the first, you try to exclude any anomalous data. That would especially include events such as large eruptions, since the uncertainties of aerosol forcing are significantly greater than those of ghg forcing by CO2.

    In the second approach, you try to model the noise (e.g. volcanism, etc.) as well as the signal. This takes a lot more data and probably requires that you model both signal and noise.

  • lucia // September 16, 2008 at 4:47 pm

    apolytongp:
    I’ve never said the current period invalidates AGW or an upward trend. Only that the current period is inconsistent with a long slow trend of 2C/century, which is the trend line through the average of the IPCC models. That is: It was a prediction of what is happening now.

    The long slow trend of 2C/century has never occurred, and so there is no way to suggest what I claim invalidates anything that has been observed.

    So, if you wish to use your stock market analogy, it would be more like I’m testing someone’s prediction we are entering a period where the market will climb more rapidly than observed over a long period in the past, but then the market immediately went flat.

    The market is variable. So, when can you begin to say their prognostications were incorrect?

    FWIW. It’s fine if you don’t read my blog. It is a series of analyses. It interests some, but not others. That is the way of blogs.

    You do correctly capture my point of view with respect to the volcanic eruptions.

  • apolytongp // September 16, 2008 at 5:38 pm

    Lucia:

    Ok, but that is kind of a semantic argument. And a disproof of a claimed statement of your opponents, which is not the most reasonable way of expressing their basic viewpoint. At most, I would say that it means they need to tighten their language. Not that they are bad predictors or that they have a flawed understanding of the monotonic (or roughly monotonic) nature of CO2 on temp or what have you.

    That’s at best. I also wonder if there are some flaws in your characterization of them even semantically, in how you transform their words to math implications. (For instance, is a general trend expressed in decadal terms implicating a certain plus or minus for decades, or for the overall trend, but for a longer period…yada yada.) But like I said, I really skimmed it.

    Let me know when you have something crystallized in terms of work. I’m interested in the topic, but want to read it in a normal manner, rather than through the “open notebook” of chronological development. Much faster and much easier intellectually. I think you would also clarify your own thinking, even if only in sharpening things to necessary points.

    Thanks on the volcanos.

  • Hank Roberts // September 16, 2008 at 5:45 pm

    > long slow trend of 2C/century has never occurred

    Which “never” — “long” or “slow” or “trend” or “2C/century” — has been published?

  • george // September 16, 2008 at 10:01 pm

    I messed up blockquotes on previous post (please ignore)

    Lucia says:

    I don’t think the probability of a downturn or flat periods since 2001 is properly estimated by fitting the ARMA(1,1) process to a period where the major plunges are due to Pinatubo, Fuego, and El Chicon rather than weather processes like El Nino, La Nina, the PDO, AMO or other oscillations.

    Actually, the eruption of El Chicon in 1982 (which injected 7Mt of SO2 into the stratosphere) did not cause a “plunge” in temperature.

    One of the strongest El Ninos of the last century started at virtually the same time as the eruption (some scientists even thought for a while that the volcano may have triggered the El Nino, although most now believe it was just coincidence).

    From Volcanic eruption, El Chicon

    Normally a large eruption like this would cool the global climate, especially in the summer, but during the first year after the El Chichon eruption, no large cooling was observed, as the El Niño produced large compensating warming.

    But the key thing (with regard to this discussion, at least) is that the volcanic eruption actually acted to dampen the positive temperature “swing” due to the El Nino, which would otherwise (in the absence of the eruption) have been significantly greater.

    In fact, the positive temperature swing due to the El Nino in 1998 was much greater even though the 1998 El Nino was not deemed as strong as the one in 1982.

    In other words, the eruption of El Chicon actually acted to reduce the (positive) variation in temperature due to the El Nino.

    I’m not claiming that it is the norm for there to be no “plunge” in global temp after a volcanic eruption that injects several Mt of SO2 into the stratosphere, only pointing out that one has to be cautious about one’s claims.

    As it turns out, the eruption of Pinatubo also occurred during an El Nino (albeit a weaker one), so its negative “pulse” was presumably damped by the positive pulse of the El Nino.

    But I would be interested in knowing just how much the volcanic eruptions during that period (from 1975-present) affect the values of the parameters that Tamino used.

  • lucia // September 16, 2008 at 10:14 pm

    Hank–
    apolytongp had said this:

    Lucia-types to think that the last 10 years invalidates a long, slow trend.

    He also made an allusion to existing stockmarket trends. So, I inferred he meant to suggest I was testing a pre-existing long slow trend.

    I am testing a trend of 2C/century. Why do you ask me which 2C/century has been published? As I said in my response to apolytongp, there is no pre-existing trend of 2C/century.

    Needless to say, I can’t point you to a publication claiming such a trend existed prior to 2000, or even now.

    apolytongp– Sorry to disappoint you, but my blog is what it is. A blog. I’m not sure I know who you mean by my opponents, and I have no idea who you are referring to as “them” in your response to me.

    As I said before– it’s fine with me if you don’t read my blog. But you seemed to suggest I was testing the truth of some standing pre-existing trend. That is not so, and I wished to point this out.

    Obviously, I can’t address the “yada, yada, yada” points in your comment. I will now return to mine. Ciao. :)

  • apolytongp // September 17, 2008 at 12:31 am

    Ciao.

  • george // September 17, 2008 at 1:05 am

    Bart Verheggen said:

    I don’t see why for a reference period volcanoes should be excluded, but El Nino, La Nina or other known factors that influence the global temperatures shouldn’t.

    That’s because you are thinking logically.

  • Duane Johnson // September 17, 2008 at 4:16 pm

    Bart and George,

    Isn’t it a matter of conditional probability? If there hasn’t been a major eruption presently influencing the weather, why would the past influence of major volcanoes be influencing present weather variability? If a major eruption should occur, the condition would not be applicable.

  • Bart Verheggen // September 17, 2008 at 7:07 pm

    apolytongp writes: “If you are trying to understand internal climate variability versus CO2 and the level of vulcanism is known, the vulcanism should be compensated for.”

    You’re right. I guess it depends on what you’re trying to do. If the purpose is to check whether the data of the last 10 years are consistent with a longer term (i.e. 30 year) linear trend, then my point remains valid: The source of the variability (internal/external) is then irrelevant, as long as the variability in the 30 year reference period is comparable to that in the 10 year period of investigation. Tamino, as well as Lucia, did a purely mathematical test, which led me to think that that was their purpose.

  • Luis Dias // September 18, 2008 at 11:08 am

    As I said before, I commend the post. One or two things in it, as in the comments, strike me as questionable. Don’t hammer me too much, you know I’m just a layman in this.

    1. The first thing that strikes me is that Tamino cherry picks the 10/7/14 year trend that best suits him in his analysis. It’s a fine, apples-to-apples comparison, for the same could be said of the claims that we are living in a downward trend since 1998, a very hot year. But what really itches my head is that IPCC 2001 happened in… 2001, and most of the scientific controversy started there (I’m not arguing nothing happened until then, I’m just saying that IPCC 2001 was a “milestone” in GW discussion). It’s quite a curious coincidence that exactly when such a report was made, a cold trend was starting. If GW predictions happen to be precise, this statistical phenomenon would have been very ironic: just when predictions of a more than 2ºC/century rise in temperature were made, the earth’s temperature slowed down. As you’ve shown, this is entirely possible, but it does make one raise eyebrows.

    2. Lucia’s point about volcanoes does make sense: the recent trend didn’t need volcanoes, and Tamino’s cooling trends needed volcanoes. Well, okay, but 1998 was really an over-the-top El Niño phenomenon. And I humbly believe that both El Niño and volcanoes are part of the same chaotic system; that is, the assertion that volcanoes are somehow “external” cooling forcings may be right, but the same could be said about El Niño, for its origins are generally unknown. El Niño could be as “external” as volcanoes.

    A better analysis could (could) be one that didn’t involve El Niños, La Niñas, or volcanoes. I’ve seen here in this blog a graph that didn’t include the former two (and I am unsure about the third), so this could be done. In such an analysis, one could see 1) if the current trend is actually cooling, and 2) compare it with a model similar to the one presented here to check whether it’s that improbable or not.

    Thanks.

  • t_p_hamilton // September 18, 2008 at 3:59 pm

    Luis Dias, if you are going to make accusations of cherry picking “1. The first thing that strikes me is that Tamino cherry picks the 10/7/14 year trend that best suits him in his analysis.” it is most unwise to do it when the post is right above for all to see.

    Tamino does a 100 year plot with an underlying linear trend plus noise. NO downward component of the data, by construction.

    Tamino then notes noise dominates more in a 33 year period, even though the trend is still visually clear.

    Then Tamino notes that OTHERS cherry pick 10/7 year trends. The last choice of 14 is merely to show that people who dishonestly cherry pick 10/7 could conceivably do it up to a 14 year period for a typical 100 year record.

    Showing that cherry picking is invalid is the whole point of the post!

  • pough // September 18, 2008 at 4:16 pm

    The first thing that strikes me is that Tamino cherry picks the 10/7/14 year trend that best suits him in his analysis

    Wha? I thought he chose 10 and 7 because that’s what people are erroneously pointing to as “trends”. This post shows that it’s far more likely to be noise. In fact, (and this is, shall we say, the unpicked-cherry on top) you can even find 14 years of what looks like downward “trend” in the noise but still isn’t a trend. This whole post is pointing out the dangers inherent in cherry-picking, using the cherry-pickers’ own preferred numbers of years, and you call it cherry-picking?

    It’s quite a curious coincidence that exactly when such report was made, a cold trend was starting.

    Wha? Didn’t you read this post? It’s not a trend. It’s like you’re saying, “Great post! I’m going to ignore everything it says!”

    If GW predictions happen to be precise, it would have been very ironic this statistical phenomenon, for when predictions of more than 2ºC/century rise of temperatures were made, the earth temperature slowed down.

    Wha? They’re projections, not predictions. There is a difference. And I don’t think they’re meant to be so absolutely “precise” as you seem to be assuming. Also, pay closer attention to the century part of 2ºC/century. Nobody is expecting every year to be one hundredth of 2ºC warmer than the year previous. It doesn’t work that way, which is something you can learn by, um, reading this post.

    I don’t mean to “hammer you too much”, but it seems like you didn’t understand any of this at all. … or maybe I didn’t understand it.

  • Patrick Hadley // September 18, 2008 at 8:46 pm

    Pough says that a period of 10 or 7 years with no temperature rise is “far more likely to be noise”.

    I thought that the purpose of the post was to show that one could not rule out that it is simply “noise”. Tamino has certainly shown that we cannot say with any confidence that the underlying trend has changed, but that does not mean that he has proved that it is very unlikely to have changed.

    As a non-statistician this seems to me to be similar to some Bayesian theory, where in order to calculate the probability of an event you have first to examine the likelihoods of your prior assumptions.

    If you are 100% convinced by the AGW orthodoxy then even after 20 years of no warming you would also want to argue that it was caused by “noise” rather than doubt the theory. The possibility of AGW theory being wrong would always be much smaller than the possibility of noise causing a long cool period.

    On the other hand someone who thought that there was only a 50% chance that the trend is still 0.2 per decade or more, might well consider the recent lack of rising temperatures makes it rather unlikely that the warming trend is that high.

  • pough // September 18, 2008 at 10:22 pm

    Pough says that a period of 10 or 7 years with no temperature rise is “far more likely to be noise”.

    Two non-statisticians, duking it out! The whole world is hanging in the balance!

    I’m happy to retract that one sentence. I agree it was probably incorrect. I’ll wait for an actual statistician to tell me for sure.

    As for the rest, I think it was kinda addressed a while ago:
    http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/
    http://tamino.wordpress.com/2008/01/31/you-bet/

    Use of the phrase “AGW orthodoxy” makes it pretty hard to take you seriously, especially when you make claims about 20 years not being enough here at Tamino’s place when he’s already put himself on the line for a shorter period of time.

  • Bob North // September 19, 2008 at 5:18 am

    Well, I had typed out a long detailed reply but forgot to put in my email address before I hit submit and the whole thing went poof. So I will keep it brief this time.

    Pough -

    You are right to retract the statement that a 7 or 10 year downward “trend” is “far more likely to be noise” than a change in the underlying trend. This post simply points out that such a flat or downward period is entirely consistent with a trend + ARMA(1,1) statistical model of the recent estimated global mean temperature record. At this point, we simply can’t know if the temperature flatline is simply noise or indicative of an actual change in the underlying longer term trend. That being said, since we know that the forcing due to increased GHGs remains, it is probably appropriate that we set our null hypothesis to “There has not been any change in the underlying trend”.

    For what it’s worth, at least one blogger that other posters regularly seem to refer to (Lucia) has never asserted that her tests indicate that there has been a change in an underlying trend. Rather, her focus has been solely on comparing the temperature record since 2001 to the IPCC projected trend for the first two decades of this century. This is not the same thing as has been evaluated in the post, which is whether the apparent flat line since 2001 indicates a change in the underlying long-term trend or is simply the result of expected natural variation (“noise”). Just because we reasonably conclude, based on this post, that the lack of any increase in global mean temperature since 2001 is not necessarily indicative of a change in the underlying trend, doesn’t mean that we cannot also reasonably conclude that the trend in global mean surface temperatures since 2001 is not currently consistent with IPCC projections. Two different issues which don’t necessarily conflict.

    BobN

  • dhogaza // September 19, 2008 at 3:45 pm

    doesn’t mean that we cannot also reasonably conclude that the trend in global mean surface temperatures since 2001 is not currently consistent with IPCC projections.

    I’m sorry, that’s just *wrong*. Go back to the beginning, read again, this time with comprehension in mind.

  • Bob North // September 21, 2008 at 12:30 am

    dhogaza - re-reading the portion of my post you cited, I admit I would change the word “cannot” to “could not”. In other words, the conclusions are not mutually exclusive or inherently contradictory. My comprehension of the issues at hand is just fine, but thanks for your concern.

  • Hank Roberts // September 21, 2008 at 4:19 am

    Bob North, can you rephrase what Dhog quoted, in words without the triple negative?

  • Gavin's Pussycat // September 21, 2008 at 1:37 pm

    Probably a stupid question, but is it only the physics that is ‘unorthodox’ in Lucia et al.’s alternative universe, or also the logic?

  • Bob North // September 22, 2008 at 1:54 am

    Hank, I’ll try, but you will also get the beginning of that train of thought which he didn’t quote. To rephrase my original post — From Tamino’s analysis presented here, one can reasonably conclude that there is not enough evidence to conclude that the apparent flatline in temperature since 2001 represents a change in underlying trend. However, reaching this conclusion does not logically preclude one from also concluding that the temperature trend since 2001 currently is not consistent with the general IPCC projected trend of about 0.2C/decade for the first two decades of the 21st century (which is insensitive to both emission scenarios and varying model sensitivities).

    As I stated at another blog, I think Lucia’s use of the term “falsify” is way too strong. We are not at that point yet. Where we are is down 6-0 going into half-time. Haven’t lost the game and there is still a good chance of winning but we may need to make some adjustments.

    Here’s where I stand - Nothing in the temperature trend since 2001 invalidates the conceptual framework of AGW due to GHG emissions or necessarily indicates that the models or various parameterizations are “wrong”. However, modelers and others should be looking at the data and asking “is this just noise or is there some underlying deterministic cause for the flatline?” Is it energy being used to melt the ice caps? Do we need to re-evaluate our estimates of various forcings/sensitivities? etc. etc. In other words, use this to drive further inquiry and analysis both of what we think we already know as well as what we don’t know.

  • dhogaza // September 22, 2008 at 2:44 pm

    However, reaching this conclusion does not logically preclude one from also concluding that the temperature trend since 2001 currently is not consistent with the general IPCC projected trend of about 0.2C/decade for the first two decades of the 21st century

    It’s much colder today in Portland, Oregon than it was a week ago. So using your logic, I can say with equal validity that the trend since last week is CURRENTLY not consistent with the general IPCC projected trend of about 0.2C/decade for the first two decades of the 21st century.

    If I consistently use noon-to-midnight comparisons, I’ll be able to make that statement forever, no matter how large the long-term trend might be.

    I don’t believe such statements are worth much.

  • Bob North // September 22, 2008 at 4:47 pm

    dhogaza - it is nice to know you are such a logical person and, yet, are so thoroughly able to misrepresent a position. Nonetheless, I think the primary point to consider is in the last paragraph of my most recent post.

  • dhogaza // September 22, 2008 at 6:53 pm

    dhogaza - it is nice to know you are such a logical person and, yet, are so throughly able to misrepresent a position.

    I’ve simply changed the timescale, so there’s no misrepresentation whatsoever. The point is that without statistical significance you can’t draw the conclusion you’re drawing.

  • Philippe Chantreau // September 22, 2008 at 8:38 pm

    Bob North, Dhogaza is still right. The trend per decade has to be considered over a number of decades. That’s the only way that it can be meaningful. It will take a long time for observations to truly invalidate IPCC forecasts. At least 30 years. Saying that the trend of the past 7 years is not consistent amounts to little more than a talking point. Why would you even go there?

    Although the “modelers” most likely are pondering the questions you list, it is way too early to make drastic changes in how the physics are represented in the models. A couple more decades would be necessary. We all like instant gratification and getting things done now, but there is no accelerating how fast the planet is going around the Sun.

  • dhogaza // September 23, 2008 at 3:33 am

    Although the “modelers” most likely are pondering the questions you list, it is way too early to make drastic changes in how the physics are represented in the models. A couple more decades would be necessary.

    I’ll disagree with this. I doubt the modelers are pondering the questions. Rather, atmospheric physicists, solar physicists, and the like are working hard to pin down the science.

    Modelers are *clients* in this sense … they work with what the physical scientists give them.

    This seems to be a fundamental point that denialists miss. After all, they universally seem to confuse statistical and physical models. They seem to think the modelers work in a vacuum where they twirl model knobs regardless of any connection to physical reality.

  • Hank Roberts // September 23, 2008 at 4:20 am

    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#

    http://scienceblogs.com/stoat/upload/2007/05/5-year-trends.png

    We see patterns, whether or not there’s anything there. It’s how people see the world.

    Learning how to do statistics (or how to trust those who know) is the opportunity to go beyond the limits we were born with.

  • Ben Lankamp // September 25, 2008 at 11:39 pm

    Once again, thank you Tamino for an insightful piece and also providing another DO try-this-at-home experiment. I quickly put everything in an Excel sheet (see website link) and uploaded it for others who like to reproduce results or play with the ARMA model. Though I am not entirely sure I got the ARMA right :-). The results look OK though, see http://benlanka.tweakdsl.nl/climate/arma.png and below:

    sigma 0,1123 K
    avg trend 0,0183 K/yr
    10year-x 0,0315 K/yr
    10year-n -0,0058 K/yr

    Sigma is the white noise standard deviation, the 10year-x and -n values are the highest and lowest 10 year trends found.

  • Ben Lankamp // September 25, 2008 at 11:48 pm

    Sorry, wrong table (hopefully this can be merged with the previous comment)

    stdev 0,1146 K
    avg trend 0,0182 K/yr
    10year-x 0,0521 K/yr
    10year-n -0,0266 K/yr
