Changes

I’ve often said that the modern global warming era starts in 1975. Denialists seem to love to accuse me of cherry-picking that time. In fact I’ve received some really nasty comments to that effect, most of which went straight into the trash-bin. The fact is that I didn’t pick 1975 out of thin air, nor is it cherry-picked. It’s an estimate of the time at which the trend in global temperature took its modern value. I’m not interested in answering fabricated, ignorant objections. But sincerely interested readers might want to know where the 1975 comes from.


A glance at the global temperature during the instrumental era indicates that the trend hasn’t been constant over the last 130 years or so:

The smooth line (a lowess smooth) emphasizes the longer-term variations. Clearly there’s a lot of random jitter superimposed on the trend, so no delineation of the trend can be perfect.

But it’s still a pretty good approximation to model global temperature as a piecewise-linear function. That’s just a set of straight lines. Of course, that leaves open the question of when to stop one straight line (at one slope) and start a new one (at a new slope) — these are the “change points” of the slope. A long time ago I devised a method to do so. First, choose some approximate change points; doing so by eye is good enough. Use them to split the data into intervals. In this case, visual inspection indicates there are four “episodes” of different slope, so we’ll need three change point times.

Then fit straight lines over each separate interval, independently. Extrapolate these lines to find when they intersect with the lines of their neighboring intervals. These intersection times define new change point times. Iterate this procedure.

Sometimes the process converges, which defines estimates of the change times unambiguously. Other times, the set of change points goes into an infinite loop. But often, even when it loops the differences are small and we can get good estimates of the change point times by taking the average of the “limit cycle.”
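For readers who want to experiment, here is a rough sketch of that iterate-to-intersection procedure (in Python; an illustration of the method described above, not the code I actually used):

```python
import numpy as np

def fit_changepoints(t, y, breaks, max_iter=100, tol=1e-6):
    """Iteratively refine change-point times for a piecewise-linear fit.

    Fit an independent straight line on each interval, replace each
    change point with the intersection of the two neighboring lines,
    and repeat until the change points converge (or enter a limit
    cycle, in which case averaging successive iterates is sensible).
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    breaks = np.asarray(breaks, float)
    lines = []
    for _ in range(max_iter):
        edges = np.concatenate(([t.min()], breaks, [t.max()]))
        # One least-squares line (slope, intercept) per interval
        lines = [np.polyfit(t[(t >= lo) & (t <= hi)],
                            y[(t >= lo) & (t <= hi)], 1)
                 for lo, hi in zip(edges[:-1], edges[1:])]
        # Intersections of neighboring lines define new change points
        new = np.array([(b2 - b1) / (a1 - a2)
                        for (a1, b1), (a2, b2) in zip(lines[:-1], lines[1:])])
        if np.max(np.abs(new - breaks)) < tol:
            return new, lines
        breaks = new
    return breaks, lines
```

Starting from a rough eyeball guess, the procedure homes in on the intersection times; with four episodes you would pass three initial change points.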

And that’s how I determined that the change point for the rate of global warming is at about 1975. I performed this calculation years ago, and we’ve acquired more data since then, so it’s worthwhile to repeat it and see whether or not the best-estimate change point times have changed.

Using monthly data from GISS, the average change point time for the start of the most recent slope is 1974.732. By this calculation the “modern global warming era” starts with October 1974. But let’s face it, we can’t really pin it down with that much precision. We should round to the nearest integer year, accepting that even that is only approximate. It’s just an estimate, which turns out to be — no surprise — 1975.

We can do the same calculation using annual averages rather than monthly data. This gives a slightly different result; using annual data the most recent episode (the modern global warming era) starts in 1973. This only serves to emphasize that we can’t nail down the change point time to the nearest year with certainty. But we can certainly approximate it, and both 1973 and 1975 are about as good estimates as I’d believe.

Just because we can’t be certain, to the nearest year, when the new slopes begin doesn’t mean there’s any doubt that the slopes in different episodes really are different. We can take the episodes as defined by the annual data and estimate the slope, and its uncertainty, over each. The uncertainties are computed using a white-noise approximation, but for annual (rather than monthly) data that’s not a bad approximation. And here they are:
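For the record, the per-episode slope and its white-noise uncertainty are just ordinary least squares; a minimal sketch (illustrative, not my actual code):

```python
import numpy as np

def episode_trend(t, y):
    """OLS slope and white-noise standard error for one episode.

    Assumes the residuals are uncorrelated, which is a reasonable
    approximation for annual means (less so for monthly data).
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = np.sum(resid**2) / (len(t) - 2)          # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))  # slope standard error
    return slope, se
```

Comparing slope ± 2·se across episodes makes the differences between warming rates plain.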

There’s absolutely no doubt — none at all — that the slopes in different intervals are different. And the slope in the modern global warming era (whether you choose to start it at 1973 or 1975) is the fastest.

I often say that the modern global warming era starts in 1975. I didn’t pull that number out of a hat; it’s the year chosen by the data itself. If you want to argue that it’s not accurate to the nearest year, fine — I’ll agree with that. If you want to accuse me of cherry-picking it, then you’re another one of those who doesn’t know when a choice is justified and when it isn’t.


160 Responses to Changes

  1. I have a sense of déjà lu about this post… but a timely reminder nevertheless.

  2. The long term GISS trend from 1880-2009 is about 0.7C / century. Your cherry-picked trend is several times higher than that.

    CO2 did not magically change properties in 1975. The only justification for 1975 is your personal belief system. The period from 1910 to 1940 warmed nearly as quickly as the last 30 years, and was followed by 30 years of cooling.
    http://data.giss.nasa.gov/gistemp/2010july/figure2.pdf

    1975 is a cherry-picked low point. Period. You can’t talk your way out of it. Regardless, warming has been much less than Hansen expected.

    http://wattsupwiththat.com/2010/08/13/is-jim-hansens-global-temperature-skillful/

    [Response: The final sentence of this post describes you.

    On a sad note: how hurt must you feel, having your cherry-picking shenanigans so exposed for all the world to see, to make such lame comments? You even end with another attempt to change the subject entirely -- anything to deflect attention away from your own incompetence.

    I almost feel sorry for you. Almost.]

    • I believe in the previous thread Ray Ladbury already addressed this issue.

      Ray Ladbury wrote at August 13, 2010 at 9:39 pm:

      The SCIENCE says that temperatures did not rise from the mid-30s to the mid-70s because of sulfate aerosols in fossil fuels. And what happened in the mid-70s? Clean-air legislation, and more importantly the phasing out of sulfur-rich fuels. Care to actually address the science?

      Now what happened in 1930?…. Not much, really.

      As for the properties of CO2, no one is making that up, either.

      Water vapor, carbon dioxide, methane and nitrous oxide all have distinct absorption spectra that leave their mark on outgoing thermal radiation.

      You can see it here:

      http://www.globalwarmingart.com/wiki/File:Atmospheric_Transmission_png

      … where the effects of atmospheric gases are shown on both incoming and outgoing radiation.

      By increasing CO2 concentrations, you increase the optical thickness of the atmosphere to thermal radiation. Increase the optical thickness of the atmosphere and you raise the effective height from which radiation tends to escape the atmosphere without being re-absorbed.

      Assuming a constant moist air adiabatic lapse rate and given the distance to the surface you can then calculate how much the temperature at the surface has to increase. Simply as the result of carbon dioxide itself, roughly 1°C per doubling of concentration.

      But raising the temperature will also increase the absolute humidity of the air, roughly by 8% for each 1°C and by a factor of 2 for every 10°C. And as you can see from the link on atmospheric transmission, water vapor is a greenhouse gas, so that will raise the temperature a little further; given other effects, in the end (that is, considering only the fast feedbacks) we are probably looking at something more like 3°C — according to a variety of lines of investigation.
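      The “roughly 1°C per doubling” figure can be checked with a back-of-the-envelope calculation, using the standard simplified forcing expression and a linearized Planck response (round textbook values, not an exact derivation):

```python
import math

# Radiative forcing from doubling CO2 (simplified expression)
F = 5.35 * math.log(2.0)        # about 3.7 W/m^2

# No-feedback (Planck) response: linearize OLR = sigma * T^4
# about the effective emission temperature, T_e ~ 255 K
sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/m^2/K^4
T_e = 255.0
planck = 4.0 * sigma * T_e**3   # about 3.8 W/m^2 per K

dT = F / planck                 # about 1 degree C per doubling, no feedbacks
```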
      *
      We can see the principle of optical thickness and how it applies to radiation absorbed by carbon dioxide here:

      This infrared image of the Earth was taken on 5 March 2005 after Rosetta’s closest approach to Earth by VIRTIS from a distance of 250 000 kilometres and with a resolution of 62 kilometres per pixel.

      The image shows the distribution of CO2 bands in the Earth’s atmosphere. In the green areas the CO2 concentration is enhanced.

      CO2 bands in Earth’s atmosphere
      http://www.webcitation.org/5rgo0J0X0
      http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=37190

      … or if you prefer to see it in the lab:

      CO2 experiment: Iain Stewart demonstrates infrared radiation absorption by CO2

      Elevated levels of carbon dioxide locally increase the optical thickness of the atmosphere, implying that radiation escapes at a higher, colder altitude. This is what the satellite is measuring in terms of the spectra of thermal radiation, then rendering in false color.

      But let’s say that we stop emitting carbon dioxide and let the system equilibrate. Once a new equilibrium is achieved the rate at which radiation leaves the system will equal the rate at which radiation enters the system. Neglecting changes in albedo, the brightness temperature of the earth as measured at a distance will return to what it was prior to the slug of carbon dioxide we put into the atmosphere.

      But this brightness temperature will be the temperature of the atmosphere at a higher altitude. And given the roughly constant lapse rate? The earth’s surface will be warmer — because we have increased the distance from the surface to the effective radiating altitude.

    • My dear Mr Goddard,

      I feel compelled to point out that this is not entirely a new result. Before Mr Tamino cherry-picked 1975, these guys did:

      http://www.appinsys.com/GlobalWarming/The1976-78ClimateShift.htm

      So you’re absolutely correct, it wasn’t CO2 that ‘magically changed properties’ in the mid-70s, it was the entire climate system.

      I wonder why that could have been…

    • The combination of CO2 forcing and the Atlantic Multidecadal Oscillation
      http://i50.tinypic.com/2a24ok.gif
      accounts for much of this modulation of long term temperature.

      And it’s the extreme greenhouse effect that gives Venus its high temperature, not the lapse rate.

    • Some things to consider:

      1) The climate’s response to CO2 increases takes decades.
      2) About half the increase in CO2 since the Industrial Revolution has happened in the past 30 years.
      3) The growth rate of CO2 has been doubling every 30 years since 1800 – a Moore’s Law for CO2 growth rates essentially.

      If increases in CO2 are not causing modern day global warming then two things must be true:

      i) Something unknown is suppressing the well-understood greenhouse effect (and doing so during massive increases in GHGs).

      ii) Something unknown is causing the warming that mirrors the GHE.

      So we can accept what we know to be true (AGW) or we accept two unknowns.

  3. Paging Mr. Goddard…

    :-)

  4. If you are going to pick dates for linear trends, this reasoning certainly makes a good case for 1975 (or thereabouts): however, could you also address why plopping linear trends on everything makes sense in the first place? For the most part, the climate is responding to a mash of different forcings as well as internal cycles and pseudo-cycles, so I see “linear trends” as being a decent way to compare, say, different rates of change in different time periods, but not really a way to determine “different regimes”?

    -M

    [Response: The piecewise-linear model is an approximation of the time changes, nothing more. The pieces are different "regimes" of warming rate. As for different regimes of climate or climate change ... that's another question entirely.

    Why use linear models? Because over a single interval the linear model is the simplest model which captures all the trend which can be confirmed with statistical certainty. That's all. The linear model is surely wrong -- but it's definitely useful.]

  5. David B. Benson

    stevengoddard | August 14, 2010 at 12:04 am — On the previous thread I posted decadal averages of the AMO, an index of internal variability; study it. On an even earlier thread I posted decadal averages of ln(CO2); study it, as you’ll find it quite surprising. I’ll leave it to you to consider decadal averages of
    http://data.giss.nasa.gov/modelforce/NetF.txt
    which you will also find of interest. I believe you will find that using just these indices you will see why what Tamino calls something like “the modern trend” indeed started in the 1970s.

  6. Goddard

    “CO2 did not magically change properties in 1975.”

    Saying this makes you either incompetent or dishonest. Take your pick; those are your two choices.

    You might consider how the net sum of forcings changes over time, Goddard. CO2, CH4, solar, aerosols – all of it.

    Then again, in Goddard-world, CO2 does have magical properties. It snows CO2 in Antarctica, after all.

    • Harald Korneliussen

      It snows CO2 in Antarctica, after all.

      No! He didn’t really say that, did he? I couldn’t find it when I googled (though I did find a number of other arguments I have trouble believing were made in good faith).

  7. You might consider doing this analysis on only the land temps, as that will avoid some of the large steps within the SST data.

  8. I wonder if Steve listens to himself? It doesn’t seem like it, since he answers almost all the questions he poses.

    “The period from 1910 to 1940 warmed nearly as quickly as the last 30 years, and was followed by 30 years of cooling.”

    Yes! That’s exactly what Tamino went to great trouble to explain. Since you can see these periods have different trends, why do you insist on concatenating them and producing nonsense answers?

    “CO2 did not magically change properties in 1975.”

    Well, duh! Of course it didn’t. 1975 is merely when the effect of anthropogenic greenhouse additions outstripped natural climate forcings and anthropogenic particulate pollution.

  9. Short order cook

    Referring to something Goddard said — which seems to be one of the denialist themes — about the rates of warming being the same in the different warming parts of the trend:

    I’m wondering whether anyone has done any work on whether there are upper and/or lower bounds on the warming rate.

    In an extreme example, would you expect the rate of warming to be the same over the period since 1975 if the level of CO2 had instantly been increased to 500 or even 1000 ppm? What physical parameters could be responsible for the observed warming rate?

  10. Tamino,
    Have you considered doing a likelihood fit with multiple line segments, with the slopes and intersections as variables? You could then use AIC to determine which number of segments gave the most predictive power.
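    That suggestion might be sketched like this (a hypothetical illustration assuming independent Gaussian residuals and fixed candidate change points; a full treatment would optimize over the change points too):

```python
import numpy as np

def piecewise_aic(t, y, breaks):
    """AIC of a piecewise-linear fit with the given change points.

    Each of the k+1 segments contributes a slope and intercept; the k
    change points and the residual variance add k + 1 more parameters.
    """
    edges = np.concatenate(([-np.inf], np.sort(breaks), [np.inf]))
    resid = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (t > lo) & (t <= hi)
        coef = np.polyfit(t[m], y[m], 1)
        resid.append(y[m] - np.polyval(coef, t[m]))
    resid = np.concatenate(resid)
    n = len(resid)
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    k = 2 * (len(breaks) + 1) + len(breaks) + 1
    return 2.0 * k - 2.0 * loglik
```

    Comparing AIC across candidate numbers of segments then picks the model with the best fit-versus-complexity tradeoff.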

    As to Mr. Goddard, it is true, the properties of CO2 did not change in 1975. Rather, the opacity of the atmosphere to incoming sunlight due to sulfate aerosols changed, because of legislation regarding allowable sulfur content. It is interesting to note that many climate scientists–including the late Stephen Schneider–were worried about cooling due to aerosols from the mid 60s into the 80s. This was the origin of the “predicted ice age” denialists like Mr. Goddard love to trumpet. In fact, the only reason why the cooling was not more severe was because CO2 forcing kept pace–a fact that constrains the CO2 sensitivity to be around 3 degrees per doubling.

    So, Steve, I’ve always wondered: Do you only consider one forcing at a time because it makes it easier to distort the science or is your brain really so simple that that’s all you can handle?

  11. STEVE said……….”I see that you prefer to cherry pick 1975 as the start gate…”

    Actually, the “1975″ can be read (eyeballed, as the imagined intersection of the imagined downsloping line from the left, with the imagined upsloping line on the right) from each of the 19 state summer temperature graphs.

    Given this situation, ALL of the (with enough squinting) downslope pre-1975 straight lines become upslope straight lines post-1975.

    Looking at the y-axis interpretive consequences, can’t we say that it is wrong for Steve in his graphs to bypass these 1975 break points? 19 of 19 should have some statistical status.

  12. Personal appeal to Steven Goddard:

    PLEASE keep posting here.

    It adds to your published record at WUWT.

    We love to see you dig your hole deeper.

    Meanwhile, the earth warms, the arctic icecap diminishes in volume, etc.

  13. Hook, line and sinker and then he shoves his foot in as well….
    Only Steve “it’s snowing CO2″ Goddard could be this entertaining.

  14. The 1920-1940 warming rate seems to be about the same as the “modern” trend; however, since it lasts only 20 years (about half the 35-37 years of the 1975 – 2010 trend) it is less significant. The more you smooth out shorter term variations, the smaller that trend becomes relative to 1975 – 2010.

    To illustrate my point, I filtered GISTEMP with a Hodrick-Prescott filter with a lambda of 2·10⁸ (http://i.imgur.com/YvkKb.png); then I took first differences and multiplied the resulting series by 120 (10 years/decade · 12 months/year) to obtain the decadal trend: http://i.imgur.com/Sp4H0.png
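    For anyone who wants to reproduce this, the HP trend can be computed directly from its penalized-least-squares definition (a sketch, not the code I actually ran; lambda plays the role of the smoothing parameter above):

```python
import numpy as np

def hp_trend(y, lamb):
    """Hodrick-Prescott trend of a series.

    Minimizes sum((y - tau)**2) + lamb * sum(squared second differences
    of tau); the exact solution is tau = (I + lamb * D'D)^(-1) y, with
    D the second-difference operator.  Dense solve, fine for a few
    thousand points.
    """
    y = np.asarray(y, float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)

# Decadal rate from a monthly series, as described above:
# rate = np.diff(hp_trend(monthly, 2e8)) * 120
```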

    • yyl – you will need to step outside your current belief system to understand why the 1920-1940 warming period lasted only half as long as the modern warming period.

  15. sg: CO2 did not magically change properties in 1975.

    BPL: He didn’t say it did. You’re assuming CO2 is the only thing that affects temperature. It does. The 1940s cooling and ’50s-’60s plateau was caused by stratospheric aerosols from industry and transportation.

    sg: The only justification for 1975 is your personal belief system.

    BPL: Which part of “he did statistical tests for breaks in the relation” do you not understand?

  16. Sorry, that “it does” above should read “it doesn’t.” It’s early…

    • “You’re assuming CO2 is the only thing that affects temperature. It doesn’t.”

      My wife tells me that my grammar has been permanently damaged by my attempt to learn German, and I am often tempted to use “which” where I should use the word “that”, but in this case I am fairly sure that “It isn’t” is more appropriate.

      “Does(n’t)” refers to action, what a thing does (not do) , in this case it would be the same action as what you previously referred to as “affects temperature.” So to say “It does(n’t)” in this context would mean that “CO2 does (not) affect temperature”. In contrast, in “It is(n’t)” the word “is(n’t)” would refer to the identity of CO2 . In this case, “It is (not) the only thing that affects temperature.”

  17. Amazing how Goddard remains unabashed. Evidently the Dunning–Kruger effect confers some kind of immunity to public humiliation.

    • If he believes he is smarter than the thousands of scientists who have studied for a fair number of years, acquired PhDs, performed detailed studies of various aspects of physics and published research in climatology — in many cases for decades — it takes only a little more fantasy on his part to believe that he is smarter than any one of us or all of us combined. So what does a little mocking matter to someone who is quite clearly the greatest genius of our age? People mocked Einstein — so this serves only to underscore the greatness of his intellect.

    • “Amazing how Goddard remains unabashed.”

      It’s true, Adam, and I would say it reflects his denialist fanaticism more than anything else. He’s not here to argue statistical analysis. We can’t forget that Goddard has a very deep-seated contempt of mathematics and mathematical analysis. His responses always have to be considered with both of those characteristics in mind.

  18. It is also impressive that many people are unaware that the CO2 release is accelerating still, so that the CO2 effect should not be linear. An important factoid–of the 100 ppm of anthropogenic CO2 in the atmosphere, the first 50 ppm buildup took about 225 years, from 1750 to 1975. The second 50 ppm buildup took only 31 years, from 1975 to 2006.

    • Given that radiative forcing is (roughly) proportional to the logarithm of CO2 concentration, if the concentration of carbon dioxide were to double every 50 years, the radiative effect of carbon dioxide would grow linearly, with temperature rising as a linear function of time.
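      This is easy to verify numerically with the standard simplified forcing expression F = 5.35 ln(C/C0) (round illustrative numbers, not taken from the comment itself):

```python
import math

C0 = 280.0                                    # pre-industrial CO2, ppm
forcings = []
for yr in range(0, 201, 50):
    C = C0 * 2.0 ** (yr / 50.0)               # doubling every 50 years
    forcings.append(5.35 * math.log(C / C0))  # forcing in W/m^2

# Exponential concentration => linear forcing: each 50-year step adds
# the same 5.35 * ln(2), about 3.7 W/m^2
increments = [b - a for a, b in zip(forcings, forcings[1:])]
```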

  19. The only justification for 1975 is your personal belief system.

    Actually, this is a nice insight. As a professional statistician, I imagine that Tamino’s personal belief system includes a healthy respect for impartial statistical analysis …

    • That is a very astute observation, dhogaza. Tamino’s belief system revolves around performing correct analysis and letting the data lead to the conclusions. Goddard’s personal belief system is to cherry pick the analysis to reach the conclusions he wants.

  20. Guys, stop asking why Goddard cannot see facts that are staring him in the face. Ask instead what his financial incentive is for denying reality.

    • You don’t need financial incentive to act like Goddard, and it’s better if nobody insinuated that you do. A lot of those guys are on their own time and dime, just like anybody else who isn’t actually in the field by profession.

      • arch stanton

        I agree – “it’s better if nobody insinuated that you do”. Speculative insinuations are the hallmark of many of the posts at Anthony’s and should be a red flag for anyone reading there.

        Likewise, I have absolutely no reason to suspect Goddard’s blast-offs are “liquid fueled”.

    • I believe in his case the incentives are more likely psychological — and at least in part ideological as he appears to be libertarian. And personally I doubt he is growing rich off of shares in Exxon or North American Coal.

    • It need not be financial gain at all; if the last century of history shows anything it is that people will go to great lengths to defend their world view/ philosophy etc, for no monetary reward at all. Some denialists I’ve met online seem motivated solely by reaction against dirty hippy environmentalists.

  21. Look at us all trying to psychoanalyse Mr “Goddard”. If he wasn’t such good comedy value, we would have dropped him in the troll bin long ago, and brought his 15 minutes of internet sunshine to an abrupt end. And make no mistake: he displays all the defining troll characteristics. Not remotely interesting, really.

    • And make no mistake: he displays all the defining troll characteristics.

      Certainly every one of those you’ve listed.

      I suppose we might add to your list pretending to be an expert where he is not and then attacking mainstream science, but mere quacks share this characteristic as well. I know that you have referred to him as a troll on various occasions. I am not sure anyone else ever has though.

      Regardless of whether or not he is a troll, it might help to ask why he has been “successful” among certain crowds. He has been given a position of considerable prominence at WUWT — but why is it that people continue to listen to him once he’s there?

      Why has he been treated as some sort of science expert — when most quacks aren’t afforded the same respect? And if you say that he and his audience are a product of modern-day conservatism, what is it that has made conservatism so virulently anti-science nowadays? The influence of libertarianism?

      And if libertarianism is so influential what has made that possible?
      *
      Doubt is their product. Tobacco and a host of health problems, CFCs and the destruction of the ozone layer, DDT and cancer, dioxins and both birth defects and cancer, asbestos and lung disease, fossil fuel and global warming, etc.. The ability to sow doubt has helped companies avoid or at least postpone regulation by government. Organizations have been funded to sow that doubt.

      But ideology is a strong motivator as well — and they can’t possibly fund all the people that are needed to push their propaganda into the news and affect public opinion. So they have promoted the ideology of libertarianism since the 1970s using the same organizations that they have used to sow doubt. An ideology that as a matter of principle is opposed to government regulation.

      Given the ideology of libertarianism, whenever industry has faced scientific facts that it deemed inconvenient it has had a willing army of believers who are ready to regard such facts as nonexistent or at worst irrelevant.
      *
      Not saying that we should spend a great deal of time on Goddard. But we should at least call a spade a spade.

      And rather than simply dismiss or denigrate the wider phenomena of denialism that we find inconvenient it might be best if we take a little time out to understand it on occasion. Perhaps so that we can help others understand why people — even some prominent scientists — are willing to embrace nonsense and discard all the evidence — even when so much is at stake.

      • Tim, you have a good point. It seems like terms like “libertarianism” and “objectivism” provide intellectual cover for good, old-fashioned greed. It is interesting to observe left-leaning libertarians, who believe that under a libertarian society things like drugs will be allowed, but who for the sake of consistency (these people are almost fundamentalists in their belief in the free market) argue against societal (governmental) controls, even through taxation, cap and trade, etc.

        It is interesting to see who cites these “free market” think tanks and who supports them financially.

  22. Didactylos | August 14, 2010 at 7:01 pm | Reply
    Look at us all trying to psychoanalyse Mr “Goddard”. If he wasn’t such good comedy value, we would have dropped him in the troll bin long ago, and brought his 15 minutes of internet sunshine to an abrupt end. And make no mistake: he displays all the defining troll characteristics. Not remotely interesting, really.

    Goddard per se, no. But Watts keeps giving him space in spite of his record of ludicrous incompetence. That speaks volumes about Watts, who is an important figure in the deniosphere. That is what makes Goddard interesting.

    Timothy Chase and guthrie have Goddard pegged. He is a self-motivated, garden variety science crank, nothing more complicated than that.

  23. Dikran Marsupial

    Adam R.: “That speaks volumes about Watts, who is an important figure in the deniosphere.” To be fair, WUWT has also had some sane posts recently, for instance one on CO2 measurements at Mauna Loa, and a series of articles by Ferdinand Engelbeen pointing out that, yes, CO2 levels are rising because of anthropogenic emissions, folks! Sadly there are some who can’t even accept that.

    • Odd, isn’t it? Anthony seems all at sea these days, as if he cannot tell land from water.

      How clueless must one be, to be unable to gauge Englebeen’s credibility against Goddard’s? Watts-level clueless, it would seem.

    • It is somewhat interesting to see that WUWT has been taking up some really freakish arguments in the deniosphere. Roy Spencer also has taken up defending the greenhouse effect, and even took on Miskolczi. Neither was greeted with enthusiasm, it should be said…

      I’m not sure what that’s all about, but I think there is such an outpouring of outright crackpottery (which is different from Goddard’s incompetence), that some ‘skeptics’ are getting a bit scared of being ranked amongst those. Better to say “Hey, I’m not like them, I also criticised these nutters!…(but continued my other poorly/nonsubstantiated argumentation)”

  24. Looks like WTFWT is a-flutter about some new paper that supposedly shows the “hockey stick” is broken. Not being a statistician myself, I wonder how Tamino can illustrate and illuminate it for the rest of us…

    http://wattsupwiththat.files.wordpress.com/2010/08/mcshane-and-wyner-2010.pdf

  25. New paper on the Mann reconstruction: to be published in the Annals of Applied Statistics .

    A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

    http://tinyurl.com/AAS-paper

    Abstract. Predicting historic temperatures based on tree rings, ice cores, and other natural proxies is a difficult endeavor. The relationship between proxies and temperature is weak and the number of proxies is far larger than the number of target data points. Furthermore, the data contain complex spatial and temporal dependence structures which are not easily captured with simple models.

    In this paper, we assess the reliability of such reconstructions and their statistical significance against various null models. We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.

    We propose our own reconstruction of Northern Hemisphere average annual land temperature over the last millennium, assess its reliability, and compare it to those from the climate science literature. Our model provides a similar reconstruction but has much wider standard errors, reflecting the weak signal and large uncertainty encountered in this setting.

    • You know, I can’t take something seriously when it says that proxies “predict” temperature. Even I know it’s more like “are representative of local temperature”.

      Then they say: “On the one hand, this is peculiar since paleoclimatological reconstructions can provide evidence only for the detection of AGW.”

      Eh? You don’t need proxy reconstructions to have evidence for AGW; their value is in comparing past temperatures with modern ones and working out the factors which affect temperatures, as well as climate sensitivity. They even admit a sentence later that the main evidence for AGW comes from basic science.

      That they then say, “On the other hand, the effort of world governments to pass legislation to cut carbon to pre-industrial levels cannot proceed without the consent of the governed,” suggests that they are carrying out a political exercise, not a scientific one.

      The above is what I see wrong with it. I’m not even looking at the stats; I don’t know enough about that.

      • Rattus Norvegicus

        Yeah, that paragraph really made my hair stand on end. One thing I noticed is that they did not benchmark their method against GCM-generated pseudo-proxy data, so we really have no idea of how it performs against a known answer. They did point to a paper by Li, Nychka and Ammann which is doing similar work with a Bayesian model. The main difference between the two is that Li et al. is informed by actual knowledge of the subject…

      • It’s even worse: in their Section 3.3 they completely mischaracterise the point of pseudoproxy benchmarking.

    • Typhoon, As I refuse to follow any link to WUWT, I’m afraid I cannot comment in detail. However, borehole temperatures certainly support the reconstructions back at least to 1500.

      I would not be terribly surprised if the errors were broader. However, I would also note that when you have many separate reconstructions that get pretty much the same trends, that also ought to increase confidence.

      Also, I would ask why the denialosphere is rejoicing over this result. The Hockeystick is still there–we just don’t know how long the shaft is. And if the authors are right, then it makes it virtually impossible to provide evidence for an MWA. Finally, even if the hockeystick were broken, how would that change things? Ice is still melting, CO2 and temperatures are still rising, and all of the overwhelming evidence that says we are warming the globe remains intact. It would seem that not only are the WUWT denizens incapable of scientific reasoning, but also of logical thought.

      • They are using Mann’s hockey-stick data and proxies, and accepted this data as reported in his papers as background for the statistics they present.

  26. Any of you guys want to come on over and debate the latest posting on that well-known contrarian blog?

  27. Ray Ladbury and BPL,

    Where can I find the data set that documents the changes to opacity of the atmosphere relative to incoming sunlight due to sulfate aerosols?

  28. One is disappointed to see that some well known denialists, McShane and Wyner, have managed to scrape a paper through the peer review process which is critical of Michael Mann’s work. A great pity.

    Bayesianism as employed here is the last refuge of statistical scoundrels, mostly ferocious right-wing neoliberals. This is no exception.

    When you have time, Tamino, one of your clear basic ordinary language refutations will be found helpful by many readers. The paper is wrong, but obscure and over complicated, and it needs elucidation and commonsense refutation for many people.

    • lerogue,

      I am interested to hear about the track record of denialism which you say that McShane and Wyner have.

      I would sincerely appreciate a response with some links.

      John

    • lerogue wrote:

      When you have time, Tamino, one of your clear basic ordinary language refutations will be found helpful by many readers.

      Howdy, stranger! I see this is the first thread that you have ever participated in at Tamino’s. Welcome.

      lerogue wrote:

      One is disappointed to see that some well known denialists, McShane and Wyner, have managed to scrape a paper through the peer review process which is critical of Michael Mann’s work.

      “Well known denialists”? I am not pulling up anything on either of the two authors’ names having anything to do with global warming — prior to this paper. What have you seen that I haven’t?

      lerogue wrote:

      Bayesianism as employed here is the last refuge of statistical scoundrels, mostly ferocious right-wing neoliberals. This is no exception.

      How is it being “employed here” that differs from its valid uses?

      • lerogue’s “well known denialists” seems to be a bit strong, but Wyner repeated many of the denier talking points on “Politically Incorrect Statistics”.
        http://picstat.blogspot.com/
        Wyner’s moniker there was Adi, you can click on the link on that blog to verify. All this info and more was dug up by John Mashey over at deltoid.

      • bluegrue wrote:

        lerogue’s “well known denialists” seems to be a bit strong, but Wyner repeated many of the denier talking points on “Politically Incorrect Statistics”.

        I know. Gave an example here. But lerogue’s performance (including the feigned familiarity) was way over-the-top — a bit like his more recent:

        In addition to this, rises in temperature also lead to rises in water vapor, and these also cause further rises in temperature. It is not simply rises caused by CO2 that do this, all rises do it. Once this thing gets started, it has a snowball effect. This is why warming is so very dangerous.

        … and:

        The total effect is roughly that for every doubling of CO2, the temperature will rise by a little over 1 degree C, and the other effects will take this up to somewhere in the region of 4 degrees C or more. You can imagine, if there is some other warming already going on for some other reason, the results can be very large.

        “Once this thing gets started, it has a snowball effect. This is why warming is so very dangerous,” and, “You can imagine, if there is some other warming already going on for some other reason, the results can be very large”? Runaway global warming, oh my!

        Previously “we” were fanatics. Now “we” are alarmists, too. And the “4 degrees C or more”? It might be true, likely even is once you take into account the “slow” feedbacks, but the consensus centers on 3 with Charney. So it leaves “us” looking like we don’t know what we are talking about.

        Fanatics, alarmists and fools. At least he’s ambitious.

      • CORRECTION

        The first two sentences of my response to bluegrue read, “I know. Gave an example here.”

        The link in here should have been to here. As it was, the link was broken and took you to Open Mind’s home page.

      • I managed to overlook that link, Timothy, sorry.

      • bluegrue wrote:

        I managed to overlook that link, Timothy, sorry.

        And that is a good thing, too, as it would have taken you to the wrong spot!

        If you mean what was at the end of that link: I certainly don’t expect you to read everything someone has posted in a long thread before responding to them, myself most certainly included. But I like to do things right, and if I mistype a letter in the word “blockquote” within a tag or leave a broken link, then I screw up what I have written, making it more difficult for others to follow my thoughts. And not just you, but anyone who is following the exchange and decides to click the link.

        So it’s important to do things right — and, all things being equal, correct yourself when possible. But you should also realize that if you correct yourself too often there is that annoyance factor for people who are more generally just following the thread. They may not have much, if any, interest in your particular exchange. A broken link? Pretty bad, but easily corrected. A busted blockquote? Probably worse, but more difficult to correct, usually requiring a lengthier response.

        Or at least that’s the way I tend to think of it.

  29. “the proxies seem unable to forecast the high levels of and sharp run-up in
    temperature in the 1990s”

    We expect climate proxies to have psychic powers, now? That’s a new low in straw-man efforts.

    Presumably what is really meant is that if trees aren’t coping with high temperatures and recording them now, how could they do so in the past? And the answer is threefold: most trees show the modern temperature rise clearly, trees old enough to provide ring data evidently survived any temperature rises they experienced, and third, we aren’t depending only on tree data.

    As for the nonsense about “much wider standard errors” – what’s new about this? Does this paper actually break new ground, or is it a tired old retread of long forgotten arguments?

  30. B. Buckner,
    You can start with this:
    http://en.wikipedia.org/wiki/File:Climate_Change_Attribution.png

    There’s also Schneider’s ’71 paper for a historical perspective. A couple of other references to get you started:
    Gerald Stanhill and Shabtai Cohen, Agricultural and Forest Meteorology
    Volume 107, Issue 4, 19 April 2001, Pages 255-278

    http://www.ldeo.columbia.edu/~liepert/pdf/2003GL019060.pdf

    R. E. Carnell and C. A. Senior, “Changes in mid-latitude variability due to increasing greenhouse gases and sulphate aerosols,” Climate Dynamics
    Volume 14, Number 5, 369-383, DOI: 10.1007/s003820050229

  31. I find it a bit strange that a paper that has not yet been actually published is being spun so hard. Links to it are popping up all over the place. Hardly a conventional thing in academia – well certainly not over here in the UK!

    I look forward to a readable review of it here or at RC (on WUWT it’s all welcome-with-open-arms, “hallelujah! it’s broken” type whooping)…. methinks they readeth too much into it!

    Cheers – John

    • John,

      When falling over a cliff, one will grasp at any sapling, no matter how unlikely it is to support one’s hope of stopping the fall toward certain death.

      • Very true indeed, Scott!

        “Michael Mann is a WITCH!”
        “How do you know?”
        “He turned me into a newt”.
        “You don’t look like a newt”.
        “I got better….”

        Cheers – John

    • So… if the reality-based community (I can think of no other term) based their acceptance of AGW on one paper, which opened with a few paragraphs of political advocacy, and had yet to actually be published, AND was written by non-climatologists… the ‘skeptic’ crew would be happy to accept that?

      Hmmm.

      • I did try posting on WUWT last night, not something I do often, suggesting that people waited until the paper was published and other specialists in the relevant field had formally responded before arriving at a considered opinion – as per standard academic procedure.

        It didn’t make me that popular, although some of the membership did seem to broadly agree.

        I feel that this practice of circulating a draft MS around the blogosphere prior to publication – for any reason – is an unwelcome development in this or any other scientific discipline. It encourages the exact opposite to objectivity.

        Cheers – John

  32. It’s amazing how much nonsense can be spread in the broader public.

    No denier can demonstrate CO2 does not obstruct infrared – that would be denying basic physics.

    No denier can refute the dozens of independent studies that agree on climate sensitivity.

    All that remains is cherry-picking and focusing on details that allow you to get distracted from the big, obvious picture.

    It’s like going to the beach and saying “hey, but if we look the other way, it’s like there’s no sea! And no one can prove otherwise!”

    Not a conversation worth having.

  33. I have the same question as B Buckner above:

    Ray Ladbury and BPL,

    Where can I find the data set that documents the changes to opacity of the atmosphere relative to incoming sunlight due to sulfate aerosols?

    (I am particularly interested in the 1940 to 1980 period, as I want to see the data that shows the drop in worldwide opacity as stated earlier in this thread)

  34. Hi Tamino,
    I have just seen this reported as a pending new paper. I wonder what your thoughts are about it?
    “A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?”
    http://www.e-publications.org/ims/submission/index.php/AOAS/user/submissionFile/6695?confirm=63ebfddf

  35. Given that proxies are trained and tested on the modern instrumental record, one has a hard time understanding the claim that they don’t predict it. If there is any argument buried in the mud, it is that the proxies have not been well calibrated.

  36. “…our model offers support to the conclusion that the 1990s were the warmest decade of the last millennium,…”

    It is an interesting paper and may influence how proxy reconstructions are done in the future. Mann’s papers already had large error bars – maybe they should be larger; it will be interesting to see how he responds. It does not change the fact that CO2 warms the earth and we need to be thinking about what to do about our CO2 emissions.

    My guess is this paper will help refine how proxy studies are done, but the dust is far from settled. Academic debates like this can be very healthy. Unfortunately charlatans with political agendas will try to use such debates to undermine science. You see this quite often when creationists point to legitimate debates in biology to try to discredit evolution.

  37. Interesting comment on CNN debate

    Gavin and Economist Sachs say we have the technology and act now. Michaels says we should just wait to see if technologies come up…

    Fareed Zakaria: “Mr. Michaels, is your research funded by oil companies?”

    Patrick Michaels: Not much of it.

    Fareed: “Mr. Michaels, how much of your research is funded by oil companies?”

    Michaels: I don’t know, 40%.

  38. I did a first read of the McShane & Wyner paper. Don’t know if you want it discussed here. I am not a climatologist or a statistician.

    I think everyone agrees there are large uncertainties when “backcasting” global temps a thousand years. M&W take issue with how proxies are calibrated with the instrumental record. If you delete a block of data, you can try to “predict” it by using the remaining temp data (and its derivative) or by using your proxies. You’d like the proxies to do better than interpolation. What M&W call pseudo-proxies or “fake” data is really a form of interpolation; they aren’t really using pure noise. Climatologists, according to M&W, block out a 50-year window in the middle of the instrumental record. M&W block out various 30-year windows. With the shorter window, they claim, interpolation does as well as the proxies, and therefore the proxies are of little use. It makes sense to me that interpolation would do better with a shorter window length. So I don’t see the paper as punching that big a hole in the “hockey stick” constructions. I look forward to seeing what people who know more about this than I do have to say.
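That intuition, that straight-line interpolation across a blocked-out window does better as the window shrinks, is easy to illustrate on synthetic data. A minimal sketch (made-up data and a toy holdout test, not M&W's actual procedure):

```python
import math
import random

random.seed(0)

# Synthetic "instrumental record": an accelerating trend plus white noise
n = 150
temps = [0.0005 * t * t + random.gauss(0.0, 0.05) for t in range(n)]

def holdout_rmse(series, width):
    """RMSE of straight-line interpolation across a centered blocked-out window."""
    start = (len(series) - width) // 2
    end = start + width
    y0, y1 = series[start - 1], series[end]
    errs = []
    for t in range(start, end):
        frac = (t - (start - 1)) / (end - (start - 1))
        pred = y0 + frac * (y1 - y0)
        errs.append((pred - series[t]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# A short holdout window is far easier to "fill in" by interpolation alone
rmse_short = holdout_rmse(temps, 30)
rmse_long = holdout_rmse(temps, 90)
```

On data with any curvature, the short window's interpolation error is dominated by noise while the long window's is dominated by the missed trend, which is the commenter's point about 30-year versus 50-year holdouts.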

    • I did a first read of the McShane & Wyner paper. Don’t know if you want it discussed here. I am not a climatologist or a statistician.

      Don’t let that stop you. Your statistical intuition is clearly better than McShane and Wyner’s, or their referees’. Your point that 30-year interpolation doesn’t require proxy information is spot-on.

    • PolyisTCOandbanned

      I’m not so sure of that. With annualized data and proxies, and a phenomenon that has a fair amount of year-to-year variability regardless of the long-term trend (look at the recent stall, but still trending up long term), one would hope to get some calibration showing that proxies can match those gyrations. If not, then the degrees of freedom become much less than the years would imply.

      • You’re right, a proxy-based interpolation should do a little better than a noise-based interpolation. However the difference will be a lot smaller than in the extrapolation case.

        But the Lasso M&W use to match the gyrations is designed to pick out a small subset of proxies. This results in a high-variance reconstruction, which is going to mean high (i.e. bad) RMSE scores.

  39. The Policy Lass has a thread on this paper.

  40. Having started to read the paper, Eli’s initial take is that the place you want to calibrate is where the “independent” variable is changing the most. This paper does the opposite, which means that the dynamic range over which they test the response is minimal. Under such conditions one does not expect a precise result.

    • PolyisTCOandbanned

      Yes, but ideally you will do more than just match two long trends. Matching wiggles shows that you really have some correspondence, not a chance match. Zorita has made this point before.

      • Take a look at Tamino’s fit above for the answer. You basically found the answer at Policy Lass, but the graph makes clear why M&W’s method finds that noise is a better fit to the global temperature series than proxies. Eli has posted this as a puzzler.

        Entries welcome

      • Gavin’s Pussycat

        You do not want to do trend matching at all; you want to produce a match using only temperature info, not temporal dependence. This is another thing the M&W paper gets wrong (in spite of talking about it!), and is a once-removed cousin to the r^2 vs RE kerfuffle. Unfortunately the temporal stuff “leaks” across the borders between cal and val periods, which is why having the val block in the middle is a particularly bad idea. Another reason for this, and for using the RE metric, is that you want to test if the recon is able to get the absolute level about right even if it is completely different from that of the cal period. “Wiggles” are of little interest.

  41. Regarding CO2 as a forcing, I thought the borehole data showed CO2 lagging temperature. Why does this blog presume the opposite?

    [Response: Why do you comment without knowing what you're talking about?

    First: it's not boreholes, it's ice cores that show that during ice age glaciations/deglaciations, CO2 increase follows temperature increase. Second: this in no way contradicts global warming due to CO2, because the two factors (CO2 and warming) are each both cause and effect -- or is that too complicated for you? Third: the fact that during ice ages temperature change would precede CO2 change was predicted, before it was observed, by mainstream climate scientists including Jim Hansen.

    The "CO2 lags temperature so global warming is wrong" blarney is one of the stupidest arguments in existence. Not only did you fall for it, you repeated it.]

    • I actually heard John Christy make that same false argument in answer to a question from an elderly lady at UAB a couple of years ago! That tells you all you need to know about John Christy.

    • “I thought the borehole data showed CO2 lagging temperature. Why does this blog presume the opposite?”

      To give a more basic explanation, it’s a bit like this. We start out with some event which causes a rise in temperature. All rises in temperature cause a subsequent rise in CO2 levels. These rises can have any cause; the cause need not have anything to do with CO2. Whatever their cause, they will lead to a rise in CO2.

      Any rise in CO2 also causes a rise in temperature. It has this effect whether it is itself the result of a prior rise in temperature, or, as in modern times, is the result of human activity by burning fossil fuels.

      You can now see the sequence of events. The temperature rises for whatever reason. This is followed by a rise in CO2 levels. This rise in CO2 levels then causes a further rise in temperature. This accounts for the lag. It takes a while for the first rise in temperature to cause the rise in CO2 levels which produces the second rise. It is a bit like a hand grenade. You arm it, then there is a pause, then the detonator kicks in and it explodes. CO2 is the detonator; it is just that it is being triggered by something other than itself.

      In modern times, things are really different. This time we have rises in CO2 which are not, or not mainly, due to temperature rises. Instead what has happened is that the rise in CO2 has occurred for a quite different reason, human activity, and this is now leading to temperature rises.

      In addition to this, rises in temperature also lead to rises in water vapor, and these also cause further rises in temperature. It is not simply rises caused by CO2 that do this, all rises do it. Once this thing gets started, it has a snowball effect. This is why warming is so very dangerous.

      You can now see that back in the time you are speaking of, a sequence of things occurred. The first thing was that temperatures rose. This led to a rise in CO2 levels, which caused further rises. At the same time, these rises (both of them) produced rises due to increased water vapor.

      The total effect is roughly that for every doubling of CO2, the temperature will rise by a little over 1 degree C, and the other effects will take this up to somewhere in the region of 4 degrees C or more. You can imagine, if there is some other warming already going on for some other reason, the results can be very large.

      So you can now see how very important it is to avoid raising CO2 levels, and how we may already have raised them beyond the critical point, and how the right thing to do now is get them back down to where they were in about 1850 as fast as we reasonably can.

      Hope this helps.
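The arithmetic in the comment above can be checked with the standard logarithmic response approximation. A toy calculation (the 1.2 and 3.0 degrees-per-doubling values are illustrative assumptions, not figures from the thread):

```python
import math

def warming(c_now, c_ref, sensitivity_per_doubling):
    """Equilibrium warming for a CO2 change, assuming a logarithmic response."""
    return sensitivity_per_doubling * math.log2(c_now / c_ref)

# No-feedback response: "a little over 1 degree C" per doubling, as the comment says
no_feedback = warming(560.0, 280.0, 1.2)

# With water-vapor and other fast feedbacks the Charney consensus centers near 3 C
with_feedbacks = warming(560.0, 280.0, 3.0)
```

Because the response is logarithmic, a doubling from any starting concentration gives the same warming; the feedbacks multiply the no-feedback number rather than adding a fixed amount.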

  42. I embedded a nice Java applet by CTG Software on my Global Cooling page. Scroll about half-way down to use the applet.

    The applet shows GISSTEMP or HadCRUT data and allows one to move a slider to view slopes for different time intervals. Very useful to show why shorter intervals will show cooling trends inside the long term warming trend.
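The applet's point, that short windows inside a long warming trend can show apparent cooling, can be reproduced with ordinary least-squares slopes on synthetic data (a sketch on made-up numbers, not the applet's code):

```python
import random

random.seed(1)

# Synthetic record: +0.02 per step long-term trend plus noise
n = 120
series = [0.02 * t + random.gauss(0.0, 0.3) for t in range(n)]

def ols_slope(y):
    """Ordinary least-squares slope of y against its index."""
    m = len(y)
    xbar = (m - 1) / 2
    ybar = sum(y) / m
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(m))
    return num / den

full_slope = ols_slope(series)
# Slopes of every 10-step window: some come out negative despite the warming trend
short_slopes = [ols_slope(series[i:i + 10]) for i in range(n - 10)]
```

The uncertainty of a trend estimate shrinks rapidly with window length, so the full-record slope is solidly positive while many 10-step windows are dominated by noise.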

  43. Gavin’s Pussycat

    The Puss recommends that Eli and the other bunnies proceed through to the end of the paper where the reconstructions live, and try to answer the following questions:

    1) Why did McShane and Wyner regress their proxy PCs against the global mean temperature time series rather than against the whole NH temperature field over time, like Mann does? In other words, why throw away all that detailed info and calibrate against aggregated data only?

    2) Why did M&W not notice that their calibration using the first PC of the instrumental field, rather than the global mean of instrumental, gives a reconstruction indistinguishable from Mann et al. 2008?

    3) …and why did they not use this calibration — also aggregated but apparently better — in their Bayesian run, when it also clearly gives the best fit even by their crippled (because again, taken relative to the instru mean time series, not the whole field) RMSE criterion?

    Inquiring furry animals want to know.

    • PolyisTCOandbanned

      What does it mean to “regress against the whole field”? Are we talking about the calibration step? And how do you calibrate against a field? (Not even debating, curious).

      • Gavin’s Pussycat

        TCO, what I mean is that in the RegEM formalism, the proxies are calibrated separately against each 5×5 degree grid cell’s instrumental data, and the temperature reconstruction is done separately for each cell. Only afterwards is the areal average over all these reconstructed cell values computed for each epoch, producing the global-average reconstruction time series. A very different procedure from directly calibrating the proxies against the global/hemispherical average.

      • Gavin’s Pussycat

        …I mean of course the RegEM formalism as implemented in Mann et al. 2008, i.e., EIV.
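The per-cell-then-average idea can be illustrated in miniature: calibrate each cell's proxy against that cell's local temperature, reconstruct each cell, then area-average the reconstructions. A toy sketch with two equal-area cells and one proxy each (an illustration of the idea only, not RegEM/EIV itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "field": two grid cells with different local temperature histories
years = 100
cell_a = np.cumsum(rng.normal(0.01, 0.1, years))
cell_b = np.cumsum(rng.normal(0.02, 0.1, years))

# One proxy per cell, linearly related to its local temperature plus noise
proxy_a = 2.0 * cell_a + rng.normal(0, 0.1, years)
proxy_b = -1.5 * cell_b + rng.normal(0, 0.1, years)

def ols_fit(x, y):
    """Slope and intercept of y ~ x by ordinary least squares."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return slope, y.mean() - slope * x.mean()

# Local calibration: regress each cell's temperature on its own proxy,
# reconstruct each cell, and only then average over the cells
recons = []
for proxy, cell in [(proxy_a, cell_a), (proxy_b, cell_b)]:
    slope, intercept = ols_fit(proxy, cell)
    recons.append(slope * proxy + intercept)
local_then_average = np.mean(recons, axis=0)

global_mean = (cell_a + cell_b) / 2
```

Note the second proxy is negatively oriented; local calibration recovers its sign automatically, information that is blurred if every proxy is regressed directly on the global mean.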

    • Gavin’s Pussycat,
      I have been meaning to ask you something. About a month and a half ago you gave me some advice on combining station data. First I was meaning to thank you because it really helped me out. Secondly, I was going to ask whether you had any way of perhaps showing the mathematical basis behind why the method works. I plan on using this method in a paper so it would help with my methods section. You would certainly find yourself in the Acknowledgements section.

      The method you taught me was to use the average of all my raw (yearly) data and then calculate offsets between the individual station data where it overlaps with the average of all the yearly data. Then adjust the raw data by the offsets (per station) and iterate until the offsets equal 0. (One last question, once the offsets equal 0 or 0.03 in my case, do I adjust the data one last time or leave it as is?)

      • Gavin’s Pussycat

        Robert,

        I’m happy you worked it out… about the math basis, see Hansen & Lebedeff 1987, section 3:

        http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf

        The method used there is slightly different, but the idea is almost the same: they weight by distance, but you can just put Wn = 1 (Eq. 2).

        Formally what you’re doing is solving the observation equation:

        y(t,i) = T(t) + B(i) + e(t,i), where

        T(t) is the temperature anomaly as a function of time, B(i) the bias (or offset) vector, and y(t,i) the observed temperatures. When you’re finished, the square sum of the residuals

        e(t,i) = y(t,i) – T-hat(t) – B-hat(i)

        will be minimized. T-hat and B-hat are estimators.

        (One last question, once the offsets equal 0 or 0.03 in my case, do I adjust the data one last time or leave it as is?)

        Hmmm… I would say, look at how your solution T-hat(t) behaves. Assuming that your computation time is cheap, just go on until the change per step is clearly smaller (say one-tenth, as a rule of thumb) than the error you would be happy with. That depends on the speed of convergence though. Slow convergence calls for a tighter criterion.

        You will want to refer the final result to a reference period 1961-1990.
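The iterative offset procedure discussed in this exchange can be sketched on made-up data: two stations with fixed biases and partial overlap, offsets recomputed against the all-station mean until they vanish. An illustration of the procedure as described, not anyone's production code:

```python
# Two stations recording the same anomaly signal with different fixed
# biases and different periods of record (None = missing year).
true_signal = [0.0, 0.1, 0.3, 0.2, 0.4, 0.5, 0.6]
stations = [
    [t + 1.0 for t in true_signal[:5]] + [None, None],   # biased +1.0, early years
    [None, None] + [t - 0.5 for t in true_signal[2:]],   # biased -0.5, late years
]

def combine(stations, iterations=50):
    """Iteratively remove per-station offsets relative to the all-station mean."""
    data = [row[:] for row in stations]

    def yearly_mean(rows):
        return [sum(v for v in col if v is not None)
                / sum(1 for v in col if v is not None)
                for col in zip(*rows)]

    for _ in range(iterations):
        mean = yearly_mean(data)
        for s, row in enumerate(data):
            # Offset = station's average difference from the mean over its overlap
            diffs = [v - m for v, m in zip(row, mean) if v is not None]
            offset = sum(diffs) / len(diffs)
            data[s] = [None if v is None else v - offset for v in row]
    return yearly_mean(data)

combined = combine(stations)
```

The combined series recovers the true signal only up to an additive constant, which is why the final result is referred to a reference period; the year-to-year differences, however, converge to the right values.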

      • I think you just went over my head, haha.
        y(t,i) = T(t) + B(i) + e(t,i) makes sense to me. The observed temperature is equal to the temperature anomaly as a function of time + the bias (offset) + the square sum of the residuals?

        I don’t actually know what residuals are. Oh dear, I feel out of place. Essentially what I don’t understand is how bringing the sum of offsets closer to zero reduces the square sum of the residuals. As in, what am I doing when I’m lowering the sum of the offsets?

        My final result makes clear sense to me. It is much improved after I iterated. However, explaining how this is an effective method is where I’m running into problems. The T-hat and B-hat are estimators, but how do they fit into this? Sorry if I am slow, but stats is not my forte.

        Finally,

        Why is this method not used by individuals in temperature reconstructions?

      • Another short (and likely dumb) question: what does (t,i) represent? t is time, but what is i?

      • Gavin’s Pussycat

        Eh, i is the counter of the stations, it runs from 1 to the number of stations.

        The residuals are the “misfits” of the model: even after applying the correct bias term, the individual y(t,i) – B(i) isn’t going to be precisely equal to the T-hat(t) from all stations together. Think of linear regression: fitting a straight line through a point cloud. The individual points will not lie on the fitted line (the intercept and slope of which are “hat” quantities), but slightly above or below. The distance is called residual. The least-squares solution is the line that minimizes the sum of the squares of these residuals. This method goes back to Gauss.

        This may help — or not ;-)

        http://mathworld.wolfram.com/LeastSquaresFitting.html
        http://en.wikipedia.org/wiki/Least_squares

        Explaining why your iteration arrives at the least-squares solution may be tricky.

        Why is this method not used by individuals in temperature reconstructions?

        Well, a related method was used by Hansen and Lebedeff in the good old days of magnetic tape reels… I suppose nowadays it’s easier just to write down the closed solution, as computers have lots of memory. That wouldn’t be hard here either: you would write y(t,i) linearly as the observation vector, the vector [T(t) B(i)] (consisting of the two sub-vectors) as the vector of unknowns, and a coefficient matrix of ones and zeros describing the dependence between the two. And then Matlab gives you the least-squares solution straight.
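That closed-form version looks like this with NumPy standing in for Matlab (made-up observations; the B(0) column is dropped, i.e. station 1's bias is pinned to zero, so the system has a unique solution):

```python
import numpy as np

# Made-up observations: station 0 biased +1.0 (years 0-4),
# station 1 biased -0.5 (years 2-6), around a common anomaly signal.
true_signal = np.array([0.0, 0.1, 0.3, 0.2, 0.4, 0.5, 0.6])
obs = []  # (year, station, measured value)
for t in range(5):
    obs.append((t, 0, true_signal[t] + 1.0))
for t in range(2, 7):
    obs.append((t, 1, true_signal[t] - 0.5))

# Observation equation y(t,i) = T(t) + B(i): one row of ones/zeros per
# measurement, unknown vector [T(0)..T(6), B(1)] with B(0) fixed at zero.
n_years = 7
A = np.zeros((len(obs), n_years + 1))
y = np.zeros(len(obs))
for row, (t, i, val) in enumerate(obs):
    A[row, t] = 1.0
    if i == 1:
        A[row, n_years] = 1.0  # bias of station 1 relative to station 0
    y[row] = val

solution, *_ = np.linalg.lstsq(A, y, rcond=None)
T_hat, B_hat = solution[:n_years], solution[n_years]
```

Pinning one bias to zero is what removes the additive-constant ambiguity: T-hat comes out shifted by station 0's bias, but its year-to-year differences, and the relative bias B-hat, are recovered exactly.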

      • Hello again Gavin,

        Is it safe to assume that T-hat(t) = T(t) and B-hat(i) = B(i)?
        So essentially, if I’m following, the square sum of the residuals is equal to the observed temperatures y(t,i) minus the temperature anomaly as a function of time T(t) minus the bias or offset B(i):
        e(t,i) = y(t,i) – T-hat(t) – B-hat(i)

        Therefore, if the offset vector is equal to 0, then the sum-of-squares residual is only equal to the observed temperature minus the temperature anomaly as a function of time T(t).

        I don’t know if what I’m assuming is right, but I hope so, since it’s getting tough to write my methods section with such a difficult explanation for this math portion.

        Any ideas?

  44. The original post was about the modern global warming era starting in 1975. It is interesting that where I live (Perth, Western Australia), 1975 was the start of a decrease in rainfall, which worsened again in the 1990s. Nowadays, if we in Perth have a month of “average” rainfall, everyone is surprised.

  45. Perhaps the discussion on the new statistical assessment of Mann’s reconstructions should have its own thread — the subject of this one is surely interesting enough.

    My eyeball breaks in the GISS graph would not have begun in 1920: there is surely a very definite flex in about 1910 which gives a nice clear run of thirty years before the downturn shown in 1940 (or, in fact, just a little later which is preferable). Then the 1975 to 2005/2008 line is more directly comparable.

    Interestingly enough, the Hadcrut3 graph (available on Deltoid, Jun 2009) gives the same sort of general trends, with the three phases from 1910 clearly shown. The WWII ‘blip’ mentioned in Tom Wigley’s ‘hacked’ email is suppressed in both — the Folland and Parker adjustment, presumably — which is a shame, as the adjustment, in my opinion, is not well thought through.

    Tamino was kind enough to calculate the different CO2 forcings for the two warming periods — IIRC it was .25 watts/m^2 for the first and 2 watts/m^2 for the second. At first sight this presents a problem, but if one accepts that aerosols are keeping the warming damped down then the extra 1.75 watts/m^2 can be assigned to that cause.

    Does anyone have a link to a paper which quantifies aerosol cooling from about 1940 onwards? It would be preferable to have actual measurements as the back-calculation from cooling rates would, of course, be a circular argument.

    TIA.

    JF

    • Julian Flood wrote:

      My eyeball breaks in the GISS graph would not have begun in 1920: there is surely a very definite flex in about 1910 which gives a nice clear run of thirty years before the downturn shown in 1940 (or, in fact, just a little later which is preferable). Then the 1975 to 2005/2008 line is more directly comparable.

      I have very little knowledge of statistics, but as an exercise for myself if I may…

      Looking at the lowess smooth I can definitely see you arguing for a break around 1910. Around 1910 there is a local maximum in the absolute value of the second derivative of the lowess smooth. This is where, locally, the rate of change in temperature is itself changing most rapidly.

      But the use of a lowess smooth itself involves choices. It is a locally weighted polynomial regression of degree 1, whereas a weighted moving average would be of degree 0, and a weighted polynomial regression of degree 2 would be a loess. Then there is the choice of smoothing parameter. (1, 2) Depending upon the choices you make in your smoothing algorithm you may get local maxima at different points.

      But then again, Tamino has already pointed out that there are some choices that are involved. For example, between looking at monthly or annual data. Or treating the noise as either white (which works best at the annual level) or pink, acknowledging the autocorrelation using an ARMA model, which will identify a somewhat larger range of uncertainty but would also be more calculation-intensive.

      So at this point the question becomes how robust are the results? To what extent are they dependent upon the methodology one uses – as to the number of segments and the location of the breaking points? If the results are largely the same using different methodologies then the results are more robust.
      *
      Now in the case of 1975 Tamino has pointed out that when treating the noise as white noise one gets different breaking points depending upon the choice of calculating things at either the monthly or annual level. 1975 in the case of the monthly, 1973 in the case of the annual. And his lowess smooth might suggest 1970 — based upon its second derivative. But the difference between 1970 and 1975 is fairly small and the results fairly robust.

      And using the iterative (rinse and repeat until there are no more suds) methodology that Tamino outlined, if one were to begin with an eye-ball break around 1970 based upon the second derivative of the lowess smooth, the methodology would likely settle down at either 1973 or 1975, depending upon whether one were calculating things at the annual or monthly level. This makes sense — given distance between the range of uncertainties associated with the slopes of the segments (i.e., 1940-1975 and 1975-2010).

      However, in the case of 1880-1920 and 1920-1940, even though the range of uncertainties are still fairly distant, they are decidedly closer. So it wouldn’t be too surprising if the results were somewhat less robust — with say a breaking point as early as 1910 according to some methodologies (e.g., simply on the basis of the second derivative of the lowess smooth) but as late as 1920 according to others (e.g., on the basis of Tamino’s iterative methodology applied at the monthly level).
      *
      Then again, if you break at 1910, then there is the question of whether you break again at 1900. This would mean a range going from 1880-1900 and 1900-1910. And the shorter the segments, the more likely the ranges of uncertainty are to overlap; if they overlap, then you are no longer justified in treating two neighboring segments as separate. And if they are treated as the same segment, this will likely shift the break point at 1910 towards 1920, if one applies the iterative methodology.

      However, regardless of whether one has a break point at 1910 or 1920, this will likely have little effect upon the break point around 1940; and if 1940 remains roughly the same, there will be very little change, if any, in the break point of 1973 at the annual level or 1975 at the monthly level with which you mark the beginning of the modern age of global warming. And it is this latter break point, and how it is roughly determined by the data itself, that was the primary focus of the essay.

      In more difficult cases, such as that between 1910 and 1920, a more rigorous, calculation-intensive methodology might be required to choose between the alternatives. But in the case of 1975 vs. 1970 or 1980, the methodology that Tamino employs is quite likely more than sufficient. And more to the point, five years earlier or later wouldn't make much of a difference in the slope of the later segment either way.

  46. It looks like Tamino’s post contains the answer to where the M&S paper goes astray.

  47. “Well, duh! Of course it didn’t. 1975 is merely when the effect of anthropogenic greenhouse additions outstripped natural climate forcings and anthropogenic particulate pollution.”

    Priceless.

  48. There’s global warming and there’s anthropogenic global warming and it is not a priori obvious to me if the two started at the same time.

    Global warming (warming through whatever cause) is determined purely by temperatures going up. Which period to call ‘The’ global warming seems to me a bit arbitrary. For example, why not take 1920 as a starting point? The combined data of the three subsequent periods shows warming after all. I would also guess naively that any starting date should have an uncertainty of approximately the length of the timespan required to establish a definite trend around that given date. That would put the onset of the most recent global warming period roughly in the 1970s.

    To determine the onset of anthropogenic global warming, one would need to compare models, right? I am happy to see that Roger Romney-Hughes provides a useful link (thanks for that), where it reads

    This clearly shows that prior to about 1973, the global warming is fully explained by climate models using only natural forcings (i.e. no human CO2). The models need input of CO2 only after about the mid-1970s – prior to 1970 all warming was natural, according to the IPCC. (There is no empirical evidence relating CO2 to the post-1970s warming as a causative factor. The only evidence is the fact that the computer models require CO2 to produce warming.)

    The quoted statement does surprise me, however. I expected a larger difference between the GW and AGW starting dates, with the AGW starting date significantly predating the GW one. On the one hand, if you define AGW to start when the anthropogenic emission makes its contribution felt, rather than when it is emitted, the two starting dates can indeed be similar.

    On the other hand, AGW is not limited to the most recent episode of warming per se, as was arbitrarily decided for GW when only taking the 1975+ period into account, and there is no reason not to find AGW starting earlier. Actually, I would even have expected AGW to start at least in the 1930s, since I understood that the post-war cooling period was the result of different anthropogenic contributions canceling each other. Is it easy to understand why a model plus anthropogenic forcings doesn't differ significantly, even before 1975, from a model with all anthropogenic forcings set to zero? Or did I misunderstand the quoted text, and is that restricted to CO2 only?

    • hveerten wrote:

      To determine the onset of anthropogenic global warming, one would need to compare models, right? I am happy to see that Roger Romney-Hughes provides a useful link (thanks for that), where it reads

      “This clearly shows that prior to about 1973, the global warming is fully explained by climate models using only natural forcings (i.e. no human CO2). The models need input of CO2 only after about the mid-1970s – prior to 1970 all warming was natural, according to the IPCC…”

      You quote, “prior to 1970 all warming was natural, according to the IPCC…”

      BrrrrT!!

      From AR4 WG1 Chapter 2…
      http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter2.pdf

      Long-lived greenhouse gases include carbon dioxide, methane, nitrous oxide, halocarbons and sulfur hexafluoride (pg 131, pdf 3). Using a base year of 1850, their variation has resulted in a net positive radiative forcing ever since 1850 (pg. 208, pdf 80). While the forcing due to direct aerosols was more than enough to cancel out their effects as recently as 1975, they were nevertheless a component.

      And warming occurs as the result of net radiative forcing, including solar, greenhouse gases and aerosols. The net negative forcing of tropospheric aerosols outpaced the positive forcing due to long-lived greenhouse gases starting at about 1940, leading to a net negative radiative forcing, but by 1975 net radiative forcing was positive and climbing at a roughly constant rate.

      If this source, in which you put so much stock, states that “prior to 1970 all warming was natural, according to the IPCC…” when quite clearly the IPCC says nothing of the sort (according to their methodology what always matters is net forcing, not anthropogenic or natural forcing, and their graph shows that the forcing due to our greenhouse gas emissions has been a factor going back well into the 1800s), then how much more does the author get wrong, and why?
      *
      To consider just one example, in the section “Global Average Temperature Regimes” the author states:

      All of the warming in the last 70 years occurred in two extremely short periods – and this is the time frame covering the entire period officially designated as having warming due to CO2.

      The author uses CRU global land and ocean, monthly. However, as the periods that the author is considering are whole years I will work with annual temperatures. Furthermore, I will also calculate the warming for a period as the rate of warming times the length of the period where the rate of warming is the slope of the linear trendline.
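
For concreteness, the convention of "warming = trend slope times period length" can be sketched like this (the anomaly numbers below are invented for illustration, not the actual CRU values):

```python
import numpy as np

def warming_over(years, temps, start, end):
    """'Warming' for a period: the linear-trend slope (deg C/yr)
    times the length of the period, as described above."""
    m = (years >= start) & (years <= end)
    slope = np.polyfit(years[m], temps[m], 1)[0]
    return slope * (end - start)

# Hypothetical annual anomalies for 1996-1999 (illustrative only)
years = np.arange(1996, 2000)
temps = np.array([0.10, 0.37, 0.70, 0.40])

print(round(warming_over(years, temps, 1997, 1998), 3))  # one-year spike
print(round(warming_over(years, temps, 1997, 1999), 3))  # one more year: far less
```

Even on made-up numbers, adding a single year to the window collapses the apparent "warming", which is the sensitivity being exploited in the cherry-picked periods discussed next.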
      *
      According to the text that the author superimposed upon the Met Office chart, all warming of the past 70 years occurred between 1976-78 and again 1997-8. This is an extreme form of cherry picking.

      1977 spiked upwards from 1976 by 0.414 °C. 1978 fell by 0.112 °C from 1977. If the author had wanted to, he could have said that 0.414 °C of the warming had taken place between 1976-7.

      But this sort of cherry picking would have been obvious to just about anyone. So instead he picks 1976-1978 as the first of the two periods in which all the warming took place — knocking out more than a quarter of all that “warming” in one year with the “rapid cooling” that took place in the next.
      *
      But now notice: more warming takes place from 1978-1997 (0.338 °C — where rate of warming is the slope of the linear trendline and warming is rate times years) than from 1976-1978 (0.302 °C). Likewise the warming from 1978-1997 exceeds the one year of “warming” that took place from 1997-1998 (0.326 °C).

      Furthermore, if he had included the “rapid cooling” from 1998-1999 (-0.296 °C), then the trendline for the period from 1997-1999 would have a slope of 0.015 °C/yr. This would mean a modest warming of 0.030 °C for the “entire” period from 1997-1999. Less than a tenth of the warming from 1978-1997 — simply by including one additional year.

      In contrast, the period from 1996-1997 shows a “warming” of 0.265 °C and the period from 1996-1998 has a slope of 0.2955 °C/yr, meaning a “warming” of 0.591 °C. However, from 1998-1999 we have “rapid cooling” again of -0.296 °C, and thus -0.296 °C/yr!

      With data that has a great deal of variability, by cherry picking an especially small range I can show a wide variety of rates of change, in this case positive or negative. But if you actually want to know how rapidly the world is warming you don’t engage in cherry picking your range of years but instead use rules to avoid cherry picking, both by picking an appropriate length of time over which to calculate a trend, and avoiding the cherry picking of starting and ending points.
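
A quick way to see why short windows invite cherry picking: generate a noisy series with one fixed underlying trend, then compare all the 4-year trends against the full-record trend. This is a sketch with synthetic numbers, not anything from the actual record:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1975, 2011, dtype=float)
# Hypothetical series: one steady 0.018 C/yr trend plus white noise
temps = 0.018 * (years - 1975) + rng.normal(0, 0.1, years.size)

def trend(y0, y1):
    """Linear-trend slope over the inclusive window [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], temps[m], 1)[0]

short = [trend(y, y + 3) for y in range(1975, 2007)]  # every 4-yr window
print(round(min(short), 3), round(max(short), 3))  # short trends scatter widely
print(round(trend(1975, 2010), 3))  # the full record recovers ~0.018
```

Even though every window is drawn from the same underlying trend, the 4-year slopes scatter all over the place while the full record pins the trend down.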

      Here is an approach for picking an appropriate length of time over which to calculate a trend:

      Results on deciding trends
      Jan 5, 2009
      http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html

      … and Tamino gives an appropriate approach for picking starting and ending points.

      Why did the author at appinsys so thoroughly misunderstand the IPCC’s position regarding anthropogenic radiative forcing? Why did he engage in such cherry picking when it came to determining when “all the warming” took place? I will leave these questions as exercises for the reader.
      *
      One final note: Roger Romney-Hughes is the Executive Director of the Friends of Gin and Tonic. When he left the link behind, I think he may have just wanted to see who would take the bait.

      • You may very well think that, but I couldn’t possibly comment.

        Actually, it was left for Mr Goddard. I suspect he’s still working his way through it and wondering why he didn’t think of it himself.

    • hveerten, I am sorry, but yes, the paper that Romney-Hughes linked to was written by a denialist. I see from your blog that you do not belong in that category — and I probably should have read the rest of your comment not to mention the blog that you link to via your handle (your blog?) before I assumed that you agreed with the author of that paper. If I had taken one twentieth of the time to get the fuller context that I did writing my response… Well, in any case my apologies.

      • I understood that Friends of Gin and Tonic was a Poe, and therefore did not trust the link. Perhaps this is too subtle for me. Anyway, I scanned the text at the link briefly, and there are a number of statements there I do not agree with, in addition to the quoted text, which I politely called ‘surprising’.

        More to the point, Hank Roberts (assuming he is not an insidious triple agent) provides a link here where the influence of man on climate is argued to date back thousands of years. Interesting!

        But not precisely what I meant. I recall having read somewhere once that climate models start ‘predicting’ temperatures from the beginning of the 20th century or so, something which is then used as a consistency check. (I believe this was on the site of an erudite weather forecaster, anyone here know professor Watts? …ok just kidding, I don’t remember the source). What I meant was a more narrowly defined starting point for AGW after t = 0 for the model runs, knowing that CO2 is only a part of the whole story and that the rise and fall in the early 20th century may also be closely linked to anthropogenic forcings. In other words, isn’t it true that you need anthropogenic forcings right from the start in order to get a realistic 20th century temperature out of the simulation and that the net anthropogenic forcing has always been both positive and the strongest single forcing? That would put the start of AGW at the beginning of the twentieth century, in my opinion.

        (Perhaps I should have avoided the quote, for I did not need it to make my point – it was for this reason that I was sloppy with checking the source. And yes, the handle links to my blog. The political manifesto I discuss there in my latest post is unfortunately not a Poe but a Dutch political reality)

  49. First: it’s not boreholes, it’s ice cores that show that during ice age glaciations/deglaciations, CO2 increase follows temperature increase. Second: this in no way contradicts global warming due to CO2, because both factors (CO2 and warming) are both cause and effect. Or is that too complicated for you?

    [Editor] Is this response a typo? I think you wrote that CO2 and warming are both cause and effect. Maybe in quantum physics you can have it both ways but this is not quantum physics. The ice cores are probably the cleanest proxy, and they clearly show temperature first, then CO2 rising.
    I would appreciate hearing why the lessons from the ice core data are not valid today but are taken as valid in all of history (a real answer – hold the crab pls).

    [Response: We'll take a chance that *maybe* you're not just a troll.

    Increasing CO2 inhibits the escape of infrared radiation to space, so it warms the planet. CO2 is cause, temperature is effect.

    Increasing temperature decreases the solubility of CO2 in seawater and releases CO2 from melting permafrost and glacial ice, so it increases atmospheric CO2. Temperature is cause, CO2 is effect.

    Both CO2 and temperature are both cause and both effect. This results in a "feedback loop," due to which global temperature changes by about 5 or 6 deg.C during glacial cycles -- way more than it would change due to orbital forcing alone. But, thank goodness, the feedback is not a runaway feedback.

    If you can comprehend that, you're well on the way to overcoming the brainwashing that denialists feed you. They count on your naivete to believe such is impossible, because it's a simpleton's argument. You may also begin to realize that whoever fed you that "CO2 lags temperature in ice ages so CO2 can't cause warming" story is either too stupid to realize the mistake they've made, or knows the truth but deliberately lied to you. In either case, you should be mad as hell at those fools.

    If you can't comprehend that, perhaps you'll understand that we're not the least bit interested in arguing with you about it.]

    • With the regular transitions to interglacials that are driven by changes in the Earth’s orbit and the consequent tilt of the Earth’s axis, temperature rises first. 251 million years ago, carbon dioxide rose first, due to a flood basalt supervolcano that erupted in Siberia and apparently melted shallow-water methane hydrates along the continental shelf. 55 million years ago, same thing, except that supervolcano was in India. But the volcano in Siberia caused the Permian-Triassic extinction, also known as The Great Dying, which converted nearly all of the biosphere into atmospheric carbon dioxide, whereas the event 55 million years ago resulted in the much smaller Paleocene-Eocene Thermal Maximum.

      Anyway, to see more on the absorption of thermal radiation by CO2 (including absorption bands as measured from space, satellite imaging and a simple lab experiment) please see my earlier comment above.

    • Is this response a typo? I think you wrote that CO2 and warming are both cause and effect. Maybe in quantum physics you can have it both ways but this is not quantum physics.
      ______________________________________________

      Well that’s how it is thought to be: temperature varies because of, e.g., Milankovitch cycles. A temperature increase can release CO2 (solubility of ocean water, ice melting) — hence the lag T-CO2. The release of CO2 then amplifies the temperature increase: it’s a feedback effect. (Observed temperature variations cannot be explained with Milankovitch cycles alone.)

      So yes, CO2 is both a consequence AND a cause. No need to be in the realm of quantum physics to observe such an effect — or maybe you believe that Larsen effect has a quantum nature?

    • If a cause cannot also be an effect, have you ever thought about cause and effect with matches?

      Strike a match, the resulting ignition (cause) leads to heat (effect).

      Heating a match sufficiently (cause) results in ignition (effect), which results in more heat…

      • In explaining cause and effect to me there seems to be a reach for some unexplainable pimordial rise in global temperature that sets off a death spiral of increased CO2 then increased temperature. I find that mechanism hard to believe for we would be toast by now. And why could not that primordial event be the cause today?
        If you saw the ice core data superimposed on the temperature data it is more likely that temperature always precedes CO2.

        [Response: There's no "unexplainable pimordial [sic] rise in global temperature.” The “primordial” cause of ice age cycles is well known: changes in the distribution of incoming sunlight due to changes in the shape and orientation of earth’s orbit and of the tilt of earth’s axis. These are called Milankovitch cycles, they’ve been well-understood for nearly a century now.

        No, this “primordial event” cannot be the cause of present warming. We have very precise calculations of earth’s orbit and tilt, and those factors at present would be reducing earth’s temperature. However, not only are they very slow (even “fast” Milankovitch cycles take 20,000 years), at present they’re overwhelmed by climate change induced by CO2. Make no mistake, if not for human-caused greenhouse gas increases, we would at present be headed toward the next ice age cycle, albeit very slowly (in the next few tens of thousands of years).

        As for your suggestion “If you saw the ice core data superimposed on the temperature data,” I have. I’ve also analyzed that data in detail, including the period analysis which establishes the reality of exactly the Milankovitch cycles which are the trigger for ice age glaciation/deglaciation. I’ve also analyzed in detail earth’s orbital and axial parameters, so I know that those are not the cause of present warming.

        As for your talk about a “death spiral of increased CO2 then increased temperature… we would be toast by now,” there’s a big difference between a convergent feedback loop and a runaway feedback loop. Only runaway feedback leads to “toast by now.” Convergent feedback amplifies the effect, but doesn’t send it into a “death spiral.”

        It’s time for you to ask yourself how honest you’re willing to be about this. Are you mature enough to simply face the fact that you don’t know what you’re talking about? Clearly you didn’t have a clue as to the real cause of ice age temperature changes. Clearly you didn’t have a clue that temperature change induces atmospheric CO2 change. Clearly you didn’t know the difference between ordinary feedback and runaway feedback. Yet in spite of your astounding ignorance of climate science, for some bizarre reason you feel entitled to pontificate on the subject.

        If you have maturity, you’ll face the fact that your present opinions are bullshit and you should stop telling others what reality is. If you’re only motivated by ignorance, by all means stick to your guns.]

      • Lotharsson reinforces my point.

        No, Lotharsson does not. Go back and read it again – carefully.

        Why have we not crisped if this feedback is operating for a million years?

        The answer is in my earlier post – or if you don’t care for that, crack open any “automatic control” or “control theory” textbook – several will be found in the library of every university that has an engineering faculty. You no doubt own and use on a frequent basis any number of products that rely on this body of knowledge.

    • So the CO2 was re-absorbed naturally as a result of less sunlight, then was released due to more intense sunlight. But now it’s rising due to burning fossil fuel? Is this your position? Sounds contrived.

      [Response: You've got a lot to learn before you're even prepared to ask intelligent questions, let alone offer an intelligent critique.]

      • But now it’s rising due to burning fossil fuel? Is this your position?

        So you think that burning fossil fuels doesn’t release CO2?

        How about CO?

        If you distrust basic chemistry so much, you can run the following experiment:

        Acquire a van. Close all windows. Heat said van using a charcoal burner, ignoring all those basic science warnings on the charcoal bag you bought.

        It will be too late for you to admit your mistake, most likely, because you’ll be dead.

        Look, science works. Love it or leave it (or this earth you live on).

      • “…that sets off a death spiral of increased CO2 then increased temperature…”

        That’s not what any climate scientist believes – and for good reason.

        Imagine you have a system at equilibrium, and then you change something which changes the key variable you are measuring (e.g. if you’re talking about the earth’s climate, you add some greenhouse gases which – sans feedback – change the global average temperature). Call the magnitude of this change “delta”.

        Now imagine you have a feedback response gain of “g” in the system. What happens next?

        The initial temperature change is +delta, so the feedback response adds delta*g more.

        …but that extra temperature change due to the feedback is itself subject to feedback, so you get another (delta * g) * g change.

        …and that additional change is subject to feedback, so you get a further (delta * g * g) * g change.

        …and so on, which leads to a total change at equilibrium which is the sum of a geometric series delta * SumOf(g^k) for all k between 0 and infinity. If you remember high school maths, this sum is finite for g in the range (-1..1) (i.e. non-inclusive) – and in that case the sum equates to delta / (1-g).

        So you only get a runaway situation if g is >= 1 (or <= -1 which isn't relevant here). And given that the sum = delta/(1-g) you can get any positive amplification of delta you like at equilibrium with a gain less than one. For example when g = 2/3, delta / (1-g) = 3*delta.
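
The geometric-series argument above is easy to check numerically; a short sketch (the values of delta and g are arbitrary):

```python
# Numerical check of the feedback argument above:
# total change = delta * sum(g^k) = delta / (1 - g) when |g| < 1.
delta, g = 1.0, 2.0 / 3.0

total, term = 0.0, delta
for _ in range(200):   # each round of feedback contributes term, then shrinks
    total += term
    term *= g

closed_form = delta / (1 - g)
print(round(total, 6), round(closed_form, 6))  # both ~3.0: amplified, not runaway
```

With g = 2/3 the feedback triples the initial change, yet the sum converges; only g >= 1 would "run away".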

      • Why do you expect things to be simple? Isn’t that the opposite of the denier mantra: “Climate is too complicated for us to understand, waaa waaa waa”?

        And when scientists make excellent progress in understanding the nuances of these complex interrelationships, suddenly it is “contrived”?

        What is even more amusing about your reply is that the sun is the number one denier answer for everything. We already know solar is a minor forcing. What is hard to understand about the idea that a long term change in forcing can change climate over thousands of years? If the sun can’t do this, then it must have absolutely nothing to do with our current warming. Which is it to be?

      • Lotharsson reinforces my point. Why have we not crisped if this feedback is operating for a million years? The loop he describes is fed by the sun continually. I expect the loop is interrupted by a solar minimum. Hence, we are back to the sun (or lack of it) as puppetmaster.

        Dhogaza: my “it’s” referred to temperature not CO2. I trust chemistry. Sorry for the unclear reference.

        [Response: You've already been informed of the difference between a convergent feedback loop and a runaway feedback loop. Yet you persist in this "crisped" argument.

        That leaves two possibilities. #1) You're either too stupid, too lazy, or both to get it. #2) You're nothing but a troll.]

      • “Contrived?” We can account for the fossil fuel burned, which makes it relatively straightforward to know how much CO2 goes into the atmosphere. This has been cross-checked with isotopic analysis of the CO2, which shows that yes, it is of fossil origin. So we know that it’s human activity currently changing the atmospheric composition.

        On the other hand, we also know that natural CO2 fluxes into and out of the atmosphere are in part temperature-dependent. That hasn’t changed since the days before humans existed.

        With all due respect, it strikes me as “contrived” to suggest that one of these processes precludes the other.

      • Ralphie, if you think Lotharsson reinforces your point, you need to read his comment again.

        Working carefully through the numbers.

        What he does is to show just how it is that feedback doesn’t necessarily equal “runaway” feedback (i.e., us being “crisped”).

        Try it again.

      • The only way to converge such a feedback loop is to develop a sink that increases faster than increasing temperature (or reduce the solar energy). I guess I am asking if such a dynamic sink exists. If no such sink exists then the only alternative to stasis or divergence is reduced solar input. What is the sink?

        [Response: You are just plain wrong, you simply don't know what you're talking about. We tried, but you appear to be too stubborn to educate.]

      • arch stanton

        Ralphie- You are mistaken.

        You seem to be hung up on a preconceived notion concerning “feedback”. You seem to confuse its use in physics with the use of the term “loop” as in computer programming.

        Forget about “loop”; the use of the term is not appropriate in our discussion here. Even the climate of Venus has reached equilibrium. Feedback does not continue infinitely as “loops” do.

        In physics feedback is a finite entity. Lotharsson demonstrated this.

        Let’s say for hypothetical example that a doubling of CO2 causes an increase of globally averaged surface temperature of 1.5 degrees C, and that all the feedbacks driven by this change (both long term and short term) equal another 1.5C. We get a total change of 3C. The forcings stop there. There is no “loop”. All the feedbacks and their influences are included. A new equilibrium is reached (provided no new forcings are applied). Feedbacks are not new forcings.

        Perhaps an explanation of Zeno’s paradox will help you overcome your misperception.
        See in particular: “Achilles and the tortoise”.
        http://en.wikipedia.org/wiki/Zeno's_paradoxes

        HTH

      • arch stanton

        Addendum – If you took a basic course in calculus you would understand this effect.

      • The only way to converge such a feedback loop is to develop a sink that increases faster than increasing temperature (or reduce the solar energy)

        Or perhaps additional forcing due to increased CO2 diminishes as the concentration of CO2 increases … like maybe it’s like linear in the doubling of CO2 …
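
For anyone puzzled by "linear in the doubling": that phrasing means the forcing grows logarithmically with concentration, so each doubling adds the same increment. A sketch using the common simplified expression F = 5.35 * ln(C/C0) W/m^2 (an approximation from the literature, not a number from this thread):

```python
import math

def forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (logarithmic in C)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

d1 = forcing(560) - forcing(280)    # first doubling
d2 = forcing(1120) - forcing(560)   # second doubling
print(round(d1, 2), round(d2, 2))   # the same increment per doubling
```

Each extra ppm thus matters less than the one before it, which is the diminishing effect dhogaza is alluding to.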

      • Dhogaza: if CO2 has a diminishing effect on temperature the loop would not reverse. The temperature would continue to increase. What is needed is a sink of CO2 to explain the ice core data that shows CO2 dropping in cycles over time. In the absence of a major sink only a solar energy reduction can explain the lack of a runaway temperature.

        I merely wanted to know what mechanism caused the CO2 to decrease after a run-up as indicated by the ice cores. I thought I would get a response to that question. I’m getting the feeling no one here knows.

        (I have a graduate degree in engineering – math skills way way beyond basic calculus, top schools too, no troll)

        [Response: You came here pushing opinions that are not just mistaken, they betray appalling ignorance. You were told repeatedly how wrong you are but you haven't yet admitted at all how mistaken you've been or shown any sign of changing your tune. Instead you're desperately trying to contrive a justification for your beliefs. You continue to insist on a wrong (and wrong-headed) misunderstanding of feedback in climate, yet you claim you're educated and that you just want answers. I don't believe you.

        We actually tried to give you useful information but you just threw it back in our faces. So if you really want us to spend one moment more trying to educate you, first admit that you've been wrong and that you don't have a clue what the hell you're talking about.]

      • arch stanton

        Ralphie – You seem to conflate two different arguments (8/22 7:23) that should not be confused.

        1) The reason feedback is self limiting is illustrated by basic calculus. You said you took it, David B. Benson illustrated it.

        2) “What is needed is a sink of CO2 to explain the ice core data that shows CO2 dropping in cycles over time. In the absence of a major sink only a solar energy reduction can explain the lack of a runaway temperature.” – Here you hint at the other issue “What causes the drop (or rise for that matter) of CO2 in the ice cores?” This is a whole different question and is not directly related to your previous (runaway loop) question which has been addressed many times.

        Yes there are CO2 sinks, and they are driven by the ice ages, which in turn are driven by the Milankovitch cycles. We already demonstrated why warming would stop on its own. Atmospheric CO2 levels begin to drop as the ocean cools and becomes a bigger sink.

        John Cook explained this pretty well:
        http://www.skepticalscience.com/co2-lags-temperature.htm

      • Ralphie:

        (I have a graduate degree in engineering – math skills way way beyond basic calculus, top schools too, no troll)

        That’s why I honored your ginormous math abilities by saying:

        “like maybe it’s [CO2 forcing] linear in the doubling of CO2 …”

        Thinking that you could work out that this leads to a converging series on your own.

        David Benson laid it out explicitly for you.

      • I have a graduate degree in engineering – math skills way way beyond basic calculus, top schools too, no troll

        Then you have no excuses.

        The rebuttal I posted of your claims is high-school level, and in almost every top school’s engineering curriculum there would likely be the very basics of control theory which would have steered you away from the fallacious claims you have been making.

        Might be time to go back and revise what you learned – and once you’ve refreshed your memory and skills – then try to apply it again.

    • David B. Benson

      RalphieGM — First work out the sum

      1 + 1/2 + 1/4 + 1/8 + …

      and then study “The Discovery of Global Warming” by Spencer Weart:
      http://www.aip.org/history/climate/index.html
      before coming back.

      Thank you.

    • Again, work the numbers.

      One who can’t or won’t do that doesn’t merit a place at the discussion.

      It isn’t that hard–and I should know: I’m a musician whose last formal course in math was 11th grade algebra.

      (And I’ll leave unsaid just how long ago that was.)

  50. Given the effect on the albedo of clearing the NA west and Australia in the 19th and early 20th century Eli would be cautious about any claim that only natural causes are involved before 1970. Moreover, the effect of our generation of aerosols in the 1950-70 period balanced the additional greenhouse gases we added to the atmosphere.

  51. hveerten, you might look at Ruddiman:
    http://press.princeton.edu/titles/8014.html

    He suggests that the first sign of anthropogenic involvement is some thousands of years ago, when due to agriculture, the expected very long slow decline from the previous warm peak into the next ice age instead leveled off.

    If he’s right, that would be the long period of noisy ‘no change’ variation from then til 1975 or so. It’s not as rapid a rate of change, though, so I don’t know if Tamino’s methods would detect breakpoints.

  52. hveerten asks:”Is it easy to understand why a model plus anthropogenic forcings doesn’t differ significantly even before 1975 from a model with all anthropogenic forcings set to zero? Or did I misunderstand the quoted text and is that restricted to CO2 only?”

    Given that the link is to a site parodying global warming deniers, I would say yes, you did misunderstand it.

    Just look at the GISS forcings for the modern era; the file quite explicitly breaks down forcings by category. CO2 is not the only anthropogenic component. Aerosols dimmed the earth increasingly until the mid-70s; that is one reason for the break point.

    • appinsys is the genuine article: a denialist website owned by Alan Cheetham, a programmer of some sort who appears on one of Morano’s lists as a scientist skeptical of anthropogenic global warming.

      The link to it was, however, dropped off by Roger Romney-Hughes, the Executive Director of the Friends of Gin and Tonic…

      Friends of Gin and Tonic
      #1 AGW Denial Site in Canada & Australia
      http://friendsofginandtonic.org

      … which is a parody. I assume the link to Cheetham’s site was meant as a joke.

  53. Re: Timothy Chase | August 16, 2010 at 8:12 pm

    I find, poking about, that Spencer has done something similar but using Hadcrut3. Using the extra data available back to 1850 he comes up with 1850 to 1910 as steady, then the first rise, and he agrees that the period of modern warming begins in 1975.

    JF

    • Makes sense to me. With the longer record stretching back to 1850, the range of uncertainty should be smaller.

      However, I would be interested in seeing his approach. Of course it would also be neat to actually try the calculations with the data that Tamino used, to see whether the earlier breaks iteratively play out as I described. The one part that is missing is the calculation of the range of uncertainty associated with the slopes. But this is actually something that Tamino showed how to do a while back:

      First let’s look at annual average temperature. I used the trend from 1975 to the present to estimate the trend, and used the standard deviation from the residuals (after subtracting the trend from the data) to estimate the noise level. The trend is upward at 0.018173 deg.C/yr, and the standard deviation of the residuals is 0.0959 deg.C.

      You Bet!, January 31, 2008: http://tamino.wordpress.com/2008/01/31/you-bet

  54. CORRECTION to the above

    That wouldn’t be the uncertainty associated with the slope per se, but rather the band at two standard deviations (the 95% confidence interval) around the least-squares linear trendline for the temperature anomaly in a given month or year. Which is what we would want.
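    The slope uncertainty itself follows from the standard least-squares formula. A minimal sketch, using synthetic data built from the trend (0.018 deg C/yr) and residual noise (0.0959 deg C) quoted above; note that real anomaly residuals are autocorrelated, which widens the true interval:

```python
import numpy as np

def trend_with_uncertainty(t, y):
    # OLS slope, its standard error, and the residual standard
    # deviation; +/- 2 standard errors gives an approximate 95% CI.
    n = len(t)
    coef = np.polyfit(t, y, 1)
    resid = y - np.polyval(coef, t)
    s2 = np.sum(resid ** 2) / (n - 2)  # residual variance
    se_slope = float(np.sqrt(s2 / np.sum((t - t.mean()) ** 2)))
    return float(coef[0]), se_slope, float(np.std(resid))

# Synthetic anomaly series mimicking the quoted trend and noise level
rng = np.random.default_rng(3)
t = np.arange(1975, 2010, dtype=float)
y = 0.018 * (t - 1975) + rng.normal(0, 0.0959, t.size)
slope, se, resid_sd = trend_with_uncertainty(t, y)
print(slope, se, resid_sd)
```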

  55. Re: Roger Romney-Hughes | August 17, 2010 at 3:20 am |

    “You may very well think that, but I couldn’t possibly comment.”

    If memory serves, the one who was wont to utter that phrase met with a sticky end indeed. Mind you, he had a reasonable taste in single malts.

    Adrian

  56. Tamino,

    What’s all the fuss with this paper http://www.e-publications.org/ims/submission/index.php/AOAS/user/submissionFile/6695?confirm=63ebfddf that many people are making noise about? Something smells fishy about it, though…

    • You may want to check out the following:

      In 2006, it was reported that the total amount of ice was shrinking at an alarming rate:

      “THE first survey of gravity changes caused by the Antarctic ice sheet has confirmed that it is shrinking at an alarming rate.”

      I turns out that the scientists looked at 2002-2005 data and yes, over that time that the total sea ice (mass; as opposed to area) in the Antarctic was indeed shrinking. Woops! If they had actually bothered to consider past data they would have seen the reality for what it is: changes that are well within the limits of natural variation.

      Sea Ice Continued
      “Adi” Abraham Wyner (Sunday, May 04, 2008), “Politically Incorrect Statistics”
      http://picstat.blogspot.com/2008/05/sea-ice-continued.html

      What Adi is quoting refers to the Antarctic ice sheet. That isn’t sea ice. That is ice on land. But when Adi states, “I turns out that the scientists looked at 2002-2005 data and yes, over that time that the total sea ice (mass; as opposed to area) in the Antarctic was indeed shrinking” he obviously believes that the scientists are referring to sea ice.

      Then later in the same paragraph he states, “Three more years later and we are right back on the general increasing trend” and mocks the “‘climate change’ guru” whom he quotes as stating, “A more recent study based on satellite measurements of gravity over the entire continent suggests that while the ice sheets in the interior of Antarctica are growing thicker, even more ice is being lost from the peripheries.” Then, apparently comparing this statement (which pertains to the accelerating loss of mass of ice on the continent) to what he reads pertaining to sea ice, he concludes, “Alas; the study was not current by the time this was written. It was already contradicted by current data and now with the 2008 numbers it seems that the point is wholly wrong.”

      In terms of their mathematical methods these guys are presumably rather impressive — or so a certain pussycat has stated at one point. But here and elsewhere it is obvious that these guys are way out of their depth when it comes to Climate Science 101. Anyway, keep a close eye on Deep Climate, follow the Rabett (and you might want to pay close attention to the cat), a fellow by the name of Tim and a certain lass as well.

    • This would appear to be spot on.

      Please see:

      Finally, we can then use the fitted model to obtain predictions for each of the thirty years in the holdout block and then calculate the RMSE on this block.

      pg. 13

      …and:

      We performed two variations of this procedure. In the first variation, we continued to holdout thirty years; however, we calculated the RMSE for only the middle twenty years of the thirty year holdout block, leaving out the first five and last five years of each block, in order to reduce the correlation between holdout blocks. In the second variation, we repeated this procedure using sixty year holdout blocks. In both cases, all qualitative conclusions remained the same.
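      The quoted holdout procedure can be sketched as follows; a plain linear trend stands in for the paper’s actual fitted model, and the data here are synthetic:

```python
import numpy as np

def blocked_holdout_rmse(t, y, block_len=30, trim=5):
    # For each holdout block: fit on the remaining data, predict the
    # block, and score RMSE on its middle (trimmed) years only.
    rmses = []
    for start in range(0, len(t) - block_len + 1, block_len):
        hold = np.zeros(len(t), dtype=bool)
        hold[start:start + block_len] = True
        coef = np.polyfit(t[~hold], y[~hold], 1)  # train without the block
        mid = slice(start + trim, start + block_len - trim)
        resid = y[mid] - np.polyval(coef, t[mid])
        rmses.append(float(np.sqrt(np.mean(resid ** 2))))
    return rmses

rng = np.random.default_rng(1)
t = np.arange(150.0)
y = 0.01 * t + rng.normal(0, 0.1, t.size)  # linear trend plus noise
scores = blocked_holdout_rmse(t, y)
print(scores)
```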

  57. Mika Murtojärvi

    I just performed a similar analysis using a different method for determining the turning points. After eyeballing rough candidates for the times of the turning points and the values of the piecewise linear function at these points, I computed the sum of squared errors of the model. Then I used the Excel Solver to minimize this sum. The tool was allowed to modify the times of the turning points and the values of the piecewise linear function at these times. The dataset was GISS Global annual Land-Ocean (years 1880-2009).

    The results were quite similar to those that have been reported here. When the times of the turning points were forced to be integers, the latest turning point was in the year 1974 (the others were 1918 and 1940). Allowing non-integer years did not have much effect on the result. On the other hand, the initial guesses had some effect. The time of the latest turning point varied between 1972.6 and 1974.5 when different reasonable initial guesses were used. This dependence on the initial guess reflects that the Excel Solver does not always find the global minimum; the values of the objective function were slightly better (lower sum of squared errors) for the solutions with the latest turning point in 1974 compared to the other solutions found by the tool.

    I still don’t know whether and how one might obtain confidence intervals for the times of the turning points.
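    A rough Python analogue of that Solver experiment, on synthetic data rather than the GISS series: least squares handles the line parameters through a continuous “hinge” basis, and a grid search places a single breakpoint at the minimum of the sum of squared errors.

```python
import numpy as np

def piecewise_sse(t, y, breaks):
    # Continuous piecewise-linear fit: intercept, slope, and one
    # "hinge" term per breakpoint; least squares does the rest.
    X = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(0.0, t - c) for c in breaks])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

# Synthetic series: flat, then rising at 0.05/yr after t = 60, plus noise
rng = np.random.default_rng(0)
t = np.arange(100.0)
y = np.where(t < 60, 0.0, 0.05 * (t - 60)) + rng.normal(0, 0.03, t.size)

# Grid-search the breakpoint that minimizes the sum of squared errors
candidates = np.arange(20.0, 90.0)
best = min(candidates, key=lambda c: piecewise_sse(t, y, [c]))
print(best)
```

Unlike a general-purpose solver, the grid search cannot get stuck in a local minimum of the breakpoint time, which is the dependence on initial guesses noted above.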

  58. If I’m not mistaken, isn’t 1975 close to the time the temperature trend started to diverge from the solar cycle/pattern? It may be masked in the beginning; I think a pretty strong cycle began then.

  59. David B. Benson

    Ani | August 18, 2010 at 9:27 pm — Solar cycle is quite a minor forcing. Study
    http://data.giss.nasa.gov/modelforce/RadF.txt

  60. Is the modelled data on Arctic sea ice good enough to do this sort of analysis?

  61. I meant Arctic sea ice volume.

  62. Tamino, do you have LaTeX turned on? Testing…

    \frac{P}{A} = \int_0^\infty \frac{2 h\epsilon(\nu)\nu^{3}}{c^2}\frac{1}{ e^{\frac{h\nu}{kT}}-1} d\nu \int d\Omega

    Just curious.

  63. Lotharsson | August 21, 2010 at 4:32 am | …

    Mr. (Dr.?) Lotharsson, please let me say what a very lucid explanation of feedback you gave in that post. Nicely and economically expressed.

  64. Barton Paul Levenson wrote:

    Lotharsson |August 21, 2010 at 4:32 am… please let me say what a very lucid explanation of feedback you gave in that post. Nicely and economically expressed.

    What I most like about it is the stepwise approach using feedback gain. A lot of times that is glossed over.

    But if we inject carbon dioxide into the atmosphere we are going to have feedback gain from water vapor, and from carbon dioxide itself. Raising the temperature due to increased levels of carbon dioxide will increase the partial pressure of water vapor. But it will also raise concentrations of carbon dioxide by warming the ocean and reducing its capacity to hold carbon dioxide, by melting methane hydrates whose methane will decay into carbon dioxide, and so on.

    And as both water vapor and carbon dioxide are greenhouse gases, they will necessitate a further increase in temperature. But as long as the rise in temperature they produce is smaller than the original rise (that is, as long as the gain factor g is less than 1), the total increase in temperature after all feedback has occurred can be expressed as a geometric sum. Which really does make it a version of one of Zeno’s Paradoxes.
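    The geometric-sum picture takes only a few lines to make concrete; the initial warming and gain factor below are purely illustrative numbers, not measured values.

```python
# Total warming after feedback treated as a geometric series: each
# round of feedback adds a fraction g of the previous increment.
# Both numbers below are illustrative, not measured values.
dT0 = 1.2  # initial no-feedback warming, deg C (assumed)
g = 0.5    # feedback gain factor; must be < 1 for convergence (assumed)

total = 0.0
increment = dT0
for _ in range(100):  # sum the first 100 feedback rounds
    total += increment
    increment *= g

closed_form = dT0 / (1.0 - g)  # limit of the geometric sum
print(total, closed_form)
```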

    Of course, if you put a slug of carbon dioxide into the atmosphere, the forcing due to carbon dioxide will not cause the temperature to rise all at once prior to any feedback. Rather, the rise in temperature simply due to the initial slug of carbon dioxide will raise the temperature gradually and the feedback due to this rise in temperature will be occurring at the same time.

    As such the process would be best described in terms of a differential equation whose solution would express carbon dioxide concentration, water vapor concentration, and temperature as continuous functions of time, and of location, if one wanted something realistic. But in all but the most idealized cases one couldn’t expect an analytic solution.

    Therefore one would likely use a finite difference method. This is essentially what climate models do within a three-dimensional grid. (Please see for example this explanation by Vicky Pope.) But if one looks at a grid consisting of only a single cell and considers only a single greenhouse gas as part of the feedback then we are right back to a stepwise approach with feedback gain.
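    A single-cell, single-feedback version of that finite-difference picture might look like this (all parameter values illustrative, not calibrated):

```python
# One-cell finite-difference sketch: temperature relaxes toward an
# equilibrium that itself rises with realized warming via a feedback
# term. All parameter values are illustrative, not calibrated.
dT_forcing = 1.2  # warming the initial CO2 slug would cause alone (deg C)
g = 0.5           # feedback gain per degree of realized warming
tau = 5.0         # relaxation timescale, years
dt = 0.01         # time step, years

T = 0.0
for _ in range(int(2000 / dt)):  # integrate far past equilibrium
    target = dT_forcing + g * T  # equilibrium given feedback so far
    T += dt * (target - T) / tau
print(T)  # approaches dT_forcing / (1 - g)
```

The stepped result converges to the same limit as the geometric sum, which is the point of the stepwise feedback-gain picture.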

  65. We see signs of climate changes other than temperature at this time. A colleague, Shiyu Wang, did some work on precipitation in Ireland (see a blog entry on it here, including a link to a poster presented; paper to follow). We saw a change in precipitation from 1978 onwards: a wetter climate.

  66. Søren Rosdahl Jensen

    I would like to point to the following article by T. Subba Rao and E. P. Tsolaki: “Nonstationary time series analysis of monthly global temperature anomalies”.
    The data analysed is the CRU temperature time series for northern hemisphere, southern hemisphere and global.
    Among other things they look for change points in the covariance structure in the series using the CUSUM test. For the northern hemisphere data they find a highly significant change (5 sigma level) between 1950-1960. For the southern hemisphere and global data they find a change (2.5 sigma) between 1960-1970.
    So that is another analysis, using another technique on a (somewhat) different data set. And the results support the analysis presented here.

    The article is highly recommended and can be found here: http://tinyurl.com/2dy87u9
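    The paper applies CUSUM to changes in the covariance structure; as a simpler illustration of the same idea, here is the classic mean-shift CUSUM on synthetic data:

```python
import numpy as np

def cusum(y):
    # Cumulative sum of deviations from the overall mean; a change
    # point shows up as an extreme excursion of this curve.
    return np.cumsum(y - np.mean(y))

# Synthetic series with a mean shift at index 100
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(1.5, 1.0, 100)])
s = cusum(y)
change = int(np.argmax(np.abs(s)))  # most extreme excursion
print(change)
```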

  67. If you feel like smashing people who say it’s ad-hoc, I’m pretty sure you can get the same result with the following algorithm:

    1) Pick an accepted correlation coefficient. Something like .75 or higher would probably be fine. The point is a “loose fit”.
    2) Run a least squares fit for an N-degree polynomial starting at N=1 until the correlation is at least the value chosen in (1)
    3) The order N should be the number of piecewise segments that would acceptably model a piecewise linear function.
    4) Basic calculus gives the inflection points (roots of the second derivative); basically where the “edges” of the piecewise function lie.
    5) Proceed as you do, content knowing that silly people can’t criticize you for an arbitrary selection for your piecewise function.

    If you really want to get their goat, you can give them the source code and just have the correlation stored in a variable near the top. As that’s the only “arbitrary” thing remaining, they can screw with it as much as they want.
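    As a sanity check that the recipe is implementable, here is a sketch using R² as the “loose fit” criterion and the real roots of the second derivative as candidate change points; the toy cubic at the end has its single inflection at t = 0.

```python
import numpy as np

def change_points_by_polynomial(t, y, r2_min=0.75):
    # Step 2: raise the polynomial degree until the fit reaches r2_min.
    # Step 4: the real roots of the second derivative inside the data
    # range are the candidate change-point ("edge") times.
    ss_tot = np.sum((y - y.mean()) ** 2)
    for deg in range(1, 15):
        p = np.polynomial.Polynomial.fit(t, y, deg)
        r2 = 1.0 - np.sum((y - p(t)) ** 2) / ss_tot
        if r2 >= r2_min:
            roots = p.deriv(2).roots()
            real = roots[np.isreal(roots)].real
            return deg, sorted(r for r in real if t.min() < r < t.max())
    raise ValueError("no degree up to 14 reached the requested R^2")

# Toy check: for y = t^3 the single inflection point is at t = 0.
t = np.linspace(-1.0, 1.0, 101)
deg, cps = change_points_by_polynomial(t, t ** 3, r2_min=0.999)
print(deg, cps)
```

Note that on real temperature data the chosen R² threshold still influences how many inflection points emerge, so the choice in step 1 is not entirely arbitrary-free.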
