Open Mind

Central England Temperature

April 28, 2008 · 137 Comments

The longest single instrumental temperature record, one which has recently come under scrutiny in comments on this blog, is the Central England Temperature, or CET. The primary CET record consists of estimates of monthly average temperature from 1659 to the present. Let’s take a close look.


Here’s a graph of the CET data:

The first thing we notice is that the earliest data tend to fall along dense lines of whole-number values. This is because from 1659 through 1670, monthly estimates of CET are only given to the nearest whole degree C. From 1671 through October 1722 they’re only recorded to the nearest 0.5 deg.C, except for a brief interval from 1699 through 1706 when the data are to the nearest 0.1 deg.C. Only from November 1722 to the present do we have an uninterrupted record of monthly averages recorded to the nearest 0.1 deg.C. In addition, there’s a record of daily temperature estimates from central England, but these don’t begin until 1772. Hence at the outset we can distinguish four episodes of varying precision in the data:

1659-1670: 1 deg.C
1671-1722: 0.5 deg.C
1723-1771: 0.1 deg.C
1772-2008: daily data
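
For readers who want to work with the monthly series themselves, here is a minimal sketch of how those precision eras could be tagged in code. The era boundaries come straight from the table above; the function name and the idea of tagging are purely illustrative, not part of any official CET tooling.

```python
# Hypothetical helper: label a (year, month) of the monthly CET series with the
# nominal recording precision described in the post. Purely illustrative.
def precision_era(year: int, month: int) -> float:
    """Return the nominal recording precision (deg C) for a given month."""
    if year <= 1670:
        return 1.0                      # 1659-1670: whole degrees only
    if (year, month) <= (1722, 10):     # up to and including October 1722
        return 0.1 if 1699 <= year <= 1706 else 0.5
    return 0.1                          # November 1722 onward

# Quick sanity checks against the episodes listed above:
assert precision_era(1665, 6) == 1.0
assert precision_era(1710, 3) == 0.5
assert precision_era(1700, 3) == 0.1
assert precision_era(1722, 11) == 0.1
```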

Not only are the earlier episodes less precise by virtue of fewer significant digits, they were recorded when the science of thermometry was in its infancy. The alcohol thermometer was invented in 1709, and the mercury thermometer in 1714, so all thermometer records prior to that are due to different instruments whose accuracy and calibration are uncertain. Nor were these early measurements taken on modern scales; the Fahrenheit scale wasn’t invented until 1724, and the Celsius scale didn’t appear until 1742. Hence all thermometer measurements prior to the mid-18th century should be considered far less reliable than those taken later.

We can of course remove the annual cycle of the seasons by taking the difference between each month’s value, and the average for that month during a reference period. I’ll use the entire time span as the reference period, so that negative values are cooler-than-average months and positive values are warmer-than-average months:
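
A minimal sketch of this anomaly calculation, assuming the monthly values have been read into a pandas DataFrame with ‘year’, ‘month’ and ‘temp’ columns (column names are my own choice, for illustration):

```python
import pandas as pd

def monthly_anomalies(cet: pd.DataFrame) -> pd.Series:
    """Anomaly = each month's value minus the long-term mean for that calendar
    month, using the whole record as the reference period (as in the post)."""
    climatology = cet.groupby('month')['temp'].transform('mean')
    return cet['temp'] - climatology
```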

Something caught my eye when viewing the anomalies, which is more evident when viewing a larger version of the graph (click to see the whole thing):

In the early years, there are dense areas with the same value. They’re highlighted in a closeup of the early years, here:

I haven’t read the documentation of the construction of the CET (if I were conducting a study for peer-reviewed publication, I surely would), but this indicates the possibility that during the early years, missing values were “filled in” with the average value over a longer time span. If true (and I emphasize the “if”), then those values aren’t direct measurements at all. This further emphasizes the reduced reliability of the earlier part of the record.

Using monthly values emphasizes the rapid fluctuations of temperature which are much more “noise” than “signal” in the long-term record. We can begin to reduce the noise by taking annual averages:
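
The annual averaging step, as a sketch under the same assumed DataFrame layout:

```python
import pandas as pd

def annual_means(cet: pd.DataFrame, col: str = 'anomaly') -> pd.Series:
    """Average the twelve monthly values within each calendar year."""
    return cet.groupby('year')[col].mean()
```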

To make this easier to see, here’s a larger version (click to see the whole thing):

One thing that’s apparent is that most of the hottest years have happened recently; not to sound too much like a certain Nobel-peace-prize-winning former politician, but 9 of the 10 hottest years in the CET record have occurred in the last two decades.

If we’re looking for climate rather than weather, even annual averages show more noise than signal. So I computed moving averages, using a 5-yr window, a 10-yr window, and a 30-yr window (the most common time period for computing climatological means):
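
One way to compute those centered moving averages, assuming `annual` is a pandas Series of annual means indexed by year (again an illustrative sketch, not the code behind the figures):

```python
import pandas as pd

def centered_moving_averages(annual: pd.Series, windows=(5, 10, 30)) -> pd.DataFrame:
    """Centered moving averages for several window lengths. Values within half
    a window of either end come out as NaN, which is why a strict 30-year
    moving average stops about 15 years short of the data edges."""
    return pd.DataFrame(
        {f'{w}-yr': annual.rolling(window=w, center=True).mean() for w in windows}
    )
```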

As the averaging interval gets longer, the size of the “wiggles” in the smoothed data gets smaller. By the time we get to 30-year moving averages, the wiggles are greatly reduced but the signal still remains. This argues for the appropriateness of the “industry standard” 30-year window for climatological means. Hence I computed smoothed values on a 30-year time scale, using both moving averages and a Savitzky-Golay filter:
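
The Savitzky-Golay step can be approximated with the standard filter in scipy. Note that this is only an approximation: as discussed in the comments below, the post’s curve actually uses a modified filter with a custom weighting function, which is not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_30yr(values: np.ndarray, window: int = 31, polyorder: int = 2) -> np.ndarray:
    """Plain (unweighted) Savitzky-Golay smoothing on roughly a 30-year scale.
    The window length must be odd, so 31 points stands in for '30 years';
    polyorder is the degree of the local polynomial fit at each point."""
    return savgol_filter(values, window_length=window, polyorder=polyorder)
```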

I’ve also plotted lines marking out the episodes of known changes in the precision of the data. It’s now clear that the recent rise in temperature is unlike anything seen in the more accurate part of the CET record. It’s also much warmer than anything seen in the entire CET record. If we plot only the most precise part of the data, from 1800 to the present, we get a very clear picture of the climatological changes in CET for the last two centuries:

In this limited view, the temperature increase during the modern global warming era is both quantitatively and qualitatively unlike any other part of the record. Even including all the earlier, less precise data, the recent warming has brought central England to a climate much warmer than previously measured.

The rate of warming in CET since 1980 is 0.05 +/- 0.02 deg.C/yr, or half a degree C per decade. If this trend continues, then by mid-century CET will have increased by roughly another 2 deg.C. This will bring CET to heights unknown for at least 350 years, probably several thousand years, and in all likelihood warmth not seen since humans inhabited the British Isles.
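
For completeness, here is a sketch of how such a trend estimate might be computed from the annual series by ordinary least squares. The uncertainty quoted above is not necessarily this naive standard error; in particular, OLS errors understate the true uncertainty when autocorrelation is ignored.

```python
import numpy as np
from scipy import stats

def warming_rate(years: np.ndarray, annual_temps: np.ndarray, start: int = 1980):
    """OLS warming rate (deg C per year) from `start` onward, plus the naive
    standard error of the slope (which ignores autocorrelation, so the real
    uncertainty is somewhat larger than the value returned here)."""
    mask = years >= start
    fit = stats.linregress(years[mask], annual_temps[mask])
    return fit.slope, fit.stderr
```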

Categories: Global Warming · climate change

137 responses so far ↓

  • TCO // April 28, 2008 at 5:46 pm

    How recent is “the present” (where base data ends in current times)?

    The stuff with looking at the patterns of data filling in the old years is very nice. Very SMish (reminds me of him plotting the grass plots). I don’t know if your circled areas are chance clumps or a good catch and would obviously feel better with analysis.

    Looking at your seventh figure, red line, there seems to be a fair amount of natural variability of the regional climate, prior to 1950. Even if you constrain it to 1750-1950, there’s still reasonable regional temp variation. Given that the MWP is posited to have been a large, natural regional variation, even by stout AGWers (regional, I say), I don’t think you can well find support for AGW from a single region series. Since a single region is capable of having experienced natural variations to match the current one, no? And of course we could probably find regions that have recent temp downturns if we scan the globe.

    I think you are letting the end slopes dictate too much of the rise, with the way you display that red line on the last figure. I’m not crazy about the extrapolation at all (given there’s actual data there). But if you’re going to do it, why not extrapolate “back” at 1800 as well?

    [Response: The CET data are current (up to March 2008). There's no extrapolation at all in any of the graphs. One of the advantages of most smoothing filters over moving averages is that they permit estimates throughout the entire span of the data, while moving averages, if we require complete 30-year time spans, can only come within 15 years of the data edges (the value plotted at 1800 is based on data from 1785 to 1815).

    The variation observed in the 30-year smoothed data from 1750 to 1950 isn't nearly as big as the warming observed recently. The entire range of variation is only about 0.5 deg.C, while we've seen a little over 1.2 deg.C warming in just the last 30 years. Even including the less precise earlier data, the 300 years preceding 1950 show a total range of variation of about 1.2 deg.C, roughly what we've seen in the last 30 yr.]

  • Hank Roberts // April 28, 2008 at 6:16 pm

    The red isn’t extrapolation, is it? It’s annual data.

    The last figure’s just a closeup of the recent end of the full previous one; the 30-year average (gray) can’t extend out to the ends of the yearly (red) data.

    It’s clearly not a hockey stick –rotate this:
    http://www.antiquemystique.com/images/6820_jpg.jpg
    turn it 180 degrees for a better match.

  • nanny_govt_sucks // April 28, 2008 at 6:23 pm

    This will bring CET to heights unknown for at least 350 years, probably several thousand years, and in all likelihood warmth not seen since humans inhabited the British Isles.

    Maybe soon it will be habitable there. :-) But seriously, is warmth such a bad thing?

    Spectacular orchids double due to global warming
    http://www.independent.co.uk/environment/spectacular-orchids-double-due-to-global-warming-475373.html
    “Until now, the effects of global warming on Britain’s plant kingdom have only been detected in phenology - the timing of appearing, leafing and flowering. For example, oak trees are coming into leaf as many as 10 days earlier than they were 30 years ago, and spring flowers such as snowdrops are blooming as early as December.

    But the new study - the BSBI Local Change Survey - clearly shows that some species are now increasing in numbers and frequency of occurrence in a way that is consistent with steadily rising temperatures.”

  • Lost and Confused // April 28, 2008 at 6:29 pm

    I may be misinformed, but didn’t the CET record undergo a significant change in 1974? I know the adjustments for urban warming began then, but I have read they completely changed which stations were used that year as well.

    I am not sure if this is true or not, and I do not know how to find out offhand. If it is true, it would seem rather awkward to compare the record before 1974 to after 1974. Pardon a bit of skepticism, but I cannot help but wonder at the timing of the warming in these graphs.

    I looked through the official CET website and could not find information about which stations were used for particular periods. Could anyone provide clarity on what, if any, station changes happened in 1974 (or ideally, a source discussing all changes in which stations made the CET record)?

  • gerda // April 28, 2008 at 7:51 pm

    Hank! that’s not funny….well it is, but you know…

    actually the first thing i noticed on the CET data was the disappearance of sub-zero monthly averages from about 1990. but then i am a gardener, it tallies with my observation of much milder winter averages and lack of long cold snaps to kill the bugs.

  • TCO // April 28, 2008 at 7:55 pm

    OK, I see now that there is not an extrapolation. I’m still concerned about using a smooth as opposed to a moving average as it sort of pins the end point, no? Whether we call it some fancy math name or not, that’s the impact. And is that reasonable? Are the end data points carrying more impact than the middle ones?

    [Response: No, smoothing doesn't pin the end points.]

  • Hank Roberts // April 28, 2008 at 8:07 pm

    Lost, did you look at any of these pages?
    http://www.google.com/search?q=CET+temperature+record+England+time+stations

    If not, where were you looking? Remember, negative results _do_ count, because they help others know what you tried in getting an answer.

  • Hank Roberts // April 28, 2008 at 8:10 pm

    P.S., for all who use or want CET data — no doubt you read this on their website:
    ———–
    Leave feedback For problems getting or understanding the data, or to suggest some improvement please leave feedback. If you find the data useful, please return and tell us what you did with it.
    ———–

  • TCO // April 28, 2008 at 8:35 pm

    When should one use smoothing and when moving average?

  • Hank Roberts // April 28, 2008 at 8:39 pm

    The 2005 paper (PDF, last under References, link is at the CET main page) ends:

    “On 1 November 2004, Squires Gate and Ringway were replaced by a new automated station at Stonyhurst, owing to closure of Ringway. We took account of systematic differences in CETmax , CETmean and CETmin in each calendar month by using parallel observations made during 2001 – 04. We also plan to replace Malvern by an automated, more rural station at Pershore when adequate parallel observations have been made and analysed.”

    That’s the right way.

  • Hugh // April 28, 2008 at 8:41 pm

    L and C. You seem to be suggesting that the 1974 station change was probably carried out with absolutely no consideration of effect at all? I suppose that’s up to you. This is a quote from the very first paper cited on the Met Office CET page (Parker and Horton, 2005); it seems they were using additional data in order to improve the series, to make it better:

    Owing to the availability of additional digitized daily data, Parker et al. (1992) used different stations for daily CET[mean] than Manley (1974) had used for monthly CET (Table I). Because of these differences in stations, the areal average temperature at the Parker et al. stations differed slightly from the Manley values. So, to maintain homogeneity (Section 1), Parker et al. (1992) adjusted their daily CET[mean] values to make their monthly averages consistent with Manley (1974). For the same reason, when we created daily CET[max] and CET[min] series, again using a different sequence of stations, we adjusted the values so that each day’s average of CET[max] and CET[min] equaled that day’s adjusted CET[mean] and was therefore also compatible with Manley (1974). Here, we estimate the uncertainties arising from these adjustments. We also estimate the uncertainties stemming from the adjustments applied by Parker et al. (1992) to recent CET[mean] to compensate for urban warming.
    The adjustments applied to daily CET[mean] account for differences in station position, instruments and time of day of observation between the Manley (1974) data and the Parker et al. (1992) data. They were calculated by Parker et al. (1992) for individual months in their common period, 1772 to 1973. The adjustments for 1878 to 1973 are tabulated in Parker et al. (1991). Since 1974, the CET for each month has been adjusted by the mean adjustment for that month calculated using available Rothamsted, Malvern, Squires Gate and Ringway data over the years 1944, 1948, 1949 and 1959–73 (Parker et al., 1992). This is done before the urban warming adjustments are applied.

  • John Andrews // April 28, 2008 at 8:51 pm

    I’m bothered by the use of different scales for the graphs. Seems to me that if you start plotting the anomalies using +-6 degrees, then you should do the same for all, even the smoothed graphs. Only in this way will the appearance of the graphs, when compared with each other visually, be the same. Although the amplification of the scale tends to show what you want it to show, doing so tends also to bias the viewer’s perception of the graph. In other words, it ain’t as bad as it looks!

  • dhogaza // April 28, 2008 at 8:57 pm

    But seriously, is warmth such a bad thing?

    Spectacular orchids double due to global warming

    Meanwhile, kudzu is marching northwards in some sort of weird, invasive species reversal of Sherman’s march through the south.

  • TCO // April 28, 2008 at 9:01 pm

    I thought the US SE was one of the regions that hasn’t had recent warming.

  • Doug Clover // April 28, 2008 at 9:03 pm

    And down here in NZ we are waiting the first cases of Dengue fever as the northern part of the North Island gets warmer and wetter.

    Doug

  • dhogaza // April 28, 2008 at 9:12 pm

    Kudzu’s northern limit is, AFAIK, limited by winter freezes, so all you need is a bit of warming at the northern limit of the range, to the degree that it doesn’t freeze hard enough to kill it, and voila!

    There you are.

  • dhogaza // April 28, 2008 at 9:39 pm

    And down here in NZ we are waiting the first cases of Dengue fever as the northern part of the North Island gets warmer and wetter.

    And in the mediterranean they’re tracking a couple of diseases (maybe even Dengue) as they push further north in Italy, along with their (originally) African mosquito vectors …

    But, hey, when you’re sick in the hospital, near death, there’s nothing like a few orchids to cheer you up!

  • David B. Benson // April 28, 2008 at 9:48 pm

    Pine bark beetle’s northern range is limited by very hard winters (they possess a natural anti-freeze). Let the winters warm up and, voila, southeastern British Columbia with about 34+ million acres of dead lodgepole pine.

  • Lost and Confused // April 28, 2008 at 9:55 pm

    Hugh, I have no idea what would make you think I suggest that. I do not care to guess at people’s thoughts. However, I have no problem looking at facts and evidence. In this case, there seem to be two important pieces. One, 1974 saw a major change in the gathering of data, as the stations involved were replaced with different ones. Two, the data prior to 1974 has a markedly different trend than the data after 1974.

    If both of these are accurate, it seems extremely awkward to compare the two time periods. I do not know how one could justify comparing them. Perhaps some justification exists. I do not know it, I have not seen it, and I cannot imagine what it would be.

    Hank Roberts, the first two results of the Google search you linked are of the Met Office page. I have already looked through their CET section. The next two results were from a blog. I do not see this helping me any.

    As I said before, I have already looked through the official website for CET. It is possible I missed the information, or it could be buried within a PDF file. I would like to know if what I have read is correct, but I have no reason not to believe it. So for now I will assume 1974 saw the change I described.

  • TCO // April 28, 2008 at 10:14 pm

    Dhogza, if you look at the plant hardiness (based off of the worst 5 days of freeze) charts, they have recently gotten warmer, but really only to return to about the level that they were at two generations ago. US SE is well known not to have really warmed like the rest of the globe. If you look at things like Spanish moss, alligator or palmetto extents, they are all not moving up. Bugs me…as I lubs me doze dings.

  • TCO // April 28, 2008 at 10:21 pm

    Tammy: I “wiki-ed” that filter. And it says that it does local quintic regressions to get the curves. Tufte (I have a man crush on him) has some harsh things to say about quintic regressions, here:

    http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001Zl

    This is not exactly a point. So don’t get all mad at me. I’m just trying to understand the universe.

    [Response: I looked at the wiki and it says no such thing. A Savitzky-Golay filter does not imply quintic regression, it uses local polynomial regression with the polynomial degree chosen by the user.

    Furthermore, I actually used a *modified* Savitzky-Golay filter because I apply a weighting function of my own design, tailored to minimize the impact of noise on the result. I called it "Savitzky-Golay" without the "modified" because this post is not about filters, so I decided to keep it simple. I'm working on a peer-reviewed publication to introduce my new weighting function.

    Furthermore, I also applied a wavelet filter and a low-pass filter. The results are the same. The very large warming in the last 30 years is real, and shows up just as prominently in all the filters I applied.

    Tufte's comments about quintic fits may rightly apply to the *mis*application of that method, but there's nothing wrong with method in general. And of course that's irrelevant to this analysis, since I didn't use quintic fits.

    You say you're not trying to make a point, but that seems implausible given that you're trying so hard to imply that the dramatic recent warming in CET is somehow an artifact of the chosen filter. It isn't.]

  • Hank Roberts // April 28, 2008 at 10:31 pm

    Lost, did you read the part I quoted, from that page? Did you read the papers in the References section?

    Tell us what you understood, from reading those papers.

    And tell us what you learned when you used the Feedback link to ask for more information.

    Otherwise, you’re just expressing the same thing as in the last thread, a lack of ability to find facts, and mistrust, and assumptions the problem has to be someone else’s.

    Maybe it’s you.

  • Hank Roberts // April 28, 2008 at 10:39 pm

    Oh, just for the record, Lost, you write:

    “I would like to know if what I have read is correct, but I have no reason not to believe it.”

    What do you believe? What’s your source? Why do you consider your source reliable?

  • Hank Roberts // April 28, 2008 at 10:44 pm

    Oh, for the record, there’s an English equivalent of you know what:

    An Englishman’s Castle: Surfacestations.uk
    Jul 27, 2007 … used different stations for daily CETmean than Manley (1974) had used for monthly CET ). Because of these differences in stations, …
    http://www.anenglishmanscastle.com/archives/004389.html -

  • TCO // April 28, 2008 at 11:08 pm

    Ok…I admit it. I’m an evil mole, Tammy. I’m not trying to understand the universe. I just want to drill in ANWR.

    Joking. Seriously, chill man. Yeah, I get that they are different. I just wondered if there might be insight from one to the other.

    Oh…and actually it doesn’t make me feel better that your smoothing function is home-made and such and you didn’t bother us with the details, given that’s what I’m interested in.

    [Response: If that's true, then why did you flatly state that the wiki describes a Savitzky-Golay filter as using quintic regression when it says no such thing? Could it be because you had a link to an unflattering opinion of quintic regression? Why did you start commentary on this thread saying that I let the end slopes dictate too much of the rise, and claim it was extrapolation? Why did you then claim that smoothing somehow "pins" the end points and imply that it made the result unreasonable?

    Every one of your questions about smoothing has carried an implication that it's the analysis rather than the data that has created the dramatic recent warming in CET. That's why your claims of purely innocent thirst for knowledge are implausible.]

  • TCO // April 28, 2008 at 11:14 pm

    I’m still interested in when a moving average is warranted (one that omits end points) and when a smooth all the way to the end points is justified. I want to know the principles that dictate the choice. Not just what benefits my evil Ron Paul loving soul.

    [Response: Moving averages have the virtue of utter simplicity, and that their precision is the same over the entire span of estimated values. The main defect is that they don't permit estimation within half a "smoothing interval" of the edge of the data. They also aren't very smooth, as smoothing methods go; note the tiny wiggles in the moving average curve which don't reflect actual changes in the "smoothed" value, just the essential discontinuity of the method.

    Other smoothing methods, all the way to the edge of the data, are just about always "justified." They too can show defects, especially reduced precision (but not reduced accuracy) near the edges. However, for a good method the reduction in precision is limited to a small fraction of the smoothing interval (in this case, just a few years in spite of the 30-year smoothing interval). Different methods have different strengths and weaknesses; a low-pass filter is less susceptible to edge effects but very poor at "turning the corner" when the corner is sharp.

    No smoothing method is perfect. I chose the modified Savitzky-Golay filter because it's what I'm working on (theoretically) at the moment, so it fascinates me.]
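
To make the edge-effect point in the response above concrete, here is a small illustrative comparison on synthetic data, using the standard scipy Savitzky-Golay filter rather than the modified filter behind the post’s figures:

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

# Synthetic annual series spanning the CET years: a gentle trend plus noise.
years = np.arange(1659, 2009)
rng = np.random.default_rng(0)
series = pd.Series(0.002 * (years - years[0]) + rng.normal(scale=0.5, size=years.size),
                   index=years)

# A centered 30-year moving average is undefined within roughly 15 years of
# each end of the record; the Savitzky-Golay smooth gives a value at every year.
ma30 = series.rolling(window=30, center=True).mean()
sg30 = savgol_filter(series.values, window_length=31, polyorder=2)

print(int(ma30.isna().sum()), "years lack a moving-average estimate;",
      int(np.isnan(sg30).sum()), "lack a Savitzky-Golay estimate")
```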

  • TCO // April 28, 2008 at 11:55 pm

    I’m using the Strunk and White method of saying a word that I don’t know how to pronounce LOUD, so that it gets corrected. That and I’m a bloody mary drinking mole.

  • Hank Roberts // April 29, 2008 at 12:00 am

    Yeh, but our host’s point is you’ve been making him do what should be your work, in this thread — checking your beliefs stated in your postings against your sources, and correcting errors that all fell in the same direction. Clue.

  • cohenite // April 29, 2008 at 12:08 am

    You mention the early data problems such as whole numbering and periods of estimations; there is a greater problem; Shakespeare may have died before the CET data began but his influence continued on the data collectors; in homage to his Sonnet 18 the collectors applied the following criteria to data adjustment:

    To freeze or not to freeze: That is the question:
    Weather ’tis fav’rable to gain entrophy
    or with such jostlings of outrageous data
    Or to bond, freeing ph seas of enthalpy
    And thus subside PDO? ENSO thus SOI
    No more; and by reducing TSI we end
    Activations of thousand anomalous shocks
    that bases do suffer. ‘Tis interpolation
    Devoutly pursued. To freeze, no melt;
    To melt: perchance to runaway; ay, there’s the rub
    which warms poles that tie so rigidly;
    by such warmth to be dislodged, even boil.

    (With apologies to Craig Carter)

    [Response: Isn't that homage to Hamlet's famous soliloquy, rather than Sonnet 18?]

  • David B. Benson // April 29, 2008 at 12:19 am

    Another thing which could be done with this data (since 1749 CE) is to look at correlations between the Wolf number

    http://solarscience.msfc.nasa.gov/greenwch/spot_num.txt

    and variations in the CET. I would expect to find essentially no correlation; the variability in TSI implied by sunspot numbers is too small to matter, IMHO.

    But maybe somebody has already done this?

  • Dano // April 29, 2008 at 1:53 am

    Well, Tamino, it’s obvious Hansen and Algore have corrupted the data, as everyone knows it was warmer…erm…well, sometime in the past.

    Best,

    D

  • cohenite // April 29, 2008 at 2:16 am

    You are of course right; I should have said they were inspired by sonnet 18.

    “Shall I compare thee to a summer’s day?

    And summer’s lease hath all too short a date:
    Sometimes too hot the eye of heaven shines,”
    and so on.

  • Lost and Confused // April 29, 2008 at 3:44 am

    Hank Roberts, I have read the information of station changes on several blogs. I have no reason to believe the posters were lying. There certainly have been station changes within the CET record.

    Quite frankly, I find your post mildly insulting. Why do you use the term “mistrust” to describe skepticism? Why would you say, “And tell us what you learned when you used the Feedback link to ask for more information”? I have never said I used the Feedback link, and you have no reason to believe I used it. This comment is a setup.

    I find it hard to take you seriously when you use tactics like that.

  • TCO // April 29, 2008 at 4:13 am

    Anyone have a good reference on the use of that filter? What its advantages/disadvantages are? How it works at endpoints? Other usage in time series studies, etc.? I did shake my fat ass and do a little work in terms of googling and wikiing. But haven’t really found a good ref. Most of the stuff seems to be either individual usage in certain studies or advice to use this for analyzing light spectra.

    Oh…and before anyone jumps me again, I’m just asking for a ref. Not maligning the polar bears.

  • TCO // April 29, 2008 at 4:14 am

    I guess I should break down and read the original paper and the better-spelled version of it…

  • TCO // April 29, 2008 at 4:29 am

    Tammy, man:

    There are two possible reasons why I said what I did:

    A. (What I think I did) I read your post. Tried to spend a little bit of time thinking about it. Tried to even do a little research (read wiki). When I read the wiki, I saw some text talking about fifth derivatives: “The Savitzky-Golay method essentially performs a local polynomial regression (of degree k) on a distribution (of at least k+1 equally spaced points) to determine the smoothed value for each point. Methods are also provided for calculating the first up to the fifth derivatives.” And I just sorta said…hmmm…I had just a couple of days ago come across Tufte talking about fifth derivatives. I realized that they were sorta different issues (local smoothing versus a whole spline curve). And I also realized that you had shown data (good for you) where that Tufte-fisked pop writer had not. But I dunno…I just kinda locked onto the fifth derivative (which, now that I look at it, may not be the same thing as a fifth-degree polynomial regression). But I didn’t think that at the time…I just saw this fifth thing. So I thought I would toss it out there and see if there was any useful insight from your smooth to that regression. It was just sort of a shot in the dark. But heck, I once asked about PCA having similarities (in the form of its solutions) to the particle-in-a-box Schroedinger problem, with higher modes having more nodes and the wave functions being perpendicular to each other, and someone at that time told me that the problems have some commonality in the form of the equation that drives what I saw as similar. So it was just thrown out there for the discussion.

    The other choice is that I’m evil. Maybe self-deluding myself to think that I’m not.

    Hank says I’m lazy…but that is so, so, mea culpa. I’d be lazy under either of those options.

    but enough with distracting you. consider my knuckles rapped.

  • Hank Roberts // April 29, 2008 at 4:41 am

    > the information of station changes on
    > several blogs. I have no reason to
    > believe the posters were lying.

    Well, is this a skeptical attitude?

    Which blogs? Who says? Did you read the paper discussing running old and new stations in parallel? Did the people you rely on point that out?

    You can find out if there’s a reason to trust what people say by reading the papers they cite and seeing if they describe them accurately.

    If they don’t cite their sources — well?

  • John Mashey // April 29, 2008 at 7:24 am

    While I wouldn’t want to be guilty of claiming that similarity of chart appearance proves anything, I do recommend looking at Holzhauser et al., http://www.unige.ch/forel/PapersQG06/Holzhauser2005.pdf
    Figure 2, on the Great Aletsch glacier, whose chart of glacier length is a more-or-less upside-down version of CET, although the glacier length does the smoothing naturally, has some lag time (~24 years), and of course depends on precipitation, not just temperature.
    Finally, one would expect some differences in sulfate aerosols between Central England and the Alps.

    As for why there might be jiggles, I think Ruddiman’s hypothesis about pandemics is much further developed than it was in “Plows, Plagues and Petroleum”. See the 37-page “The Early Anthropogenic Hypothesis: Challenges and Responses”, Section 10.
    http://www.agu.org/pubs/crossref/2007/2006RG000207.shtml
    ($ reqd, sorry).

    Fig 21 shows CO2 deltas computed from pandemics and reforestation estimates, aligned with CO2 concentrations from the Law Dome, i.e. as in:

    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/law/law_co2.txt

    Ruddiman says: “In summary, pandemic-driven reductions in atmospheric CO2 can explain half or more of the ~7ppm drop between 1200 and 1700. Depending on the highly uncertain size of the global mean cooling during this interval, this anthropogenic forcing could account for anywhere in between 16% and 66% of the total cooling.”

    In this case, the relevant pandemic would be the native American one, i.e., Europe’s problems with the Little Ice Age may have been partially self-inflicted.

  • sdw // April 29, 2008 at 7:31 am

    Again the obvious point is missed, and again I will get no response for pointing it out.

    Please look at the derivative of the temperature increase from ~1680 to ~1735. Look at the length of this period of time. What caused the change? What were the forcing factors? Climate sensitivity to what exactly? So temperatures do ‘naturally’ vary wildly and the fact that we are presently near a peak in a short instrumental record is completely irrelevant.

    Yes, it is warming. Is this some scientifically astute observation? It is entirely irrelevant to a sound scientific mind and provides no supporting evidence for the AGW hypothesis.

    I am not saying AGW is not plausible - I have no idea. However I do believe solid science should be readily convincing. The absurdity in quoting a high recent temperature record as supporting evidence for AGW, with these data sets as evidence, is absolutely mind boggling.

    Now if only the rate of temperature change revealed something, anything of interest.

    regards, sdw

  • Julian Flood // April 29, 2008 at 9:06 am

    quote As the averaging interval gets longer, the size of the “wiggles” in the smoothed data gets smaller. By the time we get to 30-year moving averages, the wiggles are greatly reduced but the signal still remains. This argues for the appropriateness of the “industry standard” 30-year window for climatological means. unquote

    There is, however, a striking and surely important loss of information in the graph just above the words I quote. The ten year smoothing shows an extraordinary leap around the years 1700 to about 1730, about 2.2 degrees of warming. Had I been sitting in Warwick in 1730 and been shown a comparison of my own warming and the 20th century’s 1.6 degrees, I might have pointed out that my own time had much more to worry about.
    But, of course, it was just weather — the thirty year smoothing magics it away. However, I would not have known that at the time and I would have written learned papers about the chances of malaria reappearing on Coney Weston fen.

    Are there no weather signals with a frequency of around 30 years? Or even longer? Look at:
    ftp://ftp.fao.org/docrep/fao/006/y5028e/y5028e02.pdf
    page 10, the wind speed variation. It looks like half of a sine wave with a length of 90 years. What does a 90 yr smooth look like, and are there other smooths which yield insights?

    (you will no doubt be impressed by my not drawing attention to the notch in the data around 1940.)

    JF

  • Adam // April 29, 2008 at 9:13 am

    Didn’t Eli post a comment about some of the early CET measurements being made in unheated rooms? This may account for some of those early oddities.

    Philip Eden has a wealth of knowledge about the CET, the original series, the Hadley changes and Manley’s work. He tracks the current CET based on the original Manley series, which can be used to compare with the Areal series.

    See http://www.climate-uk.com/page5.html
    He’s normally very helpful when asked nicely. I’d recommend a visit.

  • Petro // April 29, 2008 at 9:40 am

    L and C jerked:

    “Hank Roberts, I have read the information of station changes on several blogs. I have no reason to believe the posters were lying. There certainly have been station changes within the CET record.”

    It would be great if you would trust scientific articles even as much as you trust denialists’ websites.

  • Leif Svalgaard // April 29, 2008 at 11:30 am

    It is clear from your plot that the increase from 1700 to 1730 is larger than from 1975-present. So increases of the magnitude of the recent one can happen for other reasons than CO2.

  • DocMartyn // April 29, 2008 at 12:24 pm

    My parents grew up in the Midlands in the forties and fifties. My mother has described how the children used to walk home from school, while all holding hands, under adult supervision. The reason being that smog levels were so high that “you could not see your hand in front of your face”.
    The UK began to tackle air pollution in 1956, with the first of the clean air acts. This started a process where airborne particulates and sulphur oxides, from the burning of coal, were removed from ground level. The smog disappeared rather quickly from that date.
    Moreover, when she grew up, the countryside was country, and the cities were packed. During the past 50 years, with the introduction of cars, the cities have discharged their populations and the green belts that surrounded them are filled by human activity, redbrick, concrete and tarmac.

  • dhogaza // April 29, 2008 at 1:33 pm

    I have read the information of station changes on several blogs. I have no reason to believe the posters were lying…

    “several blogs” is an interesting cite. I’d love to see that kind of reference where it counts. “Earth is flat [1] …. 1. Several blogs 2. Ibid …”

    Why would you say, “And tell us what you learned when you used the Feedback link to ask for more information”? I have never said I used the Feedback line…

    I think that’s rather the point …

  • Petro // April 29, 2008 at 2:08 pm

    Leif stated:
    “It is clear from your plot that the increase from 1700 to 1730 is larger than from 1975-present. So increases of the magnitude of the recent one can happen for other reasons than CO2.”

    Are you blind? In the smoothed average the increase around 1700 is 1 degree, and from 1975 it is 1.25 degrees.

  • Hansen's Bulldog // April 29, 2008 at 2:32 pm

    Please look at the derivative of the temperature increase from ~1680 to ~1735. Look at the length of this period of time. What caused the change? What were the forcing factors? Climate sensitivity to what exactly? So temperatures do ‘naturally’ vary wildly and the fact that we are presently near a peak in a short instrumental record is completely irrelevant.

    Please look at the derivative of the temperature increase from last night to this afternoon. Or from last January to this January. Or from the 1998 el Nino to the 1999 la Nina.

    It’s important to distinguish between natural fluctuations which don’t persist and unnatural changes which do persist. On a 30-year time scale — the standard for climatological means — your argument fails.

    And of course, your premise is based on data taken before the invention of the alcohol thermometer, let alone the mercury thermometer, and before the definition of the Fahrenheit scale, let alone the Celsius scale. How accurate do you think such conclusions are?

    The ten year smoothing shows an extraordinary leap around the years 1700 to about 1730, about 2.2 degrees of warming. Had I been sitting in Warwick in 1730 and been shown a comparison of my own warming and the 20th century’s 1.6 degrees, I might have pointed out that my own time had much more to worry about. But, of course, it was just weather — the thirty year smoothing magics it away.

    The one-year averages show an extraordinary drop from 1733 to 1740, fully 3.7 deg.C. Had I been in Warwick in 1740 and been ignorant of the difference between weather and climate … But of course, doing any smoothing at all just “magics” it away…

    It’s important to distinguish between natural fluctuations which don’t persist and unnatural changes which do persist. On a 30-year time scale — the standard for climatological means — your argument fails.

    And of course, your premise is based on data taken before the invention of the alcohol thermometer, let alone the mercury thermometer, and before the definition of the Fahrenheit scale, let alone the Celsius scale. How accurate do you think such conclusions are?

    It is clear from your plot that the increase from 1700 to 1730 is larger than from 1975-present. So increases of the magnitude of the recent one can happen for other reasons than CO2.

    It’s clear from the thermometer that the increase from midnight last night to noon today is much larger than anything in any of the plots given. So increases vastly greater than anything in the smoothed plots can happen for reasons other than CO2.

    But of course that just illustrates that we cannot draw valid conclusions based on cherry picking the start and end points at temporary extremes.

    And of course, your premise is based on data taken before the invention of the alcohol thermometer, let alone the mercury thermometer, and before the definition of the Fahrenheit scale, let alone the Celsius scale. How accurate do you think such conclusions are?

  • Hank Roberts // April 29, 2008 at 2:45 pm

    > several blogs

    ” … cut and paste the address for the exact page where the information was found and the date of retrieval. [But keep in mind that "I found it on the internet" may not make the information any more reliable than "I heard it from some guy in a bar."]”
    http://www.umaine.edu/history/stylesheet.htm

  • cce // April 29, 2008 at 3:03 pm

    Using the convention that AGW started in 1975, and taking the data at face value, the rate of warming exceeded modern times during a cluster of 33-year periods starting in the late 1600s. However, it’s likely that modern warming will exceed this within a decade.

    http://cce.890m.com/cet-33-trends.jpg

    The last 10 years are about 0.6 degrees warmer than any comparable time in the series however, so the magnitude of the warmth is “unprecedented,” and within 0.2 degrees of the peak of Lamb’s MWP (using 50 year averages).

    http://cce.890m.com/lamb-updated.jpg

  • Leif Svalgaard // April 29, 2008 at 3:11 pm

    Petro, look at the real data [5th and 6th plot in Tamino's post] that shows yearly averages. A 30-year smoothing of the last 30 years is a poor representation of the truth.

    [Response: I disagree. In fact, I think your statement is foolish.]

  • Lost and Confused // April 29, 2008 at 3:11 pm

    In a somewhat unsurprising event, I found the information I wanted in the first pdf file I opened from the CET website (Parker et al. 2005). From their Table 1, the CET monthly mean series switched to a different set of temperature stations in 1974. After 1974, the ones in use were Rothamsted, Malvern, Squires Gate and Ringway (the last two averaged together). None of these were used in the series before.

    At a cursory look, I do not see how the CET can be considered a “continuous” series. It completely switched which stations were used, and a new trend appeared.

    Maybe the CET is a meaningful series. I do not know. I do know the average reader will feel quite differently if told the most important trend in the graph appeared only after completely altering the source of data.

  • Dano // April 29, 2008 at 3:11 pm

    Lost and Confused purveys FUD thusly:

    Quite frankly, I find your post mildly insulting**… [t]his comment is a setup…I find it hard to take you seriously when you use tactics like that.

    HTH display the FUD purveyance tactics and the similarly-structured research agenda for focus-grouping questions and framing testing done by ‘Lost and Confused’.

    It’s perfectly fine to reveal for others the holes in the tactics of L&C, but let’s not help them along by giving them the answers they seek to test.

    Best,

    D

    *’confused’ is, of course, part of the ‘U’ in FUD. Tactics, folks. He’s sowing doubt and testing how well it grows.

    ** this victim tactic is the ‘F’ in FUD. Tactics, folks. He’s playing the poor, poor victim to elicit sympathy.

  • george // April 29, 2008 at 3:13 pm

    TCO,

    RE: Savitzky-Golay:

    If you are using Savitzky-Golay for just smoothing, you use a version that does not take any derivative — though there are versions of the filter that do take a particular (1st, 2nd, 3rd, etc) derivative at the same time that they do smoothing.

    The degree of the filter is just the degree of the polynomial used to “fit” the curve in question in the neighborhood of a given point, centered on that point.

    HB:
    Practically speaking (i.e., for real-world purposes), why would anyone ever want to calculate a 5th derivative? (referred to in the wiki article with regard to Savitzky-Golay)
    Is there some case for which there is a physical meaning to be attached to the 5th derivative?

    I could be wrong (have been before once or twice), but I would think that the noise at that derivative level (even using a filter like Savitzky-Golay) would pretty much wash out the signal.

    [Response: I don't know of any case in which it's necessary to estimate the 5th derivative, or why one would want to.]

  • Leif Svalgaard // April 29, 2008 at 3:17 pm

    HB:

    But of course that just illustrates that we cannot draw valid conclusions based on cherry picking the start and end points at temporary extremes.

    If we calculate the 30-year trend for every year and slide it along from 1659 to the present, then tabulate the trends and ask if the recent trend is highly unusual, the answer would be that it is not, as similar trends have occurred in the past. Thus no cherry-picking needed, except perhaps for picking 1975-present.

    [Response: You're the one who pronounced a dramatic change from 1700 (a temporary minimum) to 1730 (a temporary maximum), neither of which represents a persistent change. It's no better than computing a trend from an el Nino to a la Nina. Cherry-picking.]

  • Leif Svalgaard // April 29, 2008 at 3:30 pm

    HB:

    And of course, your premise is based on data taken before the invention of the alcohol thermometer, let alone the mercury thermometer, and before the definition of the Fahrenheit scale, let alone the Celsius scale. How accurate do you think such conclusions are?

    As accurate as your conclusion based on the same data that the recent temps are the highest or have the largest increase.

    [Response: Do you REALLY think that conclusions based on data before the invention of the alcohol or mercury thermometers, and before the definition of the Fahrenheit or Celsius scales, are as accurate as conclusions drawn from 20th-century measurements?

    Do you REALLY think that smoothing on a 30-year time scale is not a proper approach to define climate?

    Don't equivocate -- just say "yes" or "no."]

  • John Mashey // April 29, 2008 at 3:31 pm

    I knew people were going to worry about those early jiggles, and of course measurement errors and natural effects were likely to be present.

    BUT, from the Ruddiman paper I mentioned earlier:

    “An estimated 80-90% of the pre-Columbian population (50-60 million people) dies between 1500 and 1750, with the highest losses probably occurring in the 1500s.” By adding up various specific cases, he estimates ~13.8 Gigatons of C sequestered via reforestation. Given the lag times between people dying and reforestation, what you’d expect is a dip in CO2 towards the end of the 1500s, as forests re-establish themselves, and then later, get back into steady-state. Then, after a while, European settlers start cutting enough trees to be noticeable.

    From the Law Dome records:

    From 1006 to 1570, CO2 usually stayed above 280ppm, with two points below (279.4 in 1006 and 279.6 in 1465). and an average of 281.8. Then:

    1589 278.7
    1604 274.3
    1647 277.2
    1679 275.9

    1794 281.6 (first time back above 280.0)

    Now, it *could* be an accident that the biggest drop in CO2 in the last 1000 years just happens to coincide with the reforestation from the biggest die-off in human history…

  • Leif Svalgaard // April 29, 2008 at 3:57 pm

    me:

    If we calculate the 30-year trend for every year and slide it along from 1659 to the present, then tabulate the trends and ask if the recent trend is highly unusual, the answer would be that it is not, as similar trends have occurred in the past. Thus no cherry-picking needed, except perhaps for picking 1975-present.

    I actually did this, the result is here: http://www.leif.org/research/CET1.png
    Too bad I cannot show the graph inline for its visual impact.
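
For anyone who wants to repeat that exercise, a minimal sketch of the sliding-trend calculation (illustrative only; this is not the code behind the linked figure):

```python
import numpy as np
import pandas as pd

def rolling_trends(annual: pd.Series, window: int = 30) -> pd.Series:
    """Least-squares slope (deg C per year) within each `window`-year span of
    an annual series, indexed by the final year of each span."""
    def slope(values: pd.Series) -> float:
        t = np.arange(len(values))
        return np.polyfit(t, values.to_numpy(), 1)[0]
    return annual.rolling(window=window).apply(slope, raw=False)
```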

  • Phil. // April 29, 2008 at 4:02 pm

    Leif,
    “It is clear from your plot that the increase from 1700 to 1730 is larger than from 1975-present. So increases of the magnitude of the recent one can happen for other reasons than CO2.”

    Indeed, perhaps it was getting too cold to keep going outside so he moved his thermometer indoors.

  • Leif Svalgaard // April 29, 2008 at 4:23 pm

    HB: I do REALLY think that the smoothing curve should stop in 1993, and that the 30-year average centered on that year is a definition of the climate for that interval. Your red curve extrapolated past that is not.
    As to the first question: it is silly in the extreme, and should have been phrased as whether the data are good enough for the conclusions we draw and whether they are corroborated by other evidence [LIA etc].

    [Response: Ignoring all smoothed values beyond 1993 is closing your eyes to the warming which has happened since then. I suspect that you'd like nothing better than to ignore warming after 1993, or would you rather simply deny that it's happened? And EVEN IF we do stop 30-yr averages in 1993, it's still 0.49 deg.C warmer than anything prior to the 20th century.

    As for the first question, rather than make yourself look like a fool by saying "yes," or retracting your foolish claim by saying "no," you try to change the question! And the stuff about the LIA is silly in the extreme. The LIA was not a brief temperature decrease, nor was it a brief temperature increase, so how does it "corroborate" a brief drop in temperature followed by a brief temperature increase above average? It doesn't.

    You've gone to some length to refute "my claim" that recent temperatures in the CET "have the largest increase." Now let's see whether you're even *capable* of giving a straight answer to a simple question: where did I make that claim?]

  • Thomas Huxley // April 29, 2008 at 4:41 pm

    We have used a rather diverse set of stations to build on Manley’s work and create a daily mean CET series for 1772 to date. Our daily series is one of the longest available, but it cannot at present be based on an entirely satisfactory set of stations. This is unfortunate …. The uncertainties involved in the replacement of Stonyhurst and the evidence for urban warming at several of our stations, leads us to stress the importance of the establishment of guaranteed reference stations for monitoring climatic variability and change….

    This is the conclusion from Parker Legg & Folland (IJC 1992).

    Suggest that the whole paper is essential reading for this thread (takes a while to download though)

  • dhogaza // April 29, 2008 at 4:41 pm

    Indeed, perhaps it was getting too cold to keep going outside so he moved his thermometer indoors.

    You’re forgetting that the early instrumental record, while too imprecise to support a “hockey stick”, is precise enough to prove a worldwide MWP and that today’s warming has a historical precedent … :)

  • steven mosher // April 29, 2008 at 5:33 pm

    Tammy, I rarely go to the extremes of calling someone foolish. And I am a class 1 jerk. So I was a bit surprised to see you call Dr. S foolish. But your choice.

    You invent a method that has not been published, that you could document here, but you choose not to. So, filter schmilter. Nobody can really assess your method, or modification, without the facts. It draws smooth lines. We all see that. Shrugs. Looking forward to the paper. Hopefully it will advance things.

    Anyway, the world is getting warmer, man is the cause. But you need to chill a bit dude. Take that from an angry white guy who knows. All that said, I did like your post. I was especially drawn to the point where you talked about these things.

    1. Early measurements only recorded to 1 deg accuracy. That would describe all the measurements in the USHCN since 1880. It wouldn’t seem to me that taking 60 measures per month at 1 C resolution would be much of a problem, given our previous discussion of the matter.

    2. Filling in missing data. We should talk about this WRT GISS. On a monthly and global scale, what percentage of GISS USA monthly data do you suppose is missing and infilled? Is there a better way to infill than is currently used?

    Sometimes I think that if you were just writing about data (without saying it’s temperature) people would not get so wrapped around the axle. FWIW

  • Hank Roberts // April 29, 2008 at 5:40 pm

    Most of you know this, but for anyone who doesn’t, a famous article

    http://www.americanscientist.org/template/AssetDetail/assetid/55905/page/3;jsessionid=baa9...

    gives some perspective on how temperatures were being measured in the late 1700s

    —–excerpt——
    … Figure 2 shows the locations of the Madison Montpelier plantation, Jefferson’s Monticello and the closest nearby modern stations where high-quality daily weather records have been routinely kept for decades. Figure 3 compares the difference between 4:00 p.m. and sunrise data observed by Jefferson as well as by Madison for the month of June, to the average values of hourly data from Charlottesville (the nearby modern station that has the most complete hourly series of data).

    [Figure 4. The northeast portico of Monticello]

    The siting of Madison’s instrument inside the home before 1787 (irrespective of the lighting of fire) clearly greatly reduced the morning-to-afternoon changes in the data compared to outdoor values: The indoor morning data were much warmer than the outside air temperature, and the afternoon data decidedly cooler, so that the range in temperature through the course of the day was greatly reduced by the thermal lag of the home.

    After noticing that the data were inconsistent with the occurrence of ice and taking the bold step of moving the instrument outdoors to the box on the porch in 1787, Madison’s morning and afternoon observations are dramatically different, and the morning-to-afternoon changes immediately approach modern outdoor data…..
    —–end excerpt——

  • Phil. // April 29, 2008 at 5:53 pm

    dhogaza, you’re right! Actually, siting the thermometer in a north-facing room without a fire was recommended practice from ~1723 to 1760.

    http://www.rmets.org/pdf/qj74manley.pdf
    Also essential reading (that means you too Leif)

  • sod // April 29, 2008 at 6:18 pm

    great post as always.

    and i just love how every single one of their replies exposes the denialists as completely incompetent.

  • dhogaza // April 29, 2008 at 6:20 pm

    Filling in missing data. We should talk about this WRT GISS. On a monthly and global scale, what percentage of GISS USA monthly data do you suppose is missing and infilled? Is there a better way to infill than is currently used?

    You do realize there’s a difference between algorithmic interpolation and simply copying past data?

    The first gives you sharp photos when you double or triple the size of an original image in photoshop, for instance. Interpolation of data can be a very effective technique.

    While simply copying data … well, a friend of mine, working for the EPA, was reviewing lake temperature data for a very large lake up near the canadian border, trying to get a handle on the effect of spilling water from the reservoir on downstream temperatures (important issue for salmon, among other things).

    A tribe had contracted to perform the data collection, and further subcontracted it out to others.

    She had me look at a graph of temps taken weekly for several years, and various depths.

    Asking me, “see anything unusual? Please tell me this can’t be!”. And, indeed, there was about a five year stretch where temps at various depths (every 10m, 20m something like that, repeated down to 100m or so) were *identical* each day.

    No thermal layering etc. In stark contrast to the data taken before or after.

    Well, the solution’s simple, of course. Some lazy-assed subcontractor took their first (say 10m depth) measurement, and simply copied the value for each of the other depths he or she was supposed to measure.

    And it wasn’t caught until years after the person left that job.

    Moral here is that COPYING data in this way, and INTERPOLATION, as is done for various datasets today, have nothing in common. Mentioning them together in the same post is just wrong.

    Early measurements only recorded to 1 deg accuracy.

    I guarantee you that those measurements were not recorded to 1C accuracy.

    Do you know why?

  • David B. Benson // April 29, 2008 at 6:22 pm

    Dano // April 29, 2008 at 1:53 am — Yes, it was warmer in the past. The far distant past, say 7600 ybp.

    I’ll have more to post about this, relating it to the temperatures of the last 2000 years, Ruddiman’s thesis and all that, later today.

  • george // April 29, 2008 at 6:39 pm

    Leif Svalgaard

    “A 30-year smoothing of the last 30 years is a poor representation of the truth.”

    I can’t remember the title of the specific paper, but I read a paper on Savitzky-Golay once that determined the optimum filter size for that filter, and it turns out to have a full width that is between 1 and 2 times the FWHM of the “features” one is most interested in looking at in the data.

    I have used that filter quite a bit in the past for spectroscopic applications and I know from practical experience that this is a pretty good rule of thumb.

    Features with width less than this will be smoothed out (i.e., lose definition) and those with width equal to or greater will remain.

    It therefore makes some sense (to me, at least) that if one is looking for temperature changes that happen on time scales of 30 years or greater (i.e., climate changes), one would use a Savitzky-Golay filter of about that width. This would tend to smooth out El Niño, volcanic eruptions and other short-term effects and leave the climate signal intact.
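
    (For anyone who wants to try that rule of thumb, here is a minimal sketch using scipy’s savgol_filter on synthetic annual anomalies in place of the real CET series; the 31-year window and quadratic polynomial are illustrative choices, not the ones used for the plots above.)

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for annual temperature anomalies (deg C), 1772-2007:
# a slow warming trend plus year-to-year "weather" noise.
rng = np.random.default_rng(0)
years = np.arange(1772, 2008)
anomalies = 0.004 * (years - years[0]) + rng.normal(0.0, 0.6, years.size)

# Savitzky-Golay filter: 31-year window (must be odd), quadratic polynomial.
# Features much narrower than the window are smoothed away; multidecadal
# (climate-scale) features survive, per the rule of thumb above.
smoothed = savgol_filter(anomalies, window_length=31, polyorder=2)

print(f"std of raw anomalies:      {anomalies.std():.2f} deg C")
print(f"std of smoothed anomalies: {smoothed.std():.2f} deg C")
```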

  • Bill Bodell // April 29, 2008 at 7:34 pm

    Didn’t Eli post a comment about some of the early CET measurements being made in unheated rooms? This may account for some of those early oddities.

    Sounds like Open Mind’s version of Watts, searching for explanations of temperature observations they don’t want to believe.

  • Bill Bodell // April 29, 2008 at 7:35 pm

    Ooops, my blockquote didn’t work. Should be quotes around the 1st paragraph.

  • Bill Bodell // April 29, 2008 at 7:37 pm

    dhogaza,

    The metric system?

    Check out the big brain on Brad.

  • dhogaza // April 29, 2008 at 7:50 pm

    The metric system?

    Actually the Celsius scale was older, but not old enough :) Wikipedia says 1742 IIRC, but reversed (0 boiling, 100 freezing).

    But, it was the big brain on Mosher, not Brad, I wanted to check out :)

    (great line from a great movie, though!)

  • Lost and Confused // April 29, 2008 at 7:55 pm

    Apparently until approximately 1770 temperatures were read in unheated rooms, Bill Bodell. I believe it is discussed in Parker (1992) as a reason for starting their station comparisons in 1772.

  • Hank Roberts // April 29, 2008 at 8:01 pm

    I want to thank Phil. for the link and recommendation:
    http://www.rmets.org/pdf/qj74manley.pdf
    Also essential reading

    And second the nomination. It’s a reminder that we are just a few centuries into the use of science as a way of understanding nature — out of several thousand years of history and tens of thousands of years of prehistory as a species, it’s only been these few hundred years that observation, recordkeeping, and the scientific method have been practiced.

    The link is well worth a serious read, partly just as a reminder that we are only a few generations beyond the very beginning of the work — it’s humbling and inspiring to read what the beginnings were like, so few years ago.

    Small candle. Big darkness. Hope.

  • Phil. // April 29, 2008 at 8:20 pm

    dhogaza

    “I guarantee you that those measurements were not recorded to 1C accuracy.”

    Agreed, some were to the nearest inch as I recall. ;)

  • Frank O'Dwyer // April 29, 2008 at 8:55 pm

    David B. Benson,

    “Yes, it was warmer in the past. The far distant past, say 7600 ypb.”

    Interesting - Hansen put out a paper yesterday which states that we are at or near the peak of the entire Holocene (12,000 yrs).

    The paper is here:
    http://www.columbia.edu/~jeh1/2008/StateOfWild_20080428.pdf

    The reference cited in support is this:
    http://www.pnas.org/cgi/content/full/103/39/14288

  • Hans Erren // April 29, 2008 at 9:00 pm

    A link to the CET data in your post would be useful:
    http://hadobs.metoffice.com/hadcet/data/download.html

  • Dave Andrews // April 29, 2008 at 9:14 pm

    Hey,

    The people in the 17th and 18th Centuries did the best they could with the technology available to them.

    Bit like surface temperature measurements in the 20th/21st Centuries, actually. There are all kinds of problems with those measurements, so why not investigate something a tad more relevant now rather than dumping on people in the past?

  • luminous beauty // April 29, 2008 at 9:21 pm

    Until the 1720s or so, Fahrenheit was calibrating thermometers by sticking them under his armpit.

  • Leif Svalgaard // April 29, 2008 at 9:22 pm

    HB: You said: “It’s now clear that the recent rise in temperature is unlike anything seen in the more accurate part of the CET record. It’s also much warmer than anything seen in the entire CET record”
    Petro said: “Are you blind? In smoothed average the increase around 1700 is 1 degree, and from 1975 it is 1,25 degree.” and you did not disagree with him [something you are quick to do with, e.g. my stuff].

    But let that slide. The real deception comes in when you claim that the ‘red’ [smoothed] data point in 2007 or 2008 is an accurate representation of the 30-year climate centered on 2007, and that therefore the rise from 1975 to 2007 is the largest in the ‘good’ part of the record [1895 to 1945 is larger]. Nobody has ANY idea of what the true climatic 30-year average will be for any of the years since 1993. You can, of course, claim that you fervently believe that the curve will keep going up, and that cannot be denied.

    [Response: I asked you where I made the claim that recent temperatures in the CET "have the largest increase." I dared you to give a straight answer. Rather than simply have the decency to admit that you were mistaken, you chose to point a finger at someone else and criticize me for not disagreeing with him. How childish.

    Consider your statement that "nobody has ANY idea of what the true climatic 30-year average will be for any of the years since 1993." The 30-year average centered on 1993.0 is 0.7178. Of the individual years since 1994, only *one* of them has had an annual average lower than that. Seven of the ten hottest years in the entire CET record have occurred after 1993. So we do in fact have a *very good* idea what the 30-year average will be for 2008: a lot warmer than 1993. For an unbiased, objective, and simple estimate of the long-term trend value in 2008, take the last 30 years of data, fit a straight line, then compute the value of the regression line in 2008. Result: 1.49. It's certainly not a perfect estimate of the 30-year smoothed value in 2008, but it's a good one. Your claim that "nobody has ANY idea" isn't just false, it's idiotic.

    You make that claim because you really do want to "wish away" the warming since 1993. That's the only way you can maintain your belief that the rise from 1895 to 1945 is greater than that from 1975 to the present.]
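
    (A minimal sketch of the estimate described in the response above, using synthetic annual anomalies in place of the actual CET values: fit an ordinary least-squares line to the last 30 years and read it off at the final year.)

```python
import numpy as np

# Synthetic stand-in for the last 30 annual CET anomalies (deg C), 1979-2008.
rng = np.random.default_rng(1)
years = np.arange(1979, 2009)
anomalies = 0.8 + 0.02 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

# Ordinary least-squares straight line through the last 30 years ...
slope, intercept = np.polyfit(years, anomalies, deg=1)

# ... evaluated at the final year: a simple estimate of the long-term value
# there that uses only data already observed.
estimate_2008 = slope * 2008 + intercept
print(f"trend: {10 * slope:+.2f} deg C per decade")
print(f"regression-line value at 2008: {estimate_2008:.2f} deg C")
```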

  • David B. Benson // April 29, 2008 at 9:39 pm

    John Mashey // April 29, 2008 at 3:31 pm — A very good observation regarding anthropogenic influences. Your post, and this thread generally, prompted me to revisit the GISP2 Central Greenland temperature records for the Holocene, this time with 1000 year averages divided into 5 bins of 0.323 K each [I've annotated with some events from prehistory]. The millennia end at the date shown:

    bin04 @ 9100 ybp (wall built around Jericho)
    bin03 @ 8100 ybp (canoe in The Netherlands)
    bin04 @ 7100 ybp (sea level +2 m)
    bin03 @ 6100 ybp (Neolithic culture in Britain)
    bin03 @ 5100 ybp (Ötzi, the Ötztaler glacier man, dies; sea level +1 m)
    bin02 @ 4100 ybp (Stonehenge started and completed.)
    bin04 @ 3100 ybp (Drought in Middle East; first climate wars.)
    bin03 @ 2100 ybp (Influenza epidemics in Greece)
    bin01 @ 850 CE
    bin00 @ 1850 CE

    From 7100 ybp on, the average temperature ought to drop about 0.5 bins per millennium as the orbital forcing effect continues to decline towards the next attempt at a stade in about 20,000 more years. This trend line fits fairly well except for the millennia ending 3100 and 2100 ybp, both well above the trend line. Both millennia figure heavily in the prehistory of the Middle East, with the first empire, that of the Akkadians, destroyed about 4000 ybp and the destruction of Jericho about 3500 ybp.

    The current temperature appears to lie in bin02.
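
    (A rough sketch of the bin analysis described above, with synthetic values standing in for the GISP2 reconstruction; the real data are available from the NOAA Paleoclimatology archive, and the 0.323 K bin width quoted in the comment comes from that data, not from this code.)

```python
import numpy as np

# Synthetic (age in years BP, temperature in deg C) pairs standing in for the
# GISP2 Central Greenland reconstruction (Alley), one sample every 20 years.
rng = np.random.default_rng(3)
age_ybp = np.arange(100, 10100, 20)
temp = -31.0 + 0.5 * np.sin(age_ybp / 1500.0) + rng.normal(0.0, 0.2, age_ybp.size)

# Millennium averages: group samples by the 1000-year interval they fall in.
millennium = (age_ybp - age_ybp.min()) // 1000
means = np.array([temp[millennium == m].mean() for m in np.unique(millennium)])

# Divide the range of millennium means into 5 equal-width bins
# (for the real data the width works out to about 0.323 K).
edges = np.linspace(means.min(), means.max(), 6)
bins = np.clip(np.digitize(means, edges) - 1, 0, 4)   # bin00 .. bin04

for m, (mean, b) in enumerate(zip(means, bins)):
    print(f"millennium {m}: mean {mean:.2f} deg C -> bin{b:02d}")
```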

  • Thomas Huxley // April 29, 2008 at 9:46 pm

    Phil: Re the Manley 1974 paper. Agree with Hank. Inspiring stuff. Must have been then and still is now. Definitely a must read!

  • Dave Andrews // April 29, 2008 at 9:50 pm

    Oops!

    That should be the ” people in the 18th and 19th Centuries”

  • Hank Roberts // April 29, 2008 at 10:06 pm

    Repeat after me:

    England is not the entire world.
    England is not the entire world.
    England is not the entire world.

    Once you get that down, step two:

    Greenwich is an _arbitrary_ zero….

  • David B. Benson // April 29, 2008 at 10:28 pm

    Frank O’Dwyer // April 29, 2008 at 8:55 pm — Thank you for the links! I’ve been using the GISP2 Central Greenland data, where reasonably good connections to extreme global paleotemperatures and modern northern hemisphere temperatures exist. Based on those, and the amount of warming per decade seen so far, it seems that there are still a few decades until mid-Holocene extremes are reached.

    This also appears to agree with the melting of the Greenland ice sheet now and various observations and reconstructions for the mid-Holocene.

    That said, it is clear that even if the temperature does not rise much further, the resulting changes in precipitation are going to be unconducive to agriculture in most regions. This is in good agreement with a wide variety of paleoclimate indicators and archaeological studies of the prehistoric periods.

    The temperature matters less than the precipitation patterns, but these can be determined fairly well from a knowledge of both global and regional temperatures.

  • dhogaza // April 29, 2008 at 10:34 pm

    The people in the 17th and 18th Centuries did the best they could with the technology available to them.

    Bit like surface temperature measurements in the 20th/21st Centuries, actually. There are all kinds of problems with those measurements, so why not investigate something a tad more relevant now rather than dumping on people in the past?

    They’re not being “dumped on”, their measurements are being discounted as being less accurate than today’s.

    And regarding recent data, how can you say “why not investigate something a tad more relevant now ” as though no one does, when NASA has been doing so for years?

    It’s *your* side of the denialist divide that denies that investigation and analysis can allow us to glean useful information from that imperfect 20th and 21st century data.

    That’s why we call folks like you “denialists”, after all …

  • Phil. // April 29, 2008 at 10:47 pm

    Thanks Hank.

    “The link is well worth a serious read, partly just as a reminder that we are only a few generations beyond the very beginning of the work — it’s humbling and inspiring to read what the beginnings were like, so few years ago.”

    I agree. I used to have students in my thermo class read Humphry Davy’s account of his development of the miner’s safety lamp in the Proc. Roy. Soc. It’s a great narrative about how, in response to a visit by mine owners to the Royal Society, he visited the mines, assessed the problem, conducted some experiments on the propagation of flames through metal gauzes and designed a lamp in a very short time.

  • Leif Svalgaard // April 29, 2008 at 11:26 pm

    HB: “So we do in fact have a *very good* idea what the 30-year average will be for 2008: a lot warmer than 1993.”
    This is assuming that the next fifteen years will not have a strong negative anomaly [say of -1C each].

  • TCO // April 29, 2008 at 11:28 pm

    One thing that’s interesting is that if you eliminate the old data (truncate before 1800), it no longer becomes a story of remarkable warming versus a 400-year series, but just versus 200 years. This is still interesting, for instance if the CET is being used by denialists, in that we deny them the argument. However, we can’t ourselves both point to the length of the series as making it remarkable and then truncate most of it.

    Feel free to rap the knuckles for the “we” or the “denialists”. I’m just thinking out loud, not launching a PR campaign.

  • TCO // April 29, 2008 at 11:29 pm

    Oh… and “most” is incorrect, as it’s, I don’t know, 40% that is truncated.

  • David B. Benson // April 29, 2008 at 11:45 pm

    Here is data for a Europe reconstruction:

    http://www.ncdc.noaa.gov/paleo/pubs/casty2007/casty2007.html

    entitled “A European pattern climatology 1766-2000”

  • DocMartyn // April 30, 2008 at 12:33 am

    L.S. I have had a look at the UK data record in the past. What I found most odd was the rate of change in the average temperature of different months. For instance, June gives the smallest rise in temperature over the time period, and March the greatest. What is very puzzling is the bimodal change in the rate of temperature change over the course of the year. It is a fingerprint of something, but of what I don’t know.
    May I suggest you have a look at the monthly averages using a plot similar to the one you used for your yearly average.

  • Leif Svalgaard // April 30, 2008 at 12:45 am

    DocM: “May I suggest you have a look at the monthly averages using a similar plot to that you used for your yearly average.”
    That is a good, constructive idea. So that would be twelve plots, with, of course, a greater error and scatter, but we can always have a look.

  • Alan Woods // April 30, 2008 at 1:10 am

    DocM, Leif. For monthly plots (although not totally up-to-date), see:

    http://www.cru.uea.ac.uk/cru/climon/data/cet/

  • Leif Svalgaard // April 30, 2008 at 1:29 am

    Josh (April 29th, 2008 at 5:11 pm, over at lucia’s) has this to say [coming to a similar conclusion as I, perhaps being as big a fool and idiot]:
    “But we can find the magnitude of the PDO in the temperature data. Notice that at least visually, the slope of the temperature data from about 1910-1945 seems to match very closely with the temperature data from about 1975-present. And similarly, the slope from about 1880-1910 seems very close to the slope from 1945-1975. [In fact, if you take the slope from ~1910 to ~1943, you get a slope of ~1.6*/C, and if you take the slope from ~1975-~2004, you get a slope of ~1.8*/C. Similarly, if you take the slope from 1943 to 1975, you get a slope of 0.07*/C, while taking a visually similar period from 1880-1910, you get a slope of -.65*/C]. It’s interesting that when climatologists see a temperature drop from 1945-1975, they determine it is aerosol cooling, even though there is a very similar 30yr period with a similar downward trend - and it’s 30yrs earlier, no less. Similarly, they see a positive 30yr trend in the early part of the century - and call it natural - and then see another positive 30yr trend - 30yrs later - with a similar trend and call it global warming. To me, when I start seeing the same alternating trends repeating at regular intervals, that’s a sin wave.

    Unfortunately, there aren’t any standard techniques that I’m aware of for determining a best-fit sinusoid. So I wrote a simple genetic program that basically finds it by trial and error. After a few hours, it converges to some very similar sinusoids, the best of which is:

    y = +4.00e-01*sin((x +4.29e+01) * 2pi/335.70) +1.03e-01*sin((x +6.09e+01) * 2pi/62.07)

    where y is the temperature anomaly, x is the date in years (e.g. 1910.5 is June, 1910, etc.). The period of the sin is the part in the denominator (335.7 yrs, etc.). A plot of this sinusoid is here: http://picasaweb.google.com/jg…..7291716402 with the best fit (red), the temp data (blue) and the residual (cyan).

    The 335 yr period is probably bogus because there are only 150yrs of data. But the 62 year period is probably fairly meaningful. And note that the period lines up pretty closely with the 20-30yr PDO. And the magnitude is roughly 0.1 deg (0.2 deg peak-to-peak). And even the phase seems about right from your graph of the PDO – the local maxima/minima are in 1948 and 1964 respectively, probably about 5 years behind the corresponding point in the PDO (which might make sense given heat stored in the oceans, etc.).”

    [Response: Where do I begin?

    The trend in HadCRUT3v from 1880 to 1910 is -0.007 +/- 0.004. The trend from 1943 to 1975 is +0.001 +/- 0.004. Yes, positive, although it's indistinguishable from zero. For GISS, the 1880-1910 trend is -0.001 +/- 0.004 (indistinguishable from zero) and the 1943-1975 trend -0.001 +/- 0.004 (again indistinguishable from zero).

    The statement "when climatologists see a temperature drop from 1945-1975, they determine it is aerosol cooling" indicates both arrogance and ignorance of the facts. First, neither HadCRUT3v nor GISSTEMP indicates any statistically significant cooling during that time. Second, the cooling effect of anthropogenic aerosol emissions is very real, unless you think the legislation enacted in the 1970s to reduce sulfate emissions because of the severity of "acid rain" was just a ploy by congress to support a future global warming fraud, or that sulfate aerosols don't cause climate cooling.

    Climate scientists do *not* call the trend from 1910 to 1943 entirely natural. It's partly due to lower-than-average volcanic activity during that time period and *possibly* an increase in solar output, but it's also partly due to increasing greenhouse gases.

    There are in fact standard techniques for determining a best-fit sinusoid. The best 2-period fit to GISS includes periods of 500 yr and 73 yr, for HadCRUT3v it includes periods 520 yr and 64 yr. Of course the 500/520 yr period is bogus. Guess what? So is the 73/64 yr period. Ever hear of "red noise"?

    EVEN IF we allow that a 73/64/62 yr period is real (which it isn't), and EVEN IF we attribute it to PDO (which I don't), it still doesn't explain the *secular increase* in global temperature from 1880 to 2008. That increase is about 0.7 deg.C, three and a half times as large as the 0.2 deg.C amplitude of the supposed PDO "period." And in case you weren't aware, periodic fluctuations don't show secular increase.

    Your amateurish attempts at time series analysis are an embarrassment. Your eagerness to fall hook, line, and sinker for the first crackpot theory to come along is ...]
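
    (A minimal sketch of one standard best-fit-sinusoid technique, nonlinear least squares via scipy’s curve_fit, run on synthetic trend-plus-red-noise data with no genuine periodic component at all: the fit still happily returns “periods”, which is exactly the red-noise trap described in the response above.)

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, c, a1, p1, T1, a2, p2, T2):
    """Constant offset plus two sinusoids (amplitudes a, phases p, periods T in yr)."""
    return (c
            + a1 * np.sin(2 * np.pi * (t + p1) / T1)
            + a2 * np.sin(2 * np.pi * (t + p2) / T2))

# Synthetic stand-in for annual global anomalies, 1880-2008: a secular trend
# plus AR(1) "red" noise, with NO real periodic component.
rng = np.random.default_rng(2)
t = np.arange(129, dtype=float)          # years since 1880
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.005 * t + noise

# Nonlinear least squares returns "best-fit" periods anyway.
p0 = [0.3, 0.3, 40.0, 300.0, 0.1, 30.0, 60.0]     # initial guesses
params, _ = curve_fit(model, t, y, p0=p0, maxfev=50000)
print("fitted periods (yr):", round(params[3], 1), round(params[6], 1))
```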

  • Leif Svalgaard // April 30, 2008 at 2:12 am

    Alan: as expected, the noise is larger. Maybe cherry-picking the seasons and doing those separately might be illuminating without being too noisy.

  • dhogaza // April 30, 2008 at 2:20 am

    Short summary of the above would seem to be that natural variation exists, therefore CO2 can’t be causing warming. Climate scientists are well aware of the first bit. The second bit doesn’t follow.

    So, Leif, where’s the energy going? Or is the basic physics regarding CO2 and IR wrong? Someone fudged the lab work? What’s your hypothesis?

  • cce // April 30, 2008 at 2:39 am

    I don’t like this rhetoric, and I’m a pretty snide person. It diminishes the argument.

    Here’s the bottom line (according to me):
    1) Tamino’s statements regarding the rate of warming only referred to the “high quality” data.
    2) The rate of warming is close to, but not quite, the fastest of any relevant time period in the series.
    3) The only period of time that shows faster warming is from the least reliable part of the data.
    4) The warmest period in the series is today, and it is warmer by a substantial amount.
    5) If you believe that warming is likely to continue into the future (as I am sure most of us do), then the smoothed data represents a fairly good approximation of warming up to and including March 2008. However, if you are a skeptic who believes that AGW has not been sufficiently established, then the smoothed graph is not proper evidence because it assumes warming will continue. If the point is to debunk the arguments of “skeptics”, a built-in assumption of future warming should not be included.

  • Hank Roberts // April 30, 2008 at 3:08 am

    Leif, is it correct that everything after the open quotation mark there:

    “But we can …

    (all the way to the end at the close quotation mark, end of posting)

    Is your quotation of something from
    “Josh April 29th, 2008 at 5:11 pm
    over at lucia’s” — not your writing?

    And you’re saying what, exactly?

    Seems, well, odd to hang someone else up here to be dissected. No?

    It is a quote from someone else, right?

  • Leif Svalgaard // April 30, 2008 at 3:13 am

    HB and dhog: First about the ~60 year period. This paper is just out:
    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L08715, doi:10.1029/2008GL033611, 2008
    Recent global sea level acceleration started over 200 years ago?
    S. Jevrejeva, et al.
    Abstract
    We present a reconstruction of global sea level (GSL) since 1700 calculated from tide gauge records and analyse the evolution of global sea level acceleration during the past 300 years. We provide observational evidence that sea level acceleration up to the present has been about 0.01 mm/yr2 and appears to have started at the end of the 18th century. Sea level rose by 6 cm during the 19th century and 19 cm in the 20th century. Superimposed on the long-term acceleration are quasi-periodic fluctuations with a period of about 60 years. If the conditions that established the acceleration continue, then sea level will rise 34 cm over the 21st century. Long time constants in oceanic heat content and increased ice sheet melting imply that the latest Intergovernmental Panel on Climate Change (IPCC) estimates of sea level are probably too low.

    It seems that these people could also benefit from a lecture about red noise. I was quoting Josh’s comment because it was also obvious to him [so, Petro, I'm not blind] that there have been similar increases in the [even] recent past. The long-term trend could be due to many things, even have a CO2 contribution [although probably not solar]. What causes what is not established. I have never said that it was not CO2, just that the observational evidence for CO2 is weak, as there have been similar increases not caused by CO2. For all I care, the CO2-related rise could be twice as big as observed 1975-present, but riding on top of a deep decline in temperature, to give us the smallish rise we have seen several times before.

  • Leif Svalgaard // April 30, 2008 at 3:32 am

    Hank, yes, as I clearly indicated, everything was a quote. And if you do not already know, “lucia’s” is here: http://rankexploits.com/musings/ .
    And what is wrong with pointing out that something is so visually obvious that others can see it too? What do the details of the sine-curve fitting matter? And the specific numbers - as long as they are in the ballpark? The conclusion is the same. And, if you post something anywhere, you always expose yourself to being quoted. The dissection bit is the Bulldog’s joy, not mine.

    [Response: You say "I have never said that it was not CO2," but the implication oozes from just about every comment you make. You imply that climate scientists attribute every non-warming to natural causes while calling every warming man-made *just because it's a warming*. The implication is abundantly clear, and it's astoundingly insulting to the folks who are *your peers*. But you say "I was quoting Josh's comment," as though that absolves you of any responsibility for repeating a slander. You try to refute the lack of evidence for a 60-ish year period in 130 years of temperature data by referring to a 60-ish year quasi-period in 300 years of sea level data. You make the insulting snide comment "It seems that these people could also benefit from a lecture about red noise," as though I've denied the existence of a 60-ish year period, quasi or otherwise, in any data set for any physical quantity at all. You criticize me for not contradicting someone else, in an attempt to imply that I'm picking on you. You refute "my claim" that the warming rate in CET over the last 30 years is greater than at any previous time, when I made no such claim at all; the only reference I made to rate is that since 1980 it's 0.05 deg.C/yr and if that continues central England is headed for unprecedented territory.

    As for something being "so visually obvious that others can see it too," that's why we invented statistics.

    Your understanding is feeble, your implications are unprofessional, your attitude is supremely arrogant, and your attempt to avoid responsibility for what you yourself have posted here is despicable. Is that plain enough?]

  • Leif Svalgaard // April 30, 2008 at 4:09 am

    cce: The CET plot was supposed [I presume] to represent observations [although Tamino -truth be told - never uses that word; calls it an 'instrumental record']. One of your points was:
    “5) If you believe that warming is likely to continue into the future (as I am sure most of us do), then the smoothed data represents a fairly good approximation of warming up to and including March 2008″
    That is: you let your belief invent new data [for the 15 years after 2008] to bolster that belief. Not the way to go.

  • Hank Roberts // April 30, 2008 at 5:50 am

    > And what is wrong with pointing out

    Eh! Nothing. Just checking to be sure I could tell who our host is irked at, you or the guy you were quoting (or of course both).

    Tamino, just checking, you’ve read some of these papers?
    http://scholar.google.com/scholar?q=leif+svalgaard

    Leif, do you know Tamino’s professional work?

    Just saying, I see plenty of confusion in what’s said, what’s implied, what’s inferred, and what’s visible only to some but not others.

    Nature’s real. What we think is going on is just what we think, at best.

    If Leif’s trying to get your goat, I hope he’ll stop; if he’s an inveterate goat collector, you have to stop letting him get yours. If, as I do think, both of you are courting Nature and want to know true things, you might find common ground respecting that endeavor, even here. Or even pretending that you do.

    And the horse might learn to sing, too.

  • Hank Roberts // April 30, 2008 at 6:04 am

    One more thought. I’ve seen Leif discouraging the “it’s the Sun, it’s gotta be the Sun” crowd, both in blogs and in print, as here:
    http://www.agu.org/cgi-bin/SFgate/SFgate?&listenv=table&multiple=1&range=1&directget=1&application=fm07&database=%2Fdata%2Fepubs%2Fwais%2Findexes%2Ffm07%2Ffm07&maxhits=200&=%22GC31B-0351%22
    “…. I would suggest that the lack of such secular variation undermines the circumstantial evidence for a “hidden” source of irradiance variability and that there therefore also might be a floor in TSI, such that TSI during Grand Minima would simply be that observed at current solar minima. This obviously has implications for solar forcing of terrestrial climate.”

    The implication being perhaps obvious. Or maybe not?

  • cce // April 30, 2008 at 7:53 am

    The moving average represents observations (or our best attempt), which he included. The smoothed data represents a reasonable extrapolation to the present. If you believe the reason for the warming, then it is likely to continue into the future, and the filter gives us a good approximation over the entire timespan. If you don’t believe the reason, then it doesn’t mean anything.

    i.e. If you want to know how much warming has occurred in Central England since 1975, you’d say “about 1.25 degrees” which would be perfectly reasonable since the filter removes the noise AND extends all the way to the present, which the moving average doesn’t. But it shouldn’t be offered as proof of “unprecedented warming.”

    The most recent 30 year slope is proof of “unprecedented warming,” but only for the more reliable data.

    FWIW, I get slightly different results when I take the 30-year slope than your graph shows. Both the 1978 to 2007 slope and the 1691 to 1720 slope are 0.504 degrees per decade.
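
    (A sketch of the comparison cce describes, with synthetic annual means standing in for the real CET file from the Hadley download page linked earlier: compute the least-squares slope of every 30-year window and rank the windows.)

```python
import numpy as np

# Synthetic stand-in for CET annual means (deg C), 1659-2007.
rng = np.random.default_rng(5)
years = np.arange(1659, 2008)
temps = 9.0 + 0.002 * (years - years[0]) + rng.normal(0.0, 0.6, years.size)

# Least-squares slope (converted to deg C per decade) of every 30-year window.
window = 30
slopes = []
for start in range(years.size - window + 1):
    x = years[start:start + window]
    y = temps[start:start + window]
    slope = np.polyfit(x, y, deg=1)[0]
    slopes.append((x[0], x[-1], 10 * slope))

# Rank the windows, e.g. to compare 1978-2007 against 1691-1720.
steepest = sorted(slopes, key=lambda s: s[2], reverse=True)[:3]
for first, last, slope_per_decade in steepest:
    print(f"{first}-{last}: {slope_per_decade:+.3f} deg C per decade")
```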

  • Wolfgang Flamme // April 30, 2008 at 10:58 am

    Tamino,

    German temperature history also confirms this analysis:
    http://i171.photobucket.com/albums/u304/wflamme/MonthlyAnomaliesGermany.png

    However, man and nature do not experience 30-yr means of anomalies as such, nor monthly means of anomalies as shown in the bottom figure.

    Showing daily or even hourly anomalies (or even absolute temperatures) would make an even more interesting comparison of what climate trends might actually add to CET warming never experienced before, but I didn’t have the data handy.

  • DocMartyn // April 30, 2008 at 11:04 am

    Take a look at the monthly temperature from the UK. There is a bimodal waveform with peaks in March and Sept/Oct. These peaks appear to follow the farming cycle, planting and harvesting. During 1914-1919 and again in 1939-1940 there were two major attempts to bring more land under cultivation. The industrialization of agriculture began in the 1950s; the hedgerows were ripped out and large-scale grain production dates from this time.
    Whatever effect land use has on the UK’s temperature MAY be reflected in this temperature profile. In March the soil is turned over and the countryside becomes brown.
    In the autumn the crops are harvested, and the stubble used to be burnt, leaving black earth.

  • sod // April 30, 2008 at 12:36 pm

    That is: you let your belief invent new data [for the 15 years after 2008] to bolster that belief. Not the way to go.

    Leif, why not stick to ONE topic, for once?

    you are using the typical “topic hopping” technique of the denialist camp.

    nothing that you brought up shed ANY new light on the discussion at hand. NOTHING.

    when Tamino was discussing errors and misleading graphs of modern temperature, sceptics complained that he needed to use “all” available data.

    now that he is looking at the longest record, you bring up the PDO, without any reason whatsoever.

    what topic will you jump to, when he takes apart the PDO hype next week?

    (that is easy. we “are” in a “cool” PDO phase now, but have the HIGHEST temp on record!!!!)

  • Barton Paul Levenson // April 30, 2008 at 12:47 pm

    I am very suspicious of sinusoidal curve fits to almost any data unless you have strong a priori reasons for suspecting a particular physical mechanism. The reason is that sinusoidal curves can be fit to any data at all. All you’re doing in such a case is Fourier-analyzing the data, and like Ptolemy, you get epicycles.

    I did something like this with the spacing ratios of the semimajor axes of the planets from about 1975 to 1978, sending all my fits to the planetology journal, Icarus. I had some really nice fits, too. Some depended on the existence of a second asteroid belt between Saturn and Uranus, extrapolated from the discovery of Chiron. Some depended on the orbital eccentricity of the planets to either side. It took a while for me to realize, essentially through self-education, that what I was doing was not meaningful.

    Interestingly, my crackpot efforts at coming up with a new Bode’s Law contributed to a real scientific advance — Icarus added something to their guidelines stating that they would no longer accept papers dealing with “improved” (their quote marks) versions of Bode’s Law. : )

  • TCO // April 30, 2008 at 1:01 pm

    It sounds like using a smooth (which gets you to present day), versus a moving average, is a bit based on an assumption that warming will continue. There’s a danger of circularity here.

  • TCO // April 30, 2008 at 1:08 pm

    Also, Tammy, I completely agree with your point that the most recent 30 years is notably warmer than any other period in the record. But it’s still interesting if we find large wiggliness in the pre-AGW record, as it implies high natural variability of climate.

    I also (still) caution both my side and your side not to get over-exercised about a single region. We could probably search the globe and find one with a large drop in the last 30 years. Heck, AGW proponents have already said that they think there were high NE Europe temps during the MWP, but that it was a regional issue. So debating one single series’ HS-shapedness will not be a killer blow for either side, even if we somehow verify the true answer and the looks support one side or the other more.

  • Julian Flood // April 30, 2008 at 1:14 pm

    Re HB 29 Apr 2:32

    quote The ten year smoothing shows an extraordinary leap around the years 1700 to about 1730, about 2.2 degrees of warming. Had I been sitting in Warwick in 1730 and been shown a comparison of my own warming and the 20th century’s 1.6 degrees, I might have pointed out that my own time had much more to worry about. But, of course, it was just weather — the thirty year smoothing magics it away.
    (reply) The one-year averages show an extraordinary drop from 1733 to 1740, fully 3.7 deg.C. Had I been in Warwick in 1740 and been ignorant of the difference between weather and climate … But of course, doing any smoothing at all just “magics” it away…
    unquote

    You might not have noticed that we agree on this, which is why I wrote that it was just weather — the thirty-year jump in 1740 _was_ just weather. How could it be anything else when there is no physical theory to explain something which is merely time-localised variation?

    quote It’s important to distinguish between natural fluctuations which don’t persist and unnatural changes which do persist. On a 30-year time scale — the standard for climatological means — your argument fails. unquote

    Argument? I was merely pointing out what the record would have looked like to someone living at a time when the weather was going through a prolonged excursion. Matters nowadays are entirely different — we have an agreement that climate is a thirty year excursion and, more important, we have a theory which explains our current thirty year excursion. Our instrumental record is incomparably better with modern methods of data management. But one can’t help but wonder (a subtext which no doubt explains your rather defensive manner of reply)… what if we’ve got something wrong? What if climate is not a thirty year thing — our hypothetical Warwick resident would have known nothing more than we do? What if it’s 170 years, a full cycle of the wind variation I asked about in my first post?

    BTW, is there any sign of a cycle that long in the record? Your smoothings might show it, but I’m afraid it’s beyond me.

    JF

  • dhogaza // April 30, 2008 at 1:54 pm

    I have never said that it was not CO2, just that the observational evidence for CO2 is weak as there has been similar increases not caused by CO2.

    Looks like our esteemed solar physicist should’ve set aside a few moments in his undergraduate days to take an introductory course in formal logic.

    A logically equivalent statement would be “I never said that Dresden didn’t burn due to firebombing by the allies, I only said that the observational evidence is weak because London burned to the ground in the mid-1600s long before the airplane was invented”.

  • Jon // April 30, 2008 at 5:13 pm

    A logically equivalent statement would be “I never said that Dresden didn’t burn due to firebombing by the allies, I only said that the observational evidence is weak because London burned to the ground in the mid-1600s long before the airplane was invented”.

    I usually go the forest-fires route: lightning causing them in the past has no bearing on arson in the present or future.

    Another favorite of mine is thermonuclear war causing catastrophic cooling, and what bearing the lack of thermonuclear war in the past would have on the reality and/or attribution of nuclear winter.

  • JD // April 30, 2008 at 5:27 pm

    Why are the results so strikingly different to those of this analysis?

    http://www.trevoole.co.uk/Questioning_Climate/_sgg/m2_1.htm

    [Response: Exactly what striking differences are you referring to?]

  • Hank Roberts // April 30, 2008 at 5:47 pm

    No one questions that the “observational evidence” for CO2 is weak; it’s expected to be weak. It’s known to be a weak signal in a noisy background. If it were strong we’d have a situation like we have with chlorofluorocarbons, where …. well, where a large group of industries, nations, and flakes are still in denial, the problem material is still being produced, and the laws, regulations, and financial systems in place to control the problem are being gamed for short-term profit at considerable cost to the environment and the global commons.

    I also wish at this point to note a faint but distinct hint that it’s possible Godwin’s Law may invoke itself here soon, if we’re not very careful.

    Read the FAQ. Seriously:
    http://www.faqs.org/faqs/usenet/legends/godwin/

  • Petro // April 30, 2008 at 7:29 pm

    dhogaza realized:
    ‘A logically equivalent statement would be “I never said that Dresden didn’t burn due to firebombing by the allies, I only said that the observational evidence is weak because London burned to the ground in the mid-1600s long before the airplane was invented”.’

    You are nasty, but still you capture the essence. Keep on going!

  • dhogaza // April 30, 2008 at 7:56 pm

    Apparently, Hank, it’s inevitable when discussing global warming:

    talk with a Libertarian for more than a few hours and he’ll almost certainly bring up Nazis

    Heh heh …

  • trrll // April 30, 2008 at 8:03 pm

    The moving average represents observations (or our best attempt), which he included. The smoothed data represents a reasonable extrapolation to the present. If you believe the reason for the warming, then it is likely to continue into the future, and the filter gives us a good approximation over the entire timespan. If you don’t believe the reason, then it doesn’t mean anything.

    I don’t see how a fit using all of the available data up to the present, and not extending beyond the data, can be referred to as an “extrapolation” by any standard.

    Of course, you can always argue that the trend is “just about” to change, but that is pure wishful thinking.

  • anon // April 30, 2008 at 8:06 pm

    And I never said that there is no evidence God smote the inhabitants of Delhi with cholera last year, because of their sexual laxity. I only pointed out that many people had suffered from cholera at many points in history, and that usually contaminated water had been blamed. However, I agree, it is certainly possible that this time it was different, and that this time it was all down to the Lord and not contaminated water at all….

  • Lost and Confused // April 30, 2008 at 8:10 pm

    There is a degree of absurdity in predicting an occurrence of Godwin’s Law, as it inherently is an invocation of Godwin’s Law.

    This is aptly demonstrated by dhogaza.

  • dhogaza // April 30, 2008 at 8:27 pm

    Uh, L&C, quoting a passage from the “law” (or more precisely, from Hank’s link which is more along the lines of a volume on related case law) is not an invocation of the law.

    But, then again, we already know you have some fundamental issues with understanding English …

  • Hank Roberts // April 30, 2008 at 8:48 pm

    Actually not. Read the FAQ ….

  • Lost and Confused // April 30, 2008 at 9:04 pm

    Ah, indeed not. I had never bothered to read the actual wording of Godwin’s Law, and apparently it was poorly explained to me (In the explanation I was given, the law only stated Nazis or Hitler would be brought up, not that they would be part of a comparison).

    Then again, apparently Godwin’s Law only applies to Usenet discussions. In that sense, it seems nobody here is going by the literal interpretation. I suppose we are all wrong?

  • Hank Roberts // April 30, 2008 at 9:12 pm

    Then again, one thing it says in the FAQ is mentioning Godwin leads to endless discussion of ….

    Oh, never mind. Where were we before Dresden? Forest fires. Lightning.

    The ‘trevoole’ guy’s page just shows straight line fits, and picks one for the same stretch of time around 1700 that our host here has pointed out in one of his analyses:
    http://tamino.files.wordpress.com/2008/04/cetsmooth.jpg?w=500&h=391

    Not inconsistent analysis, just much less analysis of any kind at trevoole’s page than on our host’s here.

  • David B. Benson // April 30, 2008 at 9:25 pm

    Leif Svalgaard // April 30, 2008 at 1:29 am — If you want to look for power in the 60–70 year band, go to the NOAA Paleoclimatology site and pick up, say, the GISP2 temperatures as determined by Alley. Use just the Holocene, maybe starting nicely after the 8.2 kya event. Determine the power spectrum via a standard tool such as the FFT.

    Will you find power in the 60–70 year band? Certainly. There is power at all frequencies. Will you discover a peak in that band? Probably. My bin analysis suggests so. Will it be a statistically significant peak? Very, very unlikely.
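
    (A minimal sketch along those lines, with an AR(1) “red noise” series standing in for the downloaded GISP2 temperatures: the periodogram always has power in the 60–70 year band, which by itself establishes nothing about a significant peak.)

```python
import numpy as np

# Synthetic stand-in: an AR(1) "red noise" series sampled every 20 years over
# ~10,000 years of the Holocene, instead of the real GISP2 temperatures.
rng = np.random.default_rng(4)
dt = 20.0                                  # years per sample
n = 500                                    # ~10,000 years of record
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + rng.normal(0.0, 0.1)

# Power spectrum via the FFT.
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
freq = np.fft.rfftfreq(n, d=dt)            # cycles per year
period = np.full_like(freq, np.inf)
period[1:] = 1.0 / freq[1:]                # period in years (skip the DC term)

# There is power everywhere, including the 60-70 yr band; that alone says
# nothing about statistical significance.
band = (period >= 60.0) & (period <= 70.0)
print(f"total power:            {power[1:].sum():.2f}")
print(f"power in 60-70 yr band: {power[band].sum():.2f}")
```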

  • DocMartyn // April 30, 2008 at 9:46 pm

    Here is the average change in temperature of the individual months in the UK data set (1659 to 2006).
    As you can see, there is a seasonal change that appears to have a human-activity fingerprint.

    http://i179.photobucket.com/albums/w318/DocMartyn/MonthlyTempChangesUK16592006.jpg

  • Hank Roberts // April 30, 2008 at 9:56 pm

    > human activity
    Spring and Fall Equinox
    Periods of fastest change in day length

  • David B. Benson // April 30, 2008 at 10:07 pm

    Yesterday I happened across a strange paper claiming a Dalton-type minimum in sunspot numbers once every 20 or so 10.448-year solar cycles. As this caused me to improve my bin analysis of Holocene Central Greenland temperatures a bit, I tried it.

    No significant peak.

    But more: at least in Central Greenland, only the beginning of the Maunder Minimum (and maybe the end, by doubling the years for a sunspot cycle and halving the number of cycles) can be detected, if you know where to look for it. The central part doesn’t show any temperature deviations, and the Dalton minimum is not to be found:

    bin03 @ 1589 CE
    bin03 @ 1599 CE
    bin03 @ 1610 CE
    bin02 @ 1620 CE
    bin02 @ 1631 CE
    bin03 @ 1641 CE
    bin03 @ 1651 CE
    bin03 @ 1662 CE
    bin03 @ 1672 CE
    bin03 @ 1683 CE
    bin03 @ 1693 CE
    bin03 @ 1704 CE
    bin03 @ 1714 CE
    bin03 @ 1725 CE
    bin03 @ 1735 CE
    bin03 @ 1746 CE
    bin03 @ 1756 CE
    bin03 @ 1766 CE
    bin03 @ 1777 CE
    bin03 @ 1787 CE
    bin03 @ 1798 CE
    bin04 @ 1808 CE
    bin04 @ 1819 CE
    bin04 @ 1829 CE
    bin05 @ 1840 CE
    bin05 @ 1850 CE

    Note the temperature run up since the interval ending in 1801 CE.

  • JD // April 30, 2008 at 10:28 pm

    Hi Hank Roberts,
    “Not inconsistent analysis, just much less analysis of any kind at trevoole’s page than on our host’s here.”

    You will find more analysis if you read on:
    http://www.trevoole.co.uk/Questioning_Climate/_sgg/m2m1_1.htm
    http://www.trevoole.co.uk/Questioning_Climate/_sgg/m2m2_1.htm
    http://www.trevoole.co.uk/Questioning_Climate/_sgg/m2m3_1.htm

    Comments are very welcome.

  • JD // April 30, 2008 at 10:39 pm

    DocMartyn,

    Is that a straight difference between temperatures of each month 1659-2006?

    You might find the monthly trends are quite interesting:
    http://www.trevoole.co.uk/Questioning_Climate/_sgg/m2m2_1.htm
    (link posted earlier)

  • Cthulhu // April 30, 2008 at 11:21 pm

    JD, perhaps you, like this trevoole guy, are not aware that greenhouse enhancement is supposed to raise minimum temperatures more than maximum, i.e. winters will warm more than summers…

    Of course, that the CET record shows this could just be a coincidence. Still, it’s rather ironic when someone thinks they have found a problem with the CO2-warming theory when in actual fact this looks more like confirmation than a problem.

  • Cthulhu // April 30, 2008 at 11:24 pm

    Sorry to use a media article, but being 17 years old this article shows no one is suddenly making up the warmer-winters claim post hoc:
    http://www.newscientist.com/article/mg12917520.800-warmer-winters-fit-greenhouse-model-.html

  • george // April 30, 2008 at 11:31 pm

    CCE says

    “The smoothed data represents a reasonable extrapolation to the present. If you believe the reason for the warming, then it is likely to continue into the future, and the filter gives us a good approximation over the entire timespan. If you don’t believe the reason, then it doesn’t mean anything.”

    “extrapolation to the present”??!

    There’s a phrase I have not seen before.

    When I look at the 30 year smoothed data, I get about 1.5C for the last year and about 1.0C for 2000.

    When I look at the 1-year average, I see about 1.3C for the last year, but that follows a year which was about 1.6C. If I average those two together, I get 1.45C, very close to the 1.5C value given by the smoothed graph.

    When I look at the 1-year average, I see about 0.7C for 2000, but that follows a year which was about 1.4C and the next year is also about 1.4C. If I average those 3 together, I get 1.16C, slightly bigger than the 1.0C value given by the smoothed graph.

    Based on an admittedly crude comparison of the smoothed data to the 1-year average data near the end of the time interval, I would say that the smoothing filter appears to be doing what it is supposed to do — smoothing (imagine that).

    A smoothing filter is supposed to “smooth” which is “interpolation”, not “extrapolation”. There is a difference — albeit just a slight one :)

    Not only is the idea that a smoothing filter “extrapolates” complete bunk, but so is the idea that it necessarily “assumes future warming.”

  • Zeke // May 1, 2008 at 12:26 am

    Tamino: It might be worthwhile to write a post on the relative magnitude of PDO cycles, and the upcoming Nature article that is generating a lot of attention in the usual quarters (e.g. http://www.telegraph.co.uk/earth/main.jhtml?xml=/earth/2008/04/30/eaclimate130.xml and http://wattsupwiththat.wordpress.com/2008/04/30/a-look-at-hadcrut-global-temps-and-pdo-with-hodrick-prescott-filtering-applied/ )

  • george // May 1, 2008 at 1:44 am

    BPL said:
    “I am very suspicious of sinusoidal curve fits”.

    Me too. After all, there are three kinds of fits: fits, damned fits and sine curve fits…

    Icarus added something to their guidelines stating that they would no longer accept papers dealing with “improved” (their quote marks) versions of Bode’s Law. : )

    Perhaps it is because you left out the word “new” — “NEW & IMPROVED” (Caps are important too, when trying to make a point)

    Who in his right mind would reject the latter claim?

    “NEW & IMPROVED” is patentable. Just plain “improved” is not.

  • cce // May 1, 2008 at 2:23 am

    George,

    “30 year smoothed data” requires 30 years of data. Smoothing the last 15 years, no matter how good the method, is an extrapolation based on partial data and the assumption of continued warming. The smoothed value of 2007 ultimately depends on data from years that have not happened. If you doubt this, perhaps Tamino can run the filter on the CET data again, this time stopping in 1736. Then we can compare that graph to this one:
    http://tamino.files.wordpress.com/2008/04/cetsmooth.jpg

    They will be quite different, even though they contain the same data to 1736, and use the same filter. The difference is that one knows the temperature of the “future” (which was colder than the years leading up to 1736).
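
    (The comparison cce proposes is easy to sketch with synthetic data and a simple Gaussian smoother standing in for whatever filter actually produced the graph: smooth the full series, smooth a copy truncated some years earlier, and compare the smoothed values at the truncation year. They differ, because near the end of the record the smoothed value depends on data not yet observed.)

```python
import numpy as np

def gaussian_smooth(y, sigma=10.0):
    """Smooth y with a Gaussian kernel, renormalizing near the ends."""
    n = len(y)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((idx - i) / sigma) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

# Synthetic annual series, 1659-2007, with extra warming in the final decades.
rng = np.random.default_rng(6)
years = np.arange(1659, 2008)
temps = 9.0 + rng.normal(0.0, 0.5, years.size)
temps[years >= 1975] += 0.03 * (years[years >= 1975] - 1975)

# Smooth the full record, then smooth a record truncated at 1990.
full = gaussian_smooth(temps)
truncated = gaussian_smooth(temps[years <= 1990])

i = np.where(years == 1990)[0][0]
print(f"smoothed value at 1990, full series:      {full[i]:.2f}")
print(f"smoothed value at 1990, truncated series: {truncated[-1]:.2f}")
# The two differ: the smoothed endpoint depends on data that came later.
```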

Leave a Comment