Open Mind

Exclamation Points !!!

February 3, 2008 · 100 Comments

A reader recently linked to a post by Anthony Watts which reports an analysis by Joseph D’Aleo. The work seeks to establish that the correlation between temperature in the U.S. (as indicated by the latest version of USHCN data) and CO2 levels is not as strong as that between U.S. temperature and other factors, namely total solar irradiance (TSI) as estimated by Hoyt & Schatten, and a combination of oceanic temperature indices, specifically the Atlantic Multidecadal Oscillation (AMO) and the Pacific Decadal Oscillation (PDO).


The overall results are summarized in a table near the end:

Factor                            Years       Correlation (Pearson)   Strength (R-squared)
Carbon Dioxide                    1895-2007    0.66                   0.43
Total Solar Irradiance            1900-2004    0.76                   0.57
Ocean Warming Index (PDO + AMO)   1900-2007    0.92                   0.85
Carbon Dioxide, Last Decade       1998-2007   -0.14                   0.02
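Note that the "Correlation Strength" column is just the square of the Pearson coefficient, as a quick check confirms (small differences are due to rounding of the reported r values):

```python
# The table's r-squared column is the square of its Pearson-coefficient
# column (up to rounding of the reported values).
table = [
    ("Carbon Dioxide",          0.66, 0.43),
    ("Total Solar Irradiance",  0.76, 0.57),
    ("Ocean Warming Index",     0.92, 0.85),
    ("CO2, Last Decade",       -0.14, 0.02),
]
for name, r, r2 in table:
    print(f"{name:<24} r = {r:+.2f}  r^2 = {r * r:.2f}  (reported: {r2:.2f})")
```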

In case you’re wondering about the title of this post, it’s based on the fact that in his text D’Aleo chooses to attach three exclamation points to his strongest correlation:


This was the jackpot correlation with the highest value of r-squared (0.85!!!).

The purpose of the work seems clear: not merely to imply that factors other than greenhouse gases are driving temperature change, but that in fact the impact of CO2 is blown all out of proportion by mainstream climate science. Alas, those who read this work hoping to learn something — anything — about correlation between various factors and global temperature are in for quite a disappointment. D’Aleo focuses on correlations with temperature in the U.S. only.

But modern climate science doesn’t support the idea that all parts of the globe will warm equally under the influence of greenhouse gases; in fact, it contradicts that idea. It’s the global average temperature that will show the signs of human influence unambiguously; we expect strong regional differences in temperature change. Hence one wonders why D’Aleo chooses to focus on a region, and (compared to the globe) a rather small region at that. He justifies his selection by going to some lengths to suggest that the global temperature record is unreliable, but his sources and reasons for discrediting the global record are nothing more than the usual static from the usual suspects. Regardless, it’s next to impossible to take him seriously when he makes such strong and clear implications about global warming based on less than 2% of the globe.

The limitation to U.S. temperatures isn’t the only highly dubious aspect of D’Aleo’s data selection. Looking for correlation with total solar irradiance, he uses the reconstruction of Hoyt & Schatten. But in my opinion, of the available reconstructions that of Hoyt & Schatten is perhaps the least reliable — in fact it’s simply no longer credible. Much more believable is the reconstruction of Lean, and also credible is the more recent reconstruction of Svalgaard — both of which differ dramatically from that of Hoyt & Schatten. In fact, we were fortunate enough to have Dr. Svalgaard comment here on TSI reconstructions, and he kindly included a link to his (and other) TSI reconstructions. It’s notable that none of the others agrees with the long-term changes suggested by the Hoyt & Schatten reconstruction, but the fact that it’s the only one that correlates with D’Aleo’s choice of temperature data seems to explain his selection.

D’Aleo doesn’t just make questionable choices of data series; he then modifies all the data by taking 11-year moving averages. The stated purpose of this transformation is


For each, I will do an 11 year running mean to eliminate any influence of the 11 year solar cycle.

It’s perfectly valid to smooth a data series for analysis, and a moving average is a good smoothing method. But it has a profound effect on the results of correlation analysis, which must be compensated for when interpreting the results. D’Aleo doesn’t mention this issue at all; in fact he seems to be completely unaware of it. The squared correlations he reports imply astounding agreement between data sets (0.85!!!), when in fact a large part of the correlation is due to the fact that all the series are smoothed in the same way on the same time scale. Under such conditions, the different series actually conspire to inflate correlations dramatically. For the data series used in this work, 11-point moving-average smoothing serves to magnify the impact of noise on the squared correlation coefficient (D’Aleo’s favorite statistic for characterizing correlations) by a factor of more than 20. That deserves an exclamation point!

I could at this point give a sequence of equations to show why this is so, but it’s probably more illuminating to show by example. So I generated a pair of pure white-noise time series. I didn’t generate lots and lots until I found a pair that gave the desired result; I only generated one pair. Here’s the correlation between the two series:

[Figure: scatter plot of the correlation between the two white-noise series (noisecor1.jpg)]

Note that the squared correlation is very low, only 0.004. The expected value, from a pure noise series, is 0 +/- 0.02, so this is well within the expected range. Then I computed 11-point moving averages; here’s the correlation between the series of 11-point moving averages:

[Figure: correlation between the 11-point moving averages of the two series (noisecor2.jpg)]

Now the squared correlation coefficient is over 0.2. That too deserves an exclamation point! If these were white-noise series, this result would be highly significant. But they’re not white noise — a moving average of white noise yields a red-noise series, with strong autocorrelation. And that greatly exaggerates the computed correlation between them.
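The effect is easy to reproduce. Below is a minimal sketch (freshly generated white noise, not the exact series shown above), averaging over many trials so the inflation shows up in the mean squared correlation rather than in any single lucky pair. For pure white noise, 11-point smoothing inflates the expected r-squared several-fold; for series that already have their own autocorrelation, as in D’Aleo’s analysis, the inflation is larger still.

```python
import numpy as np

def moving_average(x, window=11):
    """Running mean over a sliding window, like an 11-point moving average."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def r_squared(x, y):
    """Squared Pearson correlation coefficient."""
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(0)
n, trials, window = 1000, 200, 11

raw = np.empty(trials)
smoothed = np.empty(trials)
for i in range(trials):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)   # independent of a by construction
    raw[i] = r_squared(a, b)
    smoothed[i] = r_squared(moving_average(a, window),
                            moving_average(b, window))

print(f"mean r^2, raw series:      {raw.mean():.4f}")
print(f"mean r^2, smoothed series: {smoothed.mean():.4f}")
print(f"inflation factor:          {smoothed.mean() / raw.mean():.1f}x")
```

The series share nothing, yet smoothing alone multiplies the typical squared correlation many times over.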

The real pity is that smoothing isn’t even needed, because the “influence of the 11 year solar cycle” is barely detectable, if at all, in any of the series except TSI. It’s questionable whether it’s even there, and it certainly doesn’t need to be removed by a procedure which exaggerates correlations, a fact which isn’t even mentioned, let alone accounted for. Attach another well-deserved exclamation point!

The strongest correlation D’Aleo computes — the one he calls the “jackpot” — is U.S. temperature with AMO and PDO. But AMO and PDO are themselves temperature indices, for the Atlantic and Pacific oceans; and not for the entire oceans, but for the north Atlantic and north Pacific. So he’s effectively shown that U.S. temperature is correlated with temperature in its neighboring ocean basins. This amounts to showing that temperature here is correlated with temperature nearby, which isn’t any great revelation; it’s simply a demonstration of the well-known and well-established (by mainstream climate scientists) fact of teleconnection: temperature changes in nearby areas are strongly correlated, out to far greater distances than most would suspect. And if the analysis is done correctly, without the moving-average step which so greatly inflates the correlation, the impressive 0.85!!! becomes a not-so-many-exclamation-points 0.24 (and that regression, by the way, uses two predictor variables while the others use only one). Of course, we already knew they’d be correlated by virtue of teleconnection.

The entire premise of this article is that the correlation between temperature and CO2 concentration is poor by comparison with other variables. But the correlation with TSI is a sham, and the correlation with ocean temperature indices is nothing more than teleconnection. Setting all that aside, the premise itself carries an implication which amounts to a misrepresentation of climate science: that if temperature doesn’t faithfully follow CO2 levels, then something’s wrong with our understanding of global warming. This is just a straw-man argument; climate science tells us that temperature should follow total climate forcing, with a sizeable amount of natural noise in the system. Climate forcing includes the sun, but the changes in solar forcing are so small that it’s difficult even to detect the “influence of the 11 year solar cycle”; and when you look for correlation of temperature (even restricting to just the U.S.) with a credible reconstruction of TSI, D’Aleo’s claim falls apart. Climate forcing also includes man-made sulfate aerosols (prevalent in the post-WW2 era), volcanic aerosols, black carbon, water vapor changes, many other greenhouse gases, albedo changes, etc.; to expect that temperature would not show their influence — which will of course reduce the correlation with CO2 levels — is a fool’s proposition.

The right approach is to test correlation between global temperature and climate forcing. Lo and behold, we even know the actual physics behind it! Of course, for this correlation to be impressive we have to include in climate forcing the impact of — you guessed it — greenhouse gases, including CO2.

In a final coup de grâce of folly, D’Aleo shows that the correlation between CO2 levels and global temperature since 1998 is minuscule, even negative. He finally gets around to global temperature, but when he does he examines a whopping 10 whole years! Of course the starting point is 1998, the year of the huge El Niño. This final “test” utterly ignores the noise level in global temperature. The signal-to-noise ratio is such that the noise swamps the signal on such short time scales; but on longer time scales, we ignore the inexorable signal at our peril. Not only does D’Aleo desperately need to read this post, he needs to understand it.

In the end, what has D’Aleo really accomplished? He has managed to:

  • Cast doubt on global warming by seeking correlations with temperature over an area less than 2% of the globe.
  • Inflate all correlations way beyond their meaningful values by taking moving averages. He seems utterly unaware of the consequences.
  • Show correlation with an estimate of total solar irradiance which is no longer a plausible choice (and this, after expending considerable effort to discredit global temperature data).
  • Show correlation between temperature on a land area and temperature on its neighboring ocean areas (surprise!!!).
  • Join the “cherry-pick 1998” club.
  • Seriously mischaracterize the expected behavior of temperature due to man-made global warming.
In spite of D’Aleo’s use of three exclamation points to emphasize the “jackpot correlation” — we are not impressed.

    Categories: Global Warming · climate change


    • chriscolose // February 3, 2008 at 5:58 pm

      Very nice post, and much more complete than my criticism. I have also been challenging solar enthusiasts to quantify the solar forcing as an RF in W/m2, rather than give me lines that go up and down. If D’Aleo can find support for between 1 and 2 W/m2 (or even less) then I suppose we can talk. Someone will also need to remind me how internal variability is going to show up as a global mean temperature fluctuation on timescales of decades to centuries, when you primarily get a redistribution of heat.

      and they scratched their heads when I called the denial work “sloppy” over at their forums.

      Very nice work.

    • JCH // February 3, 2008 at 7:12 pm

      Tamino, did you secretly film team denial’s red-zone walk through???

    • EliRabett // February 3, 2008 at 7:25 pm

      As I recall, seeking correlations between smoothed data is a fool’s errand. One of the first things you learn.

    • Hank Roberts // February 3, 2008 at 8:18 pm

      Astonishing!!! correlation found:
      http://www.sourcewatch.org/index.php?title=ICECAP

    • George // February 3, 2008 at 10:30 pm

      Dude, you are good!!!

      No wonder they pay you the big bucks to debunk this stuff!!!!!

    • John Cross // February 3, 2008 at 10:47 pm

      I came across this argument about a week ago so I ran the correlation between annual CO2 and annual global anomaly. I am not saying it means anything, but I found a correlation of 0.79!! (obviously worth only 2 exclamation points).

      Best,
      John

    • Zeke // February 3, 2008 at 11:12 pm

      This brings to mind one of Gavin’s old posts, Fun with Correlations!: http://www.realclimate.org/index.php/archives/2007/05/fun-with-correlations/

      And they didn’t even have to use any smoothing techniques!

    • Ian // February 4, 2008 at 12:19 am

      Very interesting post, thanks Tamino. For anyone out there not familiar with scientific peer review, this post gives you a taste of the process: having critical experts examine your work for flaws. Hence the frequent call for passing peer review as a first hurdle for taking data analysis and arguments seriously.

    • jl // February 4, 2008 at 2:06 am

      Tamino thank you for the post.
      one question, what do the correlations look like if you use global temperature ??

    • Steve Bloom // February 4, 2008 at 2:28 am

      I hadn’t visited Watts’ blog for a while, but having just done so I see that the strident denialism has been cranked up a notch or two. Red meat for the masses, I suppose. A similar trend seems to be underway at CA.

    • Hank Roberts // February 4, 2008 at 3:39 am

      Good advice from Dr. Curry:

      http://dotearth.blogs.nytimes.com/2008/01/24/earth-scientists-express-rising-concern-over-warming/#comment-10491
      Click link for full text

      ——-excerpt——-

      “I just spotted this thread, and I am astonished to see such tilting at windmills from so many major figures from both sides of this debate.

      I am a climate researcher with over 140 refereed journal publications, but some of my more relevant statements on the issue have been presented in congressional testimony and posts at climateaudit. For the record, I view the IPCC 4th Assessment Report to be the best available statement of the state of climate science at the time it was written. Policy makers do not have a better document or analysis from which to work with in grappling with the myriad of issues associated with climate change…..
      ….
      … The overall list of names collected by Mr. Morano frankly does not have a high level of credibility. But rather than pick on individual scientists on the list with meager credentials, i am far more concerned by the appearance of a number of international scientists on this list that likely would not want to be a part of this list, and don’t know about it simply because they don’t pay attention to the political shenanigans over here on the subject of global warming

      The statements by AGU and AMS are in yet a different class. They are written by a very small subset of the membership that are in someway appointed by elected officers of the society. No rigorous independent assessment of the literature is undertaken in these statements, and typically no journal articles are referenced. These statements undergo some sort of minimal review by other members of the society. The wording in these statements is not at all careful (unlike the IPCC and NAS/NRC assessments). … I can find no rationale for these types of statements being made by the AGU, AMS and other organizations. If the membership of these organizations “voted” on these statements, I suspect that a small majority of AMS members would vote no, and a large majority of AGU voters would vote yes. What does this mean? Not much, frankly.

      At the end of the day, it is the thorough assessment of scientific arguments, high quality data, and sophisticated climate model simulations, and the assessment of their uncertainties, that provides the scientific knowledge base upon which policy makers should consider in their decision making.

      Further consensus statements and statements by professional organizations don’t help things; we need to do more and better science, and more extensive assessments. We are wasting time attacking each other’s credentials and motives. Andy Revkin is right in desiring to switch the focus to policy, management and technology solutions to these complex issues.

      — Posted by Judith Curry

      ———end excerpt———

      A call to get real and do the science.

      I think it’s very good advice. The distractions and the nonsense are taking up way too much attention from people who could be helping work that ought to be the priority.

    • Evan Jones // February 4, 2008 at 4:12 am

      But I don’t get it. What’s the objection to a few extraneous exclamation points? Is that really such a Bad Thing?

      And yeah, the point was to establish a better correlation than CO2. As Obama said (refreshingly), “That was the point”. And?

      Yes, the correlation is with US and not World temps. (The latest version. They do keep changing, don’t they?) And, yes, the continental US is only 6% of the world’s land surface. But so far as I know, Canada’s record pretty closely matches the US record. And the US record is not all that unlike the rest of the world: Up by the end of the 30’s (not quite as high as the US), then a bit of a dip for a decade or three-to-four, then an upward trend.

      So rather than being automatically dismissive, why not just run the PDO/AMO, CO2, and TSI with the world record and see what happens?

      If you don’t like the TSI curve he’s going with, why not just use the one that you deem better? You say it doesn’t fit but not by how much or whether it fits better than CO2 or not. (I am not a great subscriber to TSI warming myself, FWIW. But I’m willing to look at it.) As for the 11-year averaging, the sunspot cycle only varies by 0.1C, so I don’t really see the objection.

      And what if the correlation with PDO/AMO is merely a “teleconnection”? Is that not the standard starting point for further investigation? Other than the supposed impropriety of his exclamation points, Mr. D’Aleo didn’t say he was proving anything. Just that he found a better “teleconnection”.

      And maybe Mr. D’Aleo missed a trick. If, as Mr. Watts has shown, based on his USHCN microsite examinations, there may have been a bias in temperature measurements in the last two decades and the recent rise is more modest than measured (say closer to the uncooked satellite readings), the “teleconnection” may even be closer than he himself has measured.

      And yes, the US is only 6% of land surface. But the USHCN system, as godawful as it is, is a rare gem when compared with that of the rest of the world–China, Russia, Brazil, even Western Europe providing shocking examples of microsite bias. Therefore it is entirely possible, if not probable, that the recent measurements are suspect in the same direction and for the same reasons?

      Mr. D’Aleo is not claiming to have found the holy grail. He is claiming to have found a coincidence. He’s put his findings out there for all to see and I’m sure he doesn’t object to review and criticism.

      What seems to touch your heart most about all this seems to be about a couple of poor stray exclamation points jest a lookin’ fer a home? Is that a crime, all of a sudden?

      Mr. D’Aleo may be wrong. On the issue of TSI, I suspect he is. But I think he may well have something to say on the PDO/AMO “coincidence”. And your objections are fine. So why not just check it out by comparing the correlation with the graphs you think he should be checking, and not be so prone to reject out of hand.

      And, while you’re at it, you might try reducing the temperature increase from 1979 to 2001 by, say, about half and see what gives:

      After all, if you average the CRN violations and the effects that NOAA says they should have with the percentage of observed stations in each category and total the results, the results, so far (with 40% of USHCN net observed) are a striking => 2 degree K spurious warming effect. Preliminary, yes. But to be ignored?

      So why not see if that makes an even better fit? Don’t be mad—be, well, curious.

      If not, then fine, then not. Sic semper scientia. And if so, well, that would be grounds for further investigation!!!

      [Response: I see you're one of those who desperately wants us to believe that D'Aleo is just investigating correlations in order to extend our knowledge. I'm sure he wants us to believe that too.

      D'Aleo's real purpose is crystal clear: to discredit anthropogenic global warming. To do so he's willing to use irrelevant correlations, outdated data sets, analysis methods which distort correlations, time spans way too short to draw any meaningful conclusion, and misrepresentation of climate science. But when the folly and uselessness of his analysis is laid bare, you (and I'm sure others will show up too) cry foul. Enjoy the bitterness of sour grapes.

      His "research" is nothing more than a first-magnitude swift-boating, as are his (and your and Mr. Watts') disparaging remarks about the surface temperature record. As for doing the *right* thing scientifically, there are lots of people doing exactly that: at NASA GISS, HadCRU, and through the IPCC.

      I'll agree with you this far: you don't get it.]

    • Hank Roberts // February 4, 2008 at 5:32 am

      Here’s what the correlators are getting excited about now. No mechanism, as far as I know.

      http://www.geomag.bgs.ac.uk/earthmag.html#_Toc2075558
      http://www.geomag.bgs.ac.uk/images/image018.jpg

    • Evan Jones // February 4, 2008 at 5:53 am

      “I see you’re one of those who desperately wants us to believe that D’Aleo is just investigating correlations in order to extend our knowledge. I’m sure he wants us to believe that too.”

      But I don’t think his motive matters. Is he right or is he wrong? That is the question.

      “D’Aleo’s real purpose is crystal clear: to discredit anthropogenic global warming.”

      Sure. But that’s scientific method. Falsification.

      “To do so he’s willing to use irrelevant correlations, outdated data sets, analysis methods which distort correlations, time spans way too short to draw any meaningful conclusion, and misrepresentation of climate science. ”

      I think it’s worth the formality of running the updated data sets. And examining the correlation. I know that a correlation is not proof. Lack of one may be disproof, however.

      “Enjoy the bitterness of sour grapes.”

      Hey, if he’s right, he’s right. If he’s wrong, he’s wrong. I’m easy. And I’m not particularly concerned with motives. Scientific method protects us from motive. E.E. “Doc” Smith’s Arisians don’t need scientific method: they’re above it. Scientific method is for us humans. Us and our “motives”.

      “His “research” is nothing more than a first-magnitude swift-boating,”

      Fine. If so, then that fact will out.

      “as are his (and your and Mr. Watts’) disparaging remarks about the surface temperature record.”

      Disparage! Disparage! #B^l

      Now ignore the disparagement and run the numbers for the CRN violations.

      482 stations measured so far.

      CRN Ratings, CRN effects (CRN Handbook):
      CRN-1: 4% (no bias)
      CRN-2: 9% (no bias)
      CRN-3: 17% (1C warm bias)
      CRN-4: 56% (=>2C warm bias)
      CRN-5: 14% (=>5C warm bias)

      Result: =>2.0C

      Most of this is suburban/exurban creep and the switchover to MMTS occurring since 1980. So it’s not a mere offset–it’s a delta.

      This does NOT include UHI. Many of the worst violations involve rural stations. If UHI is being lowballed (as per LaDochy, 2007), it’s worse.

      “As for doing the *right* thing scientifically, there are lots of people doing exactly that: at NASA GISS, HadCRU, and through the IPCC.”

      Sure. But if they are operating off badly compromised data, their conclusions are bound to be affected. I am NOT saying AGW is a “fraud”. I say the data needs due diligence.

      Confirming the site measurements would be relatively CHEAP, EASY, and QUICK. So why not just do it? Let the scientific chips fall where they may.

    • chriscolose // February 4, 2008 at 6:01 am

      Two points for the first person who can see the problems in the latest post over there on the Arctic. I probably won’t comment there, it would just get lost in the comments.

    • Evan Jones // February 4, 2008 at 6:23 am

      Volume vs. coverage.

      However, coverage is the vital factor because that’s the one that affects the albedo and ties into positive feedback.

      Half a point?

    • sod // February 4, 2008 at 7:07 am

      482 stations measured so far.

      CRN Ratings, CRN effects (CRN Handbook):
      CRN-1: 4% (no bias)
      CRN-2: 9% (no bias)
      CRN-3: 17% (1C warm bias)
      CRN-4: 56% (=>2C warm bias)
      CRN-5: 14% (=>5C warm bias)

      Result: =>2.0C

      Evan Jones, this conclusion clearly demonstrates that you have not the slightest clue about this subject.

      the error is a POTENTIAL error, not a permanent one. you can’t calculate an average as you did!

      we had a nice discussion about it, over at CA.
      my argument was that most people using the Watts version of the MeteoFrance error don’t understand it at all. you are living proof!

    • sod // February 4, 2008 at 7:10 am

      here is a quote and a link to what Hu said about it:

      The Besse83 page is based on Leroy’s paper, but like CRN omits the question marks after the error estimates that are present in the original. I was originally planning to translate the Besse page, since it is quasi-official and all I could find, but then Leroy sent me his original, which is in fact a little different. Perhaps CRN is based on the Besse version, since it confidently omits the question marks.

      Basically, Leroy is saying that the error in a Class 5 site may be as big as 5 dC or even higher, not that it is certainly at least 5 dC. This is just a judgement call, not a documented measurement.

      http://www.climateaudit.org/?p=2641#comment-205498

    • Anthony Watts // February 4, 2008 at 7:45 am

      SOD,

      It’s not the “watts” version; no such label has ever been applied. Please do not use that label.

      It’s the NOAA version, as used to define their own CRN network site-quality evaluation. In the absence of any better system of site-quality ratings that anyone has proposed, this is what is used.

      Note the signatories that signed off on it. See the “approval page - ii “:

      Thomas Karl 1/6/03 director of NCDC.

      Original document here: http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf

      I figure if it’s good enough for him (Karl) to put his signature to, and for use in NOAA’s new flagship surface measurement system, it’s good enough for this purpose. If you want to quibble over it, that’s fine, but I’m sticking with the NOAA definition since it has been endorsed by Karl and others at NOAA.

      I think it’s a good rating system, and I think they did a splendid job setting up the new CRN. I’ve spoken with the director of the CRN, Dr. Bruce Baker; he’s seen my work, and even asked for a copy of what I presented at Pielke’s conference in August. Karl’s seen it, Petersen’s seen it (so I’m told by Baker). They had no complaints whatsoever.

      I haven’t done any analysis such as Mr. Jones did. The ratings are used to define the site quality on an easy to interpret scale, and to tabulate a census of stations. Later, when the majority of the network is surveyed, and geographic distribution is better, it will be used to test ranges of stations with those quality assignments.

    • fred // February 4, 2008 at 8:38 am

      Two things I don’t grasp about the surface stations stuff.

      One, people keep saying the US is only 2% of the earth’s surface. Yes. But how much of the earth’s surface is included after you add in the surface station data from the rest of the LAND surface? It’s not like we are comparing 2% versus 100%, is it? Or have I got this wrong? If you add in the ROW, do you add in surface measurements from the oceans as well?

      Two, I just cannot understand the hostility to the surface stations project. Surely we do want our surface station data to be measured by stations which conform to the criteria? Otherwise, why have criteria in the first place? Are people seriously saying that siting and station histories simply don’t matter and should not be investigated? Fine: learn the basic principles of quality management, and rewrite the specs. Specs and procedures must conform. If you can’t change the procedures, at least make the spec describe what you are really doing. Anything else is fraud and you’ll pay for it sooner or later in rejects and rework.

      As for Anthony Watts’ motivation, who cares? The issue is not about that, but about the stations.

    • dhogaza // February 4, 2008 at 9:31 am

      Mr. D’Aleo is not claiming to have found the holy grail. He is claiming to have found a coincidence.

      His coincidence regarding nearby ocean temperatures is equivalent to the “coincidence” responsible for the fact that when I turn on the oven, as it gets hot, my stovetop warms up, too.

      And his suggestion is the equivalent to my claiming that it’s the stovetop’s warming, not the electric elements inside the oven, that causes the oven to get hot.

    • Hank Roberts // February 4, 2008 at 12:07 pm

      > is he right or wrong?

      No. That’s the problem.

      He’s not even wrong.

    • Barton Paul Levenson // February 4, 2008 at 12:54 pm

      What’s more, as I think I’ve demonstrated elsewhere, you can have a pretty big mean error on individual stations and still get a very small mean error for all of them combined. This is elementary statistical thinking.
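      (A quick numerical illustration of this point, a sketch with made-up numbers: if each of N stations has a large error that is random in sign, the error of the network mean shrinks roughly like 1/sqrt(N). A systematic bias shared across stations would not average out this way.)

```python
import numpy as np

# Hypothetical illustration: 482 stations, each with a large random
# measurement error (standard deviation 2 degrees C, no systematic sign).
rng = np.random.default_rng(42)
n_stations = 482
station_errors = rng.normal(loc=0.0, scale=2.0, size=n_stations)

# Individual station errors are large...
typical_station_error = np.std(station_errors)

# ...but the error of the network-wide mean is far smaller,
# shrinking roughly like sigma / sqrt(N).
network_mean_error = station_errors.mean()

print(f"typical single-station error: {typical_station_error:.2f} C")
print(f"error of the network mean:    {network_mean_error:.2f} C")
```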

    • Joe D'Aleo // February 4, 2008 at 1:07 pm

      Re Tamino et al. I am not at all impressed. As usual you said a lot and yet said nothing.

      Your point with white noise isn’t valid, because the white noise is truly random; finding a signal in it is like finding pictures in clouds: the human mind will pop out something.

      [Response: Are you really that naive, or are you just hoping to persuade those who are?

      Noise (red, white, or blue) affects statistical estimates of everything, making them far more uncertain, and increasing the expected value of the squared correlation. Moving averages, in this case, magnify the effect of that noise by a factor of more than 20. Just a fact, one you didn't include and didn't mention.]

      Smoothing to remove noise from a SIGNAL is a tried and true practice. It is done in electronics all the time (a low pass filter for example to remove hiss aka Dolby noise reduction and other techniques). Here is a reference from Stanford. All you’ve done is improved the signal to noise ratio…there’s no “gotcha”.

      [Response: Smoothing also increases (in this case, severely) the autocorrelation of the data series. Just a fact. Here's what I suspect: either you totally forgot to consider this at all, which is quite embarrassing for you, or you *didn't even know it*, which is far more embarrassing. So you're desperately trying to discredit a simple truth.

      Just like you're doing with the surface temperature record, and global warming.]

      It’s no different than what GISS does to remove seasonal variation from the surface temperature “signal” to get a yearly average. I was told by someone who should know that for missing months (which increased ten-fold after 1900), GISS substitutes annual anomalies, which produces a warm bias in warm eras and a cold bias in cold eras. NCDC used distance-weighted station anomalies for hand-picked relevant nearby stations, a preferred approach. That was another reason I felt I had to settle for using the NOAA NCDC data set (along with the 60% station dropout and peer-reviewed proof that the urban and land use adjustments are insufficient and may account for 30-50% of the recent warming in the global data).

      Filtering is used throughout climate science. As for “moving averages”, they use a 12 month “moving average” to get the yearly temperature number at GISS. If what you say is true then all of GISS’s work on surface temperatures is falsified.

      [Response: Beyond belief. Are you trying to discredit your own work, or are you *really* that ignorant? Because this part of your response is far more damning than anything I've said.

      GISS (or anybody else) taking yearly averages isn't taking *moving averages*. Annual averages don't overlap, so they don't create artificial autocorrelation which is only due to the averaging process. *Moving averages* overlap -- in this case neighboring values have 91% overlap (!) -- which creates artificial autocorrelation due to the averaging process.

      And for your information, the seasonal cycle isn't removed by the annual averaging. It's removed by using a different average during the reference period for each separate month, so monthly anomaly is the difference between a given month's value and the average *for that month* during the reference period. That's how the seasonal cycle is removed from GISS *monthly* data. I guess you didn't know that either.

      Apparently you don't know the statistical impact of moving averages, you don't know the essential difference between a series of non-overlapping averages and a series of moving averages, and you sure don't know how GISS handles their data. But we're supposed to glean insight from your analysis?]
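The distinction drawn in the response above can be checked numerically. Below is a minimal sketch using invented white-noise data (not real temperatures): an 11-point moving average of uncorrelated annual values acquires strong lag-1 autocorrelation purely from the overlapping windows, while the non-overlapping values themselves have essentially none.

```python
import numpy as np

rng = np.random.default_rng(1)

# 110 years of annual white-noise "temperatures" (hypothetical data).
annual = rng.standard_normal(110)

# 11-year moving average: neighboring values share 10 of 11 points (~91% overlap).
moving = np.convolve(annual, np.ones(11) / 11, mode="valid")

def lag1_autocorr(y):
    """Lag-1 sample autocorrelation."""
    y = y - y.mean()
    return float((y[:-1] * y[1:]).sum() / (y * y).sum())

print(f"annual values (non-overlapping): {lag1_autocorr(annual):+.2f}")  # near 0
print(f"11-yr moving average:            {lag1_autocorr(moving):+.2f}")  # near +0.9
```

For white noise the theoretical lag-1 autocorrelation of a width-11 moving average is 10/11 ≈ 0.91, matching the overlap fraction.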

      For a similar multi-year smoothing correlation approach, see this paper: http://www.cosis.net/abstracts/COSPAR02/02163/COSPAR02-A-02163.pdf. The authors claim that 76% of the variance in the tree-ring index was explained by solar activity and ENSO; they use 10-year running averages.

      And by the way, even if smoothing inflates the apparent significance, it does so for CO2 too: I compared the same 11-year running mean numbers for ALL the possible factors, including CO2. As Evan Jones correctly stated, why not look at ALL the data and let the chips fall where they may? That is the scientific method as it USED to be practiced.

      [Response: If only you and your denialist ilk would practice what you preach. Like looking at ALL the series for TSI, including the credible ones. Like looking at GLOBAL data when slinging mud at global warming.

      And by the way, it's because moving averages are applied identically to ALL the data series, that the process inflates correlations so greatly: all the series end up having similar autocorrelation structures. That's not a problem -- unless you're ignorant of it.]
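The correlation-inflation effect described in the response is easy to demonstrate with synthetic data. A hypothetical sketch: smooth two completely independent white-noise series with the same 11-point moving average and compare the average squared correlation before and after. Neither series has anything to do with the other, so any rise in R² is an artifact of the smoothing alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, w=11):
    """Centered w-point moving average (valid region only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

n, trials = 108, 500  # ~a century of annual values, many random repetitions
r2_raw, r2_smooth = [], []
for _ in range(trials):
    # Two *independent* white-noise series -- any correlation is pure accident.
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    r2_raw.append(np.corrcoef(a, b)[0, 1] ** 2)
    r2_smooth.append(np.corrcoef(smooth(a), smooth(b))[0, 1] ** 2)

print(f"mean R^2, unsmoothed series: {np.mean(r2_raw):.3f}")
print(f"mean R^2, 11-point smoothed: {np.mean(r2_smooth):.3f}")
```

The smoothed R² is roughly an order of magnitude larger, even though the underlying series remain unrelated: smoothing shrinks the effective number of independent data points.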

      The fact that the ocean temperature cycles are best in sync with land is not a “so what”, it is “exactly the point”. These are natural oscillations that control land temperatures. The IPCC AR4, in chapter 3, noted: “the decadal variability in the Pacific (the Pacific Decadal Oscillation or PDO) is likely due to oceanic processes. Extratropical ocean influences are likely to play a role as changes in the ocean gyre evolve and heat anomalies are subducted and reemerge. The Atlantic Multidecadal Oscillation (AMO) is thought to be due to changes in the strength of the thermohaline circulation.” They do so on multidecadal time scales; they cause temperatures over land to rise and fall accordingly.

      As both the PDO and AMO have been in their warm mode in recent years, as they were last back in the 1930s and 1940s, it is not at all surprising that temperatures have been warm again like they were back then. Since the PDO has reverted to cold and the AMO has diminished nearly 2 STD since 2005, the warming may be over, and indeed the temperatures have leveled off (in the CRU and MSU and RSS satellite data sets).

      Even Pachauri, the head of the U.N. IPCC, said he would look into the apparent temperature plateau so far this century. “One would really have to see on the basis of some analysis what this really represents,” he told Reuters, adding “are there natural factors compensating for increases in greenhouse gases from human activities?”

      [Response: Indeed, it should be evaluated "on the basis of some analysis of what this really represents." I've done so. You should read it. Far more important, you should understand it.]

      Maybe you never had a real job and had to work with real data to make real forecasts that had to satisfy real clients to make real money. If you did, perhaps you would look at all the factors and let the past guide your choice of which to use to make the best possible forecast. That is what we did in my last company, developing statistical models using all the teleconnections to decide which would provide the best probability verifications of future anomalies. We didn’t get into the pseudo-science world of modeling with its guesswork about forcings and parameterization schemes and feedback assumptions. We preferred to work with real data and let it drive our choices and methods.

      [Response: Well that settles it. I must never have worked with real data.

      Except of course for all those years I spent doing time series analysis in astrophysics. And now doing it in the private sector. And the fact that I'm one of the guys who *invents* methods that guys like you sometimes abuse.]

      As for using Hoyt-Schatten: it was their revised data set, updated through 2004. The Hoyt-Schatten TSI series uses five historical proxies of solar irradiance, including sunspot cycle amplitude, sunspot cycle length, solar equatorial rotation rate, fraction of penumbral spots, and decay rate of the 11-year sunspot cycle. It wasn’t chosen at random. It may not be your choice. It might not be the right or best one. I have high regard for Schatten and Hoyt. Schatten’s forecasts for cycles 22 and 23 were best of the lot.

      [Response: That's a lot of proxies. Other series use a lot of proxies too, or do you think Dr. Svalgaard is ignorant of their meaning and usefulness? The simple fact is that *nobody else* doing reconstructions agrees with Hoyt & Schatten.]

      And finally Joseph D’Aleo is my real name. Why do you and Eli hide behind pseudonyms? [edit]

      [Response: I was hoping you'd respond. I was also hoping your response wouldn't be quite so pathetic.]

    • dhogaza // February 4, 2008 at 1:55 pm

      Along with displaying an amazing amount of ignorance about statistical analysis, and totally misunderstanding the white noise example, Joe shows an appalling tendency to drop into pure ad hom…

      Maybe you never had a real job and had to work with real data to make real forecasts that had to satisfy real clients to make real money…

      and displays true rudeness by hinting at outing Eli’s real identity …

      Why do you and Eli hide behind pseudonyms? [hint edited, both here and in the original comment, with apologies to dhogaza and Eli]

    • George // February 4, 2008 at 2:50 pm

      For some time now, Watts has been talking about the “problems” with the USHCN stations (from barbecues, tennis courts and other things associated with summer picnics) — and by implication, possible problems with the temperature record associated with them.

      Now he is plugging an alternative theory to AGW on his site that depends on a perceived high (0.85!!!) correlation between PDO+AMO and USHCN? (!!!!)

      Maybe it’s just me, but I find that humorous.

      Also, when I look at D’Aleo’s graph of USHCN temps and PDO+AMO, it looks to me like temperature actually leads PDO+AMO for at least part of the 20th century.

      I wonder: How does that indicate “ocean induced warming for the hemisphere (and US)”? (ie, that PDO+AMO is the cause of the temperature changes in the US).

    • Zeke // February 4, 2008 at 4:00 pm

      Fred:

      Few of us are hostile to the idea of the SurfaceStations project. It’s only when people use the incomplete results to make statements to the effect that:

      “Mr. Watts has shown… there may have been a bias in temperature measurements in the last two decades and the recent rise is more modest than measured.”

      or

      “the results, so far (with 40% of USHCN net observed) are a striking => 2 degree K spurious warming effect.”

      If work like JohnV’s, which appears to vindicate the GISS record, cannot be valid due to the incomplete nature of the project, then surely statements such as these are premature.

      Talk all you want about bad siting. But don’t imply that there is a systematic bias in GISS or HadCRU until you have some sort of proof.

    • cce // February 4, 2008 at 4:00 pm

      The RSS lower troposphere temperature analysis shows slightly more warming over the satellite era than either GISS or HadCRU. Either the problems with the temperature record ended in 1978, or, yes, the globe is warming at a rate consistent with what the thermometers tell us.

      And I would like to see the analysis that shows that 17 sites, when properly weighted, do not create a statistically sound trend for the United States. It strikes me as a bit of a coincidence that the trends from these 17 sites are virtually identical to the GISS analysis.

      Also, regarding Hoyt & Schatten. Leif has remarked (and it’s obvious when plotted against others) that they are off one solar cycle prior to 1970.

    • Barton Paul Levenson // February 4, 2008 at 4:21 pm

      fred writes:

      [[Its not like we are comparing 2% versus 100% is it?]]

      Well, yes, it is. The United States constitutes 1.93% of Earth’s land surface. That’s how much weight it gets when figuring global averages.

      [[Two, I just cannot understand the hostility to the surface stations project. Surely we do want our surface station data to be measured by stations which conform to the criteria? Otherwise, why have criteria in the first place? Are people seriously saying that siting and station histories simply don’t matter and should not be investigated?]]

      Nope. They’re saying that bias at a given site can be detected statistically, and is, and is corrected for. The surfacestations.org people want the scientists to throw out the data from the stations they don’t like. No real scientist would do or want such a thing.

      When you find a bias in a data set, you don’t throw out the data set, you correct for the bias. The fossil record is biased toward creatures with hard parts — but we don’t throw out the fossil record. The cosmological surveys are biased toward events far in the past — but we don’t throw out the sky surveys.
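The "correct, don't discard" principle can be illustrated with a toy sketch (this is a simplified stand-in for the general idea behind pairwise homogenization, not NOAA's actual algorithm): compare a biased station against a nearby unbiased neighbor, locate the step change in their difference series, and subtract the estimated offset. All numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
signal = 0.01 * np.arange(n)  # shared regional climate signal (hypothetical trend)

good = signal + rng.normal(0, 0.1, n)
biased = signal + rng.normal(0, 0.1, n)
biased[60:] += 0.8  # a station move introduces a warm step bias at year 60

# Differencing against the neighbor cancels the shared signal and
# isolates the inhomogeneity.
diff = biased - good

# Locate the step: the split point that maximizes the before/after mean separation.
k = max(range(5, n - 5), key=lambda j: abs(diff[j:].mean() - diff[:j].mean()))
offset = diff[k:].mean() - diff[:k].mean()

# Correct the biased record rather than throwing it away.
corrected = biased.copy()
corrected[k:] -= offset
print(f"detected break at year {k}, estimated bias {offset:+.2f}")
```

The detected break lands at (or within a year or two of) the true changepoint, and the corrected series tracks the unbiased neighbor again.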

    • Alan Siddons // February 4, 2008 at 5:41 pm

      What’s this? Tamino says,

      “modern climate science doesn’t support the idea that all parts of the globe will warm equally under the influence of greenhouse gases, it contradicts it. It’s the global average temperature that will show the signs of human influence unambiguously; we expect strong regional differences in temperature change”

      Oh really. The whole point of the Mauna Loa record is that CO2 is well-mixed in the atmosphere, so its forcing effect will supposedly be observable everywhere. Keep in mind that to the extent the greenhouse effect determines this planet’s climate, the sun does NOT. Thus you have Venus, queen of the greenhouse planets, with the same temperature EVERYWHERE - sunlit side and shadow, poles and the equator.

      It seems to me that Tamino wants to have it both ways: Greenhouse warming all over the earth with sharp climate contrasts. This is what makes AGW dogma a pseudo-science. It is non-falsifiable because contradictions to the theory are included in the theory. It’s a child’s version of science.

      [Response: Going from "CO2 is well-mixed in the atmosphere" to the conclusion that we don't expect strong regional differences in temperature change, is a simpleton's conclusion. You need education in science. Seriously.

      The statement that "to the extent the greenhouse effect determines this planet's climate, the sun does NOT" is pretty much meaningless -- but it sure serves to reinforce the denialist suggestion that climate scientists are ignoring the effect of the sun.

      You've shown a childish interpretation of climate science far better than I ever could.]

    • Deech56 // February 4, 2008 at 6:01 pm

      RE: Joe D’Aleo // February 4, 2008 at 1:07 pm

      Mr. D’Aleo, you seem to take exception to Tamino’s analysis, and responded in such a way as to have your message edited. Being in science, opening up one’s work to criticism is expected, and accepting criticism is expected as well.

      If you’ve ever applied for a grant, if you’ve ever submitted a manuscript for publication, if you’ve ever given a seminar or stood before a scientific advisory board, if you’ve ever defended a dissertation or taken a graduate school qualifying exam, you would know the expectation of the scientific community (been there, done that for all of these). Reviews can be harsh, but they must be considered.

      Have you thought of submitting your work for peer review (without the contractions and exclamation points)? I am guessing that you might find the same reception among knowledgeable reviewers. At a minimum, you would have to rewrite the manuscript to address some of the concerns brought up by Tamino, justify some of your choices and maybe scale back the scope of the conclusions. If you want to be taken seriously and not lumped among denialists and wish to advance the field, this would be essential.

      And Tamino, thank you for taking the time to review this. Excellent!!!

    • dhogaza // February 4, 2008 at 6:22 pm

      Oh really. The whole point of the Mauna Loa record is that CO2 is well-mixed in the atmosphere, so its forcing effect will supposedly be observable everywhere.

      CO2 is well-mixed. However, climate science doesn’t predict uniform warming all over the planet. Never has, never will, and your wishing that it does, or your insistence that it must, doesn’t matter in the least.

      You might try reading a little basic background info before throwing your ignorance in our face.

      Keep in mind that to the extent the greenhouse effect determines this planet’s climate, the sun does NOT.

      Something else climate science doesn’t say that, never has, never will.

      Again, read some background material rather than parade your ignorance here.

      Your statements are as accurate a portrayal of climate science as someone’s claiming that if evolution were true, birds would lay chimpanzee chicks every five years.

    • dhogaza // February 4, 2008 at 6:26 pm

      Actually I misread his second snippet that I posted above (missed the “to the extent…”).

    • Evan Jones // February 4, 2008 at 6:30 pm

      “They’re saying that bias at a given site can be detected statistically, and is, and is corrected for. ”

      But they have not corrected statistically for microsite bias. Only UHI, and only via the “Lights =” process.

      That is the point.

      “The surfacestations.org people want the scientists to throw out the data from the stations they don’t like.”

      It’s not a matter of “like” or “dislike”. It is a matter of violations documented by photograph.

      It would appear that there are factors not accounted for. They should be accounted for.

      “No real scientist would do or want such a thing.”

      So far as I can see, a real scientist would want to retake the data using proper measure (both relatively cheap and easy) and compare with data using the current method. Then consider that in comparison with the historical record.

      In fact, great pains should be taken not to make any corrections to the current method in order to obtain a legitimate comparison.

      “What’s more, as I think I’ve demonstrated elsewhere, you can have a pretty big mean error on individual stations and still get a very small mean error for all of them combined. This is elementary statistical thinking.”

      But surely oversampling will only wash out a margin of error if the error is evenly distributed. If the error is primarily in one direction, the only thing oversampling will do is fix the error in place.
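The point about one-sided errors can be made concrete with a small simulation (hypothetical numbers throughout): symmetric station errors average away as the network grows, while errors that all push in one direction leave the network mean offset no matter how many stations are averaged.

```python
import numpy as np

rng = np.random.default_rng(3)
true_value = 15.0   # the "true" regional temperature (invented)
n_stations = 1000

# Symmetric errors: the network mean converges on the truth as stations are added.
symmetric = true_value + rng.normal(0.0, 1.0, n_stations)

# One-sided errors (e.g. a bias that can only warm): the offset never averages away;
# more stations only pin it down more precisely.
one_sided = true_value + np.abs(rng.normal(0.0, 1.0, n_stations))

print(f"network mean, symmetric errors: {symmetric.mean():.2f}")
print(f"network mean, one-sided errors: {one_sided.mean():.2f}")
```

Note, though, that a *constant* one-sided offset shifts the level without necessarily changing the trend, which is why the debate centers on whether siting biases changed over time.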

      “And his suggestion is the equivalent to my claiming that it’s the stovetop’s warming, not the electric elements inside the oven, that causes the oven to get hot.”

      Perhaps. Perhaps not. But that leaves the question of the dynamics of the previous round of PDO and why there was a warming at that time as well.

      If you will recall, I said

      a.) Mr. D’Aleo may have established a coincidence and that it merits checking, and,

      b.) The surface stations do not conform with CRN standards, and therefore require due diligence.

      c.) The “coincidence” of both PDO/AMO and the microsite issues should be considered in tandem as a preliminary step.

      I do not see why this should cause such reaction.

    • dhogaza // February 4, 2008 at 6:40 pm

      The NASA LWS Sun-Climate Task Group put out this report in 2003. It is a close representation of what Joe D’Aleo findings. I am not going to repeat the details of this report just read it yourself and make up your own mind.

      Well, I haven’t read it all, just skimmed a dozen or so pages, but it’s nothing at all like “a close representation of Joe’s findings”.

      It’s an outline of issues that might make for good research. They do mention AGW in some of the pages I’ve skimmed, but there’s no “AGW is false” crap in there (and why would there be, they’re scientists!).

      So, which of the 54 pages is “representative of Joe’s findings”? I don’t want to read all 54 pages searching for the passage you’ve misunderstood.

      [Response: Jim Arndt's comment (to which you respond) was so stupid, I deleted it. I tolerate an immense quantity of garbage here, in the interest of free discussion. But something *that* stinky belongs in the trash.

      But you're absolutely right, his claim that the NASA report supports Joe's findings is either idiotic or an outright lie.]

    • dhogaza // February 4, 2008 at 6:45 pm

      I do not see why this should cause such reaction.

      Because competent people who know what they’re doing are already overworked and underfunded, and the publicity-seeking sideshows such as CA and the surfacestations project are unproductive.

      The ONLY contribution made by CA has been one statistically insignificant change to recent lower-48 temps, which the denialist spin machine has transformed into a refutation of the surface climate record.

      The surface station photography project is cut from the same cloth. Much ado about doo-doo.

    • Evan Jones // February 4, 2008 at 6:46 pm

      “Evan Jones, this conclusion clearly demonstrates, that you have not the slightest clue about this subject.”

      It does, however, demonstrate that I can read the CRN handbook and do sums.

      There seems a considerable reluctance on the part of many to do either.

      “the error is a POTENTIAL error, not a permanent one. you can t calculate an average as you did!”

      Watch me. (”Open Excel, type, type, type.”)

      But seriously, folks.

      If there is a “potential” error of => 2.0C (1.99C, actually, but that would be misplaced precision) in the 40% of USHCN surface stations measured so far, well, that would merit a careful reassessment, would it not?

      (Besides, wouldn’t the oversampling factor play a role in fining down said potentiality?)

    • sod // February 4, 2008 at 6:54 pm

      SOD,

      Its not the “watts” version, no such label has ever been applied. Please do not use that label.

      sorry Anthony, but where do you think Evan Jones picked up his “knowledge” on station types and their error margins?

      what label we use is irrelevant. we all know that you were the one who introduced the station types and error margins into the climate discussion.
      if people routinely leave your website with a FALSE impression about the meaning of the error, then it IS your responsibility. (at least a big part of it, as things get even worse when second-hand knowledge is spread further over the web.)

      it s your choice, of course. you can either refuse to take any responsibility for people spreading nonsense that they picked up on your website.
      or, if you don t want this to happen, you could include a small explanation on your website. i already have a mental picture of a “what does “error>=5°C” mean?” info box on the front page of surfacestations.
      if you need any help, just tell me.

      Its the NOAA version as used to define their own CRN network site quality evaluation. In absence of any better system for site quality ratings that anyone has proposed, this is what is used.

      i am aware of the NOAA paper. but you are taking an easy way out again. the site information handbook is written for professionals. it includes a disclaimer about the errors being estimated values. (page 5)
      if people who pick up their information on your site forget about this, then it might be because the information isn t displayed prominently enough on surfacestations.

    • sod // February 4, 2008 at 7:01 pm

      It does, however, demonstrate that I can read the CRN handbook and do sums.

      There seems a considerable reluctance on the part of many to do either.

      your sums are utterly meaningless.

      if 15% of the stations don t have a significant error, and 50% “potentially” have an error >2°C on a couple of days over the year, then the error of ALL stations depends massively on HOW many days that is.
      your averaging attempt is simply FALSE.

      ps:
      Anthony, do you have any idea how different errors combine? like “error>5°C and error 30%?

    • Jim Arndt // February 4, 2008 at 7:14 pm

      Hi,

      Response: pages 12 and 13 compare TSI to the temperature records for the stated time period. Pages 16 and 17, TSI comparison to temperature. Pages 19 through 23, paleoclimate. If you post the link then all can read.

    • Jim Arndt // February 4, 2008 at 7:18 pm

      Hi,

      Didn’t say that it falsified AGW. Just said it was a representation. If you can reserve yourself enough and try not to use the colorful metaphors maybe we can have a discussion.

    • Dano // February 4, 2008 at 7:19 pm

      Hank Roberts said above:

      The distractions and the nonsense are taking up way too much attention from people who could be helping work that ought to be the priority.

      I increasingly see comment threads as performance art rather than dialogue.

      Here in Denver, it’s guaranteed that when there’s a story in the paper about global warming/climate change, the same 29 people will wipe off the Cheetos and flood the comment thread with the e-quivalent of crayon scribbling about ‘globul warmin’s a SCAY-UM!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!’

      Same about taxes, the need for infrastructure repair, anything about civil society, really.

      Since I don’t hear the same puerile level of dialogue at public functions where the people who build and make and plan and invest in things gather, I suspect that the discussion at DotEarth is an exemplar only for on-line dialogue, where people don’t have to have social skills or have to put up with smart people questioning them to their face.

      Short story: last summer I was asked to give a talk to a group of business owners. The discussion turned to obtaining enough surface water for their business type. I said Colorado’s declining snowpack is starting to affect the way we plan, as water managers now have a voice in long-range planning, and water is going to be a serious concern for future businesses. One guy tried to argue that the Algore is just a scam, but he was quickly shut down and discussion continued.

      IMHO, you just don’t see the ClimateAudit cheer squad getting traction in non-electronic public dialogue.

      Best,

      D

    • Bob North // February 4, 2008 at 7:21 pm

      Being somewhat a skeptic myself (primarily of the projected impacts not the basis of AGW), I at first thought Mr. Aleo’s analysis was very interesting, particularly the apparent low degree of correlation between CO2 and temp anomalies compared to other potential forcings. But looking at the article the comparison periods seemed a little off. Also, it was not clear that he had compared the 11 year CO2 trend to the temp trend as he says he did in the reply above.

      Now, I don’t know that my analysis has any more statistical validity than Mr. Aleo’s, but here is what I did. I compared Mauna Loa CO2 concentrations to the mean temperature anomaly of the ~40 GISS Arctic stations that have a reasonably continuous temperature record from at least 1930 to the present. I had downloaded these data for a different analysis, but there is really no bias, other than that all stations are located 60 degrees or greater north and have continuous records dating back to at least 1930.

      Comparing the actual annual mean anomaly from this data set to the average annual CO2 values (1958-2007), I got an R2 value = 0.454, which is similar to what is reported in Mr. Aleo’s article. However, when I compared the 5 year moving averages for both the mean anomaly and the CO2 levels, R2 jumped to 0.818 and when I compared 11 year moving averages R2 jumped to 0.967. Obviously, comparing like to like (11-year moving averages), the correlation between temperature and CO2 at my subset of stations was much greater than Mr. Aleo found. Using a subset of 72 rural Eastern US stations, I calculated an R2 of 0.915 when comparing the 11 year moving averages of temp anomaly and CO2.

      Again, I don’t know if my analysis has any more statistical significance, but when I compare like-to-like, I find that CO2 has a higher correlation to temperature anomaly than any of the factors Mr. Aleo cites.

      Bob North

      [Response: Yours is yet another illustration that for this analysis moving averages exaggerate correlations.

      I don't really know, but I suspect that the CO2 correlation would be stronger in the arctic than most places. This is because arctic air is so dry that it has very little water vapor, making CO2 relatively more important as a greenhouse gas in that region. That's one of the reasons we expect polar amplification of global warming (contrary to those who insist it has to be the same everywhere).

      But the arctic correlation might be artificially inflated due to other arctic warming factors, particularly albedo change due to loss of ice.]

    • Bob North // February 4, 2008 at 7:28 pm

      Oops, my apologies to Mr. D’Aleo for writing his name wrong in my above post.

      Bob North

    • Alan Siddons // February 4, 2008 at 7:42 pm

      See how funny you guys are, dhogaza? I make the very legitimate point that a progressing greenhouse effect entails progressive temperature uniformity and you feel compelled to jump all over it. You people have made such knee-jerk adolescent sneering your trademark. As I say, AGW is a child’s version of science. Its proponents demonstrate that repeatedly.

    • Evan Jones // February 4, 2008 at 7:44 pm

      sod:

      Perhaps I misunderstand where you are going here. Are you arguing, therefore, that there is no reason to recheck the surface stations?

      With over 6 out of 7 stations measured so far with a “potentiality” for 1C or more warming bias? (Please note that the CRN distinctions between CRN-3, 4, and 5 include only warming effects.)

      I do not see how it is possible to come to that conclusion.

      But if, indeed, you feel that the surface stations do deserve a second glance, then we are in full agreement as to which steps to take–regardless of your assessment of my (potential?) shortcomings.

    • Zeke // February 4, 2008 at 7:44 pm

      Not that it means that much, given all the other factors influencing climate, but if you do a simple least squares regression between annual average global temperatures from GISS and annual average atmospheric CO2 concentrations from Mauna Loa for the modern climate change period from 1975 to 2007 (e.g. when GHG forcings begin to exceed all other forcings) you get an r^2 value of 0.78.
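For readers who want to reproduce this kind of number, here is a minimal sketch of the computation. The series below are invented stand-ins (the real calculation would use the actual GISS annual anomalies and Mauna Loa annual CO2 values); only the mechanics of the regression and r² are the point.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for 1975-2007 annual values (numbers invented for
# illustration; the comment above used real GISS and Mauna Loa data).
years = np.arange(1975, 2008)
co2 = 331.0 + 1.7 * (years - 1975)                      # rough ppm ramp (assumed)
temp = 0.01 * (co2 - co2[0]) + rng.normal(0, 0.1, years.size)

# Simple least-squares regression and its r^2.
slope, intercept = np.polyfit(co2, temp, 1)
resid = temp - (slope * co2 + intercept)
r2 = 1.0 - (resid ** 2).sum() / ((temp - temp.mean()) ** 2).sum()
print(f"r^2 = {r2:.2f}")
```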

    • Barton Paul Levenson // February 4, 2008 at 8:18 pm

      Alan Siddons posts:

      [[The whole point of the Mauna Loa record is that CO2 is well-mixed in the atmosphere, so its forcing effect will supposedly be observable everywhere.]]

      Its radiative effect will be observable everywhere. With that correction, your statement can stand.

      [[ Keep in mind that to the extent the greenhouse effect determines this planet’s climate, the sun does NOT.]]

      Huh? What? Come again?

      [[ Thus you have Venus, queen of the greenhouse planets, with the same temperature EVERYWHERE - sunlit side and shadow, poles and the equator. ]]

      That’s because on Venus very little sunlight (about 2% of the top-of-atmosphere irradiance) gets through to the surface, so the back-radiation from the atmosphere almost completely overwhelms the effect of the sunlight. And the atmosphere at the base is a supercritical fluid, so what sunlight does make it down pretty much gets refracted all over the globe. Conditions completely unlike Earth’s.

      [[It seems to me that Tamino wants to have it both ways: Greenhouse warming all over the earth with sharp climate contrasts. This is what makes AGW dogma a pseudo-science. It is non-falsifiable because contradictions to the theory are included in the theory. It’s a child’s version of science.]]

      No. The deviations from uniform warming are of specific types which can be predicted in advance and checked — more warming toward the poles, the stratosphere cooling, more warming at night than during the day, etc. Not only is it testable, but it has been tested.

    • Barton Paul Levenson // February 4, 2008 at 8:37 pm

      Alan Siddons posts:

      [[I make the very legitimate point that a progressing greenhouse effect entails progressive temperature uniformity and you feel compelled to jump all over it. ]]

      It’s not a legitimate point. It’s what experts on climate science would call “wrong” or “incorrect.”

    • Kevin // February 4, 2008 at 8:42 pm

      I will make the dubious assumption for the moment that Alan Siddons believes what he says and is not just being a troll for the fun of it. Alan, you state that “a progressing greenhouse effect entails progressive temperature uniformity,” and from what I’ve read and understood, I think you may be right. Greenhouse warming models predict relatively faster warming at night than day, and in winter compared to summer, if I correctly understand. Also, and someone correct me if I’m wrong here, that is exactly the observed pattern.

      The _progression_ towards uniformity does not mean we jump directly to Venus-like uniformity, though, does it? I mean, is your point that the Earth is not affected by the greenhouse effect at all, or else the temperature would be the same everywhere? Because the greenhouse effect has been important in the Earth’s climate for a very long time indeed…and it has kept temperatures more constrained, and higher on average, than would be the case without the greenhouse effect.

      What is happening now is that human emissions are enhancing the greenhouse effect. And observations line up with what theory predicts: warmer average temperature, nights and winters warming faster. So although your argument that we should already be at a point of complete uniformity doesn’t seem founded to me (but please, prove me wrong and explain why you think this is so), your statement about a progressing greenhouse effect leading toward progressive temperature uniformity hints at one of the fingerprints of greenhouse warming that has allowed climate scientists to confidently state that recent warming is caused by GHGs, rather than some other source.

    • Barton Paul Levenson // February 4, 2008 at 8:44 pm

      Note to all — correlation is all very well, but you have to beware of something called the “spurious regression problem.” Series that are increasing with time may appear to be correlated even if they’re really not. You have to perform checks for “unit roots” to find out if your series are “stationary” or not; if not, and you can’t find that the two in question are “cointegrated,” you may have to difference the series one or more times to get something reliable to examine — i.e., take the first time derivative, or the second, depending on how far “integrated” your series is.

      Sorry if this sounds like a lot of statistical gobbledegook. Tamino can correct me if I phrased any of this wrongly. It’s actually rather new statistical math, dating from the 1970s. The problem first showed up in economics; a lot of famous regression studies turned out to be spurious based on the non-stationarity effect. They know how to handle it now.

      And I should add that, even when you perform the necessary corrections, CO2 is still correlated with rising temperatures.
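      To make the spurious-regression point above concrete, here is a minimal sketch (an illustration only, not part of any published analysis; series length, trial count, and seed are arbitrary choices): two independent random walks typically show a sizable correlation in their raw levels, and first-differencing collapses it.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

n, trials = 200, 500
r_levels, r_diffs = [], []
for _ in range(trials):
    # Two INDEPENDENT random walks (integrated white noise) -- non-stationary.
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    r_levels.append(abs(corr(x, y)))  # correlation of the raw levels
    # First-differencing recovers the stationary noise underneath.
    r_diffs.append(abs(corr(np.diff(x), np.diff(y))))

print(f"mean |r|, levels:      {np.mean(r_levels):.2f}")
print(f"mean |r|, differences: {np.mean(r_diffs):.2f}")
```

      The levels correlate strongly on average even though the two series share nothing at all; the differenced series do not.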

    • Evan Jones // February 4, 2008 at 8:50 pm

      “The research community, government agencies, and private businesses have identified significant shortcomings in understanding and examining long-term climate trends and change
      over the U.S. and surrounding regions. Some of these shortcomings are due to the lack of
      adequate documentation of operations and changes regarding the existing and earlier observing networks, the observing sites, and the instrumentation over the life of the network. These include inadequate overlapping observations when new instruments were installed and not using well-maintained, calibrated high-quality instruments.”

      The CRN Handbook

      (The Chief of CRN has personally requested the work of Anthony Watts.)

    • dhogaza // February 4, 2008 at 10:21 pm

      See how funny you guys are, dhogaza? I make the very legitimate point that a progressing greenhouse effect entails progressive temperature uniformity

      We’re a long way from Venus, baby. In fact the same climate science that informs your unwarranted exaggeration of what we would expect to see on earth, today, says that it’s probably impossible for us to reach the Venus state.

      So it’s really hard to see where you’re trying to go with this. We’re discussing moderate warming on a cool (compared to Venus) planet, and the uniformity you expect has not, is not, and will not be predicted by climate science.

      The Chief of CRN has personally requested the work of Anthony Watts.

      The photo project might help with siting issues in the future, I doubt anyone will disagree with that.

      But it doesn’t speak at all to the ability of the NASA crew to create a reasonably accurate temperature history from the data that’s available.

    • Evan Jones // February 4, 2008 at 10:46 pm

      “Because competent people who know what they’re doing are already overworked and underfunded, and the publicity-seeking sideshows such as CA and the surfacestations project are unproductive.”

      As demonstrated above, NOAA/CRN clearly disagrees with you. Note that “private businesses” are among those cited for having pointed out “significant shortcomings”.

      Are you saying that serious siting violations are to be ignored out of hand because hardworking scientists have better things to do with their funding? Do I understand this correctly?

      Clearly the CRN does not regard the surfacestations project as “unproductive”.

    • Evan Jones // February 4, 2008 at 10:50 pm

      I will add that it is poor payment to hardworking scientists to have them expend their limited resources on calculations based on what may be seriously flawed raw data.

    • Evan Jones // February 4, 2008 at 10:54 pm

      “But it doesn’t speak at all to the ability of the NASA crew to create a reasonably accurate temperature history from the data that’s available.”

      I agree that the old data cannot be retaken and should not be discarded. It may be necessary to adjust it, however. How much remains to be seen. Making a reasonably accurate temperature history is what this is all about.

    • tamino // February 4, 2008 at 11:09 pm

      This post is about the invalidity and irrelevance of D’Aleo’s correlation analysis. But that’s uncomfortable for the denialist camp, so they change the subject to bogus criticism of the surface temperature record. In fact, they seem to do that on just about every post. Very clever.

      If that’s what you want to do, there are lots of places to do so. Go there.

    • Dano // February 5, 2008 at 12:58 am

      But that’s uncomfortable for the denialist camp, so they change the subject to bogus criticism of the surface temperature record. In fact, they seem to do that on just about every post.

      The denialist camp’s tactics are on full-throated display at DotEarth - they are uncomfortable with the lack of credentials of the Inhofe 399, so they must trot out their standard tactics to obfuscate the fact that their list has the weight of gossamer and zero credibility.

      There is a loud chorus attacking those who point this out by using bogus criticism, with the Mighty Wurlitzer as accompaniment…

      Best,

      D

    • Evan Jones // February 5, 2008 at 2:05 am

      Very well.

      I suggest that the PDO/AMO, CO2, and Solar correlations be run with, oh, say, a 0.4C reduction to the world 1980-1998 trend and see if that correlates better or worse.

      I don’t think that would bust anyone’s budget.

      If one of them (or even all of them) fit better, I think that might tell us something.

      [Response: This is best answered by a comment by "gerald" I read on RealClimate:

      ... all well and good, and I am sure that this “new archive” will continue to confirm your previous “findings” of Global Warming based on human activities. I put it to you that all of your models may in fact be based on a major flaw. The assumption that the earth is round and not flat. Why don’t you do a complete recalculation based on the flat earth thesis and then we will see. Of course as a sceptic with lots of opinions and no scientific training I cannot be expected to do any actual real work on this matter ...

      ]

    • Hank Roberts // February 5, 2008 at 2:05 am

      All I can say is, Judith Curry wears hip boots. And I admire her for wading where she does.

    • John Mashey // February 5, 2008 at 2:19 am

      Old sayings, attributed to various people:
      “If you torture your data long enough, they will tell you whatever you want to hear” OR
      “If we torture data long enough, it will confess.” OR
      “If you torture data sufficiently, it will confess to almost anything.”

      https://content.nejm.org/cgi/content/extract/329/16/1196
      has topics:
      “Opportunistic Data Torturing* [seen in JdA's]
      Procrustean Data Torturing
      Clues to Data Torturing
      P Values and Confidence Intervals
      Can Data Torturers Be Stopped?”

      I can’t see the full article. I’d love to see the answer to the last one.

    • henry // February 5, 2008 at 5:09 am

      “and displays true rudeness by hinting at outing Eli’s real identity …

      Why do you and Eli hide behind pseudonyms? [hint edited, both here and in the original comment, with apologies to dhogaza and Eli] ”

      I wasn’t aware that Eli’s name was not up for exposure. It’s been posted on CA for a while now…

      Sorry if I found out about the “secret”.

      [Response: Secret or no, it's up to Eli whether or not he wishes to remain anonymous, or even maintain the illusion of anonymity. Here in the U.S. we have a distinguished history of pseudonymy, starting with Silence Dogood.]

    • cce // February 5, 2008 at 5:23 am

      Evan, if the UAH satellite analysis shows 0.14 degrees per decade of warming since 1979, and the HADAT2 radiosonde analysis shows 0.16 degrees of warming per decade since 1979 and the RSS satellite analysis shows warming of 0.18 degrees per decade (all lower troposphere), why do you think it is plausible that GISS and HadCRU (0.17 degrees per decade) have overestimated the warming by 0.4 degrees over 28 years?
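      For the record, the arithmetic behind cce’s question (a trivial check; the per-decade trends are the ones quoted above, and the dataset labels here are shorthand):

```python
# Total warming implied by each quoted per-decade trend over the
# 28 years from 1979 to 2007.
trends_per_decade = {
    "UAH (satellite)": 0.14,
    "HadAT2 (radiosonde)": 0.16,
    "RSS (satellite)": 0.18,
    "GISS/HadCRU (surface)": 0.17,
}
years = 2007 - 1979
totals = {name: t * years / 10 for name, t in trends_per_decade.items()}
for name, total in totals.items():
    print(f"{name}: {total:.2f} C over {years} years")
```

      A 0.4 C overestimate in the surface record would leave roughly 0.48 − 0.40 ≈ 0.08 C of “real” surface warming, far below every independent record in the list.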

    • fred // February 5, 2008 at 5:34 am

      BPL

      “It covers an area of about 24,490,000 square kilometers (9,450,000 sq mi), about 4.8% of the planet’s surface or about 16.4% of its land area”.

      This is what Wikipedia says about the continent of North America. I know W is not authoritative, but it’s what I found. Is it wrong, and if so what is the right number?

      The US appears from another source to be 3,537,441 square miles, which if all this is correct, would put the US at around 5% of land area.

      This also is not the whole story. The question is, after you add back in the land areas of the rest of the world that are actually covered to appropriate levels of coverage by surface stations and omit those that are not, then how much of the planet’s surface is covered, and what proportion is the US of that?

      I am not disputing that the US surface station record is a smallish proportion of the global surface temperature record. I am just concerned that the traditional 2% number is a misleading way of representing the real underlying facts. And of course following Hank R’s injunction to trust nothing, check everything.

      In addition you give an extraordinarily misleading analogy on the data series front. The issue is not that we have a data series all the elements of which represent accurate measurements, but with biased sampling and over-representation of some areas of the topic sampled.

      The issue is that, for whatever reason, we have a series of instruments which are out of the specification for the experiment, with unknown consequences for their bias.

      We know that if the spec was right, the readings are not trustworthy for the purpose intended. We do not know how or if they are biased, or if it’s systematic. What we know is the instruments are out of spec.

      We also know there are instruments which are in spec.

      Any responsible process on this would start by doing a series based on the ones in spec. Or alternatively, change the spec. Let’s see it written down if that is what is being done. Let’s see a statement to the effect that it is fine to locate instruments in parking lots and near air conditioners; let’s locate a few more in such places, introduce some corrections for them. Save a lot of time and money. My back yard will be as good as anywhere. Ridiculous!

      If this were any other field, we would not insist on defending the use of defective instruments. Especially when it appears there is no need to use them anyway.

      It is an example of what is increasingly seeming like AGW denialism. The insistence on defending to the last detail the most ridiculous and obviously wrong things that have ever been done or asserted by the luminaries of the movement. Apparently out of a vague fear that if we admit even one tiny error, the whole edifice will crumble.

      Why not just say, yes, there are some stations way out of spec, no we should not be using them, and then we could move on.

    • Steve Bloom // February 5, 2008 at 6:54 am

      Thanks to Sod for pointing out that the CRN handbook standard error values are completely seat of the pants.

      Anthony wrote: “Its the NOAA version as used to define their own CRN network site quality evaluation. In absence of any better system for site quality ratings that anyone has proposed, this is what is used.”

      But did NOAA ever sign off on or suggest the use of these criteria on the existing network? No. My suspicion is that they weren’t really being used for such a purpose in France, either, but that the French had decided to use existing locations for their new network (CRN-equivalents are going in all over the world) and were only classifying them for that purpose.

      A year or so ago I spent some time noodling around the NOAA site looking at various documents relating to this “controversy” and found a couple of relevant documents:

      One was a paper that sought to quantify microsite biases at an operational weather station (an ASOS IIRC). The upshot is that it can’t be done.

      The other was a discussion of the early CRN design that involved pairs of stations sited quite close to each other. This approach was abandoned when it was determined that even under those circumstances there was so much variation in readings that the duplication was pointless.

      To underline the implication of this latter result for Anthony’s benefit, significant microsite effects are present even at CRN-quality sites where every effort has been made to eliminate them.

      That explains that.

      BTW, Tamino, I suspect that part of the reason Joe was so upset is that the people who pay for his efforts (now there’s some *real* anonymity) want to know that they’re getting credible work in return.

    • Timothy Chase // February 5, 2008 at 8:45 am

      Torturing Data

      I found an article that references “Data Torturing” (itself entitled “Data Torturing and the Misuse of Statistical Tools”) and it gets into problems with smoothing:

      SMOOTHING

      Smoothing is a technique frequently used in the display and analysis of data. Although it is possible to properly use smoothing to isolate trends or eliminate high frequencies in a time series, it is also possible to apply smoothing procedures and falsely introduce periodicities. Unfortunately, such injudicious practices are becoming more common as smoothing is used to make a corresponding graph look “nice” or “friendly” to the user by hiding the natural variability in the data. An example of the dangers of smoothing is given in Balestracci and Barlow (1997) and is reproduced in Figure 3. The figure shows run charts of data from four processes. A standard analysis may indicate that the process in Figure 3a is stable and contains only noise, while the other three processes (Figures 3b, 3c, and 3d) indicate the presence of trends.

      However, as Balestracci and Barlow reveal, the four graphs show the exact same data! The difference is that Figure 3a displays the raw data while the other three figures present the data as various smoothed versions of the raw data. To construct the charts, raw data points were generated from a normal distribution. The raw data are plotted in Figure 3a and exhibit stable behavior. Figure 3b was created by taking rolling averages of four raw data points, Figure 3c consists of rolling averages of twelve raw data points, and Figure 3d was constructed using a rolling average of fifty-two. This example demonstrates that smoothing can result in creating the appearance of something special in the process that does not actually exist. This form of data torturing may result in taking action when no justification exists. If instead the second principle of statistical thinking, that all processes vary, is recognized then arbitrary smoothing endeavors become pointless.

      SAND99-2178C / Paper ID #126 / Data Torturing and the Misuse of Statistical Tools
      Marcey L. Abate
      Sandia National Laboratories
      http://www.osti.gov/bridge/servlets/purl/10185-Dm8YvW/webviewable/10185.pdf

      The author points out that smoothing may introduce false periodicities and trends, just as Tamino pointed out that smoothing may introduce false correlations. In point of fact, the case involving false trends may be viewed as a special case of false correlations being introduced between the “dependent” variable and the “independent” variable, where smoothing is simply applied to the dependent variable.

      In any case, I would have thought Tamino’s example clear enough that no one would have any difficulty grasping the nature of the principle that is involved.
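      The Balestracci and Barlow effect quoted above is easy to reproduce; here is a minimal sketch (the window sizes 4, 12 and 52 come from the quoted passage; the series length and seed are arbitrary). Rolling averages of pure noise acquire strong serial correlation, which is exactly what makes the smoothed charts appear to wander in trends:

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.normal(size=500)  # pure noise: a "stable process" with no trend

def rolling_mean(x, w):
    """Trailing moving average with window w (w=1 returns the raw data)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def lag1_autocorr(x):
    """Correlation between the series and itself shifted by one step."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

results = {}
for w in (1, 4, 12, 52):
    results[w] = lag1_autocorr(rolling_mean(raw, w))
    print(f"window {w:2d}: lag-1 autocorrelation = {results[w]:.2f}")
```

      The raw noise has essentially no serial correlation; the 52-point rolling average is correlated with itself at well over 0.9, so successive points cling together and the eye sees “trends” that were never in the process.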

    • Andrew Montgomery // February 5, 2008 at 10:06 am

      For all those - like myself - who have attempted to take a dispassionate interest in this debate and have found the endless blasphemy, invective and ridicule tiresome, I suggest they read an excellent article published on global warming in today’s New Zealand Herald.

      Gwynne Dyer is an exceptional intellect and provides an excellent summary of the position.
      I am not sure how to refer readers to this article and so I have cut and pasted it -
      Gwynne Dyer: Talk is cheap, and we will all pay for it
      5:00AM Tuesday February 05, 2008
      By Gwynne Dyer

      It’s an old joke - everybody talks about the weather, but nobody does anything about it. The same, unfortunately, is true for the climate.

      They are talking about it. They were at it again in Honolulu last week, discussing mandatory, internationally binding commitments on greenhouse gas emissions (although Russia and India refused to allow any mention of that subject in the final statement).

      At the Bali meeting in December, China even hinted that it might consider something like binding emission caps in the long run. But there is no sense of urgency.

      Not, at least, the sense of urgency that would be required to take actions that would invalidate the prediction, in the latest issue of the journal Science. It suggested that climate change may cost southern Africa more than 30 per cent of its main crop, maize (also called corn or mealies), by 2030.

      No part of the developing world can lose one-third of its main food crop without descending into desperate poverty and violence.

      Even some parts of the developed world would be in deep trouble at that point. One part of the developed world, Australia, is already in trouble, with its farmers facing what may be a permanent decline in the country’s ability to grow food, although Australia’s overall wealth is great enough to cushion the blow. But elsewhere, the mentality of “it can’t happen here” persists.

      Over the past couple of years, due to a major shift in public opinion, we have arrived at something close to a global consensus that climate change is a major problem. Even George W. Bush now says that he is concerned about it.

      But there is no consensus on the best measures to deal with the problem, even among the experts, and the general public still does not grasp the urgency of the situation.

      The two Democratic candidates for the presidency in the United States promise 80 per cent cuts in emissions by 2050, and John McCain for the Republicans promises 50 per cent cuts by the same date, and nobody points out that such a leisurely approach, applied in every country, condemns the world to a global temperature regime at least 3-4C warmer than today.

      Nobody points out that those are average global temperatures which take into account the relatively cool air over the oceans, and that temperatures over land would be a good deal higher than that. Few people are aware that these higher temperatures will prevent pollination in many major food crops in parts of the world that are already so hot that they are near the threshold, and that this, combined with shifting rainfall patterns, will cause catastrophic losses in food production.

      And hardly anybody says that it is going to get really bad as early as 2030 unless we get global emissions down by 80 per cent by 2020, because “everybody knows” that that is politically impossible, and nobody wants to look like a fool. So we must just hope that physics and chemistry will wait until we are ready to respond.

      But here is a bulletin from the front. Over the past few weeks, in several countries, I have interviewed a couple of dozen senior scientists, government officials and think-tank specialists whose job is to think about climate change on a daily basis. And not one of them believes the forecasts on warming issued by the Intergovernmental Panel on Climate Change last year. They think things are moving faster than that.

      The IPCC’s predictions in the 2007 report were frightening enough. Across the six scenarios it considered, it predicted “best estimate” rises in average global temperature of between 1.8C and 4C by the end of the 21st century, with a maximum change of 6.4C in the “high scenario”.

      But the thousands of peer-reviewed scientific papers that the IPCC examined in order to reach those conclusions dated from no later than early 2006, and most relied on data from several years before that.

      It could not be otherwise, but it means that the IPCC report took no notice of recent indications that the warming has accelerated dramatically. While it was being written, for example, we were still talking about the possibility of the Arctic Ocean being ice-free in late summer by 2042. Now it’s 2013.

      Nor did the IPCC report attempt to incorporate any of the “feedback” phenomena that are suspected of being responsible for speeding up the heating, like the release of methane from thawing permafrost. Worst of all, there is now a fear that the “carbon sinks” are failing, and in particular that the oceans, which normally absorb half of the carbon dioxide that is produced each year, are losing their ability to do so.

      Maybe the experts are all wrong. Here in the present, out ahead of the mounds of data that pile up in the rear-view mirror and the studies that will eventually get published in the scientific journals, there are only hunches to go on.

      But while the high-level climate talks pursue their stately progress towards some ill-defined destination, down in the trenches there is an undercurrent of suppressed panic in the conversations.

      The tipping points seem to be racing towards us a lot faster than people thought.

      * Gwynne Dyer is a London-based independent journalist whose articles are published in 45 countries

    • P. Lewis // February 5, 2008 at 10:51 am

      Re the Joe D’Aleo reply.

      Hmm!

      Your point with white noise isn’t valid, because the white noise is truly random, finding a signal in it is like finding pictures in clouds, the human mind will pop out something.

      I’m somewhat perplexed by this statement, Mr D’Aleo. Perhaps you need to re-read what Tamino actually wrote on this score. There’s no human mind finding pictures in clouds or the like (except maybe you in your initial analysis). It’s a pure mathematical exercise. It is merely an illustration of what happens when you apply an arbitrary 11-point moving average to a data set that initially, and knowingly, contains no correlation and then calculate the new coefficient of determination. Now apply that thought to a smoothed time-series data set (e.g. like the PDO and AMO indexes).
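      That pure mathematical exercise can be replayed in a few lines (a sketch only, not Tamino’s actual code; the series length of ~a century of annual values, the trial count, and the seed are my assumptions, while the 11-point window matches the moving average mentioned above):

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth11(x):
    """Centred 11-point moving average."""
    return np.convolve(x, np.ones(11) / 11, mode="valid")

n, trials = 120, 1000  # ~a century of annual values, repeated many times
r2_raw, r2_smooth = [], []
for _ in range(trials):
    # Two series that by construction share nothing.
    a, b = rng.normal(size=n), rng.normal(size=n)
    r2_raw.append(np.corrcoef(a, b)[0, 1] ** 2)
    r2_smooth.append(np.corrcoef(smooth11(a), smooth11(b))[0, 1] ** 2)

print(f"mean R^2, raw series:      {np.mean(r2_raw):.3f}")
print(f"mean R^2, after smoothing: {np.mean(r2_smooth):.3f}")
```

      No human mind finds any pictures here: the typical coefficient of determination between two completely unrelated series is inflated several-fold simply by smoothing both of them first.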

      [And since I composed this comment before Bob North's superb example, congrats to him on producing Tamino's well-known point in an actual data set.]

      For those with a developing interest, a time-series analysis 101 can be obtained at Statsoft.

      For those who want an independent take on the problems of autocorrelation (though I ask myself why one would want this when Tamino has produced adequate evidence of this feature in this and other posts over the last 2 years and when it is a well-known problem at the coal-face in data analysis in science and finance/economics), you might like to look here and here (the latter being just a crib sheet).

      Now Tamino is obviously too courteous to raise the obvious point arising from your comment

      Maybe you never had a real job and had to work with real data to make real forecasts that had to satisfy real clients to make real money. … That is what we did in my last company developing statistical models using the all the teleconnections to decide which would provide the best probability verifications of what future anomalies will be. … We preferred to work with real data and let it drive our choices and methods.

      Enough said?

      And an obvious question then relates to what Mr Watts should do now in light of Tamino’s dissection of Joseph D’Aleo’s “paper”. Does it need to be asked? Does it need to be answered?

      At the very least, I would have thought, in the interests of scientific discussion and balance Mr Watts should post a prominent link to Tamino’s piece about Joseph D’Aleo’s “paper” (but perhaps this has already been done — in which case one could say well done Mr Watts).

    • P. Lewis // February 5, 2008 at 11:16 am

      Re Evan Jones’s comment.

      Very well.

      I suggest that the PDO/AMO, CO2, and Solar correlations be run with, oh, say, a 0.4C reduction to the world 1980-1998 trend and see if that correlates better or worse.

      Why would one want to “run with, oh, say, 0.4C reduction to the world 1980-1998 trend”?

      Would it be to remove the global warming signal from the PDO and AMO indices perhaps?

      Tamino’s points about how such an analysis is carried out would remain.

      [Aside: unless Joe D'Aleo used the raw temperature data -- and his links seem to indicate otherwise -- then the index values he has plotted are the PDO and AMO data which have already been detrended to remove global warming signals (as is made clear in the accompanying information for the data).

      Sigh ... yes, I know what's likely to ensue now: more useless septic tripe.]

      And anyway, you wouldn’t want to “take” 0.4°C from each individual year from 1980 to 1998 would you? Wouldn’t that be silly, the Aside notwithstanding?

    • Barton Paul Levenson // February 5, 2008 at 1:16 pm

      Not to mention “Student.”

    • Deech56 // February 5, 2008 at 1:17 pm

      John Mashey, to answer your question (Can Data Torturers Be Stopped?):

      “Many, if not all, of these data-torturing techniques have been familiar to experts for years. Some were described in the aptly titled book How to Lie with Statistics, published nearly 40 years ago. Unfortunately, little has been done to alert the medical community to these abuses, or to eradicate them.

      “How can data torturing be prevented? It cannot. However, journals can demand information from authors that will discourage it:…”

      In the scientific literature, maybe, if editors and reviewers are vigilant. In the blogosphere - eh…

    • tamino // February 5, 2008 at 2:07 pm

      Most of yesterday, this was one of the most active threads ever. Then I suggested that those who want to discredit the surface record can go elsewhere … and suddenly it got real quiet.

      But that (non-)issue is flirting with a revival. It’s not appropriate on this thread.

      Perhaps soon I’ll open a thread just for those who want to argue about the reliability of the surface thermometer record. It’s not that it hasn’t been done to death repeatedly, but people seem to like to argue, and about this in particular. In the meantime, comments here should be on topic, or at least interesting.

      Also, I’ll take the opportunity to give D’Aleo credit where it’s due. He didn’t just propose a crazy idea and make claims on that basis; he didn’t just report correlations which we couldn’t verify for lack of sufficient information. He actually did analysis, and wrote it up in sufficient detail that it could be evaluated.

      That’s the way it’s supposed to be done. I don’t agree with his result (obviously) and I don’t think it’s meaningful anyway, but it’s there for all to see and to confirm/deny based on its merits.

    • JM // February 5, 2008 at 3:36 pm

      Evan Jones: “But I don’t think [D'Aleo's] motive matters. Is he right or is he wrong? That is the question.”

      He’s wrong.

      Tamino does a great job here, but because of his background in time series analysis he sometimes underplays something that is very dear to the hearts of physical scientists: the math doesn’t matter **** if the model isn’t based in the real world. You had better be prepared to back up your arguments with reference to real-world physics, or any amount of hand waving over numbers doesn’t matter diddly-squat.

      So we sometimes end up on this forum with endless hair-splitting that serves more to educate the ignorant (I include myself in that group) in the subtleties of statistical analysis than to illuminate what is actually happening.

      In this case, however, Tamino gets it right. “Teleconnection” is just a fancy word for the obvious. Two places that are next to each other will have similar climate, on any scale you care to name. Tamino is also right in pointing out that the US is not the globe, and that while the North Pacific and the North Atlantic are both next door to the US, they aren’t the whole globe either.

      D’Aleo compares the US to its adjacent oceans and finds their climate is related. Big deal. Surely this is not even worth “pictures at 11″.

      Going on and claiming that this commonplace observation undermines the science on a global scale? Ummm. 3 exclamation points are overdoing it, I think.

    • JM // February 5, 2008 at 3:44 pm

      Me: ” .. Two places that are next to each other will have similar climate ..”

      Sorry, that should have been “correlated climate”

    • Barton Paul Levenson // February 5, 2008 at 5:28 pm

      fred writes:

      [[If this were any other field, we would not insist on defending the use of defective instruments. Especially when it appears there is no need to use them anyway.]]

      No. You simply do not understand how scientists use data. You do not throw out biased data. You correct for the bias.

      The fossil record is biased toward creatures with hard parts. But palaeontologists don’t throw out the fossil record.

      The sky surveys are biased toward galaxies earlier in time. But cosmologists don’t throw out the sky surveys.

      Your pals at Surfacestations.org and the right-wing denialosphere want to toss the land surface temperature record because they think (wrongly) that if they do, evidence of global warming will go away. They don’t seem to get that warming has also been detected in sea surface temperatures (are there badly sited temperature stations or urban heat islands on the ocean?), borehole temperatures, balloon radiosonde temperatures, satellite temperatures, melting glaciers and polar ice, tree lines moving toward the poles, growing and blooming seasons coming earlier each year, animals migrating toward the poles, etc.

    • Evan Jones // February 5, 2008 at 5:42 pm

      Hey! It got “real quiet” because a.) It was getting darn late and b.) My last post wound up on the D List!

      The only reason the surface temps record came up was my suggestion that the standard graph should be “bent” a bit to compensate for microsite bias in order to see if PDO/AMO, TSI, or CO2 correlate better.

      I will be happy to reply to the recent posts if/when I can. Back later.

      “Light=O”, Out!

    • Jeremy Shaw // February 5, 2008 at 5:59 pm

      Can someone answer a question for me, relating to Chris’ comment on feedbacks above. I’ve seen the LW “enhancement” of 150 to 170 W/m2 cited above before, but looking at Held and Soden figure 1 (http://www.gfdl.noaa.gov/reference/bibliography/2006/bjs0601.pdf), if you go to the line resembling “all feedbacks” it looks like ~1.5 W/m^2/K. Don’t you just do 1.5 x 3 degrees = 4.5 W/m^2?

      That would be 4 W/m^2 from CO2 + 4.5 W/m^2 from all feedbacks = no more than 10. How do you get to 20, or am I just looking at this all wrong?

    • Evan Jones // February 5, 2008 at 6:59 pm

      “And anyway, you wouldn’t want to “take” 0.4°C from each individual year from 1980 to 1998 would you?

      No, no. Start at 1/18 that and pro-rate!

      cce: The IPCC version of surface temperatures indicates a c. 0.8C increase from 1979 to the 1998 peak.

      I am saying that the real increase may be around half that. Or c. 0.4C lower.

      You would seem to be indicating the less extreme tropospheric satellite measures. I would not reduce those measures by 0.4C! I suspect the increase you are claiming is probably closer to what’s actually going on.

    • Evan Jones // February 5, 2008 at 7:05 pm

      “Perhaps soon I’ll open a thread just for those who want to argue about the reliability of the surface thermometer record. It’s not that it hasn’t been done to death repeatedly, but people seem to like to argue, and about this in particular.”

      Well, it is an ongoing argument–on an ongoing set of observations. Such things die many times before their death. I’ll be there if you do.

      P.S., your fairminded comment re. Mr. D’Aleo noted, appreciated (though I still think you were too hard on him).

    • Chris Colose // February 5, 2008 at 7:51 pm

      Looks like they got another post over there…if you want some laughs, check it out: January the coldest month in x number of years!!! This must be the new “global warming has stopped.”

    • Timothy Chase // February 5, 2008 at 8:20 pm

      fred wrote:

      The US appears from another source to be 3,537,441 square miles, which if all this is correct, would put the US at around 5% of land area.

      This also is not the whole story. The question is, after you add back in the land areas of the rest of the world that are actually covered to appropriate levels by surface stations and omit those that are not, then how much of the planet’s surface is covered, and what proportion of that is the US?

      All that sounds about right. Incidentally, I at least tend to respect the straight science/facts I find in Wikipedia. Still, 5% of the land area is small enough that it isn’t very representative of land per se, and it is still less than 2% of the global surface area.

      But what is more important than simply the proportion of surface area is that it is all pretty much located in the same place. The temperature anomalies of the entire contiguous United States are strongly correlated.

      The success of Hansen’s and Barberra’s approach depended on the principle that temperature anomalies have a much larger scale than absolute temperature. Consider a mountain on which it can be much cooler on one side than the other. This example illustrates how absolute temperature patterns can vary sharply over relatively short distances. On the other hand, temperature anomalies are typically large-scale events driven by Rossby Waves. Rossby Waves are slow-moving waves in the ocean or atmosphere, driven from west to east by the force of Earth spinning. We see such waves in the atmosphere as large-scale meanders of the mid-latitude jet stream.

      “If it is an unusually warm winter in New York, it is probably also warm in Washington, D.C., for example,” Hansen explained. “At high- and mid-latitudes Rossby Waves are the dominant cause of short-term temperature variations. And since those are fairly long waves we didn’t think we needed a station at every one degree of separation.”

      Earth is Cooling…No It’s Warming
      http://earthobservatory.nasa.gov/Study/GISSTemperature/giss_temperature2.html

      Likewise, the region is largely affected the same way by climate oscillations, which, however, will not affect the entire globe the same way.

      Please see:

      The Oceanic Influence on North American Drought
      http://oceanworld.tamu.edu/resources/oceanography-book/oceananddrought.html

      And some of these oscillations appear to be sensitive to changes in climate forcing, such that for example, the Arctic Oscillation and ENSO (for example) tend towards their positive phase under stronger solar and greenhouse gas forcing.

      For example, please see:

      We utilize the GISS Global Climate Middle Atmosphere Model and eight different climate change experiments, many of them focused on stratospheric climate forcings, to assess the relative influence of tropospheric and stratospheric climate change on the extratropical circulation indices (Arctic Oscillation, AO; North Atlantic Oscillation, NAO). The experiments are run in two different ways: with variable sea surface temperatures (SSTs) to allow for a full tropospheric climate response, and with specified SSTs to minimize the tropospheric change. The results show that experiments with tropospheric warming or stratospheric cooling produce more positive AO/NAO indices. Experiments with tropospheric cooling or stratospheric warming produce a negative AO/ NAO response. For the typical magnitudes of tropospheric and stratospheric climate changes, the tropospheric response dominates; results are strongest when the tropospheric and stratospheric influences are producing similar phase changes. Both regions produce their effect primarily by altering wave propagation and angular momentum transports, but planetary wave energy changes accompanying tropospheric climate change are also important.

      Rind, D., Ju. Perlwitz, and P. Lonergan, 2005: AO/NAO response to climate change: 1. Respective influences of stratospheric and tropospheric climate changes. J. Geophys. Res., 110, D12107, doi:10.1029/2004JD005103
      http://pubs.giss.nasa.gov/abstracts/2005/Rind_etal_2.html

      Given the teleconnections due to Rossby waves and climate oscillations and the relative closeness of stations in the contiguous states, temperature anomalies among those stations are strongly correlated with one another, and thus US temperature anomalies are very unrepresentative of the world as a whole. However, given the distances involved in teleconnections, it does not take a great many stations to measure the temperature anomalies for any region of the world, assuming those stations are properly distributed; and given the law of large numbers, the error associated with the average of those stations will generally become smaller the more stations one includes in that average.
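
      A quick numerical sketch of that law-of-large-numbers point (the station count, noise level, and anomaly value below are made up purely for illustration): the spread of the estimated regional mean shrinks roughly as 1/sqrt(N) as independent station errors average out.

```python
import random
import statistics

def mean_error_spread(n_stations, n_trials=2000, true_anom=0.5, sigma=1.0, seed=42):
    """Std. dev. (across trials) of a regional anomaly estimated by
    averaging n_stations noisy station readings.  All numbers hypothetical."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        readings = [true_anom + rng.gauss(0.0, sigma) for _ in range(n_stations)]
        estimates.append(sum(readings) / n_stations)
    return statistics.stdev(estimates)

spread_4 = mean_error_spread(4)      # few stations: noisy regional mean (~sigma/2)
spread_100 = mean_error_spread(100)  # many stations: error ~1/sqrt(N) smaller (~sigma/10)
print(spread_4, spread_100)
```

      With 100 stations rather than 4, the error of the average drops by a factor of about five, exactly as the 1/sqrt(N) rule predicts.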

      *

      Now you of course also raise the issue of the reliability and quality of station data. Siting effects and so forth. I will keep this short since Tamino wants to move on.

      A change in siting will produce a jump in the temperatures measured at a given station. It will not produce a trend. A change in siting is something one can pick up by that sudden jump in the station’s temperature record, a jump which does not show up in the temperature records of neighboring sites. The climatologists who specialize in this area know what to look for and know how to correct for it. There have been plenty of studies. Likewise there are plenty of other indications that global temperatures are rising just as the surface stations indicate.

      For example, there is the satellite temperature record for the lower troposphere:

      However, over the Northern Hemisphere land areas where urban heat islands are most apparent, both the trends of lower-tropospheric temperature and surface air temperature show no significant differences. In fact, the lower-tropospheric temperatures warm at a slightly greater rate over North America (about 0.28°C/decade using satellite data) than do the surface temperatures (0.27°C/decade), although again the difference is not statistically significant.

      2.2.2.1 Land-surface air temperature
      http://www.grida.no/climate/ipcc_tar/wg1/052.htm#2221

      Then there is the satellite-measured skin temperature:

      Inter-annually, the 18-year Pathfinder data in this study showed global average temperature increases of 0.43 Celsius (C) (0.77 Fahrenheit (F)) per decade.

      By comparison, ground station data (2 meter surface air temperatures) showed a rise of 0.34°C (0.61°F) per decade, and a National Center for Environmental Prediction reanalysis of land surface skin temperature showed a similar increasing trend in global and land surface temperature, in this case 0.28°C (0.5°F) per decade. Skin temperatures from TOVS also prove an increasing trend in global land surface temperatures. Regional trends show more temperature variations.

      April 21, 2004
      Earth Observatory: NASA News Archive
      Satellites Act as Thermometers in Space, Show Earth has a Fever
      http://earthobservatory.nasa.gov/Newsroom/NasaNews/2004/2004042116878.html

      Then there are the borehole temperatures, sea surface temperatures and so-forth. In fact, it all adds up to a coherent picture, fred, from our study of the absorption and emission of photons in transitions to and from the different vibrational, rotational and rovibrational states of the various greenhouse gas molecules studied in quantum mechanics and radiation transfer theory to climate oscillations and Rossby Waves.

    • Dodo // February 5, 2008 at 8:34 pm

      Tamino: “It’s the global average temperature that will show the signs of human influence unambiguously;”

      Will show? When? Or, to the spirit of this thread: Don’t you think the GMT of today shows any influence of AGW????

      [Response: 1. Does already. 2. Already does. 3. Yes, it does.]

    • Timothy Chase // February 5, 2008 at 8:52 pm

      Dodo wrote:

      Will show? When? Or, to the spirit of this thread: Don’t you think the GMT of today shows any influence of AGW????

      When?

      In the logical order: after one looks at and analyzes the data. In psychological terms, however, genuinely looking at the data typically comes after one puts down the word games.

    • Eli Rabett // February 6, 2008 at 5:41 am

      Teleconnections in climate can exist over great distances and skip places in between. A classic is how El Nino correlates with rain in the Caribbean (and with hurricanes).

      And frankly I don’t understand this fascination with the identity of stuffed animals and computer programs who blog

    • fred // February 6, 2008 at 8:30 am

      BPL, they are none of them my pals, so please avoid personal aspersions. Meanwhile, your analogy continues to be misleading.

      The problem with some of the stations is that the instruments are out of spec. It is not that, like the paleo samples, there are more of some sorts than others. Well, that may be a problem too, but it’s not the one anyone is worried about.

      We are attempting to test code for reliability. We establish a spec for the test data set. An error creeps in. Sometimes we have used an in-spec data set, sometimes not. We do not know whether any errors, or what kinds of errors in the testing results, have been induced by the out-of-spec data set. What to do? Start out by using only the results from the in-spec data set.

      Imagine if it were readings of body temperature in some illness, with a view to correlating survival rates with fever levels. Some of our thermometers don’t meet spec. What effect this has had on readings, we do not know. What do we do? Start out using thermometers that meet the spec.

      Or, change the spec. That could be done too. Maybe we got it wrong. But one or the other.

      It’s not about throwing out data. It’s about doing measurements with instruments that are in spec. If you don’t, you don’t know whether it is data.

      What you are insisting on is that it is legitimate to set up standards for instruments, and then use instruments which do not meet the standards. It’s just ridiculous.

      [Response: No more of this on this thread; you and BPL should move this conversation to the Open Thread.]

    • Deech56 // February 6, 2008 at 11:26 am

      RE: Eli Rabett // February 6, 2008 at 5:41 am

      “And frankly I don’t understand this fascination with the identity of stuffed animals and computer programs who blog”

      C’mon, Eli. It’s the only research some people have ever done.

    • Barton Paul Levenson // February 6, 2008 at 1:30 pm

      fred, find a bunch of sites that don’t meet your standards, and see what their temperature trend has been for the last 30 years or so. Then find a bunch that do meet your standards and find their temperature trend. Contrast and compare. See if the trend lines are significantly different from one another. If they are, you have a case — and can probably get the paper published in the Journal of Geophysical Research or GR Letters.

      [Response: No more of this on this thread; you and Fred should move this conversation to the Open Thread.]

    • Marion Delgado // February 6, 2008 at 9:39 pm

      Oooooh, I am green with envy of Hank Roberts. That was so the exact right moment to say “not even wrong.”

    • Marion Delgado // February 6, 2008 at 9:46 pm

      Reason for pseudonyms:

      Denialist louts are tied, not to the scientific community, but to the same brownshirt network that sends death threats to W-appointed judges in Pennsylvania.

      Only a fool makes it easy for them.

      Also, a pseudonym is not for hiding opinions and authorization. That’s called being ANONYMOUS. Like about half the climate denialist trolls are.

      We’re ENTIRELY accountable for what we say. We just don’t make it easy for THUGS to harass us at home and at work, as you denialist fascist trolls have a very well-documented history of doing.

      Spare us the fake surprise.

    • dscott // February 7, 2008 at 3:40 pm

      But modern climate science doesn’t support the idea that all parts of the globe will warm equally under the influence of greenhouse gases, it contradicts it. It’s the global average temperature that will show the signs of human influence unambiguously; we expect strong regional differences in temperature change.

      Glad you acknowledge this, so then why are you not questioning why the Southern Hemisphere hasn’t shown any warming in 25 years? Why are you accepting a GAT calculation that essentially allows the Northern Hemisphere (Europe and Asia) to swamp the calculation, which gives a false view of planetary conditions?

      AGW has fallen flat on basic math; an average is an amalgamation of numbers which includes outliers and all numbers in between without respect to any frame of reference (apples and oranges). When those numbers are jumbled together all you end up with is fruit, not apples + oranges. No one who understands the basics of set theory would accept AGW as a valid conclusion. If the Southern Hemisphere isn’t warming and the US isn’t warming, then it’s not Global Warming. It’s regional warming of the European and Asian continents. Since CO2 is a global gas fairly evenly concentrated throughout the planet’s atmosphere, any claim that CO2 caused a warming in Europe and Asia but not in the US or the other HALF of the planet, i.e. the Southern Hemisphere, disqualifies CO2 as the cause. CO2’s physical properties don’t change in relation to the region! So you can’t have it both ways.

      [Response: Thank you!!! You've made it crystal clear that when people post ridiculous fabrications, like "the Southern Hemisphere hasn't shown any warming in 25 years" and "a GAT calculation that essentially allows the Northern Hemisphere (Europe and Asia) to swamp the calculation", such comments should be deleted.]

    • dscott // February 7, 2008 at 6:30 pm

      Of course if you wish to deny the data, go ahead and say lalalala, I don’t hear you, lalalala.

      More denial by deleting of course only proves our point: you are more interested in your own opinion and those who agree with you than in being challenged to think for yourself instead of blindly accepting the talking points. Please note Anthony Watts didn’t delete your comments. BTW, you had no problem posting over at Wattsupwiththat saying what you wish, but you have a problem with someone disagreeing with you at your site. Thanks for making that point with your nastygram. If you have such a hard time accepting what other people say, their opinions and interpretation of the facts, then your position is clearly not supportable.

      http://mclean.ch/climate/hemispheres.htm

      Oh, and please don’t insult people’s intelligence by claiming the Southern Hemisphere has less land area and that therefore that’s the reason why it’s less; the temperatures used in the calculation are based on both land and sea temps. You can’t claim ocean warming as a result of AGW in the face of La Nina or the southern oceans. If CO2 doesn’t explain the Southern Hemisphere, then it certainly isn’t the explanation for the temperature of the Northern Hemisphere. CO2 does not work in reverse below the equator.

      [edit]

      [Response: Your own link makes a liar out of you.

      And for your information, I've never posted a comment at Watts' site. And I've never censored or edited any of his comments here.

      B-bye.]

    • Barton Paul Levenson // February 7, 2008 at 7:28 pm

      Marion Delgado thinks the southern hemisphere hasn’t warmed for the past 25 years. Taking that figure as approximate…

      Let’s look at the evidence. Here are the southern hemisphere temperature anomalies for the last 30 years (NASA GISS):

      Year Anom

      1978 9
      1979 14
      1980 34
      1981 31
      1982 14

      1983 33
      1984 18
      1985 21
      1986 18
      1987 34

      1988 33
      1989 19
      1990 33
      1991 36
      1992 12

      1993 10
      1994 15
      1995 22
      1996 41
      1997 22

      1998 54
      1999 31
      2000 21
      2001 39
      2002 52

      2003 48
      2004 40
      2005 53
      2006 39
      2007 43

      If we run a linear regression of the anomaly on the year, we get a regression line of Anom = -1787.33 + 0.911902 Year, R^2 = 0.36, t = 3.99. So the trend is significantly upward at about 0.009 K per year (0.09 K per decade). Case closed.
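
      BPL’s fit is easy to verify; a minimal pure-Python least-squares sketch over the table above (anomalies in units of 0.01 °C, as GISS tabulates them) reproduces his numbers:

```python
# Reproducing BPL's least-squares fit to the GISS southern-hemisphere
# anomalies listed above (units are 0.01 degrees C, as GISS tabulates them).
anoms = [9, 14, 34, 31, 14, 33, 18, 21, 18, 34,
         33, 19, 33, 36, 12, 10, 15, 22, 41, 22,
         54, 31, 21, 39, 52, 48, 40, 53, 39, 43]
years = list(range(1978, 2008))

n = len(years)
xbar = sum(years) / n
ybar = sum(anoms) / n
sxx = sum((x - xbar) ** 2 for x in years)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms))
syy = sum((y - ybar) ** 2 for y in anoms)

slope = sxy / sxx                   # trend in 0.01 degrees C per year
intercept = ybar - slope * xbar
r_squared = sxy ** 2 / (sxx * syy)

print(f"Anom = {intercept:.2f} + {slope:.6f} * Year, R^2 = {r_squared:.2f}")
# prints "Anom = -1787.33 + 0.911902 * Year, R^2 = 0.36"
```

      Since the anomalies are in hundredths of a degree, the slope of 0.91 per year is the quoted 0.009 K per year, or about 0.09 K per decade.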

      This is, of course, lower than the trend in the northern hemisphere. This is to be expected, since the northern hemisphere is mostly land (low heat capacity), while the southern hemisphere is mostly ocean (high heat capacity). That the southern hemisphere would warm more slowly than the northern hemisphere was predicted a long time ago by the climate models and has, of course, been borne out by the evidence. But both hemispheres are warming.

    • P. Lewis // February 7, 2008 at 8:27 pm

      Marion Delgado said no such thing. dscott said something much like that, though.

    • Deech56 // February 7, 2008 at 9:11 pm

      RE: scott // February 7, 2008 at 6:30 pm

      “Please note Anthony Watts didn’t delete your comments. BTW- you had no problem posting over at Wattsupwiththat saying what you wish but you have a problem with someone disagreeing with you at your site. ”

      To be fair, Mr. Watts did allow my post pointing to this page to go through, although with a couple of asterisks apparently added to the url.

    • Leif Svalgaard // February 12, 2008 at 3:51 pm

      About Hoyt & Schatten: They are both outstanding scientists [and good friends of mine]. Their reconstruction of TSI was one of the first [if not THE first]. That it has been extended to 2004 simply means that satellite data was added to the end of the [otherwise unchanged] series. We have moved on with better data and more insight, and the ‘modern’ reconstructions show a MUCH smaller variation than H&S. In a private email to me, Hoyt has this to say about the shift of one cycle [which was deliberate]: “It is probably the weakest point of the paper”. In any event, using H&S for climate correlations is now bad science.

    • Leif Svalgaard // February 12, 2008 at 6:33 pm

      Of course, one could nitpick a bit and say that the size of the variation doesn’t matter for the correlation, only the relative changes. If I took H&S, removed the mean, scaled the variation down by a factor of a hundred, added the mean back in, and ran the correlation I would get the same correlation coefficient [albeit with a slope 1/100th of before]. But, in general, using H&S is not the thing to do nowadays.
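
      That nitpick is easy to check numerically: Pearson’s r is unchanged by shrinking one series’ variation about its mean, exactly as described. A quick sketch (the series below are made up for illustration):

```python
import random
import statistics

rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(200)]
y = [xi + rng.gauss(0.0, 0.5) for xi in x]  # y tracks x plus noise

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a)
           * sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

# Remove the mean, scale the variation down by a factor of 100,
# add the mean back in -- Leif's thought experiment.
mean_x = statistics.fmean(x)
x_shrunk = [mean_x + (xi - mean_x) / 100 for xi in x]

r_orig = pearson(x, y)
r_shrunk = pearson(x_shrunk, y)
print(r_orig, r_shrunk)  # identical, up to floating-point rounding
```

      The correlation coefficient is the same either way; only the regression slope changes with the rescaling.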

    • Marion Delgado // February 13, 2008 at 9:45 am

      Zeke:

      Sorry, but I MUST be counted among the few that, in fact, disagree with the project as a whole. As we have been forced to repeat like a broken record, cherry-picking a process to try to rid measurements of all errors, real or purported, that trend away from where you want the data to go does not make the record more accurate. It makes it less accurate. This is a key principle.

      The two things the surface station harassment program has added are, basically, an attempt to punish weather stations for not giving Party-approved data, and an insistence that a partisan and ad hoc theoretical model of distortion is proof of error even when actually proven error in the data is non-existent.

      It’s a sham. It has literally no connection with science whatsoever, and Watts is deliberately distorting the measured record of temperatures for partisan axe-grinding as part of an overall denialist strategy of confusion and false controversy for the purposes of delaying regulatory measures.

      How can anyone support such a thing who’s not themselves either very ideologically blind or a complete scientific prostitute?

    • Neil Fisher // February 28, 2008 at 11:57 pm

      Dear Tamino,

      I am confused! You say that correlations using averages introduce false correlations. This is certainly true. Yet in many posts, you suggest that the metric for AGW is global *average* temperature. Can you please explain why global average temperature is also not subject to spurious correlations? Thank you.

      [Response: Actually I stated that correlations using *moving* averages can introduce false correlations. For plain old averages, the "averaging intervals" don't overlap, but for moving averages they do overlap.

      Using 11-point moving averages means that any two consecutive moving-average intervals have about 91% overlap; the two moving averages are based on 10 of the *same* original data points, with only 1 of the 11 not in common.]

    • Neil Fisher // February 29, 2008 at 12:33 am

      Dear Tamino,

      thanks for the explanation.

      However, I suspect that you may be incorrect, in that averaging *does* increase correlation from what I have seen - sorry, I don’t have a URL handy, but William Briggs’ blog has a post on this, and my take on it is that *all* averaging increases correlation.

      [Response: Are you referring to this? If so, please note that he states that any *smoothing* process exaggerates correlations, including the use of running means (which is another term for moving averages).

      Depending on how one defines "smoothing" it could include or exclude taking ordinary (non-moving) averages, but they still don't have the property of exaggerating correlations.]
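
      The effect under discussion is easy to demonstrate: take two series of pure, independent noise, smooth each with an 11-point running mean, and the typical magnitude of their (entirely spurious) correlation jumps, because smoothing leaves far fewer effectively independent points. A sketch (all the series lengths and trial counts below are arbitrary):

```python
import random
import statistics

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def running_mean(series, window=11):
    """11-point moving average (a 'running mean')."""
    return [statistics.fmean(series[i:i + window])
            for i in range(len(series) - window + 1)]

rng = random.Random(123)
raw_r, smooth_r = [], []
for _ in range(300):
    a = [rng.gauss(0.0, 1.0) for _ in range(120)]
    b = [rng.gauss(0.0, 1.0) for _ in range(120)]  # independent of a
    raw_r.append(abs(pearson(a, b)))
    smooth_r.append(abs(pearson(running_mean(a), running_mean(b))))

print(statistics.fmean(raw_r), statistics.fmean(smooth_r))
# the smoothed pairs show a much larger typical |r|, despite zero real association
```

      The raw series typically correlate near zero, while the smoothed versions of the very same independent noise routinely show |r| several times larger, which is exactly why correlations computed on running means need wary interpretation.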

    • Neil Fisher // March 4, 2008 at 12:15 am

      Dear Tamino,

      it seems to me that ordinary (non-moving) averages as used for, say, monthly temperature data, would indeed exaggerate correlations - or, at least, widen confidence intervals. In that specific case, isn’t the purpose of doing the averages to “smooth” the data and make any correlations more obvious to the casual observer? IOW, to remove noise. Isn’t removing noise “smoothing the curve”? Or is there another reason to perform such averages?

      Sorry to belabour the point - I hope you can help this pleb ;-)

      [Response: In my opinion, yes taking ordinary averages can be considered smoothing. It won't necessarily exaggerate correlations, but it does indeed widen the confidence intervals.

      But taking averages isn't usually *thought* of as "smoothing" -- if I were asked to smooth a time series, those in the know probably wouldn't be satisfied with simple averages. So it's no surprise that the web site you link to would state that smoothing exaggerates correlations. I don't think we have any disagreement at all, except perhaps regarding the use of terminology.]

    Leave a Comment