Open Mind

Surface Stations

July 30, 2007 · 206 Comments

There’s a website called surfacestations.org which claims to be leading the effort to conduct an objective “survey” of surface stations which contribute data to the U.S. Historical Climatology Network (USHCN). USHCN data are added to the Global Historical Climatology Network (GHCN), and form part of the data used by NASA GISS to estimate the global average temperature anomaly.

It’s abundantly clear that the purveyors of surfacestations.org doubt the validity of the surface temperature data. They’re surveying the stations by organizing efforts to take photographs, and identifying what they believe to be problems with the location of temperature (and other weather) data collection equipment. They have photos of over 200 USHCN stations. Never mind that taking pictures isn’t really a good way to investigate the quality of the data, or that 200 is only a small fraction of the USHCN stations, or that the U.S. is only about 1.5% of the surface area of the globe, or that many other lines of evidence (including glacial retreat, ocean warming, species migration, sea level rise, sea ice extent, and satellite estimates of lower-troposphere temperature) corroborate global warming over the last 30 years. Never mind that the good people at NASA GISS have worked very hard to identify any problems that exist in the source data, so that they can be corrected when possible and discarded when not. The surfacestations.org people are clearly convinced that the evidence for global warming can’t be trusted, and that the warming in the surface temperature record from thermometers is mainly due to bad siting of surface stations.

If you visit their site, the first pictures you’ll see — featured prominently on the home page — will be two stations in California, Orland and Marysville. They call Orland a “well maintained and well sited” location and Marysville a “not-so-well maintained or well sited USHCN station”. Here are the pictures:

ssorland.jpg

ssmarys.jpg

Note also the temperature graphs overlaid on the photos; the Orland data show a sizeable cooling while the Marysville data show warming. The data are from NASA GISS, for the two stations in question. The implication is unavoidable, and in my opinion deliberate, that Marysville data show warming because the data are bad, while Orland data show cooling because the data are good.

Never mind that two stations do not a planet make. How accurate is the impression given by these graphs?

First, let’s take a closer look:

ssorland2.jpg

ssmarys2.jpg

Something is odd here. The Orland plot starts just after 1880, but the Marysville plot starts after 1900. Also, the y-axis on the Orland plot has a range of 6 deg.C while the y-axis on the Marysville plot only covers 5 deg.C; when comparing the graphs, that would tend to exaggerate the Marysville trend relative to Orland.

NASA GISS station data are available for download, so I retrieved the data for Orland and Marysville, CA. Since GISS corrects for identifiable biasing factors like station moves, instrument changes, time-of-observation bias, and yes, urban heating, I retrieved the corrected GISS data. From the raw monthly data, I computed yearly averages, keeping only those years that had data for all 12 months; plotting them together gives this:
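
The complete-years rule is simple to implement. Here is a sketch in Python (not the actual code used for this post; the station values in the toy example are invented for illustration):

```python
# Sketch of the complete-years rule: annual means from monthly data,
# keeping only years with all 12 months reported. The toy values below
# are invented for illustration; real data would come from the GISS files.
from collections import defaultdict

def annual_means(monthly):
    """monthly: iterable of (year, month, temp_in_celsius) tuples."""
    by_year = defaultdict(list)
    for year, month, temp in monthly:
        by_year[year].append(temp)
    # Discard any year that is missing one or more months.
    return {y: sum(v) / 12.0 for y, v in by_year.items() if len(v) == 12}

# Toy example: 1990 is complete, 1991 is missing December.
data = [(1990, m, 15.0 + 0.1 * m) for m in range(1, 13)]
data += [(1991, m, 15.0) for m in range(1, 12)]
means = annual_means(data)   # only 1990 survives the filter
```

Dropping incomplete years matters because a year missing, say, its winter months would produce a spuriously warm annual mean.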

maryorl.jpg

Suddenly the two stations don’t seem nearly as different as the graphs on surfacestations.org suggest. Much of that impression comes from the fact that the Orland data start decades before the Marysville data, during a period when the Orland time series was cooling strongly. The rest comes from GISS’s corrections for known biasing factors. Lo and behold, the two time series aren’t so different after all!

I have often stated that the period of “modern global warming” is 1975 to the present. Others may choose a slightly different starting point, but objective mathematical analysis indicates 1975 as the “turning point” for modern warming, and that is the time interval I have always referred to. Hence the really important question to ask about these data sets is, “what’s happened since 1975?” Here’s a graph of the annual averages for complete years since 1975, together with trend lines fit to both data sets:

maryorl2.jpg

Not only are the graphs strikingly similar, the trend rates are nearly identical. Analyzing the monthly, rather than annual, data for greater precision, and accounting for the effect of autocorrelation, the indicated trends with their error ranges are:

Marysville: 3.77 +/- 1.88 deg.C/century
Orland: 4.06 +/- 1.98 deg.C/century

The trend rates are nearly the same! The difference between the computed trends is much smaller than the uncertainty in the calculations. So, for all intents and purposes, these two stations indicate the same trend during the modern global warming era.
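
For readers who want to reproduce this kind of estimate: the post doesn’t specify the exact method behind those error ranges, but a common approach is to fit an ordinary least-squares trend and widen the slope’s standard error using the lag-1 autocorrelation of the residuals. A minimal sketch, with the caveat that this AR(1) adjustment is an assumption on my part, not necessarily the calculation behind the numbers above:

```python
# Sketch: OLS trend with the slope uncertainty inflated for lag-1
# autocorrelation of the residuals. The AR(1) adjustment is a common
# choice, but it is an assumption here -- the post does not say exactly
# how its error ranges were computed.
import numpy as np

def trend_with_ar1_error(t, y):
    """Return (slope, standard error of slope), AR(1)-adjusted."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # White-noise standard error of the slope.
    s2 = np.sum(resid ** 2) / (n - 2)
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    # Positive lag-1 autocorrelation means fewer effective data points,
    # so the naive error bar must be widened.
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    if r1 > 0:
        se *= np.sqrt((1 + r1) / (1 - r1))
    return slope, se
```

Whatever the details, the logic of the comparison is the same: two slopes whose difference is smaller than either error bar, as here, are statistically indistinguishable.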

Surfacestations.org calls the Orland station a “well-sited” station and Marysville a “not-so-well sited” station, and displays them prominently to give the impression that “good” data indicate cooling while “bad” data indicate warming. It’s ironic that when you look at the actual data, during the modern global warming era Orland indicates more warming than Marysville.

Imagine that.

UPDATE UPDATE UPDATE

Both Eli Rabett (in reader comments) and Gavin Schmidt (via email) have reminded me that the last step of NASA GISS adjustments — the correction for urban heating — uses data from nearby rural stations (like Orland) to apply a correction to non-rural stations (like Marysville). Hence in part, the urban heating correction applied to Marysville depends on the trend at Orland. Therefore these two data sets are not completely independent, so it’s not a complete surprise that they give similar trend rates.
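
The dependence Eli and Gavin point out can be made concrete with a toy version of the adjustment. To be clear, the actual GISS procedure (Hansen et al.) fits a two-legged line against a weighted combination of rural neighbors; the deliberately simplified one-slope version below exists only to show why the adjusted urban trend inherits the rural trend, not to reproduce GISS:

```python
# Deliberately simplified illustration of a trend-based urban adjustment.
# The actual GISS procedure (Hansen et al.) fits a two-legged line against
# a weighted mean of rural neighbors; this one-slope version exists only
# to show the dependence between the two series, not to reproduce GISS.
import numpy as np

def adjust_urban(t, urban, rural):
    """Remove from the urban series its excess linear trend
    relative to the rural neighbor."""
    t = np.asarray(t, dtype=float)
    urban = np.asarray(urban, dtype=float)
    urban_slope = np.polyfit(t, urban, 1)[0]
    rural_slope = np.polyfit(t, np.asarray(rural, dtype=float), 1)[0]
    return urban - (urban_slope - rural_slope) * (t - t[0])

# After adjustment, the urban trend equals the rural trend by construction:
# whatever trend the rural neighbor has, the adjusted urban series inherits.
```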

So, I retrieved the data for Marysville before the urban-heating-adjustment was applied, and recomputed the results. The final graph now looks like this:

maryorl3.jpg

The trend rates are now:

Marysville: 5.06 +/- 1.88 deg.C/century
Orland: 4.06 +/- 1.98 deg.C/century

The Orland trend is unchanged, since it’s a rural station so it had no urban heating adjustment to remove.

The two trend rates, Orland and Marysville-without-urban-heating-correction, are still within each other’s error limits. And of course, the Orland data (labelled “good” by the surfacestations.org people) still show a large warming rate 1975-present (twice the global average, in fact). But my statement that Orland shows more warming than Marysville depends on the urban-heating correction applied to Marysville, which in turn depends in part on Orland data itself.

Categories: Global Warming · climate change

  • bigcitylib // July 30, 2007 at 5:26 pm

    A few simple questions.

    The data is downloadable, but the graphs were created by you and presumably Mr. Watts, using whatever software you use for the purpose. Yes?

    Is it precisely the same data, or would Mr. Watts possibly be using uncorrected data? You say “corrected” and you say “raw”. Do you mean “raw corrected” data? Or does that make sense?

    [Response: I'd guess that the graphs on the surfacestations.org website use the uncorrected data; I used the corrected data. Both are available for download from NASA GISS.

    I created the charts which compare Marysville and Orland on the same graph (using Excel). I don't know what tool was used for the graphs from the surfacestations site, but just by looking at them I'd guess they were done using Excel also.]

  • mrxyz // July 30, 2007 at 6:05 pm

    tamino, this is an off-topic comment.

    I was wondering if you able to prescribe/suggest mathematically rigorous treatments (books and/or online resources) of:
    - climate science and modeling
    - greenhouse effect

    Thank you in advance.

    [Response: A good start is an excellent online resource, The Discovery of Global Warming by Spencer Weart.

    Ray Pierrehumbert has a new textbook out, "Principles of Planetary Climate." I got a pre-publication copy online, but now you probably have to hunt it down and buy it. Search for it on the web.

    MIT has a lot of open courseware, including a course in Climate.]

  • Eli Rabett // July 30, 2007 at 6:06 pm

    GISS corrects trends at non-rural stations by adjusting them to match the trend at nearby rural stations. Marysville is a non-rural station and Orland is the closest rural station to it. The trend at Marysville has been forced to match that at rural stations within 500 km. I have two posts on this: one about a specific example of how this was done, and one about how few rural stations there are and how these determine the trend.

  • bigcitylib // July 30, 2007 at 6:45 pm

    Thanks,

    As you probably know, Anthony Watts’ blog (he is behind the surface station “project”)

    http://www.norcalblogs.com/watts/

    …is filled with graphs illustrating pretty much the same theme: bad stations show warming, good stations show cooling. Wonder if they’ve all been fiddled with to give this result.

    [Response: I doubt the graphs have been "fiddled with." But I suspect there's some cherry-picking going on, and that the actual data haven't been examined very closely.]

  • John Cook // July 30, 2007 at 11:51 pm

    LOL, nice post. I wouldn’t be surprised to see the surfacestations.org homepage updated within the next few weeks - with the Orland, CA station replaced with another rural station showing more appropriate cooling from 1975 to 2007.

  • Alan Woods // July 31, 2007 at 3:09 am

    Tamino, I think you can be a bit stronger in your reply to BCL and state “I KNOW the graphs haven’t been ‘fiddled with’”. Because, as you know, they are the GISS-constructed graphs you get when you click on these stations on the GISS website.

    [Response: I didn't know that. But I'll take your word for it.]

    On the issue of cherry-picking, it concerns me that previously you have used 1978 as the start of the modern warming period, but this time you have chosen 1975, which just happens to be a much cooler year to start your analysis than 1978.

    [Response: Balderdash. I've only used 1978 when relating to satellite measurements of total solar irradiance, because that's when satellite TSI measurements begin. Otherwise, I've always used 1975.]

    I also notice that there are many data points missing in the Marysville analysis. Are these data that GISS has discarded due to lack of quality?

    [Response: Those were years which did not include data for all 12 months. For the numerical analysis, I used monthly data and analyzed anomalies, so that not all 12 months in the year are needed.]

    I don’t doubt that GISS works very hard to correct the data when possible, but when you have to rely on the veracity of the rural stations to do this, you’re behind the 8-ball from the start. Add to this all the site changes, observation-time changes, technology changes, microsite environment changes, etc. that an adjustment needs to be made for, and I can’t help but think a better description is ‘adjustment’ rather than ‘correction’.

    [Response: Corrections give us the best available data set, and add to the statistical strength of the results. Why would anyone want any less? The term "adjustments" carries a connotation of deliberate deception. Is that what you're implying?]

    Again, I don’t doubt the GISS scientists work very hard to sort this stuff out, but you can’t help but think, with at least some of these stations, they’re polishing a meadow-muffin.

  • Alan Woods // July 31, 2007 at 3:42 am

    Tamino, I shouldn’t have to say that I am in no way implying a deliberate deception. The term “correction”, for me, implies that everything is fixed as it should be and we are seeing the true temperature. I take “adjustments” to mean that we have attempted to remove all the biases in the data to the best of our abilities, but we can’t claim it represents the true temperature, as you can never fix all the flaws. I think that is much closer to reality.

  • nanny_govt_sucks // July 31, 2007 at 5:23 am

    Something is odd here. The Orland plot starts just after 1880, but the Marysville plot starts after 1900.

    One of the infamous GISS “adjustments”. Some inconvenient 19th century warming was chopped off. More info: http://www.climateaudit.org/?p=1628

  • Andrew Dodds // July 31, 2007 at 10:08 am

    Couple of observations about the Marysville station..

    (a) The Cell tower. That implies a bit of extra shade, if anything, and hence cooler temperatures.
    (b) The Air-con exhausts. They don’t face the station; this is important, because convection will take the heated air straight up and away, drawing air in from the sides.. which implies cooler temperatures, again.
    (c) The asphalt. You would expect a step change in temperature when it was laid, not a trend in temperature.

    Bottom line is that there is no reason to think that every change will give a higher temperature.

  • Marion Delgado // July 31, 2007 at 10:21 am

    More zzzz yawn wha? hand-waving. Putting the urban station on a graph with the same scale as the rural one, and only looking at the years since they both had data, the difference is small.

    And it’s a lie to say that they’re just averaging growing urban areas in with rural areas. The denialists need new material more badly than Jackie Mason.

  • Dano // July 31, 2007 at 11:54 am

    The best part is referencing the cheer squad at the character assassination site.

    Best,

    D

  • Joel // July 31, 2007 at 1:58 pm

    I’ve followed this project from the start and as a volunteer have 10-12 stations under my belt. All told we have about 230+ completed, or 19% of the 1220 USHCN sites. I think that is pretty good considering we’ve only been at it 2-3 months.

    Why were those particular stations shown on the front page of the surfacestations.org web site? I’m almost certain it is because they were the first two or three surveyed by Anthony Watts. That’s all.

    A reader correctly points out above that the charts are from the GISS web site. I do not know if the various adjustments were done beforehand.

    OK, subjective qualifiers like “good” and “bad” should probably read:

    “good” = in compliance with NOAA’s own siting guidelines

    “bad” = not in compliance with NOAA’s own siting guidelines

    Presumably NOAA put some thought into these guidelines?
    You can talk all you want about parking lots not affecting trend. Hey I understand the argument, but the guidelines nonetheless specify a distance of 100 ft from any paved or concrete surface. Sounds reasonable to me, but if you don’t agree, talk to NOAA. Air conditioners aren’t specifically mentioned, but the notion that a station can be anywhere near an a/c is just silly.

    There is only 19% to draw conclusions from but from what I’ve seen NOAA/NWS/GISS certainly hasn’t “worked hard to identify any problems…” Rather, it appears to me that they haven’t put the first effort into basic quality assurance.

    BTW, my experience so far is that the NWS Coop stations in the back of private residences (I commend these people) are demonstrably better sited than the remainder, which tend to be at government facilities. Maybe someone can take an observation such as this and do something constructive.

  • dhogaza // July 31, 2007 at 4:28 pm

    One of the infamous GISS “adjustments”. Some inconvenient 19th century warming was chopped off.

    So warming isn’t real because researchers haven’t surveyed stations to remove bad ones, which means the warming that’s recorded is just an artifact of bad data.

    On the other hand, when GISS removes bad data from the record, it’s “infamous”, as in “bad”.

    Got it!

  • Cherry picking stations.org [Deltoid] · Articles // July 31, 2007 at 6:06 pm

    [...] has the scoop on the latest attempt to revive the old UHIs-mean-it’s-not-getting-warmer argument. Eli [...]

  • Lance // July 31, 2007 at 6:51 pm

    Nice job of completely missing the point. USHCN has guidelines for siting data collection stations and if a casual survey of those sites turns up issues then the credibility of the data is rightly called into question.

    Everyone should be interested in the data being free of signals from AC units, BBQ grills, asphalt parking surfaces, MiG fighters etc.

    Except perhaps those that are only looking to reinforce their preconceived notions.

    [Response: I suggest that you are the one who has completely missed the point.

    This post is not about the reliability of the surface temperature record. It's about the reliability of the effort organized by surfacestations.org. Any sincere, objective effort to improve the quality of surface temperature data is both laudable and welcome. But when the home page of their site prominently displays two graphs to give the impression that "good" stations show cooling while "bad" ones show warming without examining in any detail what the data sets actually reveal, when they feature the results of an incomplete survey effort with clear implications about what the conclusions are before the effort is even 25% complete, when they show pictures to make the surface temperature record look bad without any effort to draw statistically valid conclusions based on all the data, I begin to doubt the sincerity and objectivity of their efforts.]

  • captdallas2 // July 31, 2007 at 9:47 pm

    Interesting and informative. Just curious though: when using USHCN data, why not use a range more indicative of the industrial era? One hundred years, where both sites have data for the whole time frame, seems logical. There is an upturn in temperatures since the 1960’s, and it is nice that both sites match well since 1975. How well did they match prior to 1975?

  • MeGe // July 31, 2007 at 10:14 pm

    I was wondering about the Orland station.
    It’s presented as a good site but is it?
    There’s a nice lawn around the sensor (though not under it) and an orange tree next to it (which, says Watts, may cause problems later…)
    I was wondering what are the chances that they don’t use a sprinkler there in summer?
    Honest question, I’ve never been to CA.
    Then, of course, I was wondering what that water flume is for and how that field on the other side of it gets irrigated and what effect that might have?

    Incidentally, the Orland climate station is operated by a water users’ association.

    By the by, anyone aware of this paper (press release):
    http://www.ucmerced.edu/news_articles/02082007_professor_s_research_shows.asp

    More info on the Orland Project from the bureau of reclamation:
    http://www.usbr.gov/dataweb/html/orland.html

  • Steve Reynolds // July 31, 2007 at 10:27 pm

    >The Orland trend is unchanged, since it’s a rural station so it had no urban heating adjustment to remove.

    Apparently CA claims that Orland has been adjusted with a _negative UHI_ correction:
    http://www.climateaudit.org/?p=1844

    How does that affect your argument?

  • Marion Delgado // August 1, 2007 at 4:06 am

    Climate Audit claiming anything, by definition, cannot affect a serious argument. However, is it not the case that they are on the one hand positing Orland as a fine, upstanding, virgin rural site with great readings vs. Marysville as the Urban Heat Island Station with the many undiscovered, gradually increasing heat sources that only their photographs can reveal? And, on the other hand, as a station adjusted with a negative UHI correction?

    This is, in other words, a typical self-refuting argument.

  • caerbannog // August 1, 2007 at 5:00 am


    Apparently CA claims that Orland has been adjusted with a _negative UHI_ correction:
    http://www.climateaudit.org/?p=1844

    How does that affect your argument?

    Given that the *uncorrected* Orland data show a warming trend similar to that of the *uncorrected* Marysville data over the past few decades, probably not much….

  • captdallas2 // August 1, 2007 at 5:13 am

    Marion,

    You are so right! UHI should be a step function. America was built in a day. Let’s do the step in, say, 1950. That’s when all the construction was completed. It’s not like anything has changed since then.

    Luckily, there has been no radiation forcing changes, natural oscillations or non anthropogenic influences on the climate for the past five decades.

    That greatly simplifies the calculations! Lord have mercy if the NAO started changing because of salinity in the Beaufort Gyre. That would complicate things! It is so much more comforting with the science settled.

    Of course, reduction in the NAO would be a local not global event. How could a change in salinity cause anything to happen in the rest of the world?

    Some guy with the Woods Hole Oceanographic Institution did a study over a few decades that I found interesting. Unfortunately, he had data that referred to something he called the MWP. Since the Sargasso Sea is not what I would call subtropical (it is borderline), his research is not referenced, as far as I can tell, by the IPCC.

    Then, it is only proxy data. North American Bristlecone Pines are much more indicative of the world climate than the silly ass Atlantic Ocean critters.

    Trees, though, where I grew up tended to be influenced by rainfall more than temperature. Heck, them diatoms and algae that are in the saltwater all the time probably weren’t a good indication of temperature change anyway.

    I thought it looked good that the temperatures seemed to fit history until I found out there weren’t no MWP. Now that research is just shot to all heck! Next thing you know there won’t be a little Ice Age.

  • anon // August 1, 2007 at 7:50 am

    We are seeing a process of managed retreat. That’s how it looks to a lay person. We abandon MBH98 and the Hockey Stick, without admitting it is discredited. We abandon the Chinese surface station record, again with no admission of guilt. The Vostok cores made us quietly abandon the Paleo CO2/temp argument. We would probably have to abandon Thompson’s ice core data too, but he wisely refuses to publish it. Same goes for Esper.

    We now start abandoning the US station record pre-1975. You can make all the vituperative comments you like about CA, but the fact is, in the public mind, you are increasingly losing the argument. Because every time you produce some of the concealed data, it turns out to be junk, but you don’t admit it, you just ‘move on’ to some other, usually more recent, data series.

    You are doing more damage to the argument for AGW than CA could ever do, by the approach, the tone, and by the lack of factual detail in your comments. It speaks (perhaps mistakenly) volumes. I started out with few questions about AGW, I accepted it. Then I researched MBH98. At the moment I’ve come to think AGW is either hysteria or scientific fraud, or both. Not alone, you know. Increasing numbers of lay people are coming to the same conclusion. You should worry. Well, we should all worry, if it turns out to be real. You and RC will be among the main reasons we did not take it seriously enough soon enough.

  • Paul G // August 1, 2007 at 7:56 am

    “Never mind that taking pictures isn’t really a good way to investigate the quality of the data . . .”

    Pictures from satellites are sufficient to classify a surface site as urban or rural and pictures are mandatory, on an annual basis, for the CRN. Pictures can be an invaluable tool.

    “. . . or that 200 is only a small fraction of the USHCN stations”

    200 is nearly 20% of USHCN stations with more added daily.

    “. . . or that the U.S. is only about 1.5% of the surface area of the globe . . .”

    The only goal of the US surface site network is to measure that 1.5%.

    “. . . or that many other lines of evidence (including glacial retreat, ocean warming, species migration, sea level rise, sea ice extent, and satellite estimates of lower-troposphere temperature) corroborate global warming over the last 30 years.”

    Bad data isn’t validated by lumping it in with other data, no matter how good the other data is.

  • dhogaza // August 1, 2007 at 8:07 am

    There is only 19% to draw conclusions from but from what I’ve seen NOAA/NWS/GISS certainly hasn’t “worked hard to identify any problems…” Rather, it appears to me that they haven’t put the first effort into basic quality assurance.

    Well, then, apparently you’ve analyzed the statistical methods used to detect anomalous readings.

    Since you’ve done so, will you please - in detail - explain to us just why this data analysis is insufficient? Don’t be afraid to speak math and statistics, since after all you are arguing that you’re better at such things than the people responsible for analyzing the data.

    I’ll be very disappointed if you respond by saying that “the photographs prove the data analysis done by NASA/GISS etc is insufficient” if you can’t tell us in detail just why this is true.

    If you can do so, though - hooray! Because no one else behind this project, including RP, Sr., has done so.

  • Dano // August 1, 2007 at 12:14 pm

    By the by, anyone aware of this paper (press release):

    Yes. The effect is well known and has been for some time. And everyone who works with this sort of thing knows that the UHI ends when you leave pavement, esp when entering irrigated fields.

    Best,

    D

  • Phlinn // August 1, 2007 at 5:16 pm

    Marion, it would be self-refuting to argue that Orland is both good and bad, but it’s not self-refuting to argue that raw Orland is good but GISS Orland is bad.

    They are claiming that the raw data from Orland is much better than the raw data from Marysville. The GISS-adjusted versions, on the other hand, may not be. While attempts to remove known sources of bias are laudable, no one knows why there is an upward adjustment of Orland in the GISS-adjusted version. The worst argument would be that the GISS-adjusted versions match because they adjusted both data sets to make them match.

    If the GISS adjustments aren’t used

  • Phlinn // August 1, 2007 at 5:30 pm

    Please ignore the last sentence fragment in my post. I was going to suggest that GISS data might be moot if it’s not being used by USHCN, but I don’t know enough about what does or doesn’t use the GISS adjustments to make that claim. The implication at CA is that the adjustments aren’t all that important in any case, but I could easily be reading more into it than intended.

  • Dano // August 1, 2007 at 6:04 pm

    Bad data isn’t validated by lumping it in with other data, no matter how good the other data is. (PaulG above)

    There is no evidence that data are bad (IOW, this is evidenceless rhetoric).

    Taking photographs contributes zero (nothing, nil) to providing evidence to back the evidenceless rhetoric. Capice?

    No one of import is falling for this freshman debating stunt.

    HTH

    D

  • caerbannog // August 1, 2007 at 6:34 pm


    We are seeing a process of managed retreat. That’s how it looks to a lay person. We abandon MBH98 and the Hockey Stick, without admitting it is discredited.

    The Hockey Stick has been discredited only in the minds of those who don’t understand the importance of eigenvalue magnitudes. If you don’t know what I am talking about here, that simply means that you are in over your head.

  • Steve Bloom // August 1, 2007 at 6:44 pm

    anon 7:50 am: “At the moment I’ve come to think AGW is either hysteria or scientific fraud, or both. Not alone, you know. Increasing numbers of lay people are coming to the same conclusion. ”

    Interesting. Polling data shows a strong trend in the other direction. But why let facts stand in the way of what you would like to be true?

  • dhogaza // August 1, 2007 at 6:50 pm

    You can make all the vituperative comments you like about CA, but the fact is, in the public mind, you are increasingly losing the argument. Because every time you produce some of the concealed data, it turns out to be junk, but you don’t admit it, you just ‘move on’ to some other, usually more recent, data series.

    This, to be blunt, is a load of hooey.

    Your comment about “the public mind” isn’t even correct, because awareness of the problem has been steadily increasing. We’re seeing grassroots support for all sorts of measures to reduce CO2 emissions.

    And there’s a widespread and growing consensus among politicians worldwide, at all levels of government, that the scientific consensus is in, too.

    And as far as the data turning out to be “junk”, that’s simply a lie.

    Perhaps rather than “denialist” we should call these people “distortionists”… or … liars?

  • nanny_govt_sucks // August 1, 2007 at 6:52 pm

    On the other hand, when GISS removes bad data from the record, it’s “infamous”, as in “bad”.

    Similar temps in nearby Orland were kept. Why were high 19th-century temps OK for Orland, but had to be snipped for Marysville? GISS has some explaining to do.

  • Stephen McIntyre // August 1, 2007 at 7:37 pm

    “The Hockey Stick has been discredited only in the minds of those who don’t understand the importance of eigenvalue magnitudes.”

    You keep saying this over and over in different internet locations, but you don’t know what you’re talking about. If there was some sort of argument along the lines that you advocate, you can be sure that Mann or Ammann and Wahl would have floated it. They’ve thrown a lot of spitballs against the wall but even they saw the silliness of your argument. If you think that your argument is any good, submit it to GRL or elsewhere and I’ll reply. Or submit it to climateaudit and I’ll post a thread there.

  • Dano // August 1, 2007 at 8:49 pm

    Perhaps rather than “denialist” we should call these people “distortionists”… or … liars?

    Mendacicizers.

    Trying to find another totem, as the Hockey Stick totem didn’t work, eh Steve?

    Best,

    D

  • guthrie // August 1, 2007 at 8:57 pm

    That’s the bit I don’t understand: how can the hockey stick be discredited when RealClimate can put up a graph, incorporating the recommendations made by those statisticians, yet it still shows a hockey stick shape?

  • John Cross // August 1, 2007 at 9:42 pm

    anon // Aug 1st 2007 at 7:50 am :

    Looking at the list of topics you present, I would say that you are a reader of Climate Audit. While CA provides a service, it presents only one side of the argument. But more to the point, it is very much involved in looking at climate “history”, for lack of a better term.

    For me the basis of AGW is that adding CO2 will cause an increase in downward infra-red radiation. With that established, it is fairly obvious that under most circumstances an increase in IR radiation would cause warming. Are there complex mechanisms that would mitigate it? Perhaps but they are not published (and those that have been have not done well).

    Regards,
    John

  • nanny_govt_sucks // August 1, 2007 at 10:06 pm

    For me the basis of AGW is that adding CO2 will cause an increase in downward infra-red radiation. With that established, it is fairly obvious that under most circumstances an increase in IR radiation would cause warming.

    Except the links in your chain appear to be broken, because we DON’T see the correlation between increases in CO2 and warming. CO2 increases FOLLOW warming in the ice core record (and no acceleration is seen when CO2 jumps in), mid-century cooling occurred at a time when CO2 concentrations increased (aerosol “explanations” fail for many reasons), there’s a flattening of temp trends recently, again while CO2 concentrations rise and rise, and the most accurate global temp measurements that we have (satellite) show warming only in the Northern Hemisphere, i.e., it’s not global, which tends to rule out well-mixed CO2 as a culprit.

    So, it’s a nice hypothesis, and I agree that it SEEMS LIKE temperature should rise when you add more CO2 but it just doesn’t seem to be happening in the real world. So perhaps there are some other properties of CO2 that impart a negative feedback on global temps, or something else that far outweighs the CO2 greenhouse effect has come into play.

  • Paul G // August 1, 2007 at 10:48 pm

    dano said: “There is no evidence that data are bad (IOW, this is evidenceless rhetoric).”

    Siting is bad; contaminants are numerous. NWS rules are violated.

    “Taking photographs contributes zero (nothing, nil) to providing evidence to back the evidenceless rhetoric. Capice?”

    See above. NWS rules have been violated.

  • John Cross // August 1, 2007 at 11:00 pm

    Nanny:

    Re: ice core/CO2 relation, this is the most recent paper.

    Re: sulphate aerosols, I was not aware that “aerosol ‘explanations’ fail for many reasons”, but if you can support that “high quality” theme I am all ears.

    Re: Satellites, it looks like you are using S&C only. Why are you ignoring RSS and more importantly, did you apply Fu’s correction?

  • Dano // August 1, 2007 at 11:01 pm

    PaulG, you have no evidence the temps are affected.

    After the blockbuster paper is written sans temps, there will still be no evidence the temps are affected.

    Until temps are measured, there will be no evidence temps are affected.

    In short, there is no evidence temps are affected.

    I hope this helps you understand that folks know that surfacestations.org is not contributing data that will determine whether temps are affected.

    Best,

    D

    PS: this is not to say anyone arguing as I have thinks that no one should look into the matter. Folks arguing as I have are stating these people and their methods ain’t gonna do it.

  • ks // August 1, 2007 at 11:21 pm

    “CO2 increases FOLLOW warming increases in the ice cores record”

    would you then conclude that the CO2 coming out of the smoke stack at the local coal burning power plant is caused by rising temperatures? Maybe the past doesn’t represent the current situation…. maybe…

    Meanwhile, Paul G must be French; he’s all for “guilty until proven innocent” (get some numbers, Paul, then you can be taken seriously)

  • PS // August 1, 2007 at 11:57 pm

    I seem to be missing some key point. What in the Marysville picture is responsible for the observed trend? For instance, what feature of the parking lot would cause the temperature measured in 2005 to be higher than the temperature measured in 1995?

  • Chris C // August 2, 2007 at 2:39 am

    “Except the links in your chain appear to be broken, because we DON’T see the correlation between increases in CO2 and warming. CO2 increases FOLLOW warming in the ice core record (and no acceleration is seen when CO2 jumps in), mid-century cooling occurred at a time when CO2 concentrations increased (aerosol “explanations” fail for many reasons), there’s a flattening of temperature trends recently, again while CO2 concentrations rise and rise, and the most accurate global temperature measurements that we have (satellite) show warming only in the Northern Hemisphere, i.e., it’s not global, which tends to rule out well-mixed CO2 as the culprit.”

    So much wrong in a single paragraph.

    - CO2 follows warming in the paleo-record because it is acting as a feedback, not a forcing, in this instance. The forcing in the case of glacial/inter-glacial transitions is a change in solar insolation as a result of orbital fluctuations (Milankovitch cycles). This is known to, more or less, every climatologist worth their salt;

    - Amplification of the temperature response by CO2 during glacial-interglacial transitions is well documented. For example, from Lorius et al., 1990:

    “changes in the CO2 and CH4 content have played a significant part in the glacial-interglacial climate changes by amplifying, together with the growth and decay of the Northern Hemisphere ice sheets, the relatively weak orbital forcing”.

    During warming periods of approximately 5000 years, temperature rise preceded CO2 rise for the first 800 years.

    - The 1940-1970 cooling period is readily explained, both by measurement and by global climate models. The effect of aerosol concentrations on the Earth’s temperature is not yet fully understood, but their cooling properties are understood well enough to accurately reproduce the cooling period in GCMs. See: http://www.grida.no/climate/ipcc_tar/wg1/450.htm#fig127

    - There is no “flattening” of temps in any of the data sets I’ve seen. I think this is a reference to Bob Carter’s rubbish that “global warming stopped in 1998”. See Tamino’s recent post: http://tamino.wordpress.com/2007/07/24/pmod-vs-acrim/
    As a matter of fact, the rate of temperature change appears to be increasing.

    - The southern hemisphere is warming along with the rest of the world, with the exception (until recently) of the antarctic. For example, here is Australia’s long term temperature change:
    http://www.bom.gov.au/cgi-bin/silo/reg/cli_chg/trendmaps.cgi

    Africa and South America are similar, along with the Pacific Islands. The surface obs are consistent with the satellite data in both hemispheres.

    Nanny, please let me give you some advice, and I don’t mean to be nasty. There is a world of literature out there on this subject. Please have a read of it before contributing to discussions where you are clearly out of your depth.

  • caerbannog // August 2, 2007 at 3:25 am


    “If you think that your argument is any good, submit it to GRL or elsewhere and I’ll reply. Or submit it to climateaudit and I’ll post a thread there.”

    If “hockey-stick mining” were a real issue, you guys would have been able to plot your red-noise hockey-stick PC’s right along with Mann’s hockey-stick, on the same graph *with the same Y-axis scale*. But as we have seen in the Wegman Report, you had to *zoom* your hockey-stick by nearly an order of magnitude in order to get it to look like Mann’s (Fig 4.1 of the Wegman report). And what about fig 4.4? Are the y-axis units standard-deviation units? If so, those “hockey-stick” leading PC’s have a dynamic range of something like 0.1 sd-units. What gives with that?
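    For readers wondering what mining hockey sticks from red noise refers to, here is a minimal toy sketch (my own illustration, not code from the thread or the Wegman Report; the AR(1) coefficient and matrix dimensions are assumptions loosely modeled on the MM05 setup). It shows the two mechanical facts at issue: “short-centering” subtracts the calibration-period mean rather than the full-series mean, and the PCs returned by the decomposition have unit norm, which is why the y-axis scale matters when comparing to a real reconstruction.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, phi, rng):
    # AR(1) "red noise": each value is phi times the previous plus white noise
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_years, n_series, phi = 581, 50, 0.9   # dimensions are illustrative assumptions
X = np.column_stack([ar1(n_years, phi, rng) for _ in range(n_series)])

# "Short-centering": subtract the mean of only the final 79 values
# (the calibration period) instead of the full-series mean. Series that
# happen to trend in the late period keep a large offset and dominate PC1.
Xc = X - X[-79:, :].mean(axis=0)

# Leading principal component via SVD; columns of U have unit norm,
# so the sum of squares of each PC is exactly 1.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = U[:, 0]

print(pc1.shape, round(float(np.sum(pc1 ** 2)), 6))
```

    Because each PC is normalized to unit sum of squares, its raw amplitude says nothing by itself; plotting such a PC against a reconstruction requires putting both on a common scale, which is exactly the comparison being argued about above.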

  • dhogaza // August 2, 2007 at 6:01 am

    NGS, reveling in an endless state of ignorance, bloviates:

    CO2 increases FOLLOW warming increases in the ice cores record

    Which means nothing, of course.

    Of all the liars’ arguments, this perhaps is the weakest.

    And, yes, CO2 *does* contribute to lengthening the warming period after warming releases it to the atmosphere. You left out that little bit of physical reality in your lie.

    I’ve seen you post on several forums, and your tune is always the same - when reality interferes with your political beliefs, you deny reality.

    It’s sad, really.

  • John Cook // August 2, 2007 at 6:11 am

    nanny_govt, I’m curious as to how you conclude “no acceleration is seen when CO2 jumps in” regarding CO2 lagging temperature. From what I’ve read of analyses of CO2/T records, Hegerl 2006 calculates a climate sensitivity between 1.5 and 6.2C, and Annan 2006 pins it down to around 2.5 to 3.5C. E.g., according to paleoclimate studies (quite independent of climate modelling), temperature goes up around 3C when you double CO2.
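    The ~3C-per-doubling figure can be reproduced with back-of-the-envelope arithmetic (my sketch, not from Hegerl or Annan directly; the 0.8 K per W/m² sensitivity parameter is an assumed mid-range value), using the standard simplified CO2 forcing expression of Myhre et al. 1998:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    # Simplified CO2 radiative forcing expression (Myhre et al. 1998), in W/m^2
    return 5.35 * math.log(c_ppm / c0_ppm)

delta_f = co2_forcing(560.0, 280.0)   # doubling from ~280 ppm pre-industrial
lam = 0.8                             # K per (W/m^2): an assumed mid-range sensitivity
delta_t = lam * delta_f

print(round(delta_f, 2), round(delta_t, 1))   # 3.71 3.0
```

    So a doubling gives roughly 3.7 W/m² of forcing, and a mid-range sensitivity converts that into about 3 K of equilibrium warming, consistent with the range quoted above.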

  • Andrew Dodds // August 2, 2007 at 8:00 am

    NGS -

    Why does Venus have a surface temperature in the region of 700K, if increasing CO2 does not lead to increasing temperature (or even has a negative feedback)?

    As for the rest:

    (a) CO2 is not seen as the major driver of glacial-interglacial transitions, so any lag (if it exists) is irrelevant.
    (b) Mid-century slight cooling as a result of aerosols is well established; if you dispute this, provide references.
    (c) Temperature trends have not flattened recently.
    (d) Satellite measurements are not the most accurate - a huge amount of processing and correction is required.
    (e) The warming at high latitudes is an expected result of AGW; other ‘explanations’ such as solar forcing would not show this effect.

  • Marion Delgado // August 2, 2007 at 8:29 am

    Phlinn, I hope that’s the claim, since it’s the Marysville data that’s adjusted.

    However, I am deeply amused by this “retreat” meme. This is exactly like when I lived in E. Europe before the Berlin Wall fell. Honecker (no, I didn’t live in E. Germany) kept claiming the tide was turning the other way, even as people poured into Hungary from E. Germany.

    Keep it up, and bring it on, denialist whores. This is a good hill for you all to die on - your claim that climate science is “in retreat” because a rogue regime stole power in the US and put commissars in place of people who knew and supported science. I have no doubt you can worm quite a few 6000-year-old-earth, tobacco-is-good-for-you, global-warming-is-a-myth rulings out of the packed Supreme and Federal courts, but that’s about it. Otherwise you’re already headed for the ash-heap of history. What’s so disgusting is, you’re trying to do as much damage to the world as possible for your little nut cult as you sink down out of sight.

  • cce // August 2, 2007 at 12:23 pm

    So, if the warming is only in the northern hemisphere, and therefore not the result of CO2, and “skeptics” all seem to believe it’s the sun, then we must assume that the sun likes warming the northern hemisphere and doesn’t like warming the southern hemisphere. Either that, or the climate is simultaneously complicated and simple — complicated when it comes to solar induced warming, extremely simple when it comes to GHG.

    Or maybe, just maybe, the warming occurs predominantly over land, which is mostly in the northern hemisphere, with even more warming in the Arctic, just as the simplest models predicted 30 years ago.

    And aerosols largely do explain the mid-century flat period in temperature.

  • Adam // August 2, 2007 at 12:54 pm

    Of course, a proper survey of the stations would have a detailed, accurate and reasonably precise diagram of the layout of the site, with all the potential obstructions and contaminants clearly marked, with both direction and distance labelled.

    Secondly, there would be a detailed analysis of those contaminants - e.g. how often an a/c unit is switched on and what the output temperatures are.

    Thirdly, there would be a detailed description of what those effects are - e.g. the temp gradient away from the a/c units, and how differing wind speeds and directions affect the gradient, etc.

    Finally, there would be a station history with references to how station changes - including contaminant changes - have affected the data.

    Carrying out such a survey would be extremely useful, even if for only a handful of sites.

  • Boris // August 2, 2007 at 2:36 pm

    “Siting is bad; contaminants are numerous. NWS rules are violated.”

    That is the sum total of what the pictures show. You need to prove your assumption that such violations invalidate the data.

    I predict that surfacestations.org et al. will claim that all poorly sited stations should be removed from any analysis. Then they (or someone else) will delve into the remaining station histories and declare those invalidated because they were moved or a cow breathed on them in 1956. Pretty soon they will claim that there are too few stations remaining to make a statistically significant claim about the temp anomaly.

    And like Keyser Söze, global warming will disappear….poof.

  • guthrie // August 2, 2007 at 3:10 pm

    Ahh yes, here we are:
    http://www.realclimate.org/index.php/archives/2006/07/the-missing-piece-at-the-wegman-hearing/#more-328

    This explains the hockey stick, and what happens if you do the corrections the statisticians recommended: it is still there. Does anyone have an answer to this?

    Paul G - sure, rules have been violated. I look forward to your recommending increased funding or suggesting re-training for the staff involved.
    Perhaps you would like to put in the work to decide whether the errors are positive or negative?

  • Paul G // August 2, 2007 at 5:47 pm

    = dano says: =
    =”PaulG, you have no evidence the temps are affected.”=

    And we have no evidence that temperature has been measured accurately at these sites either.

    =”In short, there is no evidence temps are affected.”=

    Given that NOAA/NWS standards are violated, we have no reason to accept the data from these sites as quality data.

    =”I hope this helps you understand that folks know that surfacestations.org is not contributing data that will determine whether temps are affected.”=

    Photographic documentation contributes to the metadata for any site, which is why it is required on an annual basis for the CRN network.

    Photos are data. You can discard the data, dano; others won’t.

  • dhogaza // August 2, 2007 at 7:08 pm

    And we have no evidence that temperature has been measured accurately at these sites either.

    So, clearly, you’ve studied - in detail - the steps taken by those doing the data analysis to make the temperature measurements robust.

    Please - again, in detail - tell us why these steps aren’t sufficient, why and where they fail.

    If you’ve not studied how the data is massaged and analyzed to uncover and account for or remove anomalies, you’re just waving your noodly appendage in the wind.

  • Stephen McIntyre // August 2, 2007 at 7:16 pm

    The scale of Mann’s NOAMER PC1 is precisely the same scale as the red-noise PC1s. The sum of squares of a PC is 1. In the regression phase of MBH, the PC1 is re-scaled and then regressed, so the PC scale is irrelevant. It’s the HS shape that contributes.

    If you change the number of PCs used in the reconstruction and go down to the PC4, you can still “get” a HS - a point made in MM05b. By dipping deeper, you include the weighting of bristlecones - which are the active ingredient in the MBH HS. (Of course, the NAS panel said strip-bark bristlecones should be avoided.)

    Wegman observed that you can’t change statistical methods after the fact to simply “get” a HS.

    In addition, if you take the bristlecones out directly as a sensitivity study, you don’t get a HS. (Mann knew this - he did the sensitivity analysis himself.) Ammann and Wahl confirmed that there was no MBH HS without the bristlecones and the NAS panel noted this. Which makes Mann’s claim that his reconstruction was “robust” to the presence/absence of dendro indicators pretty laughable, given that it isn’t even robust to presence/absence of bristlecones.

    Added to all of this, the Mann reconstruction fails one of the verification tests said to have been used (verification r2). This failure was not reported in MBH98. So the Mann reconstruction is meaningless anyway.

  • cce // August 2, 2007 at 8:06 pm

    Here is an online pre-print version of the temperature/CO2 lag paper mentioned earlier.
    http://www.clim-past-discuss.net/3/435/2007/cpd-3-435-2007-print.pdf

  • Paul G // August 2, 2007 at 9:14 pm

    = dhogaza said: =
    =”So, clearly, you’ve studied - in detail - the steps taken by those doing the data analysis to make the temperature measurements robust.”=

    Can the data be “robust” when the data is from non-compliant sites? Especially considering that the anomalies are unknown to those doing the analysis?

    =”Please - again, in detail - tell us why these steps aren’t sufficient, why and where they fail.”=

    Non-compliant sites should be the first red flag. Obviously it isn’t.

    =”If you’ve not studied how the data is massaged and analyzed to uncover and account for or remove anomalies. . .”=

    If you don’t know what the anomalies are, how can you remove them? Adjustments for other factors do not accomplish this.

  • dhogaza // August 2, 2007 at 9:18 pm

    So the Mann reconstruction is meaningless anyway.

    Yawn. You’ve been trumpeting the same bullshit for how many years now?

    Who listens to you these days? Brilliant scientific minds like Inhofe? A bunch of internet sycophants? Got anyone else?

  • tamino // August 2, 2007 at 10:00 pm

    I realize that tempers can flare when passions are aroused over important issues. I’m certainly no paragon of restraint, but I do think that in most cases, those who lean toward civility are likely to be more persuasive.

  • george // August 2, 2007 at 10:31 pm

    Statistics are all well and good — PC’s, Apples, oranges, scales, weightings, Jenny Craig, r-values, p-values, de-values, regressions, digressions, aggressions, hockey sticks, lacrosse sticks, chop sticks — but sometimes it’s better to just look at a picture. (and besides, we all know what Mark Twain said about statistics).

    If you want to learn more about the temperature proxies (and see where that graph came from) you can get it directly from the horse’s* mouth.

    *some would probably call it an ass, but I regress.

  • guthrie // August 2, 2007 at 10:57 pm

    So, if I understand what you’ve said Stephen, you disagree with what the Realclimate folks actually say on that page I linked to?

    Can anyone with some statistical background then referee this?

    (I can bore you to death on chemistry, castles, SF and medieval technology, but statistics is a large blank area as far as I am concerned.)

  • Dano // August 2, 2007 at 11:03 pm

    Paul G,

    We have absolutely no evidence that you never beat your wife. Therefore, the implication is that you may be a wife beater. When did you stop beating your wife, Paul?

    Folks have played the game you are playing before; it didn’t work then, and it’s not working now.

    The soundest reasoning leads to the wrongest conclusions when the premises are false.

    Best,

    D

  • Paul G // August 2, 2007 at 11:39 pm

    = dano says: =
    =”The soundest reasoning leads to the wrongest conclusions when the premises are false.”=

    True. And it is also possible that data gathered from contaminated sites, with homogeneity adjustments applied that do not address the specific site’s contamination, are false as well.

  • Alan Woods // August 3, 2007 at 12:32 am

    dhogaza, I can see you’re really struggling now, so I’ll help you out.

    Mann states in MBH98 that their reconstruction relies on the assumption that the indicators have a linear relationship with the instrumental record. Setting aside the fact that we know tree growth does not have a linear relationship with temperature, it has been shown that the hockey-stick shape of their reconstruction relies on bristlecone pines being promoted by the principal components method. Now, 20th-century bristlecone pine growth not only does not correlate with local temperatures but has been posited in previous papers to be due to either CO2 fertilization or land-use changes (or unknown causes). Simply put, bristlecone pines are not a reliable temperature proxy. For this reason (as well as others) MBH98 falls down.

    I don’t understand why people get so upset about that. MBH98 is not the central plank of AGW theory, it was merely one piece of supporting evidence. It was an interesting attempt at paleoclimate reconstruction that had fatal flaws. Its failure doesn’t disprove greenhouse theory. But some people like yourself feel like you have to go to war on this issue and defend MBH98 at all costs. Why?

  • Dano // August 3, 2007 at 12:51 am

    It’s possible, Paul G, that you contaminated your wife with Polonium too.

    Anyway, this is the best they can do, folks: lather, rinse, repeat the argument from false premises.

    This will be the reaction to the paper from scientists and policy-makers’ staff: Where are the temperature measurements? Then a chuckle. Very, very few decision-makers will be briefed on the blockbuster paper.

    Best,

    D

  • Stephen McIntyre // August 3, 2007 at 4:48 am

    You ask: “Can anyone with some statistical background then referee this?”

    Wegman was Chairman of the National Academy of Sciences Committee on Theoretical and Applied Statistics. Read his reply to the House questions here:
    http://www.uoguelph.ca/~rmckitri/research/StupakResponse.pdf

    While RC has tried to vilify Wegman, his views on statistics are vastly more authoritative than those of Mann’s students and coauthors, Ammann and Wahl. He’s also critical of the seeming inability of climate scientists to grasp some pretty simple statistical points.

    The RC post evades many issues. For example, they don’t deny that verification r2 is zero. They evade this. Mann was asked about this at the NAS hearings and denied that he even calculated the verification r2 statistic. This is a falsehood.

    They also evade the issue of bristlecones in the post mentioned. Using covariance PCs, which are the default PC option, the bristlecones drop to the PC4. In the original calculation, Mann used 2 PCs from this network. Using the incorrect methodology, the bristlecones were in the PC1 and thus included. If Mann increases the number of retained PCs to more than 4, he gets the bristlecones back in. There’s never been any dispute about this. This is discussed in our EE 2005 article. The entire issue is moot if bristlecones are excluded as recommended by the NAS panel.

  • dhogaza // August 3, 2007 at 5:18 am

    And it is also possible that data gathered from contaminated sites with homogeneity adjustments applied that do not address the specific site’s contaminations may be false also.

    Those making the assertion need to back it up.

    Photos don’t. If the argument was that no stations have issues with accuracy, then perhaps the project would serve a useful purpose.

    But that point’s not in dispute, and merely creating photographs to document a known issue is useless from the scientific point of view.

    It’s clear the motive is to use the photographs to dramatically build the impression in people’s minds that the temperature record is bogus, without attempting to evaluate the analysis that’s done to make that data robust.

    A picture is worth a thousand words for propaganda purposes, especially when a thousand word treatment of the issue, honestly done, would undermine the political motivation of those organizing the photo project.

    Tamino:

    I do think that in most cases, those who lean toward civility are likely to be more persuasive.

    In most cases I agree, but this is McIntyre we’re talking about, and he’s not interested in being persuaded and will go to his grave fighting for the interests of the oil industry et al regardless of the depth and soundness of the scientific picture regarding AGW.

    He has played a great role in delaying action against AGW. Dishonest skeptics, of which he is a leading voice, have delayed action for what … a decade? at least? … and will probably succeed in watering down whatever steps are taken at the national level here in the United States.

    Tamino … want to make a back-of-the-envelope estimate of how many lives this might cost by 2100 in places like Africa?

  • Anthony Watts // August 3, 2007 at 6:56 am

    The graphs on the surfacestations.org front page for Orland and Marysville are directly from NASA GISS, unedited except for presentation size.

    There are links under the pictures to take you directly to those graphs at the GISTEMP website. If you are unhappy with how they look, or you think they’ve been “fiddled” as some posters have hinted, compare for yourself; GISS is the source of the plots.

    Orland:
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425725910040&data_set=1&num_neighbors=1

    Marysville:
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745000030&data_set=1&num_neighbors=1

    Tamino, it would have been a professional courtesy to at least ask what the source was before conjecturing as to how they were created.

  • Andrew Dodds // August 3, 2007 at 7:20 am

    Stephen McIntyre -

    Can you show us any peer reviewed papers showing this? For instance, in this:

    http://www.realclimate.org/images/Rutherford_fig2.jpg

    The green line leaves out the PC analysis entirely, so from this layman’s reading it appears that the PC analysis isn’t doing much.

    Of course, if the recent warming is matched by the MWP, the implication is that the climate is far more sensitive than previously thought, which is somewhat alarming.

    Paul G -

    First you would have to demonstrate that there is a bias in ‘contaminated’ sites towards higher temperatures. Second, given the amount of spatial correlation in temperatures, we only need perhaps 20-30 stations in the whole of the US to reliably establish a trend. Perhaps the whole Surface Stations project would be better served trying to find examples of ‘pristine’ stations, if establishing a reliable temperature record is their aim. (And if that isn’t their aim, what is?)
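    The “20-30 stations” point follows from a standard variance calculation (a sketch of my own; the unit variance and the inter-station correlation of 0.6 are illustrative assumptions, not measured values). For n stations with equal variance and equal pairwise correlation, the variance of their mean has a floor set by the correlation, so adding stations beyond a few dozen buys almost nothing:

```python
def var_of_mean(n, sigma2=1.0, rho=0.6):
    # Variance of the mean of n station anomalies with equal variance sigma2
    # and equal pairwise correlation rho:
    #   Var(mean) = sigma2 * (1/n + (1 - 1/n) * rho)
    # As n grows this approaches sigma2 * rho: correlation sets the floor.
    return sigma2 * (1.0 / n + (1.0 - 1.0 / n) * rho)

v25 = var_of_mean(25)
v1000 = var_of_mean(1000)
print(round(v25, 3), round(v1000, 3))   # 0.616 0.6
```

    Going from 25 stations to 1000 shrinks the variance of the mean by only a few percent under these assumptions, which is why a modest network of well-placed stations can pin down a regional trend.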

  • dhogaza // August 3, 2007 at 7:28 am

    Can the data be “robust” when the data is from non-compliant sites? Especially considering that the anomalies are unknown to those doing the analysis?

    How do you know they’re unknown to those doing the analysis? Assertions of this sort need to be backed up.

    Otherwise, as Dano points out, we’re in a “we have no evidence you don’t beat your wife, therefore off to the hoosegow with you!” scenario.

  • Petro // August 3, 2007 at 10:00 am

    Chris C wrote:
    “Nanny, please let me give you some advice, and I don’t mean to be nasty. There is a world of literature out there on this subject. Please have a read of it before contributing to discussions where you are clearly out of your depth.”

    dhogaza wrote to Nanny:
    “I’ve seen you post on several forums, and your tune is always the same - when reality interferes with your political beliefs, you deny reality.”

    What worries me is how impotent science is against stubborn and willful ignorance. Nanny here is a case example of a person who has, over the last several years, denied the most fundamental facts of climatology and proudly repeated the same false arguments dozens of times in different forums. Pleas to read and learn have been made to no avail. Still, Nanny is treated as a respectable party to the conversation; maybe a few of the regular readers here even think he has a point.

    I have a feeling that scientifically oriented people continue the apparently unfruitful dialogue with denialists out of curiosity to understand their motives and ways of thinking on the one hand, and the belief that someone might eventually be educated on the other. For me, curiosity is the driving force; I no longer believe an adult will change strongly held incorrect assumptions for rational reasons alone.

  • caerbannog // August 3, 2007 at 4:42 pm


    While RC has tried to vilify Wegman, his views on statistics are vastly more authoritative than those of Mann’s students and coauthors, Ammann and Wahl. He’s also critical of the seeming inability of climate scientists to grasp some pretty simple statistical points.

    And climate-scientists are rightly critical of Wegman’s inability to grasp some pretty simple Earth-science concepts.

    Here is a comment Wegman made in a congressional hearing:

    Carbon dioxide is heavier than air. Where it sits in the atmospheric profile, I don’t know. I’m not an atmospheric scientist to know that. But presumably, if the atmospheric - if the carbon dioxide is close to the surface of the earth, it’s not reflecting a lot of infrared back.

    You can listen to Wegman make that remark at:
    http://tinyurl.com/328ljh (press the “listen” button — the remarks are made a few minutes into the discussion)

  • Boris // August 3, 2007 at 4:54 pm

    “The graphs on surfacestations.org front page for Orland and Marysville are directly from NASA GISS, unedited, except for presentation size.”

    But Anthony, you’ve stated on several occasions that your purpose is to see if the sites are in compliance. Why have temperature graphs inset into the photos at all? Isn’t it misleading AND a red herring?

    It seems clear that tamino’s analysis of your intention is spot on: that is, you intended to give the impression that “good” sites show cooling and “naughty” sites show warming. It doesn’t give one much hope for accuracy when a project’s conclusion seems pre-decided.

  • Paul G // August 3, 2007 at 5:45 pm

    = My statement:=
    Can the data be “robust” when the data is from non-compliant sites? Especially considering that the anomalies are unknown to those doing the analysis?

    = dhogaza said: =
    =”How do you know they’re unknown to those doing the analysis? Assertions of this sort need to be backed up.”=

    Those who have done the analysis have to back it up. Care to refer me to the peer-reviewed literature where the climate experts demonstrate they have factored into their calculations microsite contaminants such as parking lots?

    We know they have applied homogeneity adjustments. There is no indication they have applied site-specific adjustments for some of the more glaring inhomogeneous characteristics.

  • Marion Delgado // August 3, 2007 at 5:48 pm

    This is why these people are so useless. It matters absolutely not at all that your charts use one scale for one site and another scale for another. It matters not at all that one chart starts earlier than another. It matters not at all that GISS is the source. Those are not excuses for trying to defraud people.

    Nor are we insisting anyone adjust the charts. Presenting them is just fine.

    What’s obvious fraud, frankly, is the “around 1999” and “something suspicious happened around 1999” and “look at the enormous divergence” and so on nonsense.

    Admittedly, this is not particularly pernicious nonsense. By presenting the charts as you find them, you’re at least making the distortion obvious. That deserves praise.

    Nonetheless, people will only care if you show apples and apples.

    Here’s what a scientific person would have done:

    1. Presented the two graphs
    2. Explained that one started way earlier
    3. Explained that they weren’t to the same scale
    4. Probably showed adjusted graphs AFTER that, to the same scale, for only the periods where they overlap. Equations or even just numbers showing the real difference would do just as well.
    5. Not cited the jump as being around 1999, when they actually had no [edit] idea when it was.
    6. Not claimed some mysterious contaminating event happened right around then, then backpedaled and claimed it happened whenever the graph can be interpreted as saying the jump happened.
    7. Admitted the tininess of the difference.
    8. Looked around to see if stations generally trended up in 1997.
    9. Understood and even cited and explained the statistical issues involved.
    10. Understood, and even cited and explained the physical model of the climate data gathering conditions involved.

    NONE of these were done, except the first.

  • Paul G // August 3, 2007 at 5:52 pm

    = dhogoza said:=
    =”A picture is worth a thousand words for propaganda purposes, especially when a thousand word treatment of the issue, honestly done, would undermine the political motivation of those organizing the photo project.”=

    And are the pictures and video of the Minneapolis bridge collapse propaganda too? Didn’t think so.

    Anthony’s pictures, minus any political or propaganda purpose attributed to them, have already shown us that quite a few surface sites are not compliant with NOAA/NWS/WMO standards. A thousand words, written or spoken, does not change that.

  • Paul G // August 3, 2007 at 6:02 pm

    = Andrew Dodds said:=
    =”First you would have to demonstrate that there is a bias in ‘contaminated’ sites towards higher temperatures.”=

    Not necessarily, Andrew. There can be many angles from which to approach an issue, photos being one of them. What is NOAA/NWS/WMO policy when a site violates siting protocol? Has that protocol been enforced? The photos add to the metadata and raise many questions at the same time.

    =Andrew:=
    =” Second - given the amount of spacial correlation in temperatures, we only need perhaps 20-30 stations in the whole of the US to reliably establish a trend. Perhaps the whole Surface Stations project would be better served trying to find examples of ‘pristine’ stations. . .”=

    Agreed. And Surface Stations might be the first ones to actually document the 20-30 pristine stations in the US (minus CRN). It does not appear the NOAA can supply the public with this information.

  • bigcitylib // August 3, 2007 at 6:53 pm

    PaulG asked:

    There can be many angles to approach an issue, with photos being one of them. What is NOAA/NWS/WMO policy when a site violates site protocol? Has this protocol been enforced?

    I think this, from

    http://www.nws.noaa.gov/directives/010/pd01013002c.pdf

    …answers that:

    “If standards can not be met by equipment in place, the standards should be
    achieved as stations are changed, equipment is installed, programs are modified, or new stations
    are established.”

    In other words the protocol is to use what you have until you can get something better.

  • Chris O'Neill // August 3, 2007 at 7:08 pm

    Steve McIntyre stated: “The entire issue is moot if bristlecones are excluded as recommended by the NAS panel.”

    The NAS panel made one statement that bristlecones should be avoided, which was based on Biondi et al’s statement that this proxy is not a reliable temperature proxy for the past 150 years because it has a trend attributed to CO2 fertilisation. So the NAS panel was only talking about avoiding using bristlecones for the last 150 years. Using bristlecones only matters for the distant past, so statements such as “avoid using bristlecones as recommended by the NAS panel” are a misleading quotation-out-of-context.

  • george // August 3, 2007 at 9:42 pm

    Steve McIntyre said above:
    “the Mann reconstruction is meaningless…”

    The NAS panel that reported on “Surface Temperature Reconstructions for the Last 2000 Years” did not believe the “Mann reconstruction is meaningless” (see below), but they did make one thing very clear in their report:

    “Surface temperature reconstructions for periods
    prior to the industrial era are only one of multiple
    lines of evidence supporting the conclusion that
    climatic warming is occurring in response to human
    activities, and they are not the primary evidence.”

    So with regard to the main issue (whether AGW is real), the entire “argument” between McIntyre and Mann is really moot at this point. Climate scientists have moved well beyond this.

    About Mann’s results the National Academy had the following to say (and you can judge for yourself whether this jibes with McIntyre’s claim that “the Mann reconstruction is meaningless…”)

    “The basic conclusion of Mann et al. (1998,
    1999) was that the late 20th century warmth in the
    Northern Hemisphere was unprecedented during
    at least the last 1,000 years.

    This conclusion has
    subsequently been supported by an array of evidence
    that includes both additional large-scale surface
    temperature reconstructions and pronounced
    changes in a variety of local proxy indicators, such
    as melting on icecaps and the retreat of glaciers
    around the world, which in many cases appear to be
    unprecedented during at least the last 2,000 years.
    Not all individual proxy records indicate that the
    recent warmth is unprecedented, although a larger
    fraction of geographically diverse sites experienced
    exceptional warmth during the late 20th century
    than during any other extended period from A.D.
    900 onward.

    Based on the analyses presented in the original
    papers by Mann et al. and this newer supporting
    evidence, the committee finds it plausible that the
    Northern Hemisphere was warmer during the last
    few decades of the 20th century than during any
    comparable period over the preceding millennium.

    The substantial uncertainties currently present in
    the quantitative assessment of large-scale surface
    temperature changes prior to about A.D. 1600 lower
    our confidence in this conclusion compared to the
    high level of confidence we place in the Little Ice
    Age cooling and 20th century warming. Even less
    confidence can be placed in the original conclusions
    by Mann et al. (1999) that “the 1990s are likely the
    warmest decade, and 1998 the warmest year, in at
    least a millennium” because the uncertainties inherent
    in temperature reconstructions for individual
    years and decades are larger than those for longer
    time periods, and because not all of the available
    proxies record temperature information on such
    short timescales.”

  • Anthony Watts // August 3, 2007 at 9:44 pm

    Not for post, just a question to you, Tamino. (Or whoever you are; it seems all the pro-warming bloggers don’t reveal their names.)

    But I’m going to afford you the professional courtesy of asking first before I assume.

    Question: Why did you only show graphs in your analysis for data post 1975, when both stations have 50 years of parallel data prior to that?

    If you accuse me of presenting misleading information graphically, then I ask the same of you. Isn’t just showing part of the record misleading?

    I’m not trying to pick a fight, but I would like to know why your presentation does not include the entire record.

    [Response: Look again. The presentation *does* include the entire record; always has. The first graph in it which was created by me covers the entire time span of the data from Marysville and Orland, plotted on the same graph and on the same scale.

    The post *also* includes graphs which focus on the time span 1975 to the present, because that's the time period during which anthropogenic greenhouse gases have overwhelmed other climate forcings; it's what I like to call the "modern global warming era."]

  • dhogaza // August 3, 2007 at 9:45 pm

    And the pictures and video of the Minneapolis bridge collapse are propaganda too? Didn’t think so.

    And this is pertinent how, exactly?

    You’re saying absolutely nothing about the efforts made to make the temp data analysis robust.

    Nothing.

    It all boils down to “I don’t trust them”. That, essentially, is all you’ve got.

  • Paul G // August 3, 2007 at 9:48 pm

    =bigcitylib said:=
    =”I think this, from

    http://www.nws.noaa.gov/directives/010/pd01013002c.pdf

    …answers that:

    “If standards can not be met by equipment in place, the standards should be
    achieved as stations are changed, equipment is installed, programs are modified, or new stations
    are established.”

    In other words the protocol is to use what you have until you can get “=

    And like the apparent non-enforcement of current NOAA/NWS/WMO standards, do we know if this policy is being enforced (excluding CRN)? Or is it a toothless paper policy?

    And what of the confidence levels in the historical data from the USHCN sites?

  • dhogaza // August 3, 2007 at 9:49 pm

    Like, think a moment, dude. If I get a temperature reading from a thermometer stuck up your rear end of “100 C”, I don’t need a photo to tell me that something’s wrong with that data.

    I’ll reject it, out of hand. A photo might tell me specifically why the thermometer is reading high - perhaps you were being baked in an oven at the time.

    But it’s not going to impact any data analysis for which that temp reading has already been rejected.

    Now how is this related to photos of a bridge collapse?

  • Hank Roberts // August 3, 2007 at 9:54 pm

    Got a calculation done including the bristlecone data older than 150 years, and excluding only the last 150 years of bristlecone data, which would be as recommended by the NAS panel?

    It’s important not to exaggerate what the NAS panel recommended — that’d be the classic baby-out-with-bathwater fallacy, you know.

  • Hank Roberts // August 3, 2007 at 9:59 pm

    Ah, thanks for the reminder above:
    http://lh5.google.com/image/EliRabett/RjvmGmt8vyE/AAAAAAAAAKU/H0gN5qFcRw4/UntitledAlbum38.jpg
    http://rabett.blogspot.com/2007/05/it-writes-itself-ethon-after-sharing.html

  • Paul G // August 4, 2007 at 12:23 am

    == dhogaza said: ==
    =”You’re saying absolutely nothing about the efforts made to make the temp data analysis robust.”=

    How does one make the data “robust” when they are completely unaware of the violations of NOAA/NWS/WMO standards at the particular surface site?

    If you want to make the argument that professional standards are irrelevant to the accurate measurement of temperature at the site, go for it.

  • Marion Delgado // August 4, 2007 at 2:27 am

    caerbannog 4:42 pm

    That looks awfully like Velikovsky’s early foray into scientism. He had some theory about the atmosphere and didn’t understand that the heavy and light gases in the atmosphere are mixed up, and was very shirty with people who tried to correct him. That, and not the Worlds in Collision stuff, was what first got him the probable crank label. Hank Bauer calls his stance “my way or the highway” but his way was a broken down trail heading for a cul de sac.

    plus ça change

  • Marion Delgado // August 4, 2007 at 2:30 am

    tamino:

    [edit] - sorry about that :) I’ll be a better more open mind in the future :)

  • Marion Delgado // August 4, 2007 at 2:36 am

    I hereby make the argument that professional standards as revealed by amateur photos are in fact irrelevant to measuring surface temperatures, and in particular, changes in surface temperatures, especially when stations with significant UHI and other chronic, possibly changing conditions are either calibrated against stations with no such problem or not counted. I further argue that we are not living in the 14th century and we’ve discovered statistical analysis.

    I further contend we should have the right to follow any of these market fundamentalists who try to shape public policy around and photograph and record their activities. I suspect MY theory - that they’re simply either paid liars, scientific illiterates or ideological fanatics, could be demonstrated easily, and therefore by the same arguments presented here, all of their FUD would be tossed out of the public process.

  • Paul G // August 4, 2007 at 5:16 am

    = dhogaza said: =
    =”Like, think a moment, dude. If I get a temperature reading from a thermometer stuck up your rear end of “100 C”, I don’t need a photo to tell me that something’s wrong with that data.”=

    Is that, like, an anal-ogy? ;)

  • ChrisC // August 4, 2007 at 7:55 am

    Petro,

    The reason I continue to engage with denialists and contrarians like NGS is because, as a scientist trained in a closely related field, I feel I have a responsibility to try to explain to confused lay people (as nanny seems to be) the basis of current theories on atmospheric science, and correct common misconceptions. Often, it is like arguing with a tree stump. But, if at all possible, scientists should explain their work to the public. This is what I think Tamino accomplishes on his blog.

  • bigcitylib // August 4, 2007 at 11:04 am

    Paul,

    As one of your questions is answered, you whomp up two more, like a squid retreating behind a cloud of ink. Are you willing to have a rational conversation, or is this some kind of game to you?

  • Chris O'Neill // August 4, 2007 at 5:19 pm

    Paul G: “How does one make the data “robust” when they are completely unaware of the violations of NOAA/NWS/WMO standards at the particular surface site?”

    which should be “how does one make the data analysis robust”?

    Well if you really want to know what GISS does you could ask them or read their paper “A closer look at United States and global surface temperature change”, in particular, the sections relating to adjustments, sections 4 and 5. You should be busy for a fair while.

  • Anthony Watts // August 4, 2007 at 5:32 pm

    Well, I was referring to the 1975-2005 graphs in my post where you do a detailed trend analysis and comparison, but perhaps I didn’t make my question clear enough.

    My question was about why those graphs didn’t include the whole temperature record for those sites.

    But what was clear was the “not for post” at the top of my message, as it was a private question to you, Tamino. Since there does not appear to be a contact email or contact form on your blog page, writing a post with the “not for post” at the top is the only option I had.

    But you went ahead and published it anyway, without regard to that request. My email shows up in the WordPress comments in the admin page, so you could have simply emailed me back. But since it appears you have no scruples about doing common courtesy things like “asking questions first” or “honoring private requests for communications”, it’s clear that it is a waste of my time to interact here. I guess I should have known better, since like some other climate blogs, the blog administrator hides behind a facade of an anonymous moniker.

    So let this be a lesson to anyone considering posting here: when scientists run blogs anonymously, don’t assume you’ll get fair treatment. Realclimate of course is the exception. They announce who the principals are, they provide a contact method (in the about section), and Gavin has been quite fair and communicative with me, and I learned much from him.

    It’s unfortunate here, because I thought “open mind” meant just that.

    There’s an old saying that could apply here: “you can catch more flies with honey than you can with vinegar”.

    [Response: You put in a comment with an accusatory tone, wondering why this post doesn't reveal all the data when it *does*, complain about too much "vinegar," and get all huffy because your very own "vinegar" is on display for all to see. But you continue to rather snidely insult my choice to remain anonymous on my blog.

    As for choosing not to visit here again, you will not be missed.]

  • stewart // August 4, 2007 at 8:32 pm

    Wow:
    And some of these people think that they are doing some kind of ’science’ by this?
    Well, if you start with the assumption that climate scientists are fools or frauds, then it may not be a surprise where you end up. It is about as valid as the ‘auditing’ that happens in Scientology.

  • Hank Roberts // August 5, 2007 at 9:35 pm

    http://www.sciencemag.org/cgi/content/full/304/5672/827?ijkey=G5JIkP07tZcn.&keytype=ref&siteid=sci

    “Many standards were more uniformly adopted in the 1960s during planning for the World Weather Watch, the ancestor of today’s Global Observing System. ….

    “Yet even these changes have not eliminated the importance of disciplined human beings for the successful implementation of standards. For example, in the late 1980s, the U.S. Weather Service replaced liquid-in-glass thermometers with digital electronic ones at thousands of stations in its Cooperative Station Network. … the new, more accurate instruments did not correlate exactly with the old ones. Network-wide, the new instrumentation altered the mean daily temperature range by -0.7ºC and the average daily temperature by -0.1ºC compared with the previous system (7).

    This example illustrates the complex combination of social and technical problems that affect the implementation of standards. The consequences for the detection of climatic change can be profound: The biases discovered in the U.S. Cooperative Station Network, although correctable, “are of the same magnitude as the changes of global and United States mean temperatures since the turn of the 20th century” (6).”

    6: T. R. Karl, R. G. Quayle, P. Y. Groisman, J. Climate 6, 1481 (1993).

    ——-
    Science 7 May 2004:
    Vol. 304. no. 5672, pp. 827 - 828
    DOI: 10.1126/science.1099290

    BEYOND THE IVORY TOWER:
    “A Vast Machine”: Standards as Social Technology
    Paul N. Edwards*

  • Paul G // August 6, 2007 at 6:28 pm

    == Chris O’Neill said: ==
    =”Well if you really want to know what GISS does you could ask them or read their paper “A closer look at United States and global surface temperature change”, in particular, the sections relating to adjustments, sections 4 and 5. You should be busy for a fair while.”=

    Chris, that paper does not address the microsite contaminations affecting some of the surface sites. Interestingly, the article calls for MORE satellite photos of surface sites.

    Quote from article:
    =”We suggest further studies, including more complete satellite night light analyses, which may clarify the potential urban effect.”=

    The article only highlights the concerns that are being raised about the USHCN. They never visit the actual sites.

  • ks // August 6, 2007 at 9:58 pm

    Paul,

    Are you trying to say that if some of the sites, covering only 2% of the globe, are not compliant (as recorded only by a one-time visit, photograph, and GPS), then the entire global temperature record should be thrown out, and thus humans are not responsible for changing the climate (in spite of numerous lines of evidence saying they are)? Doesn’t that seem to be jumping the gun a little bit?

    If I am misrepresenting your position, please correct me.

  • ks // August 6, 2007 at 10:09 pm

    “Chris, that paper does not address the microsite contaminations affecting some of the surface sites. ”

    But if you check the update section of this post -

    “Both Eli Rabett (in reader comments) and Gavin Schmidt (via email) have reminded me that the last step of NASA GISS adjustments — the correction for urban heating — uses data from nearby rural stations (like Orland) to apply a correction to non-rural stations (like Marysville).”

    Using data from nearby rural stations to apply a correction to non-rural stations certainly sounds like addressing microsite concerns, huh Paul?
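A minimal sketch of the rural-neighbor correction described in that quote (illustrative only: the numbers are invented and this is not the actual GISS algorithm): remove the excess of the urban station's linear trend over the mean trend of nearby rural stations, leaving the urban station's mean level alone.

```python
import numpy as np

def urban_adjust(urban, rural_series, years):
    """Toy urban-heating correction: subtract the excess of the urban
    station's linear trend over the mean trend of nearby rural stations,
    preserving the urban station's mean level."""
    rural_slope = np.mean([np.polyfit(years, r, 1)[0] for r in rural_series])
    urban_slope = np.polyfit(years, urban, 1)[0]
    t = years - years.mean()
    return urban - (urban_slope - rural_slope) * t

years = np.arange(1950.0, 2001.0)
rural = 14.0 + 0.010 * (years - 1950)    # rural trend: 0.010 deg/yr
urban = 14.5 + 0.030 * (years - 1950)    # extra 0.020 deg/yr of urban heating
adjusted = urban_adjust(urban, [rural], years)
print(np.polyfit(years, adjusted, 1)[0])  # fitted trend now matches rural
```

The adjusted series keeps the urban station's local temperature level but inherits the rural trend, which is the spirit of the correction quoted above.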

  • Paul G // August 6, 2007 at 10:58 pm

    Yes, Chris, you are misrepresenting me. :)

    The US surface site record is of critical importance for both US interests and for the global database on temperature. We may only be a small portion of the world’s surface, but that does not excuse poor quality data.

    And yes, we know that NASA GISS does “adjustments”; they have been doing them for years. And doing them all the time while they are blissfully unaware of the cases of microsite contamination.

    Lastly, you can not correct all the contamination apparent at Marysville by simply using data from another site. The goal is not to match a trend or a grid, but to have the most accurate measurements possible from that one specific site.

    By necessity, you must apply a site specific adjustment to Marysville incorporating all known factors unique to that site and independent of any other site.

    I have seen no evidence that this has ever been done or attempted or even that climatologists were aware of the critical flaws at Marysville.

  • Science Blog » Blog Archive » Sex dolls for dogs // August 6, 2007 at 11:47 pm

    [...] has the scoop on the latest attempt to revive the old UHIs-mean-it’s-not-getting-warmer argument. Eli [...]

  • Steve Bloom // August 7, 2007 at 12:00 am

    “The goal is not to match a trend or a grid, but to have the most accurate measurements possible from that one specific site.”

    Whose goal? One more time, Paul: Correcting microsite influences by any method other than matching to uncontaminated data from a nearby “clean” station is the only means of making adjustments in which confidence can be placed. Carrying out such a procedure internal to the USHCN has limitations, which is why we now have the CRN. But as these facts mean that the surfacestations effort is bogus, you simply refuse to recognize them. There’s not much point in continuing to interact with you, is there?

  • ks // August 7, 2007 at 12:10 am

    “they are blissfully unaware of the cases of microsite contamination…I have seen no evidence that this [applying a site specific adjustment] has ever been done or attempted”

    I believe that the following quote does demonstrate that they are aware and attempting a site-specific adjustment. Watts just doesn’t mention the awareness or adjustments on his site.

    “the last step of NASA GISS adjustments — the correction for urban heating — uses data from nearby rural stations (like Orland) to apply a correction to non-rural stations (like Marysville).”

  • Boris // August 7, 2007 at 1:16 am

    I don’t think it’s honey that’s attracting the flies to Anthony’s site. It’s that other stuff that flies like a lot…

  • Hank Roberts // August 7, 2007 at 3:44 am

    Have you ever heard the phrase “do not let perfection be the enemy of the good”?

    This is the same old attack method, taking the newer and better information and trying to argue that the earlier, older, and perhaps original work in the field is demonstrably wrong, and the field is built on the early work, so the whole field has to be thrown out.

    What’s being done is worth understanding before dismissing it. It’s clear the newer stations are being set up to better serve what’s needed. The older stations don’t have to be perfect — they have to be kept discrete and checked for oddities.

    A consistently biased station is still useful, just as a consistently biased instrument of any sort is.

    Look at the info on the change from mercury thermometers to digital ones — it had to be tracked station by station, each individual station dealt with after that tweak.

    Yes it improved the record — it didn’t devalue the older information already accumulated. It allowed checking, with that one change, whether the station continued to perform according to its previous pattern.

    Tweaking them, fiddling with them, changing them now introduces an uncertainty that isn’t going to help compared to maintaining them, tracking what changes, and looking for oddities in the overall pattern by comparing each one to others around it.

    But this notion of throwing out anything that isn’t perfect is sneering at the history and effort that have been made to put together a record, starting from the very rough and primitive instruments of a century ago.

    Their old data sets get studied and yes adjusted as we understand better how their data were taken. They’re still the only and so the ‘best’ data from the time.
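The point about a consistently biased instrument can be made concrete with made-up numbers: a constant offset shifts the level of a temperature series but leaves its fitted trend untouched.

```python
import numpy as np

# Made-up series: a constant siting bias shifts every reading equally,
# so the fitted warming trend is unchanged.
years = np.arange(1980.0, 2011.0)
true_temp = 15.0 + 0.02 * (years - 1980)  # "true" trend: 0.02 deg/yr
biased = true_temp + 1.5                   # constant +1.5 deg instrument bias

true_slope = np.polyfit(years, true_temp, 1)[0]
biased_slope = np.polyfit(years, biased, 1)[0]
print(true_slope, biased_slope)            # the two slopes agree
```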

  • Petro // August 7, 2007 at 12:00 pm

    Paul G: How come the US weather data correlates nicely with the local and global temperature proxies on the issue of global warming if there are severe problems with the data? From the scientific perspective, US data is coherent with the observations.

  • Dano // August 7, 2007 at 12:51 pm

    I have seen no evidence that this has ever been done or attempted or even that climatologists were aware of the critical flaws at Marysville. [emphasis added]

    Let me remind everyone that the false premise rhetoric continues.

    The false premise is that there is a critical flaw in the temperature record at MYV.

    Nowhere is evidence given for the proper temperature record. That is: there is no evidence that the temperature record is flawed, biased, incorrect, wrong, off, wanting, nearly correct, kindasorta good, nothing. Nada. Zip. Zilch.

    Plus, again I remind everyone: sfcstns.org is not doing a thing to quantify the temperature “errors” “they” “find” via their picture-taking. Zero. Nothing. Nada. Zip. Zilch.

    The only thing the sfcstns.org folks have is implication and false premises until they quantify the thing they say is wrong. That is: they claim the temp record is wrong but take no steps to measure temperature. Zero. Nothing. Nada. Zip. Zilch. Nichts.

    They got nothin’. Nada. Zip. Zilch. Nichts.

    It is a stunt. Nothing more. Hope this helps.

    Best,

    D

  • guthrie // August 7, 2007 at 2:19 pm

    Ummm, Paul, you seem to be saying that somehow we can correct for all microsite contaminations. The fact is that there have undoubtedly been such contaminations for as long as there have been temperature measurements. So the question is, why are people suddenly going after microsite contaminations and insinuating that they are responsible for the warming trend?

  • joe // August 7, 2007 at 3:50 pm

    Like, think a moment, dude. If I get a temperature reading from a thermometer stuck up your rear end of “100 C”, I don’t need a photo to tell me that something’s wrong with that data.

    I’ll reject it, out of hand. A photo might tell me specifically why the thermometer is reading high - perhaps you were being baked in an oven at the time.
    ———————————————–
    If the sensor data said it was 300 C, it would be discarded out of hand also. If the thermometer [edit] gave a reasonable 40 C you would just assume the person is sick, not standing next to an open oven. A picture would be helpful in that case. It’s better to get the pictures now, so that in the future more can be taken to verify if the sites have changed. The single site visits themselves don’t tell anyone if the site changed over time, but going back and photographing them over time does. This is just a starting point.

  • James Lilling // August 7, 2007 at 4:12 pm

    This is quite a circle.

    Okay, the claim is the network is high quality. So check the stations. So far they are not all high quality. Conclusion; they lied (bad) or they don’t know any of the specifics (worse).

    Do the people running the network know specifically what the contaminations are? It doesn’t seem so. Conclusion; you can’t correct properly if you don’t know what is wrong and how it’s wrong. Therefore, what the details are isn’t important; it’s enough to know you can’t fix the details if you don’t know the details.

    Are the temperature readings free from contamination? Obviously not. Are they accurate? No, they have to be corrected. Conclusion; the temperatures are not robust, and the corrections may or may not fix that, but given the fact that the specifics of the contamination don’t seem to be known, it at the least calls everything into question.

    Why show temps along with what the site looks like? Seems as if it’s a preliminary check to see perhaps if there is anything obvious in the historical record for the site or for the site versus others. The question is, why hasn’t that data already been gathered (bad) or gathered and displayed/tracked (worse)?

    Hank Roberts’s link to the Science article is quite a bit of this, as is Chris’s link to the Hansen 2001 PDF; people are aware of the issues, there are questions about the data, but the solution from the gatekeepers is more satellite images.

    In the end, it doesn’t matter, the temps nor the reasons for doing the site surveys. The people that should have been doing that haven’t been. There are many of the standards not being met. I see people make a lot of fun of the effort, with snotty comments about publishing the results. No I’m not involved in the project, I think it’s rather pointless. But nothing to get upset about and make fun of anyone over. Some of y’all need to grow up.

  • tamino // August 7, 2007 at 5:13 pm

    No more anatomically strategic thermometers, please.

  • guthrie // August 7, 2007 at 6:02 pm

    James- your comment would be fine, if it wasn’t for the fact that many people seem to be trying to use this as a means to deny that global warming is happening.

  • Paul G // August 7, 2007 at 6:04 pm

    == Steve Bloom said: ==
    =”Correcting microsite influences by any method other than matching to uncontaminated data from a nearby “clean” station is the only means of making adjustments in which confidence can be placed. “=

    No Steve, no. A single site has one purpose only: to measure temperature as accurately as possible and as representatively as possible within reasonable limits independent of the temperature of any other sites.

    You adhere to the idea of matching to a “clean” station yet I have not seen good metadata indicating how “clean” these other sites are.

    - Paul G

  • Paul G // August 7, 2007 at 6:20 pm

    == guthrie said: ==
    ==” Ummm, Paul, you seem to be saying that somehow we can correct for all microsite contaminations. The fact is that there has undoubtedly been such contaminations for as long as there have been temperature measurements. So the question is, why are people suddenly going after microsite contaminations and insinuating that they are responsible for the warming trend?”==

    Guthrie, the NOAA/NWS 100 foot standard is a simple one, and I assume they put it in place for a valid reason. Why the climatologists have not enforced this simple standard at numerous sites raises questions about the data, and possibly questions about what other quality lapses might have occurred.

    So the question is not why people are “suddenly” going after microsite contamination issues, but why have the NOAA/NWS been negligent on this issue for many years?

  • guthrie // August 7, 2007 at 9:08 pm

    Because they are idiots? You’re avoiding my question.

  • Dano // August 7, 2007 at 10:29 pm

    So far they are not all high quality. Conclusion; they lied (bad) or they don’t know any of the specifics (worse).

    Tell us: what is the average temp biasing and the sign of the sites photographed thus far?

    Oh, wait: we don’t know. No one is measuring temps.

    So we actually can’t assert that they aren’t high quality.

    Imagine that.

    Best,

    D

  • Hank Roberts // August 7, 2007 at 11:51 pm

    Standards (”the wonderful thing about standards is that there are so many of them”) are always being improved — and used for new installations.

    You don’t go dragging the existing equipment from one place to another on the pretense you’re “improving” it without serious attention to what it will change, ahead of time, unless you want to really screw up the longterm reliability of the system.

    Instead you keep track of which station is which, and can start to place more reliance on the newer stations with better equipment, as they come on line.

  • Steve Reynolds // August 8, 2007 at 3:32 am

    It appears that James Hansen is more open minded about audits than some here:

    …USHCN station records up to 1999 were replaced by a version of USHCN data with further corrections after an adjustment computed by comparing the common 1990-1999 period of the two data sets. (We wish to thank Stephen McIntyre for bringing to our attention that such an adjustment is necessary to prevent creating an artificial jump in year 2000.)

    From: http://data.giss.nasa.gov/gistemp/
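As I read that quote, the adjustment amounts to offsetting one version of a record by the mean difference over the common period before splicing. A guess at the procedure, with invented data (not the actual GISTEMP code):

```python
import numpy as np

def splice(old, new, overlap):
    """Join two versions of a station record without an artificial jump:
    shift the old segment by the mean new-minus-old difference over the
    years both versions cover, then take the new version where available."""
    offset = np.mean([new[y] - old[y] for y in overlap])
    merged = {y: t + offset for y, t in old.items() if y not in new}
    merged.update(new)
    return merged

# Old version through 1999; revised version from 1990 on, 0.25 deg higher.
old = {y: 14.00 + 0.01 * (y - 1980) for y in range(1980, 2000)}
new = {y: 14.25 + 0.01 * (y - 1980) for y in range(1990, 2006)}
merged = splice(old, new, range(1990, 2000))
print(merged[1990] - merged[1989])  # ~0.01 deg: no spurious jump at the seam
```

Without the offset, the seam between the two versions would show a 0.25-degree step that has nothing to do with the weather, which is exactly the artifact the quoted correction prevents.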

  • Science Blog » Blog Archive » The New Phone Books Are Here!!! The New Phone Books are Here!!!! // August 8, 2007 at 6:03 am

    [...] has the scoop on the latest attempt to revive the old UHIs-mean-it’s-not-getting-warmer argument. Eli [...]

  • dhogaza // August 8, 2007 at 1:30 pm

    Okay, the claim is the network is high quality. So check the stations.

    No, the claim is that the network is good enough to provide data that, properly analyzed, can provide robust information on temperature trends.

    So far they are not all high quality. Conclusion; they lied (bad) or they don’t know any of the specifics (worse).

    Or you’re arguing from a false premise.

    PaulG:

    No Steve, no. A single site has one purpose only: to measure temperature as accurately as possible and as representatively as possible within reasonable limits independent of the temperature of any other sites.

    We don’t really need accuracy in order to measure trends, just repeatability.

  • James Lilling // August 8, 2007 at 4:35 pm

    Mr. Guthrie,

    So what, some people will use this as a means to deny that global warming is happening. They’re already doing that and either others will listen to them or not. Separate issue.

    The topic here is that correcting and adjusting for non-standard stations is happening, rather than removing the contaminations to make them meet standards. So it has turned out there was a reason for checking. I doubt anything much can be done about that, but whatever.

    The only “truth” is the contaminations are responsible for casting the quality of the data into doubt. Knowing the specific methods for correcting or adjusting this data that is now known to be contaminated would help, but that’s being hidden. I would think you could understand what impression that conveys to some people.

    It appears the exact contaminations and how they affect the readings are unknown by the people correcting or adjusting for them. So then the question becomes: if they don’t know what’s wrong, how can they accurately correct it?

    You know, it is possible that the warming trend is actually far understated, and this effort will find that out and be used to prove more action is needed. Anyone who says it’s under- or over-stated at this stage will hopefully be ignored as being premature.

    [Response: It is woeful ignorance bordering on slander to say, "Knowing the specific methods for correcting or adjusting this data that is now known to be contaminated would help, but that’s being hidden. I would think you could understand what impression that conveys to some people."

    The adjustments made by NASA GISS are an open book; I posted about them here. I would think *you* could understand what impression that conveys to people. More important, I would hope you'd check the facts before making such a blatantly false statement]

  • James Lilling // August 8, 2007 at 4:51 pm

    Mr. Dano,

    That’s my point. I agree a site can’t be claimed “not high quality” based upon temp; therefore it can’t be claimed the temps are invalidated. I can’t answer your temp question, and my opinion is that not even the people who run the network know the average temp biasing and signs of the sites. It’s an unknown, which would make it difficult to correct or adjust for. Based upon temp, nobody knows if a site is high, medium, or low quality.

    It seems to me the measure they’re using is “high quality sites are those that meet high quality standards.” If you’re using that as a measure, the specifics become unimportant, which would explain why no one is measuring temps. I would also think it makes the census of the sites go faster, for later analysis and faster improvement, as well as being less expensive.

    Now me, I would like to see somebody put in a network of something like 50 groups of temp, humidity, and windspeed indicators at one or more of the sites that meets none (or few) of the standards, run it in parallel in a grid above, below and on all sides of the current sensor over some period of time, say 2-5 years. Computers would monitor all the inputs in real time and correlate them, showing what’s really going on, on a continual basis. Then compare the two sets of data and show one way or another what effects the non-standard factors do or do not have on everything, and how they change over time. Or not. And end all this bickering about it. Sadly, I have neither the time, money, equipment, location nor experience to do so.

    So for the time being, I suppose I have to settle with an approximation. And conclude that high quality sites meet high quality standards.

    Did you have the average temp biasing and the sign of the sites handy? If that info is available, it would be better than using the standards to grade sites.

  • James Lilling // August 8, 2007 at 5:24 pm

    Mr. dhogaza

    I don’t believe I have a faulty premise; I just think you don’t accept “high quality sites meet high quality standards” as an accurate statement. You could be correct. So my questions to you are:

    Is a contaminated USHCN site (say CRN class 5) with unknown influences on temperature readings corrected by a hidden method that may or may not adequately take those influences into account “good enough to provide data that properly analyzed can provide robust information on temperature trends”?

    Is a USHCN station with artificial temperature influences that change over time and no readings for humidity or wind speed, no photographs of its status and no personal visits by those correcting it, no IR (heat flow) or UV (amount of sunlight) readings “good enough to provide data that properly analyzed can provide robust information on temperature trends”?

    But I agree. Accurate readings aren’t needed for trends. But it’s not just repeatability. It’s consistency, with as few extraneous factors as possible. And those that are extraneous need to be accurately accounted for. A high quality site that meets the standards would be the answer to that I believe.

    I don’t know if ACs, asphalt, buildings, parking lots, added gravel, watering, boats, cars, BBQs, trash cans, shelter lights or any of that actually influences the anomalies, no. But I also don’t know they don’t influence the anomalies. The same goes for adjustments, do those adjusting understand the influences and correctly take them into account?

    So why not just find all this junk and remove it from the equation by putting the sites not in compliance into compliance or removing them? You have to find out which is which first, don’t you?

  • Paul G // August 8, 2007 at 5:49 pm

    == Steve Reynolds said: ==
    ==”It appears that James Hansen is more open minded about audits than some here:

    …USHCN station records up to 1999 were replaced by a version of USHCN data with further corrections after an adjustment computed by comparing the common 1990-1999 period of the two data sets. (We wish to thank Stephen McIntyre for bringing to our attention that such an adjustment is necessary to prevent creating an artificial jump in year 2000.)”==

    Interesting that NASA would acknowledge Steve McIntyre on this site. Apparently the Y2K error he discovered was real.

  • Paul G // August 8, 2007 at 6:07 pm

    And this is interesting too. Remember when the NOAA said 2006 was the warmest year on record in the US?

    http://www.noaanews.noaa.gov/stories2007/s2772.htm

    After adjusting for the simple Y2K error discovered by a non-climatologist, they have apparently revised their data.

    http://data.giss.nasa.gov/gistemp/graphs/Fig.D_lrg.gif

    1934?

  • John Willit // August 8, 2007 at 6:52 pm

    Apparently NASA GISS is busily redoing all of their data over the past few days because another of their UNDOCUMENTED adjustments was in error.

    The last 6 years (of US data at least) was out (high) by 0.18C due to an error made by GISS.

    1934 is back on top as the warmest year in the US.

  • dhogaza // August 8, 2007 at 7:04 pm

    Is a contaminated USHCN site (say CRN class 5) with unknown influences on temperature readings corrected by a hidden method that may or may not adequately take those influences into account “good enough to provide data that properly analyzed can provide robust information on temperature trends”?

    Essentially you’re asking “do I have any reason to believe that the scientists involved are incompetent or dishonest when they say they can build a robust history of surface temps in the United States with the historical record at hand”.

    No, I don’t. There are plenty of other indicators of warming out there that contradict denialist claims that warming is non-existent. There’s nothing in the surface temp record that contradicts these other indicators. I’ve seen no evidence that claims that folks like Hansen are dishonest are true. Etc etc etc.

  • guthrie // August 8, 2007 at 7:09 pm

    Mr Lilling, since I do not know you from Adam, I shall assume you have no axe to grind. Unfortunately your first comment misses the point here, which is that the people doing all this have got an axe to grind, namely that global warming is not occurring, and even if it is, it is nothing to do with us.
    Unfortunately this means that they will seize upon even reasonable-sounding arguments to bolster their cause.

    As for the stations, sure, let’s get them the funding and ensure they are trained properly and the sites are up to standard. I’m all for that. I’m sure the people involved in this would be glad of your support on this issue, so please go and write to your representatives.

    Tamino has addressed the issue of “hidden” adjustments.

    Your point about correcting readings ignores the fact that, firstly, warming trends are corrected for urban heat island effects, i.e. one form of contamination, and secondly, that this is another reason for having many, many stations: to average out the noise. The point here is not so much local hour-to-hour effects as the long term changes, which will be less affected by local contamination; after all, we are talking about long term climate records here, not hour-by-hour temperature records used for local meteorological purposes.
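
    The averaging point can be sketched with made-up numbers: independent local noise in an N-station average shrinks roughly as 1/sqrt(N).

    ```python
    # Sketch (illustrative numbers): independent per-station noise of 0.5 C
    # shrinks to roughly 0.5/sqrt(100) = 0.05 C in a 100-station average,
    # which is why a large network tolerates noisy individual sites.
    import numpy as np

    rng = np.random.default_rng(1)
    n_stations, n_years, sigma = 100, 30, 0.5
    local_noise = rng.normal(0, sigma, (n_stations, n_years))

    network_mean = local_noise.mean(axis=0)  # one value per year

    # The spread of the network average is close to sigma / sqrt(n_stations).
    assert network_mean.std() < 2 * sigma / np.sqrt(n_stations)
    ```

    Note this only averages out *independent* noise; a bias shared by many stations (the urban heat island effect mentioned above) has to be corrected for explicitly.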

  • James Lilling // August 8, 2007 at 7:40 pm

    Mr. Tamino,

    I was not saying what GISS does is being hidden, or that they are hiding anything or deliberately inflating anything. I am not accusing them of misconduct. I have read your other post. I understand they do work hard to try and adjust the data as best they can.

    My paragraph relied on some of the other things I said and I took too many shortcuts I think. I didn’t write it very well to stand alone, and it should have been two paragraphs (I might have cut out an important paragraph in between the two and not pasted it, it looks confusing.) Please let me fill in some of it to clarify what I meant.

    “Knowing the specific methods for correcting or adjusting this data [contaminated microsite data of unknown effect] would help [to remove doubt from this contaminated data], but that’s [the contamination is] being hidden [in the contamination's unknowableness].

    I would think you could understand what impression that [guthrie and others getting upset about " suddenly going over microsite contamination issues", taking pictures, etc] conveys to some people [Paul and others debating the worth of such endeavors].

    As you said about nonclimatic variations in homogeneity adjustments, “Quantitative knowledge of these factors is not available in most cases, so it is impossible to fully correct for them.”

    I am speaking about issues revolving around microsite contamination, and how the contamination hides what effect the sub-standard stations have on their measurements; we don’t know the specifics. If we can identify the factors and remove them, we don’t need to correct for them at all. I’m not trying to discuss GISS or how they do anything.

    Sorry I have given the impression I think there is some conspiracy going on here, I don’t.

  • Petro // August 8, 2007 at 8:55 pm

    Paul G: You are comparing NOAA’s and NASA’s numbers, which have always been slightly different. Do you know the difference between apples and oranges?

    Besides, what is your point? Are you claiming there is no warming trend in NASA’s temperature record? See that red line there; how about that?

    Frankly, do you think the US government should invest millions of dollars checking all the weather stations in the U.S.A.? Would that be taxpayers’ money well invested? What are the benefits for the people of the Federation of that activity?

    Nanny, come help us, Paul G is wasting your money!

  • James Lilling // August 8, 2007 at 9:16 pm

    Mr. dhogaza,

    No I am not essentially asking that. I am asking how the possible contamination from not meeting siting standards can be measured or adjusted for, because I don’t see the methods being used to do so. On an individual station basis.

    I said nothing about other indicators of warming, nor am I discussing denialist claims, nor am I discussing the surface temp record itself, nor am I claiming Hansen and others there are dishonest. But they have made errors. The data glitch Mr. McIntyre discovered shows that perhaps not all of their methods are as transparent as they could be, or not understood well. And I understand the software is not well described or available? If it is, I apologize. In any case, those are mostly procedural matters. I am sure that GISS, NASA, NOAA, NCDC and the other FLAs are all staffed with fine, upstanding, competent people of impeccable honesty and sterling qualifications.

    I would like to know why it seems you do not want to simply remove the contaminating influences so we don’t have to deal with them in the first place. Then all these other arguments go away. All I’m saying is: remove the contaminating influences so we don’t have to deal with them.

    If your contention is that the adjustments described by Tamino solve even situations where unseen anomalies that are not consistent because of per station contamination, I would appreciate you saying that, rather than focusing on other issues that don’t relate to that.

  • James Lilling // August 8, 2007 at 9:22 pm

    Mr. guthrie,

    I suppose that you are upset that some are using this as an excuse to argue there is no warming. They are wrong. I agree with you it’s wrong to do it. I’m not disagreeing with you, it’s just their motivations or actions don’t concern me. They obviously bother you. I can understand why though.

    I apologize; I should have said something like “possible unknown adjustments” or “no adjustments” to “data possibly contaminated by siting issues on a per-station basis”. Poor choice of words.

    Is it perhaps others here are saying the adjustments described by Tamino solve even situations where the anomaly is not consistent because of contamination? That sounds reasonable, but I can see how people would want to verify that.

  • dhogaza // August 8, 2007 at 10:29 pm

    The data glitch Mr. McIntyre discovered shows that perhaps not all of their methods are as transparent as could be, or not understood well.

    That glitch pales compared to McIntyre’s outright dishonesty about other issues, and Hansen’s graceful acceptance of the correction to the data analysis contrasts greatly with McIntyre’s unwillingness to acknowledge his own (frequent) errors.

    No I am not essentially asking that. I am asking how the possible contamination from not meeting siting standards can be measured or adjusted for, because I don’t see the methods being used to do so.

    Sorry, this is just another way of stating what I said in my original post. It’s an argument from personal incredulity, in essence.

    But bust your balls on it, go for it. You might ask yourself first, though, “what would happen if we threw out the entire surface temp record for the US?”

    The answer: nada, in terms of the gross-scale question (is warming happening?). So many lines of evidence support the fact that we don’t need accurate, inaccurate, fraudulent, or any other kind of surface temp measurements to establish that basic fact.

    So if it will make you feel better, I’ll concede the point…

    Surface temp measurements are worthless, AGW is real.

    But once again, since you seem impervious to input…

    I would like to know why it seems you do not want to simply remove the contaminating influences so we don’t have to deal with them in the first place.

    This assumes that contaminating influences can’t be and aren’t being removed analytically.

    I haven’t seen anyone saying such influences shouldn’t be removed from the analysis. However I do see professionals saying “we’ve done it”, and people like you saying “I don’t think that’s good enough, but I can’t really say why”.

    So stop with the strawman shit, OK? The issue isn’t whether or not data should be vetted and carefully analyzed.

    The question is whether a bunch of photographs will do a better job of that than formal analysis.

  • James Lilling // August 8, 2007 at 11:34 pm

    Mr. dhogaza,

    Allow me to just cover the pertinent items you’ve brought up rather than the non sequiturs.

    I’ve never said the temps are worthless in and of themselves. I’m not arguing here at all. I’m just saying some of the sites don’t meet the standards. You’re the one drawing all sorts of conclusions from a statement along the lines that we should make the sites meet the standards, because not meeting them might corrupt the data without it being noticed. They might be fine already, on the other hand. I don’t know and can’t prove either one, and am not going to try. I’m looking at possibilities, not trying to prove or disprove any of them.

    What you think of their motivations or their efforts is immaterial, they’re doing the efforts regardless. Trying to argue with me when I’m not even arguing isn’t going to stop them or change their reasons.

    I’m just telling you my appraisal of the situation, from somebody unconcerned with what either side is doing or thinks they’re doing. You just seem to want to argue with whatever motives you’re ascribing to me. If you’re not with us you’re against us, eh? You seem very hostile.

    Have a nice day.

  • James Lilling // August 8, 2007 at 11:39 pm

    Oh, and one more comment please, Mr. dhogaza. You spoke of contaminating influences being removed analytically. I am speaking of removing them physically. If “the experts” say they’ve removed them analytically, that’s fine; I have no reason to disbelieve “them”. They should still be removed physically.

    So you see, it’s not just been a non argument, we are not even discussing the same subject.

    Thank you once again for your time.

  • Paul G // August 8, 2007 at 11:55 pm

    == Petro said: ==
    ==”Paul G: You are comparing NOAA’s and NASA’s numbers, which have been slightly different all the time. Do you know difference between apples and oranges?”==

    You mean apples and apples. Both data sets are attempting to accurately measure the same thing.

    ==”Besides, what is your point? Are you claiming there is no warming trend in NASA’s temperature? See that red line there, how about that?”==

    Accuracy of the data is the point. And removing 0.15 degrees of warming for the US over the last 6-7 years is a major adjustment. You should be happy the warming is less. You’re not?

    ==”Frankly, do you think US government should invest millions of dollars checking all the wheather stations in the U.S.A.?”==

    Yes.

    ==”Would that be tax-payer’s money well invested?”==

    Yes.

    ==”What are the benefits for the people of the Federation on that activity?”==

    Accuracy of the data is the basis for formulation of good policy. The better the data, the better policy can be.

  • Lee // August 9, 2007 at 2:08 am

    James Lilling,

    You are missing some critical facts and ideas here.

    First, no one disputes that it would be good to have a network of well-sited stations going forward. Such a network is already funded, and is partly up and running. The Climate Reference Network consists of a set of stations sited with rigorous standards. There is no need to change the existing network to get to that standard - the new network will do that.

    We do NOT want to change the existing network. We use that network to look at historic climate trends. We can NOT go back in time and alter stations to anything different from what they were.
    We know that there are issues in the existing network, issues that cause heterogeneities in the record. That is why they are doing inhomogeneity analyses and corrections - because they know that.

    Altering the extant stations would only add one more inhomogeneity at the time of alteration, and just further complicate the current effort to extract good data from those stations. We need to leave the network unchanged from its current state, so we can compare to new data from the CRN stations going forward, without a station inhomogeneity precisely at the time the CRN starts running.

    What might be worthwhile is to examine the extant network to see if there are conditions correlated with particularly bad data, and look at what happens when those stations are removed from the analysis. But this can not just be looking at adherence with the meteorological standards in place. A perfectly sited station, out in the middle of a flat grassy field, might have significant heterogeneities from past irrigation changes, from a replacement thermometer, or from something as simple as moving the recorder from one place to another within the shield. Just taking pictures does not tell us diddly squat about whether there are inhomogeneities - for surfacestations to imply otherwise is simply dishonest.

    And for them to put pictures with temp records, with NO analysis of whether siting issues are relevant to those records - and to make it look like good siting means declining temps, and bad siting means rising temps - is particularly egregious and dishonest propaganda.
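
    Lee’s point about alteration-time inhomogeneities can be sketched numerically (made-up numbers): a single step change in a station’s record, with no real trend at all, is read by a naive straight-line fit as gradual warming.

    ```python
    # Sketch (illustrative numbers): a station move in 1990 adds a +0.5 C step
    # to an otherwise flat record; a naive linear fit misreads the step as a
    # gradual trend of about +0.025 C/yr.
    import numpy as np

    years = np.arange(1975, 2005)
    series = np.zeros(years.size)      # a station with no real trend
    series[years >= 1990] += 0.5       # inhomogeneity: equipment altered in 1990

    naive_slope = np.polyfit(years, series, 1)[0]
    assert abs(naive_slope - 0.025) < 0.001
    ```

    This is exactly why homogenization analyses look for step changes at documented station moves rather than fitting trends blindly.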

  • Sam Urbinto // August 9, 2007 at 2:10 am

    Hi from CA land everyone.

    I don’t know what’s cuter: JL calling everyone “Mr.”, repeating the same thing, and using words and phrases that will get him attacked, or d complaining about strawmen while trying to turn things into a discussion of how dishonest Dr. McIntyre is or how stupid the audit folks are.

    I wonder sometimes who is more developmentally challenged in this supposed debate over the meaning of sampled estimates.

    For those that have ignored the interesting comedy of errors, I will summarize it for you.

    “The analytical adjustments {are fine} {need to be coupled with physical ones also}” + “The physical adjustments {are not sufficient} {are taken care of by the analytical}”

    Conversation continues:

    Yes they are.
    No they’re not.
    Yes they are.
    No they’re not.
    Yes they are.
    No they’re not.

    In the meantime, over at CA, the discussion is mainly about the first, and significant, correction to the data brought about by the efforts of categorizing the network of sites (http://www.surfacestations.org/).

    Perhaps a gander at the discussion relating to the dataset might put this in a better perspective: http://www.climateaudit.org/?p=1878

    Focus please, ladies and gentlemen. And lurkers and trolls of all ages and genders!

  • Paul G // August 9, 2007 at 6:42 am

    == Lee said: ==
    ==”And for them to put pictures with temp records, with NO analysis of whether siting issues are relevant to those records - and to make it look like good siting means declining temps, and bad siting means rising temps - is particularly egregious and dishonest propaganda.”==

    Well thanks to a few photos surfacestations took of the Detroit Lakes, MN surface site, it has helped lead to the discovery that there was a Y2K error in the USHCN data.

    NASA has already acknowledged this error and significantly lowered the increase in the temperature trend for the USA between the years 2000-2006.

    As it says on the NASA site:
    ==”We wish to thank Stephen McIntyre for bringing to our attention that such an adjustment is necessary to prevent creating an artificial jump in year 2000.”==

    http://data.giss.nasa.gov/gistemp/

    Interesting where a few photos can lead you.

    - Paul G

  • Paul G // August 9, 2007 at 7:13 am

    And the new winner for the hottest year on record in the US according to NASA’s newly corrected GISSTEMP records?

    Ready?

    You sure?

    1934

    http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt

  • Petro // August 9, 2007 at 10:42 am

    PaulG: ” Both data sets are attempting to accurately the measure the same thing.”

    Obviously you are ignorant about the differences between data sets. Very scientific indeed…

    PaulG: “Accuracy of the data is the point. And removing .15 degrees warming for the US for the last 6-7 years is a major adjustment.”

    This alleged “major” adjustment does not remove the warming trend seen in the dataset. Do you agree, or do you continue denying the existence of the trend?

    PaulG: “You should be happy the warming is less. You’re not?”

    Science is not a matter of happiness, but existence. This is irrelevant.

    Your preference to beat a dead horse with more facts amazes me, but I can accept it as your point of view. I would spend that money differently.

    “Accuracy of the data is the basis for formulation of good policy. The better the data, the better policy can be.”

    This is not the case. The decisions in the world are not made based on the most accurate data; such perfection does not exist in the real world. The decisions made by societies and individuals always contain unknown aspects. In the case of global warming, the data is as good as it can be with the resources available.

    The data has been collected and the results created by thousands of scientists all over the world. To my mind, claiming that all this effort is useless and the results are wrong is a sign not only of living in denial, but of paranoia as well.

  • Boris // August 9, 2007 at 2:25 pm

    “And the new winner for the hottest year on record in the US according to NASA’s newly corrected GISSTEMP records?”

    You do realize we are talking about GLOBAL warming, right?

    you sure?

  • Freddy // August 9, 2007 at 2:50 pm

    Dhogaza:
    ”… That glitch pales compared to McIntyre’s outright dishonesty about other issues …”

    Details, please. What has McIntyre been outright dishonest about?

  • Hank Roberts // August 9, 2007 at 3:22 pm

    “significantly lowered”

    For what value of “significant” please? What statistic are you referring to?

    Oh, just handwaving? Well, wave faster please.

  • Lee // August 9, 2007 at 4:16 pm

    Paul G
    No, surfacestations’ pictures did NOT discover any error in the temperature data. The pictures have no temperature data content whatsoever.

    Analysis of the data found this error. That data has been available, surfacestations did not add one whit to that data.
    And surfacestations still has that god-awful pair of pure-propaganda pics from Orland and Marysville on their front page, paired with uncorrected data, implying a link between station siting and temp trends. Surfacestations has done NO ANALYSIS of whether siting issues correlate with particular trends. None whatsoever. For them to publish that misleading pair of pictures, with NO analytical context, is blatant intellectual dishonesty.

  • Paul G // August 9, 2007 at 5:45 pm

    == Petro said: ==
    ==”This alleged “major” adjustment does not remove the warming trend seen in the dataset. Do you agree, or do you continue denying the existence of the trend?”==

    You must be referring to someone else or else projecting your own prejudices onto me. Where have I denied the warming trend?

    ==”PaulG: “You should be happy the warming is less. You’re not?”==

    ==”Petro: “Science is not a matter of happiness, but existence. This is irrelevant.”==

    Not happy I guess.

    ==”In the case of global warming, data is as good as it can be with the resources available.”==

    And thanks to the work of a couple of amateurs and at no cost to the taxpayer, the data just got a little bit better.

  • Marion Delgado // August 9, 2007 at 7:18 pm

    Sam Urbinto

    Your combination of pompousness, ignorance and mendacity makes you a good representative of Climate Audit. If only Lysenko had had a web site, eh?

  • Marion Delgado // August 9, 2007 at 7:56 pm

    Paul G:

    Surface station data for the contiguous 48 states is not equivalent to multiple sources showing the average temperature in the world, but if you’ll go here: http://data.giss.nasa.gov/gistemp/graphs/
    you will see that even with just surface stations, just the contiguous 48 states, there is in fact a gradual trend upwards in “Annual and five-year running mean surface air temperature in the contiguous 48 United States relative to the 1951-1980 mean,” which is your data table.

    Since that’s what the models we support would predict, I fail to see why 1934 should be an issue.

  • Marion Delgado // August 9, 2007 at 8:04 pm

    Also, to show that this is not news, let’s go back to 2001:

    https://listserv.umd.edu/cgi-bin/wa?A2=ind0112c&L=ecolog-l&P=1981

    Note:

    The global average surface temperature in 2001 is expected to be the
    second warmest on record, 0.42°C above the 1961-1990 average. The
    warmest year in the 1860 to present record occurred in 1998, according
    to records maintained by Members of the World Meteorological
    Organization (WMO). Nine of the ten warmest years have occurred since
    1990, including 1999 and 2000, when the cooling influence of the
    tropical Pacific La Niña contributed to a somewhat lower global average
    (0.29°C and 0.26°C above average, respectively). The end of La Niña
    brought a return of warmer sea surface temperatures to the central and
    eastern equatorial Pacific in 2001 and was a contributing factor to the
    higher annual average this year.

    These conditions are part of a continuing trend to warmer global
    temperatures that have resulted in a rise of more than 0.6°C during the
    past 100 years, but the rise in temperature has not been continuous.
    Since 1976 the global average has risen at a rate approximately three
    times faster than the century-scale trend (see graph 1). The year 2001
    will be the 23rd consecutive year with the global mean surface
    temperature above the 1961-1990 average.

    And:

    The annual
    temperature in the United States is expected to be similar to the 2000
    average, the 13th warmest since records began in 1895,

  • James Lilling // August 9, 2007 at 8:45 pm

    Yes, changing some sites (but not all of them!) is fine, because going forward we can compare the new readings to past adjustments and see whether those adjustments were too small, too large, or correct. We always need controls in any experiment. However, for those that are repaired, logically, if the contamination is removed there is no need to try and extract good data, because it already is good! So I don’t understand that suggestion.

    But you are correct, not all of the substandard stations should be removed or repaired, so we can compare them as they now are to the repaired ex-substandard ones, the ones that were standard in the first place, and the CRN. Yes, of course.

    I am sure comparing the various levels of quality stations in various groupings will tell us a lot also. The entire network must be inventoried first. The ultimate use or importance of this won’t be known until then.

    Yes, of course there are going to be some adjustments for other factors, but contamination shouldn’t be one of them. I don’t know how well the contamination is specifically taken care of right now, and fixing some stations will allow us to see, physically, whether it is or isn’t.

    Mr. Urbinto and Mr. Paul demonstrate why your last points are off. The errors that were found, and the subsequent demonstration that not every adjustment process is perfect and that errors and mistaken assumptions are made at times, are exactly why the sites need to be inventoried, why photographs are needed, and why they are part of the metadata and a requirement for the CRN you spoke of. Documentation. I was not under the impression surfacestations was implying a photo tells us anything about inhomogeneities. Certainly one photograph every 3-6-12-24 months would, and as you said about adjusting for past inhomogeneities by fixing the stations now, you have to start some place.

    In addition to photographs at various time intervals, photographic comparisons of the sites to each other becomes possible, which would perhaps not help with the adjustments but could certainly help us to understand the relationship of stations to each other, especially those being compared or being used to adjust other stations.

    Putting the temperature on the photograph serves many purposes. You have the impression it is for propaganda, but I do not agree that it does that or is for that. What it also does is let comparisons take on more meaning. It says “This is what this looks like; this is what it is doing.” Over time that becomes more meaningful; for now it is just helpful. Surely this use is clear now, as the other two have mentioned and linked to? If a site looks a certain way, and the readings look a certain way, then that is how it is. It is just what it is.

    I fail to see how saying “Some sites have issues, let us document the issues so anyone can see them and fix some” becomes “there is no global warming” and leads to others attacking me so. I do not think I have ever claimed it’s not warming; I have even suggested it might be greater than we think now, implicitly stating that I believe it is warming. What I find helpful to do, when I see something being done and it leads me to think of a reason for it, is to try to think of other reasons, both pro and con. Are there not other possible reasons for any given action? Some will use it for one purpose, some will use it for another, and so on. That is just how things work.

    But patience everyone, time will tell and give us more answers.

  • James Lilling // August 9, 2007 at 8:47 pm

    Mr. Petro,

    I find it confusing that you speak of paranoia after the recent data error was found.

  • Dano // August 9, 2007 at 8:48 pm

    Petro can shred my posts anytime.

    Best,

    D

  • Hank Roberts // August 9, 2007 at 11:42 pm

    >1934

    http://newdeal.feri.org/timeline/1934a.htm
    http://www.ccccok.org/museum/dustbowl.html

    You’re acting like some kind of vulture, to be happy that current conditions are slightly less bad than they were in 1934.

    You can’t be a grownup yet. Try reading history before you gloat.

    http://www.odl.state.ok.us/usinfo/maps/dustbowl/

    Those who were around during the 1930’s are remembering the infamous Dust Bowl of that era, when at least five inches of topsoil were lost from nearly 10 million acres.

    Currently, the states that were part of the historic Dust Bowl are experiencing severe droughts. Oklahoma’s drought began in June of 2001, while in the northern plains the drought began several years ago. In eastern Montana, where the drought began four years ago, more than a thousand wheat farmers have given up farming. This spring, farmers in the community of Syracuse, Kansas, crowded into their school gym to pray for rain. Agriculture officials report that most of Colorado will not even have a wheat crop this year; and the state has had four times the normal number of wildfires just since January. Wyoming and New Mexico are being hard hit by drought as well; and New Mexico is also experiencing more wildfires than normal.

    Of course, droughts are nothing new for this area of the nation. But, dust storms have returned, and with them the question—is the Dust Bowl returning?

  • Hank Roberts // August 9, 2007 at 11:43 pm

    The text that follows the last link in my previous post is an excerpt taken from it:

    http://www.odl.state.ok.us/usinfo/maps/dustbowl/

  • Paul G // August 10, 2007 at 6:59 am

    == Lee said: ==
    =”No, surfacestations’ pictures did NOT discover any error in the temperature data. The pictures have no temperature data content whatsoever.”=

    Incorrect. Without SurfaceStations photographing that site, Steven McIntyre would probably never have probed deeper into the data.

    For seven and a half years that error sat in the records, and none of the climate experts found it until the efforts of a couple of amateurs pointed it out to them.

  • Paul G // August 10, 2007 at 7:16 am

    == Marion Delgado said: ==
    =”Surface station data for the contiguous 48 states is not equivalent to multiple sources showing the average temperature in the world . . .”=

    Marion, the issue of this thread is the US surface site record and instead of going off topic, I think it is better to keep it on.

    == Hank Roberts said: ==
    =”You’re acting like some kind of vulture, to be happy that current conditions are slightly less bad than they were in 1934.

    You can’t be a grownup yet. Try reading history before you gloat.”=

    Gloating is allowed (temporarily) when a significant error is discovered that punches a small hole in the GW alarmists’ balloon.

    History has been reconfirmed with this adjustment, with 1934 remaining the hottest year in US records.

    I am sorry for the drought in various states just as I am sorry for the serious flooding in Britain, but neither issue relates to the discovery of a serious error in the US surface temperature record.

  • Paul G // August 10, 2007 at 7:56 am

    Had to come back and comment on this statement by Hank Roberts again.

    Hank Roberts said:
    =”You’re acting like some kind of vulture, to be happy that current conditions are slightly less bad than they were in 1934.”=

    Hank, you sound unhappy that it is not hotter now than it was in 1934. Why is that?

    Why are you disappointed at the very good news that the degree of the warming trend in the US has been reduced and that temperatures, while close, have not exceeded those of the 1930s?

    Lastly, things are not “slightly less bad” than they were in the ’30s. Like AGW alarmists are prone to do, that is an extreme exaggeration.

  • dhogaza // August 10, 2007 at 11:05 am

    Analysis of the data found this error.

    Exactly. McIntyre’s commentary makes that clear: he analyzed the data and found an error.

    It’s the right way to go about things, and congrats to McIntyre for making a minor contribution to our understanding of what’s going on.

    I wish he and the other denialists would lay off the spin, though. Which seems to be roughly …

    “1998 wasn’t the warmest year, it was the second-warmest year! Global warming disproved!”

    Bizarre.

  • dhogaza // August 10, 2007 at 11:07 am

    And the other bizarre thing … many of us have been saying that we can build a robust dataset through analysis.

    Others are saying “we need photographs, not data, nor analysis!”

    So McIntyre does some analysis, finds an error, makes the record more robust, and the spin?

    “see! we were right! we need photographs! …”

  • tamino // August 10, 2007 at 3:11 pm

    Congratulations to Stephen McIntyre for identifying the latest correction to USHCN data.

    It’s very interesting that the blogosphere has made such a big deal that the latest GISS analysis indicates 1934 as the “hottest year on record.” First of all, the difference between 1934 and 1998 is now estimated as 0.02 deg.C, which is probably within the error limits of the calculation, in which case there’s really a “statistical tie.” But what’s most interesting is that announcers seem to go out of their way not to mention (at least, not very prominently) that this is for the 48 contiguous states of the U.S. only. It doesn’t include Canada, or Ireland, or China, or Finland, or the oceans, or … In fact, it doesn’t even include Alaska!

    Those who have studied temperature records have known all along that there are regions which were hotter in past years than recent ones, including regions of nontrivial size, and now the lower 48 states of the U.S. appear to belong on that list. At less than 2% of the area of the planet, that’s certainly of nontrivial size — but it’s also certainly a lot less than the globe. Since the problem we face is global warming it’s best to pay attention to the planetary average, and by that measure, nothing even comes close to what we’ve seen recently. Nothing even comes close. For the all-time record year on the planetary scale, 1998 and 2005 are still in a statistical tie. But denialists seem not to want people to pay attention to global warming.

  • caer bannog // August 10, 2007 at 3:24 pm

    The latest continental USA temperature anomaly plot (with corrections for the error discovered by McIntyre) can be found here

  • nanny_govt_sucks // August 10, 2007 at 5:45 pm

    First of all, the difference between 1934 and 1998 is now estimated as 0.02 deg.C, which is probably within the error limits of the calculation, in which case there’s really a “statistical tie.”

    Why wasn’t it a “statistical tie” last week when 1998 was slightly warmer than 1934?

    [Response: My guess: it was.]

  • tamino // August 10, 2007 at 7:07 pm

    Note to readers: I have returned from my travels, so there will no longer be long delays in moderating comments.

  • John Mashey // August 10, 2007 at 8:16 pm

    While correlation is not causation, it is useful to consider temperature trends in light of gyrations of sulfate aerosols:

    http://data.giss.nasa.gov/gistemp/graphs/ (USA part)

    versus

    http://www.pnl.gov/main/publications/external/technical_reports/PNNL-14537.pdf
    page 14, Total Sulfur Emissions by region
    OR
    http://capita.wustl.edu/capita/CapitaReports/EmisTrends/soxnemb.html, Figure 10., http://capita.wustl.edu/capita/CapitaReports/EmisTrends/sox10.gif

    There is a steep drop in sulfur emissions in North America, starting after the 1929 Crash [the purple section of that chart], which amounts to a 30-40% reduction from the peak. It didn’t recover to that peak level until ~1940. Other things being equal, one would expect that a USA-localized drop in sulfate aerosols in the mid-1930s should yield upward pressure on temperatures there.

  • Lee // August 10, 2007 at 8:26 pm

    Actually, PaulG, even in the US, if you look at the 5-year running mean after the correction, the last decade is still hotter than the 1930s. One peak year, in a statistical tie, with slightly higher numerical rank, embedded in an otherwise cooler interval, does not make the 1930s hotter than now.

  • nanny_govt_sucks // August 10, 2007 at 9:52 pm

    Other things being equal, one would expect that a USA-localized drop in sulfate aerosols in the mid-1930s should yield upward pressure on temperatures there.

    It appears the case is not so clear-cut when it comes to aerosols: http://www.physorg.com/news105192948.html

  • Marion Delgado // August 10, 2007 at 10:29 pm

    I want to point out - AGAIN - what most of you are missing. Their goal is to skew everything against action on climate change. Hence, if SM had found something that made it look more likely that anthropogenic global warming was happening, he would have ignored it. Hence, this is not error-checking. This is setting up a system to see how many points you can shave against global warming. There is no counter-pressure to find records with glitches the other way, models or arguments that are too conservative, and so on.

    On the upside, even with this going on, tamino, Lee and I have all pointed out the obvious - 1934 applies only to the contiguous United States. And even there you STILL have the same trend, with a small amount of lag and perhaps a little slower, but there.

    And again, this is OLD NEWS for God’s sake, even with the minor tweak! What part of my post above isn’t clear? That’s FROM 2001 - and they’re stating that 2001 will probably end up like 2000, around the 13th warmest year in US history, but the second warmest globally.

  • Marion Delgado // August 10, 2007 at 10:40 pm

    Paul G:
    I actually read tamino’s post and my comment IS on topic. And yours is absurd. Saying that McIntyre would never have analyzed the data sets had surfacestations not taken photos is pretty much a good working definition of illogic. This tiny 2000 error at Goddard has nothing to do with photos or site conditions.

  • Paul G // August 10, 2007 at 11:16 pm

    Whatever, Lee. 1934 is the hottest year. NASA does not list it as a “tie” so I’m not sure why you would.

    Deniers and warmers should both be happy with the recent correction to the data; surprisingly, warmers seem the most disappointed of all.

  • nanny_govt_sucks // August 11, 2007 at 3:00 am

    Deniers and warmers should both be happy with the recent correction to the data; surprisingly, warmers seem the most disappointed of all.

    Yes, it’s as if all the doomsday scenarios have disappeared momentarily while AGW credulists try to downplay what should be considered “good” news. Less warming should mean (to credulists) fewer hurricanes, fewer species extinctions, less dramatic sea level rises, etc… Where’s the collective sigh of (slight) relief and the hunger to further audit the record in hopes of finding out that things may not turn out to be as bad as projected?

  • tamino // August 11, 2007 at 3:16 am

    AGW credulists try to downplay what should be considered “good” news.

    The good news is that the USHCN data are closer to correct than they were previously. But the reduction in estimated global warming isn’t nearly big enough to be considered good news. Since the reduction in estimated U.S. (lower 48 states) temperature is 0.15 deg.C, and the lower 48 of the U.S. makes up less than 2% of the area of the globe, the reduction in the estimated global warming due to this latest correction is less than 0.003 deg.C.
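    The arithmetic can be sketched in a few lines (a minimal check, taking the lower 48 as roughly 2% of the Earth’s surface, as stated above):

```python
# Area-weighted effect on the global mean of a correction that applies
# only to the lower 48 states (taken here as ~2% of Earth's surface).
us_correction = 0.15   # deg C reduction in the lower-48 estimate
area_fraction = 0.02   # lower 48 as a fraction of global surface area

global_effect = us_correction * area_fraction
print(round(global_effect, 4))  # 0.003 deg C
```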

    The bad news is that this recent development is being used by denialists to imply that the whole global warming idea is a crock. This reduces the likelihood of substantive action to reduce greenhouse gas emissions. Any delay or reduction in action to mitigate global warming is likely to have much more far-reaching negative impact than the benefit we can expect from reduced severity by 0.003 deg.C.

  • cce // August 11, 2007 at 3:52 am

    We know that sulfate aerosols, such as those produced by burning coal, cause cooling, both directly and indirectly via the cloud albedo effect. Other aerosols may cause warming, depending on what they are, where they are, and how high they are.

    NGS, who linked a story about Asian Brown Clouds, should read to the end.

    “The conventional thinking is that brown clouds have masked as much as 50 percent of global warming by greenhouse gases through so-called global dimming,” said Ramanathan. “While this is true globally, this study reveals that over southern and eastern Asia, the soot particles in the brown clouds are in fact amplifying the atmospheric warming trend caused by greenhouse gases by as much as 50 percent.”

  • chrisl // August 11, 2007 at 4:45 am

    Interesting that over 1200 temperature monitoring sites are seen as minuscule and represent only 2% of the earth. Yet a certain species of tree in ONE location CAN be a proxy for the whole of the earth. And no data required!

  • cce // August 11, 2007 at 6:44 am

    Hey Tamino,

    Since it’s clear that this topic isn’t going to go away anytime soon, and many will ask “how could it have been as warm in the ’30s?”, maybe you could do a post showing the negative correlation between sulfur emissions and temperature in the US. You could remove the solar and ghg forcing trends from the US temps, and then plot what’s left as a function of SO2 emissions.

    I traced the SO2 plot that was linked in the comments. In rough 5 year increments we get:
    1880 500
    1885 1000
    1890 2000
    1895 3500
    1900 5000
    1905 7000
    1910 8500
    1915 10000
    1920 10500
    1925 11700
    1930 11000
    1935 8500
    1940 10000
    1945 13000
    1950 10600
    1955 10500
    1960 11200
    1965 13500
    1970 16000
    1975 14500
    1980 13000

    (numbers are in thousands of tons per year)

    I did a quickie plot and the dips in SO2 emissions correspond very closely to the spikes in temperature. But I’m not good enough to remove the solar and ghg influence.
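    For what it’s worth, the suggested analysis can be sketched as follows. The SO2 series is the one tabulated above (thousands of tons/yr, 1880-1980 in 5-year steps); the temperature series here is purely SYNTHETIC (a warming trend minus a made-up aerosol-cooling term), just to show the machinery, since the real detrended US temperatures aren’t given in this thread:

```python
# Sketch of the proposed analysis: detrend both series, then correlate
# the residuals. SO2 values are from the table above; temps are synthetic.
so2 = [500, 1000, 2000, 3500, 5000, 7000, 8500, 10000, 10500, 11700,
       11000, 8500, 10000, 13000, 10600, 10500, 11200, 13500, 16000,
       14500, 13000]

# SYNTHETIC temperatures: a warming trend minus an assumed aerosol term.
temps = [0.005 * i - 0.00002 * s for i, s in enumerate(so2)]

def detrend(y):
    """Remove the least-squares linear trend (in index) from y."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    slope = (sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
             / sum((i - xbar) ** 2 for i in range(n)))
    return [v - ybar - slope * (i - xbar) for i, v in enumerate(y)]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

r = pearson(detrend(temps), detrend(so2))
print(round(r, 2))  # -1.0 by construction of the synthetic temps
```

    (The perfect negative correlation is only because the synthetic temperatures were built with a pure SO2 term; with real data the interesting question is how strongly negative the residual correlation actually is.)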

  • nanny_govt_sucks // August 11, 2007 at 7:42 am

    Since the reduction in estimated U.S. (lower 48 states) temperature is 0.15 deg.C, and the lower 48 of the U.S. makes up less than 2% of the area of the globe, the reduction in the estimated global warming due to this latest correction is less than 0.003 deg.C.

    Another way of saying this is that we’ve only just begun to look at 2% of the globe and we’ve already found a correction of 0.003 deg C. Imagine what else could be found if the rest of the 2% were investigated, not to mention the other 98%!

  • John Cross // August 11, 2007 at 10:41 am

    Nanny, don’t you have some unfinished business from up the column a ways?

  • MrPete // August 11, 2007 at 3:28 pm

    It’s stronger than nanny’s summary. “Amateurs” have begun examining records covering less than 2% of the planet: the very best, most studied, most verified records. With little effort and no funding, they found a correction of 0.15 degrees C.

    This easily calls into question the confidence intervals of the US data. And since that data is better than global data, it more seriously calls into question the global data.

    As a start, one ought to back off and admit that globally we’re looking at at least a 0.3 deg C or larger error interval.

  • tamino // August 11, 2007 at 3:52 pm

    Imagine what else could be found if the rest of the 2% were investigated, not to mention the other 98%!

    Imagine this: we might find that global warming is even more severe than we previously believed.

    It’s naive to think that every correction we haven’t yet identified is going to reduce estimated global warming; it’s just as likely that the next correction discovered will be in the opposite direction. If we take the 0.003 deg.C global correction recently discovered, and imagine that there are a hundred such corrections yet to be found, we can expect the magnitude of the final result to be 0.003 x square root of 100 = 0.03 deg.C. And the final result is just as likely to be positive as negative. Either way, it won’t alter the basic conclusion: global warming is real, and it’s here.
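    The square-root-of-N reasoning is easy to check by simulation (a minimal sketch; the 0.003 deg C error size and the count of 100 are the figures from the paragraph above):

```python
import random

random.seed(42)  # reproducible

# 100 independent corrections, each 0.003 deg C in magnitude with a
# random sign; the typical (RMS) magnitude of their sum should be about
# 0.003 * sqrt(100) = 0.03 deg C.
trials = 20000
sq_sum = 0.0
for _ in range(trials):
    total = sum(random.choice((-0.003, 0.003)) for _ in range(100))
    sq_sum += total * total

rms = (sq_sum / trials) ** 0.5
print(round(rms, 3))  # close to 0.03
```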

    And the surface temperature record from thermometers is only one of a great many evidences. The surface temperature record from satellites indicates the same trend found in the thermometer record. The retreat of glaciers, migration of species, rise in sea level, increase in wildfires, increased frequency and severity of heat waves, reduction of global snow cover, melting of permafrost, earlier arrival of spring snowmelt runoff, bleaching of coral reefs, dramatic reduction of polar ice, etc., etc., etc., all indicate the same phenomenon.

    Suppose we found an error of 0.003 in our estimate of the probability of debilitating disease due to long-term smoking. Would you then cast doubt on the vast body of science behind the conclusion that smoking is bad for your health? Would you commence a 2-pack-a-day habit?

    Seriously, people: finding an error which reduces the estimated global warming since 1900 from about 0.8 deg.C, to about 0.797 deg.C, is hardly reason to doubt the reality of the problem.

  • MrPete // August 11, 2007 at 4:03 pm

    “If we take the 0.003 deg.C global correction recently discovered, and imagine that there are a hundred such corrections yet to be found, we can expect the magnitude of the final result to be 0.003 x square root of 100 = 0.03 deg.C.”

    That’s bad math. If there are 0.15 deg C errors in the USA data, it’s at least as likely that errors are the same size or bigger elsewhere. Thus, 0.15 or larger elsewhere.

    Secondly, it is NOT just as likely to be lower or higher. The vast majority of errors found to date have been in the warmer direction. Obvious observation bias and other effects.

    Finding errors that radically increase the uncertainty level in the US data is a Big Deal.

    [Response: The bad math is yours.

    You seem to believe that every location in the world must have errors as large or larger than the recent correction applied to the lower 48 states of the U.S. There's no physical or logical reason to believe that the next error discovered will be in one direction or the other; your supposition sounds like ideology, not science.

    And the claim that the vast majority of errors have been in the warmer direction comes from denialists, who loudly trumpet any warm error with an "aha!" but refer to the correction of cool errors as fraudulent. Witness how they still harp on "urban heating" in spite of the fact that it *is* corrected for, and global trends are based on rural stations only. But they still maintain that the UHI correction is ignored, or insufficient. Meanwhile, they continue to accuse GISS of dishonesty for applying time-of-observation bias correction.

    I suspect that if we found an error of 0.003 in the probability of illness from smoking, you *would* call into question the vast body of science establishing the dangers of smoking.]

  • Paul G // August 11, 2007 at 5:32 pm

    == tamino said: ==
    =”The surface temperature record from satellites indicates the same trend found in the thermometer record. “=

    Not only is the trend important, the degree of the trend is important.

    Does anyone have any information how the new data for the US surface temperature record compares to the satellite measurements for the US?

  • captdallas2 // August 11, 2007 at 8:43 pm

    Paul G

    Interesting you mentioned satellite measurement. Since satellite data confirmed unusual warming in the 2002 winter there may be another minor adjustment in the works.

  • Marion Delgado // August 14, 2007 at 8:44 am

    I feel better now - what is this, comment 300? Anyhoozel. In the end this is an embarrassment for Climate Denial and the Surface Stalkers. What have they taught us?

    They don’t know what the USHCN is or that it’s different from GISS - or the few that do don’t bother telling the herd. They don’t know the difference between the Lower 48 and the whole world. They don’t know that 1934 is indeed still in a statistical tie with 1998 - and by some readings, perhaps 2005. If at some point some reformulation makes 2005 the so-called hottest year by a hundredth of a degree, will they go Heaven’s Gate? They don’t know about the difference between the different means for the data from the 2 sources before and after 2000. They don’t know that other changes were made then to improve accuracy (which, presented correctly, is exactly what they did). They, not knowing anything, actually think Hansen said 1998 was the hottest year in US history. He, and GISS, said 1934 was. And not only was that COMMON KNOWLEDGE. For God’s sake, I was quoting a teeny-tiny run-of-the-mill environmental science email newsletter from 2001 - but over on Real Climate (and independently of me or any knowledge by me that they were doing it) they posted an actual paper from 2001 where Hansen EXPLICITLY SAYS 1934 WAS THE HOTTEST YEAR IN US HISTORY, *BUT* THAT IT’S TOO CLOSE TO SAY WITH VERY MUCH PRECISION AND CONFIDENCE. So it’s Hansen they’re vindicating.

  • george // August 15, 2007 at 12:06 am

    It appears that some are actually questioning the very reality of the global warming trend.

    Not just whether it is human caused, mind you, but the reality of the trend itself.

    This is just absurd.

    Forget for the moment that, even if 1934 is tied for the warmest year of the century for the continental US, there is a clear trend of warming since the 70’s in the lower 48.

    Also, forget for the moment that the continental US represents only a small fraction of the total global surface.

    The fact remains that the surface temp record is just a single line of evidence — out of many, many lines — that show that the earth is experiencing a warming trend.

    What about the melting glaciers worldwide? Are we to ignore those?

    What about the melting polar ice sheets?

    What about the change in the height of the tropopause?

    What about rising sea level?

    What about the satellite record that shows warming of the troposphere and cooling of the stratosphere?

    What about the increase in the heat content of the oceans?

    What about the plants responding to longer growing seasons and moving northward?

    Some would clearly have us believe that any error in the surface temperature record (or any other single line of evidence) — no matter how small — casts doubt on the very idea of global warming.

    Such people would have us ignore the vast majority of the evidence in favor of their idle speculation about the potential for large errors based on a few small errors found here and there.

    One reaches a point where further conversation with such “doubters” is simply a waste of time. I’d say that point was reached some time ago.

  • Chris O'Neill // August 15, 2007 at 4:00 pm

    “First of all, the difference between 1934 and 1998 is now estimated as 0.02 deg.C, which is probably within the error limits of the calculation, in which case there’s really a ‘statistical tie.’”

    ngsucks:”Why wasn’t it a “statistical tie” last week when 1998 was slightly warmer than 1934?”

    Why do you deny the fact that it was a statistical tie last week? In fact, not only a statistical tie but a dead heat tie (to less than 0.01 deg C).

  • ks // August 15, 2007 at 7:49 pm

    Chris,

    I’m not sure who you are questioning, but it was a statistical tie last week for the lower 48. And it was a statistical tie in 2001 when Hansen wrote the following -

    “In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.”

  • george // August 16, 2007 at 3:17 am

    Chris asked “Why do you deny the fact that it was a statistical tie last week?”

    The B-52’s wrote a song that answers your question, Chris: “Private Idaho.”

    Sometimes the rest of us wish we could live in our own private Idaho too — but, unfortunately for us, reality prevents us from doing so.

    Some people obviously do not have that problem.

  • nanny_govt_sucks // August 16, 2007 at 6:50 pm

    “In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.”

    Where’s Hansen’s mention of the “lower 48”?

    Funny that terms like “lower 48” and “statistical tie” are mentioned only when the numbers don’t point in the Alarmist direction. Why did no one bring up Hansen’s quote above before this week? Because before this week, “statistical tie” was an embarrassment to the cause. Now it’s a straw to grasp onto.

    (John, be happy to get back to you when I have a bit more time.)

  • Adam // August 17, 2007 at 8:46 am

    “Where’s Hansen’s mention of the ‘lower 48’?”

    Three sentences earlier. The whole paragraph (only part of which you’ve quoted) starts with the words “The US…”; it also refers to Plate 6, which is labelled:

    “Plate 6. (a) Annual and 5-year running-mean surface air temperature for the contiguous 48 United States relative to the 1951-1980 mean; and (b) global annual and 5-year running-mean surface air temperature based on the meteorological stations.”

    It’s all here, and has been for about six years: http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html

    Even in the press release he said:

    “The lower 48 United States have become warmer recently, but only enough to make the temperature comparable to what it was in the 1930s.”

    http://www.giss.nasa.gov/research/news/20011105/

  • Hank Roberts // August 18, 2007 at 4:28 am

    Well, they had to make a choice — they couldn’t both demonize him _and_ read what he wrote.

  • george // August 18, 2007 at 3:51 pm

    Some people just make things up with the hope that no one will invest in the effort to call them on it.

    When they do get called on it and are proven wrong, they certainly don’t admit it. They simply move on to the next “argument” — and on another thread revert to the previously debunked argument(s), in the (mistaken) belief that no one will notice.

    Round and round they go and where they stop, no one knows.

    It’s impossible to convince such people of anything because they are ideologically motivated* so it makes no sense to even try.

    *The moniker “nanny_govt_sucks” says it all.

  • SomeBeans // August 18, 2007 at 5:37 pm

    …but he did that deliberately knowing that one day we would see the ‘Recent Comment’ title:

    “nanny_govt_sucks on Surface Stations”

  • tamino // August 18, 2007 at 6:22 pm

    Please no “piling on” of NGS, and less ad hominem (if you want to ridicule his ideas, go for it, but give substantiation). I don’t insist on absolute Gandhi-like brotherly love, but I’d prefer a more civil tone generally.

    I’ve certainly been guilty myself! But the more civility we can maintain, and the more we can stick to the real topic, the better.

  • nanny_govt_sucks // August 18, 2007 at 7:40 pm

    Tamino, does this mean you’re going to refrain from using the term “denialist”?

    “Lower 48” is not included in Hansen’s quote, and it is interchanged with “U.S.” at different points in his paper. For instance, “lower 48” and “contiguous U.S.” are not mentioned in his abstract. Where’s the outrage?

    But aside from this is the question of how often Hansen’s uncertainty was referred to prior to last week. You couldn’t hear a peep. Now, this week, his uncertainty is on parade.

    Speaking of Hansen, how well do you all know him? His latest public letter reveals a real activist character that I thought was just a figment of Rush Limbaugh’s imagination: http://www.columbia.edu/~jeh1/realdeal.16aug20074.pdf

    [Response: As Adam points out (5 comments back) the mention of "lower-48" is in the paper, in the figure captions, and in the press release. It's quite clear to anyone who reads the whole thing, that this is the topic being discussed.

    Trying to get mileage out of the fact that Hansen doesn't mention "lower-48" in *every* sentence referring to U.S. temperature, is the kind of tactic that gets one accused of being a "denialist."

    If someone accuses you of being a denialist, or accuses me of being an alarmist stooge for NASA GISS, they should provide at least some rationale which is rooted in something relevant to climate science. But comments which have *nothing but* name-calling, or which carry ad hominem attacks to the extreme, or which are based on irrelevancies (like someone's choice of pseudonym), will be deleted.]

  • nanny_govt_sucks // August 18, 2007 at 11:14 pm

    Also, the y-axis on the Orland plot has a range of 6 deg.C while the y-axis on the Marysville plot only covers 5 deg.C; when comparing the graphs, that would tend to exaggerate the Marysville trend relative to Orland.

    You may want to pass this note along to Hansen. His y-axes are out of whack as well: http://www.columbia.edu/~jeh1/realdeal.16aug20074.pdf

  • Chris O'Neill // August 19, 2007 at 6:27 am

    ngs: “Because before this week, “statistical tie” was an embarrassment to the cause.”

    ngs still doesn’t get it. Before this week, US48 was dead equal in 1934 and 1998 at 1.24 degrees anomaly. Why didn’t the mitigation alarmists bring this up in the past? Because they know GISS was not using it as evidence for global warming. GISS still doesn’t use it as evidence for global warming. GISS’s argument has not changed.

  • george // August 19, 2007 at 5:36 pm

    “the more we can stick to the real topic, the better.”

    Presumably these words were also intended for those who keep bringing up James Hansen’s “activism”, “alarmism”, and other isms (including “communism”?).

    It gets old after the umpteenth time.

    We all know James Hansen is the evil communist devil incarnate who wants to take away all our property and send us back to the dark ages. You know, sit in a cold, dark house and walk or pedal a rickshaw to work in order to cut emissions.

So, having dispensed with the obvious, there is no longer any need to belabor it (again and again [repeatedly {ad nauseam}])

    So, let’s now declare “Open Mind” a Hansen “whine-free” zone and get back to being all lovey-dovey like George Thorogood’s old lady in “One Bourbon, One Scotch, One Beer”, shall we?

  • Hank Roberts // August 19, 2007 at 7:12 pm

    http://pdfdownload.04340.com/070819/tmp-hRjTGH/6855041003.png
    (that’s good for about 24 hours)

    The US ‘dust bowl’ decades were regional, not worldwide, events. Kind of like the Medieval warm spots, come to think of it. The rest of the world didn’t warm that much then.

    Current warming is global, both during the last big El Nino and today.

    1934 vs. 1998
    1921 vs. 2006

    Or see it here: http://www.columbia.edu/~jeh1/realdeal.16aug20074.pdf

  • henry // August 28, 2007 at 8:14 pm

    bigcitylib // Jul 30th 2007 at 6:45 pm

    Thanks,

    As you probably know, Anthony Watts’ blog (he is behind the surface station “project”)

    http://www.norcalblogs.com/watts/

    …is filled with graphs illustrating pretty much the same theme: bad stations show warming, good stations show cooling. Wonder if they’ve all been fiddled with to give this result.

    [Response: I doubt the graphs have been “fiddled with.” But I suspect there’s some cherry-picking going on, and that the actual data haven’t been examined very closely.]

    Comment (and question) about your response:

    I agree that the data probably haven’t been “fiddled with”, but as for your statement that “the data haven’t been examined very closely”: isn’t that what scientists are SUPPOSED to do before using it?

    [Response: What I meant is that I doubt the folks at Surfacestations.org have examined it very closely; I'm certain that the folks at NASA GISS have examined it in detail.]

  • Hank Roberts // August 31, 2007 at 4:00 pm

    If anyone has actually laid out an experimental design I haven’t seen it yet. Is there one? Something like
    – decide on the statistic to be used and how (for example, a 2-tail test rather than 1-tail)
    – written criteria for comparing photographs
    – have raters go through the photographs applying the criteria and recording them
    – compare the raters for consistency and repeatability of scoring
    – using the ratings, sort the photographs into say five bins (each rater to do this independently)
    – evaluate how good the rating system is by how repeatable the sorting is
    – goto 1, improve rating system, reevaluate
    – once the rating system provides consistent and repeatable sorts of the photographs,
    then
    – sort the photographs into final bins
    – pull the data for each station photographed
    – separate the data into the same bins
    – do the temperature trend for each bin (blind, the person doing the statistics should not know which bin is which)
    – submit to peer-reviewed journal
    – consider reviewer’s comments
    – revise and redo as necessary
    – if rejected, submit to Energy and Environment and “publish” online
    – Profit!
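
    The core of that protocol — checking that independent raters sort the photographs consistently, then computing a trend for each quality bin — can be sketched in a few lines. Below is a minimal, hypothetical illustration in pure Python: the station ratings and temperature anomalies are made up, Cohen’s kappa stands in for the rater-consistency check, and an ordinary least-squares slope stands in for the per-bin trend. It is a sketch of the idea, not anyone’s actual analysis.

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Agreement beyond chance for two lists of bin labels (Cohen's kappa)."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        labels = set(rater_a) | set(rater_b)
        expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                       for l in labels)
        return (observed - expected) / (1 - expected)

    def trend_per_decade(years, temps):
        """Ordinary least-squares slope of temps vs. years, in deg per decade."""
        n = len(years)
        my, mt = sum(years) / n, sum(temps) / n
        slope = (sum((y - my) * (t - mt) for y, t in zip(years, temps))
                 / sum((y - my) ** 2 for y in years))
        return slope * 10

    # Two raters independently sort five (hypothetical) station photos
    # into quality bins 1-5; kappa near 1 means the sorting is repeatable.
    rater_a = [1, 2, 2, 4, 5]
    rater_b = [1, 2, 3, 4, 5]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")

    # Trend for one (hypothetical) bin's annual temperature anomalies.
    years = list(range(1970, 1980))
    temps = [0.00, 0.03, 0.02, 0.06, 0.05, 0.09, 0.08, 0.12, 0.11, 0.15]
    print(f"trend = {trend_per_decade(years, temps):.2f} deg C/decade")
    ```

    With real data one would compute such a trend for each quality bin, blind to the bin labels, and only then compare the "good" and "bad" bins — which is exactly the step the photograph surveys skip.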

  • Martin // February 27, 2008 at 9:45 pm

    In all the discussion about people being ‘happy’ or ‘sad’ about the finding of the Y2K downward correction in the US temperature record, aren’t we forgetting that reality has not changed? Only our perception of it has changed. And if that means underestimating a serious problem even more, then yes, that makes me sad. If only a little.

Leave a Comment