Open Mind

False Claims Proven False

February 25, 2010 · 87 Comments

Two of the most prominent claims of global warming denialists have proven to be utterly false.


I’ve completed processing the GHCN data for the northern hemisphere. This project was undertaken to investigate two denialist claims: first, that the dramatic reduction in the number of reporting stations around 1990 introduced a false warming trend; second, that the adjustments applied to station data also introduce a false warming trend.

To investigate the first claim, I computed separate northern-hemisphere averages for stations that stopped reporting before 1992.0 (the “pre-cutoff” stations) and those that continued to report after 1992 (the “post-cutoff” stations), in order to see whether there’s a significant difference between the trends according to those two subsets.
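The partition itself is simple. Here is a minimal sketch in Python; the data structure and names are illustrative assumptions, not the code actually used for this analysis:

```python
# Partition stations by whether they reported after the 1992.0 cutoff.
# `stations` maps a station id to a list of (decimal_year, value) records;
# this structure is hypothetical, for illustration only.

CUTOFF = 1992.0

def split_by_cutoff(stations, cutoff=CUTOFF):
    """Return (pre, post): stations whose last report falls before vs. after cutoff."""
    pre, post = {}, {}
    for sid, records in stations.items():
        last = max(t for t, _ in records)
        (post if last > cutoff else pre)[sid] = records
    return pre, post

# Toy example: one station that stopped reporting in 1991, one that continued.
stations = {
    "A": [(1980.0, -0.2), (1991.5, 0.1)],  # stopped reporting before 1992
    "B": [(1980.0, -0.1), (2005.0, 0.6)],  # continued past 1992
}
pre, post = split_by_cutoff(stations)
print(sorted(pre), sorted(post))  # ['A'] ['B']
```

Each subset is then run through the same averaging machinery, so any difference between the two results isolates the effect of the station dropout.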

To investigate the second claim, I computed a northern-hemisphere average using the raw data (no adjustments) for all stations, and compared it to the northern-hemisphere average computed by NASA GISS, in order to see whether there’s a significant difference between the trends with and without the adjustments used by GISS.

First, here’s the comparison of the “pre-cutoff” and “post-cutoff” data sets:

Clearly there’s little difference between the results obtained using the two distinct subsets. We can compare them in more detail by computing the difference between the two: this is the change created by switching from pre-cutoff only to post-cutoff only.

Using only the post-cutoff stations did not introduce any false warming trend — if anything, the stations which were retained showed slightly less warming than those which stopped reporting, although the difference is not statistically significant.

As for the claim that the introduction of adjustments to temperature records has created false warming, here’s a comparison of the northern-hemisphere result using raw GHCN data to the NASA GISS result using only northern-hemisphere meteorological stations:

Once again the denialists’ claims are proven false. The adjustments used by NASA GISS have reduced rather than increased recent warming.

Unlike some who claim to have analyzed these data, I combined station records properly, computed temperature anomalies properly, and computed area-weighted averages. All these steps are essential for a correct result.
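To give a flavor of the area-weighting step, here is a minimal sketch. The names are hypothetical and this toy stands in for the real gridded computation, which also handles anomaly baselines and missing data; the key point is that a grid cell of fixed angular size covers less area near the poles, in proportion to the cosine of its latitude:

```python
import math

def area_weighted_mean(cells):
    """Area-weighted mean of per-gridcell anomalies.

    cells: iterable of (central_latitude_deg, anomaly) pairs.  Each cell is
    weighted by cos(latitude), proportional to its surface area.
    """
    num = den = 0.0
    for lat, anom in cells:
        w = math.cos(math.radians(lat))
        num += w * anom
        den += w
    return num / den

# Two cells with equal-magnitude anomalies: the near-equatorial cell dominates,
# so the weighted mean is far from zero.
print(round(area_weighted_mean([(0.0, 1.0), (80.0, -1.0)]), 3))  # 0.704
```

An unweighted average of those two cells would be exactly zero; the area weighting is what prevents a handful of high-latitude stations from being over- or under-counted.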

The claim that the station dropout is responsible for any, let alone most, of the modern warming trend, is utterly, demonstrably, provably false. The claim that adjustments introduced by analysis centers such as NASA GISS have introduced false warming is utterly, demonstrably, provably false.

Categories: Global Warming

87 responses so far

  • Tony // February 25, 2010 at 10:51 am | Reply

    One wonders why this kind of analysis wasn’t done by the critics. After all, it’s a very strong claim to make, with a great deal of potential to buttress sceptic arguments. Maybe McIntyre has some insight into this…

  • Scott Mandia // February 25, 2010 at 11:20 am | Reply

    Thanks, Tamino for doing the work (again)!

    Question: How many ocean stations have dropped out? (wry smile)

  • Christian A. Wittke // February 25, 2010 at 11:40 am | Reply

    Thank you! This is explained to where everybody can follow and make up her or his mind.

    @Tony
    The vast majority of the critics are not scientists! Clearly they will now look for some arbitrary alternative issue, stirring up a wannabe storm in another teacup by denying or just making up a counter-claim. Just more red herrings while the ice is melting.

  • guthrie // February 25, 2010 at 11:47 am | Reply

    The only thing missing is an actual link to the pages where our favourite dunderheids make these claims. Does someone have one handy before said pages disappear down the memory hole?

  • Richard Telford // February 25, 2010 at 12:23 pm | Reply

    The skeptics have been harping on about this for years. Why didn’t they do this analysis? Perhaps they were afraid that their result would look like this, which would spoil their fun.

    How long did the analysis take to do?

  • carrot eater // February 25, 2010 at 12:50 pm | Reply

    I’m trying to understand why the match between you and GISS is excellent, but then has a slight offset at the end.

    You are not giving any area weight to anything above 80 N, while GISS does. But if anything, that would make GISS run warmer than you in the last few years, not cooler.

    GISS does make a UHI correction, but I don’t see why that would make the two curves deviate a bit only after 1998. Maybe it’s just an accident of your coarser gridding.

    It’s been mentioned, but perhaps just as intriguing is Spencer’s ongoing effort. Tamino used the same raw data as GISS; Spencer used even rawer data in the form of weather reports from a larger number of stations. So even if you use different raw data, you get a similar picture (so far; Spencer isn’t done yet).

    Guthrie: If nothing else, they collected much of it in one place, so this won’t disappear:
    http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf

    The above is a collection of material from various blogs, including Watts’ and EM Smith’s. Note the bizarre predilection for averaging together absolute temperatures, instead of anomalies.

    As for adjustments, the blogs try to use death by a million pinpricks. They just pick out individual stations, say “I don’t understand this adjustment, so it must be fraud”, and then decide that the whole thing is fraudulent.

    • guthrie // February 25, 2010 at 3:35 pm | Reply

      That’s perfect, carrot eater, thanks.
      For example:
      “Around 1990, NOAA began weeding out more than three-quarters of the climate measuring stations around the world. They may have been working under the auspices of the World Meteorological Organization (WMO). It can be shown that they systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler.”

  • carrot eater // February 25, 2010 at 12:59 pm | Reply

    Could you toss on CRU and GHCN, land NH, on there too, for the fun of it?

  • Kevin McKinney // February 25, 2010 at 1:43 pm | Reply

    I can’t predict climate, let alone weather, but I have no fear in predicting that this (excellent!) analysis will not induce shame in those whom it should.

    They’ll go on saying the same old things. Hopefully, some more reasonable types will be willing to look hard enough to discern smear from analysis.

  • Amoeba // February 25, 2010 at 3:16 pm | Reply

    I’m absolutely convinced the denialists did do the computation. It’s the obvious thing to do, just in case it proved their case.

    But IIRC, just like the tobacco industry when it realised that the data confirmed the smoking-cancer link, they suppressed the finding and went ahead as if the data did prove their case – they lied.

    • carrot eater // February 25, 2010 at 5:01 pm | Reply

      While a competent person should be able to do what Tamino did here, I’ve seen no sign that Watts or EM Smith are capable.

  • barry // February 25, 2010 at 3:16 pm | Reply

    Excellent. Thousands of posts and articles on weather stations and analysis of one or a handful of locations, and the ’skeptics’ have never properly crunched the numbers. I doubt they’ll do the honourable thing now that the gauntlet has been thrown at them.

    Tamino, could you compute the trends from raw data to see how they stack up against the IPCC results for the periods below?

    Global mean surface temperatures have risen by 0.74°C ± 0.18°C when estimated by a linear trend over the last 100 years (1906–2005). The rate of warming over the last 50 years is almost double that over the last 100 years (0.13°C ± 0.03°C vs. 0.07°C ± 0.02°C per decade).
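The trend figures quoted here are linear least-squares slopes, reported per decade. A toy sketch of that computation (the published uncertainties also account for autocorrelation, which this version ignores; all names are illustrative):

```python
def trend_per_decade(years, values):
    """Ordinary least-squares slope of values against years, in units per decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den  # per year -> per decade

# A synthetic series warming at exactly 0.01 deg C/yr recovers 0.10 deg C/decade.
years = list(range(1906, 2006))
values = [0.01 * (y - 1906) for y in years]
print(round(trend_per_decade(years, values), 4))  # 0.1
```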

  • barry // February 25, 2010 at 3:18 pm | Reply

    That’s from the executive summary, Ch 3, AR4.

    http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter3.pdf

  • barry // February 25, 2010 at 3:43 pm | Reply

    Whoops, your analysis is of the NH.

    AR4 says of Northern Hemispheric warming:

    CRU/UKMO (Brohan et al., 2006)

    1906–2005: 0.075 ± 0.023 °C per decade

    1979–2005: 0.234 ± 0.070 °C per decade

    NCDC (Smith and Reynolds, 2005)

    1906–2005: 0.063 ± 0.022 °C per decade

    1979–2005: 0.245 ± 0.062 °C per decade

  • Zeke Hausfather // February 25, 2010 at 3:48 pm | Reply

    Tamino,

    If you aren’t intending to publish anything formal on this in the near term, would it be possible to get the R script for your analysis at some point? I know a lot of us would enjoy playing around with it a bit, looking at (among other things) difference between GHCN and GISS/Hadley data at a regional level, examining how different station combination methods would affect the result (e.g. Chad’s recent work), etc.

    It might also be neat to give the grid-level data to some GIS folks to make spatial maps of different data sets (GHCN raw, GISS, GHCN “pre-cutoff”, GHCN “post-cutoff”) for comparison.

    [Response: I haven't decided whether or not to publish this (peer reviewed). If I don't I'll probably make the code available to those who I consider serious investigators. That does not include denialists.]

  • gss_000 // February 25, 2010 at 5:09 pm | Reply

    Awesome! Thank you so much for doing this. I love using this site as a reference to show denialists how wrong their claims are. This is such a service.

  • carrot eater // February 25, 2010 at 5:35 pm | Reply

    Does this include ocean squares with island stations?

  • John Mason // February 25, 2010 at 5:42 pm | Reply

    Good work!

    When I read things like:

    “Around 1990, NOAA began weeding out more than three-quarters of the climate measuring stations around the world. They may have been working under the auspices of the World Meteorological Organization (WMO). It can be shown that they systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler.”

    then my suspicion alarm starts flashing. It’s a shame it takes so much work by folks such as yourself to get to the bottom of the story. Certainly justifies working this up into a paper, IMO.

    Cheers – John

  • bbttxu // February 25, 2010 at 5:56 pm | Reply

    Nice work, though isn’t it a little too early to declare victory having only processed half of the temperature data on Earth? It would be interesting to see this performed on the southern hemisphere as well.

    [Response: No, it's not too early. Not only is the northern hemisphere far more than half the temperature data for met stations, it's plenty big enough to establish the two results: station dropout did not introduce false warming, and the GISS adjustments did not amplify warming; they suppressed it.

    Perhaps D'Aleo and Watts will now claim that climate scientists engaged in a conspiracy to inflate global warming by tampering with only southern-hemisphere stations?]

    • MartinM // February 25, 2010 at 6:28 pm | Reply

      You’re missing the obvious explanation, Tamino. There is no Northern hemisphere. It’s merely an invention of the (semi)global climate conspiracy.

  • Bob Kutz // February 25, 2010 at 6:25 pm | Reply

    I don’t understand how you match GISS without the adjustments, since they themselves show adjustments that would preclude matching an unadjusted data set;
    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    Since GHCN says they’ve added .5 for the last 20 years; http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif , it seems a bit odd to me for you to say your unadjusted data matches their adjusted data.

    Are you saying x+.5=x?

    I’m thinking there’s some interesting math in your work here.

    Can I see your work?

    [Response: Your link is for USHCN only, which is only a small fraction of the data, and it's from NCDC, not GISS. As for the 0.5 deg. net, that's Fahrenheit. And I didn't say that my analysis matched the GISS result. It actually shows more warming than GISS.

    Your comment is both snide and, frankly, rather stupid. The only sensible part is where you admit you don't understand.]

  • Bob Kutz // February 25, 2010 at 6:45 pm | Reply

    No, the link may be to NOAA, but they’ve clearly captioned the page GHCN global gridded data, further, it’s really interesting that NCDC and GISS match very closely, but you are now saying you don’t claim to match GISS? A=B B=C, C=A, it’s called transitive, and it’s fairly simple to understand.

    I won’t sink to comments such as yours, I will again ask; can I see your work?

    [Response: Other than the page caption, the words "GHCN" and "global" don't appear on that page. And the caption to the graph you linked is "DIFFERENCE BETWEEN RAW AND FINAL USHCN DATA SETS."

    As for seeing my work, you bet your ass you can. I've decided to publish.]

    • Deech56 // February 25, 2010 at 7:09 pm | Reply

      RE Tamino:

      As for seeing my work, you bet your ass you can. I’ve decided to publish.

      Excellent! A peer-reviewed article will be that much more powerful, and more than McI and the others will do. Sort of like how Menne et al. stole the thunder from the surfacestations project. Hitting back with the science is the best course.

    • carrot eater // February 25, 2010 at 7:17 pm | Reply

      Bob, are you willfully being illiterate? You aren’t the first person to have missed the words “US” all over that page, so I’ll forgive the error once. But to insist on it?

      If you want raw vs adjusted for the overall GHCN, see here:
      http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html

      As for GISS specifically, the only adjustment they do is for UHI. Though they do import USHCN adjustments for the lower 48 US.

      I think I’ll dub this “Steve McIntyre Syndrome”. I find that his readers often have trouble telling the difference between US charts and global charts.

  • Derecho64 // February 25, 2010 at 6:49 pm | Reply

    Does anyone really believe that even if the denialists get all the “raw data”, get all the “code” they want, and have it in their hot little hands, that they’ll actually do the work and analyze the data?

    They’ve had 95%+ of what they whine about for years, and are still whining, and lying, and defaming.

    Too many of them, and often the loudest ones, are deep in conspiracy-think, and absolutely no fact or evidence will persuade them. IMNSHO.

  • VeryTallGuy // February 25, 2010 at 7:07 pm | Reply

    Always look forward to these posts, a pleasure to read statistics explained so simply and yet absolutely authoritatively.

    A suggestion: perhaps the Guardian might consider you as a guest contributor, given the shame they’re probably feeling just now?

  • Zeke Hausfather // February 25, 2010 at 7:38 pm | Reply

    Bob Kutz,

    The top of the page says GHCN because USHCN is a (small) part of GHCN.

    As far as adjustments in GHCN go, over the entire series the mean adjustment is 0.017 C per decade and the median is 0 C. See http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/

  • Zeke Hausfather // February 25, 2010 at 7:41 pm | Reply

    Er, to clarify a tad, that 0.017 C number is the mean adjustment of all individual stations. Since not all stations have the same weight in calculating global anomalies (the U.S. is particularly oversampled), a spatial analysis like Tamino performed will give you a different net adjustment result.

  • Estratocumulus // February 25, 2010 at 8:13 pm | Reply

    Hello Tamino.

    First of all, two comments: sorry for my poor English, and I agree with you on almost everything about climate science and climate denialism.

    Now, there is something unclear with the GISS 1200 km smoothing data. In June 2008, a participant in the forum at http://www.meteored.com did an analysis comparing GISS met stations, GISS with the 1200 km smoothing radius, and data from MSU.

    The fact is, GISS met stations plus oceans was in perfect agreement with MSU trends (as it also was with CRU and NOAA). But GISS 1200 km (that is, the “official” result from GISS) was introducing a little extra warming, which came from the application of the 1200 km smoothing radius to the land temperature record, instead of using met stations.

    Here you have the link to meteored where this issue was discussed (in spanish):
    http://foro.meteored.com/climatologia/seguimiento+temperatura+global-t78991.0.html;msg1772382#msg1772382

    And here the link only to the graph showing this difference:
    http://www.meteosat.com/imagenes/buf/gissestcionesmsu1.gif

    Of course, none of this changes the falsity of the false claims; this is not about the quality of the data but about the smoothing technique.

    Sorry, again, for my English, and thank you very much for this blog.

    • carrot eater // February 25, 2010 at 8:38 pm | Reply

      I’m not as smart as Tamino, but I can try to help.

      You are comparing the data here,
      http://data.giss.nasa.gov/gistemp/graphs/Fig.A.txt
      to the data here
      http://data.giss.nasa.gov/gistemp/graphs/Fig.A4.txt

      The first (A.txt) is the GISS global land analysis, but I cannot figure out what is the second (A4.txt). Can you clarify what you clicked on, to find this page A4.txt?

      But I think even Fig.A.txt includes 1200 km smoothing. It just excludes sea surface temperature data.

      If GISS with smoothing is warming faster than CRU or GHCN, it is probably because of the Arctic. The Arctic is warming faster than the rest of the world, but has few stations. So if you interpolate over the Arctic, you get more warming than if you omit it.
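As I understand the GISS scheme (Hansen and Lebedeff, 1987), a station's influence on a grid point tapers linearly with distance, reaching zero at the smoothing radius. A rough sketch (hypothetical function name) of why the radius matters for sparsely sampled regions like the Arctic:

```python
def station_weight(distance_km, radius_km=1200.0):
    """Linear distance taper: full weight at the grid point, zero at the radius."""
    return max(0.0, 1.0 - distance_km / radius_km)

# An Arctic grid point 900 km from the nearest station still gets weight 0.25
# under the 1200 km radius; with a much smaller radius it would get nothing,
# and the fast-warming Arctic would simply drop out of the average.
print(station_weight(900.0))         # 0.25
print(station_weight(900.0, 250.0))  # 0.0
```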

  • Estratocumulus // February 25, 2010 at 9:00 pm | Reply

    Carrot said:
    “If GISS with smoothing is warming faster than CRU or GHCN, it is probably because of the Arctic. The Arctic is warming faster than the rest of the world, but has few stations. So if you interpolate over the Arctic, you get more warming than if you omit it.”

    Of course we thought about the Arctic, but really the difference doesn’t seem to be there.

    The page A4.txt doesn’t have a link to it anymore, but it did when that analysis was done. That is the smoothed data, and that is the data that is really used to make the global anomaly. You can test it yourself: take 30% land and 70% oceans and you’ll see that it is the official GISS anomaly.

  • Eli Rabett // February 25, 2010 at 9:19 pm | Reply

    Estratocumulus, write an email to Reto Ruedy at GISS. He is a very nice guy and always answers polite inquiries promptly and as well as he can.

  • t_p_hamilton // February 25, 2010 at 9:27 pm | Reply

    Estratocumulus,

    The Arctic has few met stations. If you use the smaller smoothing radius, there are large swaths of the Arctic where there is no anomaly in the grid. Could that be the problem?

  • J // February 25, 2010 at 9:32 pm | Reply

    Bob Kutz keeps asking “Can I see your work?”

    You’ve got a very detailed description of the methods. Nothing’s stopping you from doing your own analysis using (1) the same data, and (2) the same methods.

    If you do that, and get a very different answer from Tamino’s, then it would be quite appropriate to ask to see his code, since clearly one or the other (or both) of you has done something wrong.

    Science is mostly not done by taking other people’s code and running it again to see if you get the same result. Instead, it’s done by having different people investigate the same problem using different data, different methods, and/or different code, to see if the results are robust or not.

    I do think there’s value in sharing code but whether or not Tamino chooses to do so is mostly irrelevant. The real test, if you think there’s something incorrect here, is to do it yourself and compare answers.

    • Chad // February 25, 2010 at 10:11 pm | Reply

      The vast majority of people who yell the rallying cry “FREE THE CODE!” wouldn’t have the slightest clue what to do with it. Some who would know what to do with it simply chant the rallying cry as another way of saying: “I don’t want to slave over a hot keyboard like you did for many hours and days on end. I don’t understand your method, even though you explained it clearly in multiple blog posts. Please do my work for me so I can do point-and-click science and crank out ridiculous blog posts and comments.”

  • J // February 25, 2010 at 9:38 pm | Reply

    Way back at the top of this thread, Tony writes: One wonders why this kind of analysis wasn’t done by the critics. After all, it’s a very strong claim to make, with a great deal of potential to buttress sceptic arguments. Maybe McIntyre has some insight into this…

    The problem is that it would take a “skeptic” who is:

    (a) competent enough to do the work, AND

    (b) not smart enough to understand that the likelihood of falsification was low before undertaking the analysis, AND

    (c) honest enough to report the results when they fail to falsify GISSTEMP.

    I would think that the “skeptic” population that meets all three criteria is small. Watts almost certainly fails to meet (a); I don’t know whether McIntyre failed to meet (b) or (c). He’s pretty smart, so he might well have understood that doing such an analysis was unlikely to be fruitful, and thus declined to do it in the first place. Or he might have tried it and then quietly dropped it.

  • David Jones // February 25, 2010 at 11:14 pm | Reply

    Tamino, have you seen this?

    http://statpad.wordpress.com/

    This site is run by a published statistics professor.

    [Response: I've seen it. That's where the idea of a separate offset for each month came from. As I said before, it's a viable alternative. It has advantages but it forces all the station records to have the same average annual cycle. And, it's less similar to the GISS procedure. Maybe I'll try it anyway. This much is just about certain: it won't change the overall conclusions.]

    • carrot eater // February 26, 2010 at 12:04 am | Reply

      You sure about that and GISS? I thought GISS uses month-specific offsets. It’s discussed in the 1987 paper.

      • John Goetz // February 26, 2010 at 1:50 pm

        No, GISS uses annual offsets.

        The GISStemp software as implemented since its release to the public in 2007 deviates in a handful of places from the 1987 paper.

  • tamino // February 26, 2010 at 1:00 am | Reply

    I’ve looked at the annual cycle for different stations in the grid containing Skikda, as used by RomanM in his example of the difference between computing a single offset for each station, and computing 12 separate monthly offsets for each station.

    The differences in the sizes of the annual cycles were larger than I expected. On that basis, I now think RomanM is right: using 12 separate monthly offsets is a better way to combine station records for a gridwide average to incorporate into a global average than using a single offset. That doesn’t mean the single-offset method is bad, just that the separate-monthly-offsets method is better.

    And it in no way invalidates the results of the analysis, which still show conclusively that station dropout did not create a false warming trend, and that the GISS adjustments did not exaggerate warming; they reduced it.
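A toy contrast between the two offset schemes under discussion (illustrative only; not the actual code of either analysis). Both align a station with a reference series over their shared months, but the second allows each station to keep its own annual cycle:

```python
# Series map (year, month) -> temperature; names are hypothetical.

def single_offset(station, reference):
    """One additive offset per station, averaged over all shared months."""
    shared = station.keys() & reference.keys()
    return sum(station[k] - reference[k] for k in shared) / len(shared)

def monthly_offsets(station, reference):
    """Twelve offsets, one per calendar month with shared data."""
    offs = {}
    for m in range(1, 13):
        shared = [k for k in station.keys() & reference.keys() if k[1] == m]
        if shared:
            offs[m] = sum(station[k] - reference[k] for k in shared) / len(shared)
    return offs

# A station whose annual cycle differs from the reference: +2 in January but
# +4 in July.  The single offset splits the difference; the monthly offsets
# preserve each month's own alignment.
ref = {(2000, 1): 0.0, (2000, 7): 10.0}
stn = {(2000, 1): 2.0, (2000, 7): 14.0}
print(single_offset(stn, ref))    # 3.0
print(monthly_offsets(stn, ref))  # {1: 2.0, 7: 4.0}
```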

    • carrot eater // February 26, 2010 at 1:27 am | Reply

      I suspected you’d come to that. It’ll add to the computational load, though.

      With that out of the way, the only disadvantage of your method compared to GISS is the computational burden. I think.

      Your method is still prone to the weirdness illustrated in Fig 2 of Peterson 1998. I wonder if there are any grid boxes where this occurs to any appreciable extent. But as you said before, there are trade-offs.

  • dckx // February 26, 2010 at 2:26 am | Reply

    “Perhaps D’Aleo and Watts will now claim that climate scientists engaged in a conspiracy to inflate global warming by tampering with only southern-hemisphere stations?”

    Now you’ve done it. Tomorrow you wake up on a remote island with an integer instead of a name.

  • Skeptical Science // February 26, 2010 at 6:15 am | Reply

    Tamino, do write that paper. I’ve read paper after paper proving that the sun isn’t causing global warming and each time a new paper comes out, I think why is this in peer-review? Isn’t this question well and truly answered? And yet each new paper is greeted with dismay, surprise and anger by the skeptics.

    I’ve just made a donation to support Open Mind. Keep up the excellent work.

  • W Scott Lincoln // February 26, 2010 at 6:25 am | Reply

    There is still hope for the world because of people like tamino independently verifying the science.

  • cagwsnib // February 26, 2010 at 10:56 am | Reply

    How do your results stack up against Phil Jones’ recent admission that there has been no statistically significant warming in the last 15 years?

    [Response: Been there, done that. You could easily have found out the truth before trolling.]

  • ScP // February 26, 2010 at 11:50 am | Reply

    Probably O/T

    http://scienceandpublicpolicy.org/images/stories/papers/originals/Rate_of_Temp_Change_Raw_and_Adjusted_NCDC_Data.pdf

    [Response: Whack-a-mole.]

  • Zeke Hausfather // February 26, 2010 at 3:41 pm | Reply

    ScP:

    http://rhinohide.wordpress.com/2010/02/01/ghcn-high-alt-high-lat-rural/ has a good chart showing the global spatially-weighted temps from GHCN for stations in areas with greater than and less than 100 people per square kilometer, which works well as an initial rebuttal. I’m sure some other bloggers will pick it apart, since it’s a fairly trivial claim to tackle (I might do it this weekend if I come across a good way to categorize lat/lon coords as urban or rural).

    My first impression (given the small sample size of stations) is that they just cherry-picked the stations used to show the trend they wanted to show.

    • carrot eater // February 26, 2010 at 4:04 pm | Reply

      urban/rural classification and population you can find in v2.temperature.inv

      I’m also suspicious of the sampling. Why 48? Why not use them all?

      We’ve seen time and time again, analyses that show that UHI does not really contaminate the final products. So I’m rather sceptical whenever I see a half-analysis like this.

      But at least this guy took the trouble to use anomalies, and gridding of a sort. That’s a definite step up from the SPPI report being addressed here. If nothing else, the author appears numerate.

      As a side note, what is it with people publishing graphs in the default Excel format?

      • Zeke Hausfather // February 26, 2010 at 4:34 pm

        Thanks!

        I guess this means I should finally switch from using the World Monthly Surface Station Climatology station database to GHCN raw… I wonder if I could do a simple poor-man’s spatial weighting for the U.S. by calculating the urban/rural station anomalies for each state and weighting them by the state’s land area?

        I always figured Excel purposefully made the default graphing scheme ugly as a way for us to quickly identify lazy people.

      • carrot eater // February 26, 2010 at 5:07 pm

        Leave the stone ages behind, Zeke.

        Honestly, I think it would be harder to code your poor man's version. It’ll certainly be messier, with lots of state-specific information hard-coded in.

        Just do it right, I think it’ll be easier. Work out the formula for land surface area, and go from there. The remaining question is whether to just average together stations within a box, or find the distanced-weighted average from the center of the box.

        The best thing about this: between you, Tamino, Ron Broberg, Nick Stokes, ‘the blob’, the ccc guys – there are now lots of people who’ll have on their desktop already written software. So when the sceptics make some sort of claim, the turn-around period before an assessment of the claim will be shortened. If only the sceptics would write such code for themselves, so they make more valid claims in the first place. They would then be making a contribution by pointing out real weaknesses, instead of wasting everybody’s time.

        The main thing missing is an emulation of the adjustments. The GISS UHI adjustment isn’t too hard to do; Nick Stokes did a poor man’s version for one region recently. The GHCN adjustments are more involved, and would be a bigger project.
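The grid-box area formula mentioned above can be sketched as follows. It is a standard spherical-geometry identity (the names here are illustrative): a lat/lon box's area is R² · Δlon · (sin lat₂ − sin lat₁), with longitude in radians.

```python
import math

def cell_area_km2(lat1, lat2, lon1, lon2, radius_km=6371.0):
    """Surface area (km^2) of a lat/lon box on a sphere of the given radius."""
    dlon = math.radians(lon2 - lon1)
    return radius_km ** 2 * dlon * (
        math.sin(math.radians(lat2)) - math.sin(math.radians(lat1))
    )

# A 5x5 degree box at the equator is several times larger than one at 80-85 N,
# which is why a simple average over grid boxes would overweight high latitudes.
equator = cell_area_km2(0, 5, 0, 5)
arctic = cell_area_km2(80, 85, 0, 5)
print(equator > 7 * arctic)  # True
```

Summing the formula over the whole sphere recovers 4πR², a handy sanity check for anyone coding this up.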

  • Xi Chin // February 26, 2010 at 3:54 pm | Reply

    Have you done the analysis using only rural stations with no adjustments for urban heat island effect? What does that data look like?

    I am confused because looking at the plots at WUWT (http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/) it looks like your plots contain the Urban Heat Island Effect adjustment?

    Could you show us the graphs for the Rural only stations with no adjustments?

    [Response: You're confused because you take WUWT seriously.]

    • carrot eater // February 26, 2010 at 4:11 pm | Reply

      There are no adjustments in Tamino’s method.

      The GISS set he’s comparing it to has only one adjustment, for UHI. In the overall picture, this adjustment reduces the warming trend by a bit. For how much, I think you can find in Hansen (2001), but that might be US-only.

      [Response: As far as I know, GISS also includes the USHCN adjustments (for US stations only).]

      • carrot eater // February 26, 2010 at 4:24 pm

        well, yes. lower 48.

      • Ron Broberg // February 28, 2010 at 12:05 am

        As far as I know, GISS also includes the USHCN adjustments (for US stations only).

        That sounds right. From the GIStemp script get_USHCN in which the US stations in v2.mean are replaced by USHCN stations: echo “replacing USHCN station data in $1 by USHCN_noFIL data (Tobs+maxmin adj+SHAPadj+noFIL)”

    • Zeke Hausfather // February 27, 2010 at 6:47 pm | Reply

      Xi Chin,

      Here is a quick rebuttal of Dr. Long’s claims:
      http://rankexploits.com/musings/2010/effect-of-dropping-station-data/comment-page-4/#comment-35383

      I’ll probably write something up more comprehensive later this week.

      • carrot eater // February 28, 2010 at 1:48 am

        Zeke,
        It’s safer to use the USHCN v 2.0 source files for US, instead of GHCN. I know it’s a bit more programming, with different file formats and everything. But it’s better. They don’t bother maintaining a lot of US stations in the GHCN; you have to go to USHCN to find them. And the latest adjustment procedure is in the USHCN. You might also spotcheck the raw files in each, to make sure they’re the same.

        I’m seeing a lot of confusion out there between USHCN v1 and v2. For anybody: if you see a graph with MMTS or SHAP written on it, it’s old.

      • Zeke Hausfather // February 28, 2010 at 8:39 am

        Given how oversampled the U.S. is, I suspect it won’t matter much. Plus, 47 of Long’s 48 rural stations are in GHCN.

        That said, I might as well do it to forestall the objection that I didn’t by folks who disagree with my results. The secondary benefit is that I can create grid-weighted temp series for USHCN raw, USHCN v1, and USHCN v2 to hopefully help put some of the sillier conspiracy theories about v2 to rest.

  • Zeke Hausfather // February 28, 2010 at 8:43 am | Reply

    Also, so far the adjustments between GHCN v2.mean and v2.mean_adj for lower 48 stations look quite similar to that USHCN net adjustments graph. Anyone know if GHCN uses the USHCN adjustment for U.S. stations in their adjusted dataset?

  • dp // February 28, 2010 at 6:13 pm | Reply

    I haven’t seen it claimed by scientists that fewer thermometers affects the trend. I don’t even know of a mechanism that would support this, except selective elimination of sites with the intention of biasing the outcome. Who are these people making these claims? This was asked earlier and only a useless answer resulted.

    As this is the second post I’ve seen on this today without a cite to the original claim (Lucia being the other) I’m now far more interested in getting to the source of these claims.

    [Response: I haven't seen the claim from scientists either. But it was front-and-center in a work by Anthony Watts and Joe D'Aleo for the right-wing think tank "Science and Public Policy Institute," an organization denying global warming.

    Their thesis is, just as you say, that scientists engaged in "selective elimination of sites with the intention of biasing the outcome." The facts contradict them.]
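    The statistical point can be checked with a toy simulation: give every station the same underlying trend plus independent noise, then randomly drop most of them. The trend of the surviving subset's average matches the full network's. (An illustrative sketch using synthetic data only; this is not the analysis in the post.)

    ```python
    import random

    def trend(series):
        """Ordinary least-squares slope of series against its index."""
        n = len(series)
        xbar = (n - 1) / 2
        ybar = sum(series) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
        den = sum((i - xbar) ** 2 for i in range(n))
        return num / den

    random.seed(0)
    years = 50
    # 1000 synthetic stations: a common 0.02 deg/yr trend plus station noise
    stations = [[0.02 * t + random.gauss(0, 0.3) for t in range(years)]
                for _ in range(1000)]

    def mean_series(subset):
        return [sum(s[t] for s in subset) / len(subset) for t in range(years)]

    all_trend = trend(mean_series(stations))
    # "Kill" 80% of stations at random, keeping 200
    kept_trend = trend(mean_series(random.sample(stations, 200)))
    print(all_trend, kept_trend)  # both close to 0.02
    ```

    Random attrition leaves the trend unbiased; only a drop-out correlated with warming rate could bias it, which is exactly what the pre-cutoff/post-cutoff comparison in the post tests for.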

  • dp // February 28, 2010 at 9:54 pm | Reply

    Anthony is a weather man turned blogger and we all know weather isn’t climate. If you post a story like this you should make it clear it is a volley in a blog war and that no science is involved so that we readers don’t waste our time trying to reconcile the differences. I suggest using “SWOTI” for “somebody’s wrong on the internet”, or NHMA for “nothing here, move along”.

    • Ray Ladbury // March 1, 2010 at 12:10 am | Reply

      dp, I disagree. The meme asserting that the temperature record is unreliable is ubiquitous. It is helpful to have a simple algorithm out there to verify the reliability and falsify the claims of the deniers. Regardless of how ridiculous they are, a lot of folks take them seriously.

  • dhogaza // February 28, 2010 at 11:38 pm | Reply

    Well, dp, there’s a bit of a difference.

    1. Watts has a high school degree

    2. Tamino is a professional statistician specializing in time series analysis (of which this is an example)

    3. Tamino will be submitting his work for publication.

    Now, if you can show Tamino to be wrong, then do so. Otherwise your post is nothing more than an ad hominem attack.

  • dp // March 1, 2010 at 1:35 am | Reply

    I’m all in favor of Tamino’s work – but if all it does is prove Watts wrong, how does that advance the science? He and I agree no scientist is advancing the idea that dying thermometers are affecting the trend (I presume we agree that for this to be true the dying thermometers cannot be selected for effect).

    I was drawn to this thread because I was led to believe it was a serious refutation of a scientific report on the sparse data/trend claim. No serious scientist is making that claim, so says Tamino, and so far, I agree.

    I’d far rather see scientists go head to head than science fans flinging SWOTI’s spears on the net. They have no value for science. There’s a fanboy blog a day popping up which are more than adequate for playing blog wars.

    And I’ve made no ad hominem attack. I haven’t even said Tamino is wrong. I simply found this thread to be unhelpful as it doesn’t address anything important unless one wishes to debate a weatherman about climate. That is characteristic of a blog war post. I’m old – I don’t have time to waste on climate bunny trails. One would think reader opinion would mean something on a blog named “Open Mind”.

    [Response: If you were hoping for a legitimate scientific dispute about the reality of global warming, or the validity of the surface temperature record, there isn't one. But TV weathermen misunderstanding climate is prominent, has affected public opinion and policy, and found favor with politicians. If you want scientific debate about aspects of global climate, read the peer-reviewed literature; that's where scientists go head-to-head.]

    • Marco // March 1, 2010 at 6:23 am | Reply

      dp, to hopefully add something to Tamino’s comment:
      “blog science” is getting credibility. Serious credibility. The “high ground” is to offer the scientists who are being attacked (often by proxy) a peer-reviewed thorough rebuttal of the obviously false claims. Fighting it out on a blog, which may be known, but most likely not read by the whole community of climate scientists, isn’t doing much. However, it takes time to write a paper and get it published, while the falsehoods are still doing the rounds.

    • Ray Ladbury // March 1, 2010 at 1:04 pm | Reply

      dp, I think you are missing the point. Tamino’s analysis demonstrates that a relatively simple algorithm is capable of assessing the reliability of the temperature record. It is this simple methodology that is worthy of publication, independent of what some idiot says elsewhere on the intertubes.

      The fact that individuals have been casting about unsubstantiated accusations of fraud, without making even the slightest effort to verify those accusations, is a severe indictment of the denialists’ credibility, but that is not what is worth publishing.

      With respect to micro-Watts not having a bad track record, you must be joking. What has he ever been right about?

      As to trustworthiness, when has science ever demanded trust of an individual? Instead, science provides a methodology for delivering trustworthy understanding of the physical world–even when practiced by fallible humans. Don’t trust people. Instead look for the ones who are doing science.

  • dhogaza // March 1, 2010 at 1:57 am | Reply

    I was drawn to this thread because I was led to believe it was a serious refutation of a scientific report on the sparse data/trend claim. No serious scientist is making that claim, so says Tamino, and so far, I agree.

    OK, if you’re saying this is like Mike Tyson beating the heck out of Johnny Weir, I won’t disagree.

    Unfortunately, the Watts of the world are getting a lot of play and while *we* know there’s no science there, a lot of people are being fooled. He’s been a guest on Fox, where he’s presented as a climate expert. If Inhofe were Chair of his committee rather than ranking minority member, it would not surprise me at all to see him call for hearings and bring Watts as an expert.

    Knocking him down is important, unfortunately, even though it won’t make any difference among the denialbots. Having a refutation – in particular, a peer-reviewed piece – in hand might be very handy politically at some point.

    Does anyone else here think that maybe the appearance of the Watts/D’Aleo piece means that whoever did the analysis that’s supposed to be done for the surface stations project had to tell him “sorry, no good: Menne, JohnV and all the rest are right”?

  • Robert // March 1, 2010 at 2:02 am | Reply

    “I’m all in favor of Tamino’s work – but if all it does is prove Watts wrong, how does that advance the science?”

    Better question: which is more important for our survival, further refining our understanding of global warming, or effectively containing and reversing the anti-scientific denialist nonsense that stands between the body politic and the necessary corrective steps?

  • David B. Benson // March 1, 2010 at 2:35 am | Reply

    dp // March 1, 2010 at 1:35 am — There are four major global temperature products, each using slightly different methods, and at least one using a somewhat different set of stations. Tamino is offering a different way to combine the data to produce a global temperature product. I think that is a contribution to the science, especially as it appears that CRU is planning to completely redo the HadCRU product.

  • dp // March 1, 2010 at 6:41 am | Reply

    One would think scientists were better behaved, but RP Jr. and Romm are busy blog flogging, too. It’s a sad thing to watch.

    As for peer review, we’ve experienced some failures in that regard of late, hence the need to cast a broader net and that is how I found this site. For the near term, scientists have lost the gift granted them by stature and tenure – I’m back to what I learned at Berkeley in 1963 – don’t trust anyone.

    I’m not too concerned about a weather man getting things wrong – it’s what they do, isn’t it? I’m interested in what they get right. Watts doesn’t have that bad a record in that regard, but we are well advised to question things. All things.

    [Response: Watts' record isn't bad, it's abysmal.

    And you should seriously reconsider that "we are well advised to question things. All things." Get serious: should we question that the earth isn't flat?

    'Cause that's the level to which Watts & Co. bring this discussion.]

    • J // March 1, 2010 at 5:59 pm | Reply

      “we are well advised to question things. All things.”

      Glad to hear that you have infinite time in your life. Those of us who aren’t so fortunate have to pick and choose what we spend our time on. When it comes to a public controversy involving science, that means assessing the credibility of the parties involved and lending more weight to the statements of the parties that have proven to have (a) better understanding of the science, (b) greater competency in the relevant analytical methods, and (c) higher ethical standards.

      For any given pairing of random individual X vs Anthony Watts, the former will probably prevail over the latter on points (a), (b), and (c).

    • Ron Broberg // March 1, 2010 at 7:39 pm | Reply

      … but we are well advised to question things …

      Especially claims by amateurs that they have overturned established science.

  • george // March 1, 2010 at 5:48 pm | Reply

    tamino disregard previous messed up blockquotes (again)

    dp says

    I’m not too concerned about a weather man getting things wrong – it’s what they do, isn’t it? I’m interested in what they get right. Watts doesn’t have that bad a record in that regard, but we are well advised to question things. All things.

    Anthony Watts is so bad he’s actually good.

    Unlike most weathermen, you can actually depend on his being wrong (far) more than 50% of the time.

    If Watts says it will rain Sunday, it’s time to get out the bathing suits, beach towels and sun tan lotion.

    • J // March 1, 2010 at 6:05 pm | Reply

      Oh ho, but have you compared the histogram of mean temperatures of sunny “beach” days vs cloudy “stay inside” days? The sunny day histogram appears to be shifted to the right a bit to my unaided eye.

      I was surprised to learn that only 5% of the “beach days” data-set was on the cool side of zero, while a whopping 95% was on the warm side. Even with a rising temperature trend, this seems excessive.

      When the distribution of data is so lopsided, it suggests that there may be problems with it, especially since there appears to be a 50% greater distribution on the “rainy days” side in the data-set.

      Clearly, somebody’s been tampering with the data. Once the GOP takes over in November, we’ll need to haul some scientists into Inhofe’s Star Chamber.

  • Scott A. Mandia // March 1, 2010 at 7:12 pm | Reply

    dp:

    The average person does not have access to peer-reviewed articles, or if so, perhaps not the training to understand the science. Instead, the typical person Googles a question and lands on a Web site. You know where that can lead.

    Several sources have claimed that WUWT is the most popular “science blog” on the Internet so it is very important to debunk the false claims made there.

    Have you read Mooney & Kirshenbaum’s book Unscientific America? M&K place a large part of the blame on scientists who have remained in their ivory towers instead of speaking to the masses in a language that is easily understood, à la the late Carl Sagan.

    Tamino, Realclimate, Skeptical Science, and others are reaching out, and I am convinced that the tide is turning. Climate science and climate scientists are under attack, and it is in humanity’s vital interest that these folks fight back.

  • agwPolitics // March 2, 2010 at 2:42 am | Reply

    Tamino et al,

    Your analysis of the temperature data is quite interesting. If your analysis holds up over time it will quiet criticism of the data set; however, you and your fellow respondents are missing the bigger picture. As one whose job it is to plow through the comments and research (both good and bad) of many sources, I can say that for my peers the focus of the information brought forth by D’Aleo and Watts is less about their claims of skewed data and more about “human factors” such as intent of actions and perception.

    Your analysis would seem to show (is that putting it mildly?? ;-) ) that the “omission” of weather stations in Canada from the data set had no significant effect on the integrity of the data, but if you are further correct that your analysis is the first of its kind (I tend to concur – so far) then that might be an accident. By that I mean that whoever reduced the number of reporting stations would not have done so by performing a careful analysis of the impact of their actions first. It does not LOOK GOOD (despite what your calculations show) that the number of stations was reduced. Average Joe would use “common sense” to conclude that more stations are better and fewer are worse. It LOOKS suspicious, no?

    Further, if someone WANTED to skew the data, would not the way to do this have been to alter WHERE the data come from? Some critics of the collected data say that the people who “altered the dataset” did so with malicious intent, and it was only through their ignorance, lack of math skills, and laziness that they didn’t get away with it.

    If you have the time, I would be interested in your thoughts. And if everyone could refrain from name calling and “labeling” in your replies it will make my editor happy and increase your credibility.

    Many Thanks
    ge

    [Response: The "malicious intent" suggestion is one of the biggest, ugliest lies to come from the denialist camp. Rather than report on something that didn't happen, you should report the dishonesty of those who have made such utterly false accusations. That's where the real story is -- otherwise you let the liars decide what news you're gonna cover.

    You're also under a fundamental misunderstanding. You have the idea that "someone" gets to choose which stations contribute data and which don't. Why don't you investigate that issue?]

Leave a Comment