Open Mind

Open Thread

February 5, 2008 · 229 Comments

It’s quite clear people want to argue about the validity of the surface thermometer record. Personally, I think it’s a non-issue which is harped on by denialists because they don’t have a real case. But, it’ll drive my hit count through the roof.

So this is an open thread for people to discuss anything *climate-related*. This is not for discussion of sex, religion, politics, or SpongeBob SquarePants.


I’ll point out that there are certainly flaws in temperature data, just as there are flaws in all data, but NASA GISS has worked very hard to correct for non-climate factors.

The issue has been beaten to death on numerous threads on numerous blogs, so I doubt I’ll be commenting further on this thread. Those who want to discuss the surface record, the sun, galactic cosmic rays, the hockey stick, etc. etc. etc., knock yourself out. Those who wish to raise other climate-related issues, go ahead. DO NOT pepper this thread with links to, or copies of, propaganda pieces. Write your own thoughts. Try to be polite, try to be relevant, try to be logical, try to be honest. Reprehensible posts will be deleted, and that includes personal attacks on Jim Hansen, Mike Mann, etc. Criticize the work, not the man, and that goes for advocates too — if you want to discuss S. Fred Singer or Patrick Michaels or Bob Carter, talk about the work not the man. One last requirement: discussion of the surface thermometer record, and other contentious issues not related to other posts, belong *here*, not elsewhere, so if you post a comment on *another* thread which really belongs *here*, it’ll be deleted.

This is my first experiment with an open thread… we’ll see how it goes.

Categories: Global Warming · climate change

229 responses so far

  • AtheistAcolyte // February 5, 2008 at 10:06 pm

    Hi again,

    I guess my previous post belongs here.

    In the time since, I’ve been in discussion with Reto Ruedy at GISS, and he’s been incredibly helpful thus far in hammering into my skull the details of homogeneity adjustments. I’ve gotten very close to figuring out the methodology, I think. I won’t write it out without his/their approval, but it seems pretty solid to me.

    So the next question I’m being asked by my loyal McIntyte is “Why make the adjustments at all? Why not just use the rural sites? Surely this doesn’t make the record more accurate?”

    My knee-jerk argument to this would be something along the lines of increasing the coverage per station ratio, and allowing more regional errors to creep into the answers. But I feel that to make an argument like that, I need some good mathematical reasoning behind me.

    There’s got to be other, more intuitive arguments.
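
    Here is a toy numerical sketch of the knee-jerk coverage argument above (all numbers are invented for illustration; each station is modeled as the shared regional anomaly plus independent local noise):

        import numpy as np

        # Toy model: each station reports the true regional anomaly plus
        # independent local noise (siting, instrument, microclimate).
        rng = np.random.default_rng(1)
        true_anomaly = 0.5        # hypothetical regional anomaly, °C
        noise_sd = 1.0            # hypothetical per-station noise, °C

        for n in (10, 100, 1000):  # number of stations retained
            trials = [np.mean(true_anomaly + rng.normal(0, noise_sd, n))
                      for _ in range(2000)]
            print(f"n = {n:4d} stations: sd of regional mean = {np.std(trials):.3f} °C")

        # The spread shrinks roughly as 1/sqrt(n), so discarding stations
        # (keeping only rural ones) makes the regional estimate noisier.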

    I enjoy the blog, look forward to discussion.

  • Nexus 6 // February 5, 2008 at 10:17 pm

    A new paper by Grudd shows a number of warmer-than-present periods over the past 1500 years in northern Sweden. The warm periods come out warmer than in Briffa et al.’s paper covering the same area. Both are based on the same tree ring series, I believe.

    What are people’s thoughts on the methodology used in this paper, and on the significance of the paper in general?

    http://www.springerlink.com/content/8j71453650116753/?p=9ddaf2f63141459da7289ee7be4a4b41&pi=5

  • Lab Lemming // February 5, 2008 at 11:04 pm

    Question:
    Can large-scale use of mid-latitude wind farms affect the equator-to-pole heat transport, or are the scales so different that putting in that many generators would produce millions of times more energy than we actually need?

  • Ed Davies // February 5, 2008 at 11:20 pm

    I’ve been reading this blog for a couple of months and found it very informative, though a couple of times you’ve lost me on the statistics.

    What does strike me, though, is that all of your analysis has been on the temperature record but climate change is, of course, not just global warming. Apart from sea level rise, the changes to weather patterns are likely to be at least as significant as the basic temperatures.

    What I wondered is whether you have given any thought to similar analysis of other climate parameters such as rainfall.

    For example, in a New Statesman article quoted on his blog:

    http://www.craigmurray.org.uk/archives/2008/01/drink_dictators.html

    Craig Murray writes: “I am sitting typing this in Accra, where I have been helping out with an emergency power generation project. One little-remarked consequence of climate change has been unpredictable rainfall patterns, which have adversely affected hydroelectric schemes. The consequences for Ghana, which until the recent problems got most of its electricity from hydro, have been dire. Last year power shortages caused an estimated 30 per cent drop in industrial production.”

    Obviously, I’m left wondering if there really is a statistically significant change in rainfall patterns or if there just happens to have been a few atypical years. I do worry that anything a bit unusual with the weather will get blamed on climate change; if the normal pattern re-establishes itself people will be left with the idea that there’s no real problem.

    [Response: Of course it’s an important thing to investigate. Part of the focus on temperature data is that in the public consciousness it’s the climate parameter most commonly associated with “global warming”; another part is the fact that there’s a lot of easily accessible and well-organized data for global (and local) temperature.

    The European Climate Assessment & Dataset Network makes precipitation data available for European locations, but I don’t know where to get global data. Nonetheless, I’ll look into it and maybe post on the subject in the near future.]

  • John Mashey // February 5, 2008 at 11:22 pm

    OK, I’ll bite.

    People might consider the combination of the following items; they’re not intended to be doom-and-gloom, but they certainly indicate that some investment policies are determined by the laws of physics, not wishful thinking:

    0) Peak Oil:
    http://en.wikipedia.org/wiki/Peak_oil is a start, if you’re not familiar with it, or read Deffeyes’ “Beyond Oil”, or Strahan’s “The Last Oil Shock”, or hunt up ASPO. We’re either at peak right now, or in a few years, and even oil companies are starting to admit it.

    1) Kharecha & Hansen, “Implications of “peak oil” for atmospheric CO2 and climate”
    http://pubs.giss.nasa.gov/docs/notyet/submitted_Kharecha_Hansen.pdf

    Peak doesn’t happen soon enough to make problems go away, and if we go big-time into coal, trouble.

    2) See Charles Hall’s work on EROI (or EROEI):
    http://www.esf.edu/EFB/hall/ (home page)
    http://www.esf.edu/efb/hall/talks/EROI6a.ppt

    Study slide #22, which illustrates the EROI evolution of various fuel sources, how far renewables have to go, and the lure of the coal bubble on the chart and its fine EROI … which item 1) says we’d better not use.

    3) Then add Hall’s and Ayres & Warr’s work on energy’s relationship to GDP:

    http://www.ker.co.nz/pdf/Need_to_reintegrate.pdf
    http://www.iea.org/Textbase/work/2004/eewp/Ayres-paper1.pdf

    Summary: energy (or energy used * efficiency) is the biggest factor in GDP. People with more energy are generally wealthier.

    4) Now, add in the assumption by many economists that GDP growth will continue essentially as is, say 1-3%/year, which in 100 years yields a world GDP of:
    1%: 2.7X
    2%: 7.2X
    3%: 19.2X
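
    (A quick check of that compounding arithmetic, just (1 + r)^100:)

        # Compound growth over a century: (1 + r)^100
        for r in (0.01, 0.02, 0.03):
            print(f"{r:.0%}/year -> {(1 + r) ** 100:.1f}x in 100 years")
        # 1%/year -> 2.7x, 2%/year -> 7.2x, 3%/year -> 19.2x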

    One finds assumptions of this sort in IPCC, and in Stern’s “The Economics of Climate Change”, and then economists argue about discount rates, but seemingly don’t argue much with the idea that “100 years from now, people will be much richer.”

    5) BUT: 0), 2), and 3) together imply that there will be periods in the next 100 years where GDP is unlikely to grow, because people will have to invest very hard to get more efficient and to replace fossil fuels, because that will take a while, and most of the easy oil has already been found. Study Hall’s EROI chart.
    Also, read the Hirsch Report to see what happens if we don’t start converting to renewables hard 20 years before Peak Oil [we didn’t].
    http://en.wikipedia.org/wiki/Hirsch_report

    6) Hence, any idea that one can ignore climate change mitigation, because really rich descendants can easily afford adaptation, is highly suspect. In particular, goods are not arbitrarily substitutable. Information goods, or those based on Moore’s Law, get cheap, i.e., any teenager will be able to have a Terabyte iPod :-), but it is difficult to see why things that require real energy in the physical world would get cheaper:
    - water
    - fertilizer [natural gas…]
    - food
    - gas/diesel for earthmoving, like to build dikes
    - steel and concrete for building sea walls
    - rebuilding New Orleans elsewhere
    - defending the San Francisco Bay against +1m sea level rise … well, OK, the first +1m might be OK, but people will have to handle the next +1m without much petroleum for the bulldozers.

    7) Charlie Hall’s slide 22 indicates that we have a long way to go. Of course, if we get more efficient, stop shipping bottled water halfway across the world, etc, cover the US Southwest with solar-thermal, cover roofs with PV, build lots of wind turbines, electrify transport, relocalize, and hope to make biodiesel work for ships & planes & big trucks … the result is OK. Put Hirsch & Hall together, and we should be spending money rather differently than in Bush’s proposed budget…

    8) My bottom line:
    IF we do all the things needed to keep even a semblance of the current first-world economy, and do them really fast, the world may stay rich enough to deal with the inevitable adaptation issues, and we might stave off the really bad ones.

    IF NOT … a lot of people will get nailed by the economic crunch before the climate crunch gets their kids. Clearly, it is in the interest of oil&gas companies to want people to burn it as fast as possible and keep them dependent, because they can harvest a larger fraction of GDP. [See ExxonMobil’s latest results, for example, more profit on lower shipments.] Infrastructures and vehicle fleets don’t change instantly. If someone’s short-term economic interests are tied to fossil fuels, I can understand the motivation to fight the idea of AGW.

    Why on earth anyone else is doing their best to fight the efficiency and FF-replacement that helps economy first, and climate second, is beyond me. I guess they have been suckered by the professional denialists’ arguments into positions that are against their own self-interests.

    In the USA, CA has worked on efficiency & environmental protection for decades, and yet, CA is not usually known as a poor, destitute place with a miserable economy that no one would want. I am getting tired of CA subsidizing some other states who aren’t being very smart about this.

    Anyway, we might have enough money to invest to do the right things, if we decide to, although Hirsch certainly worries me. If we don’t, some people’s descendants (I have none) aren’t going to thank their ancestors much, when the only jobs they can get are shoveling dirt to build dikes.

  • Timothy Chase // February 5, 2008 at 11:38 pm

    AtheistAcolyte wrote:

    In the time since, I’ve been in discussion with Reto Ruedy at GISS, and he’s been incredibly helpful thus far in hammering into my skull the details of homogeneity adjustments. I’ve gotten very close to figuring out the methodology, I think. I won’t write it out without his/their approval, but it seems pretty solid to me.

    Tamino has a post on the adjustments made to the station data here:

    Best Estimates
    May 11, 2007
    http://tamino.wordpress.com/2007/05/11/best-estimates/

    AtheistAcolyte wrote:

    So the next question I’m being asked by my loyal McIntyte is “Why make the adjustments at all? Why not just use the rural sites? Surely this doesn’t make the record more accurate?”

    Well, if a station was always urban and subject to the same urban heat island effect, then this will shift the temperature trend curve up at all points equally, and therefore it will have no effect upon the trend in temperature anomaly. Since what we are actually concerned with is the temperature anomaly, there isn’t any problem.

    I have bad eyes, but that doesn’t mean that I need to have them surgically removed. If I get glasses they will work just fine. In this case, the methodology in use acts as our glasses in the study of trends in temperature anomaly. And if there are known changes in the station that we can adjust for, why throw out the additional datapoints? The more datapoints at different points of the surface, the more detailed the picture of temperature anomaly distribution, and the better able we will be to estimate the average temperature over the entire area.
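
    A minimal numerical illustration of that constant-offset point (synthetic data; the 0.1 °C/decade trend and the 2 °C offset are invented):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1950, 2001)
        rural = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)  # ~0.1 °C/decade
        urban = rural + 2.0                        # same climate, constant UHI offset

        base = (years >= 1951) & (years <= 1980)   # common baseline period
        for name, series in (("rural", rural), ("urban", urban)):
            anomaly = series - series[base].mean() # anomaly relative to baseline
            trend = np.polyfit(years, anomaly, 1)[0] * 10
            print(f"{name}: {trend:+.3f} °C/decade")
        # Identical trends: a time-constant offset drops out of the anomaly.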

    Likewise, you shouldn’t expect urban growth to have much of an effect upon the trend at a given station once it has become urban. Hot air rises, particularly when you have moist air convection. (Infrared imaging by means of satellites shows that urban sources of heat are quite limited in their influence.)

    Please see:

    Abstract

    All analyses of the impact of urban heat islands (UHIs) on in situ temperature observations suffer from inhomogeneities or biases in the data. These inhomogeneities make urban heat island analyses difficult and can lead to erroneous conclusions. To remove the biases caused by differences in elevation, latitude, time of observation, instrumentation, and nonstandard siting, a variety of adjustments were applied to the data. The resultant data were the most thoroughly homogenized and the homogeneity adjustments were the most rigorously evaluated and thoroughly documented of any large-scale UHI analysis to date. Using satellite night-lights-derived urban/rural metadata, urban and rural temperatures from 289 stations in 40 clusters were compared using data from 1989 to 1991. Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures. It is postulated that this is due to micro- and local-scale impacts dominating over the mesoscale urban heat island. Industrial sections of towns may well be significantly warmer than rural sites, but urban meteorological observations are more likely to be made within park cool islands than industrial regions.

    Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found
    Thomas C. Peterson
    Journal of Climate, VOL. 16, NO. 18, 15 September 2003
    http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf

    Anyway, we can show that rural stations trend slightly higher than urban, we can look at the CRN5s on an earlier thread and see how they have been trending cooler on the whole than GISS since the late 1960s, etc. We can compare the trend against satellites, boreholes, etc.

    Apparently NASA GISS has things fairly well covered…

  • Dano // February 6, 2008 at 2:14 am

    IF NOT … a lot of people will get nailed by the economic crunch before the climate crunch gets their kids. Clearly, it is in the interest of oil&gas companies to want people to burn it as fast as possible and keep them dependent, because they can harvest a larger fraction of GDP.

    Some economists call this ‘hard landing or soft landing’. The longer we wait, the harder the landing.

    FWIW, I think many movers and shakers are directing their staffs to figger out how to not wait longer. Our civic discussions are moving this way too. We’ve got a long way to go, but our societies are learning how to negotiate publicly how to get there.

    The stated need for this thread, in my view, is excellent flypaper for denialists. They can hang out here and therefore not bother the rest of society with their cheeto-dusted claptrap.

    Best,

    D

  • JCH // February 6, 2008 at 2:37 am

    I invest in the oil and gas industry. They meet demand. They can do only so much to create demand. They have far less control over prices than people think. When prices were insanely low for decades, they could do nothing to make them go up. When the prices spiked in the 1980s, the industry was completely unable to restrain itself from overdoing drilling and exploration.

    In a global energy market, ExxonMobil is not the market force some imagine. ExxonMobil is just swimming with the tide, which is being driven mostly by the state oil companies. They control the game, and hold most of the big cards.

  • John Mashey // February 6, 2008 at 3:11 am

    I’ve helped sell supercomputers to petroleum geologists on most of the continents [not Antarctica]. I make no assertions about EM creating demand, but the point is, on the downslope of Peak Oil, with India+China coming up, nobody has to create demand any more. EM spent $26B to buy back its stock this year, with $40B in profit. That doesn’t bother me at all. What bothers me is the “what-me-worry” marketing, and especially, the funding of denialist entities. At least some folks, like Shell or BP, admit to reality.

  • Heretic // February 6, 2008 at 4:38 am

    Nexus 6, there might have been some periods warmer than now here and there, and these may have been related to non-global dynamics. In any case, now is not the warmest that we will get. The important thing is how warm we will get.

  • Evan Jones // February 6, 2008 at 5:06 am

    The question of the “validity of the surface thermometer record” devolves into two main considerations:

    1.) What is the deal with microsite bias?

    2.) Is UHI being lowballed?

    There are two outstanding factors involved in both of the above.

    The simple issue is waste heat. That’s just an issue of offset and frequency.

    The more complex issue is that of the heat sink. A lower offset, but the “gift that keeps on giving”.

    Just to frame the question proposed for this “open thread”.

    “But, it’ll drive my hit count through the roof.”

    Yes.

  • Evan Jones // February 6, 2008 at 5:17 am

    ‘“Why make the adjustments at all? Why not just use the rural sites? Surely this doesn’t make the record more accurate?”’

    Agreed. It won’t do. Gridding.

    P.S., We wants that algorithm! We wants it! Please, Master, please!

    (Your fellow-atheist appeals to you.)

    Ed Davies: Sea level is key. Greenland melt has increased but interior accumulation has made up for a fair chunk of that, thanks to increased precip.

    The IPCC just reduced its AR4 maximum sea level rise 100-year projections to a maximum of 17 cm. They figure most of that will be due to thermal expansion, not direct melt.

  • Evan Jones // February 6, 2008 at 5:24 am

    “Well, if a station was always urban and subject to the same urban heat island effect, then this will shift the temperature trend curve up at all points equally, and therefore it will have no effect upon the trend in temperature anomaly. Since what we are actually concerned with is the temperature anomaly, there isn’t any problem.”

    That’s what I thought. But according to LaDochy (12/2007), a heat sink is NOT a constant offset. It exaggerates a small temperature increase over time, increasing the T-Max delta by as much as 2x and the T-Min delta by a whopping 5x.

    LaDochy compares urban sites with nearby rural sites. Note well that he does NOT adjust for microsite violation, however; he just does the “Lights=0” thing. Therefore, the effect may be greater than even he estimates.

    He makes his observations in California, where there has been known regional warming.

    One presumes that this would work in reverse, and that a temperature drop would likewise be exaggerated over time.

  • Evan Jones // February 6, 2008 at 5:36 am

    “We’re either at peak right now, or in a few years, and even oil companies are starting to admit it.”

    I doubt it. Known world reserves were 3.4 tbls in 1975. They are 6.5 tbls today and growing constantly. And that’s only with today’s puny technology. I think we will be running away from oil long before we are running out of it.

    Oil companies (and the US government) have always been highly pessimistic in this regard.

    ‘and then economists argue about discount rates, but seemingly don’t argue much with the idea that “100 years from now, people will be much richer.”’

    Why would they? The “wealth curve” blows the “climate curve” away. Make that much, much richer.

    (P.S., It is ONLY the wealthy countries that go green in any real sense. In 20 years, the UDCs will be wildly wealthier than they are today. Then they will clean up just like the west and for the same reasons.)

    [Response: Perhaps rather than four comments in a row, you could organize your thoughts into one or a few?]

  • Evan Jones // February 6, 2008 at 5:56 am

    “Apparently NASA GISS has things fairly well covered…”

    A lot of these issues have been reopened, particularly UHI.

    Maybe I missed it, but I saw no accounting for heat sink, only waste heat.

    CRN5 station dynamics can differ, depending on whether they are affected by waste heat or heat sink.

    If you want to warm a greenhouse, all you do is add a large rock. It absorbs solar energy (and pumps up T-Max) then releases joules at night, seriously boosting T-Min. The more mass in the sink, the more the effect.

  • cce // February 6, 2008 at 7:23 am

    Evan,

    I commented in the other thread that methods completely independent of the surface instruments show warming between 0.14 and 0.18 degrees per decade during the satellite record. This is for the lower troposphere, which isn’t exactly the same as the surface, but will be very close. Setting aside individual years, which are irrelevant given the magnitude of natural variability, warming over this period (29 years) has been between 0.4 and 0.5 degrees.

    The RSS analysis, which has suddenly become every Skeptic’s best friend because of recent and relatively cool anomalies, shows the most warming of all of these. It shows more warming than the instruments.

    HadCRU and GISTEMP both show warming of 0.17 degrees per decade, which is within the range of these independent calculations. Unless the UHI effect is affecting the satellites (and radiosondes, which are the most unreliable of all), it is impossible for these issues to create any meaningful contamination, at least nothing on the order suggested by skeptics.
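
    (For anyone checking, the arithmetic behind those numbers:)

        # Per-decade trend accumulated over the 29-year satellite era
        decades = 29 / 10
        for trend in (0.14, 0.17, 0.18):  # °C per decade
            print(f"{trend:.2f} °C/decade x {decades} decades = {trend * decades:.2f} °C")
        # 0.14 -> 0.41 °C, 0.17 -> 0.49 °C, 0.18 -> 0.52 °C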

    People can cling to the idea that the “auditing” is going to discover a meaningful warming bias in the GISTEMP analysis but that is a false hope. The state of many surface stations is inarguably shabby, and any effort to improve them should be applauded, but to use them as a way to cast doubt on global warming is not credible. Problems exist, but they can be and are taken into account.

  • cce // February 6, 2008 at 7:35 am

    Re: peak oil

    Oil production in almost all non-OPEC countries has already peaked. As surely as the US peaked in the ’70s (and it isn’t due to lack of drilling) world oil production will peak in coming decades.

  • sod // February 6, 2008 at 8:08 am

    nice introduction Tamino!
    though it looks like people tend to get off-topic even in an open thread…

    CRN5 station dynamics can differ, depending on whether they are affected by waste heat or heat sink.

    If you want to warm a greenhouse, all you do is add a large rock. It absorbs solar energy (and pumps up T-Max) then releases joules at night, seriously boosting T-Min. The more mass in the sink, the more the effect.

    Evan Jones, that “rock” will have a tiny impact.
    thanks for pointing out LaDochy. he has some nice papers online. you might want to take a look at the LA comparison between the old and new weather station:

    http://ams.confex.com/ams/pdfpapers/119064.pdf

    the result is pretty devastating to your thesis:
    while Tmax did increase by about 1°C, Tmin stayed the same.
    how both changes together will influence climate data is an easy calculation that i’ll leave to you.

    and with the DWP location clearly being a class 5 station it looks like “error<=0.5°C”

    but hey, you’re just wrong by a factor of 10!

  • dhogaza // February 6, 2008 at 11:06 am

    Hank Roberts posted this link over on Deltoid.

    Apparently we’re seeing some sort of microlake microclimate siting bias or some such … since all these problems with the surface temp record show that warming’s being greatly exaggerated.

  • Andrew Dodds // February 6, 2008 at 11:33 am

    Evan Jones -

    The idea of there being 6.4 trillion barrels of conventional oil reserves is simply wrong - strictly defined, proven reserves are in the region of 300 billion barrels, with in the region of 600 billion probable that will be added by drilling. The figure you quote might just about apply to all the in-place oil ever found… but it sounds more like the classic mistake of adding P10 ‘possible’ reserves.

    (P10 = 10% probable, so adding two independent P10 estimates gives you a P1, 1% probable estimate, and so on, IIRC).
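
    (Making that parenthetical explicit; this assumes the two estimates really are independent:)

        # Two fields, each with only a 10% (P10) chance of delivering its
        # "possible" reserves; the chance that BOTH deliver is far smaller.
        p10 = 0.10
        print(f"P(both) = {p10 * p10:.2f}")  # 0.01, i.e. a P1 estimate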

    300 billion comes from the standard oil industry technique of putting *proven* reserves at 10 years’ current production - crude (sic.) but given the lack of data transparency in the industry, as accurate as you can get.

    I would generally contend that a combination of the large scale adoption of nuclear power with synthetic fuels (Methanol being favoured due to ease of manufacture and use) could effectively solve both global warming and fossil fuel shortages, but since this tends to annoy both ideological environmentalists AND ideological free-marketeers it’ll have to wait till we’ve tried everything else.

  • Dano // February 6, 2008 at 1:26 pm

    It absorbs solar energy (and pumps up T-Max) then releases joules at night, seriously boosting T-Min. The more mass in the sink, the more the effect.

    And as soon as you get outside, the temp drops dramatically. Just like immediately outside urban areas, esp on irrigated fields. As any runner or bike rider knows.

    Suggestion: try learning something about the issue before asserting something about it.

    Best,

    D

  • J // February 6, 2008 at 1:59 pm

    Re: surface stations, we’d like to have the best network possible. The problem is that what counts as “best” depends on what you’re using it for.

    If you’re producing NWS forecasts, or designing a network to assess future climate change, then the “best” network is one with an adequate number of very well sited stations.

    If you’re looking at historical climate change from the 1800s on, then the “best” network is one with lots and lots of stations whose records overlap temporally and spatially. That is overwhelmingly the most important consideration.

    Let’s say you have good temperature data for some site from 1850-1920. There are two other stations nearby, one that’s ideally sited but only dates back to 1990, and another that consistently reads 3 °C high but runs from 1900 to 2005.

    If you look only at the current data, you’d keep the ideal, post-1990 station, throw out the 3 °C-high station … and then be forced to throw out the historical data as well.

    On the other hand, you could keep all three stations, use the ideal post-1990 station to adjust the data from the 1900-2005 station, and then use *that* one to adjust the data from the original station.

    This demand to arbitrarily toss out lots of existing stations would (if followed) dramatically weaken the ability to construct historical time-series analyses. I don’t know whether the people making this demand simply haven’t considered that impact, or whether in their minds it’s a feature rather than a bug.
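
    A sketch of the chaining described above, with invented numbers (the offset_adjust helper and the three toy stations are hypothetical, and real homogenization is more elaborate than a single mean offset):

        import numpy as np

        def offset_adjust(reference, target, overlap):
            """Shift `target` so it matches `reference` on average over the
            overlapping years. Series are dicts of year -> temperature (°C)."""
            bias = np.mean([reference[y] - target[y] for y in overlap])
            return {y: t + bias for y, t in target.items()}

        # Hypothetical stations from the example above:
        ideal = {y: 10.0 for y in range(1990, 2006)}  # well sited, 1990-2005
        high  = {y: 13.0 for y in range(1900, 2006)}  # reads 3 °C high, 1900-2005
        old   = {y: 10.0 for y in range(1850, 1921)}  # original site, 1850-1920

        high_adj = offset_adjust(ideal, high, range(1990, 2006))    # remove the 3 °C bias
        old_adj  = offset_adjust(high_adj, old, range(1900, 1921))  # chain back to 1850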

  • fred // February 6, 2008 at 2:01 pm

    Moved from previous thread.

    BPL, to make a case for either complying with your published standard for measurement instruments, or changing it, we do not have to make any sort of case about the effects of non-compliance.

    This is the most basic principle of quality control. Either your processes must conform to your practice, or your practice to your processes. One or the other.

    It doesn’t matter what the effects of discrepancy are.

    If the out of spec instruments are more accurate, give lower, give higher, are less accurate, none of this makes any difference. Get the spec and the practice in line.

    Or stop being taken seriously as a data collection body.

    I am not allowed to say that I require my code to pass certain tests against buffer overflows, then test some of the time in a different way altogether, then say that it’s OK after all because the results are just as good…

    Neither can climate scientists take this point of view, and be taken seriously.

  • Evan Jones // February 6, 2008 at 2:08 pm

    “strictly defined, proven reserves are in the region of 300 billion barrels”

    But I was most definitely not referring to defined, proven reserves. That is a much narrower category. It is using “defined, proven” reserves as an indicator that led those such as Meadows so badly astray.

    “that “rock” will have a tiny impact.”

    Depends on the size of the rock.

    “the result is pretty devastating to your thesis:”

    Not according to the abstract of LaDochy, S., R. Medina, and W. Patzert. 2007. Quite supportive, in fact.

    It’s not an old stations vs. new issue. It’s the delta of rural vs. nearby urban over time.

  • JCH // February 6, 2008 at 2:27 pm

    There are some people who need to spend a week at a pediatrician’s office taking temperatures on sick kids. Show me a Doc or an RN who does not correct the data on anything other than the unmentionable reading.

  • luminous beauty // February 6, 2008 at 3:37 pm

    Evan,

    Your quixotic enterprise would be much more meaningful if global warming was an artifact constrained to highly urbanized areas:

    http://data.giss.nasa.gov/gistemp/animations/a5_1881_2003_2fps.mp4

    Apparently, it isn’t.

    It makes one wonder, just what are you trying to prove?

  • Evan Jones // February 6, 2008 at 3:50 pm

    Well, this is from the LaDochy (2007) abstract. It speaks for itself, pretty well.

    Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures.

    http://www.int-res.com/abstracts/cr/v33/n2/p159-169/

  • Horatio Algeranon // February 6, 2008 at 4:05 pm

    We’ve heard conversations,
    Of surface stations,
    “Quite worthless, without reservations.”


    From
    “A Tale of Two Surface Stations”

  • J // February 6, 2008 at 4:31 pm

    This is the most basic principle of quality control. Either your processes must conform to your practice, or your practice to your processes. One or the other.

    Fred, I think you’re getting too hung up on this.

    You seem to be suggesting that unless GISS/HADCRU/whoever limits themselves to only using CRNx, y, z stations, they’re failing the test of quality control.

    That’s a very limited example of quality control, IMHO. What is more relevant to me is (a) are they actually following the methods described in their publications, and (b) are those methods defensible for their very specific application?

    See my previous post in this thread for an example of how a given station might be undesirable for one purpose and essential for another.

  • Barton Paul Levenson // February 6, 2008 at 4:39 pm

    fred posts, unbelievably:

    [[It doesn’t matter what the effects of discrepancy are.]]

    You’re saying if it makes no difference to the result, they should still be thrown out?

    Do you read these things before you post them?

  • Hank Roberts // February 6, 2008 at 4:41 pm

    Dhogaza, re your pointer to the arctic lakes piece I posted.

    A personal plea. Please stop saying things backwards trying for irony. You’re a prolific poster. But you’re adding to the total of misstatements, because most people won’t get the frame, just the misstatement.

    People don’t get irony.

    Google doesn’t find it.

    When you add to the total number of misstatements trying to be funny, you add to the total number of misstatements.

    My opinion. This feels like performance art when it’s being typed.

    Afterward, it’s reference material.

    There’s good work being done on how people perceive this kind of discourse. Repeating the mistake and saying it’s wrong is read as and remembered as a repetition of the mistake, not refutation.

  • luminous beauty // February 6, 2008 at 5:52 pm

    “People don’t get irony.”

    Alas and alack.

  • dhogaza // February 6, 2008 at 5:55 pm

    Well, I’d call it sarcasm, not irony, but that’s a small point. I should hope it was a bit more obvious than you think it was, but perhaps not.

    For those who didn’t bother to chase the link, while denialists are arguing that the surface station record exaggerates warming, high arctic seasonal lakes (ponds, wet spots by the side of the caribou trail, etc) are drying up, which is an unpleasant surprise.

    A very high percentage of the world’s migratory shorebirds depend on this ecosystem, for example (not an exhaustive one, not even close).

    Denialists remind me of Nero fiddling while Rome burned, except denialists aren’t semi-mythic, they’re real.

    It’s probably not simple coincidence that CA, for example, attracts a lot of software and electronic engineers, people who typically don’t have their fingers on the pulse of the natural world. If you’re in tune with the timing of the seasons, of migrations, of the biological world, it becomes very difficult to believe people who claim it’s not happening.

    Saw my first Spanish imperial eagle today. Too bad southern Spain’s predicted to become a desert similar to North Africa in the next few decades.

  • sod // February 6, 2008 at 7:31 pm

    Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures.

    Evan Jones, a good start toward understanding a paper would be to actually READ it.

    the paper can be found here:

    http://wattsupwiththat.files.wordpress.com/2007/11/ca_climate_variability_ladochy.pdf
    (thanks to Anthony for at least providing a link to the original )

    the table on page 163 gives a mean temperature change of slightly below 0.08°C per decade for NON-URBAN sites alone.
    that gives a change of about 0.4 °C from 1950 to 2000, which sounds about right if you look at a picture of US surface data.

    the faster increase in urban sites is COMPENSATED for in the calculations for climate purposes.
    funny sidenote: Tmax is often LOWER in urban sites. (and is used for temperature calculations for climate studies…)

    btw, you DID notice that this article deals with a UHI effect?
    while your claim about the “error>5°C” is about MICRO SITE issues!
    so everything i said above still stands. the error is closer to “error<0.5°C” and this even INCLUDES part of the UHI effect… (because the new LA station is in a less affected UHI zone…)

  • Horatio Algeranon // February 6, 2008 at 7:31 pm

    “People don’t get irony.

    Google doesn’t find it.”

    Yes, I hear all that irony is getting to be a major problem.

  • Dano // February 6, 2008 at 7:33 pm

    Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures.

    This out-of-context statement is about as useful as me concluding, after reading it, that all my skin is contracting because one area contracted.

    Or, a finer point, as Shashua-Bar* asserts, some places in the middle of cities are 10 °F hotter than the fringes. Are we to conclude, then, that all areas are 10 °F hotter? No, as there is a spatial component to the UHI extent.

    As has been explained to UHI ignorami for years.

    It’s not as if, lad, you are offering something new.

    Now go and learn something about the issue so you can do a better job.

    Best,

    D

    * Shashua-Bar, L., Hoffman, M.E. 2000. Vegetation as a climatic component in the design of an urban street. An empirical model for predicting the cooling effect of urban green areas with trees. Energy and Buildings 31:3 pp. 221-235.

    Also read these to get you some learnin’:

    Stone, B., Rodgers, M. 2001. Urban form and thermal efficiency: How the design of cities influences the urban heat island effect. Journal of the American Planning Association 67:2 pp. 186-198.

    Voogt, J.A., Oke, T.R. 2003. Thermal remote sensing of urban climates. Remote Sensing of Environment 86:3 pp. 370-384.

    Peterson, T.C. 2003. Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found. Journ Clim 16:18 pp. 2941–2959.

    Pokorný, J. 2001. Dissipation of solar energy in landscape-controlled by management of water and vegetation. Renewable Energy 24:3-4 pp. 641-645.

    Akbari, H., Pomerantz, M., Taha, H. 2001. Cool surfaces and shade trees to reduce energy use and improve air quality in urban areas. Solar Energy 70:3 pp. 295-310.

    Santamouris, M., Papanikolaou, N., Livada, I., Koronakis, I., Georgakis, C., Argiriou, A., Assimakopoulos, D.N. 2001. On the impact of urban climate on the energy consumption of buildings. Solar Energy 70:3 pp. 201-216.

    Levitt, D.G., Simpson, J.R. Grimmond, C.S., McPherson, E.G., Rowntree, R. 1994. Neighborhood-scale temperature variation related to canopy cover differences in southern California. In: 11th Conference on biometeorology and aerobiology: 1994 March 7-11; San Diego. Boston: American Meteorological Society: 349-352.

    Git readin’!

  • luminous beauty // February 6, 2008 at 8:57 pm

    Hank,

    It’s a wee bit ironic that Google offers 22,200,000 hits for ‘irony’.

    Among them, this gem:

    http://www.guardian.co.uk/weekend/story/0,,985375,00.html

    What is more delicious than any rhetorical irony is the unintentional irony of obscurantist skeptics such as Anthony Watts and Evan Jones, who, in order to promote an argument that NASA is not adequately addressing UHI or micro-climate effects, quote a paper on the subject co-written by a NASA employee.

  • Hank Roberts // February 6, 2008 at 9:17 pm

    Just for closure, here below is a snippet from Aetiology and the cite to the research on what people get from attempts to correct misstatements. It’s (*sigh*) really discouraging.

    Knowing this is what works is why I tediously, ponderously, grindingly post homework-help level cites when I see a hobby-horse stampede arise (as is now happening over at Andy Revkin’s dot.earth).

    I posted over at Coby’s place:

    “… one suggestion — rephrase the main points. I’m always jolted by seeing a page listing all the false statements without qualification (same error Newsweek made recently on their cover).

    There’s good science now showing it’s not helpful:

    http://scienceblogs.com/aetiology/2007/09/deck_is_stacked_against_mythbu.php

    ____excerpt________

    ‘Obviously, this has implications for correcting these myths. The article suggests that, rather than repeat them (as the CDC “true and false” pamphlet does, for example), one should just rephrase the statement, eliminating the false portion altogether so as to not reinforce it further (since repetition, even to debunk it, reaffirms the false statement). Ignoring it also makes things worse, as the story noted that other research “…found that when accusations or assertions are met with silence, they are more likely to feel true.”‘
    ——end excerpt—–

    Short answer: don’t ignore bogosity, don’t repeat; rephrase. People learn from that.

    Damned shame. Yes, there are places where recreational snarking is still a lot of fun. But they tend to be polarized in-crowds, not helpful to later naive readers.

    Or, of course, Open Threads anywhere.
    Have at it! Them! Each Other! Me …

  • AtheistAcolyte // February 6, 2008 at 9:45 pm

    Evan Jones:

    Agreed. It won’t do. Gridding.

    Heh?

    P.S., We wants that algorithm! We wants it! Please, Master, please!

    (Your fellow-atheist appeals to you.)

    In a word: No. In a few more words: I don’t feel it’s mine to give. They’ve been very helpful in building my understanding, and if you really want to understand it yourself, contact them yourself.

    Timothy Chase:

    Thanks, but I’ve already read “Best Estimates”, and while I found it enlightening, it didn’t get at the actual mathematical basis for me. In other words, I couldn’t take a particular station and construct the adjusted time series in Excel. That’s why I contacted GISS. Straight from the horse’s mouth, as it were.

    I agree that any stable urban heat effect would only vertically inflate a temperature graph, but the UHI crowd argues precisely that growing urbanization is what introduces the trend. The adjustments are meant to filter out this urbanization trend.

    The point the McIntyte will make, if I may play Devil’s Advocate for a moment, is that if the urban station’s long-term trend is ruled by the rural station’s long-term trend, then for long-term trends, why not only look at rural stations?

    Is there a data source which shows the GHCN-rural-only dataset exhibiting a trend similar to the full dataset? I’d expect to find one in Peterson 1999, but I can’t find a PDF of that work. I found a surrogate graph (page 27) (PDF), but it has no substantive sourcing nor apparent peer review. I’m looking for something a bit more scientific.

    I also have argued long and hard with this particular fellow about Peterson 2003, and the topic has dropped off the talking points in favor of new topics. In short, I made the point that Peterson 2003 was never about discussing trends between rural and urban sites, as McIntyre tried to do in his “analysis”, but it was about establishing the efficacy of all adjustments in homogenizing the two datasets (eliminating UHI).

  • AtheistAcolyte // February 6, 2008 at 9:49 pm

    Horatio Algeranon:

    “People don’t get irony.

    Google doesn’t find it.”

    Yes, I hear all that irony is getting to be a major problem.

    Have you tried eating more spinach? yuk yuk yuk…

  • Evan Jones // February 6, 2008 at 10:15 pm

    “Or, a finer point, as Shashua-Bar* asserts, some places in the middle of cities are 10 °F hotter than the fringes. Are we to conclude, then, that all areas are 10 °F hotter? No, as there is a spatial component to the UHI extent.

    As has been explained to UHI ignorami for years. ”

    Why would that need explaining? It is obvious to the point of truism.

    But that is quite beside the point.

    “Now go and learn something about the issue so you can do a better job.”

    Your cites are all 2003 or prior and therefore cannot account for the recent revisiting of UHI.

    The point is that a heat sink is not a onetime offset. It affects the delta throughout.

    “Well, if a station was always urban and subject to the same urban heat island effect, then this will shift the temperature trend curve up at all points equally, and therefore it will have no effect upon the trend in temperature anomaly. Since what we are actually concerned with is the temperature anomaly, there isn’t any problem.

    I have bad eyes, but that doesn’t mean that I need to have them surgically removed. If I get glasses they will work just fine.” (Peterson, 2003) [It also says it used the Lights= method, as per Hansen 2001]

    LaDochy, OTOH, is clearly saying (in context) that the DELTA is affected. A modest increase in temperature would be increasingly exaggerated over time. (Obviously it would vary with each individual case.)

    That would seem to contradict the GISS one-time method of adjustment. That would work for waste heat, but NOT for a heat sink.

    “btw, you DID notice that this article deals with a UHI effect? while your claim about the “error>5°C” is about MICRO SITE issues!”

    Quite. And I pointed it out.

    “the table on page 163 gives a mean temperature change of slightly below 0.08°C per decade for NON-URBAN sites alone.”

    “that gives a change of about 0.4 °C from 1950 to 2000, which sounds about right if you look at a picture of US surface data.”

    As opposed to:

    “Using climatic division mean temperature trends, the state had an average warming of 0.99°C (1.79°F) over the 1950–2000 period, or 0.20°C (0.36°F) decade⁻¹. Southern California had the highest rates of warming, while the NE Interior Basins division experienced cooling. Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures. In comparison, irrigated cropland sites warmed about 0.13°C decade⁻¹ annually, but near 0.40°C for summer and fall minima. Offshore Pacific SSTs warmed 0.09°C decade⁻¹ for the study period.”

    “Evan Jones, a good start into understanding a paper would be, if you would actually READ it.

    the paper can be found here:”

    I think you may wish to reread the above. Compare the 0.99°C number with the 0.4°C number.

    More directly, the paper can be found here:

    http://www.int-res.com/abstracts/cr/v33/n2/p159-169/

  • Horatio Algeranon // February 6, 2008 at 10:36 pm

    don’t repeat; rephrase

    So, don’t say “Global warming is an urban myth”, but instead say “Global warming is a bourbon myth”?

  • Hank Roberts // February 6, 2008 at 11:38 pm

    Say there’s hope.
    http://www.realclimate.org/?comments_popup=523#comment-80717

  • luminous beauty // February 7, 2008 at 3:03 am

    As long as there are puppies there is hope.

  • Dano // February 7, 2008 at 3:46 am

    Evan, you have a lot of learnin’ to do. Good luck in understanding the issue so you can one day speak to it intelligently.

    IOW: asked and answered hundreds of times to the point of tedium. You bring nothing new, hence:

    [killfile]

    Best,

    D

  • Heretic // February 7, 2008 at 4:22 am

    What a load of crap this surface stations stuff is. It’s simply mind-boggling. Here is why we should ignore all the BS, as Tim Chase recently pointed out:
    Satellites
    Buoys
    Boreholes
    Balloons
    Those are direct measurements; then of course there are the species moving poleward and upward, the glacial earthquakes, and so on and so forth.

    The whole thing is nothing but a pathetic distraction which, when considering the big picture, is of such limited interest that it really does not deserve all the noise. That noise really confirms that there is such a thing as denialists and a denialosphere.

  • fred // February 7, 2008 at 6:14 am

    This one is getting to be a sort of touchstone for sincerity and denialism.

    You have a clearly indefensible situation, where an organization is operating a set of instruments some of which are out of spec with its own standards. It is the instruments that are out of spec. It is not that they are in spec but what they measure is variable and error prone (the child fever case, where the instruments are fine, but the condition is hard to measure accurately, at least orally). It is not that there is biased sampling but no argument about the measurement (the fossil case, where there is not suggested to be any problem because of out of spec excavation). It is, to take the medical example again, as if we had a standard for clinical thermometers. We found half or two thirds of those shipped did not meet the standard in a variety of different ways. The standard was adopted to prevent the use of thermometers with falsely high or low readings.

    Our reaction was, we will keep the standard, and we will also keep using the thermometers. This is indefensible.

    BPL, yes I did mean exactly what I said and thought about it before posting it. A huge proportion of the stations are out of spec. Either change the spec or stop using them. One or the other. Or stop being taken seriously as a scientific measurement organization.

    Heretic: it is only important because denialists cannot admit any criticism, of any aspect of the AGW canon. So we find among other things continuing refusal to admit that Mannian PCA is not a legitimate statistical procedure, continuing refusal to admit that failure to archive results is a problem and unjustifiable, and now refusal to admit that running stations which are out of your own specification is in any way questionable.

    One would have a lot more respect for the intellectual integrity of the movement if it would, occasionally, even once, admit its errors, correct them, and move on.

  • Jack // February 7, 2008 at 7:44 am

    It would be refreshing if someone visiting this site could post some actual evidence (this by definition excludes computer models) to support the man-made carbon dioxide global warming hypothesis. It seems to me you are mostly a group of insecure “warmers” and “warmists” trying to bolster each other’s prejudices or alleviate fears.

    In my opinion, AGW hysteria is entering its terminal phase. Whether the phenomenon will end with a bang or a whimper remains to be seen.

  • CraigM // February 7, 2008 at 10:03 am

    Evan Jones:

    “The IPCC just reduced its AR4 maximum sea level rise 100-year projections to a maximum of 17 cm. They figure most of that will be due to thermal expansion, not direct melt.”

    whah?

    Sorry, that don’t sound right to me. From IPCC AR4 SPM:

    “The total 20th-century rise is estimated to be 0.17 [0.12 to 0.22] m.”

    I think you’re confusing projections with the estimated sea level rise for the 20th century.

    The maximum projections for the next 100 years are up around the 60 cm mark. And many seem to think that is a conservative estimate too.

  • sod // February 7, 2008 at 11:59 am

    I think you may wish to reread the above. Compare the 0.99C number with the 0.4C number.

    why?
    do you know any source that claims that the world, the US or California warmed by 0.99°C since 1950?

    again:
    the slightly less than 0.1°C per decade of the RURAL stations is EXACTLY the warming that you would expect.
    looks like compensation works. fact.

    http://wattsupwiththat.files.wordpress.com/2007/11/ca_climate_variability_ladochy.pdf

  • JCH // February 7, 2008 at 1:48 pm

    The point of the child fever example is that within a doctor’s office there are several ways to measure temperature, and often multiple examples of the same tool in use. At my father’s office the gold standard was the rectal thermometer reading - please, no site photographs. The medical personnel can get exceptionally adept at correcting for a wide variety of errors, including between different examples of the same tool. That one runs cool, that one runs hot. They do that as a matter of course on the fly every single day of their working lives.

    Wander in some fool auditor genius who figures out that two versions of the same thermometer register a different temperature, and the auditor thinks he’s ready to practice medicine. Wacko.

  • tamino // February 7, 2008 at 1:49 pm

    So we find among other things continuing refusal to admit that Mannian PCA is not a legitimate statistical procedure

    This is no more true than the claim of a previous commenter who insisted that least-squares regression is bogus, and you can’t compute meaningful error estimates unless the noise is white.

    It’s sad that denialists have you so brainwashed: not only do you believe Mann’s *results* are mistaken, they’ve actually convinced you that “Mannian PCA” (whatever that means) is not a legitimate procedure. Which makes it ironic that you insist that it’s the *other* side that refuses to admit being wrong.

    It’s dishonest to suggest that we don’t acknowledge the flaws in surface thermometer data, when it’s legitimate climate scientists who have been refining methods to *correct* them for decades. That’s because they’re the ones who recognized the issues first — long before AGW was a hot political issue and denialists felt the need to discredit the data.

    It’s incomprehensible that you actually stand by your statement that “It doesn’t matter what the effects of discrepancy are.” That’s your story and you’re sticking to it. One would have a lot more respect for your intellectual integrity if you would admit your error and move on.

    It’s pathetic that you cling to a vain hope that somehow some undiscovered negative feedback mechanism will overthrow science, so reducing the human influence on climate as to save us from ourselves. You really do sound like a cigarette smoker who indulges in some fantasy that rationalizes doing anything *except* giving up smoking.

    Fred, for a long time I’ve thought you were a die-hard skeptic but not a denialist, so I thought it was worthwhile to engage you in discussion. It’s time for me to admit I was mistaken about that, and move on.

  • Barton Paul Levenson // February 7, 2008 at 3:51 pm

    fred writes:

    [[BPL, yes I did mean exactly what I said and thought about it before posting it. A huge proportion of the stations are out of spec. Either change the spec or stop using them. One or the other. ]]

    No scientist on God’s green Earth would throw out data because it had a bias. They would simply correct for the bias. You’re assuming, without proof, that if the stations are out of spec their data is useless. It isn’t. You’re just plain wrong.

  • Barton Paul Levenson // February 7, 2008 at 3:52 pm

    Jack posts:

    [[It would be refreshing if someone visiting this site could post some actual evidence (this by definition excludes computer models) to support the man-made carbon dioxide global warming hypothesis.]]

    I think John Tyndall did the basic lab work in 1859. Do you understand what a greenhouse gas is?

  • fred // February 7, 2008 at 4:14 pm

    OK, if decentered PCA is a legitimate statistical technique, give me a proper reference in a stats textbook or monograph saying so, and I will believe it is. Hey, I’ll use it as well. Just one authoritative account of doing PCA which shows it as a legitimate method. Contrary to Wegman.

    On quality, I believe one thing, and cannot be shaken from it, and that is that your specification MUST match your processes.

    I do not say they are wrong to use the stations they use, or even to construct them the way they have been constructed. I do not say the record they show is wrong. I just say, as was dinned into me for years in industry, your spec must match your process. If it’s good enough to do, it’s good enough to say you’re doing it.

    So if it really doesn’t matter, and I do not know whether it does or not, rewrite the standard to say so.

    I am not a denialist, and do not believe there are such animals. At least, I have never met one. But I haven’t gone to the Cato Institute site and similar odd places, maybe they hang out there. But I can see that having a standard that says your stations must be 20 meters away from artificial heat sources, and then putting them right next to one, is a ridiculous combination of acts. It’s the combination of acts that is unjustifiable.

    20 meters is just an example.

  • Lee // February 7, 2008 at 4:35 pm

    Fred, you are wrong.

    The surface station network is an historical network. It has acknowledged flaws.
    Among those flaws is that stations have been moved or altered in the past, introducing inhomogeneities, and that stations have had infrastructure grow around them in the past, introducing inhomogeneities. These are compensated for in various ways, depending on who is using the data for their analysis.

    That compensating mechanism is verified, in the period of overlap, by the satellite record. Further, JohnV’s analysis using only the ‘best’ of those stations also verifies the compensating methods.

    We cannot go back in time and produce a perfect surface station network - time travel seems to be contraindicated. This means we are stuck with the existing data, flaws and all. You would throw out much of that data. I’ll be blunt - that’s a stupid response. That data cannot be reacquired. It is HISTORICAL data. Examining and correcting the existing historical data for bias is a proper response, and that is what is done.

    Some people are calling for altering existing stations now to put them ‘in spec.’ That is even more stupid - intentionally introducing a massive inhomogeneity would cause major damage to the continuity of that temperature record. The CRN is being put in as an alternative high-quality network, specifically to deal with that issue.

  • Hank Roberts // February 7, 2008 at 4:38 pm

    If it makes a difference it shows up in the record.

    Current technology didn’t exist. Statistics as a field barely existed when the weather boxes were being put in place originally.

    Each site is what it is. Each record is what it is. Getting good information out of reality is what statistics is for.

  • luminous beauty // February 7, 2008 at 4:51 pm

    “I am not a denialist, and do not believe there are such animals.”

    The irony is killing me.

  • fred // February 7, 2008 at 4:53 pm

    Jack, assuming it’s a sincere question, the chain of reasoning and evidence seems to go something like this.

    1) Evidence that it is in fact warming. We could regard this as proven, subject to worries about the extent. Evidence from moving seasons, ground stations, satellite measurements, crop and vegetation coverage. Ice extents. My own worry is not whether it’s happening, but whether it is exceptional, and whether it can be linked to CO2. It is beginning to seem that post-1975 warming is exceptional within the record since 1850. Is this long enough to mark an exceptional term, though? Will it reverse? We are dealing with fairly short timescales. Tamino seems to have suggested that a few more years will tell, but that maybe we are not totally certain at the moment, which seems reasonable. So on warming, it’s a done deal. On exceptionality, maybe the verdict is: plausible but not proven.

    2) Evidence that CO2 rises are man-made. I always regarded this as certain too, until reading Spencer’s recent stuff, which made me wonder. But the argument has been from the chemical makeup of the CO2, and that seemed decisive. If Spencer is right, the worrying thing is that the chemical makeup stays the same during the annual fluctuations, which would be hardish to explain on the basis that it’s all man-made. We’ll have to see when he publishes and the stuff is peer reviewed. Unless that holds up, however, and assuming the chemical composition argument stands, it is certain that CO2 rises are human and not due to warming which has happened for some other reason.

    3) Then there is the argument from physics: we know that CO2 definitely absorbs heat radiation, and that the more there is of it, up to a point, the more it will absorb. This isn’t open to doubt. (A back-of-envelope sketch of the bare-CO2 numbers follows at the end of this comment.) The only issue is that this direct effect is not large enough to account for all the forecast warming. For that you need positive feedback loops, and though I will be abused for saying so, for me the jury is still out on the nature, evidence for, and extent of them. This means you can accept a lot of the argument, but have more doubts about the reasoning to the imminent desertification of Spain than about the earlier stages.

    4) Then there is causation. We may agree it is warming, that it is exceptional, that CO2 is rising, that this will lead to some warming, and that the CO2 rise is man-made, but still worry a bit about causation. The thing I worry about here is why other warmings, which I am not persuaded are incomparable to the post-1975 one, took place without CO2 rises. I worry rather about explanations of the form: this particular instance of X is caused by something different from all the other instances of X. Oh, and we don’t know what caused the other ones. I find it very hard to know how much weight to assign to this, but it’s a factor, and I’d be more comfortable if there were some explanation of the earlier warmings that we could rule out as the cause of this one.

    You are not going to get, at this stage, a definitive proof. What you get from going into the above in detail is a view that it is, though to what degree is a question, a plausible hypothesis. Bits of it seem better established than others. But more is being published all the time. We will probably know for certain, one way or the other, within five years.
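
    (The back-of-envelope sketch promised under point 3, using the standard simplified fit for CO2 forcing (Myhre et al. 1998) and a rough no-feedback sensitivity of 0.3 K per W/m^2; the constants are approximate, and the feedbacks are deliberately left out, since they are exactly what is at issue:)

        import math

        def co2_forcing(c_ppm, c0_ppm=280.0):
            # Simplified CO2 radiative forcing (Myhre et al. 1998), W/m^2.
            return 5.35 * math.log(c_ppm / c0_ppm)

        f2x = co2_forcing(560.0)    # doubling from the preindustrial ~280 ppm
        print("2xCO2 forcing: %.1f W/m^2" % f2x)              # ~3.7 W/m^2
        print("no-feedback warming: ~%.1f K" % (0.3 * f2x))   # ~1.1 K; feedbacks supply the rest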

  • tamino // February 7, 2008 at 5:01 pm

    Evidence that CO2 rises are man-made … the worrying thing is that the chemical makeup stays the same during the annual fluctuations, which would be hardish to explain on the basis that it’s all man-made.

    But it doesn’t. See this.

    Maybe we should start a “you might be a denialist” list, along the lines of Jeff Foxworthy’s “you might be a redneck” list. I’ll start:

    If you’re not convinced that the CO2 increase is man-made … you might be a denialist.

  • J // February 7, 2008 at 5:20 pm

    The argument about causation always seemed silly to me.

    There are two possibilities. One, that currently observed warming is being caused by the GHGs that we know we’re adding to the atmosphere, and that we know ought to be producing a positive radiative forcing.

    Or, alternatively, that some mysterious force is erasing the expected impact of anthropogenic GHGs, while a second mysterious force is producing the warming we’re actually observing.

    I suppose if you find your kid standing in the kitchen and holding a cookie, while the cookie jar lies in pieces on the floor, there might be explanations other than the obvious. But I think most people would agree that if we’re going to reject the simple and obvious explanation, we need to have very strong evidence in favor of the alternative.

  • fred // February 7, 2008 at 5:28 pm

    http://wattsupwiththat.wordpress.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/#more-619

    is Spencer’s post. Be interested to have a reaction to it. Is it just nuts?

    [Response: It’s fine *until the final step*, which is just nuts.

    Perhaps I’ll post on the topic soon.]

  • J // February 7, 2008 at 5:38 pm

    Maybe we should start a “you might be a denialist” list, along the lines of Jeff Foxworthy’s “you might be a redneck” list. I’ll start:

    If you’re not convinced that the CO2 increase is man-made … you might be a denialist.

    Eh. You’d need to include the caveat that (a) the person has been confronted with the evidence, and (b) is cognitively capable of understanding the evidence. Otherwise you risk tagging as “denialists” people who are merely uninformed, misinformed, or idiots.

    That said, I’d be interested in some kind of scale of denialism. Clearly, someone who insists that CO2 isn’t even rising is more of a denialist than someone who admits it’s rising but claims humans aren’t responsible.

    It’s kind of like Tim Lambert’s “bingo” game, except his doesn’t differentiate among the various degrees of stupidity involved.

  • luminous beauty // February 7, 2008 at 6:11 pm

    In the slim hope that fred has an outside chance of ‘getting’ the irony, I’ll add:

    If you deny you’re a denialist, and deny that denialism exists …you might be a denialist.

    I think ‘delusionalist’ or ‘obscurantist’ are better descriptors, but I cannot deny that common consensus meaning is derived from usage rather than rigorous definition.

  • J // February 7, 2008 at 6:13 pm

    Fred:

    I haven’t looked at Spencer’s latest claims, but how does he deal with the fact that atmospheric CO2 was basically flat for the past millennium, then began rising exponentially following the Industrial Revolution?

    And where exactly does Spencer claim that all the anthropogenic CO2 is going?

    That would be an extraordinary pair of coincidences. Something is mysteriously taking all of our own CO2 out of the atmosphere, and something else is simultaneously adding back in extra CO2! And both these mysterious forces only appeared on the scene at exactly the time when we began widespread industrial use of fossil fuels!

    Extraordinary claims require extraordinary evidence. I assume Spencer’s evidence must be really, really powerful … because the argument he’s making is, prima facie, highly improbable.

  • Horatio Algeranon // February 7, 2008 at 6:48 pm

    “if you find your kid standing in the kitchen and holding a cookie, while the cookie jar lies in pieces on the floor, there might be explanations other than the obvious.”

    Well, it’s obvious that the kid was just rescuing the cooky before the cooky jar committed hari cookyjari.

  • Barton Paul Levenson // February 7, 2008 at 7:20 pm

    fred posts, astoundingly:

    [[I am not a denialist, and do not believe there are such animals. ]]

    I have met them on RealClimate, Deltoid, Open Mind, and AOL. They are legion. They don’t believe global warming is happening, or they think it can’t be due to fossil fuel burning. They think they’re defending the American economy from evil Europeans and socialists. They live in a political dreamworld where you deny science if science says something that goes against your side’s interests. Rush Limbaugh is a denialist. Ann Coulter is a denialist. Pat Michaels and Richard Lindzen and Viscount Monckton and Ross McKitrick and Steve McIntyre are all denialists. Take a look around you. They’re not hidden.

  • Barton Paul Levenson // February 7, 2008 at 7:23 pm

    fred writes:

    [[For that you need positive feedback loops, and though I will be abused for saying so, for me the jury is still out on the nature, evidence for and extent of them.]]

    Google “Clausius-Clapeyron law.”
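
    (What that search turns up, in one number: saturation vapor pressure climbs roughly 6-7% per kelvin, so a warmer atmosphere holds more water vapor, which is itself a greenhouse gas. A quick check with one common empirical fit, the Bolton 1980 approximation; values are approximate:)

        import math

        def e_sat(t_c):
            # Saturation vapor pressure in hPa (Bolton 1980 approximation).
            return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

        for t in (0.0, 15.0, 30.0):
            rise = 100.0 * (e_sat(t + 1.0) / e_sat(t) - 1.0)
            print("%5.1f C: %6.2f hPa, +%.1f%% per degree of warming" % (t, e_sat(t), rise))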

  • Dano // February 7, 2008 at 7:27 pm

    If 8% of the population refuses to believe their eyes, who cares? They don’t have access anyway, being stuck in their parents’ basements or suffering ossification via aging.

    I’m more interested in a list of “you might be a denialist decision-maker if…”, because it is this small minority that is hindering policy-making, not comment-thread denialists.

    Best,

    D

  • Hank Roberts // February 7, 2008 at 7:52 pm

    The Spencer/Engelbeen exchange was interesting; the thread, like most, seems now lost to the taggers who just post the same scribble everywhere they go.

  • Paul Middents // February 7, 2008 at 7:56 pm

    Re Fred’s query on Roy Spencer’s latest:

    http://scienceblogs.com/stoat/2008/01/spencer_is_totally_off_his_roc.php

    William Connolley thinks he is completely off his rocker.

  • luminous beauty // February 7, 2008 at 8:02 pm

    Stages of denialism:

    1.) Global warming isn’t happening.

    2.) It’s happening, but it isn’t human caused.

    3.) Some part is human caused, but it isn’t significant.

    4.) The human cause is significant, but the consequences will be more good than bad.

    5.) The consequences are more bad than good, but there isn’t anything that can be done about it.

    6.) We could have done something about it, but now it is too late.

    7.) And so on…

  • Heretic // February 7, 2008 at 8:04 pm

    “We will probably know for certain, one way or the other, within five years.” Didn’t we hear something similar from various skeptics about 5 years ago? Are we going to hear it again in 5 years?

    You talk about intellectual honesty, Fred, and we just had an interesting example with the D’Aleo “paper.” Did Spencer have this peer-reviewed and published and if not, why not? It looks like it would make an interesting paper.

    Jack, the outgoing longwave radiation (heat) has decreased in the bands of the GH gases, in the proportion expected from the increased concentrations of these gases:
    http://www.nature.com/nature/journal/v410/n6826/abs/410355a0.html

    More heat is retained. I need some good reasons why the climate should not warm up. Fred is arguing that there might be another, unknown, more important source of warming, which happens to be acting right at the same time as this. Now that would be a coincidence that even D’Aleo would have a hard time finding.

    Stratospheric changes confirm the GH signature:
    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F1520-0469(1996)053%3C1339%3ATCOSAB%3E2.0.CO%3B2&ct=1&SESSID=6dee5785413b3a01f83cacbfd77ad44f
    This is not a peer-reviewed article but gives an accessible summary and features peer-reviewed references: http://www.atmosphere.mpg.de/enid/2__Ozone/-_Cooling_nd.html.

    Tropopause changes are consistent with the big picture:
    http://www.aero.jussieu.fr/~sparc/News17/ReportTropopWorkshopApril2001/17Haynes_Shepherd.html
    Santer also has work on this.

    I don’t know where Fred is getting his readings on feedbacks but, as always, blogs are not nearly as informative as the real stuff, like these papers:
    http://www.pnas.org/cgi/reprint/0702872104v1.pdf
    http://adsabs.harvard.edu/abs/2006GeoRL..3310703T
    http://www.agu.org/pubs/crossref/2006/2005GL025044.shtml
    http://www.met.tamu.edu/people/faculty/dessler/minschwaner2006.pdf

    Temperatures measured and inferred from all other sources than surface stations show consistent warming trends. Numerous proxies confirm these trends. One could almost say “who cares about surface station measurements?”, were they not a sizeable piece of data; and no scientist is going to disregard data, even less-than-ideal data.

    There are a lot of good reasons for being skeptical of the skeptic arguments. Those reasons tend to be in peer-reviewed scientific publications, whereas the skeptic arguments tend to rage in blogs or “Energy and Environment.”

  • Evan Jones // February 7, 2008 at 8:44 pm

    “The maximum projections for the next 100 years are up around the 60cm mark. And many seem to think that is a conservative estimate too.”

    That’s because I got it wrong. But it has just recently been revised:

    The IPCC has reduced its max. projection to 17 inches, not cm, or about half its previous maximum.

    Here is a very recent WSJ story on it. (Note that it is not particularly critical of the IPCC.)

    http://www.opinionjournal.com/editorial/feature.html?id=110009625

    A note on the “potentiality” of CRN ratings: a heat sink has most of its effect at T-Max, and even more so at T-Min. And those are the two points most important in measuring US temperatures. So the error could be absent 90% of the time and still show up full-blown in the climate record.

    I strongly advocate a rechecking of the surface station records.

    (sod reports that Germany uses an hourly average, a much superior method.)

  • Evan Jones // February 7, 2008 at 8:54 pm

    Stages of Affirmation:

    1.) All important resources are nearly at an end. Alles ist weg (everything is gone).

    2.) Some resources are nearly at an end unless we take drastic measures.

    3.) Resource depletion is significant, and the effects are threatening.

    4.) Some resources are becoming scarce, but limiting growth and redistribution may prove effective.

    5.) Some resources may become scarce in the future. Caution advised.

    6.) Resources are not scarce but may become so in the future.

    7.) I’ll come in again.

  • cce // February 7, 2008 at 9:20 pm

    You can’t directly compare the TAR and AR4 SLR figures.

    The TAR’s upper sea level figure was 88 cm by 2100.

    AR4’s upper figure was 59 cm, plus 17 cm for “rapid dynamic changes in ice flow”, i.e. 76 cm against the TAR’s 88. Further, the AR4 end date was 2090-2099, which is worth a few cm.

    The rest of that editorial is similar garbage and is based on the fantasies of Lord Monckton. The SPM is written by scientists and requires joint approval of the lead authors and each government’s representatives. AR4 didn’t exclude the Hockey Stick. It didn’t say that the TAR “overestimated human influence by at least one third” and no one says that models require warming year in and year out.

  • dhogaza // February 7, 2008 at 9:25 pm

    What does that have to do with climate science, Evan? That’s got to be the most off-the-wall post I’ve seen here.

    Are you claiming that climate science is driven by some off-the-wall apocalyptic woo-woo crap or what?

    There’s nothing in your seven points that relates to climate science in the least. The first six made me laugh; the seventh made me shrug my shoulders, thinking to myself, “pity all those spinning electrons that will carry his posts to me when he follows through on his threat”.

  • John Mashey // February 7, 2008 at 9:36 pm

    See:
    http://www.climatesciencewatch.org/index.php/csw/details/oreskes_lecture/

    I heard an earlier version of this talk a year ago, and it is really good stuff, detailing the long history of climate science before politicization, the rise of the George C. Marshall Institute, denialist roots, etc. She knows some of these folks personally, and they are not nice.

    Some here may recall the silly Monckton/Schulte/Ferguson/Morano attack on her. If you don’t, I collected together the timeline and details of the sorry mess in:
    http://www.zerocarbonnow.org/wordpress//uploads/monckton_schulte_oreskes1.pdf
    as an illustration of the media manipulation involved.

  • luminous beauty // February 7, 2008 at 10:36 pm

    Evan’s Terminator as Stuart Smalley impression leaves something to be desired.

  • jacob l // February 7, 2008 at 11:53 pm

    How is this for a description of AGW?
    http://www.agu.org/journals/rg/v027/i001/RG027i001p00115/RG027i001p00115.pdf

  • tamino // February 8, 2008 at 12:01 am

    Fred,

    You NEED to watch this video by Naomi Oreskes, about the activities of denialists.

    The whole thing. IF you have the courage to face the truth.

  • steven mosher // February 8, 2008 at 12:08 am

    Timothy,

    I am glad you brought up the Peterson paper, because it provides strong support for Anthony Watts’ work. Peterson’s paper is routinely cited but seldom understood. Let me summarize it for those who have not read it, one step at a time.

    Peterson believes that UHI (Urban Heat Island) is a reality. That is, he believes in the climate science that has held and shown for over 100 years that urban locations are warmer than rural locations. Hence, Peterson starts his argument with the following:

    “As just about every introductory course on weather and climate explains, urban areas are generally warmer than nearby rural areas.”

    The concern, of course, is that this factor may be introducing a bias into the historical climate record. You can witness this concern by observing that Peterson and Hansen adjust their data for URBAN bias. When Peterson issues his data, the last adjustment he does is for URBAN bias. Green roofs: Google it.

    Climate science accepts, as fact, that urbanity can skew or bias results. I do. You do.

    Next: Peterson attempted to show the following. If the temperature records are homogenized and adjusted, then we will see no difference between rural and urban. Very simply: it is the consensus of climate science that urban centers are warmer than rural locations on average. (Peterson does not deny this climate science.) However, he is concerned that this bias may infect the historical record. So he compares “rural” stations to “urban” stations to judge the difference. And he makes various adjustments to reconcile the urban with the rural.

    He selects stations in the US. He explicitly argues that his results are valid only for the sample he selected: CONUS. Good for him.
    That is, he argues that CONUS can be corrected for UHI. CONUS is 1.6% of the world land mass.

    He selected stations that are different from the GISS stations. His results are valid for the population he sampled. It is not the same as the population that GISS samples.

    So, the first questions for you to answer, Timothy, are:

    A. Is UHI (urban is warmer in general than rural) real, or is climate science wrong?

    B. Does Peterson limit his conclusions to CONUS or not?

    C. Does Peterson use the same stations as GISS?

    This is the first order of business. When we settle these three questions, we move on to the interesting ones.
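
    (For what the rural-vs-urban comparison amounts to statistically, a toy sketch: a difference of two group means, with a standard error that shrinks only as the samples grow. The station counts and temperatures below are hypothetical, for illustration only.)

        import numpy as np

        def urban_minus_rural(urban, rural):
            # Difference of group means and its standard error.
            d = urban.mean() - rural.mean()
            se = np.sqrt(urban.var(ddof=1) / urban.size + rural.var(ddof=1) / rural.size)
            return d, se

        rng = np.random.default_rng(1)
        urban = rng.normal(12.3, 1.0, 48)   # hypothetical station annual means, deg C
        rural = rng.normal(12.0, 1.0, 13)
        d, se = urban_minus_rural(urban, rural)
        print("urban - rural = %.2f +/- %.2f C" % (d, se))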

  • P. Lewis // February 8, 2008 at 12:16 am

    I’m not overly fond of YouTube (an infrequent visitor and never staying long), but I was captivated for the 58 minutes 36 seconds of Naomi Oreskes’s talk. John Mashey is right: it’s good stuff. The hour just flew by.

    I’d recommend all “believers” watch it. I’d always wondered why some blog commenters went on about the George C. Marshall Institute. Not having much interest in the makeup of US institutes, I never delved. Now that I know the context, things are much clearer.

    All “deniers” should avoid it. It could open your eyes to the history of global warming science (if you’ve never managed to get past the link for Spencer Weart’s magnum opus), the political machinations and the masquerading of political double-speak and downright lies as science debate. Hell, it might even begin to change some people’s minds. No, if you’re a denier, don’t bother.

    No, there aren’t many communists left.

  • luminous beauty // February 8, 2008 at 1:25 am

    Mosher,

    “Is UHI ( urban is warmer in general than rural) real, or is climate science wrong?”

    1.) Look up false dichotomy.

    2.) Explain why most global warming is not in urban areas.

  • Evan Jones // February 8, 2008 at 2:01 am

    Who is denying global warming? The question isn’t whether global warming is real. The question is the extent of it.

    A heat sink will exaggerate the delta of a mild warming. A heat sink will also exaggerate the delta of a mild cooling.

    Therefore, for a heat sink to be exaggerating the existing temperature record (other than the immediate offset), there must have been some amount of warming to begin with. The question is how much.

  • Dano // February 8, 2008 at 2:34 am

    Wow.

    Excellent Oreskes talk.

    Excellent. Spread the URL.

    Best,

    D

  • Dano // February 8, 2008 at 2:55 am

    I am glad you brought up the Peterson paper because it provides strong support for Anthony Watts work.

    Oh?

    Watts is taking temperature measurements now? Comparative? Over time?

    About time. Where may we view and audit the temp data?

    Best,

    D

  • Heretic // February 8, 2008 at 3:57 am

    Evan, how much warming is shown by radiosonde data, satellites, boreholes, etc.?

    That is, of course, if you totally disregard surface station data. Of your points of whatever-you-call-it, not one has anything to do with climate science or even your own arguments about surface stations. What are you trying to say?

    Steve Mosher, the strongest warming is indeed happening without regard for urbanization; why is that? And if the stations are so bad, how come there is strong agreement with all other sources? And the butterflies and birds and so forth, they’re getting wrong numbers too?

    Nothing but nonsense. It is a good thing that the ones doing real science are not waiting for you to tackle the interesting questions.

    By the way, you were bragging and boasting about how FORTRAN did not impress you, since you had “boxes of it” in your garage; what’s up with that? Hansen, the king of fraud according to CA, DID make the code public, so now what?

  • Timothy Chase // February 8, 2008 at 4:11 am

    P. Lewis wrote:

    I’m not overly fond of U-tube (an infrequent visitor and never staying long), but I was captivated for the 58 minutes 36 seconds of Naomi Oreskes’s talk. John Mashey is right: it’s good stuff. The hour just flew by.

    I enjoyed it. I didn’t know that the Marshall Institute had played such a pivotal role in the politicization of the public’s perception of various scientific issues — and quite honestly didn’t know how well established so much of the science had been decades ago, and how well the US government had been listening to the actual science back in the 1970s.

    Informative, educational, and a nice distraction from coughing and sneezing. Had to run around last night and today getting a bunch of paperwork done and faxed for a position I’ve landed — and I am trying to get over a cold before the thing starts.

  • Heretic // February 8, 2008 at 4:36 am

    Congrats Tim, I’m sure you deserve it (the position, not the cold)

  • Timothy Chase // February 8, 2008 at 4:48 am

    Steven Mosher wrote:

    I am glad you brought up the Peterson paper because it provides strong support for Anthony Watts work.

    Dano wrote:

    Oh?

    Watts is taking temperature measurements now? Comparative? Over time?

    No, but he has been cherry-picking surface stations, comparing the temperature records of individual “good” stations against “bad” stations in order to argue that urban stations are distorting the temperature record.

    You can also see a little of what is being argued by the “skeptics” with their reference to rocks as heat-sinks, e.g., when Evan Jones wrote:

    If you want to warm a greenhouse, all you do is add a large rock. It absorbs solar energy (and pumps up T-Max) then releases joules at night, seriously boosting T-Min. The more mass in the sink, the more the effect.

    Global warming isn’t due to greenhouse gases but to the laying of too much asphalt, which explains why the diurnal temperature range (difference between daytime highs and nighttime lows) has been decreasing. Nice theory. Now if they want to do science, they need to turn it into a testable hypothesis. Here is a prediction: urban stations will have a smaller DTR than rural stations.

    Let’s check:

    The DTR is particularly susceptible to urban effects. Gallo et al. (1996) examined differences in DTR between stations based on predominant land use in the vicinity of the observing site. Results show statistically significant differences in DTR between stations associated with predominantly rural land use/land cover and those associated with more urban land use/land cover, with rural settings generally having larger DTR than urban settings.

    Climate Change 2001:
    Working Group I: The Scientific Basis
    http://web.archive.org/web/20061209234846/http://www.grida.no/climate/ipcc_tar/wg1/054.htm
    http://www.grida.no/climate/ipcc_tar/wg1/054.htm

    Bingo!

    Let’s do another testable hypothesis: the falling trend in DTR will be much stronger in urban areas than in rural areas.

    Let’s check:

    Although this shows that the distinction between urban and rural land use is important as one of the factors that can influence the trends observed in temperatures, Figure 2.2 shows annual mean trends in diurnal temperature range in worldwide non-urban stations over the period 1950 to 1993 (from Easterling et al., 1997). The trends for both the maximum and minimum temperatures are about 0.005°C/decade smaller than the trends for the full network including urban sites, which is consistent with earlier estimated urban effects on global temperature anomaly time-series (Jones et al., 1990).

    Climate Change 2001:
    Working Group I: The Scientific Basis
    ibid.

    Apparently not.

    So much for the asphalt theory of global warming.
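
    (Both predictions are mechanical to test once you have station Tmax/Tmin series: compute DTR = Tmax - Tmin per site, then compare levels and trends by land-use class. A sketch, with synthetic series standing in for real station records; the “urban” site is given nights warming faster than days:)

        import numpy as np

        def dtr_stats(tmax, tmin, years):
            # Mean diurnal temperature range and its linear trend (deg C/decade).
            dtr = tmax - tmin
            return dtr.mean(), 10.0 * np.polyfit(years, dtr, 1)[0]

        years = np.arange(1950, 1994)
        rng = np.random.default_rng(2)
        r_mean, r_trend = dtr_stats(rng.normal(25.0, 0.5, years.size),
                                    rng.normal(10.0, 0.5, years.size), years)
        urban_tmin = 12.0 + 0.02 * (years - 1950) + rng.normal(0.0, 0.5, years.size)
        u_mean, u_trend = dtr_stats(rng.normal(24.0, 0.5, years.size), urban_tmin, years)
        print("rural DTR %.1f C (%+.2f C/decade)" % (r_mean, r_trend))
        print("urban DTR %.1f C (%+.2f C/decade)" % (u_mean, u_trend))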

  • Timothy Chase // February 8, 2008 at 5:10 am

    Heretic wrote:

    Congrats Tim, I’m sure you deserve it (the position, not the cold)

    Well, I know a team of three that got let go by Microsoft back in November — and all of them are still out of work, so it doesn’t seem to be that easy to find work at this point. In that respect, I guess, I am pretty lucky.

  • Heretic // February 8, 2008 at 5:23 am

    I used to pride myself on being a moderate, but the BS flung around is reaching biblical proportions. I think I’m definitely tipping over to the side of Dhogaza, Dano and others.

    I wish all those skeptics would apply their skepticism to stuff that really costs them and me a lot of money every f***ing day.

    To all skeptics out there who claim to be good with data: look up the data for activated protein C (Xigris) and realize that every time a Medicaid recipient is given a full course of it (5 or 6 bags, 8 grand each), your tax dollars pay for it. That’s happening NOW, and will continue, because somehow the FDA approved the darn thing, although the benefits (if any) are so marginal that they’re hardly worth the cost. Why were there no skeptics screaming bloody murder (pun intended) at Merck and asking for ALL data and ALL codes to be made public about their now infamous COX-2 inhibitor? That is stuff that REALLY cost billions and thousands of lives. Where are the skeptics for that? Too busy counting the trillions predicted by Baliunas as a cost of phasing out CFCs? Too busy bickering about AC units and barbecues near thermometers?
    What a load of dung!

  • Evan Jones // February 8, 2008 at 5:42 am

    “No, but he has been cherry-picking surface stations, comparing the temperature records of individual “good” stations against “bad” stations in order to argue that urban stations are distorting the temperature record.”

    No.

    1.) He is in the process of observing the entire USHCN net. His volunteers have observed 482 so far. He has stubbornly refused to come to any conclusions until the majority have been observed, and has said the effect could be large or small.

    2.) He is not studying UHI. His project is confined to site violation, and he has said that UHI and site violation are not directly additive, but one can “swamp” the effects of the other.

    3.) The rate of violation has been steady throughout the process (now 40% complete) and he has not been cherrypicking to obtain a skewed percentage of stations in violation.

    4.) His data is complete, clearly presented, and public for all to see. CRN has taken a great interest in his work.

    I would add that using 2001 data to assess UHI effects is behind the times. For one thing, the “Lights=” method is used to determine whether a station is urban, and this method is no longer valid, considering the changing nature of development.

    You seem to be accusing him of cherrypicking and coming to conclusions when he is doing neither.

    I ask you to be fair. There will be time enough for independent review when he is finished. He is very open with his data and methods, in accordance with the scientific method.

  • Evan Jones // February 8, 2008 at 5:54 am

    LaDochy et al. (2007) and McKitrick and Michaels (2007, using corrected methods from 2006) directly contradict the conclusions of WG1.

    They have yet to be refuted so far as I know.

  • fred // February 8, 2008 at 7:14 am

    It gets clearer and clearer what it takes to be called a denialist, and it is fairly easy to qualify.

    All you have to do is doubt one element of the litany. You could, for instance, accept in full the CO2-warming connection, and also accept the IPCC estimates of future warming ranges, sea level rises, and believe that we should limit carbon emissions right away.

    But if you do not believe in decentered PCA and the Hockey Stick, you will be a whited sepulchre, and be considered a denialist. If you believe all of the above, and even believe in the Hockey Stick, but think Jones should have published the names of his stations without loss of time, sometime back in the nineties, and that Thompson should archive and make available data now, then too you will be a denialist.

    Probably if you are sceptical about exactly what that famous polar bear photo was showing, you will be as sounding brass or a tinkling cymbal….

    I do not know what Spencer’s account is, and don’t particularly accept the argument. But I do not dismiss it out of hand until I understand it.

    A classic denialist, then.

    I’ll check out the George C. Marshall Institute, which I had never previously heard of, but doubt it will change my mind on decentered PCA.

    On which I am still waiting for a reference. Please, someone: there must be lots of college textbook descriptions of how to do such a standard procedure. Just one will do. I don’t mean the ordinary sort. I mean those references where it’s described how to do the decentered kind.

    [Response: Before I lift another finger to bother with you:

    Did you view the entire video from Oreskes?

    Do you still maintain there are no denialists?]

  • Deech56 // February 8, 2008 at 1:03 pm

    Timothy Chase, congratulations. Not to be selfish, but I hope this doesn’t mean you will be posting less frequently. Your posts are always good reading.

  • tamino // February 8, 2008 at 2:46 pm

    I have an aversion to allowing denialists to divert attention from real issues, to erroneous non-issues that only serve to make people like Fred obsess about the hockey stick and repeat false claims about Mann et al.’s methodology. One of the reasons we’re not, as a society, actively working to head off the worst of what’s to come, is that their strategy of misdirection has been so successful.

    That said: PCA (principal components analysis) is a very interesting mathematical tool. So I’m definitely going to post about it, both PCA in general, and as used in Mann et al. specifically.

    It’s a nontrivial topic so it may end up being more than one post. I’m on the road for the next 5 days, leaving this afternoon, so it’ll be mid to late next week before I can offer the first installment. Readers should also be aware that from this afternoon until my return I’ll have limited internet access, so there may be delays in moderating comments. Patience is appreciated.

  • Barton Paul Levenson // February 8, 2008 at 4:46 pm

    Evan Jones writes:

    [[His data is complete, clearly presented, and public for all to see. ]]

    You left out “irrelevant to the issue.”

  • Paul Middents // February 8, 2008 at 4:46 pm

    Fred,
    Try this text out.

    http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-95442-4

  • Evan Jones // February 8, 2008 at 5:44 pm

    “You left out “irrelevant to the issue.””

    That is definitely not the opinion of the NOAA/CRN.

  • Heretic // February 8, 2008 at 5:56 pm

    The McKitrick and Michaels paper still deals exclusively with surface temp measurements.

    Their conclusion does not even mention the question of why lower-tropospheric satellite measurements agree well with the surface measurements when, if their thesis were right, they should show significant discrepancies. It poses more questions than it answers.

    The authors, however, don’t profess any interest in following up on these questions.

  • JCH // February 8, 2008 at 6:45 pm

    Is that the political wing of NOAA, or the scientific wing? I rather doubt the scientists at NOAA expect any change in the science. The GW appointees probably do hold delusional expectations.

  • sod // February 8, 2008 at 6:50 pm

    LaDochy, et al, (2007) and McKitrick, Michaels, et al (2007, using corrected methods from 2006) directly contradict the conclusions of WG1.

    LaDochy has written several papers in 2007, but I assume you are referring to the California paper.

    That paper does NOT contradict any IPCC report.
    Even the RURAL stations alone warm at a rate of about 0.1°C per decade.

    http://wattsupwiththat.files.wordpress.com/2007/11/ca_climate_variability_ladochy.pdf

    (sod reports that Germany uses an hourly average, a much superior method.)

    The Deutscher Wetterdienst has been using an hourly mean to calculate the average daily temperature since 2001. Whether this number or a min/max mean is used when calculating German data for international climate research, I don’t know.
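
    (The difference between the two conventions is easy to see with a synthetic day: the (Tmin+Tmax)/2 convention is hostage to the two extremes, while an hourly mean integrates the whole curve. A toy comparison, an idealized sinusoidal day plus a brief afternoon spike:)

        import numpy as np

        hours = np.arange(24)
        # Idealized diurnal cycle: mean 10 C, 5 C amplitude, minimum near 5 am.
        temps = 10.0 + 5.0 * np.sin(2.0 * np.pi * (hours - 11) / 24.0)
        temps[14] += 3.0    # one-hour afternoon spike, e.g. a nearby heat source

        print("(Tmin+Tmax)/2:", round((temps.min() + temps.max()) / 2.0, 2))  # pulled up ~0.8 C
        print("24-hour mean: ", round(temps.mean(), 2))                       # pulled up ~0.1 C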

  • steven mosher // February 8, 2008 at 6:53 pm

    I figured no one would actually read Peterson.

    Peterson believes that the Urban Heat Island is real. Like NASA:

    http://www.sciencedaily.com/releases/2002/06/020619074019.htm

    His concern was that UHI was a potential source of bias in the climate record.

    So he compared rural sites to urban sites, to look for a temperature difference.

    1. The sites he used are not the sites used by GISS or HADCRU. Of the 289 stations, only 63 came from the USHCN network. Simply put, he tested data NASA doesn’t use!

    2. His classification of Rural and Urban is suspect.

    When SteveMC looked into this he found:
    “Of the 63 USHCN stations in the Peterson network, 9 had GISS-lights of 0. However, 3 of the 9 sites with lights=0 were classed by Peterson as urban (Fort Yates ND; Utah Lake Lehi UT; Fort Valley AZ) while 6 were classified as rural.
    Of the 13 Peterson USHCN sites classified as “rural”, the GISS lights were as high as 19. Checking the 48 Peterson USHCN sites classified as “urban”, 15 had GISS lights less than or equal to 19 (including 3 with lights=0 as noted above).”

    I did a spot check of Peterson. I was delighted to find Mineral, California on his list of stations. It’s not used by NASA and is out in the middle of nowhere. Peterson classified it as suburban. For grins, go take a Google Maps look at this tiny village of 142 people. Still, one can’t blame Peterson for this classification. He didn’t do it. He got it from someone else: Gallo.

    3. He limits his conclusions to CONUS. You can’t extrapolate to the world, and you can’t even extrapolate to the USHCN. He used 63 USHCN sites. The rest, like Mineral, California and 225 others, don’t get used by GISS. If Peterson proved anything, he proved that data NOT USED by GISS is good.

    4. The difference he found between urban and rural was .31C. Then he made adjustments. The final figure was .04C for urban minus rural. (He subsequently updated this as the result of a processing error, and the difference was 0C.)

    5. He compared 3 years of monthly data (36 months). Small effects will not be found in small samples. Nice trick. So, he uses data from stations that NASA doesn’t use. Then he looks for a small effect with a small N.

    6. He did not look for long-term trends. The most important thing is the long-term trend and long-term corruption of the record. That is what one wants to look for. Not 3 years of data from a collection of stations that GISS doesn’t use.

    Now, Peterson realizes that his conclusion is at odds with climate science:

    “The research presented here attempts to unravel the
    mystery of how a global temperature time series created
    partly from urban in situ stations could show no contamination
    from urban warming.”

    There are several explanations. The rural sites might be corrupt, or

    “Therefore, if a station is located within
    a park, it would be expected to report cooler temperatures
    than the industrial sections experience. But do
    the urban meteorological observing stations tend to be
    located in parks or gardens? The official National
    Weather Service guidelines for nonairport stations state
    that an observing shelter should be ‘‘no closer that four
    times the height of any obstruction (tree, fence, building,
    etc.)’’ and ‘‘it should be at least 100 feet from any paved
    or concrete surface’’ (Observing Systems Branch 1989).
    If a station meets these guidelines or even if any attempt
    to come close to these guidelines was made, it is clear
    that a station would be far more likely to be located in
    a park cool island than an industrial hot spot.”

    Peterson’s cool parks: UHI is not found because the STATIONS are located in cool parks. And they are located in cool parks because people follow the WMO siting guidelines and keep urban stations away from concrete.

    How does one test this? I suppose you could go look at the stations
    and see if they were properly sited? Hence Anthony Watts.

    What Anthony is finding, however, is that the USHCN (the network used by GISS) does not meet siting guidelines.

    But Peterson didn’t test that network; 225 of the 289 sites he used are OUTSIDE the USHCN.

  • steven mosher // February 8, 2008 at 6:55 pm

    Oops, I meant 226 out of 289.

  • Lee // February 8, 2008 at 8:07 pm

    I just went to the surfacestations.org page for the first time in a few weeks. Misleading as all get-out.
    First, his map of the US, with stations categorized into classes 1-5, has a legend that says “error in °C”, with the errors listed from “<1C” to “>5C”.
    He does not say those are based on a suggestive study; he does not say that they are maximum possible errors, not absolute known errors. In fact, his legend implies that they are absolute, known errors.
    He also does not say that the potential relevant errors are in trends, not absolute temperatures, and that reading high doesn’t mean squat if there hasn’t been a bias introduced.
    He also does not mention that JohnV’s analysis of only the ‘class 1-2′ stations yields results nearly identical to the GISS results, so that the compensating algorithms seem to work very well. He does say that urban trends are corrected. He doesn’t say ANYTHING to offset the impression that the trend record is hopelessly contaminated by that misrepresented “error in °C” in absolute temperature.

    Further down, he still has that presentation of the Orland and Marysville stations, with the clear implication that ‘good’ sites actually show cooling, and ‘bad’ sites are warming only because of site effects. He has been called on this many times, and keeps saying that the selection was arbitrary. Right.

  • Barton Paul Levenson // February 8, 2008 at 9:07 pm

    The NOAA/CRN thinks that surfacestations.org is making a useful contribution? Where did they say that?

  • P. Lewis // February 8, 2008 at 10:10 pm

    For those who don’t know much about PCA and who have time and the inclination to want to arm themselves before Tamino unleashes it on us, there are two useful tutorials (many more really if these don’t take your fancy): Smith (which introduced me to the delights of Scilab) and Shlens. Both are good, and I think one can benefit from both of them, but once you’re in, I think the second is the better one. If you can understand mean, SD and variance, then you can get a handle on covariance easily enough. The rest is basically matrices and linear algebra. ;-)

    Piece of cake. If I can understand it, then so can you.
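
    (In code, the whole recipe those tutorials walk through is a handful of numpy lines, on toy data:)

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 5))        # 100 observations x 5 variables

        Xc = X - X.mean(axis=0)              # 1. center each variable
        C = np.cov(Xc, rowvar=False)         # 2. covariance matrix
        evals, evecs = np.linalg.eigh(C)     # 3. eigenvalues/eigenvectors
        order = np.argsort(evals)[::-1]      # 4. sort by variance explained
        pcs = Xc @ evecs[:, order]           # 5. project onto the components

        print("fraction of variance explained:", np.round(evals[order] / evals.sum(), 3))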

  • henry // February 9, 2008 at 7:21 am

    I keep hearing the argument about Watts’ work that “one picture doesn’t show anything”.

    I agree. If I were to show a picture of the Arctic ice, I’d be asked why I didn’t show another, from a different time, as a comparison.

    So take any of Anthony’s pictures, and compare the site to pictures taken 20 years ago. If you can find one, that is.

    And there’s the point: one picture may not show much, but two can show a trend….

    If pictures had been taken when the sites were installed, or when they were moved, or had equipment upgraded, then we would have had a record of changes in microsite problems.

  • Chris O'Neill // February 9, 2008 at 9:56 am

    fred:

    It gets to be clearer and clearer what it takes to be called a denialist

    Actually it’s patently obvious what a denialist is if you watch Oreskes’ talk. It’s someone, such as Singer or Seitz, who works in the denialism industry that seeks to create false scientific controversy in issues such as acid rain, CFC destruction of ozone, tobacco smoke causing cancer, first directly then second-hand, and now AGW. If you don’t know any denialists before this talk, you will afterwards.

  • EliRabett // February 9, 2008 at 6:22 pm

    Evan Jones wrote:

    1.) All important resources are nearly at an end. Alles ist weg (everything is gone).

    Which is a common, although incorrect, idea. Ores don’t so much go away as exploitable ores become lower grade and the cost of recovery goes up. In the past this has been masked by cheaper energy and work (mechanization, cheap oil or electricity), which has allowed us to exploit more diffuse resources.

    In essence this is Julian Simon’s insight. He was betting on metal ores becoming cheaper as the cost of exploiting lower-grade ores went down. Today the lowest-cost source of iron is old ships (OK, buildings, cars, etc.). Modern shipbreaking consists of driving a ship up on the beach in a third-world nation and setting a bunch of guys with torches loose. Car shredding is somewhat more technical, but your old junker probably took a trip to China.

    If energy becomes more expensive, the only way out is a more efficient separation/concentration/refining technology. It might be interesting to find another Simon to bet with.

  • EliRabett // February 9, 2008 at 6:34 pm

    AtheistAcolyte: the GISS long-term trend is ONLY from the rural stations. It has to be, as they correct the urban stations on the basis of the rural stations. If they also included the urban stations in the long-term trend, it would induce an error similar to that of D’Aleo, in that some of the data would force other data, leading to major autocorrelation issues.
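
    (The shape of that idea in a few lines; emphatically not GISS’s actual two-legged-fit code, just a caricature: fit the urban-minus-rural difference and remove it, so the long-term trend that survives is the rural one.)

        import numpy as np

        def adjust_urban(urban, rural_mean, years):
            # Toy urban adjustment: fit a line to (urban - rural neighbors)
            # and remove it, leaving the rural long-term trend.
            fit = np.polyfit(years, urban - rural_mean, 1)
            return urban - np.polyval(fit, years)

        years = np.arange(1900, 2000)
        rural = 0.006 * (years - 1900)                  # rural neighbors' trend, deg C
        urban = rural + 0.004 * (years - 1900) + 0.3    # extra urban warming plus an offset
        slope = np.polyfit(years, adjust_urban(urban, rural, years), 1)[0]
        print("adjusted urban trend: %.4f C/yr" % slope)   # ~0.006, the rural rate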

  • steven mosher // February 10, 2008 at 1:26 am

    Many folks have asked me to link to my analysis of CRN5 sites. Here is the link, showing the difference between the surveyed CRN5 sites and all others.

    http://www.climateaudit.org/?p=2201#comment-154080

    I’ll update with new stations in a bit.

  • dhogaza // February 10, 2008 at 7:57 am

    Now, Peterson realizes that his conclusion is at odds with climate science:

    “The research presented here attempts to unravel the
    mystery of how a global temperature time series created
    partly from urban in situ stations could show no contamination
    from urban warming.”

    How is this statement “at odds with climate science”?

    UHI is not found because the STATIONS are located in cool parks. And they are located in cool parks because people follow the WMO siting guidelines and keep urban stations away from concrete.

    How does one test this? I suppose you could go look at the stations
    and see if they were properly sited? Hence Anthony Watts.

    What Anthony is finding, however, is that the USHCN (the network used by GISS) does not meet siting guidelines.

    But Peterson didn’t test that network; 225 of the 289 sites he used are OUTSIDE the USHCN.

    Which, of course, doesn’t begin to address the issue of how the NASA people compensate for UHI and other issues with the historical data in the USHCN network.

  • dhogaza // February 10, 2008 at 8:04 am

    Here’s the abstract of the paper Mosher et al. claim disproves the GISS record, blah blah blah.

    With mounting evidence that global warming is taking place, the cause of this warming has come under vigorous scrutiny. Recent studies have led to a debate over what contributes the most to regional temperature changes. We investigated air temperature patterns in California from 1950 to 2000. Statistical analyses were used to test the significance of temperature trends in California subregions in an attempt to clarify the spatial and temporal patterns of the occurrence and intensities of warming. Most regions showed a stronger increase in minimum temperatures than with mean and maximum temperatures. Areas of intensive urbanization showed the largest positive trends, while rural, non-agricultural regions showed the least warming. Strong correlations between temperatures and Pacific sea surface temperatures (SSTs), particularly Pacific Decadal Oscillation (PDO) values, also account for temperature variability throughout the state. The analysis of 331 state weather stations associated a number of factors with temperature trends, including urbanization, population, Pacific oceanic conditions and elevation. Using climatic division mean temperature trends, the state had an average warming of 0.99°C (1.79°F) over the 1950–2000 period, or 0.20°C (0.36°F) per decade. Southern California had the highest rates of warming, while the NE Interior Basins division experienced cooling. Large urban sites showed rates over twice those for the state for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures. In comparison, irrigated cropland sites warmed about 0.13°C per decade annually, but near 0.40°C for summer and fall minima. Offshore Pacific SSTs warmed 0.09°C per decade for the study period.

    I’d like Steven Mosher to explain how the urban heat island effect has caused offshore Pacific SSTs to warm 0.09°C per decade for the study period.

    Mosher’s been erecting and knocking down a strawman with statements like “Peterson believes that the Urban Heat Island is real. Like NASA”, i.e. acting as though people here don’t believe the UHI effect exists.

    That strawman doesn’t need to be knocked down, and the fact that Mosher feels compelled to argue against it is telling.

  • dhogaza // February 10, 2008 at 8:29 am

    I’ve skimmed Peterson’s paper. His main thesis appears to be that UHI can’t be corrected for (he claims evidence in the form of one paper in 2002 that doing so results in a spurious increase in trend), that changes in agricultural practices also artificially boost the trend for stations in such areas, and that the only surface record that makes sense is the small set of ultra rural stations not subject to such changes.

    There’s a problem with this approach, though, in that those stations in CA will largely consist of those in the mountains and the Mojave and Great Basin deserts. In particular, he cites the NE portion of the state as showing no change or a slight cooling trend. That’s the Great Basin, whose climate and weather are relatively minimally affected by the Pacific warming he cites elsewhere, and it is hardly representative of the state.

    The main conclusion I’d draw from the paper is that there’s a great deal of variation in how warming impacts the different climatic zones in California. The dry desert, minimally affected by Pacific moist air masses, responds differently from coastal areas like southern California. He seems to believe that the latter warming is due entirely to urbanization (land use changes).

    Nearly all his cites seem to be from Christy and friends, except when he cites Hansen in order to claim that he’s wrong.

    So Mosher’s probably correct in his belief that the guy’s more skeptical of AGW than many scientists, but I don’t see how he can be co-opted into the science-denialist camp. Nowhere does he claim that global warming is false.

  • dhogaza // February 10, 2008 at 9:22 am

    Actually that was LaDochy et al., not Peterson, I was reading; the sequence of posts and links confused me, since Mosher talks about both.

    So I’ve skimmed Peterson, whose conclusion includes the following:

    Once biases caused by differences in elevation, latitude, time of observation, instrumentation, and nonstandard siting were adjusted out of the data, contrary to generally accepted wisdom, no statistically significant impact of urbanization over the contiguous United States could be found in the existing in situ temperature observation network. […] adjustments to account for urbanization in CONUS in situ time series are not appropriate.

    So, Mr. Mosher, what Peterson is saying is that those of you who’ve been arguing for the UHI being a significant bias in the surface temperature record for the lower 48 are WRONG, if his analysis of a subset of the available data is right.

    You say that “He, like NASA, believes the UHI to be real” without mentioning that his research supports the position that the UHI is real but its effect is being EXAGGERATED.

    Pfft.

    So back to this statement…

    Now, Peterson realizes that his conclusion is at odds with climate science:

    “The research presented here attempts to unravel the
    mystery of how a global temperature time series created
    partly from urban in situ stations could show no contamination
    from urban warming.”

    It’s only “at odds with climate science theory” if you believe that “climate science theory” hinges upon past exaggeration of the UHI (which folks like you have been claiming is under-, not overestimated).

    And since the LaDochy paper seems so popular with y’all, how do you reconcile his in essence tossing out southern California, due to his belief that increases there are due to urbanization, with Peterson’s claim that his analysis implies that UHI bias may be minimal?

    Seems to be a bit of a conflict there …

    Just re-skimmed some of Mosher’s posts here … he seems totally unaware that Peterson, if anything, would support the notion that NASA’s UHI adjustments are likely to introduce a cooling bias to the lower-48 record.

    Is this what Europeans mean by the term “own goal”?

  • Dodo // February 10, 2008 at 9:58 am

    Eli, just for the record, could you repeat the GISS definition of “rural”?

  • Raven // February 10, 2008 at 10:40 am

    I thought I would take a stab at a couple of the arguments used to defend the GISS dataset.

    1) A bias must not exist because the trend matches the satellite records.

    Satellites do not measure temperature - they measure radiation. The algorithms used to convert what the satellites measure to temperature are complex and the surface and balloon records were some of the inputs used to develop and validate these algorithms. If the satellite differs too much from the surface measurements then the satellite record is assumed to be wrong and adjusted accordingly.

    In other words, any warming bias in the ground records has likely influenced the satellite records as well. This means, you cannot use the satellite records as evidence that the ground records have no bias.

    2) The processed GISS data does not show any difference between urban and rural.

    This does not show that the UHI effect has been removed. What it shows is the data has been smoothed. Hypothetically speaking, the same results could have been achieved by adding warming to the rural stations. The work done by Anthony has demonstrated that microsite bias exists everywhere - even in rural settings. This means that an algorithm which assumes that any ‘rural’ station is unbiased and any ‘urban’ station is biased is not adequate.

    More importantly, if there are known microsite problems in the relatively reliable US network, it is not possible to credibly argue that these problems do not exist in places like China and Russia, which have a much bigger influence on the global temperature record.

    It is impossible to know for certain whether the surface record is biased. The statistical tests done by Michaels and McKitrick demonstrate that there is probably bias but do not prove it. This question can only be resolved by an exhaustive survey of the measurement sites in the world and a re-analysis of the data. I realize this is not likely to happen, because the agencies with the funds to do this would be incredibly embarrassed if a warming bias were found to exist.

    Arguing that the surface record is not biased is a bit like arguing that OJ is innocent because that is what a jury decided after looking at the evidence. Yet a lot of people still believe OJ is guilty and this is not an unreasonable position to have given the evidence that is available. The data is being used to justify some enormous public policy changes that will create a lot of hardship for a lot of people. For that reason, the onus is on the people using the data to prove beyond any reasonable doubt that it is an accurate reflection of reality. If the keepers of the data are that certain that they are right then it should be easy to provide that level of proof. If the keepers are not willing to provide that level of proof then they should stop using the data to influence policy makers.

  • luminous beauty // February 10, 2008 at 4:18 pm

    Whatever the arguments about the accuracy of corrections to the surface record due to UHI, they are entirely made moot by the sheer fact that most global warming is happening outside of urbanized areas. In particular, the Arctic, a phenomenon unexplainable except by the enhanced greenhouse effect.

    This is a fact that Raven, Mosher, et alia, are unwilling to address, since it belies any meaningful reason for their continued flogging of this dead horse.

  • dhogaza // February 10, 2008 at 6:00 pm

    Arguing that the surface record is not biased is a bit like arguing that OJ is innocent because that is what a jury decided after looking at the evidence.

    This is a rather astonishing statement. Not really sure how to respond, other than the obvious point that science does not work like a criminal trial.

    If the keepers of the data are that certain that they are right then it should be easy to provide that level of proof.

    They’ve done so, and since you insist on the legal analogy, they’ve done so to a very, very large jury of their peers. You’re not a member of the jury pool, much less the jury, IN REGARD TO THE SCIENCE. If you choose to exercise your political right to ignore the verdict because of your basic inability to understand the science, you get to do that.

    But, of course, the entire analogy sucks.

    Little oddities about your position…

    Satellites do not measure temperature - they measure radiation. The algorithms used to convert what the satellites measure to temperature are complex and the surface and balloon records were some of the inputs used to develop and validate these algorithms. If the satellite differs too much from the surface measurements then the satellite record is assumed to be wrong and adjusted accordingly.

    Back when Christy and Spencer’s first temperature constructions from satellite data were being touted as “putting a wooden stake through the myth of global warming”, strangely, we never heard denialists make the claim you’re making above.

    And Christy’s errors weren’t corrected because of the surface or radiosonde data, nor model outputs. The errors were REAL ERRORS, in algebra, among other things.

    Multiplying by “minus constant” rather than “positive constant” isn’t the kind of error that is corrected “to make it fit the surface data” or whatever myth about the satellite data you seem to believe in.

    It is corrected because basic 9th grade algebra mistakes aren’t controversial.

    If the keepers of the data are that certain that they are right then it should be easy to provide that level of proof. If the keepers are not willing to provide that level of proof then they should stop using the data to influence policy makers.

    Of course, the level of proof is set at a level which is impossible to meet. How do you meet the standard set by that one meteorologist on Inhofe’s “list of 400 scientists” who says his major complaint is that “Climate science doesn’t take God into account”?

    Hmmm?

    Any suggestions?

    And Roy Spencer, creationist, denialist to the extent that he says that we don’t know if we’re responsible for the growing CO2 concentration? That despite the isotope signature that matches fossil fuels, despite our ability to estimate how much CO2 we pour into the atmosphere, that it’s not OUR CO2 that’s increasing?

    How do you satisfy that level of whackery?

    Or the level of whackalooney displayed by so many denialists here and elsewhere on the web?

    It’s impossible.

    The bar that needs to be met is the one set by science, when it comes to the science, and that bar has been met long ago. Science is moving on.

    That’s what’s so funny. The REAL world is moving on and planning. Conservation biologists, planners involved in water resources, ag types … denialists like yourself are tilting at windmills in the same way that creationists tilt at biology. And with the same effectiveness in the real world. Political results? Yes, in both cases. Any effect on the real world of science, outside as well as inside the field? Of course not, in both cases.

  • Hank Roberts // February 10, 2008 at 8:00 pm

    “Although species have responded to climatic changes throughout their evolutionary history, a primary concern for wild species and their ecosystems is this rapid rate of change. We gathered information on species and global warming from 143 studies for our meta-analyses. These analyses reveal a consistent temperature-related shift, or ‘fingerprint’, in species ranging from molluscs to mammals and from grasses to trees. Indeed, more than 80% of the species that show changes are shifting in the direction expected on the basis of known physiological constraints of species. Consequently, the balance of evidence from these studies strongly suggests that a significant impact of global warming is already discernible in animal and plant populations.”

    Root, T.L., J.T. Price, K.R. Hall, S.H. Schneider, C. Rosenzweig, and J.A. Pounds. 2003. “‘Fingerprints’ of global warming on animals and plants”. Nature.

    Full text downloadable from publication list on this page:
    http://terryroot.stanford.edu/

  • Raven // February 10, 2008 at 9:09 pm

    I forgot about “the Arctic temps are increasing, therefore the UHI effect was removed” argument.

    This does not really deserve to be called an argument, because no one is arguing that there was no warming - only that the amount of warming is exaggerated. More importantly, the polar regions form only a small part of the global average, which means the global averages could still be badly biased even if one assumes that the contribution to the average from the polar regions is unbiased.

    A simple calculation should demonstrate this point:

    The average of 1 and 6 is 3.5. If 1 represents a correct measurement but 6 represents a measurement with a warming bias of 2 (so a true value of 4, and a true average of 2.5), then the average still has a bias of 1.
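
    In Python, the same arithmetic reads as follows (a minimal sketch using only the made-up numbers above):

    # Two hypothetical stations: true values 1 and 4, but the second
    # reports 6 because it carries a +2 warming bias.
    true_temps = [1.0, 4.0]
    biases = [0.0, 2.0]
    measured = [t + b for t, b in zip(true_temps, biases)]

    true_avg = sum(true_temps) / len(true_temps)     # 2.5
    measured_avg = sum(measured) / len(measured)     # 3.5
    print(measured_avg - true_avg)                   # 1.0, the mean of the biases

    The bias in the average is just the (weighted) mean of the station biases, so an unbiased polar contribution dilutes, but cannot remove, a bias elsewhere in the network.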

  • Lazar // February 10, 2008 at 9:42 pm

    P. Lewis wrote:

    there are two useful tutorials (many more really if these don’t take your fancy): Smith (which introduced me to the delights of Scilab) and Shlens. Both are good, and I think one can benefit from both of them, but once you’re in, I think the second is the better one.

    Yes, both are excellent. I learnt how to do PCA from Smith; the 2D case is easy to understand, and one can even calculate the eigenvectors and eigenvalues by hand with relative ease.

    Fred wrote:

    OK, if decentered PCA is a legitimate statistical technique, give me a proper reference in a stats textbook or monograph saying so, and I will believe it is. Hey, I’ll use it as well. Just one authoritative account of doing PCA which shows it as a legitimate method. Contrary to Wegman.

    Fred… if you haven’t already, I recommend following the tutorials and doing PCA, then reading Wahl & Amman 06 in full, not just Wegman’s response to it. Don’t worry whether decentered PCA is ‘referenced in a stats textbook’; that doesn’t make it “legitimate”. Instead you might ask (the sketch below shows the mechanical difference the centering choice makes):
    What is it doing to the data?
    How does this mesh with MBH’s purpose for doing PCA?
    Are those purposes valid?
    Does the method achieve them?
    What logic are M&M’s criticisms based on?
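
    A toy numerical illustration of the centering question (a sketch only: “decentering” is reduced here to skipping mean removal entirely, which is cruder than MBH’s calibration-period centering, but it shows how the choice of reference mean changes what PC1 captures):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(10.0, 1.0, 500)                 # a variable with a large mean offset
    y = 20.0 - 0.5 * x + rng.normal(0.0, 0.5, 500)
    data = np.column_stack([x, y])

    def leading_eigenvector(mat, center):
        work = mat - mat.mean(axis=0) if center else mat
        second_moment = work.T @ work / (len(work) - 1)
        vals, vecs = np.linalg.eigh(second_moment)
        return vecs[:, np.argmax(vals)]

    print("centered PC1:  ", leading_eigenvector(data, center=True))
    print("decentered PC1:", leading_eigenvector(data, center=False))
    # Without centering, the leading "component" mostly points at the mean
    # offset rather than at the direction of maximum variance.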

    As for Dr. Wegman: he recently signed onto a very silly document regarding AGW, which is incompetence; like M&M, his logic is shot to pieces; and elsewhere, in order to cast doubt on GCMs, he misrepresents (selectively quotes) a certain passage in Wahl & Amman 06. I don’t trust him at all. Sorry.

    Tamino: I look forward to your analysis! I’ve been waiting and hoping for this a long time.

  • sod // February 10, 2008 at 10:18 pm

    (sod reports that Germany uses an hourly average, a much superior method.)

    in the CA thread that Steven Mosher linked to, there is a nice graph by Bob Koss (#262), comparing the two methods!

    Here are two stations with 24 readings per day to demonstrate the difference between methods.
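
    the gap between the two methods is easy to reproduce with a toy diurnal cycle (made-up numbers, not real station data):

    import numpy as np

    hours = np.arange(24)
    # an asymmetric diurnal cycle: a sharp afternoon peak over a long cool night
    temps = 10 + 8 * np.exp(-((hours - 15) ** 2) / 18.0)

    minmax_mean = (temps.min() + temps.max()) / 2    # ~14.0
    hourly_mean = temps.mean()                       # ~12.5
    print(minmax_mean, hourly_mean)
    # the more asymmetric the cycle, the bigger the gap - which is why a
    # change of method can put a step into a single station's record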

  • steven mosher // February 10, 2008 at 10:27 pm

    Every time people complain about my argument to eliminate bad stations, they forget one thing.

    Stations are eliminated every year

    http://www.climateaudit.org/?p=2711

    There is a mystery of sorts in station deletion.
    As stations have been deleted, those stations remaining have poor temporal coverage. More missing months than before. More missing data than in the early 1900s!

    I don’t want this to be an AGW debate. Tamino is wicked smart. Perhaps he can suggest a methodology for filling in missing data in a temperature time series? That might be an interesting topic that wouldn’t result in flamage.
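
    One candidate, as a sketch only (the simple offset-from-a-neighbor idea; illustrative, not anyone’s production method):

    import numpy as np

    # Fill gaps in a station series from a well-correlated neighbor, after
    # matching their means over the overlapping (non-missing) period.
    station = np.array([5.0, 6.1, np.nan, 7.9, 9.2])
    neighbor = np.array([4.8, 6.0, 7.2, 8.1, 9.0])

    overlap = ~np.isnan(station)
    offset = (station[overlap] - neighbor[overlap]).mean()
    filled = np.where(np.isnan(station), neighbor + offset, station)
    print(filled)    # the gap is replaced by 7.2 + offset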

  • steven mosher // February 10, 2008 at 10:35 pm

    Dhog.

    Funny, then, that Hansen does in fact adjust Peterson’s data for UHI effects. If Peterson says no adjustment is necessary, then why does Hansen adjust? Odd argument, that. Perhaps Peterson and Hansen should talk.

    Peterson selected sites (289) such that only 25% of them are actually used by climate science (GISS).

    His study shows nothing about the sites used by climate science. He can sell his cherries at some other fruit stand.

  • steven mosher // February 10, 2008 at 10:50 pm

    Dhog.

    The NASA UHI compensation has nothing to do with MICROSITE issues. Since I have trudged through Hansen’s code I can tell you what it does (kinda sorta - it’s not really documented).

    1. A site is classified as RURAL or URBAN using population data (not up to date) and nightlights (1995). Not very accurate, as Hansen has noted recently. Want to see an URBAN site according to nightlights?

    http://gallery.surfacestations.org/main.php?g2_itemId=569

    http://gallery.surfacestations.org/main.php?g2_itemId=903

    So, before we even discuss Hansen’s UHI adjustment (it’s pretty simple) you have to discuss the accuracy of the Urban/Rural categorization.
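
    In rough pseudocode, that first step amounts to something like this (the thresholds are illustrative placeholders, NOT Hansen’s actual values):

    POP_LIMIT = 10_000   # hypothetical "small town" population cutoff
    DARK_LIMIT = 10      # hypothetical nightlights brightness cutoff

    def classify_site(population, nightlight_brightness):
        # a stale population figure or a coarse nightlights pixel can flip
        # this label, and the adjustment that follows depends on it
        if population < POP_LIMIT and nightlight_brightness < DARK_LIMIT:
            return "RURAL"
        return "URBAN"

    print(classify_site(7_500, 3.0))   # RURAL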

    Having been to Orland, I would disagree with the satellite. But never mind. You see the Orland site. Rural or urban? And then ask yourself: why would nightlights show this site to be illuminated by lights? What is the extent of a nightlights pixel?

    Let’s do those questions first. What makes Orland an URBAN site?

    Use Oke’s criteria.

  • sod // February 10, 2008 at 11:06 pm

    i forgot the link to CA:

    http://www.climateaudit.org/?p=2201#comment-169136

    I just went to the surfacestations.org page for the first time in a few weeks. Misleading as all get-out.
    First, his map of the US, with stations categorized into classes 1-5, has a legend that says “error in °C” with the errors listed from “<1C” to “>=5C”.
    He does not say those are based on a suggestive study; he does not say that they are maximum possible errors, not absolute known errors. In fact, his legend implies that they are absolute, known errors.
    He also does not say that the potentially relevant errors are in trends, not absolute temperatures, and that reading high doesn’t mean squat if there hasn’t been a bias introduced.
    He also does not mention that JohnV’s analysis of only the ‘class 1-2’ stations yields results nearly identical to the GISS results, so that the compensating algorithms seem to work very well. He does say that urban trends are corrected. He doesn’t say ANYTHING to offset the impression that the trend record is hopelessly contaminated by that misrepresented “error in °C” in absolute temperature.

    very nice sum up.

  • Evan Jones // February 12, 2008 at 5:13 am

    Yes, very nice. But wrong.

    “First, his map of the US, with stations categorized into classes 1-5, has a legend that says “error in °C” with the errors listed from “<1C” to “>=5C”.” You are saying it does not mean what it says. You are saying it says “UP TO >=2C”. In other words, “>=2C” equals “up to 2C”.

    This, in the absence of any CRN qualification on the matter, would seem to be scientific newspeak.

    Besides, when does a heat sink have the “maximum” effect, anyway? At T-Max and especially at T-Min. Obviously. Q.E.D.

    And how did (and do) they calculate the temps? As a function of T-Max and T-Min, that’s how.

    You don’t agree? Fine. Disprove it.

    Set up a well-sited net that would measure the temperatures correctly and compare these results with the existing net. Cheap, easy, quick. And we know. Let the chips fall where they may.

    Any objection to this would seem to me to be profoundly unliberal.

  • Evan Jones // February 12, 2008 at 5:16 am

    What in the world just happened to my above post? It got utterly mangled.

    Did I fall afoul using greater-than less-than signs with everything in between being eliminated? Could this be fixed?

  • dhogaza // February 12, 2008 at 3:36 pm

    Did I fall afoul using greater-than less-than signs with everything in between being eliminated? Could this be fixed?

    You can’t learn HTML but you think you’re going to overturn the work of thousands of climate scientists?

    Funny, then, that Hansen does in fact adjust Peterson’s data for UHI effects. If Peterson says no adjustment is necessary, then why does Hansen adjust?

    Because it would appear that perhaps they disagree on how the data should be handled.

    Regardless, Peterson seems to make it clear that those of you denialists who scream “UHI effect! UHI effect!” have been engaged in a chicken little attack on the data. If he’s right.

    So let’s see … y’all have attacked the GISS record in the past because it doesn’t properly account for the UHI effect. Now you’re going to attack it for actually doing what denialists insist must be done, but you’ll claim that doing so inflates the surface temp trends. Yet Peterson says you can get nearly identical trends ignoring potential UHI, while you tout the other paper which suggests throwing them all out because Great Basin and Mojave records show less warming, and you like that better.

    The NASA UHI compensation has nothing to do with MICROSITE issues.

    You forgot the word “directly”. And you forgot to mention that the only microsite issues that matter are those that vary significantly over time, because if they don’t, TREND data will be fine.

    Having been to Orland, I would disagree with the satellite. But never mind. You see the Orland site. Rural or urban? And then ask yourself: why would nightlights show this site to be illuminated by lights? What is the extent of a nightlights pixel?

    Let’s do those questions first. What makes Orland an URBAN site?

    So Hansen’s method isn’t perfect. Yet, endless efforts to derail it fail other than “photo evidence shows that it must be wrong even though analysis fails to do so, BECAUSE WE DON’T LIKE THE ANSWER”.

    Likewise, satellite … “having been there, I’d say I don’t trust the satellite”. Whereas when the original UAH analysis came out, the denialists were screaming “the world is cooling, throw out the surface temp record”.

    You can’t just pick and choose the source you think is “trustworthy” based on your personal prejudice regarding the result without expecting to be called on it.

    It’s just another reason why science ignores the denialist peanut gallery.

  • Steve Bloom // February 12, 2008 at 4:12 pm

    Evan, to repeat the excruciatingly obvious one more time, the CRN standards are for the… wait for it… CRN, i.e. not the old network. They can’t be used to characterize data quality. They are a rule-of-thumb precautionary principle for new sites.

    As I’ve mentioned before, the NOAA folks do appreciate the photo project. It makes it possible to check for a possible cause if something anomalous has been detected in the data. If no such thing is detected in the data, though, there’s no point in looking.

    BTW, that was a fascinating exposition just now over at Watts Up With That on the Antarctic volcanoes. Gee, why do you suppose Anthony was so interested? *snork*

  • Evan Jones // February 12, 2008 at 4:34 pm

    I’ll try again. You can delete the previous two posts if you wish.

    “First, his map of the US, with stations categorized into classes 1-5, has a legend that says “error in °C” with the errors listed from “<1C” to “>=5C”.
    He does not say those are based on a suggestive study, he does not say that they are maximum possible errors, not absolute known errors. In fact, his legend implies that they are absolute, known errors.”

    Well said, perhaps, but just plain wrong.

    I have been carefully through the CRN handbook. Nowhere does it say that these are “maximum possible errors”. It also doesn’t say “.5C”; it says “equal-to-or-greater-than 5C” (no decimal point).

    (I can’t use the greater-than or less-than signs or I will cut out large portions of text. So I will have to spell it out. Sorry.)

    You seem to be trying to say that greater-than-or-equal-to 2C does not mean that. It actually means less-than-or-greater-than-or-equal-to-at-most 2C. But that is not what it says.

    It does say that these are estimated values, but it does NOT say they are maximum values. So if you like, you could say “c. “. Fine.

    In fact, Anthony Watts has used a direct cut-and-paste from the CRN handbook, which bears the NOAA/NCDC logo and is signed off on (signatures present).

    It spells it out and says that a new network needs to be set up because of siting problems and violations with the current system, noted by public and private concerns alike.

    So it is quite obvious CRN considers the problem of site violations to be all too real. In fact, that is what the CRN, according to its own handbook, is all about. (It therefore comes as no surprise that CRN has requested the work of Anthony Watts.)

    And I have also heard that satellite records are modified to conform with surface records, so it would be a classic fallacy of the pseudo-proof to use the one to confirm the other.

  • Lee // February 12, 2008 at 4:38 pm

    Evan says:

    “Set up a well-sited net that would measure the temperatures correctly and compare these results with the existing net. Cheap, easy, quick. And we know. Let the chips fall where they may.”

    First, that is being done with the CRN.
    Second, JohnV’s analysis using only the ‘best’ of the existing surface stations gives results almost identical to the full GISS analysis.
    Third, the satellite analysis during the period of overlap is nearly identical to the surface station analysis - this using an independent methodology and independent analysis with greater effective coverage.
    And fourth, there is a huge amount of qualitative data that shows warming. Some of that data is also semi-quantitative (degree-days of chilling for tree bloom, for example, or spring advance of greening date) and is in rough agreement with the surface stations.

    The chips have fallen.

    And yes, I’m saying that Anthony Watts’ usage of those station classifications, and the potential resulting errors, is absurd and has been so from the beginning. He is taking an analysis of siting issues that may (not do - may) cause a station to record higher temperatures than the surrounding areas - relevant to daily weather records as needed by farmers and people deciding what to wear today - states with his legend that this means the station is ‘hot’, and then implies that this creates an error ***in the trend*** of that magnitude. His cherry-picked station examples support this implication.
    He further implies that those implied trend errors contaminate the surface analysis, without mentioning that the analyses look for and remove such spurious warming, and that JohnV’s analysis USING WATTS’ OWN DATA confirms that the inhomogeneity corrections are doing a damn good job of correcting for spurious warming.
    And people are in fact assuming that. I’ve seen people arguing that if station so-and-so is “off” by 5 C, then it drives the trend up when averaged, blah, blah, blah - and explicitly pointing at Watts’ site as the basis for that argument.

  • Steve Bloom // February 12, 2008 at 11:12 pm

    Evan, you say that nowhere does the official NOAA CRN document say that those are “maximum possible errors.” You’re absolutely right, since placing a hard upper or lower limit on them (other than zero) would mean that they are something other than seat-of-the-pants estimates. Here’s how NOAA describes them (last sentence on page 5): “The errors for the different classes are estimated values.” “Estimated” as it is used here is distinguished from “calculated” or “measured.” As I noted in the other thread, NOAA took this approach because they did a study trying to quantify micro-site effects and found that it’s not possible. Now, is there something you still don’t understand about this?

  • Evan Jones // February 13, 2008 at 12:22 am

    “that is being done with the CRN”

    Does this mean you would accept the CRN results if they were in conflict with the USHCN stations? (After all, you have said the chips have already fallen.)

    “the satellite analysis during the period of overlap is nearly identical to the surface station analysis - this using an independent methodology and independent analysis with greater effective coverage.”

    But surely if satellite measurement is adjusted to match surface temperature readings, that renders the comparison moot.

    Note the precipitous drop this January. Could this cooling not be the result of the heat sink effect “undoing” itself, exaggerating the cooling on the way down as it exaggerated the warming on the way up? (Steve Mosher has noted that CRN5 sites seem more volatile.)

    “And yes, I’m saying that Anthony Watts’ usage of those station classifications, and the potential resulting errors, is absurd and has been so from the beginning.”

    But isn’t that the same as saying CRN itself is absurd and has been so since the beginning?

    “He is taking an analysis of siting issues that may (not do - may) cause a station to record higher temperatures than the surrounding areas - relevant to daily weather records as needed by farmers and people deciding what to wear today ”

    So you are saying siting does not matter? CRN’s primary concern is siting.

    “He further implies that those implied trend errors contaminate the surface analysis, without mentioning that the analyses look for and remove such spurious warming, ”

    Do they? I thought the adjustment was for UHI, not site violations.

    “and that JohnV’s analysis USING WATTS’ OWN DATA confirms that the inhomogeneity corrections are doing a damn good job of correcting for spurious warming.”

    Wasn’t there a “western bias” pointed out in his analysis? And wasn’t the western US where most of the actual temperature increase occurred?

    And if a 300-station set was considered an insufficient sample, then wouldn’t 13% (4% CRN1, 9% CRN2) of 300 be that much more of an insufficient sample? It seems very much as if some of us are trying simply to brush off site violations before the data has been collected and evaluated.

    “And people are in fact assuming that. I’ve seen people arguing that if station so-and-so is “off” by 5 C, then it drives the trend up when averaged, blah, blah, blah - and explicitly pointing at Watts’ site as the basis for that argument.”

    The fact that 14% of stations measured so far are CRN5 and 56% CRN4, most of them not masked by UHI, is blah, blah, blah to you?

    Over 6 in 7 are out of compliance by an “estimated” 1C or more. Nowhere in the CRN handbook does it say “maximum” or “up to” or “sometimes”.

    The CRN is so concerned about siting issues that they want to set up an entirely new network. You do not share the concerns of the CRN?

    Okay, then. It is clear that it is not you who will do the checking. That will be left up to others. It will be checked. Do you object?

  • dhogaza // February 13, 2008 at 12:24 am

    I suggest leaving Evan to rant to himself rather than respond. The difference between a measurement and a trend has been explained to him several times here, and I would guess many times elsewhere.

    Either he is incapable of understanding, or he chooses not to understand.

  • Evan Jones // February 13, 2008 at 12:28 am

    “Gee, why do you suppose Anthony was so interested? *snork*”

    Well, offhand, I would hazard a guess that it is much the same reason for your complete lack of interest.

    And why would CRN siting issues be relevant for them but not the USHCN?

  • Evan Jones // February 13, 2008 at 2:56 am

    “I suggest leaving Evan to rant to himself rather than respond. ”

    Well, judging by the quality of that response . . .

    “The difference between a measurement and a trend has been explained to him several times here, and I would guess many times elsewhere.”

    You mean I failed to mention the trend exaggeration of a heat sink (as opposed to the direct offset effect of waste heat)?

    Or that I failed to mention the sharp increase in percentage and severity of site violations since 1980, and what effect this had on the trend?

    And I thought those were two of my main points.

    Well, if I left all that out, maybe you’d better explain trends to me all over again (but type real slow this time).

  • Evan Jones // February 13, 2008 at 3:02 am

    “As I noted in the other thread, NOAA took this approach because they did a study trying to quantify micro-site effects and found that it’s not possible. Now, is there something you still don’t understand about this?”

    Estimated means, well, estimated. It does not mean “maximum”. I can only assume that if they had meant “maximum” they would have said that as opposed to “estimated”.

    What does seem pretty clear to me is what has appeared pretty clear to the CRN. Namely that the USHCN has such serious problems with siting that it justifies setting up an entirely new network. Now, is there something you still don’t understand about that?

  • Lee // February 13, 2008 at 7:09 am

    Was that Duane Gish I just saw galloping through here?

  • sod // February 13, 2008 at 2:12 pm

    Estimated means, well, estimated. It does not mean “maximum”. I can only assume that if they had meant “maximum” they would have said that as opposed to “estimated”.

    “estimated” means that they do NOT know what the error at the station will be. but a type 5 test station showed such an error. whether a REAL station, under completely different conditions, will show it is unclear.

    “potential” error means that such a station MIGHT have so high an error on some days. every person with the slightest understanding of temperature and climate will immediately understand that the chosen notation error>=5°C makes sense ONLY with a potential error. 99.9% of the readers of the NOAA manual will understand this at once.
    YOU (and others) have demonstrated that a significant part of the readers of Watts’ site do NOT.

  • Barton Paul Levenson // February 13, 2008 at 2:50 pm

    Evan, you keep saying the satellite temperature measurements are somehow adjusted to match the ground temperature measurements. Please cite a source.

  • George // February 15, 2008 at 3:20 am

    steven mosher // February 13, 2008 at 6:50 pm said:

    “Heretic, I did not half accuse Hansen of fraud. I half accused Mann. In the end, after looking at the history I think I said he was willfully ignorant. He made a mistake, it was pointed out, and he persists.”

    Mosher most certainly did not “half”-accuse Mann of fraud. He accused him!

    The following is a comment that Mosher made about the Hockey stick (ie, Michael Mann et al) at Climate Audit:

    “I imagine the people who exposed the piltdown man were anti evolutionists?

    “When you see the hockey stick, think Piltdown man. The fraud didn’t make evolution a false theory, but it did lead some down the wrong path for some time.”

    Piltdown man was not a case of “willful ignorance”. It was fraud, pure and simple — one of the biggest scientific frauds ever!

    Does Mosher actually think everyone here is stupid enough to buy his idiotic claim of “half-accusing” Mann of fraud?

    It’s bad enough that Mosher accuses someone of fraud without providing substantive evidence, but it’s absolutely cowardly (and more than a little pathetic) that he now denies it.

    Note: Steve McIntyre removed the offending reference to piltdown man from CA (but only after I pointed it out in another thread on this blog)

  • John Tofflemire // February 15, 2008 at 9:48 am

    dhogaza,

    You stated that:

    “This trivially true statement is of little interest given that “not understood at all” is not an accurate description of our knowledge of La Niña/El Niño. There’s a lot we don’t understand about the phenomena, in particular how to predict when they’ll occur, but that doesn’t mean we know nothing at all. You’re making the common mistake of assuming that since we don’t know everything, we don’t know anything.”

    In response to my previous post that:

    “Things that are frequently “known” may in fact not be understood at all.”

    My point is that the terms “La Niña” and “El Niño” are simply labels for an observed set of phenomena that appear to be the same but which in fact could result from a larger set of causes producing what appear to be relatively similar outcomes. In that sense, we may think we understand these phenomena when in fact we only understand them in a very approximate way. A common example of this is the “common cold” which has similar recognizable symptoms (fever, coughing, etc.) but which in fact can be produced by a multitude of different viruses.

    The proof of the pudding, so to speak, is whether we can predict the emergence, length and intensity of these phenomena with a reasonable degree of accuracy. In fact we cannot and, since we cannot, we don’t really understand them. We only understand them to an approximation. It doesn’t mean that we don’t know anything, but it does mean that, at any point in time, we know far less than we think, and understanding more is what science is all about.

  • luminous beauty // February 15, 2008 at 7:34 pm

    “The proof of the pudding, so to speak, is whether we can predict the emergence, length and intensity of these phenomena with a reasonable degree of accuracy. In fact we cannot and, since we cannot, we don’t really understand them.”

    Except we can and do predict the emergence, length and intensity of these phenomena with a reasonable degree of accuracy.

    http://www.ccb.ucar.edu/ijas/ijasno2/cane.html

    http://www.cpc.noaa.gov/products/precip/CWlink/MJO/enso.shtml

    At this point in time, John knows far less than what is known in the field for which he asserts definitive knowledge.

    Can John understand more than what he thinks he knows?

    One would hope.

  • Evan Jones // February 15, 2008 at 7:41 pm

    PBL: Replied. Reply vanished. Not sure why.

  • Evan Jones // February 15, 2008 at 7:49 pm

    sod:

    What I understand is that 99.9% of those who read the CRN handbook know damn well NOT to site a station in such a way that the “estimated” violations are greater than the delta that they are trying to measure in the first place.

  • fred // February 15, 2008 at 9:58 pm

    Look, I think we have a smoking gun in my argument on quality.

    Go over to CA and look at Watts’ post on Lampasas, Texas. Take a look at the temp charts. Take a look at the one where they have made adjustments, and with what success. Now look in the mirror and try saying to yourself: you don’t need to meet your own site guidelines to have a quality network; quality is something you can inspect in and rework in.

    Here is a case where the old well-known story applies. You cannot inspect in quality. You cannot rework in quality. The result is junk. The only way you get quality is by adhering to your standards rigorously.

    If they change their standard so this kind of thing is OK, they are idiots. If they have this kind of station while having the standards they do, they are idiots. If anyone says we should not throw out data from these kinds of stations, well, they may not be idiots, they may be very smart, but they are dyed-in-the-wool denialists.

  • Hank Roberts // February 16, 2008 at 1:24 am

    > we know far less than we think

    That’s the reason for citing sources and looking for new references.

    Here’s another case where modeling was ahead of the data collection, and now the ARGO system is for the first time giving a large database of ocean temperature at various depths.

    http://www.bom.gov.au/bmrc/clfor/cfstaff/wld/sub_sst.pdf

    Statistical prediction of ENSO (Nino 3) using sub-surface temperature data.
    Wasyl Drosdowsky
    Bureau of Meteorology Research Centre, Melbourne

    Abstract

    Statistical schemes for predicting the evolution of the El Nino - Southern Oscillation (ENSO) tend to show some skill out to 9 to 12 months from late in the Southern autumn or early winter, but only limited skill for a few months from late summer or early autumn through the so-called “predictability barrier”. In contrast, coupled ocean–atmosphere models exhibit some skill through this period. These models are either initialised with, or develop, significant equatorial sub-surface temperature anomalies during the early phases of El Nino (or La Nina) development. Inclusion of sub-surface temperature data can lead to similar improvement of skill through the autumn period in a simple statistical model.
    ———————-

    A few years ago, the computers couldn’t handle the wealth of data now being added to the models.

    http://www.nature.com/nature/journal/v447/n7144/full/447522a.html

    Things like this subsurface temperature change described in the abstract above show up in the models — and data from Argo and the like are collected — and we suddenly have a better predictive tool than before.

    Add in the deep paleo work:

    http://cel.isiknowledge.com/CEL/CIW.cgi?SID=P2LHK7N1m2nI3PIbIka&Func=Abstract&doc=2/1
    The last five glacial-interglacial transitions: A high-resolution 450,000-year record from the subantarctic Atlantic
    Cortese et al.
    PALEOCEANOGRAPHY 22 (4): Art. No. PA4203 OCT 19 2007

    “… A comparison between SST and benthic delta C-13 suggests a decoupling in the response of northern subantarctic surface, intermediate, and deep water masses to cold events in the North Atlantic. The matching features between our SST record and the one from core MD97-2120 (southwest Pacific) suggests that the super-regional expression of climatic events is substantially affected by a single climatic agent: the Subtropical Front, amplifier and vehicle for the transfer of climatic change….”

    Don’t ask me what it predicts — but don’t dismiss what we know, because it changed since yesterday. Being able to use the information coming in, at the rate people are now doing the science, is a challenge no human society has ever faced.

    I hope we’re able to do it.

  • cce // February 16, 2008 at 3:01 am

    The trends in the GISTEMP analysis are derived entirely from rural locations.

    A station move from an urban environment to a less-urban environment would create the opposite effect. I suspect that this is the case more often than not.

  • Lee // February 16, 2008 at 4:13 am

    Evan,

    So cite the damn source again. How hard can it be?

    Fred, this is HISTORICAL data. We can’t go back and take the measurements again. Science works from historical data with potential flaws all the freaking time. We don’t throw the data out - we do the best job we can extracting the information available in what we have.

    You would throw out the historical data because it isn’t perfect - that is a profoundly unscientific approach.

    Even more - those guidelines, such as they are, are relevant to errors in the instantaneous temperature measurement. But the historical data is being used to find trends. You have not in any way shown that errors that make a site sometimes measure higher than other nearby sites at any given time, will cause an error in the trend in averaged temps over time at that site compared to trend at nearby sites. In fact, you seem to be continuing to avoid that necessary step in your argument.

    You guys continually conflate temperature and temperature trend issues, and it is becoming hard to believe that is accidental.

  • Lee // February 16, 2008 at 4:21 am

    To follow on, Evan said:

    “What I understand is that 99.9% of those who read the CRN handbook know damn well NOT to site a station in such a way that the “estimated” violations are greater than the delta that they are trying to measure in the first place.”

    Evan, what makes you think that occasional possible ‘errors’ in instantaneous temperature reading compared to temp readings at nearby sites, cause an error in the trend at that site compared to nearby sites?

    Here is a scenario for you. At a given site, every day when the temp is taken, a die is thrown. Every time it comes up six, a random value of between 2C and 6C is added to the day’s measurement.

    The daily readings are going to be highly suspect - with a random ~ 16% of the data points reading too high by a substantial and uncorrectable error.

    Evan, fred:
    What will be the effect of those errors on the long-term trend calculated from those highly suspect daily values? Show your work.
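
    (For anyone who wants to check rather than argue: a quick simulation of exactly this scenario - made-up trend and noise numbers, nothing from any real station - gives the answer. The spikes raise the level by about 1/6 of their mean, roughly 0.67C, but leave the fitted trend essentially untouched.)

    import numpy as np

    rng = np.random.default_rng(42)
    days = np.arange(30 * 365)                            # thirty years of daily data
    true_temps = 15 + 0.00005 * days + rng.normal(0, 3, days.size)

    six = rng.random(days.size) < 1 / 6                   # the die comes up six
    errors = np.where(six, rng.uniform(2, 6, days.size), 0.0)
    measured = true_temps + errors

    true_slope = np.polyfit(days, true_temps, 1)[0]
    meas_slope = np.polyfit(days, measured, 1)[0]
    print(true_slope * 3650, meas_slope * 3650)           # per-decade trends: nearly equal
    print(errors.mean())                                  # ~0.67C, a near-constant offset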

  • Hank Roberts // February 16, 2008 at 5:26 am

    Or cite to someone else’s work, if you’re not doing your own calculations.

    “Over at CA” is not a citation. Heck, it’s not even a pointer.

  • fred // February 16, 2008 at 9:02 am

    I am not confusing trends and temps. I am not even making a point about either trends or temps. I am making a point about quality control and usability of data. About whether there is any way to get from readings of certain kinds of compromised instruments to anything we would call data.

    And here we have a paradigm case of the process of using instruments compromised with unknown effects, then trying to adjust for the compromises to get to data from readings. And if you look at the charts, it is evidently and obviously failing. In Lampasas, Texas, there is no data to throw out. There are a bunch of readings, but there is no data. Look what happened at the time of the move.

    There is no legitimate argument for using this instrument. Still less is there any argument for using this instrument when it doesn’t even meet your own standards.

    If you cannot admit this, then I am driven to thinking we are in the presence of denialism and bad faith. There is nothing worth defending here. Just admit it is not data, and let’s move on.

  • sod // February 16, 2008 at 12:05 pm

    sod:

    What I understand is that 99.9% of those who read the CRN handbook know damn well NOT to site a station in such a way that the “estimated” violations are greater than the delta that they are trying to measure in the first place.

    sorry Evan, but you are not only avoiding the point but talking total NONSENSE again.

    the station is measuring DAILY TEMPERATURE. so even an error >=5°C will certainly NOT be bigger than the “delta” they are measuring.

    the “delta” you are talking about is a TREND in global temperature change.
    the effect of the “POTENTIAL error” in that ONE station on the GLOBAL TEMPERATURE TREND is certainly SMALL and currently UNKNOWN.

  • steven mosher // February 16, 2008 at 3:05 pm

    George, it’s all still there:
    http://www.climateaudit.org/?p=2328
    Read the whole thread. You’ll see that it started as a joke, and I concluded like this:

    “Here is where I come down. One can believe in evolution as I do and still see the Piltdown Man as a hoax and bad science. The theory doesn’t get knocked down because of the hoax. Similarly, one should be able to accept GW or even AGW and recognize the hockey stick for what it is. The ‘hoax’ comparison is a bit harsh on Dr. Mann. I think he made a mistake. But carrying on as he has in light of what experts have said takes it to the level of ‘willful ignorance’.”

    Basically what puzzled me was why Mann refused to admit certain mistakes.

    For example getting geographical locations wrong:

    http://www.climateaudit.org/?p=2406

    This doesn’t make climate science false, but why not just correct it?
    Weird. Is that willful? I don’t know. What do you think? Like I said, hoax is too harsh; maybe willful is too harsh as well.

    For now, out of respect for Tamino and the good job he does, and to keep things civil here, I’ll not mention it again.

  • luminous beauty // February 16, 2008 at 4:36 pm

    fred,

    I’m sure you know the plural of anecdote is not data.

    “About whether there is any way to get from readings of certain kinds of compromised instruments to anything we would call data.”

    It is a simple matter to compare a single data stream with similar data from nearby stations that, reasonably, aren’t so compromised. The quick and dirty way that GISS does it, as I understand it, is that if a particular record shows a three-sigma anomaly from such a regional average, it is thrown out.
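
    A sketch of that screen as I read it (my own illustration, not GISS’s actual code):

    import numpy as np

    # departures of one station's monthly anomalies from the regional
    # average; the 5.8 is a deliberately planted bad value
    resid = np.array([-0.1, 0.05, 0.0, 0.1, -0.05, 0.2,
                      -0.15, 0.05, 5.8, -0.1, 0.0, 0.1])
    rejected = np.abs(resid - resid.mean()) > 3 * resid.std()
    print(rejected)    # only the 5.8 departure is flagged for rejection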

  • Lee // February 16, 2008 at 4:45 pm

    fred says:
    “And if you look at the charts, it is evidently and obviously failing.”

    No, if you look at the internal controls and external comparisons, it is evidently and obviously working very damn well.

    The fact that you completely fail to respond to the points about this being historical data, and the frequent use of unrepeatable and partially flawed historical data in science, and dismiss all the comparisons without comment, leads me “to thinking we are in the presence of denialism and bad faith.”

  • Evan Jones // February 16, 2008 at 5:33 pm

    sod:

    Well, you could consider that if the delta is 1/6 of the estimated error, you can kiss half of the 20th-century increase goodbye. Especially in light of the fact that the violations have greatly increased in recent years, yet are not adjusted for in the record.

    You might also consider NOAA and GISS adjustments over time. (They would probably knock your socks off if you bothered to check.)

    Or you can continue to consider me and my personality, which really doesn’t get us anywhere.

    “A station move from an urban environment to a less-urban environment would create the opposite effect. I suspect that this is the case more often than not.”

    Check that out on Watts’ site. It’s there.

    And this raises two other vital questions:

    1.) Recent violation creep at rural stations.

    2.) Gross, widespread site violations as a direct result of the MMTS switchover (beginning in the 1980s).

    Neither of these points has anything to do with the urban/rural classification. Many of the worst violations involve rural stations.

    Bottom line: the stations have been getting worse, not better, over time. Both the NOAA and GISS adjustments run in exactly the opposite direction to this trend.

    That alone should result in a reexamination of records and procedures. Why are the scientists involved not falling all over each other to examine this?

  • Evan Jones // February 16, 2008 at 5:41 pm

    “Evan, what makes you think that occasional possible ‘errors’ in instantaneous temperature reading compared to temp readings at nearby sites, cause an error in the trend at that site compared to nearby sites?”

    What makes me think that is that

    a.) It is not occasional, it is very widespread. Fully 2/3 of stations are in CRN4 or CRN5 violation. Only 4% of stations are CRN1.

    b.) These violations are not constant from 1900. They have increased greatly over time (esp. since 1980, thanks to the MMTS switchover). That creates a trend.

    c.) The adjustments (NOAA & GISS) are in exactly the opposite direction of this trend.

    That is what makes me think what I think. And what I think is that the system needs to be reevaluated.

  • Hank Roberts // February 16, 2008 at 7:24 pm

    If it were not possible to get good scientific work done, using data from primitive instruments, where did science come from?

  • Lee // February 16, 2008 at 9:42 pm

    And in the period of overlap, the satellite record is in very good agreement with the GISS and CRU analyses. And JohnV’s preliminary analysis of only CRN1 and 2 sites also shows very good agreement with the GISS analysis. Not to mention such things as northern movement of USDA zones, in amounts that are in good general agreement with the GISS warming numbers. Or changing dates of green-up and first bloom, earlier in the north where warming is allowing earlier bloom, and later in the south where loss of chilling days is delaying bloom, all also in amounts that are in good general agreement with warming values from GISS. And on and freaking on.

    Evan asks why the scientists aren’t all falling over each other to examine errors - what the hell does Evan think is the point of all the GISS publications documenting their corrections of inhomogeneities? Why on earth does Evan imagine the comparisons with the satellite record are being conducted? What does Evan imagine is the purpose of the overlapping-in-time plans for the CRN?

  • Lee // February 16, 2008 at 9:45 pm

    I also note that Evan is not citing his source for his claim that the satellite record is adjusted by synchronizing with the surface record, nor withdrawing that claim - nor repeating it.

  • sod // February 16, 2008 at 10:59 pm

    sod:

    Well, you could consider that if the delta is 1/6 of the estimated error, you can kiss half of the 20th-century increase goodbye.

    again:

    you are drawing conclusions from a POTENTIAL error in one station to an effect on the AVERAGE GLOBAL TEMPERATURE. that is absurd!

    the problem is, you are doing that without any evidence or facts, and based on your extremely limited understanding of statistics, errors and effects.

    so let us look at some REAL facts:

    http://ams.confex.com/ams/pdfpapers/119064.pdf

    this is YOUR LaDochy paper. it shows a change in Tmax of 1°C and in Tmin of 0°C.

    for climate reconstruction, this will give you an average change of 0.5°C while changing from a type 5 to a type 1 station!

    and this is the WORST CASE scenario! and this is being COMPENSATED.

    but i notice, i wrote this before. perhaps you will read it, for once?

  • Heretic // February 17, 2008 at 2:25 am

    Sod summarizes quite well why this whole thing is such an uninteresting distraction.

  • Evan Jones // February 17, 2008 at 2:41 am

    “If it were not possible to get good scientific work done, using data from primitive instruments, where did science come from?”

    From continually refining and improving methods, and by going out of the way to correct whenever error is noted.

    Lee:

    1.) But microwave data is a proxy, and it is adjusted because of all the interference near the surface. The only way to be sure of surface temperature measurements is to take them correctly at the surface. Then we can compare them with satellite data. I think you are putting the cart before the horse.

    One should only rely on an indirect measurement when a correct measurement method is not available. A correct method IS available. It is not being done. It should be done.

    2.) John V’s sample is very small (a mere 13% of 300 of 1221 stations) and has a western bias. Most of the 20th-century temperature rise in the US was in the west.

    Therefore, we must wait until a much larger portion of the net is surveyed, and distribution is properly accounted for.

    I do agree that the CRN is a good effort, so far as I can tell (though why they are using a wired MMTS system without automatic data transmission is beyond me).

    But the NOAA and GISS adjustments seem to be in exactly the wrong direction:

    GISS adjusts the past cooler and does not adjust the present much at all. This is on top of the NOAA adjustment, which (mostly) leaves the past data as is and adjusts the current data warmer. Therefore, it would seem that their efforts are misplaced.

    One can only hope the CRN data does not get subjected to the above adjustments.

    “you are drawing conclusions from a POTENTIAL error in one station to an effect on the AVERAGE GLOBAL TEMPERATURE. that is absurd!”

    No, I am suggesting that since the “estimated” error of six out of seven stations observed is 1C or greater, a reassessment is necessary. In fact, I think it absurd to suggest otherwise.

  • Evan Jones // February 17, 2008 at 2:45 am

    “I also note that Evan is not citing his source for his claim that the satellite record is adjusted by synchronizing with the surface record, nor withdrawing that claim - nor repeating it.”

    I have, but the post has not yet appeared. Please note.

  • Evan Jones // February 17, 2008 at 4:00 am

    sod:

    That does not appear to be the same paper at all. Different title, different subject, single sample.

    Here is the entire abstract from LaDochy (Dec. 2007). It covers 331 stations, not just a single comparison. It disagrees with your conclusion regarding T-Min.

    Also, no account whatever is made of rural stations’ microsite violations.

    ————————————————————

    Recent California climate variability: spatial and temporal patterns in temperature trends

    Steve LaDochy1,*, Richard Medina1,3, William Patzert2
    1Department of Geography & Urban Analysis, California State University, 5151 State University Drive, Los Angeles, California 90032, USA
    2Jet Propulsion Laboratories, NASA, 4800 Oak Grove Drive, Pasadena, California 91109, USA
    3Present address: Department of Geography, University of Utah, 260 South Central Campus Drive, Salt Lake City, Utah 84112-9155, USA
    *Email: sladoch@calstatela.edu

    ABSTRACT: With mounting evidence that global warming is taking place, the cause of this warming has come under vigorous scrutiny. Recent studies have led to a debate over what contributes the most to regional temperature changes. We investigated air temperature patterns in California from 1950 to 2000. Statistical analyses were used to test the significance of temperature trends in California subregions in an attempt to clarify the spatial and temporal patterns of the occurrence and intensities of warming. Most regions showed a stronger increase in minimum temperatures than with mean and maximum temperatures. Areas of intensive urbanization showed the largest positive trends, while rural, non-agricultural regions showed the least warming. Strong correlations between temperatures and Pacific sea surface temperatures (SSTs), particularly Pacific Decadal Oscillation (PDO) values, also account for temperature variability throughout the state. The analysis of 331 state weather stations associated a number of factors with temperature trends, including urbanization, population, Pacific oceanic conditions and elevation. Using climatic division mean temperature trends, the state had an average warming of 0.99°C (1.79°F) over the 1950–2000 period, or 0.20°C (0.36°F) per decade. Southern California had the highest rates of warming, while the NE Interior Basins division experienced cooling. Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures. In comparison, irrigated cropland sites warmed about 0.13°C per decade annually, but near 0.40°C for summer and fall minima. Offshore Pacific SSTs warmed 0.09°C per decade for the study period.

  • Hank Roberts // February 17, 2008 at 5:01 am

    If you really want to know what’s going on in the world, observe the world.

    For example the timing of spring.

    What’s the most sensitive instrument?

    Life.

    Here’s the main cite:

    http://www.nature.com/nature/journal/v421/n6918/abs/nature01286.html

    Nature 421, 37-42 (2 January 2003) doi:10.1038/nature01286

    A globally coherent fingerprint of climate change impacts across natural systems

    “… debates within the Intergovernmental Panel on Climate Change (IPCC) reveal several definitions of a ’systematic trend’. Here, we explore these differences, apply diverse analyses to more than 1,700 species, and show that recent biological trends match climate change predictions. Global meta-analyses documented significant range shifts averaging 6.1 km per decade towards the poles (or metres per decade upward), and significant mean advancement of spring events by 2.3 days per decade. We define a diagnostic fingerprint of temporal and spatial ’sign-switching’ responses uniquely predicted by twentieth century climate trends. Among appropriate long-term/large-scale/multi-species data sets, this diagnostic fingerprint was found for 279 species. This suite of analyses generates ‘very high confidence’ (as laid down by the IPCC) that climate change is already affecting living systems.”

  • Ken Feldman // February 17, 2008 at 5:46 am

    Guys,

    I’ve had experience arguing these points with Evan at another website. I have this advice:

    He’s made up his mind. No amount of reasoning will sway him. He thinks that you can determine the error of the entire surface station network by calculating a weighted average based on the maximum potential error implied by the CRN classification of each site. He doesn’t understand errors in measurement, and doesn’t look at physical evidence (like ice melt, changing seasons, shifting biological ranges, etc…)

    Basically, he’s made up his mind that this is all a liberal plot to increase the price of a gallon of gasoline, and nothing you say is going to change his mind.

  • John Tofflemire // February 17, 2008 at 6:09 am

    Thank you to Hank Roberts for the references regarding ENSO. It’s an interesting topic I’ve done some reading on, and I look forward to learning more about it.

    I’m not denigrating what is known about these phenomena; I’m just saying that people can and frequently do conflate what they think they know about something with the total reality of that thing.

  • fred // February 17, 2008 at 7:17 am

    Well, I just read Atmoz’ post on Texas. This is really, really confusing. We seem to have a totally out-of-spec station, with what anyone would expect to be a warming bias following a move to an out-of-spec site, that in fact in terms of its raw data shows a cooling trend which is not confirmed by rural stations in the area, and which then has a hockey-stick warming trend adjusted into it by the adjustment algorithm. This is really very weird.

    I’ll tell you what: it does not raise my level of confidence in the accuracy of the surface station record. If it’s this weird in the US, how much weirder must it be in rural urbanizing China?

    Yes, I know of course that one station doesn’t prove anything about the record. But you have to admit it is very weird indeed, and wonder if this is a representative sample of how we go about calculating surface temps elsewhere than in Texas.

  • sod // February 17, 2008 at 7:37 am

    One should only rely on an indirect measurement when a correct measurement method is not available. A correct method IS available. It is not being done. It should be done.

    sigh. please tell me, how do you want to “correctly” measure the temperature in 1921?
    there is no way around using the data we have. FACT.

    2.) John V’s sample is very small (a mere 13% of 300 of 1221 stations) and has a western bias. Most of the 20th-century temperature rise in the US was in the west.

    John V, in sharp contrast to you, DOES understand the basic mechanisms of this subject. he did a WEIGHTED average, by location.

    your “most stations in the west” argument, like all your “arguments”, is NONSENSE!

    No, I am suggesting that since the “estimated” error of six out of seven stations observed is 1C or greater, a reassessment is necessary. In fact, I think it absurd to suggest otherwise.

    again: estimated means that we don’t know the REAL error.

    potential means that the error MIGHT be as high as this.

    the LaDochy paper i linked shows that the REAL error over the years is below 0.5°C in a WORST CASE scenario.
    the REAL effect on the whole network will be much LOWER, because the majority of stations are NOT type 5 ones. this is consistent with the analysis of John V.

  • sod // February 17, 2008 at 8:04 am

    sod:

    That does not appear to be the same paper at all. Different title, different subject, single sample.

    Here is the entire abstract from LaDochy (Dec. 2007). It covers 331 stations, not just a single comparison. It disagrees with your conclusion regarding T-Min.

    Also, no account whatever is made of rural stations’ microsite violations.

    a case study in ignorance.
    i called it “YOUR” LaDochy paper because you brought up LaDochy.
    i brought it up on Feb 6, on this page.

    http://ams.confex.com/ams/pdfpapers/119064.pdf

    i brought it up BECAUSE it is dealing with microsite issues (that is what we are discussing here!)
    why you continue to bring up the other paper, which deals with UHI, all the time is beyond me.

    though it is absolutely obvious that you don’t understand the difference, again.

    why else would you claim that a UHI paper contradicts a microsite paper on Tmin?!?

    ———————

    He thinks that you can determine the error of the entire surface station network by calculating a weighted average based on the maximum potential error implied by the CRN classification of each site. He doesn’t understand errors in measurement, and doesn’t look at physical evidence (like ice melt, changing seasons, shifting biological ranges, etc…)

    very nice sum up.

  • luminous beauty // February 17, 2008 at 2:26 pm

    Evan doesn’t understand the difference between instrument calibration and data adjustment.

    CRN has done an empirical study of local effects, comparing their showcase Asheville site against the ASOS station at the nearby airport.

    http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/research/Sun.pdf

    Since the airport station is on tarmac surrounded by parking lots it is a class 5 according to the siting guidelines.

    They found a bias of, not 5C, but 0.25C.

    The LaDochy paper that sod cites indicates a bias of 0.5C between class 1 and 5 sites.

    Suggesting the guidelines may exaggerate by a factor of ten or more.

    Watt’s up with that?

  • wildlifer // February 17, 2008 at 3:40 pm

    Isn’t the UHI an anthropogenic artefact? So if urban areas are trapping/holding heat so that it doesn’t escape into space, why should we eliminate that effect from the data? Doesn’t it contribute to overall warming?

  • fred // February 17, 2008 at 4:00 pm

    Luminous, fine: if the spec is wrong, change it. Maybe it really does not matter. Maybe they were wrong when they set up the spec. Then let’s locate them wherever: parking lots, airports, McDonald’s. But don’t have specs and then ignore them. Or do, and stop being taken seriously. Why is this simple point so hard to accept?

  • Evan Jones // February 17, 2008 at 4:06 pm

    Ken,

    You are quite correct.

    My unalterable opinion is that in order to measure surface temperatures properly one must measure surface temperatures properly.

    I do not see how or why anyone else would have a different opinion on the matter or have the slightest objection.

    It’s not unlike cutting cards. If the shuffle is honest, why would anyone object? But if the shuffle is not honest it is easy to see why there would be objections.

    I do not accuse. I merely want to cut. Any objections?

    sod:

    The example you cite is a single case of comparing an LA site with one at USC. And, as I said, no microsite issues seem to be considered, only UHI.

    If you observe the USC station, it appears to be sited in the middle of a concrete island and surrounded by small structures. That would certainly affect the T-min comparison.

    We only have one photo, and it does not include a closeup of the surface, so one cannot determine for sure. (I will check and see if this site has been properly observed and photographed.)

    As for the John V study, the western bias was pointed out well after the study was put together, so unless JV readdressed the bias, I do not see how you can be correct in your assertion.

    But why spitball? Just do the job right in the first place and there will be no need to speculate or estimate.

    As for my ignorance, oh, yes, you are quite right. I am quite ignorant of what the surface temperatures actually are and will remain so until the measure is properly taken.

    HR:

    No one is disputing that there has been warming. Of course, a mild warming occurring after three decades of mild cooling would have a considerable effect on life. What is in dispute is the degree and the cause and the means of measurement, not whether warming has occurred.

    Since a comprehensive reassessment would be relatively cheap, easy, and quick, I see no reason why this is not done, and I see no reason why anyone would object.

    “Since the airport station is on tarmac surrounded by parking lots it is a class 5 according to the siting guidelines.”

    LB:

    Both sides of the AP are in violation. One side is more under a UHI bubble than the other. UHI conditions tend to mask microsite violations, as I have already pointed out.

    The “rural” USC site that sod points out seems to have severe violations, though I cannot tell for sure from the photo. (I will try to check.) If so, that would certainly reduce any bias measurement, esp. T-Min.

  • Evan Jones // February 17, 2008 at 4:12 pm

    LB:

    Wait.

    I was using the wrong example completely. My apologies.

    I am checking out the study you cited.

  • Evan Jones // February 17, 2008 at 4:17 pm

    >Isn’t the UHI an anthropogenic artefact? So if urban areas are trapping/holding heat so that it doesn’t escape into space, why should we eliminate that effect from the data? Doesn’t it contribute to overall warming?

    Yes.

    But it is important to make sure that the percentage of stations sited in UHIs matches the percentage of the earth’s surface that UHIs cover.

    The US is c. 3% urban. If only 3% of US stations were subject to UHI, we wouldn’t have a problem.

  • Evan Jones // February 17, 2008 at 4:28 pm

    LB: Unfortunately, the Asheville, NC, site has not been observed (that I could find). Other stations that are considered “showcase” have been in very severe violation. Therefore we cannot presume that it is free of violation merely because it is considered a “showcase” until it has been observed. (Also, there was no date on this study. Do you know when it was done?)

    There are no photos of the Asheville site in the article you have cited. Therefore, you may be quite right. Or not.

  • Evan Jones // February 17, 2008 at 4:46 pm

    “Basically, he’s made up his mind that this is all a liberal plot to increase the price of a gallon of gasoline, and nothing you say is going to change his mind.”

    Except that I don’t believe any such nonsense. I certainly don’t think it is a plot, just a careless measurement issue that can be corrected by non-careless measurement.

    (Also, I am a liberal, myself, of the New York, bleeding-heart variety, and a defender of LBJ, not that this is in any way relevant to the debate at hand.)

  • dhogaza // February 17, 2008 at 4:50 pm

    My unalterable opinion is that in order to measure surface temperatures properly one must measure surface temperatures properly.

    No one here will contest that.

    However, as has been pointed out to you over and over again, we’re not interested in proper surface temperature measurements.

    We’re interested in trends.

    Quit conflating the two.
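    A minimal numerical sketch of that distinction, in Python with invented numbers: a constant siting bias shifts every reading but leaves the least-squares trend untouched, while a mid-series step change (say, a station move) is the kind of error that actually distorts a trend.

    import numpy as np

    years = np.arange(1950, 2010)
    rng = np.random.default_rng(0)
    # 0.02 C/yr warming plus weather noise
    true = 0.02 * (years - years[0]) + rng.normal(0, 0.2, years.size)

    def trend(series):
        """OLS slope in degrees C per year."""
        return np.polyfit(years, series, 1)[0]

    biased = true + 1.5  # constant +1.5 C siting bias
    stepped = true + np.where(years >= 1980, 0.5, 0.0)  # 0.5 C jump mid-record

    print(f"true trend:    {trend(true):.4f} C/yr")
    print(f"biased trend:  {trend(biased):.4f} C/yr")  # identical to the true trend
    print(f"stepped trend: {trend(stepped):.4f} C/yr")  # inflated by the step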

    Now that I’ve violated my “don’t feed the troll” recommendation made a few days ago, let me repeat my recommendation not to feed the troll. :)

  • luminous beauty // February 17, 2008 at 5:16 pm

    “Since a comprehensive reassessment would be relatively cheap, easy, and quick, I see no reason why this is not done, and I see no reason why anyone would object.”

    Comprehensive reassessments have been, and continue to be, made using proper analysis, not vagrant suppositions.

    I see no real use in analyzing data against a scale that has no empirical validity.

    I have no objections to amateurs trying to recreate scientific results. I encourage it, but they should strive to understand the science and make some empirical measurements of their own before concluding that multiple corroborating professional scientific analyses are seriously flawed.

  • Timothy Chase // February 17, 2008 at 5:59 pm

    Fred wrote:

    Luminous, fine, if the spec is wrong, change it. Maybe it really does not matter. Maybe they were wrong when they set up the spec. Then lets locate them wherever, parking lots, airports, McDonalds. Don’t have specs and then ignore them. Or, do it, and stop being taken seriously. Why is this simple point so hard to accept?

    Fred, the specs are for setting up future sites, not for sites that have existed for the past decade, fifty-plus years or whatever. In the strictest sense, those older sites do not violate the specs because the specs weren’t created for them.

    Changing those sites so that they “fit” the guidelines would introduce additional uncertainty. Replacing them with new sites would introduce additional uncertainty. Either would change the way in which we measure temperatures, and thus would introduce additional noise into our measurement of the trends in temperature, which is after all what we are actually interested in.

    Additionally, those sites typically do not belong to NASA GISS but to other outfits, e.g., ones that study the weather. They aren’t NASA’s to change.

  • luminous beauty // February 17, 2008 at 6:10 pm

    Evan,

    The dates for the study are in the introduction.

    The Asheville site is a CRN station, not a USHCN COOP station. It is located near CRN headquarters, thus a showcase site. I suspect it is not very likely to suffer severe criteria violations, but be a Missouri mule, if you will.

    Asphalt and jet exhaust are class 5 criteria. You don’t need a picture.

  • Hank Roberts // February 17, 2008 at 6:24 pm

    EJ writes:

    > Wait.
    >
    > I was using the wrong example
    > completely.

    Is it possible to go back and edit (e.g. strike out, change font) when one discovers one has made mistaken assertions, with this software? It’d be very helpful to anyone trying to follow later.

  • Lee // February 17, 2008 at 7:53 pm

    Evan, please stop the bullshit. Tamino, I’m sorry for the language, but it is time to call this what it is.
    Examples:
    —–
    “LB: Unfortunately, the Asheville, NC, site has not been observed (that I could find). Other stations that are considered ‘showcase’ have been in very severe violation. Therefore we cannot presume that it is free of violation merely because it is considered a ‘showcase’ until it has been observed.”
    Evan, this is a showcase CRN site. The example given is of a showcase CRN site being compared to a nearby urban site. Please give us an example of a showcase CRN site being found to be in “very severe violation.” In fact, give us more than one - you used the plural.
    —–

    “The US is c. 3% urban. If only 3% of US stations were subject to UHI, we wouldn’t have a problem.”
    Evan, urban sites do not contribute to the trends in the GISS analysis. Urban trends are REMOVED in the analysis. There is no urban contribution to the GISS trend, so please tell us how that UHI effect can be a “problem” for the GISS trend?
    —–

    “But why spitball? Just do the job right in the first place and there will be no need to speculate or estimate.

    As for my ignorance, oh, yes, you are quite right. I am quite ignorant of what the surface temperatures actually are and will remain so until the measure is properly taken.”

    Evan, please tell us how to go back to 1921 and make sure those measurements are “properly taken”? One more freaking time - this is HISTORICAL data. It CANNOT be measured again.
    —–

    “As for the John V study, the western bias was pointed out well after the study was put together, so unless JV readdressed the bias, I do not see how you can be correct in your assertion.”
    JohnV did a gridded study, consciously and purposefully mirroring the gridded GISS analysis - area bias correction is the entire point of gridded studies, and it was a design consideration from the beginning. The “western bias” argument was mud thrown at the window to try to obscure the view - it is bullcrap, and you are repeating bullcrap.
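    A toy sketch of why gridding corrects spatial sampling bias (Python; the station counts and anomalies are invented, and this is not JohnV’s or GISS’s actual code). Eighty of a hundred stations sit in the “west” cell, so a naive station mean is dominated by the west, while the mean of the two equal-area cell means is not:

    import numpy as np

    rng = np.random.default_rng(1)
    west = rng.normal(1.0, 0.1, 80)  # west-cell anomalies, true mean +1.0 C
    east = rng.normal(0.2, 0.1, 20)  # east-cell anomalies, true mean +0.2 C

    naive = np.concatenate([west, east]).mean()    # ~0.84 C, pulled toward the west
    gridded = np.mean([west.mean(), east.mean()])  # ~0.60 C, the true areal mean

    print(f"naive station mean: {naive:.2f} C")
    print(f"gridded mean:       {gridded:.2f} C")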
    —–

    “I also note that Evan is not citing his source for his claim that the satellite record is adjusted by synchronizing with the surface record, nor withdrawing that claim - nor repeating it.”

    “I have, but the post has not yet appeared. Please note.”

    So post it again - how hard can it be to post a citation? You have been asked repeatedly by myself and others to cite a source for that claim. Frankly, Evan, I think you’re making this up out of whole cloth. That hypothesis is consistent with everything else you’re doing here, AND I have a reasonable understanding of the history of the satellite measurements, and your claim is not consistent with what I know. I will continue to believe that you are simply inventing that claim until you demonstrate otherwise.
    —–

    That’s all of Evan’s crap I can take for now - any bets on the probability of on-topic substantive responses?

  • sod // February 17, 2008 at 9:42 pm

    “sod:

    The example you cite is a single case of comparing an LA site with one at USC. And, as I said, no microsite issues seem to be considered, only UHI.”

    If you can’t spot the MICROSITE issue in the picture, you are beyond help.

    This is what the authors say:

    “The USC site resembles a park, with tall shade trees just west of the instrument shelter (fig 3). The shelter is also above a grass area. The DWP site is located on the roof of a 2-story downtown parking structure, with no immediate vegetation or obstructions (fig 4). The DWP location is also closer to where one would expect the urban heat island peak.”

    http://ams.confex.com/ams/pdfpapers/119064.pdf

    The UHI effect gets mentioned as an extra.

    “If you observe the USC station, it appears to be sited in the middle of a concrete island and surrounded by small structures. That would certainly affect the T-min comparison.”

    Take a look at the “good” example (Orlando) on Watts’ page:

    http://www.surfacestations.org/

    Notice the concrete footpath leading to this type 1 (ONE!) station? And there isn’t grass below the sensor either! FRAUD!

    “As for the John V study, the western bias was pointed out well after the study was put together, so unless JV readdressed the bias, I do not see how you can be correct in your assertion.”

    As always, I am right and you are wrong.
    Here is the link to the original CA discussion:

    http://www.climateaudit.org/?p=2048

    John V did some original analysis in post #123.
    He did a second analysis based on a GRID in post #157.

  • Evan Jones // February 18, 2008 at 5:46 am

    I looked up USC. Is it not also smack in the middle of LA, even if it is not as near the expected UHI peak? The author seems to be tacitly saying it is also under the UHI effect.

    Therefore, are both stations not subject to at least some degree of UHI? I have said time and again that UHI masks the effects of site violation (or nonviolation). Therefore it would be no surprise that there would be only a 0.5C difference, as the only significant factors would be UHI-1 vs. UHI-2.

    And LaDochy (Dec. 2007) specifies quite clearly that UHI increases the rate [sic] of temperature increase in CA from 1950 to 2000, doubling it at T-max and quintupling it at T-min.

    I say this (right or wrong) justifies further checking.

    Anthony Watts and Steve Mosher have both commented that the stations included were concentrated in the west, with some in the east and very few in the middle, and that only 17 CRN1 stations (all available) were included. Do you disagree with that?

    JV may be right. He may be wrong, owing to an insufficient sample. The test will be concluded when the data is accumulated.

    I think there needs to be further checking, and that all results should be published openly. Do you disagree with this?

    You can dwell on me, personally, all you like. I would guess my IQ is at the very least 25 points lower than yours (possibly a lot more), and my degree is in a nontechnical field.

    Now that we have gotten that out of the way, could you please explain why you seem so adamantly opposed to what looks to me like routine due diligence?

    Stipulating that every point you make is entirely correct, all that means is that LaDochy seems to contradict himself in two separate papers. Fine.

    I say check. Why would you disagree that this should be done? I could understand it if you said, “Fine, then, check. I think you will be proven wrong.” But you seem to object to the very act of checking.

  • Evan Jones // February 18, 2008 at 7:50 am

    “Evan, this is a showcase CRN site. The example given is of a showcase CRN site being compared to a nearby urban site. Please give us an example of a showcase CRN site being found to be in ‘very severe violation.’ In fact, give us more than one - you used the plural.”

    Considering the history of the NOAA, including its “showcase site” at U Phoenix, I utterly refuse to accept the quality of any site until it has been observed and photographed and I (or Anthony Watts or someone else in whom I have a modicum of trust) have seen the photos.

    Period.

    “Urban trends are REMOVED in the analysis. There is no urban contribution to the GISS trend, so please tell us how that UHI effect can be a “problem” for the GISS trend?”

    UHIE is arguably lowballed. This has been at the center of the Hansen/McKitrick dispute for years. If UHI is measured against rural stations that are themselves compromised, it is inevitable that UHI is being underrated.

    Also, the “lights=” method (Hansen, 2001) of determining what is or is not an urban area is in huge dispute.

    “Evan, please tell us how to go back to 1921 and make sure those measurements are ‘properly taken’? One more freaking time - this is HISTORICAL data. It CANNOT be measured again.”

    I did not say to discard historical data. I said set up a new network, properly sited, and compare the results with the current network. That would account for site violation effects.

    Then an accounting can be made of the history of station moves and MMTS “upgrades” and a backbearing can be done to make proper adjustments to historical data.

    Most of the old stations c. 1900 were properly sited; no need for adjustment. Yet those are adjusted cooler (for an unknown reason) by GISS: about 0.7C cooler for 1900, pro-rated toward modern times, at which point there is no adjustment. This is on top of the NOAA adjustment, which leaves the 1900 data mostly unchanged but adjusts modern measurements 0.3C upwards [sic!]. Both adjustments are in exactly the wrong direction. GISS adjustments are added on top of NOAA adjustments, resulting in a c. 1C upward trend in the data during the 20th century.

    The effect is that older readings are dumbed down and modern readings are pumped up: exactly the opposite of the direction in which microsite violations occurred. Need for review? I think so.

    “The ‘western bias’ argument was mud thrown at the window to try to obscure the view - it is bullcrap, and you are repeating bullcrap.”

    I don’t think JV thinks so. At any rate, you can’t properly grid the middle part of a country when the data for it simply isn’t there. The data is incomplete. The data will be completed. Then JV and everyone else and his mother will do the comparisons. In fact, current GISS data is very similar to NOAA data. GISS adjustments to NOAA are very slight after 2000. (It is in the past that they are whopping.)

    “Evan, I think you’re making this up out of whole cloth. That hypothesis is consistent with everything else you’re doing here”

    Here are just two examples of how in situ surface observations affect the results of satellite data. Obviously, if the in situ data is not correct, this will affect the satellite results.

    “Satellite-derived land surface air temperature data”

    “A new method to derive land surface temperatures from SSM/I data was described in Basist et al. (1998). This method uses the relationship among the seven different microwave channels provided by the Defense Meteorological Satellite Program (DMSP) instrument to identify the land surface type and determine the percentage of a pixel that is liquid water each time the satellite flies overhead. Since water has an emissivity of 0.65 at 19 GHz, the impact surface wetness has on the observed brightness temperature can be determined. This adjustment is empirically calculated by using the relationship between in situ temperature measurements and satellite brightness temperature at the SSM/I frequencies. Williams et al. (2000) expanded on this method using other surface and atmospheric conditions based on statistical relationships between in situ and satellite observations at the time of satellite overpass.”

    http://lnweb18.worldbank.org/ESSD/ardext.nsf/18ByDocName/ABlendedSatellite–InSituNear-GlobalSurfaceTemperatureDataset/FILE/ABlendedSatelliteInSitu.pdf

    “Infrared and microwave SST retrievals are highly complementary but are found to have significant differences that must be addressed if the products are to be combined. Individual products are evaluated using buoy observations to identify any dependence of the retrieval uncertainty on atmospheric forcing. The infrared products are seen to be affected by aerosols, water vapor, and SST while the microwave product is affected by atmospheric stability, wind speed, SST, and water vapor. Applying bias adjustments based on these results reduces the differences between the products.

    “A detailed comparison is performed between existing infrared and microwave sea surface temperature products and independent in situ observations from drifting and moored buoys. The infrared product is the operational AVHRR nonlinear SST (NLSST) product from the Naval Oceanographic Office [2]. The passive microwave product is the TRMM Microwave Imager (TMI) SST retrievals prepared by Remote Sensing Systems [3]. The buoy observations were drawn from an archive of NCEP GTS surface marine data maintained at the NOAA Climate Diagnostics Center.”

    http://www.scielo.cl/scielo.php?pid=S0717-65382004000200018&script=sci_arttext

    And that, Lee, is all I have to say to you.

    I have tried to be reasonable and answer questions put to me. However, I see that a civilized conversation is an exercise in futility.

    This conversation is over.

  • fred // February 18, 2008 at 9:03 am

    TC, it makes no sense. The reason most of them are out of spec seems to be moves and changes in or around the sites. So the argument now is: site new ones in accordance with the spec (which suggests the spec is valid), but do not correct the old ones, with the aim of preserving their history, though that history already includes changes at least as great as those required to bring them into spec.

    This is just nuts. I never thought I’d get to this point, but I am really getting most sceptical about the surface station record. And we still have not surveyed China! What will a survey of the Chinese stations show?

    I have no doubt that it’s warming, just on the totally unscientific grounds that I can remember the winters of my youth. But I doubt whether the surface station record as it is being exposed adds much or any precision to that. Timing of flowering of crops, extent of vegetation, date of spring, habitats: yes, all that shows trends. But this station stuff seems to pretend to a precision, at least recently, that it just doesn’t have. I think we should stop adjusting, report raw data, and only from good stations with a well-defined history. That at least will be data. The rest of this stuff looks more and more like wild-assed guesses.

    And what on earth is the point of adjusting the thirties down? It’s a nonsense.

  • Barton Paul Levenson // February 18, 2008 at 4:09 pm

    Evan Jones posts:

    [[Considering the history of the NOAA, including its “showcase site” at U Phoenix, I utterly refuse to accept the quality of any site until it has been observed and photographed and I (or Anthony Watts or someone else in whom I have a modicum of trust) have seen the photos.

    Period.]]

    Darn right, Evan! And along the same lines, I refuse to believe in any of the discoveries of exosolar planets until I see a photograph of those planets! I ain’t gonna listen to a bunch of so-called “scientists” when I can just trust my own damn eyes!

  • Barton Paul Levenson // February 18, 2008 at 4:12 pm

    fred posts:

    [[I think we should stop adjusting, report raw data, and only from good stations with a well defined history.]]

    Better yet, fred, let’s cut to the chase — I think we should report data only from stations that tell us what we want to hear!

  • Lee // February 18, 2008 at 4:36 pm

    Evan -

    So you have no examples of CRN sites being in “severe violation.” Got it.

    You had earlier said: “But it is important to make sure that the percentage of stations sited in UHIs matches the percentage of the earth’s surface that UHIs cover. The US is c. 3% urban. If only 3% of US stations were subject to UHI, we wouldn’t have a problem.” Now you retreat to potential microsite errors in the adjustment. Evan, the trend in urban areas is adjusted to match that at surrounding non-urban sites - there is no urban trend in the record. You conflate UHI and microsite issues and alter your argument as necessary to avoid the point. Got it.

    On surface temps, you refuse to use anything except the “flawed” data - you’ve argued that the satellite data isn’t useful here. And now you require a method that won’t give us results for 20 years. Given the range of methods for determining temps and corroborating the existing surface analysis, why do you insist on only trusting the surface temp record, which you don’t trust? Gee, I wonder.

    JohnV has said outright that his analysis corroborates the GISS analysis.

    Calibrating satellite-derived temps against INSTANTANEOUS surface temperatures, which is all you describe, is NOT the same as adjusting the satellite trend to match the surface trend. No one gives a flying f*** about instantaneous temperatures; we are talking about TREND. Calibrating instantaneous temperatures is NOT adjusting the trend - it will have NO EFFECT on the trend. Why do you so consistently conflate issues about instantaneous temperatures with trend analyses? You have provided NO citation showing that there is any adjustment of the satellite trend to match the surface station analysis of trend - and that is the only issue we care about.
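    The distinction is easy to demonstrate with invented numbers (a Python sketch, not any real retrieval code): estimate a calibration offset from instantaneous satellite/in-situ matchups, remove it, and the trend of the satellite record is untouched, because subtracting a constant cannot change a slope.

    import numpy as np

    rng = np.random.default_rng(3)
    in_situ = rng.uniform(0, 30, 200)  # collocated surface readings, C
    satellite = in_situ + 0.4 + rng.normal(0, 0.5, 200)  # retrievals with a +0.4 C bias

    bias = np.mean(satellite - in_situ)  # calibration offset from the matchups
    calibrated = satellite - bias        # bias removed; any trend over time is unchanged
    print(f"estimated calibration offset: {bias:.2f} C")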

    “This conversation is over.”
    Right…

  • fred // February 18, 2008 at 5:37 pm

    I’m also looking at Watts’ latest posting, Cedarville. Ask yourself what possible grounds there could be for these adjustments to the readings. Like, in about 1895, the view seems to be that Cedarville was reading too low by about 0.75C. Then in 1930 it seems to be reading too high by about 1C. So what changed? Then in 1995 it’s spot on. Why? I really don’t get it. Like, you can see that there’s a case for just using the raw data. Or there’s a case for adjusting for known station site and installation changes. But this stuff? Cedarville appears to have been rural to the same degree for the last 100 years. This is making no sense at all.

  • Hank Roberts // February 18, 2008 at 6:18 pm

    EJ>>> the post has not yet appeared.
    EJ>>> Please note.”

    >> So post it again - how hard can
    >> it be to post a citation?

    EJ> I see that a civilized conversation
    EJ> is an exercise in futility.
    EJ>
    EJ> This conversation is over.

    Tamino, when people can’t get their citations to successfully post online, after trying, whether the Secret Masters are censoring the Internet or the spam filters hate truth, or whatever, it does leave them hanging.

    What about putting up a tip jar and offering a bounty for anyone who can come up with a citation supporting claims like this one EJ makes?

    I’d love to put a bounty on proof for some of the common claims that people can’t seem to find their own support for, or if they know the truth, they can’t get it to post on the Net.

    If the secret censors are powerful enough to stop _all_ of us trying, even that would be useful information.
    Of course, maybe that’s what the absence of citation proves. Hmmm.

  • J // February 18, 2008 at 6:53 pm

    “Here are just two examples of how in situ surface observations affect the results of satellite data. Obviously, if the in situ data is not correct, this will affect the satellite results.”

    Evan, are you joking?

    When people talk about measuring global temperature trends from satellite, they’re virtually always referring to calculations of lower- or mid-tropospheric temperatures derived from data collected by the MSU and AMSU instruments.

    Neither of the sources you cited has anything remotely to do with that.

    Good grief.

  • sod // February 18, 2008 at 10:33 pm

    Evan Jones, you obviously can’t admit that you’re wrong, even when it is DIRECTLY shown to you.

    You were wrong on the LA paper being a “UHI” paper. You were wrong on John V.

    Oh, and you don’t understand how satellite data is adjusted. (Of course it is adjusted via ground temperature, but NOT via the global ground average!)

  • steven mosher // February 19, 2008 at 1:15 am

    Fred,

    Cedarville is a mystery. According to the input file I have for NASA GISS, Cedarville is a DARK site. That means it’s rural. That means it should get no adjustment, according to Hansen 2001. Poring over the source code for GISTEMP, I find no code where a DARK SITE should be adjusted.

    Some help if you haven’t read the code:

    C**** Input files: units 31,32,...,36
    C**** Record 1: I1,INFO(2),...,INFO(8),I1L,TITLE header record
    C**** Record 2: IDATA(I1->I1L),LT,LN,ID,HT,NAME,I2,I2L station 1
    C**** Record 3: IDATA(I2->I2L),LT,LN,ID,HT,NAME,I3,I3L station 2
    C**** etc. NAME(31:31)=brightnessIndex 1=dark->3=bright
    C**** etc. NAME(32:32)=pop.flag R/S/U rur/sm.town/urban
    C**** etc. NAME(34:36)=country code

    The relevant step is STEP2.
    For those who have the code, the relevant routine is PaPars.f.

    Now, since Cedarville APPEARS to have been GISS-adjusted, I am looking at this for Anthony. It is not very easy. GISTEMP can drop stations from processing. They also adjust stations without any indication in the station inventory file that an adjustment was made, so you have to walk through the code.

    There are other issues here. H2001’s reliance on nightlights data (Imhoff) might have some issues; there are some quirky things in the data.
    Let’s say it’s not the best-documented step.

    Let me stipulate that I don’t think the errors in analysis will prove global warming false.

    Nightlights (I think it was a 1995 dataset from the defense satellite OLS) can be and have been improved on, including some thresholding improvements. Nightlights are a proxy for urbanization. There are others (like NDVI) that could be added to the mix.

    I don’t think H2001 was perfect in finding rural sites. I think that goal is good. I think nightlights help. I think NDVI (Gallo’s study) is a good addition. I think Anthony Watts’ site surveys are a good quality filter. I think ATMOZ is right when he argues that the US is oversampled.
    I think that NOAA is right (Vose) when it argues that the US only needs 135 stations to monitor climate trends.

    So: find good sites. Improve the data.

    (Psst: the century will still be warming, so don’t fret.)

  • Hank Roberts // February 19, 2008 at 1:37 am

    “J” might not be Evan, but whoever “J” is, “J” wrote:

    > … just two examples of how in
    > situ surface observations affect the
    > results of satellite data.

    Let’s look at the first one given.

    A look at that 1998 paper is worthwhile; it does have a lot to say about how they handled issues a decade ago. Just for example:

    [after a section on how the different instruments work and what they’re actually detecting, and how that’s used to get a temperature number, and how each method has its problems and issues to deal with]

    Nevertheless it’s interesting, e.g.:
    —-quote—-
    ” … However, every time new data points for that month are added, the potential for closer neighbors and improved spatial checking exists so the “bad” data points are reevaluated. The vast majority of the transmitted CLIMAT messages have acceptable mean temperature data.

    GHCN’s quality control removes the majority of the erroneous data caused by errors in digitization or transmission. However, a very few good data points without close enough neighbors to verify their extreme climate signal are probably removed from the dataset in this process and some erroneous but not extreme data points are retained, particularly in regions with high temporal variability. For example, when identical mean temperatures are transmitted for two consecutive months, it seems suspicious but possible. GHCN’s QC does not flag such indications of possible problems until three data points in a row are identical.

    Another problem with surface data is potential inhomogeneities in the station time series due to factors such as changes in location, new instrumentation, or changes in observing practices. GHCN mean temperature data undergo rigorous homogeneity testing (Peterson et al. 1998a). However, the GHCN adjustment methodology (Easterling and Peterson 1995; Peterson and Easterling 1994) requires 5 yr of data on either side of a potential discontinuity in order to make robust adjustments for the artificial change in the data record. Therefore, data from 1992 to [1997] … can contain some inhomogeneities. ….
    ——-end quote——

    J, whoever you are, I hope you did get that paper from a skeptic site, though I can’t find any prior mention of it with a quick search. How’d you find it?

    Knowing this information would have saved a good bit of skeptic energy for more useful issues, because a decade ago the folks who wrote that paper were addressing, in detail, the kinds of questions I keep seeing the surfacestations fan base posting and getting upset about. Nice find, actually. Of course someone should check the subsequent ten years for updates on the methods used and not rely on this one as current info.
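    The five-year requirement in that quote is easy to see in a toy version of the adjustment (a Python sketch with invented numbers; the real method, per Easterling and Peterson, uses neighbor reference series and significance tests, not this bare difference of means):

    import numpy as np

    years = np.arange(1980, 1996)
    rng = np.random.default_rng(2)
    temps = 14.0 + 0.02 * (years - 1980) + rng.normal(0, 0.1, years.size)
    temps[years >= 1988] += 0.6  # artificial 0.6 C jump, e.g., a station move

    breakpoint = 1988
    before = temps[(years >= breakpoint - 5) & (years < breakpoint)]
    after = temps[(years >= breakpoint) & (years < breakpoint + 5)]
    offset = after.mean() - before.mean()  # why 5 yr each side: the noise averages down
    # (the underlying trend leaks slightly into this crude estimate;
    # reference-series methods avoid that)

    adjusted = temps.copy()
    adjusted[years >= breakpoint] -= offset  # splice the later segment back in line
    print(f"estimated jump: {offset:.2f} C")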

  • Fred // February 19, 2008 at 4:45 am

    BPL, I think we should report only from stations that meet our specifications, and without adjustments, regardless of what the results show. You assume there is something I want to hear. There is not. I do not care much whether the results from a set of properly sited stations properly distributed geographically shows warming, cooling or random fluctuation. What I want is proper recording and reporting of data so we can tell what is going on.

    Obviously there is warming in general to some extent. There is also, equally obviously, incompetent and unprofessional station management and data adjustment on some scale. Both can be, and are, true. The problem with this way of managing the stations is that it is depriving us of data. The problem with defending it is that it cannot be defended rationally.

    Now get to the fundamentals. Why exactly is Cedarville being adjusted, as shown by Watts, in the 20th century? What reason do we have to think it was under-reading in 1895, and so on? And so on for the other stations. I don’t know what cleaning up the stations will show, and don’t care. I just want data. What we are getting now is wild guesses; it’s not data.

    And I want people to stop saying it doesn’t need doing. And stop saying how important it is to preserve the historical record when you’re just going to adjust it out of all recognition anyway. This is a totally nonsensical way of going about data collection, and the only reason for defending it is that you’re emotionally attached to the existing ‘data’ and what you think it shows.

  • J // February 19, 2008 at 7:12 pm

    Hank, you’re confused.

    I didn’t post the references. Evan did that.

    I just pointed out that Evan’s links don’t have anything whatsoever to do with the MSU / AMSU tropospheric temperature records.

    Evan and Raven (who made similar claims earlier in this thread) are just plain wrong. Nobody is tampering with the satellite tropospheric data set to make it match the data from GISS or HADCRU or whatever.

    Although I suppose it would be mildly entertaining to watch some of McIntyre’s minions start pestering Christy et al. After all, if you start out with the assumption that the GISS data *must* be wrong, then any match between the surface and satellite records must ipso facto be evidence of tampering.

  • Hank Roberts // February 19, 2008 at 9:25 pm

    Thanks for the correction J, glad you caught my error.

    Evan, where did you get that cite? While it’s ten years old, it’s interesting to read how sources of known error were handled back then, and that errors would stay in for several years before new numbers accumulated to use to correct them. I’d sure be curious to see an update on that description.

    (J’s right that the surface data talked about isn’t part of the MSU satellite records Evan’s talking about)

  • Heretic // February 20, 2008 at 1:17 am

    Hank and J, unless you guys are talking about something else, I thought that Tamino had given us a fairly comprehensive overview of the issue on this thread:
    http://tamino.wordpress.com/2007/12/31/msu/

    Which referenced these papers:

    http://www.ncdc.noaa.gov/oa/climate/research/rss-msu.pdf
    http://climate.envsci.rutgers.edu/pdf/VinnikovGrody2003.pdf
    http://climate.envsci.rutgers.edu/pdf/TrendsJGRrevised3InPress.pdf
    http://www.ncdc.noaa.gov/oa/climate/research/nature02524-UW-MSU.pdf

    I may be mistaken, since I read your exchange diagonally, but let’s keep in mind that as soon as enough time has elapsed for the details of a particular issue to be forgotten in the blogosphere, you can expect that issue to be brought up again.

  • Hank Roberts // February 20, 2008 at 3:53 am

    Hey, we’re trying to get Tamino’s hit count numbers up here (wry grin).

    No, good pointer. I’d have done better just to say “asked and answered” and should have.

  • luminous beauty // February 20, 2008 at 1:31 pm

    fred,

    “This is a totally nonsensical way of going about data collection, and the only reason for defending it is because you’re emotionally attached to the existing ‘data’ and what you think it shows.”

    No one is emotionally attached to the data. It is the only data we have for the historical record. We don’t have access to a time machine to go back and make new measurements. That is why it needs statistical analysis and subsequent correction to be useful. It is still useful. Less precise than maybe one would like, but it is what it is.

    How many times must you be told this before it sinks in?

  • steven mosher // February 20, 2008 at 2:37 pm

    Fred, Cedarville should not be adjusted, according to the code from GISS. It’s an unlit station (dark, meaning rural). If we had an easy way to get all the GISS data (you have to scrape the web site) we could figure out how prevalent this is. Anyway, I am still looking at the code to see if I missed something. But by the description in H2001, it should not get an adjustment.

  • steven mosher // February 20, 2008 at 5:31 pm

    Good point, Luminous.

    So: the USHCN historical data for Cedarville, CA indicates a site that hasn’t moved in over 100 years. The US Census population data indicates that during this whole period it has had a population of <10,000, making it a rural site by the GISS definition. The satellite nightlights data indicates that the site is DARK, i.e., it shows no pixel illumination whatsoever in 231 passes of the satellite. H2001 indicates that ONLY dim and bright sites are adjusted by GISTEMP, and a review of the GISTEMP code confirms this. Yet the output of the program indicates that Cedarville IS in fact adjusted. It would appear, then, that the adjustments should be double-checked. After all, if the adjustments and corrections are made to improve the record, and if some adjustments appear to have been made that don’t follow the descriptions of the adjustments, then the “correction” in this specific case should be investigated. It is merely a matter of checking the INPUT of the program (in step 0 of GISTEMP, where Cedarville is described as DARK) against the correction algorithm described in the paper (dark sites don’t get adjusted by GISS), against the algorithm as implemented in the code (step 2, PaPars.f), and finally looking at the final output. If you find an anomaly like Cedarville, the whole project isn’t scrapped. But one can’t simply ignore it without getting to the bottom of it. In the end, the effect of these errors is likely to be minor, and global warming will still exist. Just fix the little mistakes and move on. When you fight corrections to the correction algorithms, some could misinterpret that resistance in various ways.

    In short, the historical record needs corrections, and the corrections need to be correct.
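    For concreteness, a hedged Python sketch of the check being described: pull a station’s raw and adjusted series, difference them over the common years, and flag any adjustment. The file names and two-column format here are hypothetical; the real series have to be scraped from the GISTEMP station pages.

    import numpy as np

    def load_series(path):
        """Load a two-column (year, annual mean temperature) text file."""
        data = np.loadtxt(path)
        return data[:, 0].astype(int), data[:, 1]

    years_raw, raw = load_series("cedarville_raw.txt")       # hypothetical file
    years_adj, adj = load_series("cedarville_adjusted.txt")  # hypothetical file

    common = np.intersect1d(years_raw, years_adj)
    delta = adj[np.isin(years_adj, common)] - raw[np.isin(years_raw, common)]

    # Under H2001 a dark (rural) site should show delta == 0 everywhere;
    # a roughly linear, nonzero delta is the signature of the urban adjustment.
    if np.any(np.abs(delta) > 0.005):
        slope = np.polyfit(common, delta, 1)[0]
        print(f"station was adjusted; delta trend = {slope:.4f} C/yr")
    else:
        print("no adjustment detected")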

  • Lee // February 20, 2008 at 6:35 pm

    mosher,
    the lit/unlit urban correction is NOT the only correction that is applied to stations. There is a substantial analysis looking for inhomogeneity - it pre-exists the lit/unlit correction and is detailed in earlier papers and, IIRC, in the introduction to the lit/unlit stations paper.

  • steven mosher // February 20, 2008 at 9:01 pm

    Lee,

    I am referring to the most recent code and to H2001, the controlling document.

    GISTEMP reads in data from GHCN and USHCN. This happens in Step 0 of the code.

    According to Hansen 2001 and the code, the data is read in AFTER NOAA has done the following:

    1. Removed outliers.
    2. Performed a TOBS adjustment (time of observation).
    3. Performed a SHAP adjustment (station moves).
    4. Adjusted for MMTS changes (instrument changes).

    I refer you to H2001, the USHCN documentation, and the source code.
    If you like, I will cut and paste for you.

    As H2001 and the NASA website describe, GISS reads in the data AFTER the USHCN adjustments described above. This file is described as NoFil.

    In Step 0 of GISTEMP this file is read in from NOAA websites. In Step 1 of GISTEMP, station data can be combined with other station data. This can happen when you have multiple scribal records for a station. Looking at the Cedarville raw data available at GISS doesn’t indicate this type of situation. Further, the adjustment made to Cedarville is the typical linear adjustment one sees in Step 2 of GISTEMP: the URBAN adjustment. As described in H2001, after GISS reads in the data from USHCN, they make a correction to unlit and bright sites. I believe the section is 4.4 of the paper, but the code that does this is in the Step 2 Fortran file PaPars.f.

    But never mind what I say; here is GISS’s description of the code. Step 2 is the step of interest: non-rural stations (light = dim or bright) are modified to match rural ones.

    Step 0 : Merging of sources (do_comb_step0.sh)
    —————————
    GHCN contains reports from several sources, so there often are multiple records
    for the same location. Occasionally, a single record was divided up by NOAA
    into several pieces, e.g. if suspicious discontinuities were discovered.

    USHCN and SCAR contain single source reports but in different formats/units
    and with different or no identification numbers. For USHCN, the table
    “ushcn.tbl” gives a translation key, for SCAR we extended the WMO number if it
    existed or created a new ID if it did not (2 cases). SCAR stations are treated
    as new sources.

    Adding SCAR data to GHCN:
    The tables were reformatted and the data rescaled to fit the GHCN format;
    the new stations were added to the inventory file. The site temperature.html
    has not been updated for over a year; we found and corrected a few typos
    in that file.

    Replacing USHCN-unmodified by USHCN-corrected data:
    The reports were converted from F to C and reformatted; data marked as being
    filled in using interpolation were removed. USHCN-IDs were replaced by the
    corresponding GHCN-ID. The latest common 15 years for each station were used
    to compare corrected and uncorrected data. The offset obtained in that way was
    subtracted from the corrected USHCN reports to match any new incoming GHCN
    reports for that station (GHCN reports are updated monthly, in the past, USHCN
    data lagged by 2-5 years).

    Filling in missing data for Hohenpeissenberg:
    This is a version of a GHCN report with missing data filled in, so it is used
    to fill the gaps of the corresponding GHCN series.

    Result: v2.mean_comb

    Step 1 : Simplifications, elimination of dubious records, 2 adjustments (do_comb_step1.sh)
    ———————————————————————–
    The various sources at a single location are combined into one record, if
    possible, using a method similar to the reference station method. The shift
    is determined in this case on series of estimated annual means.

    Non-overlapping records are viewed as a single record, unless this would
    result introducing a discontinuity; in the documented case of St.Helena
    the discontinuity is eliminated by adding 1C to the early part.

    Some unphysical looking segments were eliminated after manual inspection of
    unusual looking annual mean graphs and comparing them to the corresponding
    graphs of all neighboring stations. (As a test, the analysis was done
    including all these parts - the global mean series was not affected)

    After noticing an unusual warming trend in Hawaii, closer investigation
    showed its origin to be in the Lihue record; it had a discontinuity around
    1950 not present in any neighboring station. Based on those data, we added
    0.8C to the part before the discontinuity.

    Result: Ts.txt

    Step 2 : Splitting into zonal sections and homogeneization (do_comb_step2.sh)
    ———————————————————-
    Since the gridding program was written about 30 years ago for what now
    would be viewed as tiny machines, the data were divided into 6 zonal
    sections. We plan to reprogram the gridding and analysis section of
    our procedure to eliminate this division.

    The goal of the homogeneization effort is to avoid any impact (warming
    or cooling) of the changing environment that some stations experienced
    by changing the global trend of any non-rural station to match the
    global trend of their rural neighbors. If no such neighbors exist,
    the station is completely dropped, if the rural records are shorter,
    part of the non-rural record is dropped.

    Result: Ts.GHCN.CL.1-6, Ts.GHCN.CL.PA.1-6

    Step 3 : Gridding and computation of zonal means (do_comb_step3.sh)
    ————————————————
    A grid of 8000 grid boxes of equal area is used. Time series are changed
    to series of anomalies. For each grid box, the stations within that grid
    box and also any station within 1200km of the center of that box are
    combined using the reference station method.

    A similar method is also used to find a series of anomalies for 80 regions
    consisting of 100 boxes from the series for those boxes, and again to find
    the series for 6 latitudinal zones from those regional series, and finally
    to find the hemispheric and global series from the zonal series.

    It should be noted that the base period for any of these anomalies is not
    necessarily the same for each grid box, region, zone. This is irrelevant
    when computing maps of trends; however, when used to compute anomalies, we
    always have to subtract the base period series from the series of the
    selected time period to get the proper anomaly map.

    Result: SBBX1880.Ts.GHCN.CL.PA.1200 and tables (GLB.Ts.GHCN.CL.PA.txt,…)

    Step 4 : Reformat sea surface temperature anomalies
    —————————————————
    Sources: http://www.hadobs.org HadISST1: 1870-present
    http://ftp.emc.ncep.noaa.gov cmb/sst/oimonth_v2 Reynolds 11/1981-present

    For both sources, we compute the anomalies with respect to 1982-1992, use
    the Hadley data for the period 1880-11/1981 and Reynolds data for 12/1981-present.
    Since these data sets are complete, creating 1982-92 climatologies is simple.
    These data are replicated on the 8000-box equal-area grid and stored in the same way
    as the surface data to be able to use the same utilities for surface and ocean data.

    Areas covered occasionally by sea ice are masked using a time-independent mask.
    The Reynolds climatology is included, since it also may be used to find that
    mask. Programs are included to show how to regrid these anomaly maps:
    do_comb_step4.sh adds a single or several successive months for the same year
    to an existing ocean file SBBX.HadR2; a program to add several years is also
    included.

    Result: update of SBBX.HadR2

    Step 5 : Computation of LOTI zonal means
    —————————————-
    The same method as in step3 is used, except that for a particular grid box
    the anomaly or trend is computed twice, first based on surface data, then
    based on ocean data. Depending on the location of the grid box, one or
    the other is used with priority given to the surface data, if available.

    Result: tables (GLB.Tsho2.GHCN.CL.PA.txt,…)

    A program that can read the two basic files SBBX1880.Ts.GHCN.CL.PA.1200 and
    SBBX.HadR2 in order to compute anomaly and trend maps etc was available on our
    web site for many years and still is.

    For a better overview of the structure, the programs and files for the various
    steps are put into separate directories with their own input_files,
    work_files, to_next_step directories. If used in this way, files created by
    step0 and put into the to_next_step directory will have to be manually moved
    to the to_next_step directory of the step1. To avoid that, you could
    consolidate all sources in a single directory and merge all input_files
    directories into a single subdirectory.
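    To make Step 3’s reference-station-style gridding concrete, here is a toy Python sketch of the distance weighting described above (illustrative only; GISTEMP’s Fortran combines records via offsets estimated over overlapping periods, which this shortcut omits). Stations are weighted linearly from 1 at the grid-box center down to 0 at 1200 km:

    import numpy as np

    def gridbox_anomaly(station_anoms, distances_km, radius_km=1200.0):
        """Combine station anomalies for one grid box, weighting each
        station linearly from 1 at the box center to 0 at radius_km."""
        w = np.clip(1.0 - np.asarray(distances_km) / radius_km, 0.0, None)
        if w.sum() == 0:
            return np.nan  # no stations within range of this box
        return np.average(station_anoms, weights=w)

    # Three hypothetical stations (anomalies in C) at varying distances:
    print(gridbox_anomaly([0.8, 0.5, 0.3], [100, 600, 1150]))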

  • Raven // February 22, 2008 at 1:39 pm

    A reference for those who seem to think that the satellite measurements are independent of the ground measurements:

    “Mears et al. A Reanalysis of the MSU Channel 2 Tropospheric Temperature Record. Remote Sensing Systems, Santa Rosa, California (Manuscript received 10 October 2002, in final form 23 May 2003)”

    Researchers generally agree that the surface warming observed over the past century is at least partially anthropogenic in origin, particularly that seen in the past two decades (Hansen et al. 2001; Houghton et al. 2001).[…] Despite excellent coverage (more than half the earth’s surface daily), the MSU data suffer from a number of calibration issues and time-varying biases that must be addressed if they are to be used for climate change studies.

    Translation: the satellite record is assumed to be wrong if it does not agree with the surface record. The calibration algorithms are verified for correctness by comparing them to the surface record.

    Therefore the satellite record *cannot* be used to validate the surface record.

  • dhogaza // February 22, 2008 at 4:59 pm

    Sorry, your snippet does not support your conclusion.

  • Lee // February 22, 2008 at 5:41 pm

    Raven,

    That’s absurd. Satellites suffer from known calibration and bias problems - orbital drift, changes in sensors over time, the absolute calibration for instantaneous temperature, and so on.

    Saying this is so, and looking at ways to address those issues - which is what that quote says - is NOT the same as saying they are adjusting to match the surface station trend over time. There is no hint, none at all, in what you quoted that they are adjusting to match the surface. You are taking your unsupported assumption and applying it to a set of words that do not even imply what you claim, by asserting that the words are hiding what the researchers are actually doing.

  • adder // February 22, 2008 at 7:25 pm

    Translation: it is well known that satellite data has problems, which have nothing to do with what surface thermometers are saying. Those problems need to be addressed before the data is useful for anything. Raven essentially suggests these people are being dishonest. I suggest Raven is wrong.

  • Hank Roberts // February 22, 2008 at 8:09 pm

    Raven, you appear to be trolling. No skeptic could be as gullible as you’re pretending to be.

    Prove me wrong. Show you can learn. Put that cite into Scholar, follow it forward.

  • Barton Paul Levenson // February 22, 2008 at 8:14 pm

    Raven writes:

    [[Researchers generally agree that the surface warming observed over the past century is at least partially anthropogenic in origin, particularly that seen in the past two decades (Hansen et al. 2001; Houghton et al. 2001).[…] Despite excellent coverage (more than half the earth’s surface daily), the MSU data suffer from a number of calibration issues and time-varying biases that must be addressed if they are to be used for climate change studies.

    Translation: the satellite record is assumed to be wrong if it does not agree with the surface record. The calibration algorithms are verified for correctness by comparing them to the surface record.]]

    Your “translation” is a pure non sequitur. There’s no logical way to get from “there are calibration and time-varying biases that need to be addressed” to “[t]he calibration algorithms are verified for correctness by comparing them to the surface record.”

  • Hank Roberts // February 22, 2008 at 9:38 pm

    Most readers will have recognized this famous paper and know what followed from it; it’s rather famous recent history.
    Anyone who doesn’t can follow the citing papers forward to the answer.

  • dhogaza // February 22, 2008 at 10:12 pm

    Most readers will have recognized this famous paper and know what followed from it; it’s rather famous recent history.

    Even non-readers like John Christy and Roy Spencer are likely to know what followed from it … :)

  • Hank Roberts // February 23, 2008 at 4:43 pm

    Raven, a hint: the “MSU Channel 2 Tropospheric Temperature Record” is not the surface temperature. Try Scholar looking for corrections in it that mention the paper you cited.

    We all had to learn how to do this and we have to keep learning as the tools change and new information comes out. Join us.

  • Jesse // February 25, 2008 at 12:03 am

    However, as has been pointed out to you over and over again, we’re not interested in proper surface temperature measurements.

    We’re interested in trends.

    Quit conflating the two.

    The point is that the observed trends are only valid if the surrounding environment is completely unchanged over the period in question.

    When sites are moved, buildings go up nearby, or any other changes happen in the nearby environment, subsequent temperature readings from that site simply cannot be used to create valid trend data without adding a new adjustment for every significant environmental change over the history of the site.

  • Hank Roberts // February 25, 2008 at 1:15 am

    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI3297.1

  • cce // February 27, 2008 at 11:18 pm

    Somewhat off topic, but if you want to see a truly incompetent discussion go here:
    http://wattsupwiththat.wordpress.com/2008/02/27/a-look-at-temperature-anomalies-for-all-4-global-metrics/

    In particular, look at the frequency distributions at the bottom, and ask yourself if Watts understands any of this.

    [Response: The level of misunderstanding indicated by the post you link to, is astounding. If it weren’t tragic, it’d be hilarious.]

  • Hank Roberts // February 27, 2008 at 11:57 pm

    Remember, it’s an election year.

    Facts are less at a premium than usual in US public discourse for the next 10 months or so.

    Much of this is run-up to the big bogosity conference coming up in early March, when they will attempt to prove that 19,000 people who might be entitled to be called “Doctor” can be wrong about climatology.

    Watch for it, everywhere.

    Those who don’t know this essay should:

    http://www.netcharles.com/orwell/essays/politics-english-language1.htm

  • Hank Roberts // March 3, 2008 at 5:45 am

    http://www.springerlink.com/content/w35x4521lv82m847/

    … we predict the amplitude and period of the present cycle 23 and future fifteen solar cycles. The period of present solar cycle 23 is estimated to be 11.73 years and it is expected that onset of next sunspot activity cycle 24 might starts during the period 2008.57±0.17 (i.e., around May–September 2008). The predicted period and amplitude of the present cycle 23 are almost similar to the period and amplitude of the observed cycle. With these encouraging results, we also predict the profiles of future 15 solar cycles….”

    Prediction of solar cycle 24 and beyond
    Astrophysics and Space Science
    ISSN 0004-640X (print), 1572-946X (online); DOI 10.1007/s10509-007-9728-9
    Friday, January 04, 2008
