Open Mind

Best Estimates

May 11th, 2007 · 33 Comments

A reader recently got into the fray over at climateaudit, participating in discussions about the abuse of graphs in Martin Durkin’s so-called “documentary.” Somewhere along the line, comments turned to accusing NASA GISS (and HadCRU as well) of “point-shaving” because of the adjustments they make to temperature data in order to estimate global average temperature.


The implication is that these research organizations have deliberately instituted changes designed to inflate the estimated trend in global average temperature. This raises the question, what adjustments do the good people at GISS apply when estimating global temperature? Fortunately, the adjustment procedures are well documented in Hansen et al. 1999 and Hansen et al. 2001. This post is a synopsis of those papers, and quotes heavily from them.

It turns out that the adjustments instituted by GISS cannot be a deliberate attempt to bias the result, because they don’t favor one direction over another. They are quality control measures, and as such, they don’t bias the results, or favor warming trends over cooling trends — they just make the results better.

The first step in the GISS quality control procedure is to flag any monthly data point that is more than 5 standard deviations different from the long-term mean (for that month, for that station). Such a large deviation is overwhelmingly likely due to a simple error in arithmetic or transcription. After all, human error is not that rare, but for a normally distributed random variable to exceed 5 standard deviations is; the chance is less than 1 in a million. However, just to be on the safe side, a datum is not flagged if one of the five nearest neighboring stations also shows a deviation from its own long-term mean for that month that is at least half as large as the deviation in question.
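
For the curious, the “less than 1 in a million” figure is easy to verify. Here is a minimal sketch (my own check, not anything from the GISS procedure) of the two-sided tail probability of a 5-standard-deviation excursion for a normally distributed variable:

```python
# Two-sided tail probability of a 5-sigma excursion for a standard normal variable.
# Illustrative check only; assumes scipy is available.
from scipy.stats import norm

p = 2 * norm.sf(5.0)   # P(|Z| > 5)
print(p)               # ~5.7e-07, comfortably under 1 in a million
```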

Data were also flagged if the record had a “jump discontinuity,” specifically if the means for two 10-year periods differed by more than 3 standard deviations. A third flag was designed to catch “clumps” of bad data that occasionally occur, usually at the beginning of a record; specifically, a station record was flagged if it contained 10 or more months within a 20-year period that differed from the long-term mean by more than 3 standard deviations.
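
Taken together, the three flags are straightforward to express in code. The sketch below is my own illustration of the rules as described above, not the GISS program: the neighbor-station exemption and the subjective visual review are omitted, and the statistic used for the 10-year comparison (the standard deviation of the annual means) is my assumption.

```python
import numpy as np

def flag_station(monthly, years):
    """Simplified sketch of the three flagging rules described in the text.

    monthly : array of shape (n_years, 12) of monthly mean temperatures (NaN = missing)
    years   : array of the corresponding calendar years
    """
    clim = np.nanmean(monthly, axis=0)      # long-term mean for each calendar month
    sd = np.nanstd(monthly, axis=0)         # long-term std. dev. for each calendar month
    z = (monthly - clim) / sd               # standardized monthly anomalies

    # Flag 1: any single month more than 5 standard deviations from its long-term mean
    outliers = np.argwhere(np.abs(z) > 5)

    # Flag 2: "jump discontinuity" -- adjacent 10-year means differing by more than
    # 3 standard deviations (here, of the annual means)
    annual = np.nanmean(monthly, axis=1)
    jumps = [years[i] for i in range(10, len(years) - 9)
             if abs(np.nanmean(annual[i:i + 10]) - np.nanmean(annual[i - 10:i]))
             > 3 * np.nanstd(annual)]

    # Flag 3: a "clump" of 10 or more months within a 20-year span, each more than
    # 3 standard deviations from the long-term mean
    bad = np.abs(z) > 3
    clumps = [years[i] for i in range(len(years) - 19)
              if bad[i:i + 20].sum() >= 10]

    return {"outliers": outliers, "jumps": jumps, "clumps": clumps}
```

Flagged records then go to the graphical inspection step described next; nothing is altered automatically.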

Then, all flagged data were graphically displayed, along with neighboring stations that contained data during the period in question, and a subjective decision was made as to whether the apparent discontinuity was flawed data or a potentially real climate anomaly. The philosophy was that unless the data were quite obviously flawed, they were retained. Using this criterion, only a very small portion of the original data was deleted: out of about 6,000 station records, approximately 20 were deleted entirely, in approximately 90 cases the early part of the record was deleted, in five cases a segment of 2-10 years was deleted from the record, and approximately 20 individual station months were deleted.

It’s worth noting that up to this point, no data have actually been changed; only the obvious errors have been discarded.

GISS did modify the records of two stations that had obvious discontinuities. These stations, St. Helena in the tropical Atlantic Ocean and Lihue, Kauai, in Hawaii, are both located on islands with few if any neighbors, so they have a noticeable influence on analyzed regional temperature change. The St. Helena station was moved from an altitude of 604 m to 436 m between August 1976 and September 1976. This introduces a false warming trend into the data. Therefore, assuming a lapse rate of about 6°C/km, GISS added 1°C to the St. Helena temperatures before September 1976. Note that this change can’t possibly introduce a false warming trend; it only removes one.
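
The size of the St. Helena correction follows directly from the elevation change and the assumed lapse rate; a quick back-of-the-envelope check (my own, using the figures quoted above):

```python
# Back-of-the-envelope check of the St. Helena adjustment (illustrative only)
lapse_rate = 6.0 / 1000.0           # ~6 deg C per km, expressed per metre
elevation_drop = 604 - 436          # metres, from the documented station move
print(lapse_rate * elevation_drop)  # ~1.0 deg C, matching the 1 deg C added before Sept. 1976
```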

Lihue had an apparent discontinuity in its temperature record around 1950. On the basis of minimization of the discrepancy with its few neighboring stations, GISS added 0.8°C to Lihue temperatures prior to 1950. Note that again, this can’t possibly introduce a false warming trend, but it does remove one.

When multiple records exist for the same location, two records are combined if they have a period of overlap. The mean difference, or “bias,” between the two records during their period of overlap is used to adjust one record before the two are averaged. In the majority of cases the overlapping portions of the two records are identical, representing the same measurements that have made their way into more than one data set.

If there’s a third record for the same location, it is then combined with the mean of the first two records in the same way, with all records present for a given year contributing equally to the mean temperature for that year. This process is continued until all stations with overlap at a given location are employed.

If there are additional stations without overlap, these are also combined, without adjustment, provided that the gap between records is no more than 10 years and the mean temperatures for the nearest five-year periods of the two records differ by less than one standard deviation. Stations with larger gaps are not combined, but treated as separate records.
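
In code, the combining step might look roughly like the sketch below. This is my own simplification, not the GISS implementation: the ordering of records, the no-overlap gap test, and various bookkeeping details are omitted.

```python
import numpy as np

def combine_records(records):
    """Combine several records for one location (illustrative sketch only).

    records : list of 1D arrays on a common monthly time axis, NaN where missing.
    """
    combined = np.array(records[0], dtype=float)
    count = (~np.isnan(combined)).astype(float)   # records contributing at each time step

    for rec in records[1:]:
        rec = np.array(rec, dtype=float)
        overlap = ~np.isnan(combined) & ~np.isnan(rec)
        if not overlap.any():
            continue                              # gap rules for non-overlapping records omitted

        # remove the mean offset ("bias") over the period of overlap
        rec = rec + np.nanmean(combined[overlap] - rec[overlap])

        both = ~np.isnan(combined) & ~np.isnan(rec)
        only_new = np.isnan(combined) & ~np.isnan(rec)

        # running mean, so every record present in a given month counts equally
        combined[both] = (combined[both] * count[both] + rec[both]) / (count[both] + 1)
        combined[only_new] = rec[only_new]
        count[~np.isnan(rec)] += 1
    return combined
```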

Homogeneity Adjustments

Homogeneity adjustments are made with the aim of removing nonclimatic variations in the temperature record. Nonclimatic factors include such things as changes of the environment of the station, the instrument or its location, observing practices, and the method used to calculate the mean temperature. Quantitative knowledge of these factors is not available in most cases, so it is impossible to fully correct for them. Fortunately, the random component of such errors tends to average out in large area averages and in calculations of temperature change over long periods.

One of the important homogeneity adjustments is meant to correct for the urban heat island effect. Originally, GISS classified stations as urban, peri-urban, or rural, based on population data. More recent analyses for U.S. stations, however, use photos from United States Defense Meteorological Satellites, taken with a highly sensitive photomultiplier tube, to identify which locations are lighted at night, and which are unlit; multiple photos were used, taken at the time of new moon. On this basis, locations are classified urban, peri-urban, or rural.

Urban stations are adjusted by applying a “two-legged” de-trending correction. By using two legs for de-trending, the model accommodates those urban stations that have had two different “growth behaviors” during their history. The slopes of the de-trending correction, as well as the “hinge point,” or time of switch from one slope to another, were variables chosen to minimize the difference between the urban station and a weighted average of rural stations within 500 km, with the nearest rural stations having the highest weight.
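
One way to picture the two-legged correction is as a broken-line least-squares fit to the urban-minus-rural difference series, with the hinge year chosen by brute force. The sketch below is my own illustration of that idea, not the GISS code (which also handles the distance weighting of the rural neighbors and various edge cases).

```python
import numpy as np

def two_legged_fit(years, urban_minus_rural):
    """Fit two connected line segments to the urban-minus-rural difference series.

    Returns the best hinge year and the fitted (nonclimatic) component, which
    would be subtracted from the urban record. Illustrative sketch only.
    """
    years = np.asarray(years, dtype=float)
    diff = np.asarray(urban_minus_rural, dtype=float)

    def design(hinge):
        t = years - hinge
        # columns: intercept, slope of the first leg, change of slope after the hinge
        return np.column_stack([np.ones_like(t), t, np.where(t > 0.0, t, 0.0)])

    best = (np.inf, None, None)
    for hinge in years[2:-2]:                 # keep a few points on each leg
        X = design(hinge)
        coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
        rss = np.sum((diff - X @ coef) ** 2)
        if rss < best[0]:
            best = (rss, hinge, coef)

    _, hinge, coef = best
    return hinge, design(hinge) @ coef
```

Because the fit is a least-squares minimization, there is no free parameter left for choosing a preferred outcome once the rural reference series is fixed.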

This procedure means that the year-to-year changes in an urban station record are determined chiefly by the uncorrected data, but the long-term trend is defined by the nearby rural stations. Therefore the long-term trend for the globe as a whole is overwhelmingly dominated by the behavior of the rural stations — but for some reason delusionists still like to accuse the record of being wrong because of urban heating effects.

Another important correction is for time-of-observation bias. The standard way of calculating the monthly mean temperature in the United States is to define the daily mean as the average of the daily maximum and minimum temperatures and then average the daily means over the month. The preferred 24-hour period would be the calendar day, i.e., from midnight to midnight. However, most observers recording results from maximum-minimum thermometers prefer observing times other than midnight. The time of observation has a systematic effect on the monthly mean temperature; for example, an afternoon 24-hour reading samples the diurnal cycle near its maximum on 2 days.
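
The double-counting effect is easy to demonstrate with synthetic data. The sketch below is my own toy simulation (not the method of the actual correction): it generates a year of hourly temperatures with a diurnal cycle plus day-to-day weather noise, then computes the mean of (Tmax+Tmin)/2 for a max-min thermometer reset at midnight, in the morning, and in the afternoon.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 365
hours = np.arange(n_days * 24)

# synthetic hourly temperatures: diurnal cycle peaking mid-afternoon, plus a
# random day-to-day "weather" offset that persists through each calendar day
diurnal = 5.0 * np.cos(2.0 * np.pi * (hours % 24 - 15) / 24.0)
weather = np.repeat(rng.normal(0.0, 3.0, n_days), 24)
temps = 10.0 + diurnal + weather

def mean_max_min(obs_hour):
    """Mean of (Tmax+Tmin)/2 for a thermometer read and reset daily at obs_hour."""
    daily = []
    for d in range(1, n_days):
        window = temps[(d - 1) * 24 + obs_hour : d * 24 + obs_hour]  # 24 h ending at the reading
        daily.append(0.5 * (window.max() + window.min()))
    return np.mean(daily)

for obs_hour, label in [(0, "midnight"), (7, "morning"), (17, "afternoon")]:
    print(f"{label:9s} {mean_max_min(obs_hour):6.2f}")
# The afternoon reading typically comes out warmer, and the morning reading cooler,
# than the midnight (calendar-day) reference: the time-of-observation bias.
```

A change from afternoon to morning readings therefore shifts the record toward cooler values, which is exactly the false cooling trend the correction removes.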

This would not matter much if the time of observation at a given station did not change during the station’s history. However, there have been changes of the time of observation by many of the cooperative weather observers in the United States. Furthermore, the change has been systematic, with more and more of the measurements by United States cooperative observers being made in the morning rather than the afternoon. This introduces a systematic error in the monthly mean temperature change.

A correction for the time-of-observation bias has been developed, and verified as valid from hourly data available for many U.S. stations. Of course, to apply this correction, it is necessary to have reliable metadata defining all changes of time of observation in the station record. These data generally exist for the U.S. stations and are believed to be reliable. The time-of-observation correction is not generally required in the rest of the world, because the systematic shift from once a day evening to once a day morning observations which occurs at U.S. cooperative observer stations is not characteristic of most global observations.

Another correction, applied chiefly to U.S. stations, addresses discontinuities caused by changes in the location of the thermometer or the station itself. In most long records, such moves are the rule rather than the exception, and records of the moves are not generally available. U.S. stations, however, have reasonably good station history records that permit adjustments for these discontinuities.

A systematic discontinuity was introduced by the change from liquid-in-glass thermometers to the maximum-minimum temperature system (MMTS) in the U.S. Cooperative Network. The effect on the U.S. mean temperatures is an order of magnitude smaller than the effect of either the time-of-observation bias or the station history adjustments, but because this correction is well defined, it is included in the current GISS analysis.

All these corrections are necessary in order to obtain the best estimate possible. It’s foolish not to correct for transcription errors, or time-of-observation bias, or station moves, or instrument changes, or urban heating effects. Furthermore, none of these changes inherently favors one direction over another; each can cause temperatures, and temperature trends, to increase or decrease. Hence the claim that GISS is “cooking” the temperature data in a deliberate attempt to inflate temperature trends is frankly ludicrous. The statement that they work very hard to improve the quality of historical temperature data, in order to provide the best estimate possible, is dead on the money.

Tags: Global Warming · climate change

33 responses so far ↓

  • Glen Raphael // May 11th 2007 at 5:25 am

    A more charitable implication is that these research organizations _might have accidentally_ instituted changes that _have the effect_ of inflating the estimated trend in global average temperature.

    Yes, the changes have been described in a way that seems innocuous and reasonable. But unless the results have actually been audited, I don’t trust them. People make mistakes. Bugs creep into programs. The data and code should be made available in a manner that allows a disinterested or differently-interested third party to execute the correction algorithms exactly as described, verify that they get the same result, and try out other algorithms to see how sensitive the data are to particular corrections. See which changes are actually making a difference, exactly where that difference is manifested in the results, and what that implies.

    Based on past experience, it seems reasonable to expect independent auditing will find *some* bugs. Series that were accidentally excluded or duplicated or changed. Innocuous-seeming corrections that had unexpected side effects. Subjective criteria that ended up introducing unwanted bias. We know there will be bugs, but we won’t know what effect the bugs have until the test is performed. Until testing is done, partisans should be permitted the luxury of assuming/expecting that the hidden errors, once discovered, might turn out to favor their particular viewpoint. To shut them up, make the data available for audit and have it audited.

    The MBH experience has poisoned the well for everyone else. Mann claimed his results weren’t sensitive to the inclusion of strip bark samples; this turned out to be a lie or a mistake. Mann claimed to have passed statistical tests which he hadn’t, and so on. Various of Mann’s colleagues have claimed data was available when it wasn’t or claimed tests had been run when they hadn’t or claimed studies were “independent” when they weren’t.

    Lesson learned: “trust, but verify”.

    [Response: Even IF errors were made, they’re overwhelmingly likely to be far fewer in number and effect than those that exist in the unadjusted data. And since there’s no favoritism of one direction over another, it’s overwhelmingly likely that such errors — IF they exist — have no significant net effect on the worldwide trend. You and the climateaudit people really are just grasping at straws. Here’s my opinion: your comment is a textbook example of “sour grapes.”

    The Hansen papers referenced in the post, and the references therein, provide enough detail to reproduce the procedure. If you think mistakes have been made that would be uncovered by an audit — get busy. If you find evidence of significant mistakes, put it in the peer-reviewed literature. Until then … abide by the old saying, “put up or shut up.”]

  • plum // May 11th 2007 at 7:17 am

    This seems related to a comment thread tussle I’m having (under another pseudonym) over at a NZ blog, and I was hoping you could help set my mind at rest. The debate is based on Friis-Christensen’s repudiation of the way the Swindle documentary portrayed his research.

    http://news.independent.co.uk/media/article2521677.ece

    In this news item, he states that the Swindle doco-makers “fabricated” 100 consecutive years of data in a 400-year graph purporting to show his findings.

    “[Friis-Christensen] said there was a gap in the historical record on solar cycles from about 1610 to 1710 but the film-makers made up this break with fabricated data that made it appear as if temperatures and solar cycles had followed one another very closely for the entire 400-year period.”

    One of the NZ commenters, a self-identified expert on computer modelling who claims climate scientists don’t know a thing about real computer modelling — unlike him — is saying that it is a valid technique to impute, or interpolate, data to fill in this gap.

    My point is that 100 years is an awful big gap in a graph that stretches only 400 years.

    He comes back and says graphs that chart millions of years of data also contain imputed data, sometimes for much longer than a century at a pop. He points me to the graph here:

    http://www.gcrio.org/CONSEQUENCES/winter96/article1-fig6.html

    My point is that the imputed data in the new graph doesn’t fill in a quarter of the horizontal axis.

    I’m pretty sure I’m right, but was hoping you could confirm that for me. Also, a question: when the NZ commenter says climate scientists don’t read outside their field to look for better algorithms and that many are using simplistic linear regression models — is he correct? How interdisciplinary is the field?

    [Response: Funny how according to the delusionists, when Hansen and colleagues make corrections based on perfectly logical, absolutely necessary, and thoroughly documented procedures, they’re “point-shaving” to perpetrate a fraud, but when Martin Durkin *makes up stuff* — so much so that Friis-Christensen takes him to task for it — they’ll find a way to justify inventing a 100-year stretch of data.

    There is absolutely NO justification for fabricating a 100-year stretch of a 400-year time series. And when interpolation is made, the interpolated values have to be chosen based on strictly objective criteria, not chosen to match your pet theory.

    Many climate scientists do a pretty good job of staying near the cutting edge in statistical methods. As for using linear regression, it’s such a ubiquitous tool that *everybody* uses it at one time or another.

    I just might know who your NZ individual is. Now here’s the most important point: he will always claim that the GISS temperature time series is fraudulent, but you will NEVER get him to admit that fabricating 100 years of data for the sole purpose of matching a denialist theory is invalid! That should tell you all you need to know.

    I’ve concluded that an earlier comment by Dano is right; it’s a waste of time to argue endlessly with those whose minds are sealed shut. It’s like walking into a creationists convention and trying to persuade them of the truth of evolution. So when they come here, I’ll give ‘em the smackdown IF I feel like it, ignore ‘em if I feel like it, but I’m no longer going to waste my time answering every lame fantasy from the deluded. No matter what argument you provide, they’ll still invent a reason to believe otherwise — which is why I’ve taken to calling them delusionists.]

  • Glen Raphael // May 11th 2007 at 7:02 pm

    tamino:

    You say none of these changes /inherently/ favors one direction over another - which may be true - but /in practice/ some of them do seem to favor one direction over another. In particular, I believe some recent adjustments have had a strong net effect of reducing past temperatures and increasing recent temperatures, causing the 1930s to seem much cooler than they used to be. Is that not the case? Please consider the GISS2000 versus GISS2007 difference chart on the bottom of this page:

    http://www.climateaudit.org/?m=20070216

    Is that chart correct? If so, the effect appears to be much more significant, relative to the trend being measured, than you imply above.

    (Hadcru3 is more suspect than GISS because it is not yet possible for independent observers to recreate Hadcru3 due to data issues.)

    I prefer to think of it as cynicism rather than sour grapes or grasping at straws.

    Given the apparent cherrypicking we’ve often seen in historical climate studies, if it is true that recent adjustments have had the net effect of changing the “warmest year in the last century” from 1937 to 1998 and reducing the 1940s cooling trend, a cynic might suspect this particular set of adjustments was chosen (over all other possible sets of similar adjustments) primarily for that reason rather than simply because they “made the data better”.

    [Response: The biggest difference between the 2000 and 2007 versions of the GISS temperature time series is that for U.S. stations the later series include a correction for time-of-observation bias. Examination of station history records in the U.S. shows a distinct tendency, over the years, to change from recording values in the afternoon to taking morning observations. Such time-of-observation changes introduce a false *cooling* trend into the monthly average time series.

    Make no mistake about it: changing the time of observation from afternoon to morning *does* introduce a false cooling trend. This was convincingly shown, and a valid correction for the effect was derived, in Karl et al. (1986, “Model to estimate the time of observation bias associated with monthly mean maximum, minimum and mean temperatures for the United States,” J. Clim. Appl. Meteorol., 25, 145-160), by using *hourly* temperature data available at a number of U.S. stations. Leaving out the time-of-observation correction is a sure-fire way to have *less* than the best estimate.

    And *that* is why the later GISS series for the U.S. show a distinctly greater warming than previous (2000 and before) GISS series. It’s not because some adjustment was introduced which creates a false warming trend, but because a factor which creates a false cooling trend was removed.

    Furthermore, this adjustment was made *only* for U.S. stations, for which we have enough metadata to know which stations underwent a change in time of observation. It has a significant impact on the trends in U.S. temperature but, since the U.S. is only about 2% of the area of the globe, very little effect on the global trend. Of course, McIntyre and his ilk don’t really like to mention such things.

    As for “cherrypicking we’ve often seen in historical climate studies,” I’ve seen rotten-to-the-core cherrypicking by delusionists/denialists (I’ve even posted on the topic, using the co2science website as an archetype), but I’m not aware of *any* examples of cherrypicking by GISS, HadCRU, Mann, Bradley, Hughes, Jones, or Moberg.]

  • John Mashey // May 11th 2007 at 8:26 pm

    A couple comments and a question:

    1) I looked hard at GISS data a few years ago, and I certainly thought they were taking prudent efforts to clean up inherently-dirty data.

    2) Personally, I never take on faith difficult statistics work done without some review by statisticians, even though many non-statisticians have fine stat skills.

    At Bell labs, when people wrote papers for external publication, they first had to go through an internal review that tended to be tougher than external reviews, needing at least 2 reviews from departments outside one’s own management chain, and reviews could be scathing.

    Anything involving statistics tended to go to a group that included John Tukey, Paul Tukey, Joe Kruskal, John Chambers, etc. It didn’t take long for people to learn they ought to consult these folks earlier in their analysis, rather than later, to make sure the methods and algorithms were right.

    3) Speaking as an old computer scientist/software engineer & manager thereof:

    “All code is guilty until proven innocent many times, and even then, retest to be sure.”

    Around 1968 our computer center got Waterloo’s FORTRAN compiler WATFOR, which was nominally for students (fast compiles, extra checking for undefined variables and subscripts out of range). The computer center strongly recommended to researchers that they take their existing FORTRAN programs and test them under WATFOR. Of course, it turned out that many such programs, whose results often formed the bases of published papers, were at least a little broken, much to people’s chagrin. In some cases, the bugs didn’t invalidate the published results, but it certainly encouraged healthy skepticism!

    I still see lots of results where the statistics is done by some locally-written FORTRAN (or C, or whatever) program, rather than a statistics package or a statistics-oriented programming language. [Chambers & co wrote S at Bell Labs, at least in part, to lessen the number of error-prone new codes floating around.]

    Question: is there still as much use of locally-written FORTRAN as there seems (in the climate research domain)? If so, is that: (a) habit, (b) convenience, or (c) insufficient features/performance on the part of existing tools?

    Why do I ask?
    A long time ago, Brian Kernighan and I wrote a paper, on somewhat of a lark, for a conference called Language Design for Reliable Software, in which we said that was the wrong problem, that the best way to have reliable software was to re-use reliable components, rather than writing so much new code. The paper was loved by 2 reviewers and hated by 2, so it was rejected, although it turned out the topic was actually discussed heavily at the conference.

    Later, a friend of Brian’s wanted a paper for the magazine Software-Practice and Experience, so we pulled this from the files, updated it, and it was published in 1979 as “The UNIX Programming Environment”, and then IEEE Computer published an expanded version in 1981.

    4) IDEALLY, I’d sure wish that:
    a) All key data be easily accessible on-line

    b1) All code be available on-line
    OR EVEN BETTER:
    b2) Standard analysis packages be used whenever possible, not homegrown FORTRAN codes.

    And the WWW makes it a lot easier to do this than it used to be [in the old days, we had to duplicate magnetic tapes and send them around, a real pain.]

    BUT, it takes a HUGE amount of work, and some of that work seriously detracts from getting useful research done. There are huge jumps of time and money between:

    a) A program a scientist/engineer writes for their own use or those of nearby colleagues. AND
    b) A well-documented program suitable for public display. AND
    c) An actual program product, with test cases, makefiles, portability cleanups, etc, etc suitable for distribution. If you distribute software, but you don’t do all that, you can generate a huge support headache, which university researchers are rarely set up to handle. Very few university research organizations are also software engineering organizations, even in computer science or electrical engineering, much less in other domains … and that’s OK.

    I’ve occasionally reviewed NSF grant proposals. If someone proposes to build a software tool for general use and distribution, that’s OK, but if they are doing research, they should be looking to efficiently generate results, and over time, errors will get sorted out in the usual process of science.

    Hence, while we have a legitimate wish for visibility of code and (especially) data, there is no better way to slow down research dramatically than to demand that research efforts turn into serious software engineering efforts as well. OF COURSE, this is exactly what some denialists are doing: taking a legitimate wish, and extending it so far that researchers stop doing paleoclimatology research, because they’re spending all their time doing software engineering and answering questions. Put another way, it’s turning legitimate skepticism into a delaying tactic.

    Personally, I think researchers *should* try to make key datasets available online, that grant proposals should include funds for doing this, and that standard analysis packages should be used as much as possible, to lessen the number of algorithm/implementation errors and resulting wasteful arguments. If somebody *wants* to make code available, that’s nice, although I’d expect it would be like the early UNIX distributions: “Here’s the code, as is; don’t call us.”

    [Response: You bring back a lot of old memories; I used WATFOR in 1968, but I was 12 years old at the time (and lucky enough to be in a special summer program for gifted science students)!

    I don’t know what the case is in climate science, but I do know about statistical analysis in astrophysics. Homemade code is the rule, standard stats packages are the exception. It’s partly due to habit, but it’s also due to the fact that the standard stats packages aren’t sufficiently cutting-edge to have the tools really needed. For instance, in astronomy one almost never gets a time series which is evenly sampled in time — and some time series are *pathologically* unevenly sampled — so most everything in a standard stats package is right out. In fact, astronomers, and mathematicians working with them, have been at the forefront of statistical methods to deal with unevenly sampled time series. Also, everybody has his own “pet methods” which may or may not be included in a stats package. Furthermore, many astronomers are closet mathematicians and like to invent *new* methods (which sometimes inspires, and sometimes frustrates, us mathematicians!) which don’t exist in any standard package. This carries the twin danger, that the program code may have errors, and that the statistical evaluation of the method itself may be flawed.]

  • ks // May 11th 2007 at 9:59 pm

    Thanks for the post, tamino. I appreciate the work. I am sorry you have to deal with some readers that argue from after-the-fact positions: if the adjustments show less warming, then they are correct, and if the adjustments show more, then it’s a biased result.

    I’ve found some of the positions at climateaudit interesting. That the 1990 reconstructions are somehow more valid than recent multi-proxy reconstructions is one such position. Not sure how one can distrust recent reconstructions based on implied arguments or guilt-by-association haggling of Mann 1998, but blindly trust the older one. And somehow everything comes back to Mann 1998… discussion about TGGWS fabricating solar activity data and stopping the correlation over 20 years ago (when it broke down) relates back to Mann’s reconstruction from multi-proxy data… incredible.

    I also find it odd that there appears to be a vocal majority that cries out “bias” if you criticize TGGWS without criticizing AIT first (maybe it’s a rite of passage before posting on climateaudit, I’m still trying to figure it out).

    In response to the previous post by Mr. Mashey stating, “IDEALLY, I’d sure wish that: a) All key data be easily accessible on-line b1) All code be available on-line” I would point to the response of Michael Mann to Rep. Joe Barton’s 5th question http://www.realclimate.org/Mann_response_to_Barton.pdf

    “[This] presumes that in order to replicate scientific research, a second researcher has to have access to exactly the same computer program (or ‘code’) as the initial researcher. This premise is false. The key to replicability is unfettered access to all of the underlying data and methodologies used by the first researcher. My data and methodological information, and that of my colleagues, are available to anyone who wants them. As noted above, other scientists have reproduced our results based on publicly available information. It also bears emphasis that my computer program is a private piece of intellectual property, as the National Science Foundation and its lawyers recognize.”

    However, Mann has posted his program online.

  • plum // May 12th 2007 at 3:47 am

    Thanks for your reply. I’ve only just started engaging in the comment thread battles after having lurked in the background for years. I “kinda sorta” knew that you and Dano are right about the endless capacity of some “sceptics” to engage in self-delusion, but dammit I just had to find out for myself.

    Interestingly, one lesson I learned was that you don’t necessarily have to be a climate scientist (I’m not!) to see the holes in their arguments — but you do need to have a good understanding of rhetorical tricks and logical fallacies.

    The comment thread is here

    http://www.sirhumphreys.com/adolf_fiinkensein/2007/may/07/yup_its_the_noo_religion

    but you don’t need to visit to guess that the NZ commenter has an alliterative name and a seeming addiction to puffing up his self-image as a rebel maths modeller who knows so much more than almost all those dumb and insular climate scientists.

  • John Willit // May 12th 2007 at 1:24 pm

    I think we should start from the basic philosophy of science - replication of experimental results by other scientists.

    Many theories and breakthroughs in the history of science have been discarded after other researchers were unable to replicate the results.

    No one has been able to check the adjustments made by Hansen and Jones (and GISS and the Hadley Centre.)

    Both researchers and both centres also run GCMs and Hansen even published a long-term forecast in 1988.

    Temperatures have increased by 0.8C since 1900 (and the adjustments made to the historical temperature datasets account for 0.7C of that increase.)

    Given the bias (or let’s say lack of objectivity) shown by these researchers, the fact that they are both the temperature record authorities and managers of large GCMs, and the fact that the “adjustments” made account for virtually all of the increase in temperatures in the last century, I am a little sceptical.

    [Response: What a crock! You’re not skeptical, you’re deluded.

    The procedures used to adjust the data by Hansen et al. (at GISS) are thoroughly documented in the papers linked to in this post and references therein. And ALL the data are downloadable from the NASA GISS website.

    So I’ll tell you what I told Glen Raphael: if you think mistakes have been made or deliberate bias was instituted, then all the data and all the procedures are there for you to download — get busy and do the work. If you find evidence of significant mistakes or fraud, put it in the peer-reviewed literature. Until then, put up or shut up.

    Your implication is clear, that Hansen at GISS (and Jones at HadCRU) got the results they did because of personal bias rather than objective data and analysis. But you don’t have even a *shred* of evidence to offer. It’s illustrative of the intellectual and *moral* emptiness of your position.

    You’re about as deep in the dark as it gets. Even the vast majority of your delusionist friends don’t buy that “adjustments made to the historical temperature datasets account for 0.7C of that increase” crap.]

  • Michael Jankowski // May 12th 2007 at 8:09 pm

    While the process of flagging may be based on guidelines, the final part of the decision-making process can certainly be influenced by the subjectivity you mentioned:

    “…Then, all flagged data were graphically displayed, along with neighboring stations that contained data during the period in question, and a subjective decision was made as to whether the apparent discontinuity was flawed data or a potentially real climate anomaly…”

    It would be more appropriate to find the past threads on Climate Audit discussing the topic of revising the instrumental records, rather than a more recent thread where a poster simply referred to the issue.

    For example, see “Adjusting USHCN History” here http://www.climateaudit.org/?m=200702&paged=2 .

    While it is powerful to say, “…After all, human error is not that rare, but for a normally distributed random variable to exceed 5 standard deviations is; the chance is less than 1 in a million…,” take a look at figure 2 on the linked CA page. Almost every year pre-1950 was adjusted downward while almost every year post-1950 was adjusted upward. And note the pattern of adjustments compared to the pattern of temperature records. Generally speaking, the warmest pre-1950 periods are adjusted downwards the most, while the warmest post-1950 periods are adjusted upwards the most. How does this fit in with the idea of a supposedly unbiased method of quality control being applied to a normally distributed random variable? And with some pre-1950 adjustments amounting to over -0.2 deg C while post-1950 adjustments amount to over +0.3 deg C, you’ve got a +0.5 deg C change there from min to max changes. While that +0.5 deg C (slightly smaller for GISS) may not be the resulting sum of changes for the entire period, you can see why someone would say something like, “the fact that the ‘adjustments’ made account for virtually all of the increase in temperatures in the last century” as one responder did.

    The adjustments to HadCRU are discussed here http://www.climateaudit.org/?p=1106 and elsewhere. The effect of these adjustments is nothing like that of USHCN or GISS, and is almost exclusively downwards - except for the most recent years. The main problem with the HadCRU record is Phil Jones’ reluctance to find/produce/share data.

    [Response: I’ve already pointed out (see the response to Glen Raphael’s 2nd comment) that the reason for the one-sided character of GISS adjustments to U.S. stations is the fact that time-of-observation bias introduces a mostly one-sided false cooling trend. I also mentioned that this is only applied to U.S. stations, and the U.S. is only 2% of the globe, so the effect on the global average is tiny.

    The REAL dishonesty in this story is the insistence of the climateaudit people to deny the truth, and the real marvel is that people don’t get it even after it’s explained repeatedly. The people who run climateaudit *cannot be trusted*.]

  • John Willit // May 12th 2007 at 10:59 pm

    So you called me out and said the adjustments made do not add up to 0.7C.

    Why don’t you go through them all and add them up for us?

    [Response: Why don’t *you* process all the data? And while you’re at it, ponder how the *corrections* made to U.S. stations, which cover only about 2% of the globe, can possibly skew the global trend enough to be responsible for most of the *global* warming observed in the last century?

    The appalling thing is that the climateaudit people would keep *known errors* in the data, rather than apply necessary corrections, because they’re more interested in their disbelief than in the truth.]

  • John Norris // May 13th 2007 at 1:11 am

    1. re: “They are quality control measures, and as such, they don’t bias the results, or favor warming trends over cooling trends — they just make the results better.”

    Perhaps the quality control measures should be used to toss bad data, not to try and fix it. If it was measured incorrectly, the process needs to be fixed so it is measured correctly. Adjusting the data to the best expert guess is a reckless shortcut for such an important issue.

    2. re: “The people who run climateaudit *cannot be trusted*”

    Wow!

    The climate science community gave MBH a pass, the people who run climateaudit didn’t. Tell me again who can’t be trusted. Better say it louder this time, or provide a little more substantiation, it’s not getting through to me.

    [Response: Correcting the data to obtain the best expert estimation is not recklessness, it’s science. And nobody gave MBH a “pass” — that work has been subjected to intense scrutiny. Furthermore, MBH is only one of many paleoclimate reconstructions; I’ll bet you’re deluded enough to think they *all* got a “pass.”

    But you and the climateaudit people would rather bury your heads in the sand than face the truth. My “guess” is that it is not *possible* to get through to you.]

  • Heiko Gerhauser // May 13th 2007 at 10:51 am

    http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf

    “Although the contiguous U.S. represents only about 2% of the world area, it is important that the analyzed temperature change there be quantitatively accurate for several reasons. … perceptions of the reality and significance of greenhouse warming by the public and public officials are influenced by reports of climate change within the United States.”

    The comments thread on the subject over at climate audit is quite interesting. Having spent a few hours on the subject, I realise that it’s impossible for me to check the methods used for the adjustments myself within a reasonable timeframe.

    As one of the commenters at climate audit remarked, these station data weren’t really obtained with the goal of measuring century long temperature trends of a few tenths of a degree C.

    Hansen thinks there’s uncertainty of at least +/- 0.1C. This even seems on the low side to me, when I see how satellite data had to be adjusted recently, and these were meant to measure trends.

    With station data there are sources of error that are now impossible to check. For example, just because people noted down that they had recorded a temperature at a certain time, doesn’t mean they actually did. This doesn’t matter, if there isn’t a trend in the lying, but there may be. And how can we check that kind of thing? The people who recorded station data in Siberia or Arizona in the 1930’s are long dead, and if all they did was to record temperatures at 12 o’clock rather than 3 o’clock, and put down 3 o’clock in their log, how to reconstruct that? The differences with neighbouring stations might be very subtle. It might be particularly bad in some regions, it might be an irregular occurrence.

    We’ve had much expansion of settlements, two world wars, the great depression. And if adjustments of 0.3C are necessary for US stations, how accurate are data for WWI or WWII in Europe and the Soviet Union?

    And, I find the method for measuring urban heat island effect quite suspect. A station in a city may be 3C warmer than a rural station, but that won’t affect trends, if that differential stays the same. Central London I think was just as much an urban heat island in 1900 as it is today.

    What matters ought to be changes in the environment around stations, and a bit of tarmac around a rural station, or irrigation, or shading by trees, might make a difference. Again, that may be very hard to reconstruct today.

    Does all that make a difference to what I think about temperature trends for the world? Not much, I take the 0.7 +/- 0.2C as the best estimate we’ve got.

    On the other hand, I see that there is still disagreement about the best way to adjust temperature data for satellites, with strangely enough political views of the respective authors appearing to correlate with what adjustments they prefer.

    It doesn’t strike me as unlikely that Hansen unconsciously would favour a method that makes 2006 the warmest year ever in the US, when making it so is a matter of choosing between a number of subjective adjustment choices which each make a difference of hundredths of a degree, when all that’s required is to adjust 1933 down by 0.05C more than with another set of adjustments that adjusts 1933 down by 0.33 rather than 0.39C.

    [Response: It strikes me as *extremely* unlikely, because I *have* done this sort of work before.

    When analyzing the light curves of variable stars, one generally has to combine data for entire populations of objects, from large numbers of observatories, made by a large number of different observers, each using different instruments, under different conditions, sometimes using different methods. The only way to prevent subjectivity from influencing the outcome is to set up very strict objective criteria, that leave little or no “wiggle room” for one’s personal bias to influence the outcome. Good researchers learn to do this, because inevitably the scientific community *is* going to check your work. If it’s found to be not credible by reasons of personal subjectivity, they’ll put the evidence that your personal bias influenced your results in the peer-reviewed literature, and your reputation (and perhaps your future research as well) goes down the tubes.

    The only really subjective part of the Hansen et al. procedure is the visual inspection of suspect data, compared to data from nearby stations, to determine whether or not data points were blatant errors and should be removed. And as they well documented, data were only removed when there was simply no doubt that an error had occurred. *None* of the procedures which adjust the data have any subjective element — when you determine the best-fit correction for urban heating, it’s simply not *possible* to choose the result you want; the choice is made by least squares fitting. It’s pretty clear to those of us who *have* done this type of work that Hansen and colleagues were as rigorous as it’s possible to be given the available information.

    Nobody claims that the raw data, or the analysis thereof, is perfect. But personal bias, and the desire for a particular outcome, have nothing to do with its imperfections. If flaws are found and verified, they’ll quickly appear in the peer-reviewed literature. Regardless, we’ll continue to see *false* criticism on climateaudit.]

  • Heiko Gerhauser // May 13th 2007 at 7:20 pm

    I like Hansen’s papers. He’s clearly a good and thorough researcher, and I am also quite willing to accept an argument that he is aware of his biases and compensates accordingly when, given deficient information, there are several reasonable choices.

    What I would not so readily accept is the notion that there are no such subjective choices to be made in the first place; that there is no room whatsoever for judgment.

  • guthrie // May 13th 2007 at 7:40 pm

    It is worth pointing out that even without the thermometer record, it is clearly getting warmer. For example, an old man here in Scotland kept a diary of work he did in his garden for over 30 years. He found that the date of first mowing of his lawn kept getting earlier and earlier, and the grass kept growing later in the year.
    Or in other words, the grass could tell it was getting warmer.

  • Steve Bloom // May 14th 2007 at 12:54 am

    Heiko, you’re being rather more even-handed regarding the satellite stuff than the record supports. Where did you get your information? FYI, the research team that has gotten involved in climate change politics (Spencer and Christy) is also the one that has been caught in multiple errors. The far more qualified RSS team, by contrast, has stayed out of the political debate (other than to criticize S+C in appropriate scientific venues) and has made no such errors. Interestingly, each and every one of S+C’s errors has been on the cold side. What should we make of that?

    For some background, see here (and follow the links to the two prior posts) and here.

    But enough beating around the bush: What it looks like to me is that S+C’s interest in MSU data interpretation grew out of their political interest in trying to find data that conflicted with the surface trend, and that they were over their heads technically from the start. At this point, I don’t think they have a shred of credibility left.

  • ks // May 14th 2007 at 4:33 am

    Some people will say anything to try and cast uncertainty *cough*climateaudit*cough* and often resort to the oddest of claims regarding minutiae (lying about sampling times comes to mind).

    And by the way, a tenth of a degree C over hundreds of thousands of stations over hundreds of thousands of days isn’t as small as some would have you believe. 0.1 C is 0.18 F (us Americans have to keep units straight)

    A word to the wise on “how to be a skeptic” taken from RC - http://www.realclimate.org/index.php/archives/2005/12/how-to-be-a-real-sceptic/

    1) “One needs to be aware that skepticism about whether a particular point has been made convincingly is not the same as assuming that the converse must therefore be true.”
    - Not trusting the temperature record does not invalidate AGW

    2) “If a particular point has been argued to death previously and people have moved on (either because it was resolved, moot or simply from boredom), there is little point bringing it up again unless there is something new to talk about.”
    - Don’t start saying humans aren’t responsible for the increase in CO2

    3) “Skepticism has to be applied uniformly. Absolute credence in one obscure publication while distrusting mountains of ‘mainstream’ papers is a sure sign of cherry picking data to support an agenda, not clear-thinking skepticism.”
    - over-scrutinizing the temp record comes to mind for not applying uniform skepticism

    4) “Constructive skepticism is a mainstay of the scientific method. The goal of science is to come closer to a comprehensive picture of how the real world works, with skepticism essential to toughening up scientific ideas, though alone, it is insufficient to move understanding forward.”
    - Uncertainty for the sake of uncertainty helps no one. If humans were not responsible for the recent increase in global temperatures, it would be a giant relief to everyone. So rather than refuting AGW, go and prove another theory correct. Otherwise, you’re just a speed bump.

  • Heiko Gerhauser // May 14th 2007 at 12:48 pm

    Why did Hansen wait so long to implement the time-of-observation adjustment, if this adjustment was shown to be necessary in 1986? What “strict, strong objective criteria” did he use to decide to implement that adjustment in 2001, and not in 1999?

    [Response: I can’t speak for Hansen or NASA GISS, I can only guess. For one thing, it takes time to identify and incorporate all the corrections necessary, so it’s to be expected that the procedure will get better as time goes on. Also, GISS is primarily attempting to estimate the *global* temperature trends, and the time-of-observation bias primarily affects U.S. stations, so while it has a notable impact on U.S. temperature trends, it has very little impact on global trends. Also, we don’t have the metadata for most stations outside the U.S. to determine whether or not there have been changes in time of observation. But as I say, I’m just guessing.]

  • Heiko Gerhauser // May 14th 2007 at 1:24 pm

    With KS mentioning the number of stations, I had to think of this page from the GISS site:

    http://data.giss.nasa.gov/gistemp/station_data/

    On the lying about reporting times point, maybe I didn’t put that very well. This is about people being keen on their lunch, or on getting home early/late, or on tending their cows or whatever, and therefore taking a short cut. Presumably people had other commitments besides reading a thermometer once per day, so is it that odd to suggest that these other commitments might at times have interfered with the ability to read a thermometer at a set time every day, and rather than writing down that they only managed at 4 rather than at 3, people might have been tempted to gloss over being late/early?

    I am not suggesting a conspiracy here, and as long as misreporting of times is rare or random it won’t affect the trend.

  • Heiko Gerhauser // May 14th 2007 at 1:37 pm

    Thanks, yes that makes sense. Would you agree then that there is a subjective element in these kinds of decisions?

    In this case, in fact, on the face of it, Hansen seems to have made a conservative choice, ie waited longer than he had to, to make an adjustment, when given his well known stance on the urgency of climate change, he might have been tempted to rush it.

  • Michael Jankowski // May 14th 2007 at 2:01 pm

    “I’ve already pointed out (see the response to Glen Raphael’s 2nd comment) that the reason for the one-sided character of GISS adjustments to U.S. stations is the fact that time-of-observation bias introduces a mostly one-sided false cooling trend.”

    I apologize for being somewhat repetitive in missing that response.

    “I also mentioned that this is only applied to U.S. stations, and the U.S. is only 2% of the globe, so the effect on the global average is tiny.”

    The effect on the “average” may be “tiny,” but the effect on the trend is not. Based on the graphics, the net effect looks to be a change of about +0.15 deg C over the 20th century, or about 25% of the warming of the 20th century. That’s far from a “tiny” percentage. It’s amazing how much an adjustment of the data of just 2% of the globe - and arguably the best-kept and detailed data existing around the globe - affects the warming of the 20th century trend that is so widely reported.

    “And nobody gave MBH a “pass” — that work has been subjected to intense scrutiny. ”

    Really? Then how come so many errors - including some which are still not corrected - weren’t discovered until McIntyre and McKitrick started digging around 5 yrs after MBH98 was published? How come it took until M&M to reveal that what was archived as MBH98 data was actually the wrong data files, full of errors and omissions? That’s what passes as “intense scrutiny” in your book? I don’t know whether to call that stupid, laughable, insane, or all three!

    “Furthermore, MBH is only one of many paleoclimate reconstructions; I’ll bet you’re deluded enough to think they *all* got a “pass.””

    Without getting into an argument about how valid and accurate paleoclimate reconstructions can possibly be in the first place (look at the proxy coverage of Mann and Jones 2003 http://www.ncdc.noaa.gov/paleo/pubs/mann2003b/mann2003b.html), how much scrutiny have these reconstructions truly received, with many still having unresolved issues of data archiving, using suspect input data, etc?

    Since you’ve got a finger on the pulse of climateaudit, that site has two recent threads dedicated to Briffa (May 9th) and Lonnie Thompson (May 10th). It sounds like you could go over there and fill in all the holes.

    While you’re at it, you might want to delve into the “intense scrutiny” archives and explain how MBH98 calculated confidence intervals. Enlighten them all.

    [Response: I said,

    I also mentioned that this is only applied to U.S. stations, and the U.S. is only 2% of the globe, so the effect on the global average is tiny.

    You then say,

    Based on the graphics, the net effect looks to be a change of about +0.15 deg C over the 20th century, or about 25% of the warming of the 20th century. That’s far from a “tiny” percentage. It’s amazing how much an adjustment of the data of just 2% of the globe - and arguably the best-kept and detailed data existing around the globe - affects the warming of the 20th century trend that is so widely reported.

    Are you really that ignorant? To make a change of 0.15 deg.C in the global average, adjustments to U.S. temperature would have to amount to a change of 7.5 deg.C (the U.S. covers only about 2% of the globe, and 0.15 / 0.02 = 7.5). Not even your climateaudit buddies believe that. I’ll bet the graphics in the climateaudit post are for U.S. temperature, not global temperature.

    The *claims* of M&M are every bit as “reliable” as your understanding of global average temperature. And paleoclimate reconstructions in general received intense scrutiny in the study by the National Academy of Sciences.

    As for going over to creationismaudit climateaudit to enlighten them on evolution temperature reconstructions, I prefer not to argue with people who have attached themselves to a particular opinion for ideological rather than scientific reasons.]

  • Michael Jankowski // May 14th 2007 at 3:31 pm

    “Are you really that ignorant? To make a change of 0.15 deg.C in the global average, adjustments to U.S. temperature would have to amount to a change of 7.5 deg.C.”

    Ignorant? No. In a hurry? Yes, they are US temps only, and not the global average. I should have known better in the first place, knowing they are US temp histories. I apologize for the error. Of course, you could have easily pointed that out as fact had you visited the site rather than “betting.” So how many of your positions are based on your “betting” faith?

    An issue still stands that these corrections, to probably the most detailed and well-kept record in the world, had an impact of about 25% on the 20th century US trend. What does this tell you about the accuracy issues that may exist globally?

    “The *claims* of M&M are every bit as “reliable” as your understanding of global average temperature. ”

    Wow, comparing a momentary brainfart with an actual publication which exposed and brought about MBH publishing a correction - that’s almost as good as your “intense scrutiny” claims!

    “And paleoclimate reconstructions in general received intense scrutiny in the study by the National Academy of Sciences.”

    I assume you’re referring to the NAS report 8 years(!) after MBH98, which maybe even you would admit was brought about in large part due to M&M criticisms. And it basically was a lit review with some recommendations and criticisms. The NAS made no attempts to verify or replicate any of the work that was done.

    “And paleoclimate reconstructions in general received intense scrutiny in the study by the National Academy of Sciences.”

    The NAS found faults common to many reconstructions (including MBH98), such as the inclusion of bristlecone pines as a temperature proxy. Once again, no attempts to verify or replicate.

    “As for going over to creationismaudit climateaudit to enlighten them on evolution temperature reconstructions, I prefer not to argue with people who have attached themselves to a particular opinion for ideological rather than scientific reasons”

    Yes, you would prefer to “bet.”

  • ks // May 14th 2007 at 11:02 pm

    Mr. Jankowski,

    If I may use this example to draw a take-home lesson. It appears you have been duped by climateaudit. What you see over there amounts to sleight-of-hand tricks in which Mr. McIntyre quibbles over minutiae that imply more than he’s actually saying. He starts showing graphs and discussing the matter in broad terms. You’re left thinking he’s talking about global data when he isn’t. Other times he switches the topic. Rather than try to deal with accusations about TGGWS fabricating 25% of a graph, he starts talking about MBH 99 for the 50,000th post.

    You need to be careful about reading too much into his posts. Be careful to discern what is actually being said from what is implied. Sadly, I’m not sure you’re ready for the task.

  • stewart // May 15th 2007 at 3:20 am

    Wow.
    Tamino, I admire your patience. Simply put, if the data were as wobbly as implied by some of the posters, it would be simple to present the European or Asian or whatever data to demonstrate no trend, or an opposite one. If temperature data disagreed with actual climate (number of frost-free days, etc.), that would be simple to demonstrate. The quibbling and misinterpretation of numbers shows that there is no better ammunition. Good - we can take the issue as settled, with a few people pretending they know something (like the folks I know who tell me that the first atomic bomb was tested in Canada, and they whisper it so no one overhears them).

  • Mikel Mariñelarena // May 15th 2007 at 7:38 pm

    Hi Tamino,

    Thanks for your interesting blog.

    As for the issue at hand, what is your opinion on Phil Jones’ refusal to archive/release the HadCRU data and methods?

    Thanks again and best regards,

    Mikel

    [Response: Considering the politicization of the issue, and the tendency for delusionists to claim that data corrections are actually deliberate bias, Jones’ reluctance to hand them the “keys to the store” is at least understandable. However, I much prefer the approach of NASA GISS: make all the data freely available and the methods well documented, so that the results can be replicated by *anybody* (albeit with a heckuva lot of time on their hands). Besides, I’m a data analysis junkie … and that requires data!]

  • Steve Bloom // May 15th 2007 at 10:42 pm

    It’s worth noting that GISS has vastly greater resources for such things than does HadCRU. It’s a struggle for the latter to even keep their web site up to date. Given the obvious motivations of McIntyre and crew, I don’t blame Phil in the slightest for minimizing the amount of time he spends dealing with them.

  • Dano // May 15th 2007 at 11:15 pm

    “The quibbling and misinterpretation of numbers shows that there is no better ammunition. Good - we can take the issue as settled, with a few people pretending they know something (like the folks I know who tell me that the first atomic bomb was tested in Canada, and they whisper it so no one overhears them).”

    Amen.

    They got nothin’.

    The society train has left the station, and we are debating adaptation and mitigation, not whether a totem has x or y degrees of shininess. Those few who continue to try to make luscious picnics out of crumbs are at the platform, jumping up and down, trying to get the departed train to look at their crumbs. Bye, bye contrascientists and denialists! Bye! Buh-bye!

    Best,

    D

  • John Willit // May 16th 2007 at 4:45 pm

    The “adjustments” that Hansen applied to the US temperature dataset were then extended to the whole world dataset.

    So the argument that it represents only 2% is invalid.

    [Response: Hansen’s documentation makes it clear (I thought I had, too) that time-of-observation bias corrections were NOT applied to non-U.S. stations. As Hansen et al. (2001) states,

    This time of observation correction is included in the current GISS analyses for USHCN stations. Such a correction is not generally required in the rest of the world, because the systematic shift from once a day evening to once a day morning observations which occurs at U.S. cooperative observer stations is not characteristic of most global observations.

    If you claim otherwise, then what is the source for your information?

    And the fact is that time-of-observation bias corrections do *not* introduce false trends; they *remove* false trends (a toy illustration is appended at the end of this response).

    Urban heating corrections *are* applied to non-U.S. stations; the difference is that for U.S. stations the identification of rural and urban stations is based on satellite imagery, while for non-U.S. stations it’s based on population data. And of course, *those* adjustments (made worldwide) have a greater tendency to *reduce* temperature trends in the data than to increase them.

    So the implication that false-warming corrections were applied worldwide (or at all, for that matter) is invalid.
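
    As promised, here is a toy illustration. To be clear, this is *not* the actual time-of-observation procedure (the real bias has to be estimated, for each station and month, from the behavior of the diurnal cycle); it is a made-up series with a known artificial step, just to show that correcting a bias of this kind removes a false trend rather than creating one.

        # Toy example: a station with a genuine warming trend picks up a spurious
        # -0.5 deg C step when its observation practice changes in 1950. Correcting
        # the (known) step recovers the true trend; it does not invent one.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2001)
        true = 0.006 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)  # ~0.6 deg C/century

        biased = true.copy()
        biased[years >= 1950] -= 0.5        # spurious step from the change in practice

        corrected = biased.copy()
        corrected[years >= 1950] += 0.5     # apply the correction

        for name, series in (("true", true), ("biased", biased), ("corrected", corrected)):
            slope = np.polyfit(years, series, 1)[0] * 100.0   # deg C per century
            print(name, round(slope, 2), "deg C/century")

    The biased series shows a slight apparent cooling instead of the real warming; the corrected series recovers the true trend. The correction removed a false trend, it didn’t add one.]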

    Hadley Centre adjusted their global temperature record for the fourth time in 2006. This last adjustment resulted in an increase of 0.2C in the trend.

    Here is the archived chart from 2005.

    http://upload.wikimedia.org/wikipedia/en/archive/f/f4/20060121191701%21Instrumental_Temperature_Record.png

    Here is the current one, from 2007.

    http://upload.wikimedia.org/wikipedia/en/f/f4/Instrumental_Temperature_Record.png

    [Response: You need to get your vision checked. The main difference is that the later graph makes the 1900s and 1910s, as well as the 50s and 60s, about 0.1 deg.C cooler; but from 1975 to the present (the modern global warming era) there’s practically no difference. The trend during the modern global warming era is pretty much unchanged.

    And … this post wasn’t about HadCRU, was it?]

  • Caz // May 19th 2007 at 11:30 am

    Tamino

    You are clearly an intelligent person and you do, on occasion, make reasonable points, but I think you’re less than open-minded about certain global warming issues. An example is the MBH (and other proxy) climate reconstructions. Apart from the fact that there are clear errors in the methodology used by MBH, there are serious questions as to whether proxy data, in particular tree rings, can be used to construct an accurate representation of past climate. Michael Jankowski, in an earlier post, hints at these problems.

    Think about it. To use tree rings would require a direct linear relationship between temperature and tree-ring widths. Is there one? The mean annual temperature in the Malaysian rain forest is, at ~26 deg C, about the same as in the western Sahara. Would tree growth be the same in both locations? Moisture, atmospheric CO2 concentrations and probably lots of other factors influence tree growth. At best, temperature might explain 60% of the variance in tree-ring widths, and even that might hold only over a relatively small temperature interval.

    Again, think about the extreme situations. If a tree grows X mm at 15 deg C and (X + d) mm at 16 deg C, will it grow (X + 5d) mm at 20 deg C? What about 30 deg C? Tree species thrive within a range of optimum conditions; too hot (or too cold) and they die off and are replaced by different species, as has happened all over the world. (A toy illustration follows at the end of this comment.)

    In a nutshell, reconstructions of climate using tree rings will significantly under-estimate the true variability of the climate.
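
    Here is the toy illustration mentioned above. The parabolic growth response and all of the numbers are invented purely for illustration (whether real trees behave this way over the relevant range is exactly what is in dispute); the only point is that a linear calibration, extrapolated outside its calibration range, mis-states variability when the underlying response is non-linear.

        # Toy example (invented numbers): ring width responds parabolically to
        # temperature with an optimum at 17 deg C. A linear width -> temperature
        # model calibrated on a cool interval then under-reports warm excursions.
        import numpy as np

        def ring_width(temp_c):
            # hypothetical response: peaks at 17 deg C, falls off on both sides
            return np.maximum(0.0, 2.0 - 0.05 * (temp_c - 17.0) ** 2)

        true_temps = np.linspace(12.0, 22.0, 11)     # "actual" temperatures, deg C
        widths = ring_width(true_temps)

        # calibrate a linear model on the cool half only (12-16 deg C), where the
        # response happens to look roughly linear
        calib = true_temps <= 16.0
        slope, intercept = np.polyfit(widths[calib], true_temps[calib], 1)

        reconstructed = slope * widths + intercept
        for t, r in zip(true_temps, reconstructed):
            print(round(t, 1), "deg C true ->", round(r, 1), "deg C reconstructed")

    In this toy, a genuinely hot year (22 deg C) is “reconstructed” as a cool one, because the tree grows just as little at 22 deg C as it does at 12 deg C. That is the worry in a nutshell; whether it applies to actual proxy networks is a separate, empirical question.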

  • george J // May 19th 2007 at 9:30 pm

    Steve Bloom said: “It’s worth noting that GISS has vastly greater resources for such things than does HadCRU. It’s a struggle for the latter to even keep their web site up to date.”

    Good point.

    But when it comes right down to it, nobody’s resources are infinite, and that really means prioritizing what data they make available and in what form.

    The issue clearly involves more than just putting the data on a website so that anyone can download it.

    A lot of people have neither the knowledge nor the expertise to understand (to say nothing of making legitimate scientific use of) such data - and the inevitable questions that such people have as a result just eat up researchers’ valuable time that would be better devoted to other things.

    Also, as one can gather from the most recent exchange between NASA GISS and Climate Audit’s Steve McIntyre, some people do not seem to be satisfied no matter what researchers do to accommodate their data requests.

    No researcher is beholden to provide their data to every Tom, Dick and Harry who wants it, and I’m really not sure why any researcher would even bother with people who have contempt for the process by which they make their data available.

  • John Mashey // May 19th 2007 at 9:59 pm

    re: Caz & tree rings

    Just out of curiosity, is there some reason to care? If so, can you explain what it is? Wegman & co., after spending a lot of time on this, certainly didn’t [1]
    (my usual quote):
    ‘As we said in our report, “In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record clearly indicates an increase in temperature.” We certainly agree that modern global warming is real. We have never disputed this point. We think it is time to put the “hockey stick” controversy behind us and move on.’

    [1] energycommerce.house.gov/reparchives/108/Hearings/07272006hearing2001/Wegman.pdf

  • Caz // May 21st 2007 at 9:10 am

    John Mashey

    You say

    “Just out of curiosity, is there some reason to care? If so, can you explain what it is? Wegman & co., after spending a lot of time on this, certainly didn’t”

    Well, yes, there is - if you want to keep referring to proxy reconstructions as evidence of unprecedented climate change. If you’re prepared to accept that there is no long-term evidence - fine!

  • Caz // May 21st 2007 at 9:24 am

    John Mashey Part II

    Elsewhere on his blog, Tamino writes

    “Are you insane? Or are you blind? The highest medieval temperature in the Moberg reconstruction is 0.37 deg.C in the year 1105. The highest in the HadCRU series (northern hemisphere, just like the Moberg reconstruction) is 0.83 deg.C in 2003.”

    Tamino is comparing current temperatures against a proxy record. The fact that this is a questionable, highly dubious comparison doesn’t seem to occur to him. The Moberg reconstruction is just that - a reconstruction. The HadCRU temperature record is a record of actual measurements. To compare the two is like comparing apples with oranges.

    I’ve already commented and given reasons why proxy reconstructions fail to represent the true climate history, and I stand by that. But if you insist on comparing 2003 with the medieval period as depicted by the Moberg reconstruction, then you need to extend the Moberg reconstruction up to 2003 using the same calibration period as the original reconstruction.

    I don’t wish to pre-judge the results, but I doubt very much that the extended reconstruction would come anywhere close to showing the temperature increases that are evident in the surface temperature record(s). In fact, I know it won’t.

    There are two studies (Briffa et al., Esper et al.) that produced tree-ring reconstructions running up to the end of the 20th century (MBH and others only extend to c. 1980). In both reconstructions there is reasonable agreement with thermometer measurements around the middle of the 20th century (probably the calibration period), but the reconstructions actually showed cooling towards the end of the century. Both papers comment on the massive under-estimation by the reconstructions compared to the thermometer record, and blame “unknown 20th century factors”.

    Still - it’s a neat trick, I’ll give you that. Take the low-variance proxy data, graft on the more sensitive thermometer record, and hey presto - “unprecedented warming”.

  • finings // May 31st 2007 at 11:37 pm

    I admire what the GCMers have been able to do with so little data, so little knowledge, so little computing power and a lot of hand-waving. IMHO, until they can get reasonable fits when hindcasting historical data, their advice should be given the same weight as an honest horse tipster’s. I don’t think their research is fraudulent, but what is done with the results must come close.
    What is the current average accuracy in hindcasting historical data? Is it past 50% yet? I can’t find numerical info.

    Is there any empirical evidence yet tying CO2 rise to temperature rise?

    It is all moot anyway; Kyoto proved that already. We should be looking for real solutions that work and provide real climatic returns, instead of playing power/money games.

    If there really is a problem.

  • John Willit // Jun 5th 2007 at 2:02 pm

    If you go back and look at how much they have adjusted the temperature record AFTER the initial GCM results were hailed as accurately matching the historical climate, you have to conclude that they just make up the data and the model results as they go along.
