Open Mind

Open Thread #5

August 10, 2008 · 436 Comments

For discussion of things global-warming related, but not pertinent to existing threads.

Categories: Global Warming


  • Bella Green // August 10, 2008 at 3:40 am

    This is the perfect place to thank you again for the endless time, effort and patience you put into this blog and to encourage you to continue. I am using your brains to help me write my lectures for the class I’ll be “teaching” on climate science this fall for Senior University (Southwestern University in Georgetown, Texas). Between your site and the fine gentlemen at RealClimate, I’m confident that I’ll be able to communicate the basics and answer all their questions. I don’t think I’d have the nerve to do this without having real scientists as backup, though that won’t stop me from having a shaking fit after every lecture and having a glass of Scotch when it’s over! Public speaking is *not* easy for me. And I want to remind you that, even if it often doesn’t seem like it, you really are making a difference “out here”.

    Ya know, I think there should be a different, more dignified name for blogs like yours that challenge us and increase our knowledge — a title that separates them from girls going on about makeup, for example (though those blogs do have their uses when one’s 12 yr old daughter is discovering makeup (I’m SO not ready for this!))

    Ah yes - my cat Greymantle is reading this (really) and says hello, and asks your Blueberry to please avoid walkabouts longer than overnight, and also strongly warns against getting into fights with anything that can get one’s entire head into its maw. Cheers, mate!

    [Response: Best of luck with the class. If specific questions arise, feel free to ask them here, but be advised that the folks at RealClimate know a lot more than I do. By any chance are you an Aussie?]

  • Paul Middents // August 10, 2008 at 6:38 am

    Bella,

    Press on fearlessly in your teaching. You will touch a few and one of them might make a difference. This website and RealClimate are great resources. There are lots of others. Check out the septic sites too. Every once in a while a wise a** in your class will challenge you, and you need to be prepared.

    I taught Astronomy (among lots of other things) for ten years in a community college (1991-2001). Global warming was becoming an issue. I was skeptical at first out of sheer ignorance. A little study of the physics and history quickly converted me. I regret that I did not give the issue enough prominence in my classes.

    You can make a difference, one student at a time. They remember you and they remember what you say. It’s a little scary.

    Paul

    [Response: Which reminds me: Spencer Weart's The Discovery of Global Warming is a resource of tremendous value, one of the best.]

  • michel // August 10, 2008 at 8:45 am

    http://www.guardian.co.uk/environment/2008/aug/10/climatechange.arctic

    Is this true?

  • Lazar // August 10, 2008 at 11:13 am

    Why the Climate Audit / David Stockwell attack on CSIRO “Drought Exceptional Circumstances Report” is wrong.

    The CSIRO report predicts increasing frequency and severity of exceptional temperature and rainfall events, over all seven regions of Australia for temperature, and three of seven regions for rainfall (no discernible changes in the others). An exceptional temperature event, in the context of drought, is an annual average temperature above the 95th percentile of observed temperatures during 1910-2007. An exceptional rainfall event is likewise a total annual rainfall below the 5th percentile for 1900-2007. This difference in periods is due to availability of reliable observational data for temperature and rainfall. Severity is measured as the area affected during an exceptional event. Predictions were made using an ensemble of 13 GCMs.
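    In code, the definition amounts to the following (a minimal sketch with random stand-in series and assumed parameters, not the actual CSIRO records):

```python
import numpy as np

# Illustrative random stand-ins for the observational records
# (the report uses 1910-2007 for temperature, 1900-2007 for rainfall).
rng = np.random.default_rng(0)
temps = rng.normal(loc=21.0, scale=0.6, size=98)    # annual mean temperature, deg C
rains = rng.gamma(shape=8.0, scale=60.0, size=108)  # total annual rainfall, mm

# Exceptional thresholds as defined in the report:
# temperature above the 95th percentile, rainfall below the 5th.
t_hot = np.percentile(temps, 95)
r_dry = np.percentile(rains, 5)

exceptional_hot = temps > t_hot   # exceptionally hot years
exceptional_dry = rains < r_dry   # exceptionally dry years

# By construction, roughly one year in twenty qualifies in each record.
print(exceptional_hot.mean(), exceptional_dry.mean())
```

    Note that, by this definition, exceptional years are rare in any record; the report's prediction concerns how their frequency and affected area change, not which individual years they land on.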

    David Stockwell claims

    all climate models failed standard internal validation tests for regional droughted area in Australia over the last century

    The tests David Stockwell employed were…

    … correlating model predictions for individual years of exceptional rainfall with observed years of exceptional rainfall! This ignores noise (internal variability in the climate system and GCM climate simulations) and the fact that the CSIRO report predicted frequency. Steve McIntyre and the auditors repeat this mistake here, with the obligatory snark from Steve (“Even for Michael Mann, a correlation of -0.013 between model and observation wouldn’t be enough. For verification, he’d probably require at least 0.0005.”) and a 100-word paragraph about the trouble involved in untarring a .tar archive.

    … comparing trends from linear regression. For each year of modelled (mean of 13 GCMs) and observed data he took the area affected, but for years when there was no exceptional event (i.e. most years) he used an ‘area affected’ value of zero, with the result that the residuals are not even close to normally distributed. Still, he applied a t-test to the difference in observed and modelled trends. But the error term was calculated only as the standard deviation of the 13 GCM modelled trends. He ignored the error in estimating a trend itself, which when taken into account renders the observed and modelled trends statistically insignificant (not different from zero) — unsurprising given the treatment of years not containing an exceptional event.

    … he claims to test “The probability of significance of the difference between the observed trend and mean trend projected for the return period (returnp-p), the mean time between successive droughts at the given level.” and concludes “This indicates the frequency of droughts in the models has no relationship to the actual frequency of droughts”. What he actually did was compare the mean, over the entire period 1900-2007, of the number of years between exceptional events for modelled and observed data. Not trends.

    … he completely ignores the analysis of exceptional temperature events in the CSIRO report, which incidentally shows much better correlations between models and observations.

    … he claims that GCMs are calibrated on regional precipitation data. “Standard tests of model skill are either internal (in-sample) validation, where skill is calculated on data used to calibrate
    the model, or external (out-of-sample) validation, where skill is calculated on held-back data. As external validation is the higher hurdle, poor internal validation blocks further use of the model. Here internal validation is performed
    on the thirteen models over the period 1900 to 2007 for each of the seven Australian regions.”
    They are not.
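    To sketch the t-test point numerically (made-up zero-inflated series with assumed parameters, not Stockwell's data or the CSIRO output): an error term built only from the spread of the 13 model trends is necessarily no larger than one that also carries the sampling error of the observed trend itself.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2008).astype(float)

def ols_slope_and_se(y, x):
    """Ordinary least squares slope and its standard error."""
    x = x - x.mean()
    slope = (x * (y - y.mean())).sum() / (x * x).sum()
    resid = y - y.mean() - slope * x
    s2 = (resid ** 2).sum() / (len(y) - 2)
    return slope, np.sqrt(s2 / (x * x).sum())

def fake_area_series():
    # Mostly zeros with occasional spikes, mimicking the 'area = 0 in
    # non-exceptional years' treatment criticised above.  Made-up data.
    y = np.zeros_like(years)
    hits = rng.random(len(years)) < 0.05
    y[hits] = rng.uniform(5.0, 40.0, hits.sum())
    return y

obs_slope, obs_se = ols_slope_and_se(fake_area_series(), years)
model_slopes = np.array([ols_slope_and_se(fake_area_series(), years)[0]
                         for _ in range(13)])

# Error term from the spread of the 13 model trends alone...
se_models_only = model_slopes.std(ddof=1)
# ...versus one that also carries the sampling error of the observed trend.
se_full = np.sqrt(se_models_only ** 2 + obs_se ** 2)

print(se_models_only, se_full)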

    This is the first time I am actually angry about…
    Denialists pestering scientists.
    Producing disinformation.
    And setting themselves as auditors in order to sell that disinformation.

    “Key claims of the CSIRO report do not pass obvious statistical test for “significance”.” — Steve McIntyre.

    “Studies of complex variables like droughts should be conducted with statisticians to ensure the protocol meets the objectives of the study.” — David Stockwell

    “I don’t think its fair to single out CSIRO. You need to identify the enemy — IMO bias and pseudoscience. There are targets for review everywhere. The public face of science has shifted from atom splitters to GHG accounting.” — David Stockwell

    For a reasonable model-observation comparison, do read the CSIRO report especially figures 8 and 10.

    Thanks (I think) to ST for pointing this out.

  • Allen // August 10, 2008 at 1:10 pm

    Lazar,

    Thanks for the review. I took your advice and downloaded the CSIRO report and looked at the figures you suggested. I also read Stockwell’s report.

    He says as his #1 critique: “…While drought area decreased
    in the last century in all regions of Australia except for Vic&Tas and SWWA,
    the models simulated increase in droughted area in all regions. The
    Vic&Tas region has very low observed trend (+1% per year) in droughted
    area. This means the climate models are significantly biased in the opposite
    direction to observed drought trends…”

    Your recommended CSIRO Figure 10 seems to bear him out. The actual data seems to decrease over time in all but two areas– while the models show an increase over time in all the areas. That is, the model trends are opposite the actual data trends even in the calibration period.

    Also, the CSIRO report authors’ Summary indicates “…the
    qualitative assessment that the temperature data have the lowest uncertainty, that there is higher uncertainty with
    the rainfall data, and that the soil moisture data – being derived from a combination of rainfall data, low resolution
    observations of evaporation, and modelling – are the least reliable…”

    So, even the CSIRO authors themselves seem to be saying something similar to what Stockwell said — that is, they caution regarding drought aspects of the report.

    I am perplexed. There does not seem to be much of a disconnect on the content of Figure 10 and its import regarding the model predictions.

    Anyhow, I’ve bookmarked this site (my first visit) as it seems to provide more depth than many.

  • dhogaza // August 10, 2008 at 1:58 pm

    Is this true?

    Is what true? The reported observation that melting of the arctic ice cap accelerated in mid-July, thus putting things back on track to meet or break last year’s low ice extent record? Yes, that’s true.

  • Lazar // August 10, 2008 at 4:32 pm

    Allen,

    Thanks for the response.

    He says as his #1 critique: “…While drought area decreased in the last century in all regions of Australia except for Vic&Tas and SWWA, the models simulated increase in droughted area in all regions. The Vic&Tas region has very low observed trend (+1% per year) in droughted area. This means the climate models are significantly biased in the opposite
    direction to observed drought trends…”

    First off, talk of “droughted area” instead of ‘area affected by extreme rainfall’ is wrong (the CSIRO report does not talk of “droughted area”) and elides the role of temperature and its analysis in the CSIRO report. Drought is multiply defined and is affected by a combination of rainfall, temperature, and wind speed - not just their total or average amounts, but also their timing (what time of year).

    A better understanding can be found in…

    Cai, W., and T. Cowan (2008)
    Dynamics of late autumn rainfall reduction over southeastern Australia
    Geophys. Res. Lett., 35
    doi:10.1029/2008GL033727

    and

    Cai, W., and T. Cowan (2008)
    Evidence of impacts from rising temperature on inflows to the Murray-Darling Basin
    Geophys. Res. Lett., 35
    doi:10.1029/2008GL033390.

    And here (also read Luke’s comments). The situation in the MDB is critical.

    The CSIRO report analyses extreme events of temperature and precipitation. A single extreme temperature or precipitation event is not of itself sufficient for the Australian National Rural Advisory Council and the Minister for Agriculture, Fisheries and Forestry to issue an exceptional circumstances (aka drought) declaration. Equating an extreme precipitation event with drought is simply wrong.

    Stockwell claimed that the data and his analysis showed, among other results, that “the climate models are significantly biased in the opposite direction to observed drought [not "drought" -- Lazar] trends”. That claim and others are demonstrably (above) false.

    The claim that the graphs show modelled and observed trends are of the opposite sign is your claim. Eyeballing graphs is not reliable though. The data used to produce the graphs are 10 year moving averages and therefore highly autocorrelated. You would need to test for significance and account for autocorrelation to make a solid claim.
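    To give a feel for how strong that autocorrelation is: a 10-year moving average of even pure white noise has a lag-1 autocorrelation near 0.9, which slashes the effective number of independent data points. A rough sketch (synthetic noise, and a simple lag-1 correction for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 108
white = rng.normal(size=n + 9)

# 10-year moving average, as used for the smoothed curves in the figures.
smooth = np.convolve(white, np.ones(10) / 10.0, mode='valid')  # length n

def lag1_autocorr(y):
    y = y - y.mean()
    return (y[:-1] * y[1:]).sum() / (y * y).sum()

r1 = lag1_autocorr(smooth)

# Crude lag-1 (AR(1)-style) effective-sample-size correction:
n_eff = n * (1.0 - r1) / (1.0 + r1)

# The smoothing alone drives r1 high, leaving far fewer effectively
# independent points than the nominal 108.
print(r1, n_eff)
```

    So a trend that looks convincing to the eye in the smoothed curves can easily be statistically insignificant once the reduced effective sample size is accounted for.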

    Anyway, I hope you stick around this site.

  • Lazar // August 10, 2008 at 4:57 pm

    Allen,

    trends even in the calibration period

    GCMs are not statistical models. The atmosphere is divided into parcels, boundary conditions (surface topography, oceans, forcings) are applied, the exchange of energy, mass and momentum between parcels is calculated under well-defined physical laws, and the whole thing swirls into motion.
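    A toy sketch of the distinction (my own illustration with made-up numbers, not any real GCM): the parcels evolve under a conservation law, with nothing fitted or calibrated to observations.

```python
import numpy as np

# A row of air 'parcels' exchanging heat under an explicit conservation
# law.  Real GCMs solve full 3-D fluid dynamics with forcings; nothing
# here is fitted to observational data.
temp = np.array([250.0, 270.0, 290.0, 300.0, 280.0])  # parcel temperatures, K
k = 0.1                                               # exchange rate per step (assumed)
total_energy = temp.sum()                             # conserved quantity

for _ in range(200):
    flux = k * np.diff(temp)   # heat flows from the warmer to the cooler neighbour
    temp[:-1] += flux
    temp[1:] -= flux

# The parcels relax toward a uniform temperature while the total
# (a stand-in for energy) stays fixed -- physics, not curve fitting.
print(temp, temp.sum() - total_energy)
```

    The point: asking whether such a model was "calibrated" on regional rainfall data misunderstands what kind of model it is.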

  • John Mashey // August 10, 2008 at 6:35 pm

    Although the following doesn’t fit Open Mind’s usual focus on fine technical analysis, this thread has had some material on disinformation:

    Synopsis of Naomi Oreskes, You CAN argue with the Facts - Full Talk, April 17, 2008 - Stanford U - 40 minutes.

    Naomi is an award-winning geoscientist/science historian, a Professor at UCSD and, as of July, promoted to Provost of the Sixth College there. She is also a meticulous researcher, as seen from past books, and from having reviewed a few chapters of the book she mentions in the talk. She unearthed some fascinating memos, although it is of course impossible to replicate the exhaustive database of tobacco documents.

    If you haven’t seen her earlier 58-minute video, “The American Denial of Global Warming”, you might watch that first. Its first half is a longer version of the development of climate science, and the second half is about the George C. Marshall Institute.

    This talk has about 10 minutes of background, and the rest is new material on the Western Fuels Association.

    The video production isn’t flashy, but it’s good enough. This, of course, is an informal seminar talk - for the thorough documentation, you’ll have to await the book.

    ======SUMMARY=====
    00:00 Background [fairly familiar, some overlap with earlier talk]

    10:30 1988, Hansen in Congress, IPCC starts

    11:05 “Tobacco strategy” to challenge science

    I.e., use of similar techniques, sometimes by same people

    14:50 Western Fuels Association (Power River coal companies)

    Sophisticated marketing campaign in test markets

    17:20 1991 - WFA creates ICE - Information Council for Environment

    ICE ~ Tobacco Industry Research Council (TIRC) -
    See Allan M. Brandt, “The Cigarette Century”

    21:00 WFA print campaign

    23:00 Scientists are more believable than coal people, so use scientists, create memes

    25:30 WFA produces video “The Greening of Earth”, provides many copies

    The Greening Earth Society (astroturf); more CO2 is good for the whole Earth. Excerpts from video.

    30:00- Video shows the Sahara turning completely green

    32:20- “Plants have been eating CO2 and they’re starved”
    Discussion of circumstances under which CO2 does help and illustration of marketing tactics, cherry-picking, etc. I.e., how does one use a few tidbits of real science to create an impression very different from the overview? Are there lessons for scientists?

    40:00 end

  • Hank Roberts // August 10, 2008 at 7:48 pm

    > GCMs are not statistical models

    Yep. Spelled out here:

    http://www.thebulletin.org/print/web-edition/roundtables/the-uncertainty-climate-modeling?order=asc

    —excerpt—–
    … this problem is not fundamental to climate models, but is a symptom of something more general: how scientific information gets propagated beyond the academy. What we have discussed here can be broadly described as tacit knowledge–the everyday background assumptions that most practicing climate modelers share but that rarely gets written down. It doesn’t get into the technical literature because it’s often assumed that readers know it already. It’s not included in popular science summaries because it’s too technical. It gets discussed over coffee, or in the lab, or in seminars, but that is a very limited audience. Unless policy makers or journalists specifically ask climate modelers about it, it’s the kind of information that can easily slip through the cracks.

    Shorn of this context, model results have an aura of exactitude that can be misleading. Reporting those results without the appropriate caveats can then provoke a backlash from those who know better, lending the whole field an aura of unreliability.

    So, what should be done? Exercises like this discussion are useful, and should be referenced in the future. But there’s really no substitute for engaging more directly with the people that need to know.
    —–end excerpt——–

  • Lazar // August 10, 2008 at 8:29 pm

    E.g., Allen, here is a plot of data from the Murray-Darling Basin from the CSIRO report. The black are observations, red are model values. Years without extreme precipitation are treated as missing values, and model values are the mean of the 13 GCMs, which is why the peaks of observational data are higher and the number of missing values greater than for model data. The model trend is up and the observed trend down, but neither trend is significant, and the observed is within the confidence interval of the modelled.

  • Brian D // August 10, 2008 at 11:23 pm

    This is a little off-topic for the proto-discussion forming here, but DeSmogBlog’s linked Open Mind. Seeing as WordPress isn’t particularly well-organized for browsing one blog by topic (and Tamino’s blogged nearly everything under a single “global warming” tag), I submitted a small selection of some of the more pertinent topics there (based on which inactivist arguments show up in the comments most commonly). The reaction from the resident denialists is, in a word, amusing (provided one can bring oneself to laugh at gross, irresponsible idiocy).

  • Luke // August 10, 2008 at 11:37 pm

    A challenge offered - (not by me - but $1000 up for grabs)

    http://www.jennifermarohasy.com/blog/archives/003315.html

    [Response: That's the same Jennifer Marohasy who recently posted We Aren’t Responsible for Rising Atmospheric Carbon Dioxide: A Note from Alan Siddons. 'Nuff said.]

  • Luke // August 11, 2008 at 12:20 am

    Of course - but $1000 quick bucks for any triers offered in that thread by Michael Duffy who runs a sceptic show on Australian ABC radio. Just FYI.

  • Allen // August 11, 2008 at 12:26 am

    Lazar,

    Thanks for the constructive reply.

    As a relative newcomer to “Climate Science” issues, I am having the probably-normal difficulties assessing conflicting viewpoints. A superficial look at (logical) arguments is not adequate, I find.

    Therefore, I floated my observation, hoping for a constructive reply — and I got one. Your reply gives me something to study and think about that should improve my understanding — once I do my “homework”.

    I’ll followup your references.

  • Allen // August 11, 2008 at 12:35 am

    Hank Roberts,

    Your “excerpt” hits one of the nails on the head. I find that, for the most part, articles and reports (pro and con) that I have scanned in the last few weeks leave out a lot of detail that I think should be in there — if the authors want to be convincing to an outsider. Sometimes, merely defining jargon in a glossary (or abbreviations at their first use) would help. Moreover, on the few occasions when I have dug down to the references (pro and con), they left out scientific steps necessary to be convincing to an ignorant but interested third party. Just an observation.

  • Bella Green // August 11, 2008 at 12:38 am

    Paul, thanks for the encouragement. I’ve done a couple of presentations and encountered the obligatory smart-a–, and didn’t lose my temper. So far so good…

    HB, I’ve recommended my students read Dr. Weart’s book before we start. And no, I’m not an Aussie, I just like words, and ‘walkabout’ is one of my favorites. I’m so very glad your cat survived his extended ‘walkabout’.

  • Hank Roberts // August 11, 2008 at 12:46 am

    Brian, I hope you asked our host’s okay before doing that. Else you’ll just increase his shitload — remember he’s got to shovel the stuff from people here who don’t come here to learn.

    A plea to web-competent folks — come up with something like a killfile or blackhole list or spam filter to which blog hosts can submit the IP addresses of persistent time wasters, to share flagging the copypasters. It’s not censorship to identify the sources, especially when they’re sockpuppeting. I suspect there are a lot fewer of them than the userids would indicate, from the amount of pure repetition. Google the obvious phrases, the ones that are the tastiest bait. They repeat themselves.

    Hosts, don’t bite. Their goal is to waste your time and delay, delay, delay your real work.

  • Luke // August 11, 2008 at 12:57 am

    But more importantly on the drought issue:

    Messy stuff - for starters the process of drought declaration and revocation needs to be modelled properly. For example I think in the state of Queensland if you declared drought on 12 month percentile 5 rainfall you might think that you end up in drought 5% of the time i.e. 5 years in 100 on “average”. But bad droughts are multi-year in nature. They persist. They persist as a “break” doesn’t occur. So if you use a revocation rule of reaching median rainfall you end up in drought 23% of the time (from memory of a Qld example). Much longer than 5%.
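    Luke's declaration/revocation hysteresis is easy to demonstrate with a toy simulation (synthetic AR(1) rainfall with assumed parameters, not Queensland data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 'annual rainfall' anomalies with mild year-to-year
# persistence -- assumed parameters, for illustration only.
n = 10000
rain = np.zeros(n)
for t in range(1, n):
    rain[t] = 0.3 * rain[t - 1] + rng.normal()

declare = np.percentile(rain, 5)   # declare drought below the 5th percentile
revoke = np.percentile(rain, 50)   # revoke only when rainfall recovers to the median

in_drought = np.zeros(n, dtype=bool)
state = False
for t in range(n):
    if not state and rain[t] <= declare:
        state = True                # declaration
    elif state and rain[t] >= revoke:
        state = False               # revocation
    in_drought[t] = state

# Far more than the naive 5% of years end up 'declared', because a
# declaration persists until rainfall recovers all the way to the median.
print(in_drought.mean())
```

    The exact fraction depends on the persistence assumed, but the qualitative point stands: with a median-recovery revocation rule, time spent drought-declared is always well above the declaration percentile.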

    From a severity point of view temperatures are up compared to previous droughts - supposedly making droughts worse. But if the southern circulation effects are to produce more high pressure systems over the continent, then wind may be less. And in the formulation of evaporation – solar radiation, wind and vapour pressure are more important than temperature. Having said that - (and speculating now) - high soil temperatures affect the vapour transport in soils making situations worse (So I’m told). So we don’t have evaporation sorted out in the modelling process.

    On the wind issue - http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2FJCLI4181.1

    I wouldn’t expect CSIRO to get the trend exactly right in their 1900-2000 runs. The years won’t match up. What they should try to get right is the spectral components of year to year and decadal variability. Something like a weather generator would do - the statistical properties should be correct but it won’t match any particular year.

    And there appear to be decadal, interdecadal and quasi-decadal modes in regional rainfall. Tamino will have to help me here, as I’m on a statistical hiding to nothing, but there is some feeling that AGW may be slowing down quasi-decadal variability - all speculation from myself as an agriculturalist, but here you are:

    Rainfall Variability at Decadal and Longer Time Scales: Signal or Noise?

    http://www.bom.gov.au/bmrc/clfor/cfstaff/sbp/journal_articles/holger_jclim_2005.pdf

    http://jedac.ucsd.edu/PROJECTS/PUBLISHED/GDR_PREDICTION/GDR_Prediction.pdf

    McPhaden, M. J., and D. Zhang, 2002: Slowdown of the meridional overturning circulation in the upper Pacific Ocean. Nature, 415, 603–608.
    Allan, R.J. (1985). The Australasian Summer Monsoon, Teleconnections, and Flooding in the Lake Eyre Basin. Royal Geographical Society of Australasia.

    So how good is the modelling of all the decadal influences …. hmmmm …

    So IMO - not enough depth by CSIRO - a fairly modest analysis of a serious issue by Stockwell, and I hope we don’t throw the baby out with the bathwater. The Australian Government Treasury’s problem is that they have been shelling out billions of dollars in drought aid for decades. Some landholders may have had 200 years’ worth of support by now. So it’s a very reasonable question as to whether the probability distribution has changed. And multiple interactions abound.

  • Joseph // August 11, 2008 at 1:15 am

    That’s the same Jennifer Marohasy who recently posted We Aren’t Responsible for Rising Atmospheric Carbon Dioxide: A Note from Alan Siddons. ‘Nuff said.

    That’s amazing. How do they do that? I’ve looked at what I think is equivalent data, and the pattern is really clear and undeniable (graph of detrended series here). I can’t believe they wouldn’t see this. I can only suppose there’s some intentional obfuscation there.

    [Response: Ya think?]

  • trevor // August 11, 2008 at 4:14 am

    Lazar and Luke. Why don’t you pop over to David Stockwell’s blog where he has just posted a more detailed discussion on the CSIRO report. I am sure that he will be happy to engage with you.

    http://landshape.org/enm/cherry-picking-in-australia/

  • Hank Roberts // August 11, 2008 at 4:42 am

    Chuckle. Or wait till he can get his thesis published in a refereed journal, whichever makes more sense to you for evaluating scientific claims.

    There’s always E’n'E.

  • Luke // August 11, 2008 at 5:00 am

    Argh - should have checked instead of using memory - my bad - the 24% was for decile one declaration scenario (percentile 10)

    Declared percentile 5 annual rainfall - revoked at percentile 30 rainfall - 8.2% area of the state of Queensland on average drought declared; revoke at percentile 50 rainfall 13.3% on average declared

    for simulated pasture instead of rainfall the percentages were 12.4% 17.8% respectively.

    from: National Drought Forum 2003: Science for Drought: Brisbane Australia pp 141-151

    Day et al.

    Simulating historical droughts: some lessons for
    drought policy

    1964-2003

  • michel // August 11, 2008 at 7:19 am

    The reported observation that melting of the arctic ice cap accelerated in mid-July, thus putting things back on track to meet or break last year’s low ice extent record? Yes, that’s true.

    See, this is what puzzled me. The article is at

    http://www.guardian.co.uk/environment/2008/aug/10/climatechange.arctic

    and it referenced an organization whose site is at

    http://nsidc.org/arcticseaicenews/

    where I can’t find any reference to dramatic events of the week before the dateline of the piece. The dramatic events seem not to have been reported any place else. The piece is datelined August 10, so I was expecting something to have happened in the first week in August. But not only did it not seem to be on the site, the charts in the above link don’t seem to show 2008 catching up with 2007. I can’t find any of the quotes from the article or anything approximating them, either.

    So what gives? Is it real?

    [Response: Perhaps Maslowski and/or Serreze are including up-to-date data on sea ice thickness (which isn't readily available as far as I know). The news story may be based in part on a presentation by Maslowski in June. Other experts don't agree, Stroeve expects arctic sea ice to last until about 2030.

    But the sea ice extent for this year is on track to be the 2nd-lowest all-time but not to break the all-time low observed last year. However, there's another month of the melt season yet to come, and they may have information that leads them to believe it'll break last year's record -- the ice is a lot thinner this year than last according to all reports I've seen. Extent only covers 2 dimensions, thickness is the missing 3rd dimension, so extent data alone don't tell the whole story.

    In general, it's wise not to take reports in newspapers too seriously; journalists have a habit of pronouncing every scientist's opinion as the latest authoritative truth, and of blowing things out of proportion. They also have a habit of emphasizing the dramatic, at the sacrifice of perspective and rigor. Websites run by scientists (like RealClimate or Cryosphere Today) are a better bet for reliable information than newspaper articles.]

  • Matt // August 11, 2008 at 8:20 am

    Hank: Chuckle. Or wait til he he can get his thesis published in a refereed journal, whichever makes more sense to you to evaluate scientific claims.

    There’s always E’n’E.

    And let me guess, a similarly phrased paragraph about would result in yet another copy/paste soliloquy from you on “trolls.” Note that while I do respect but often disagree with the intellectual capabilities of the first 3, I cannot support you on KennyG.

    But of course, YOU aren’t ever one of those copy/paste trolls. Are you. It’s always the person you don’t agree with that is the troll. Perhaps the label “troll” is just a crutch to help you deal with things that are distasteful to you.

    It reminds me that those that are usually the first to beg for tolerance are usually the least tolerant. And those that beg for giving are usually the most stingy.

    Ironic, is all.

  • Petro // August 11, 2008 at 1:58 pm

    michel asked:
    “So what gives? Is it real?”

    As it was explained to you by dhogaza above, the behaviour of the Arctic has been atypical since mid-July. If you do not believe the scientists at NSIDC or the commenters here, you can always turn to the primary data. From the link below:
    http://rapidfire.sci.gsfc.nasa.gov/realtime/2008224/
    you can access satellite photos of the Earth since April 2001. Identify the relevant Arctic pictures and compare them between the years. It is evident even to a layman that the Arctic ice this year is different.

  • Hank Roberts // August 11, 2008 at 2:02 pm

    Matt, read David Brin’s piece.
    Yes, I know it bothers you that people don’t consider bloggers reliable sources of information.

    But there are very few bloggers who can cite sources, read science papers knowledgeably, and have a track record of being able to teach well.

    So I rely on refereed journals because while that’s not sufficient to know someone really knows what they’re talking about, it is at least a first hurdle passed and they are participating in a forum where knowledgeable people will correct their mistakes _in_the_journal_.

    If there’s no publication record, and nobody I consider trustworthy vouches for the blogger, then it’s just another blogger.

    No offense, man, but I don’t consider you a trustworthy source about published science, I don’t know you, I don’t know what if anything you’ve published, we haven’t any friends in common, and all I see is your opinion.

    Point to science journals and I’ll look at what you refer to. Point to bloggers and, yawn, maybe, but life is short and there’s plenty of good stuff to read.

  • dhogaza // August 11, 2008 at 2:24 pm

    Here’s the NSIDC graph that may’ve triggered that story. As you can see the decrease in ice extent accelerated slightly about the first week of the month, while last year at about this time we saw the curve starting to flatten a bit. I was wrong when I said mid-July, it wasn’t that early…

    However now it’s flattened out a bit again.

    A few days ago, some people were speculating that the acceleration in melting might continue and that the two lines (last year and this) might cross in September after all.

    I think what you’re seeing is some people obsessing over short-term (days!) fluctuations in the rate at which the arctic ice cap is melting. It’s like a horse race - who will win, 2007 or 2008? Treat it as fun, nothing more.

    Note that the latest piece on the NSIDC site is dated August 1, before that little uptick in the rate of melt. Obviously they themselves didn’t see it as being worthy of comment, as you’ve noted. And, it’s not, really, unless you’ve got a bet out as to whether or not the 2008 minimum will beat last year’s (and there are some people out there with public bets, so, sure, they’re going to be keenly interested).

  • Dano // August 11, 2008 at 2:29 pm

    Matt:

    your head fake fails to distract away from the fact that denialists cannot discuss “their” “ideas” in refereed journals.

    Best,

    D

  • Hank Roberts // August 11, 2008 at 2:34 pm

    PS, Matt, if this was supposed to have some extra words in it, and was something relevant, try again:
    “a similarly phrased paragraph about would result in yet another copy/paste soliloquy from you on ‘trolls.’ Note that while I do respect but often disagree with the intellectual capabilities of the first 3, I cannot support you on KennyG.”

    I assumed that was failed snark and ignored it, figuring you just dropped some words while editing, but coming back to it - was it supposed to mean something serious?

    Try again if so. You’re coming up

  • Hank Roberts // August 11, 2008 at 2:42 pm

    Hm. WP does seem to be dropping edits.
    And Brin’s website isn’t responding.
    Bugs in the intartubes again?

    Anyhow, Matt, look this one up. He’s seriously addressing what’s missing in blogging compared to older areas where people disagree, and talks about why science done in the journals works:

    David Brin’s article ‘Disputation Arenas: Harnessing Conflict and … It was lead article in the American Bar Association’s Journal on Dispute Resolution …
    http://www.davidbrin.com/disputationarticle1.html

  • Hank Roberts // August 11, 2008 at 3:11 pm

    Here, to save you the trouble of reading it all, this is the core of Brin’s piece, and why blogging isn’t capable of resolving scientific issues (yet):

    ——excerpt—–
    What each of the older accountability arenas has — and today’s Internet lacks — is centripetal focus. A counterbalancing inward pull. Something that acts to draw foes together for fair confrontation, after making their preparations in safe seclusion.

    No, I’m not talking about goody-goody communitarianism and “getting along.” Far from it. Elections, courtrooms, retail stores and scientific conferences all provide fierce testing grounds, where adversaries come together to have it out… and where civilization ultimately profits from their passion and hard work.

    This process may not be entirely nice. But it is the best way we ever found to learn, through fair competition, who may be right and who is wrong.

    Yes, counter to the fashion of postmodernism, I posit the existence and pertinence of “true and false” — better and worse — needing no more justification than the pragmatic value these concepts have long provided. In science you compare theory to nature’s laws. … In a myriad fields, this process slowly results in better theories, notions, laws and products. Again, it is murky and inefficient… and it works.

    My point is that today’s Internet currently lacks good processes for drawing interest groups — many of them bitterly adversarial — out of those passworded castles to arenas where their champions can have it out, where ideas may be tested and useful notions get absorbed into an amorphous-but-growing general wisdom.

    Some claim that such arenas do exist on the Net — in a million chat rooms and Usenet discussion groups — but I find these venues lacking in dozens of ways. Many wonderful and eloquent arguments are raised, only to float away like ghosts, seldom to join any coalescing model. Rabid statements that are decisively refuted simply bounce off the ground, springing back like the undead. Reputations only glancingly correlate with proof or ability. Imagine anything good coming out of science, law, or markets if the old arenas ran that way!

    … I am selfish and practical. I want something more out of all the noise.

    Eventually, I want good ideas to win. …

    —–end excerpt——

    That’s why science is done in refereed journals.

  • Lazar // August 11, 2008 at 4:14 pm

    Atmospheric Warming and the Amplification of Precipitation Extremes

    Richard P. Allan and Brian J. Soden
    Science DOI: 10.1126/science.1160787

    Abstract:

    “Climate models suggest that extreme precipitation events will become more common in an anthropogenically warmed climate. However, observational limitations have hindered a direct evaluation of model projected changes in extreme precipitation. Here, we use satellite observations and model simulations to examine the response of tropical precipitation events to naturally driven changes in surface temperature and atmospheric moisture content. These observations reveal a distinct link between rainfall extremes and temperature, with heavy rain events increasing during warm periods and decreasing during cold periods. Furthermore, the observed amplification of rainfall extremes is found to be larger than predicted by models, implying that projections of future changes in rainfall extremes due to anthropogenic global warming may be underestimated.”

    (h/t abelard)

  • Joseph // August 11, 2008 at 5:13 pm

    Some claim that such arenas do exist on the Net — in a million chat rooms and Usenet discussion groups — but I find these venues lacking in dozens of ways.

    I had the feeling that online article predated blogs. It’s from 2000, so that’s pretty much the case.

  • Hank Roberts // August 11, 2008 at 5:46 pm

    http://www.gebco.net/data_and_products/gebco_world_map/images/gda_world_map_small.jpg

    Good bathymetric (depth) map of the Arctic, helps make clear that it’s a deep bowl with relatively narrow, and shallow, connections to the rest of the world’s oceans.

  • Hank Roberts // August 11, 2008 at 7:17 pm

    Joseph wrote:

    > that article predated blogs …

    Which have even less ability to handle bogus crap than the old Usenet newsgroups (which deprecated crossposting copypasted stuff).

    You understand he’s pointing out how it took centuries to make the other fora capable of sorting out and disposing of the crap, right? And how science does it?

    You know the need to look for subsequent references to material. Here:

    http://davidbrin.blogspot.com/2006/12/todays-centrifugal-net-is-not-arena-or.html
    Brin says, more recently:

    ——excerpt follows——-

    “Some of you have read my extensive essay - written for the American Bar Association - about the underlying common traits of markets, science, courts and democracy — the “accountability arenas” that have empowered free individuals to compete and create without tumbling quickly into repression and outrage…. for the first time, ever. Alas, over the years since, I have found that people have trouble perceiving some of what the paper describes… or why today’s internet just does not yet have what it takes to empower us with a “fifth arena.”

    … needed tools are absolutely missing.

    Oh, our would-be masters want it this way. Those who would return us to a style of feudalism. They would let us wrangle and spume and EXPRESS ourselves, endlessly online….
    —-end excerpt——

  • Joseph // August 11, 2008 at 11:07 pm

    You understand he’s pointing out how it took centuries to make the other fora capable of sorting out and disposing of the crap, right? And how science does it?

    The other fora have limitations too. There’s plenty of poor research that passes peer-review. I can provide a number of examples.

    Of course, peer-review is a good thing. It lends itself to some challenges, though, like accusations of establishment bias.

    Blogs don’t have anything like peer-review, but there are some areas where they are clearly an innovation, e.g. in how quickly feedback and corrections can be produced. A blog with an open comment policy can have rapid response reader-review, which is also open, as opposed to peer-review.

    In my experience, a lot of times you can tell which blogs are crank blogs by the way they arbitrarily delete comments, by the way they deal with corrections, and so on. Granted, there’s no objective way at the moment to tell a good blog from a bad blog.

    The scientific literature is for scientists. Blogs, on the other hand, can be for scientists but also lay people. I don’t have any hard evidence of this, but recently there was one of those polls in a blog I frequent which asked the following question (paraphrasing from memory).

    “Do you feel that the scientific community has done a good job of communicating the safety of vaccines?”

    The answer that won overwhelmingly was this:

    “No, but blogs like Orac’s help.”

    (Orac’s blog is a different blog from the one that had the poll.)

    So I think that at least among blog readers, blogs have a lot of swaying power, more so than standard scientific authority a lot of times. That’s just how things go. Innovations are invented, and they can revolutionize the way we do things.

  • Hank Roberts // August 12, 2008 at 12:21 am

    re Lazar’s posting above, I’ve mentioned this one before; it’s a good place to start to find other paleo links to rainfall changes the last time there was a huge greenhouse gas excursion in a short period of time.

    It’s one of them feedbacks — huge rainfalls, extreme erosion, lots of fresh carbonate rock exposed, more rainfall, more extreme weathering, biogeochemical cycling.

    The lesson from the past is: don’t go there.

    http://ic.ucsc.edu/~jzachos/eart120/readings/Schmitz_Puljate_07.pdf

  • Matt // August 12, 2008 at 4:34 am

    Hank: I assumed that was failed snark and ignored it, figuring you just dropped some words editing, but coming back, was it supposed to mean something serious?

    Yeah, it ate a bit about if someone made the comment you made but instead of your target submitted Gavin, Mann or KennyG, you would have freaked out and called them a troll. Alas, the moment is lost. Not sure if you like KennyG or not. I don’t, so I’m pretty sure you do. :)

  • Hank Roberts // August 12, 2008 at 7:27 pm

    I don’t even know what ‘KennyG’ is!
    So I probably don’t want to know …

    One for Tamino:

    “… powerful statistical tools that allow scientists to run approximations of a climate model many times extremely quickly, providing … a large set of results with which to calculate probabilities…. known in the trade as ‘emulators’ …”
    at p. 18 of the PDF file:

    http://www.nerc.ac.uk/publications/planetearth/2008/summer/sum08-rapid.pdf

    http://www.nerc.ac.uk/publications/planetearth/2008/summer/
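    For readers unfamiliar with the term, an emulator in this sense is just a cheap statistical surrogate fitted to a handful of expensive model runs, which can then be evaluated thousands of times almost for free. A minimal sketch (the toy “model” and the quadratic surrogate are illustrative assumptions on my part, not anything from the NERC article):

```python
import numpy as np

def expensive_model(s):
    # Stand-in for a costly climate-model run: response vs. a forcing parameter.
    return 1.5 * s + 0.3 * s**2

# A few "expensive" training runs...
train_x = np.linspace(0.0, 4.0, 5)
train_y = expensive_model(train_x)

# ...fit a cheap quadratic emulator to them...
emulator = np.poly1d(np.polyfit(train_x, train_y, deg=2))

# ...then evaluate it many times cheaply, e.g. to build up probability estimates.
samples = np.random.default_rng(0).uniform(0.0, 4.0, 10_000)
responses = emulator(samples)
print(responses.mean())  # approximate mean response over the sampled forcings
```

    Real emulators (Gaussian processes and the like) are far more sophisticated, but the economics are the same: a handful of slow runs buys you millions of fast approximate ones.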

  • David B. Benson // August 12, 2008 at 9:49 pm

    Tree-ring based reconstructions of northern Patagonia precipitation since AD 1600

    http://hol.sagepub.com/cgi/content/abstract/8/6/659

    Seems to be behind a paywall for me, but the abstract is interesting.

  • Hank Roberts // August 13, 2008 at 12:48 am

    http://features.csmonitor.com/environment/2008/08/12/are-they-really-going-to-gut-the-endangered-species-act/#comment-2456
    ——-excerpt——-

    … the proposed rules would prohibit federal agencies from assessing the greenhouse gas emissions from construction projects.


    After the AP broke the story, the Department of the Interior released a statement describing the proposed changes as “narrow.”

  • Duae Quartunciae // August 13, 2008 at 3:18 am

    Does this blog have a feed?

    [Response: I don't know! Anyone?]

  • Hank Roberts // August 13, 2008 at 4:43 am

    > feed

    Usual caveat, I know nothing, Nothing about this.

    I looked it up:
    http://codex.wordpress.org/WordPress_Feeds

  • Hank Roberts // August 13, 2008 at 4:47 am

    Oh, and here’s a site whose author figured it out (a tech book writer); here’s her explanation:
    http://www.mariasguides.com/2007/11/16/site-topics-available-as-rss-feeds-and-e-mail-subscriptions/

  • Duae Quartunciae // August 13, 2008 at 5:25 am

    Thanks… as my father says: When all else fails, read the manual.

    I have found your feed, and added it to my reader. The link I used for your feed is tamino’s RSS feed.

  • cce // August 13, 2008 at 7:36 am

    Not to cause controversy (actually, yes), the Auditors are working Wahl and Ammann (mostly Ammann) pretty hard lately. Many accusations and numbers thrown about, i.e. calibration and verification. I understand about 2% of this stuff, and I question the objectivity of the source. A post, perhaps?

  • David B. Benson // August 14, 2008 at 12:59 am

    “These data demonstrate that the MWP and LIA are global climate events, not only restricted to the Northern Hemisphere.”

    from

    http://www.cosis.net/abstracts/EGU2007/01568/EGU2007-J-01568.pdf

    a two page abstract.

  • Barton Paul Levenson // August 14, 2008 at 1:13 pm

    I can’t resist writing this in here, even though I just wrote it in at RealClimate. Call me a spammer.

    I found another mistake by Miskolczi. His equation (4) is:

    AA = SU·A = SU(1 − TA) = ED

    where

    AA = Amount of flux Absorbed by the Atmosphere
    SU = Upward blackbody longwave flux = sigma Ts^4
    A = “flux absorptance”
    TA = atmospheric flux transmittance
    ED = longwave flux downward

    These are simple identity definitions. I do wonder why Miskolczi used the upward blackbody longwave for the amount emitted by the ground when he should have used the upward graybody longwave — he’s allegedly doing a gray model, after all. Apparently he forgot the emissivity term, which is about 0.95 for longwave for the Earth. One more hint that he doesn’t really understand the distinction between emission and emissivity.

    Note that he seems to be saying the downward flux from the atmosphere (ED) must be the same as the total amount of longwave absorbed by the atmosphere (AA).

    The total inputs to Miskolczi’s atmosphere are AA, K, P and F, which respectively stand for the longwave input from the ground, the nonradiative input (latent and sensible heat) from the ground, the geothermal input from the ground, and the solar input. P is negligible and I don’t know why he even puts it in here unless he’s just trying to be complete. He’s saying, therefore, if you stay with conservation of energy, that

    AA + K + F = EU + ED

    Now, from Kiehl and Trenberth’s 1997 atmospheric energy balance, the values of AA, K, and F would be about 350, 102, and 67 watts per square meter, respectively, for a total of 519 watts per square meter. EU and ED would be 195 and 324, total 519, so the equation balances.

    But for Miskolczi’s equation (4) to be true, since AA = ED, we have

    K + F = EU

    That is, the sum of the nonradiative fluxes and the absorbed sunlight should equal the atmospheric longwave emitted upward. For K&T97, we have 102 + 67 = 195, or 169 = 195, which is an equation that will get you a big red X from the teacher.

    There is no reason K + F should equal EU, therefore Miskolczi’s equation (4) is wrong. Q.E.D.
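    The substitution above is easy to check numerically. A minimal sketch using the Kiehl & Trenberth (1997) values quoted in the comment (variable names simply mirror the definitions given there; nothing here is from Miskolczi’s own work):

```python
# All fluxes in W/m^2, from Kiehl & Trenberth (1997) as quoted above.
AA = 350.0  # longwave flux absorbed by the atmosphere
K  = 102.0  # nonradiative (latent + sensible) heat from the ground
F  = 67.0   # solar flux absorbed by the atmosphere
EU = 195.0  # longwave flux emitted upward by the atmosphere
ED = 324.0  # longwave flux emitted downward by the atmosphere

# Conservation of energy: atmospheric inputs equal outputs
# (geothermal P is negligible and omitted).
assert AA + K + F == EU + ED  # 519 == 519, the balance holds

# Miskolczi's equation (4) asserts AA = ED. Substituting that into the
# balance leaves K + F = EU, which the observed values contradict:
print(K + F, "vs", EU)  # prints 169.0 vs 195.0
```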

  • Petro // August 15, 2008 at 10:58 pm

    Since there are several denialists on this site among the commenters, I would like to ask you a couple of questions:

    What evidence do you have that Anthony Watts is telling the truth?

    Why do you consider him a better source of knowledge on climate science than the climate scientists?

    These questions have puzzled me for a long time. Please help me to understand!

  • Barton Paul Levenson // August 16, 2008 at 1:37 pm

    Correction — in the next to last paragraph, “upward” should read “downward.”

    *sigh*

  • Hank Roberts // August 16, 2008 at 4:02 pm

    Talk of the Nation, August 15, 2008 · Scientists studying many different parts of the planet’s ecosystems are warning that Earth may be on the verge of a sixth major mass extinction event.

    http://www.npr.org/templates/story/story.php?storyId=93636633

    “I have had scientists who have pulled me over to the side and said in private much the, what you’re saying, ‘The situation is much worse than we are willing to talk about in public … we don’t want to scare people’”
    – Ira Flatow at 05:55

    “I remember the way the country organized after Pearl Harbor…Americans changed their entire economy in a year…. You can do it if you have the right incentive, and fear is, ought to be a great incentive if we care anything about children and grandchildren … there’s a lot of reason to be scared for them…. Have you heard anything about ecosystem services?”
    – Paul Ehrlich

  • Hank Roberts // August 16, 2008 at 4:18 pm

    Here’s the link to the PNAS article talked about in the NPR Science Friday audio file:

    http://www.pnas.org/content/early/2008/08/08/0801911105.abstract

    Here’s Wired online:
    NOTE, GOOD: links to the above and many other related science papers
    (pointing out how poorly this story is being covered)
    http://blog.wired.com/wiredscience/2008/08/the-sixth-extin.html

  • Hank Roberts // August 16, 2008 at 4:50 pm

    Here’s a collection of presentations from the National Academy on the current extinction:
    http://www.nasonline.org/site/PageNavigator/SACKLER_biodiversity_program

    Amphibians, where climate change has correlated with a fungus problem:

    http://progressive.atl.playstream.com/nakfi/progressive/Sackler/sackler_12_07_07/david_wake/david_wake.html

  • TCO // August 16, 2008 at 4:51 pm

    Paul Ehrlich? He’s been wrong on the disaster predictions before.

  • Dano // August 16, 2008 at 4:52 pm

    Hank,

    I’m an urban ecology guy. Green infrastructure, ecosystem services’ CBAs, built environment greening, nearby nature. I speak nationally several times a year on the topic of how to do green infra.

    What you quoted from Ira Flatow and Paul Ehrlich is absolutely correct. PNAS has a recent special feature on ecosystem services, and here is Paul’s latest, with a hopeful note at the end, after the reader digests this passage:

    Yet despite a ballooning number of publications about biodiversity and its plight, there has been dispiritingly little progress in stanching the losses—so little that some commentators have characterized applied ecology as ‘‘an evermore sophisticated refinement of the obituary of nature’’ (18). As conservation-oriented scientists, we are responsible for biodiversity. Its loss is our failure.

    [ pp 11579-80. emphasis added, footnotes omitted]

    “Having an ecological education means living in a world of wounds.” — Aldo Leopold

    ——–

    WRT feeds, if one is using FireFox, one can see the RSS feed logo in the browser window. Clicking on the logo allows a subscription. Yet another reason to use Moe-ziller.

    Best,

    D

  • Hank Roberts // August 16, 2008 at 5:29 pm

    TCO, we are currently IN the disaster Ehrlich was worrying about 40 years ago. And the trend is awful. Go talk to the nearest ecologist about it.

    The same bullshitters had a stable of lying crap artists working to fool people then as now.

    Look at the numbers.

  • Hank Roberts // August 16, 2008 at 5:31 pm

    Here, TCO. I realize how incredibly hard it is for people to understand this and how hard it is to believe that their personal experience isn’t telling them the state of the whole world.
    http://thingsbreak.wordpress.com/2008/08/12/a-case-of-the-mondays/

  • Hank Roberts // August 16, 2008 at 6:09 pm

    http://www.sfgate.com/cgi-bin/blogs/green/detail?blogid=49&entry_id=29113
    ____excerpt____

    … to comment on the Bush administration’s attempt to gut the Endangered Species Act …, turns out … the Fish and Wildlife Service is no longer accepting comments by email (H/T Grist). (It seems to have something to do with the 600,000 comments they got about protecting polar bears—the very thing they’re trying not to do.)

    That’s right, they want you to waste some paper trying to speak out for the environment…. And by name: They’ll be posting all the personal information you provide on their web page, which they apparently know how to use even though they choose not to.
    ————————–

    http://switchboard.nrdc.org/blogs/awetzler/bush_administration_decides_to.html

  • Petro // August 16, 2008 at 6:10 pm

    TCO tells:

    “Paul Ehrlich? He’s been wrong on the disaster predictions before.”

    Give us justification for your opinion.

  • TCO // August 16, 2008 at 9:14 pm

    Predictions and Quotes
    “In ten years all important animal life in the sea will be extinct. Large areas of coastline will have to be evacuated because of the stench of dead fish.” Paul Ehrlich, Earth Day 1970

    “Population will inevitably and completely outstrip whatever small increases in food supplies we make, … The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years.” Paul Ehrlich in an interview with Peter Collier in the April 1970 issue of the magazine Mademoiselle.

    “By…[1975] some experts feel that food shortages will have escalated the present level of world hunger and starvation into famines of unbelievable proportions. Other experts, more optimistic, think the ultimate food-population collision will not occur until the decade of the 1980s.” Paul Ehrlich in special Earth Day (1970) issue of the magazine Ramparts.

    “The battle to feed humanity is over. In the 1970s the world will undergo famines . . . hundreds of millions of people (including Americans) are going to starve to death.” (Population Bomb 1968)

    “Smog disasters” in 1973 might kill 200,000 people in New York and Los Angeles. (1969)

    “I would take even money that England will not exist in the year 2000.” (1969)

    “Before 1985, mankind will enter a genuine age of scarcity . . . in which the accessible supplies of many key minerals will be facing depletion.” (1976)

    “By 1985 enough millions will have died to reduce the earth’s population to some acceptable level, like 1.5 billion people.” (1969)

    “By 1980 the United States would see its life expectancy drop to 42 because of pesticides, and by 1999 its population would drop to 22.6 million.” (1969)

    “Actually, the problem in the world is that there is much too many rich people…” - Quoted by the Associated Press, April 6, 1990

    “Giving society cheap, abundant energy would be the equivalent of giving an idiot child a machine gun.” - Quoted by R. Emmett Tyrrell in The American Spectator, September 6, 1992

    “We’ve already had too much economic growth in the United States. Economic growth in rich countries like ours is the disease, not the cure.” - Quoted by Dixy Lee Ray in her book Trashing the Planet (1990)

    ————————-

    http://en.wikipedia.org/wiki/Paul_R._Ehrlich

  • Matt // August 16, 2008 at 9:20 pm

    Petro: Give us justification for your opinion.

    http://en.wikipedia.org/wiki/Ehrlich-Simon_bet

  • Matt // August 16, 2008 at 9:27 pm

    Hank: TCO, we are currently IN the disaster Ehrlich was worrying about 40 years ago. And the trend is awful. Go talk to the nearest ecologist about it.

    Ehrlich predicted half our species would be lost by 2000, and that all would be lost between 2010 and 2025.

    I don’t think you can claim things are playing out as he has predicted.

    The man gets an F- for prediction accuracy.

  • dhogaza // August 16, 2008 at 9:37 pm

    Ehrlich predicted half our species would be lost by 2000, and that all would be lost between 2010 and 2025.

    I’d like a direct citation to where he said all life on earth would be extinct by 2025.

  • dhogaza // August 16, 2008 at 9:38 pm

    Hank’s comment doesn’t declare that Ehrlich was right regarding the timeframe, but he was certainly right about the shape of the curve.

    We are in the midst of a major extinction event, and the pace is accelerating.

  • Hank Roberts // August 16, 2008 at 11:10 pm

    Matt:

    http://scienceblogs.com/intersection/jackson%282008%29.jpg

    Look at it.

    Do you feel anything, knowing these numbers?

    Do you feel anything, knowing you’ve been wrong?

  • TCO // August 17, 2008 at 12:06 am

    Yeah, like I said. He’s been wrong on the predictions before.

    He’s a nutter. A touchstone. An alarmist version of a skeptic kook. or like a real socialist is to a liberal fellow traveler. Like Che of t shirt fame. Yum, yum….

  • dhogaza // August 17, 2008 at 12:26 am

    I wouldn’t call him a nutter … and yes, he’s been hyperbolic but you do also realize that quote-mining a few hundred words from several books doesn’t necessarily paint a portrait, I should think …

  • dhogaza // August 17, 2008 at 12:36 am

    But let’s see, what lesson is there for climate science skeptics, here?

    Paul Ehrlich is an extremely good scientist. His dabbling in predicting future food supplies and the like has been wrong, no doubt about it. You’d *expect* skeptics to take to heart the lesson that an expert in one field may well fall short when dabbling in another. For instance, McIntyre in climate science. Or, say, Lomborg about anything other than political science.

    His predictions regarding extinction rates aren’t really wrong in the same sense. He should not have staked himself to a timeframe. Here we sit in 2008, with increasing evidence that half of the world’s species may be committed to eventual extinction today. It may take the rest of the century for the story to play out, but the gist of the story is no different than the story told by Ehrlich: we’re dooming an inordinate percentage of our biological heritage to extinction.

    Of course, while Ehrlich was wrong about the scale of famine in the world in the 1970s and 1980s, those who were promising that technology would end world hunger in a similar timeframe were just as wrong. But we don’t hear about that so much from the right, do we?

  • Hank Roberts // August 17, 2008 at 12:47 am

    And yet, the cold numbers say this:

    http://scienceblogs.com/intersection/jackson%282008%29.jpg

  • dhogaza // August 17, 2008 at 3:02 am

    Oh, but Ehrlich did talk about extinction. Hank, what you’re linking supports the notion of “commitment to extinction”.

    In the long term view there’s no distinction. But in efforts to debunk Ehrlich, it’s everything, as if a few centuries vs. his two or three decades makes a difference.

    Wrong, from all we know, but hmmm … trivially true.

  • Dano // August 17, 2008 at 5:39 am

    The thing wingnuts don’t want to believe is that if Ehrlich was off by, say, 40-50 years, what’s that % wrongness?

    IOW: the denialists are grasping at straws.

    Best,

    D (whose grad advisor was postdoc in Ehrlich’s lab and who has been lucky enough to have Paul explain this stuff in person, face-to-face).

  • TCO // August 17, 2008 at 2:05 pm

    Dhog/Hank: It’s a balance. If you make less definite predictions, or more tentative ones, then you lose all the excitement level. If Ehrlich really believed the extreme predictions that were wrong, he had a wrong world view and should learn from his mistake. If he didn’t believe them….well that’s just demagoguery. In any case, I see a conjunction of science with PR…where the science suffers. Kinda reminds me of Climate Audit.

  • dhogaza // August 17, 2008 at 3:52 pm

    Well, Ehrlich openly admits that many (not all) of his predictions didn’t come to pass.

    Unlike the guy who runs Climate Audit, he is able to admit when he’s been wrong …

  • Dano // August 17, 2008 at 4:02 pm

    If Ehrlich really believed the extreme predictions that were wrong, he had a wrong world view and should learn from his mistake. If he didn’t believe them….well that’s just demagoguery. In any case, I see a conjunction of science with PR…where the science suffers. Kinda reminds me of Climate Audit.

    In 2-4 generations (IMHO), folk will look back at statements like these and shake their heads in wonder, asking why so few were listening, and why that society thought the information wasn’t worth listening to just because the timelines were a few decades off.

    I’d call it pathetic, but it is more accurately the human condition - calling the human condition pathetic doesn’t do anything useful.

    Best,

    D

  • Hank Roberts // August 17, 2008 at 4:18 pm

    TCO, you’re hockey sticking again.

    Look at the extinction numbers now, after 20 years of study. Don’t blow off what’s known now because the 20 year old early work, when the concern was first raised, was imperfect.

    Look at the world and how much is already lost.
    Did you read those numbers linked above? Can you imagine how an ecology can work with such losses?

    “And then . . . they came for me . . . And by that time there was no one left to speak up.”

  • TCO // August 17, 2008 at 4:27 pm

    You’re living a meme, Hank. The guy’s attention-getting, publicity-grabbing predictions were wrong. If he had predicted the truth and taken away all the England-will-be-gone-by-2000 silliness, he would not have had any notoriety.

  • Lee // August 17, 2008 at 6:36 pm

    In practical terms, Ehrlich does not matter. What Ehrlich said back then DOES NOT MATTER.

    What matters is what has actually happened over the last 10, 50, 100 years - which is INDEPENDENT of what Ehrlich predicted.

    And what is happening is that we are seeing ecosystem and ecosystem service collapses on massive scales - that list that just got posted is chilling. Massive worldwide fisheries collapses, oceanic dead zones, tropical forest removal and collapse, and on and on. We are seeing extinction, commitment to extinction, simplification of ecosystems, alteration even of species - N.A. cod, for example, have under massive fishing pressure evolved into a smaller, earlier-reproducing, and likely shorter-lived form.

    And that just scratches the surface.

    On top of all this already observed shit, we are INCREASING the stresses we put on ecosystems. We are co-opting even more resources, even more surface area, even more of the world’s freshwater supplies, to human uses. We have created a society and economy that is dependent on the behaviors that are causing those stresses.

    CO2-induced warming and ocean acidification are just one more set of pressures on already badly damaged and stressed ecosystems and ecosystem services - but they would likely be huge all on their own. We aren’t adding them on their own - we are adding them on top of all this other damage we’ve done to the services and systems that support our way of life on this planet.

    And yet, in the face of this documented pattern of damage or collapse of ecosystem service after ecosystem service, of increasing anthropogenic pressure on the natural structures that support our cultures and societies, we somehow aren’t engaging the hard conversations about how we diminish our impact on those damaged systems, how we mitigate and ameliorate that damage in ways that can continue to give us acceptable and good standards of living on this planet.

    Instead we are engaged in this pitiful argument about whether we are actually having an impact at all, while the increasing evidence for an increasing rate of increasingly heavy damage piles up around us.

    What Ehrlich said 30 years ago doesn’t alter any of this, not one iota.

  • cce // August 17, 2008 at 8:14 pm

    Although I think it’s obvious to the point of being “fact” that we’re in the midst of an extinction event, Ehrlich’s predictions were clearly hyperbole. Unfortunately, “The Boy Who Cried Wolf” ends with the flock being eaten up.

  • Deech56 // August 17, 2008 at 8:32 pm

    To follow up on cce’s post of August 13, 2008/ 7:36 am : http://bishophill.squarespace.com/blog/2008/8/11/caspar-and-the-jesus-paper.html

    If there’s anything posted elsewhere, a pointer would be helpful. Thanks.

  • Hank Roberts // August 17, 2008 at 10:45 pm

    http://www.cgd.ucar.edu/ccr/ammann/millennium/

    Paleoclimate Reconstructions

    UNDER REVISION
    An evaluation using real world and “pseudo” proxies based on coupled GCM output

    Collaboration between:

    NCAR CGD Paleo //
    NCAR IMAGe //
    NCAR Assessment Initiative

    Goals
    (1) Provide transparent multi-platform code of past climate reconstruction techniques to the community.
    (2) Use state-of-the-art coupled Atmosphere-Ocean General Circulation Model output to test reconstruction techniques used in context with proxy data.

    Two of the four sections are hyperlinked to date:

    http://www.cgd.ucar.edu/ccr/ammann/millennium/AW_supplement.html

    http://www.cgd.ucar.edu/ccr/ammann/millennium/SignificanceThresholdAnalysis/

  • Hank Roberts // August 17, 2008 at 10:48 pm

    It’s not hyperbole, it’s range of error. Forty years ago almost nobody had _heard_ of ecology outside of biology departments. Pick any other field and look at what they expected to happen over the same time span.

    Got your flying car yet?

    The optimistic mistakes are too bad.
    The pessimistic mistakes are still pretty bad.

    Look at that table again, look at the numbers.

  • Matt // August 17, 2008 at 11:09 pm

    Hank: Do you feel anything, knowing these numbers?

    Do you feel anything, knowing you’ve been wrong?

    Alas, the “fake but accurate” argument again.

    Hank, if you want me to pat Ehrlich on the back for guessing the sign of the first derivative of species growth, then I’ll give him that much: He got the sign right.

    But getting the sign right and the magnitude very wrong isn’t enough. We rely on scientists and engineers to get both the sign right and the magnitude close. Ehrlich was WAAAAAY off on the magnitude (1000X). Do you acknowledge that?

    Can you show me the text in which Ehrlich was very clear that he meant “committed to extinction” versus “extinct”? Those are very easy concepts to grasp, and I don’t see him being clear on the difference from his scary writings in the 60’s and 70’s.

    I’m growing oh so tired of scientists failing to stand by previous predictions with after-the-fact corrections on what they meant. Remember the “business as usual” debate? Same thing.

  • Hank Roberts // August 17, 2008 at 11:17 pm

    PS — you all realize this is not Paul Ehrlich’s paper?
    Don’t confuse the NPR radio interview linked earlier with this work.

    You should at least look at it and look it up:

    http://scienceblogs.com/intersection/jackson(2008).jpg

  • Matt // August 17, 2008 at 11:27 pm

    Lee: And what is happening, is that we are seeing ecosystem and ecosystem service collapses on massive scales - that list that just got posted is chilling.

    Yes, the list is very important and saddening. But we must condemn those that try to help their cause by overstating the truth. I’m sure folks that made the case for the Iraq war also believed they were helping. But we cannot have “experts” bully us and circumvent checks and balances by stretching truths–even if those experts believe they are taking us to a “better place.”

    Hank posted a comment from Ehrlich above, which was:


    You can do it if you have the right incentive, and fear is, ought to be a great incentive if we care anything about children and grandchildren … there’s a lot of reason to be scared for them…. Have you heard anything about ecosystem services?”

    Here we get a peek into Ehrlich’s mind and, combined with his track record of overstating extinction rates, we might be able to guess that the man believes lying is OK if it helps mankind.

    This is why people are so distrustful of the current predictions from scientists about warming.

    And when you read about the behind-the-scenes tactics here in preparing the last IPCC report, I get even more distrustful. FWIW, this stuff will play very, very poorly in Peoria. If this tale were picked up by 20/20 and turned into 30 minute story, it’d be devastating to the cause.

    http://bishophill.squarespace.com/blog/2008/8/11/caspar-and-the-jesus-paper.html

  • dhogaza // August 18, 2008 at 12:28 am

    Can you show me the text in which Ehrlich was very clear that he meant “committed to extinction” versus “extinct”?

    Actually, I said that, and I didn’t say that Ehrlich said it. Please re-read what I said.

    And, don’t take Bishop Hill’s blog as gospel. You’re going to be sadly disappointed, eventually, if you do.

  • Hank Roberts // August 18, 2008 at 1:18 am

    > committed to extinction

    http://books.google.com/books?id=yMAP4DAL9A4C&pg=PA328&dq=ehrlich+extinction+%2B%22committed+to+extinction%22&lr=&sig=ACfU3U2hcAmkXmCqmNQmytvEjnsBitczBg

  • MrPete // August 18, 2008 at 1:22 am

    Interesting Ehrlich discussion. Dano, thanks for the link to his new paper; I’ve passed it on to my wife, who also heard much of this in person from Ehrlich back in the 70’s.

    30 years of osmosis tells me to greatly respect the problems caused by our massive pressure on habitat and ecosystems, and also to remain hopeful (if/when we wake up to the stupidity of many of our actions) by respecting nature’s unbelievable resilience.

  • Hank Roberts // August 18, 2008 at 1:28 am

    By the way, the same deception is operating throughout the denial process. No mechanism. No proof of extrapolation from present knowledge. If CO2 has almost doubled, why hasn’t temperature almost doubled? If what they said 20, 30, 40 years ago wasn’t exactly right, how can we think we know anything more today?

    Has anyone found signs of intelligent life in the universe yet?

  • MrPete // August 18, 2008 at 1:44 am

    Qualitatively different situation, Hank.

    In terms of population/biomass loss, we have pretty good modern day measurement numbers and there’s not much uncertainty about the sign. We’ve seen species go extinct under human-caused pressure. And we know that certain actions reduce the pressure.

    For climate, the assumptions are orders of magnitude larger and broader. We’re trusting the GCM’s more than the current measurements. We’re making big guesses about major influences. And we assume we can take action to fix the problem (and not cause even more harm in the process.)

  • MrPete // August 18, 2008 at 1:48 am

    I just read a bit of the back-discussion about the data we collected last year. Interesting to see the clamor for “the rest of the graphs.”

    Expectations appear to be confused. Here’s some light on the subject. For a familiar context, I’ll organize this according to traditional data releases/archives, such as the older data that we extended. (e.g. Google ITRDB CO524)

    There are five potential sets of data, to put it most generally. Without getting into the validity of each category:

    1) Easily crossdated samples with the “desired signal” (not my definition; others call it that)
    2) Easily crossdated samples without “signal”
    3) Manually crossdated samples with “signal”
    4) Manually crossdated samples, without “signal”
    5) Provenance details for all the above

    #5 is generated at time of collection
    #1,#2 take a short amount of time to generate
    #3, #4 take much longer, often a few years. Some scientists set the samples aside forever, others keep picking at it until most/all are dated.

    For the data collected in the 1980’s:
    #1 is available
    #2,3,4,5 were never released and cannot be found.

    For our data:
    #5 was released immediately
    #1, #2 were released immediately
    #3, #4 do not yet exist

    Bottom line: We’ve already released more data than many others release, even decades after the fieldwork is complete. What some here are clamoring for goes way beyond what many scientists ever provide. In this case it is not provided simply because it does not yet exist.

    TCO’s preference that data be withheld until everything is complete would ordinarily be logical as far as I’m concerned. In this case it is incompatible with our goal of transparency. If transparency makes some suspicious, so be it.

  • MrPete // August 18, 2008 at 2:03 am

    Ray L suggests the data we’ve released can’t be critiqued because it is not “published” and means bupkis because it’s not in a peer journal.

    Interesting. Quite a few scientists have critiqued various aspects already. And others have gladly received the data already available. Sure, it would be nice if it gets into a journal someday. Not my big dream. The data is already available; I expect it will also make its way into ITRDB sooner than later. Not sure where the extensive provenance belongs. I haven’t seen an archive for 2-D (available now) let alone 3-D (one of these days) dendro images. Personally, I’m happy when good data is made available. The data can speak for itself.

    (I have a comment in response to the “where is the data” questions; it won’t yet post. Patience please.)

  • dhogaza // August 18, 2008 at 3:26 am

    For climate, the assumptions are orders of magnitude larger and broader.

    Strange. For the basic GHG hypothesis, we have lab measurements.

    Yet, for the human-forced extinction stuff, we have no lab measurements, yet, the GHG stuff is less rigorous.

    Strange.

    What are these “assumptions” you are talking about?

  • dhogaza // August 18, 2008 at 3:32 am

    And others have gladly received the data already available.

    So what has happened to the Great Left Wing Conspiracy Against Truth that is pretty much the entire raison d’etre for CA? Gosh, could it be that scientists are interested in science, after all?

    I almost hope that, after a decade+ of trying to debunk the hockey stick, you’ll succeed (not that I think you will). It’s irrelevant. “Oh, we defeated an early paper, while science blitzkriegs onwards”.

    Really, when it boils down to it - you folks have *nothing*. The most you can prove is that climate science is correct, even if one early paper is subject to review.

    It’s a bit like saying that Galileo was wrong for not accurately modeling the different rate of fall for two cannonballs of differing weight.

  • Rattus Norvegicus // August 18, 2008 at 3:35 am

    MrPete,

    I think that you put too much faith in the “incredible resilience of nature”. Inevitably the loss of species leads to simplification of ecosystems and increased instability. This jeopardizes ecosystem services and makes maintaining our civilization more difficult.

    I remember learning a lot of this stuff in the early to mid ’70’s when my dad was taking graduate classes in ecology by going on most of the class field trips with him. It was fun and educational, I just didn’t realize at the time that it was cutting edge science.

  • Hank Roberts // August 18, 2008 at 4:27 am

    Clue:
    http://www.sciencemag.org/cgi/content/abstract/319/5860/192

    Look again at that list. What’s missing? What’s broken because of what’s missing?

  • matt // August 18, 2008 at 4:34 am

    Hank: If what they said 20, 30, 40 years ago wasn’t exactly right, how can we think we know anything more today?

    “They” don’t need to be exactly right. But if someone’s prediction is 3 orders of magnitude off from the actual, don’t you think it’s fair to scrutinize the next “sky is falling” pronouncement?

    You seem to think there’s very little consequence to being very, very, very wrong.

    If scientists want to be taken seriously, there must be an element of accountability. Accountability means there’s pain if someone is wrong–even if they meant no harm and tried their best. That is how the real world works. If scientists don’t want to deal with accountability, then kick the problem over to engineers and bean counters. They deal with it all the time.

    But we can’t have people with zero accountability scaring the world.

    [Response: It's really not correct to compare the overstatements of an individual, or even a small group, to the clear and overwhelming consensus of the climate science community. As for accountability, there's a tremendous amount of review and a very *conservative* summary in the assessment reports of the IPCC. Comparing one scientist's overstatements to the global warming consensus is mistaken.

    And it's downright foolish to focus on the negative consequences of the extremely unlikely event that they're wrong while ignoring the vastly greater negative consequences of the extremely likely event that they're right.]

  • MrPete // August 18, 2008 at 4:38 am

    R.N… except for the “too much faith” part, I agree 100% with what you say. My “resilience” statement relates to how well nature comes back from a variety of disasters, whether near-extinctions, horrible fires, etc etc. And how well it fights off many of our inane attempts to put a leash on natural processes. Living along the coasts and watching what happens when people try to control beach shifts, or keep rebuilding homes on the top of the cliffs…(or fighting bentonite clay in Colorado) you just gotta laugh or else you’ll cry.

    You’re 100% correct: once life is gone, it’s gone. And reduced biodiversity hurts more than we can imagine. We’ve got to learn how to be better caretakers of our home. (Anyone here enjoy Pollan, BTW?)

  • tamino // August 18, 2008 at 4:44 am

    If you submit a comment and it seems to disappear, that’s probably because it’s been sent to the spam queue. It still gets reviewed for approval, so there’s no need to re-submit, especially multiple times.

  • MrPete // August 18, 2008 at 5:00 am

    dhogaza, I’ll just agree to disagree with you, since you seem to want to argue more than look into the real issues.

    Let’s assume you are correct, that the HS is irrelevant. If so, then it doesn’t matter that W&A’s confirmation has been debunked. It doesn’t matter about sbBCP.

    Thus, we can go through the current crop of “team” papers, removing any that use MBH-related stats methods and any that use sbBCP. And it will make no difference because it is all immaterial?

    I submit it is more impactful than you suspect. And from the conversations I’ve had, serious dendros are aware and are quietly working on some radically new and hopefully better methodologies and data processes.

    Let’s revisit this question in a few years. I have no clue about the policy outcome, but have confidence dendro best practices will be significantly different, and SteveM nicely vindicated, in not too many years.

    What’s being proven is not that “Climate Science” is correct, nor that it is incorrect! What’s being proven is that “Climate Science” knows a lot less than is claimed… that we’re overconfident of our understanding, and overconfident of our ability to manage climate by whatever means.

    If you want to help this whole thing move forward, start ignoring all the attitudes and see what you can learn from the various people involved in this. Including Tamino and Schmidt, and also SteveM, GBrowning and others. They’re all pretty smart people with a lot of strengths.

    In the meantime, we’re accomplishing little through my responses to the various razzes. Someone will just seek another way to denigrate rather than become a serious inquirer or expositor.

  • Lazar // August 18, 2008 at 9:35 am

    MrPete,

    Lazar, AFAIK, your graph link is meaningless to your quest.

    You still did not answer the question…

    “Do you still have doubts that MBH used precipitation series to reconstruct temperature?”

    … the plot shows the temperature reconstruction produced by the MBH98 algorithm changes if precipitation proxies are deleted from the original input file. Are you seriously maintaining MBH98 did not use precipitation proxies to reconstruct temperature?

    If they’re all temp or precip proxies, they should correlate as such.

    Correlate with what? And who is maintaining that they ought be “all temp” or “all precip”, and why?

    You continue to ignore a reasonable suggestion: additional updated data is available for both the Sheep Mountain area and the Almagre area

    I’m interested in whether the data used in MBH98 correlates with local climate.

    Still waiting for those missing functions. temp = f(precip)

    Temperature of what, precipitation of what, and why does it matter to MBH98 and the assumptions therein?

    PS
    Verification is complete.
    The model passed with flying colors!
    Results up soon.

  • Deech56 // August 18, 2008 at 12:18 pm

    RE: Hank Roberts // August 17, 2008 at 10:45 pm

    Thanks, but I was wondering about a counter to the claims that the R^2 values are some kind of smoking gun and that there were shenanigans involved in the W&A paper and its publication. Unfortunately, I don’t have the mathematical background to properly evaluate the CA claims beyond my normal skepticism of claims from that site.

  • Ray Ladbury // August 18, 2008 at 12:42 pm

    Matt accuses: “You seem to think there’s very little consequence to being very, very, very wrong. ”

    Actually, that is not correct. Being wrong does decrease a scientist’s credibility, but not nearly so much as being perceived as “having an agenda”. Ehrlich’s reputation has suffered among scientists precisely because he is perceived as pushing an agenda–and this despite the fact that most scientists think he’s right on the science. Carl Sagan suffered from some of the same bad press, despite being one of the most brilliant and creative astronomers of his day. Sagan’s goal was popularizing science, but his advocacy of arms control and anti-nuclear positions sometimes seeped into these popularizations. I think James Hansen’s reputation has suffered despite the fact that most climate scientists agree with him. Scientists react poorly to other scientists as advocates even when they agree with the agenda the scientists may be pushing. Scientists who are advocates (on the left or the right) do pay a price for that advocacy.
    Ultimately, a lot of the venom from denialist circles comes from the fact that they simply don’t understand how science is done.

  • Gavin's Pussycat // August 18, 2008 at 2:07 pm

    Ray:

    Actually, that is not correct. Being wrong does decrease a scientist’s credibility,

    I expected it to continue “…but being right does not increase it.”

    Silly me. Case in point: Hansen 1988.

  • Dano // August 18, 2008 at 2:11 pm

    Lazar:

    Instead we are engaged in this pitiful argument about whether we are actually having an impact at all, while the increasing evidence for an increasing rate of increasingly heavy damage piles up around us.

    What Ehrlich said 30 years ago doesn’t alter any of this, not one iota.

    Sadly, this is incorrect.

    See, folks need to believe they are not fouling their nest. Opportunities where someone says - even incorrectly - that all of Ehrlich’s statements are wrong because one was wrong need to be jumped on and exploited.

    This is human nature.

    Most folks need to be distracted. They need to believe something else. This is the challenge.

    Best,

    D

  • Gavin's Pussycat // August 18, 2008 at 3:10 pm

    Deech56, the Wahl-Ammann manuscript

    http://www.cgd.ucar.edu/ccr/ammann/millennium/refs/Wahl_ClimChange2007.pdf

    pretty much gives the counter you’re asking for, in section 2.3 and appendix 1.

    It’s not easy reading though.

  • Hank Roberts // August 18, 2008 at 3:31 pm

    Ray, I think you’re way off into personal opinion and confusing ‘an agenda’ with sharing knowledge of the public health implications emerging from one’s work before anyone much wants to hear it.

    Ozone layer
    Vaccination
    Lead
    Tobacco
    Tributyl tin
    Trawling
    Roundworms
    Yellow fever

    Scientists and doctors speak up when they are obliged to.

    That’s not an agenda. That’s responsibility.

  • matt // August 18, 2008 at 3:35 pm

    Ray: Actually, that is not correct. Being wrong does decrease a scientist’s credibility, but not nearly so much as being perceived as “having an agenda”.

    I think your entire post is spot-on.

  • dhogaza // August 18, 2008 at 4:12 pm

    Thus, we can go through the current crop of “team” papers, removing any that use MBH-related stats methods

    So, let’s see, above you’ve shown that you don’t understand the MBH-related stats, because by implication you state that it boils down to cherry-picking. We’re really supposed to agree with a proposal by you that a tool be tossed out even though you demonstrate ignorance about it?

  • Ray Ladbury // August 18, 2008 at 6:11 pm

    Hank, Perhaps I was not clear. I was rather lamenting the fact that a scientist’s reputation as a scientist suffers when he or she feels the need to speak out. Carl Sagan today is only known for Cosmos and some of his popular writing, but his contributions to planetary physics were also notable. By and large, scientists expect to do science and leave policy to politicians, engineers, economists, etc. However, when the latter still don’t understand the threat, scientists have to weigh whether to wade into politics–a toxic environment for most scientists–or let society go to hell in its own handbasket. The fact is that most scientists do not speak up, and many resent those who do even when they agree with what they are saying. Scientists should speak up–it’s the courageous thing to do–but they have to realize that they will likely take fire from behind them as well as in front.

    Science tends to be very conservative in the sense that unless an effect (threat) can be shown to be significant, it doesn’t generate much activity. That’s inconsistent with politics–where threats compete for attention–and with engineering–where the worst-case threat is assumed to ensure the system remains viable. What’s broken here is not the science. Science has shown beyond doubt that there is a significant threat. What is broken is the political response–which has been nonexistent–and has necessitated scientists venturing well outside their comfort zones.

  • Hank Roberts // August 18, 2008 at 10:07 pm

    Then we agree (and I expect Matt disagrees).

    Case in point:
    http://pubs.acs.org/subscribe/journals/esthag-w/2006/aug/policy/pt_santer.html

  • MrPete // August 19, 2008 at 12:42 am

    Lazar, sorry, spent 20 mins searching past threads. I know I saw one of your graphs showing data with/without precip, but now cannot find the link. Any hints? (Then again, sounds like you’re close to putting the whole shebang together.)

    Can’t comment usefully w/o that.

    My (overly abbreviated) temp=f(precip) , temp=f(etc) comment is simply this: if what we want to reconstruct is temperature, then other elements must be factored out, whether by valid PCA or otherwise. If growth is connected to more than one variable, we’ve got to do “something” to reduce the physical equation to become a function of the one variable of interest.

    In even more-simplified layman’s terms:

    * if warm+dry growth is different from warm+wet growth, we need a way to distinguish the two to properly estimate what happened temp-wise

    * Likewise, if warm+stormy (bark-stripping) produces radically different growth from warm+calm (no bs :)), then again we need a way to distinguish the two.

    It’s good, satisfying fun to do the analysis and see what correlations can be found. At the same time, the stats/analysis needs to connect to physical reality.

    I’ll sneak in to read/say more after I have a link to the results you mention (about removal of MBH precip proxies causing significant change to the results.)

    (Oh, you said “I’m interested in whether the data used in MBH98 correlates with local climate.” Great! So we have 25 more years of local climate data, and 25 more years of exact-same-tree data for some of these key MBH98 proxies, that nicely fits the previously collected samples.)

  • Luke // August 19, 2008 at 2:07 am

    Just FYI - a new climate change blog by the Director of the Research Institute for Climate Change and Sustainability at the University of Adelaide, South Australia.

    http://bravenewclimate.com/

  • ChuckG // August 19, 2008 at 2:33 am

    Open Thread on Open Mind. So discuss please. Math. Not hand waving.

    Pat Frank (Skeptic article) versus Gavin Schmidt:

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/langswitch_lang/bg#comment-95633

  • Hank Roberts // August 19, 2008 at 3:16 am

    http://www.agu.org/pubs/crossref/2008/2007JD009295.shtml

    The correlation between temperature and precipitation arises because lighter oxygen-16 isotopes make lighter water molecules, which evaporate preferentially, increasing the amount of oxygen-16 in rainfall/snowfall (and in material built up from that water in annual bands).

  • dhogaza // August 19, 2008 at 4:03 am

    * if warm+dry growth is different from warm+wet growth, we need a way to distinguish the two to properly estimate what happened temp-wise

    First you have to show that warm+dry is actually a historical possibility in the Great Basin.

    If it isn’t, you can, of course, simply dismiss the warm+dry scenario …

  • dhogaza // August 19, 2008 at 4:06 am

    ChuckG … it’s obvious that Schmidt knows math. Rather than ask us to discuss it, why don’t you show us why Schmidt is wrong? Pat Frank’s errors seem easy enough to understand, so please educate us as to why Schmidt’s rebuttal is wrong.

  • dhogaza // August 19, 2008 at 4:07 am

    I mean like, isn’t Pat Frank some two-bit weather type guy and Gavin Schmidt some PhD math type?

    I mean … why should I reject the argument of a professional, trained mathematician like Gavin?

  • Lazar // August 19, 2008 at 10:01 am

    Results!
    The tree-ring - climate model passes verification with significance at alpha = 0.01.
    Fig. 10.
    Conclusion: bristlecone-pine tree-ring growth depends on autumn temperature and precipitation, and winter precipitation.

    Constructed a network of temp and precip records back to 1889 (map: net1, yellow pins).
    Temperature records are unreliable prior to 1900 (Fig. 9a).
    Rather than chuck out data, I relied on the mean to eliminate some of the error.
    The model was calibrated over 1940:1980, and verification done over 1889:1939. Passed verification, with significance at alpha=0.05, r-squared of 0.08. The model performed excellently except over the first seven years of data where, although clearly still responsive (peaks and troughs), there is a divergent trend almost certainly due to inhomogeneity in the early portion of temperature records. Chucking out the first seven years gave an r-squared in verification of 0.21, significant at alpha=0.01.
    There was a residual positive trend approximately 1/3rd of the magnitude of the trend in tree-ring growth. Having consistently found similar results with other data, although the trend is not significant, it is likely real and likely due to co2 fertilization. Detrending over 1848-1980 and running the regression gave a marginally improved r-squared of 0.23 in verification, and the residuals gave a much improved fit to a normal distribution.
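    The calibrate-on-one-period, verify-on-another procedure Lazar describes can be sketched in outline like so. This is a toy illustration with entirely synthetic data and made-up coefficients, just to show the split-sample mechanics; the variable names are mine, not Lazar’s actual code or data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 92                                     # e.g. years 1889..1980
    years = np.arange(1889, 1889 + n)
    temp = rng.normal(size=n)                  # synthetic autumn temperature anomalies
    precip = rng.normal(size=n)                # synthetic precipitation anomalies
    # synthetic ring-width response: depends on both temp and precip, plus noise
    growth = 0.5 * temp + 0.3 * precip + 0.4 * rng.normal(size=n)

    cal = years >= 1940                        # calibration period, 1940:1980
    ver = ~cal                                 # verification period, 1889:1939

    # fit growth = b0 + b1*temp + b2*precip on the calibration period only
    X = np.column_stack([np.ones(n), temp, precip])
    beta, *_ = np.linalg.lstsq(X[cal], growth[cal], rcond=None)

    # out-of-sample skill: r-squared of predictions over the verification period
    pred = X[ver] @ beta
    r2_ver = np.corrcoef(pred, growth[ver])[0, 1] ** 2
    print(round(r2_ver, 2))
    ```

    The point of holding back the 1889:1939 data is that a model can fit its own calibration period arbitrarily well; only skill on data it never saw says anything about whether the relationship is real.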

  • Gavin's Pussycat // August 19, 2008 at 10:03 am

    Deech56,

    having thought about your question a bit more, and re-read W&A, I now understand that their explanation, while right, is precisely the way not to explain it, making a very simple matter overly complicated.

    The matter is really very simple: the r2 test is about detecting the existence of a functional relationship between a variable x and a variable y, like in (restricting ourselves to the linear case):

    y = ax + b + noise (1).

    It doesn’t matter what a and b are, all the test says is that x and y “co-vary”: if x goes up, so does y. If x goes down, so does y. It makes no difference if the average level of y is completely different from that of x; it doesn’t matter if the size of the swings in y is very different from that of the swings in x.

    In fact, you may apply any linear transformation to y: if you write

    Y = p y + q,

    you will have (easy to show)

    Y = A x + B,

    with A, B different from a, b but computable from them and from p, q.

    Now, the r2 value of Y against x will be identical to that of y against x.

    What we want to test in the case of climate field reconstruction is not that y is functionally related to x, but that y is a reconstruction of x:

    y = x + noise (2).

    This calls for an entirely different kind of test (like perhaps the RE test).

    Using the r2 test blindly is not just a blunt instrument, it is the wrong instrument. You are not testing the conjecture you’re supposed to test. W & A give some nice examples, with plots, of how the r2 test can both reject perfectly good reconstructions and swallow junk…

    I used to think that McI was pretty sharp — evil, dishonest but sharp. But now I see that his insistence that the Hockey Team should present r2 test results, and his implication that not doing so is somehow fraudulent, is, well, plain dumb.

    BTW in my understanding tests like r2 are “lightweight” tests, typically only used as a first cut at deciding if “there is something to it”. It’s also fairly easy to cheat. A more industry-strength test in the linear regression case would be to just compute the regression trend a and its standard deviation, construct a confidence interval, and see if the value 0 — or whatever your null hypothesis is — lies inside it.
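    The two points above — that r2 is blind to any linear transformation of the reconstruction, while an RE-type test is not — can be sketched numerically. A toy demonstration with synthetic data; the r2 and re helpers are my own shorthand for the standard definitions, not code from W&A:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)                # the "true" series to be reconstructed
    y = x + 0.3 * rng.normal(size=200)      # a good reconstruction of x
    Y = 5.0 * y + 10.0                      # linear transform of y: wrong level, wrong scale

    def r2(a, b):
        # squared Pearson correlation between two series
        return np.corrcoef(a, b)[0, 1] ** 2

    def re(truth, recon):
        # reduction of error: skill relative to climatology (the mean of truth)
        sse = np.sum((truth - recon) ** 2)
        sse_clim = np.sum((truth - truth.mean()) ** 2)
        return 1.0 - sse / sse_clim

    # r2 cannot tell y from Y: it is invariant under linear transformation
    assert abs(r2(x, y) - r2(x, Y)) < 1e-9

    # RE does tell them apart: Y is a terrible reconstruction of x
    print(re(x, y))   # close to 1: y tracks x in level and amplitude
    print(re(x, Y))   # strongly negative: worse than just guessing the mean
    ```

    In other words, Y “passes” the r2 test exactly as well as y does, even though as a reconstruction it is junk; the RE statistic penalizes the wrong mean and wrong amplitude, which is precisely what equation (2) demands.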

    More generally, like in the climate field reconstruction case, you should formulate a realistic error model for your data (proxies), propagate this through the computation to obtain an error model for your unknowns — the temperature reconstruction — and judge that against your null hypothesis — like, “the 20th century is nothing exceptional”. This is how to get the grey zones you see in the IPCC hockey plots BTW.

    Hope this helps. I am surprised Tamino hasn’t yet written about this — perhaps too simple, below his dignity ;-)

  • Gavin's Pussycat // August 19, 2008 at 11:47 am

    Ah, the r2 is described here:

    http://en.wikipedia.org/wiki/Coefficient_of_determination

    …and the RE (and a lot more) here:

    http://books.google.fi/books?id=zr8Ucld6FYcC&pg=PA181&lpg=PA181&dq=%22reduction+of+error%22+%22RE+statistic%22&source=web&ots=ZgAnXuJOMF&sig=lujR9mBVKwXh36m_UvsweazHDXw&hl=fi&sa=X&oi=book_result&resnum=1&ct=result#PPA183,M1

  • Dano // August 19, 2008 at 2:26 pm

    BTW in my understanding tests like r2 are “lightweight” tests, typically only used as a first cut at deciding if “there is something to it”. It’s also fairly easy to cheat. A more industry-strength test in the linear regression case would be to just compute the regression trend a and its standard deviation, construct a confidence interval, and see if the value 0 — or whatever your null hypothesis is — lies inside it.

    Exactly.

    When first scanning a paper to see if it is worth your time, the r^2 is where you start, then you look over your preferred stat measurement to see if you should delve further. If the numbers are high enough, you read the paper.

    When I was doing my microecon and urbecon series, I had a hard time reading their papers, as the r^2s and Ts were lower than what I was used to from the natural sciences.

    Best,

    D

  • Tom Woods // August 19, 2008 at 4:36 pm

    Just something to ponder…

    Recent studies have shown a doubling of stratospheric water vapour, likely from increasing atmospheric heights due to global warming, overshooting thunderstorm tops from stronger tropical cyclones and mesoscale convective systems etc…

    Since sulfur dioxide reacts with water vapour in the stratosphere to form sulfuric acid droplets, would SO2 flux from volcanic activity cause even greater swings in global temperatures?

    I would assume that the increase in stratospheric water vapour would make for a thicker veil of sulfuric acid given a large volcanic eruption. Even a smaller eruption that manages to have an eruptive plume that reaches the stratosphere could very well have greater implications for global temperatures if there’s more water vapour for SO2 to react with.

    Perhaps in the future a large volcanic eruption (VEI 5-6 or greater) may cause 1-2°C swings in global temperatures as they rise further, as we go from an enhanced greenhouse effect to enhanced reductions in insolation from thicker sulfuric acid veils.

    I bring this up due to the eruption of the Kasatochi volcano, which had an estimated 1.5Tg flux of SO2. This is only around 10% of the SO2 flux from Pinatubo but it got me thinking…

    Anyone with any input on this I’d like to hear from.

  • TCO // August 19, 2008 at 7:24 pm

    Pussy: Wegman said that R2 was the wrong metric to look at what’s going on effectively. The problem with Steve is that he is so confounded with PR and math exploration that he neglects to really think about how different algorithms interact with different data sets in a curious manner.

  • Deech56 // August 19, 2008 at 7:25 pm

    Gavin’s Pussycat and Dano - Thanks for the information and the links. I will read this over. My own stats and linear regression coursework was from back in the Reagan era, and my experience with r^2 comes from running standard curves for lab measurements, where anything less than 0.9 results in frowns.

    What disturbs me is the way that this is used to discredit the 10-year-old work by MBH, and by implication any subsequent confirmations. Unfortunately, the “circling the wagons” meme plays well among the general public; meanwhile, the ice is melting and flora and fauna are migrating as nature responds to the effects of rising CO2.

    Deech
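
    (An aside for readers following along: the r² Deech56 mentions for lab standard curves is just the squared correlation between the measurements and a least-squares line. A minimal sketch below — the concentration/reading numbers are made up purely for illustration, not from any real assay.)

    ```python
    def r_squared(x, y):
        """Squared Pearson r between y and the least-squares line fitted in x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        # residual and total sums of squares
        ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return 1.0 - ss_res / ss_tot

    # hypothetical standard-curve data: concentration vs. instrument reading
    conc = [0.0, 1.0, 2.0, 4.0, 8.0]
    reading = [0.02, 0.98, 2.05, 3.90, 8.10]
    print(round(r_squared(conc, reading), 4))
    ```

    For simple linear regression, 1 − SSres/SStot is identical to the squared Pearson correlation, which is why “anything less than 0.9” on a nearly linear standard curve draws frowns.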

  • tamino // August 19, 2008 at 7:57 pm

    A note to readers: I’ve suffered a back injury which makes it very difficult to get around, and I’ve been taking it as easy as possible. Thank goodness my wife (the finest woman in the world) is taking excellent care of me. But it’s been nearly a week since the last post, and may be several days until the next. In the meantime, I’m glad discussion continues apace.

    Carry on.

  • Gavin's Pussycat // August 19, 2008 at 9:15 pm

    TCO: thanks, wasn’t aware of that (my neglect, haven’t been taking Wegman very seriously — life’s too short).

    Tamino get well soon! We need you ;-)

  • george // August 19, 2008 at 9:29 pm

    Being wrong does decrease a scientist’s credibility,

    I don’t think being wrong in itself necessarily decreases credibility — at least not among one’s fellow scientists.

    Some of the greatest scientists in history were wrong from time to time.

    First, it is rare for a scientist to get it “right” the very first time. Even Einstein was “wrong” in his first attempts at a theory of gravity. In fact, it took him over ten years of hard work (and several mistakes) to get it right!

    Second, there are different degrees of “wrongness”.

    Technically, Niels Bohr was “wrong” when he had the electron moving as a point particle about the nucleus in a well defined “orbit” much like a planet around the sun.

    His initial model may have been wrong, but “wrong” is really a relative concept in science. In fact, the Bohr model of the atom was “righter” than all of the others out there at that time. Same with Einstein’s theory of gravitation. It may only be “right” within a certain domain, for example (not unlike Newton’s laws). It may be invalid at very small scales.

    When it comes right down to it, no one is really “right” in an absolute sense. That’s not to claim that all models and theories are created equal (or any such nonsense), merely to say that all efforts to describe nature are imperfect approximations.

    I think the only ones who would “downgrade” a scientist’s credibility in response to his/her being “wrong” are those who do not understand how science works.

  • Hank Roberts // August 19, 2008 at 9:50 pm

    Yeow. If it’s lower back/muscle spasm, I can recommend Maggie’s Back Book.
    http://openlibrary.org/b/OL4901783M

    Shorter: sketches of positions that stretch the problem out gently, stop the pain, let the inflammation reduce. Simple stuff.
    Works.

    Take it easy, if this is new to you it’s real easy to be overconfident and tweak it again.

    [Response: It is new to me, so I'll be aware and try to avoid overexertion through ignorance.]

  • Gavin's Pussycat // August 19, 2008 at 9:56 pm

    TCO do you have a link?

  • Lazar // August 19, 2008 at 10:32 pm

    I’ve suffered a back injury which makes it very difficult to get around

    Oh dear.
    I know it can be extremely painful.
    From personal experience, two weeks minimum before it’s safe.

  • Hank Roberts // August 19, 2008 at 10:52 pm

    Yeek. I’ve had back trouble since I was a youngster and I’m almost 60. Reaffirming, Maggie’s got _great_ advice illustrating positions that will avoid pain, both for stretching and for sleeping.

    This will brighten your day:
    http://blogs.nature.com/climatefeedback/2008/08/more_for_the_annals_of_climate_1.html

  • george // August 19, 2008 at 11:06 pm

    Hope your back problem is muscle related and not serious.

  • TCO // August 20, 2008 at 12:04 am

    Gavin: just searched on the web and could not find it. As I recall, it was in testimony, in response to a question about r2.

  • Dano // August 20, 2008 at 12:21 am

    Hank, we’ve been deconstructing see-oh-too’s mendacity for years. Years. At least newer blog posts can cut-paste old work already done.

    Best,

    D

  • MrPete // August 20, 2008 at 1:03 am

    tamino — good luck. (My half-bit of experience-based wisdom: we’ve found it’s usually the day _after_ exertion/stress when you wipe out your back. Something about everything being loosened up. “All I did was…” and wham. Now that you’ve had one, being extra careful after a good workout day is gonna help.)

  • Dave Rado // August 20, 2008 at 1:11 am

    Gavin’s Pussycat writes re. the Wahl-Ammann manuscript: “It’s not easy reading though.” (in the context of the Bishop Hill and McIntyre accusations).

    I do hope Tamino will post about this when he’s feeling better - it would be good if there were an article one could link to that shows the latest disinformation for what it is, in a way that is intelligible to laymen.

  • Gavin's Pussycat // August 20, 2008 at 1:34 am

    TCO, couldn’t find it either… that’s when I decided to ask ;-)

    Seriously, don’t doubt it’s true. I’d like to see his argument…

  • MrPete // August 20, 2008 at 1:45 am

    Lazar — interesting preliminary results. Sounds about right.

    A nit you may want to check: don’t know how you placed the pins on the map; I’m quite certain your c0524 location is way off to the southeast. Perhaps ddmm.mmm (or ddmmss) was assumed to be dd.ddd? co524 is on Almagre a couple of km SE of Pike’s Peak and SW of CoSpgs… not in the flatland halfway between CoSpgs and Pueblo.
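
    (A side note for readers: the mix-up MrPete suspects is a classic one. Many GPS units report position as ddmm.mmm — degrees times 100 plus decimal minutes — and reading that field as decimal degrees shifts the pin well away from the true spot. A minimal sketch of the conversion; the value below is made up for illustration and is not the actual c0524 coordinate.)

    ```python
    def ddmm_to_decimal(ddmm):
        """Convert a ddmm.mmm value (degrees*100 + decimal minutes) to decimal degrees."""
        degrees = int(ddmm) // 100
        minutes = ddmm - degrees * 100
        return degrees + minutes / 60.0

    # made-up value: 38 degrees 50.400 minutes, encoded as 3850.400
    val = 3850.400
    print(round(ddmm_to_decimal(val), 4))   # correct decimal degrees, ~38.84
    print(round(val / 100, 4))              # misread as dd.ddd gives ~38.504, ~0.3 deg south
    ```

    A third of a degree of latitude is roughly 37 km — easily the difference between a mountainside near Pike’s Peak and the flatland toward Pueblo.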

  • Deech56 // August 20, 2008 at 2:08 am

    And Tamino - get well soon.

  • ChuckG // August 20, 2008 at 3:07 am

    dhogaza // August 19, 2008 at 4:06 am

    The brevity of my post has led you to misunderstand me. You have made an assumption which may be implicit in my post, in which case I am sorry, but clearly is not explicit.

    So I withdraw the request rather than flesh it out. Why waste BW?

  • TCO // August 20, 2008 at 3:36 am

    Gavin’s Pussy: I’m sure I recall it, but I just scanned the testimony and don’t see it. Might have been some other discussion, not the testimony? Or a slight chance it might not have been Wegman but still someone else on “my side”. Pretty sure it was Weg though. But my recollection is that there wasn’t much followup to explain why he felt that way.

  • Barton Paul Levenson // August 20, 2008 at 12:27 pm

    tamino writes:

    I’ve suffered a back injury which makes it very difficult to get around, and I’ve been taking it as easy as possible.

    I’m very sorry to hear that. I will pray for healing for you.

    -BPL

  • Hank Roberts // August 20, 2008 at 3:26 pm

    Dano, yep, just pointing out Nature’s blog had noticed the smell of the pond scum there. Reminded me of when Judith Curry posted at CA what she thought on first reading their twisted text.

  • Gavin's Pussycat // August 20, 2008 at 4:43 pm

    TCO: “my side”
    did you ever get the feeling that there is something messed up about your loyalties?
    What about making the truth, no matter what, “your side”? You are almost there already. It is an honourable side to be on.

  • Dave Rado // August 20, 2008 at 4:50 pm

    More re. Gavin’s Pussycat’s post - I think the paper you linked to is the one that Bishop Hill attacked in his post, referring to it as the “CC paper”?

  • MrPete // August 20, 2008 at 7:41 pm

    G.P., a lot of us are seeking truth. And nobody on this planet appears to have a monopoly. :)

  • Paul Middents // August 20, 2008 at 9:13 pm

    Mr. Pete,

    I think real scientists doing real science as their life’s work come much closer to a monopoly on “truth” than a bunch of amateur auditors with a bristlecone pine obsession.

  • TCO // August 20, 2008 at 9:34 pm

    Gavin: My hopes are more with reforming the reformers. Or working with those that are more questioning of everything (Mosh-pit, JohnV, Zorita). I can’t really get good curious-type conversations going with the RC types (too controlling, too quick to shut the discussion down, too “Herr Doktor Professor”). Bit more free play here and at Lambert’s site. And I will give you credit that I have heard you a couple times challenge your own side and drive better thinking in the process. On the warmer side, Annan and Atmoz seem driven by curiosity, also. Cheers.

  • TCO // August 20, 2008 at 9:59 pm

    Mr. Pete:

    When that BCP coffee expedition was done (and blogged on), we were shown initial data with a “more to come” message. Now it seems from your response that we may perhaps not get ANY more. I want a full report. The selective release of the data is both amateurish and manipulative. If you “broke” or mislabelled or whatever some of the cores, fess up. Also release all the raw data.

  • TCO // August 20, 2008 at 10:01 pm

    Also give us a much more deliberate explanation of what is going to be done in terms of the manual dating. Who’s going to do it or not, etc. If the answer is “I don’t know” or “someone will date it if they ever feel like it”, then we need to take the results to date as the finished product and judge both the expedition and the tree climate behavior based on what was found out and reported.

  • Ray Ladbury // August 21, 2008 at 1:19 am

    TCO, I beg your pardon, but I don’t know of a single scientist doing real research who is not “curiosity driven”. If I were not curiosity driven, I could go to work for a hedge fund and make one helluva lot more than I do now. However, don’t you think it makes more sense to be curious about what is not well known rather than what is well known? To me, it makes a whole lot more sense to be curious about those aspects of climate science that are still uncertain, rather than the role of CO2 which is tightly nailed down.
    Finally, to have respect for the expertise of folks like Gavin Schmidt, who herd the cats at Realclimate is not so much respect for authority as it is respect for expertise, achievement and patience. The goal of Realclimate is to teach people about climate science–and it does that very well. The goal of this site has less to do with climate science and more to do with proper analysis of data–a goal it accomplishes quite well.

  • george // August 21, 2008 at 1:33 am

    I can’t really get good curious type conversations going with the RC types…

    I suspect that Albert Einstein was not curious about whether Arthur Eddington was hiding key data from the 1919 eclipse under his pillow that disproved General Relativity, either. :)

  • TCO // August 21, 2008 at 4:58 am

    The “teaching” style of RC is not beneficial to really digging into things. I would contrast say Volokh.com which has brilliant intellects at the helm, but has probing interactions with the commenters as well.

  • Gavin's Pussycat // August 21, 2008 at 5:11 am

    TCO:
    >And I will give you credit that I have heard
    >you a couple times challenge your own side and drive better
    >thinking in the process.

    Huh? I vehemently deny that ;-)

  • TCO // August 21, 2008 at 5:13 am

    Ray:

    I got my union card, too. So I’ve seen science, seen finance. Seen different kinds of cats in both. It’s not like scientists are something that I read about or see on TV, that I need your special help to have a feel for.

    Science is actually big business in some ways, if you look at all the people in it, all the government dollars. Please spare me any dreaming on how much you think you could make if you sold out, btw; it’s a competitive market there too. But I think your quality of life, factoring in work load, pleasant travel to conferences, cost of living in Manhattan, feeling of social utility, job security, and… yes… interest, makes it very likely that you’re better off doing science than solving modified heat flux diff e q’s for Goldman. IOW, yes, I acknowledge an interest driving choices… but no, I actually happen to know that the vast majority of union card awardees could not cut it to get that rocket scientist job. (Some can of course… and the very best will be Feynmans and Lisa Randalls and the like… and it’s better for the world that they don’t sell out… but don’t forget all the also-rans either.)

    Scientists come in a lot of different flavors and they vary in intellect and inquisitiveness. I am always happy to meet one with the real blazing Feynman-like curiosity and ability. But it’s the minority (they like to learn sure…and they want to get discoveries and papers, sure…but genuine probing curiosity comes in different grades, just as brains do.)

  • Gavin's Pussycat // August 21, 2008 at 5:17 am

    Dave Rado, yes apparently it is. And its history may explain why it contains this extensive explanation on suitability of test metrics.

  • MrPete // August 21, 2008 at 6:18 am

    Paul M, does this mean we therefore should have less respect for the 13 year old whose science experiment was published in JAMA? Or that the only truth is published truth? C’mon, let’s not go down that path.

    TCO, your questions have mostly been answered already. Sorry if you don’t like the answers. Understanding that it can be hard to search, I’ll answer again.

    There’s been no selective release of data. Anything that crossdated has been released, no matter what the data “says”. The undated data is still at the lab today. (Frankly, the project has been out of sight out of mind for a few months. Yes we’re “slow;” are the pro’s any faster?) When SteveM gets back from current travel he’ll head over to pick up whatever there is; hopefully we can make the scans accessible sooner than later. Has anyone else _ever_ done this? Not that I know of.

    You want “raw” data. For cores, normally that’s the crossdated ring widths, which are available now. We also hope to make the scans available online, which as noted may be a first.

    I also answered on the manual dating. AFAIK, we are about to receive the core scans, enabling anyone to manually date if they are willing to download the images. We can’t exactly duplicate the physical cores :). Suggestions on making that process more productive and/or more accessible are most welcome. I for one am quite motivated to work on the problem in my copious (hah) spare time. It is a great puzzle: why do some cores auto-match while others do not? So far, I don’t see obvious reasons. My guesses: current techniques depend on variable growth. “Boring” growth can’t be auto-identified. And spongy/rotty rings also cause havoc. (No, we didn’t break or mislabel. A faint chance we may have something scientific to say about that some day. I need some research time.)

    Finally TCO, I’m curious about your perspective on this:
    1) What’s the basis for your “amateurish, manipulative” claim? Have you actually examined the data? It’s been available for quite a long time. If you have suggestions for improvement, I’m all ears.

    2) If what we’ve released (i.e. all data generated to date, with comprehensive metadata) is “amateurish and manipulative” I suppose that goes double for those whose work we replicated, who have released 40 samples from 28 trees after coring 60+ trees at the site 25 years ago, and explicitly state that cores without “signal” are trashed? I humbly accept the “amateurish” label, particularly if such work by others is similarly understood, and even more so if you can point me to a more professionally collected and documented dendro data set that can be highlighted as a model of better practice. Honest, I’m all ears. We have no illusions about the quality of our field work; if it is any good at all I consider that a minor miracle :-D.

    Oh, and I place myself squarely in the camp of just wanting to know what the data says. I don’t care WHAT it says; I just don’t want peoples’ biases coloring the results. And yes that includes whatever bias I may have.

  • MrPete // August 21, 2008 at 6:27 am

    Ray L, “don’t you think it makes more sense to be curious about what is not well known rather than what is well known?”

    I agree. I also think it makes sense to be curious when there’s a significant apparent disagreement among scientists about what is “well known.”

    To me, what McKitrick has documented about the assessment of uncertainty in forcings is quite interesting. At the end of the AR4 scientific input, 7.5 of 15 forcing topics were gauged least-certain. Subsequent editing (without review by the scientific community) produced a slightly different result: 0 of 8 were least-certain.

    A change from 50% at the worst level of uncertainty to none, is to me a matter of valid curiosity. Particularly since that’s my own major question: how certain are we, really?

  • Gavin's Pussycat // August 21, 2008 at 7:15 am

    TCO, this seems pertinent:
    http://www.realclimate.org/index.php/archives/2005/12/how-to-be-a-real-sceptic/

  • Ray Ladbury // August 21, 2008 at 12:57 pm

    MrPete, the uncertain forcers in climate are known–clouds and aerosols. The others are pretty well understood, and CO2 is among the most tightly constrained. You of course are welcome to try and construct a climate model that is consistent with the data and has a low CO2 sensitivity. It would be a very interesting beast. Nobody has succeeded so far.

  • george // August 21, 2008 at 1:02 pm

    Being a true skeptic involves being skeptical of people who perceive a need to portray themselves as a “skeptic”.

    Skepticism used to be a mindset. In recent times it seems to have become more of a uniform to inspire awe: “Never fear. Skepticman here!”

    Would probably make a good cartoon.

  • Ray Ladbury // August 21, 2008 at 1:43 pm

    TCO, You claim to be a scientist, but I see little or no understanding of the motivations of scientists in your post. I particularly like your assumptions about my workload. I work about 80 hours a week, except when I travel to conferences. Then I work more. About 75% of what I do is bureaucratic BS. I do it because the other 25% is fascinating. Scientists do what they do because they want to understand how things work. There is no way you could pay them enough to work the hours they work. Now, there are some of us who are also curious about things other than our own narrow spheres of research, and yes, they are a minority.
    As to RC, perhaps its teaching style is not effective for you. I know I’ve learned a helluva lot from it.

  • Hank Roberts // August 21, 2008 at 3:14 pm

    George, look up Doonesbury +”Teach the Controversy” — doonesbury/2006/03/05/

  • Paul Middents // August 21, 2008 at 3:21 pm

    Mr. Pete,

    Nice sidestep to the 13 year old girl publishing in JAMA. Are you referring to 9-year-old Emily Rosa by any chance? She coauthored with her mother a takedown of therapeutic touch–somewhat akin to shooting fish in a barrel.

    Do you find “truth” on teenager Kristen Byrnes’ Ponder the Maunder site? She now has a foundation named after her. That must increase her truthiness.

    Your original comment referred to a “monopoly on truth”. Clearly no individual has a monopoly but given a choice on who is more likely to have a handle on climate science “truth”, I’ll put my money on the pro’s publishing their work in the peer reviewed literature.

  • Paul Middents // August 21, 2008 at 3:37 pm

    ChuckG,

    Pat Frank is a PhD Chemist with 50 peer reviewed publications, but none relating to climate.

    I would be very interested in a discussion of the Pat Frank/Gavin Schmidt exchange. Gavin reluctantly spent a great deal of time extracting from Frank the basis for his “model”.

    Please do tell us what you found unconvincing or inconsistent in Schmidt’s responses to Frank.

  • TCO // August 21, 2008 at 4:25 pm

    Mr. Pete: Thanks for trying to answer me. I still think that this expedition and its partial results were over-touted and under-delivered.

  • Rainman // August 21, 2008 at 4:30 pm

    Tamino: Back issues are not pleasant. (I’ve tweaked my back a few times in Aikido.)

    Find a good bio-physics certified chiropractor. There are some butchers out there, but a good one can work miracles.

  • TCO // August 21, 2008 at 5:32 pm

    Ray:

    I spent several weeks at Langley. Most PIs were driving out across the airbase before 1700. The place was a graveyard on weekends. Nothing lazy. But not a pressure cooker. Maybe your 80-hour weeks are not representative.

    P.s. I did not claim to be a scientist.

    [Response: I think 80-hr weeks are actually pretty typical. But it's not because managers are looking over our shoulders cracking a whip. It's because when a problem or question gets hold of you, it won't let go. My wife complains that when I get that "glazed" look in my eye ... she knows I'm in another universe, and it's not easy to pull me back.

    Yeah we work hard, and we work long hours. But we do it because we love it, and we can't *not* do it.]

  • Deech56 // August 21, 2008 at 7:37 pm

    I would add to Tamino’s comment regarding what drives scientists - with grad school and post doc training, a real job may not be a reality until one is in his or her early-to-mid 30s. It’s a fun life, but the lentil soup and PB&J routine gets old (especially if you have a family). In what other field can you say that you are the first to discover something? If you don’t have a driving curiosity there’s really no reason to go through the hassle.

  • Deech56 // August 21, 2008 at 7:55 pm

    Oh, and going through the “Caspar and Jesus paper” post - the math may be challenging, but in reading the description of the publication of the papers, it seems that the author does not have a strong handle on the foibles of manuscript publication.

    For example: not every revised MS is sent back to the reviewers, a rejected paper usually does find another home, and difficulties with sequential publishing and cross referencing do exist. To automatically assume that there is a great conspiracy is a stretch, IMHO.

    The paper should be judged on its merits, not assumptions regarding its submission history. Does the recent von Storch, et al. manuscript provide additional confirmation? (I know he indicated before Congress that redoing the MBH analysis according to the suggested methods led to - a hockey stick.)

    von Storch, H., E. Zorita and J.F. González-Rouco, 2008: Assessment of three temperature reconstruction methods in the virtual reality of a climate simulation. International Journal of Earth Sciences (Geol. Rundsch.) DOI 10.1007/s00531-008-0349-5

  • David B. Benson // August 21, 2008 at 9:18 pm

    TCO // August 21, 2008 at 5:32 pm — Tamino’s response suggests that science is a mental addiction for (some) scientists. :-)

    Writing computer programs can be like that as well.

  • george // August 21, 2008 at 9:41 pm

    “I am always happy to meet one with the real blazing Feynman-like curiosity and ability.”

    Feynman was genuinely curious — as are (I think) most scientists. Feynman may have been more curious about a wider range of subjects than some scientists, but based on my personal experience working with scientists, I can say with some confidence that he had no monopoly on that trait by any means.

    The operative word above is “genuinely”. I’m not sure Feynman would have had much (or any) patience for a lot of the “science” that is pursued these days (on blogs and elsewhere) in the name of “curiosity.”

    I never met the man, but based on what I have read about him, I suspect he might even have had some rather unkind words to say about some of it.

  • TCO // August 21, 2008 at 9:58 pm

    I’ve seen both sides of the fence, guys, and would be wary of the tautologies, of the self-licking ice cream cones. I’ve spent significant time at a couple of national labs as well. We’re not talking investment-banker hours there. It’s much more a 9 to 5. And lots of people even just in middle-manager business jobs work hard, guys. Don’t be so quick to paint yourselves as saints…

  • David B. Benson // August 21, 2008 at 10:38 pm

    george // August 21, 2008 at 9:41 pm — We already known what Feynman would call it: cargo cult science.

    TCO // August 21, 2008 at 9:58 pm — Are saints mentally addicted too? :-)

  • Ray Ladbury // August 22, 2008 at 1:18 am

    TCO, You often do not see me at work on weekends–rather, you’re likely to find me with my face bathed in the glow from my laptop. Or if I’m testing, I’ll be at the accelerator for 16-20 hours a day (weekends are typically the only time we can get beam). I would contend that you really aren’t going to learn how science works from a visit to Langley–even one that lasts “a few weeks”.
    I agree with Tamino, my job is also my hobby–but don’t ever let anybody try to tell you it ain’t work.

  • ChuckG // August 22, 2008 at 1:29 am

    Paul Middents // August 21, 2008 at 3:37 pm
    Reread my post. It is supposed to be neutral in tone. I only wished for the Frank/Gavin exchange to be fleshed out over here on an Open Thread rather than cluttering up that thread. Their latest exchange has fleshed it out.

    I knew who Pat Frank was. (dhogaza doesn’t.) Frank clearly had better cred than Lord Monckton. Which is what piqued my interest.

    My math skills are old and weak. As am I. Probably the last time I questioned someone’s math was in early 1964. Sometimes I can even follow the very clear Tamino posts!

    Phenology clearly supports GW. And there is no reason why I shouldn’t assume AGW is the cause.

  • MrPete // August 22, 2008 at 1:46 am

    Paul M - with your more nuanced response, I agree with you, except that I’ve yet to find any pockets of folk who tie dollars and truthiness together.

    (I too have marveled at a teen having a “foundation” — your mention prompted me to actually check it out. The no-surprise part: it’s to allow contributions to her college education. The boring part: it is not what most think of as a real foundation: not a non-profit org’n, not reg’d w/ IRS (or at least not in the main online database of same.) I wouldn’t expect this to go further than a young woman taking advantage of her 15 minutes of fame to pay for a college education. Here today, gone to maui.)

  • MrPete // August 22, 2008 at 2:00 am

    Tamino sez: “My wife complains that when I get that “glazed” look in my eye … she knows I’m in another universe, and it’s not easy to pull me back.”

    Heh. FWIW, it’s called flow. Lots of us have the bug. Your site is one of the places I go for an intentional flow-break ;).

    Some studies have given helpful insights about flow. Arrange work to minimize flow breaks: a two-minute distraction can easily cost 15 minutes of flow. (Nice intro by the guy who first wrote about it in 1991: http://psychologytoday.com/articles/index.php?term=19970701-000042&page=1)

    [Response: How true! My wife has finally figured out that if she interrupts me for one minute, it can sabotage a train of thought which has been proceeding for a lot longer than that. When I'm really in the groove, I try to isolate myself; it's not always easy.]

  • Gavin's Pussycat // August 22, 2008 at 9:58 am

    Re: flow.

    My wife has finally figured out that if she interrupts me for one minute, it can sabotage a train of thought which has been proceeding for a lot longer than that.

    Same here. So true, so true.

    BTW this seems to be typical for the Asperger syndrome, found a lot in both scientists and IT people. I wonder how many of us have it?

  • Ray Ladbury // August 22, 2008 at 12:23 pm

    Apropos of Flow:

    A doctor, lawyer and physicist are talking about whether it’s better to have a wife or a mistress.

    “Much better to have a mistress,” says the doctor emphatically. “You can have fun with her as long as she looks good and then you can dump her.”
    “Woah,” says the Lawyer, “that’s dangerous. You could get hit with a palimony suit and lose half of everything you own. It’s much better to have a wife. It’s a contractual, legal arrangement. Everybody knows what’s expected. Much better to have a wife.”
    “You’re both wrong,” says the physicist. “It’s better to have both.”
    “Whoa, dude!” say Lawyer and Doctor simultaneously.
    “Yeah,” says the physicist, “that way, when it’s 11:00 and you’re not home, your wife thinks you’re with your mistress. Your mistress thinks you’re with your wife, and you can be at the lab getting some real work done.”

    [Response: That's hilarious! I guess I'm gonna have to get a mistress...]

  • george // August 22, 2008 at 4:02 pm

    A wife who is herself a scientist is best of all (and orders of magnitude simpler).

    Then, when you are at the lab, you know that she is also at the lab (not necessarily the same one) and you don’t need to worry about her gallivanting about cheating on you.

    And what could be more distracting to your “flow” of thoughts than having to worry about two people (wife and mistress) gallivanting about cheating on you?

    BTW
    If this thread is for “discussion of things global-warming related, but not pertinent to existing threads”, maybe you need to start a “pertinent” thread.
    Or maybe “impertinent” would be a better description. I hope (for your sake) that your wife does not read this stuff.

  • Hank Roberts // August 22, 2008 at 6:41 pm

    Chuckle. I pointed it out to my wife.

    She was deep into a complicated Excel spreadsheet, set up to lay out and explain knitting patterns to friends who are having trouble following a pattern that came with poor written instructions (most of which are pretty poorly written, like “for the other sleeve reverse the previous steps”).

    > Asperger’s
    Ding!

  • Hank Roberts // August 22, 2008 at 6:54 pm

    One last thought on this tangent and I’ll leave it — somewhere recently I noticed research saying that when people are presented with two different conversations or audio programs, some of us can manage to follow both of them at the same time; other people consistently find the situation intolerable because of interference. The story speculated this may have to do with the rate at which the brain hemispheres exchange information from the two ears.

    Earlier I recall much study of how oral input can displace visual; how visual imagery can displace input from the eyes; and so forth.

    Didn’t turn up a cite in a quick search — just to say this may well have a whole lot of factors involved.

    But, hey, before I married, I had at times dated women who basically thought of only one thing at a time. They found me incomprehensible because I do branching trees and parallel threads in everything I think about and much of what I talk about.

    Got lucky at last. Grateful. Not complaining.
    (Hi honey!)

    [Response: My wife is like you: a multitasker par excellence. I'm a single-thread guy; when I get on a train of thought the rest of the universe had better not interfere! We complement each other nicely.]

  • Jason Bint // August 22, 2008 at 7:26 pm

    For those wondering, the Colorado tree samples are here:

    http://www.climateaudit.org/data/colorado/

    The NAS panel report is, I believe, what is being thought of. It has a discussion of r2 and other issues starting on page 92:

    VALIDATION AND THE PREDICTION SKILL OF THE PROXY RECONSTRUCTION

    And starting on page 112

    Criticisms and Advances of Reconstruction Techniques

    With the passage being thought of here on page 113:

    Regarding metrics used in the validation step in the reconstruction exercise, two issues have been raised (McIntyre and McKitrick 2003, 2005a,b). One is that the choice of “significance level” for the reduction of error (RE) validation statistic is not appropriate. The other is that different statistics, specifically the coefficient of efficiency (CE) and the squared correlation (r2), should have been used (the various validation statistics are discussed in Chapter 9). Some of these criticisms are more relevant than others, but taken together, they are an important aspect of a more general finding of this committee, which is that uncertainties of the published reconstructions have been underestimated. Methods for evaluation of uncertainties are discussed in Chapter 9.

    Reference:
    http://books.nap.edu/openbook.php?record_id=11676&page=

  • Paul Middents // August 22, 2008 at 7:28 pm

    ChuckG // August 22, 2008 at 1:29 am

    Thank you for pointing out the latest exchange (Aug 21) between Frank and Schmidt. Should Frank respond, it will be interesting to see if his reply addresses the use/abuse of logarithms.

    Eli Rabett is the man to chronicle and comment on a train wreck of this length and magnitude. It is reminiscent of the epic exchanges on Dot Earth between Arthur Smith and Gerhard Kramm, who defended the Gerlich and Tscheuschner paper that purports to disprove the entire greenhouse effect. Gerlich and Tscheuschner themselves eventually entered the fray.

    http://rabett.blogspot.com/2008/02/all-you-never-wanted-to-know-about.html

    The parallels between Frank and Gerlich and Tscheuschner are striking. Grandiosity comes to mind first. Gerlich and Tscheuschner title their work:

    “Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics”

    Frank’s subtitle states:

    “The claim that anthropogenic CO2 is responsible for the current warming of Earth climate is scientifically insupportable because climate models are unreliable”

    It doesn’t get much grander than that. It is also noteworthy that neither found a home for their work in the peer reviewed literature.

    All the protagonists boast advanced degrees in the physical sciences. Frank had his very aggressive defender in the early exchanges with Schmidt: Gerald Browning, a retired atmospheric scientist. Gerlich and Tscheuschner had Gerhard Kramm, atmospheric scientist, University of Alaska.

    Both episodes required tenacious pursuit by very knowledgeable people (Schmidt and Smith) before the essential flaws in the scientists’ reasoning became apparent to non-specialists.

    We owe professionals like Gavin Schmidt, Arthur Smith and our lop-eared friend at the “run” a debt of gratitude for their willingness to confront, at great length, the most pernicious arguments in support of denying and delaying. These are the credentialed scientists who really believe they have found physically based reasons that we have nothing to worry about.

    Tamino is equally heroic in his on-going confrontation via this blog of the pseudo-scientific underbelly—those second tier amateur auditors who would lie with statistics. These are the folks who, via their blogs, really seem to have the ear of and provide the ammunition for the skeptic crowd.

  • MrPete // August 22, 2008 at 7:36 pm

    Hank, that’s a known test. Typically men are one kind (single-focus) while women are the other (multi-focus). Definitely not universal. The typical test is two simultaneous audio streams. Result: single-focus people are able to listen to a single source. Multi-focus people go nuts because they can’t tune out one of the sources.

    Me? I get in the flow and tune everything out :)

    I dunno about Aspies, but we were guardians for an ADHD girl for a while. Horrible at school work but Taco Bell loved her: she could operate all seven work stations simultaneously. She was their best night-time closer by far. ADHD is not worse, just different :)

  • Jason Bint // August 22, 2008 at 7:48 pm

    Or possibly this finding in the Wegman report:

    Based on discussion in Mann et al. (2005) and Dr. Mann’s response to the letters from the Chairman Barton and Chairman Whitfield, there seems to be at least some confusion on the meaning of R2. R2 is usually called the coefficient of determination and in standard analysis of variance; it is computed as 1 – (SSE/SST). SSE is the sum of squared errors due to lack of fit (of the regression or paleoclimate reconstruction) while SST is the total sum of squares about the mean. If the fit is perfect the SSE would be zero and R2 would be one. Conversely, if the fit of the reconstruction is no better than taking the mean value, then SSE/SST is one and the R2 is 0. On the other hand, the Pearson product moment correlation, r, measures association rather than lack of fit. In the case of simple linear regression, R2 = r2. However, in the climate reconstruction scenario, they are not the same thing. In fact, what is called β in MBH98 is very close to what we have called R2.
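To make the distinction in the quoted passage concrete, here is a minimal sketch (not from the Wegman report; the function names and toy numbers are my own, purely illustrative). For a simple linear regression fit by least squares the two statistics agree, but for a reconstruction with, say, a constant bias, r2 stays high while R2 collapses:

```python
from statistics import mean

def coeff_of_determination(obs, pred):
    """R2 = 1 - (SSE/SST): 1 for a perfect fit, 0 if the fit is no
    better than just using the mean, negative if it is worse."""
    m = mean(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - m) ** 2 for o in obs)
    return 1.0 - sse / sst

def squared_correlation(obs, pred):
    """Pearson r squared: measures association, not lack of fit."""
    mo, mp = mean(obs), mean(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_p = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (var_o * var_p)

# Toy data, fit by ordinary least squares (slope and intercept):
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.2, 7.9]
mx, my = mean(x), mean(y)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
fitted = [slope * a + intercept for a in x]

print(coeff_of_determination(y, fitted))  # matches r^2 here (up to rounding)
print(squared_correlation(y, fitted))

# A "reconstruction" with a constant bias: r^2 is unchanged (it ignores
# offsets), while R2 goes strongly negative.
biased = [f + 5.0 for f in fitted]
print(coeff_of_determination(y, biased))
print(squared_correlation(y, biased))
```

This is exactly Wegman’s point: r2 rewards tracking the wiggles, while R2 also penalizes getting the level wrong.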

  • Gavin's Pussycat // August 22, 2008 at 9:05 pm

    Jason, thanks, that is it.

    Wahl and Ammann show that r2 is inappropriate — which it obviously is. R2 makes sense also for reconstructions. But how does it differ from RE? Does it?

  • Gavin's Pussycat // August 22, 2008 at 9:54 pm

    Jason, the answers are in W & A. R2 and RE are the same (or nearly so). Should have RTFP :-)

  • ChuckG // August 22, 2008 at 11:11 pm

    Paul Middents // August 22, 2008 at 7:28 pm

    Climate Progress is visited right after Rabett is visited right after Open Mind which is visited right after RC. Six days a week. Even retirees need a day off. RC since June ‘06.

    I am intimately familiar with the potential weaknesses displayed by physicists and lesser mortals when they stray much outside their core competence(s), having had to deal with it on and off over the years before I retired in ’93.

    So my expectation was that Frank would be discredited.

  • Steve Reynolds // August 22, 2008 at 11:43 pm

    “I would contend that you really aren’t going to learn how science works from a visit to Langley–even one that lasts “a few weeks”.”

    One more data point: From when I worked at JPL, the majority of the scientists (certainly not all) that I worked with were 9 to 5 types.

  • Jason Bint // August 23, 2008 at 12:03 am

    GP: “Jason, the answers are in W & A. R2 and RE are the same (or nearly so).”

    I believe if you look in “W&A’s” SI for their Climatic Change paper, they come up with a figure of .52 for the RE. How that corresponds to R2 or r2, or why R2 and r2 are only the same in simple linear regression, I have no idea: I’m a librarian, not a statistician. So your point is lost on me.

    All I can say is that the NAS said “uncertainties of the published reconstructions have been underestimated” in relation to the issues, that the “choice of ’significance level’ for the reduction of error (RE) validation statistic is not appropriate” and “different statistics, specifically the coefficient of efficiency (CE) and the squared correlation (r2), should have been used” and Wegman said ” In the case of simple linear regression, R2 = r2. However, in the climate reconstruction scenario, they are not the same thing. In fact, what is called β in MBH98 is very close what we have called R2.”

    I am not and was not editorializing, just pointing out the references that may have been what you were looking for.
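    For anyone trying to follow the RE/CE part of the argument, here is a hedged sketch of those validation statistics as commonly defined in the dendroclimatology literature (the function names and toy numbers are mine, purely illustrative). Both compare the reconstruction’s squared error to that of a “no-skill” benchmark mean; RE uses the calibration-period mean, CE the verification-period mean, so CE is always the tougher test:

```python
from statistics import mean

def re_stat(obs_ver, pred_ver, cal_mean):
    """Reduction of Error: skill relative to the calibration-period mean.
    Positive means the reconstruction beats that (possibly stale) mean."""
    sse = sum((o - p) ** 2 for o, p in zip(obs_ver, pred_ver))
    ssm = sum((o - cal_mean) ** 2 for o in obs_ver)
    return 1.0 - sse / ssm

def ce_stat(obs_ver, pred_ver):
    """Coefficient of Efficiency: same formula, but the benchmark is the
    verification-period mean, so CE <= RE whenever the two means differ."""
    return re_stat(obs_ver, pred_ver, mean(obs_ver))

# Hypothetical toy numbers: verification-period observations sit well
# above the calibration-period mean, as during a strong trend.
cal_mean = 0.3                      # mean of the calibration period
obs_ver = [1.0, 1.2, 0.9, 1.1]      # verification-period observations
pred_ver = [0.9, 1.1, 1.0, 1.0]     # reconstruction over the same period

print(re_stat(obs_ver, pred_ver, cal_mean))  # high: beating a stale mean is easy
print(ce_stat(obs_ver, pred_ver))            # much lower: the tougher benchmark
```

    This gap between an impressive RE and a modest CE is one reason the NAS panel flagged the choice of validation statistic as affecting the reported uncertainties.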

  • Ray Ladbury // August 23, 2008 at 2:04 am

    Steve, what division was that? All I can say is that my own experience is not consistent with that. Even those scientists who do go to hearth and home are usually buried in their research ’til the wee hours. Yes, there are 9-5ers in science. Usually not for long, though.

  • TCO // August 23, 2008 at 2:53 am

    2 months at Langley
    4 months LANL
    3 months at a military lab
    several years in and around R&D within F500
    4 years getting the union card

    Contrasted with several years in business/marketing/military/consulting/engineering

    My take: R&D very 9 to 5ish. The only guys in the lab at 2300 are graduate students in universities. PIs are not (in academia, government, or industry).

    BTW, I would “James Annan Bayesian BET” your 80 hour weeks are BOTH non-representative and exaggerated. It’s a common pattern for people to over-report workload (even in law and consulting and I-banking and military service where there REALLY are some sweat shops). It’s a commonly written about phenomenon. Check this out:

    http://www.google.com/search?q=exaggeration+of+hours+worked+per+week&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

    P.s. Yeah, I’ve swung through JPL for a few days too. Felt just like Langley in terms of work pace!

    [Response: Math is my work. It's the 1st thing I think of when I get up in the morning; my biggest problem falling asleep at night is that I'm still thinking about it. I don't go to the toilet without my notebook so I can scribble more equations while taking a crap. When my wife and I go on vacation, she sometimes slaps my hand when I pull out the notebook at the dinner table in the fancy restaurant. But even if she manages to get me to put away the pen and paper, I can still work on it; eventually you develop the ability to do it in your head. It's a passion, I love it, but it's still work and it's still my job. And 80 hours a week is an UNDERestimate of how much I do it.

    Based on my experience, I'm hardly the only one. Many don't spend more than 40 hr/week officially "at work," in fact many are in the office/lab/classroom a lot less than that -- because all the distractions can interrupt doing *real* work. If you estimate how hard and how long we work by how long the guys you've worked with are in the office or the lab, either your estimate is way off or you've been looking at the wrong guys.

    I don't know why you insist on calling us liars. Maybe you're just jealous of the fact that we love our work so much we can't get enough of it.]

  • Steve Reynolds // August 23, 2008 at 3:26 am

    “…what division was that?”

    Sorry, that was too long ago (1976-1977) to remember. The slow-paced environment was one reason I left, though. I remember, after telling management that my project had progressed as far as it could until some higher-level decisions were made, being advised by a co-worker not to do that. He said I should keep a project going until I was sure what would replace it.

  • Gavin's Pussycat // August 23, 2008 at 5:48 am

    I am not and was not editorializing, just pointing out the references that may have been what you were looking for.

    Yes, and I was thanking you: indeed, the second one was. The RTFP was to myself, thinking aloud.

  • Gavin's Pussycat // August 23, 2008 at 6:08 am

    > eventually you develop the ability to do it in your head

    Yes, sure. The main risk is losing good ideas. I resisted for many many years getting a mobile phone; now that I have one (forced by my boss), I notice that it is worth its weight in gold as a primitive notebook ;-)

    No, doesn’t do equations, but it remembers for me.

    As to work, this seems relevant:

    http://www.paulgraham.com/opensource.html

    “How scientists work” would be worth its own post/thread.

  • Deech56 // August 23, 2008 at 10:24 am

    TCO, it’s not the hours spent “at work” that defines the passion; it’s the whole path to get there and the having to deal with failure (worked R&D for a biotech company - most projects fail at some point). Since scientists get paid to do thinking, the places in which “thinking” can happen are not bound by the walls of the lab. Besides, I thought the original point was whether scientists were curiosity-driven.

    Oh, and in case anyone’s missed it, John Mashey has a great post over at Deltoid. I guess one can say he’s thought a little about the subject.

  • TCO // August 23, 2008 at 2:50 pm

    Practicing scientists may be more curiosity-driven than people in some other fields (manufacturing, marketing), but not so much as they pat themselves on the back for, and not so much as the image/stereotype suggests.

  • Ray Ladbury // August 23, 2008 at 6:01 pm

    TCO, you make a lot of assumptions–all they do is show you don’t have many close interactions with scientists. It is often all my wife can do to keep me from taking my laptop to a party.
    It seems important to you that you can believe this. Fine, I’m not here to disillusion you–only to say that this is inconsistent with my experience.

  • george // August 23, 2008 at 6:07 pm

    Science is really a lifestyle rather than a job.

    The whole “how many hours do you work thing” is overblown anyway — absurd, really.

    It’s not the number of hours you work but what you accomplish.

    As with other very creative careers like art, the real measure of a good scientific career is certainly not the “number of hours worked.”

    I’ve worked with lots of scientists over the years and I’d say there is a significant difference between a scientific career and many other careers — ie, it’s not simply an imagined or “stereotyped” difference.

    the difference is this: scientists never really “go home” from work. They are always thinking about the latest problem, even in their dreams.

  • Hank Roberts // August 23, 2008 at 6:07 pm

    TCO, your basic point seems to be that everybody lies and they started doing it first.

    This is a world view of sorts. Is it your best?

  • MrPete // August 23, 2008 at 6:08 pm

    Curious: is John Mashey here the SGI John Mashey?

  • Deech56 // August 23, 2008 at 6:32 pm

    MrPete - the John Mashey who posted at Deltoid is. I might assume he’s the one who also posts here.

  • dhogaza // August 23, 2008 at 7:44 pm

    Yes, he is.

    Regarding Pat Frank, I was confusing him with Pat Michaels …

    Unless I’m confusing Pat Michaels with someone else, which, given that I’m easily confused …

  • Paul Middents // August 23, 2008 at 8:07 pm

    MrPete,

    Read Mashey’s entire post on Deltoid. It is the best “Climate Science How-to” ever written. Along the way you will find the answer to your question revealed.

    Paul

  • MrPete // August 23, 2008 at 10:05 pm

    Thanks, yes the Deltoid article “tells all.” Good timing for my question :)

  • Lazar // August 24, 2008 at 1:06 am

    TCO;

    reforming the reformers

    If they are reformers, are they doing something which isn’t being done, and/or improving on what is being done?

    The Changing Character of Precipitation, Trenberth et al., BAMS

    Some excerpts…

    The diurnal cycle in precipitation is particularly pronounced over the United States in summer (Fig. 3) and is poorly simulated in most numerical models.

    [...] some models are wrong everywhere.

    [...] The foremost need is better documentation and processing of all aspects of precipitation.

    [...] a need for improved parameterization of convection

    [...] the improvement of “triggers”

    They’re not whining…

    [...] Accordingly, at NCAR we have established a “Water Cycle Across Scales” initiative to address the issues outlined above, among others.

    not just one voice either (I liked this but it wasn’t in the search.)

    So we have the audit of MBH98: Steve raises some theoretically valid criticisms, but a similar conclusion about the reliability of MBH98 could have been reached by comparison with other reconstructions, with less effort and time. The auditing approach, which can show that a methodology is approximately wrong, cannot show that it is approximately right. Follow-up/replication can show that the results are reasonable, or questionable, but not validate the methodology, which may produce the right answer for the wrong reasons (providing the answer is right, I don’t see that as a great problem). Comparisons can tell us what a reasonable result looks like, and the errors involved, and can be productive of future work.

    I think it would be good if Steve did a reconstruction. Show the world how to do it right. Join the Team!

    Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it, and have that spread by the international media, or by unreasonable personal attacks, e.g., and particularly, on Lonnie Thompson.

  • Lazar // August 24, 2008 at 1:23 am

    … on research scientists. Those I know do not work as much as 80 hours, but certainly more than 40. Every one takes their work home. There are fewer distractions outside the lab, and working at night is the best. When they’re not doing formally recognizable work, they’re thinking. The intensity of the work is great. I have never met a research scientist who did it for the fabulous money and cushy workload.

  • TCO // August 24, 2008 at 2:49 am

    I think most business professionals think about work at home.

    Laz: I think there is some utility to Steve giving the system a little bit of a kick in the ass. But at this point, fixes/improvements are very unlikely to come from him or his ilk. He has not published for 3 years now. Set aside even the idea of reconstructions…in many cases, he doesn’t even define the EXTENT (numeric extent) of the flaws that he sees. Like he blathers about rain series or something in MBH, but does not say how much it changes the answer to switch them out. He’s all about PR…and very little about mapping parameter space…about understanding sensitivity. I contrast this with Burger and Cubasch’s full factorial analysis. At this point, the main bad thing is that a lot of my fellow conservatives are sitting in echo chambers and listening to Steve and being amen choirs…and thinking that “the man” is not listening to Steve…when Steve is not even clearly making points.

    I think the ideas themselves are fascinating. But it is a shame to see them approached so tendentiously.

    For instance, the red noise nature of the simulation. Steve has avoided (in a John Edwards/Bill Clinton manner) coming to grips with defining how his VERY SAMPLE-DEPENDENT “red noise” gives more of an effect than simple red noise. He ought to at LEAST show both cases. As it is, it’s probably circular logic.

  • Lazar // August 24, 2008 at 10:21 am

    MrPete, thanks… I use Steve’s network details for coordinates. Google automatically converts to ddmmss. Any ideas?

  • Lazar // August 24, 2008 at 12:20 pm

    Hank,

    feedbacks — huge rainfalls, extreme erosion, lots of fresh carbonate rock exposed, more rainfall

    That seems to run contrary to

    The long-term carbon cycle, fossil fuels and atmospheric composition
    Robert A. Berner
    Nature 2003

    The deposition of carbonates derived from the weathering of carbonates is not shown because these processes essentially balance one another over the long term

    Weathering of silicates -> deposition of carbonates is a potential negative feedback
    Decomposition of deposited carbonates is positive.
    Weathering of organic sediments (kerogen) is positive.

  • Hank Roberts // August 24, 2008 at 6:02 pm

    Lazar, rate of change.

    Did you read the paper and look at the illustrations? Follow up the footnotes and check citing articles?

    On the longer time scale, the entire PETM is just a little blip.

    On the human time scale, a few decades or centuries of extreme precipitation events is a disaster.

    Look at the rainfall in the American Southwest recently — there are large alluvial fans of debris below mountain valleys on which people have built houses, relying on the geologists’ evidence that no flash floods have reached that far down the drainage since the last big episode of extreme precipitation around the end of the last ice age. Remember how we’ve had a period of ten thousand years of unusually stable climate?

    Those areas have had floods recently again, from unusually large thunderstorms.

    Look again at the paper I cited:
    http://ic.ucsc.edu/~jzachos/eart120/readings/Schmitz_Puljate_07.pdf

  • Steve Reynolds // August 24, 2008 at 7:15 pm

    “Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it…”

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

  • carl // August 24, 2008 at 8:07 pm

    Lazar says,
    “I think it would be good if Steve did a reconstruction. Show the world how to do it right. Join the Team!”

    That’s not what he does. He audits, using his specialty to do so. It’s not his job to release reconstructions; it’s the paleoclimatologists’ job to do that.

  • MrPete // August 24, 2008 at 8:47 pm

    Lazar: this is a typical data format/conversion challenge. The data you’re using is
    ddd.mm -ddd.mm
    38.46 -104.59

    Change to the following for Google Maps:
    ddd mmN ddd mmW
    38 46n 104 59w (those are spaces)

    How to know if it is ddd.dd or ddd.mm? See if any fractions are higher than 59 :)
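For anyone automating this conversion, here is a small sketch (the function names are mine, and it assumes the packed value really is degrees.minutes, as in Steve’s network details):

```python
def ddmm_to_google(lat_ddmm, lon_ddmm):
    """Convert packed 'ddd.mm' degrees-and-minutes (e.g. 38.46 = 38 deg 46 min)
    to the 'ddd mmN ddd mmW' form that Google Maps parses correctly."""
    def split(v):
        d = int(abs(v))
        m = round((abs(v) - d) * 100)  # the fractional part IS whole minutes
        return d, m
    lat_d, lat_m = split(lat_ddmm)
    lon_d, lon_m = split(lon_ddmm)
    ns = "N" if lat_ddmm >= 0 else "S"
    ew = "E" if lon_ddmm >= 0 else "W"
    return f"{lat_d} {lat_m}{ns} {lon_d} {lon_m}{ew}"

def looks_like_ddmm(values):
    """MrPete's heuristic: if no fractional part exceeds .59, the column is
    probably degrees.minutes rather than decimal degrees."""
    return all(round((abs(v) - int(abs(v))) * 100) <= 59 for v in values)

print(ddmm_to_google(38.46, -104.59))  # -> "38 46N 104 59W"
```

Note the heuristic cannot prove a column is degrees.minutes; a decimal-degrees column could happen to keep all fractions below .60, so eyeballing a few known sites is still worthwhile.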

  • Gavin's Pussycat // August 24, 2008 at 9:05 pm

    carl,

    there are cats like that, that know precisely what to do, and are explicit about it, but after a visit to the vet cannot do it themselves any more :-)

    They are called ‘consultancy cats’. Steve is a bit like that.

    He has no excuse, having worked with CFR software and produced results (well, there were ‘issues’, but fixable if he wants to.)

    TCO sees through his game, while being ideologically motivated not to; all it takes is opening your eyes.

  • Lazar // August 24, 2008 at 9:59 pm

    MrPete,

    Great. Thanks!

  • Lazar // August 24, 2008 at 11:15 pm

    TCO;

    I think there is some utility to Steve giving the system a little bit of a kick in the ass.

    Okay.

    But at this point, fixes/improvements are very unlikely to come from him or his ilk. [...]

    Agreed.

    [...] At this point, the main bad thing is that a lot of my fellow conservatives are sitting in echo chambers and listening to Steve and being amen choirs

    The psychology of denial…
    The online types who clutter the forums or write political blogs are mostly activists/pamphleteers who have been spoonfed whatever the most recent, practical policy implications of conservatism were as an ideology labelled ‘conservatism’, without really understanding either the underlying philosophy nor the practical circumstances surrounding those policy choices. E.g. they repeat 80s policy as mantra without addressing new problems, or they imagine those policies are practical solutions. A mantra is small government- and market- fundamentalism. Any proposed ‘fact’ in agreement is taken as gospel. They have the ideology bug, it won’t be shifted by facts or questioning or argumentation ’till you’re blue in the face. They are not intellectually curious. They are best ignored as they are increasingly irrelevant. When the online conservative world read CA, they received great comfort. When I read CA, I was scared. I’m an old type of, very, very, very traditional conservative (read Edmund Burke, John Adams). Trust in and respect for authority / the experts / the village elders… because that is what works (mostly). We live in advanced technological societies experiencing unprecedented rates of technological and cultural change… the possibility that the elders are idiots means the train comes off the rails. It scared me more than the worst implications of AGW did. So I read around Steve, I read what the experts were saying. I don’t expect the pamphleteers to do so. Observe there is an increasing disparity between the pamphleteers and the conservative masses, including the base. In the primaries, the pamphleteers supported Fred Thompson and despised McCain. The base, even the base, trust the scientists according to recent polls. Ordinary conservatives, ordinary people of whatever political stripe, have great instincts. Politics is a sewer. ‘Trust the people’ — Churchill. Not pseudo-intellectuals, too much noise in their heads. 
The pamphleteers are not your companions, TCO, they can’t follow where you’re going and you can’t take them. Perhaps you’ll find better companionship here? Many if not most are liberals/lefties, but what does it matter? Man is not a political animal. Everything does not reduce to a political question. That’s a Marxist point of view. That’s the frame through which the pamphleteers view things. Politics sucks, man.

  • Lazar // August 25, 2008 at 12:35 am

    Hank,

    On the human time scale, a few decades or centuries of extreme precipitation events is a disaster.

    … I’m certainly not disputing, ditto for rates of change. I’ll do more reading to try and see where you’re going with the feedback thingy.

  • Lazar // August 25, 2008 at 12:56 am

    TCO;

    Steve has avoided (in a John Edwards/Bill Clinton manner) coming to grips with defining how his VERY SAMPLE DEPENDANT “red noise” gives more of an effect, than simple red noise.

    I note he gave a nod to that issue in his first post on the most recent Wahl & Ammann release… but only perceptible to those previously aware of the issue. I agree he needs to address it head-on.

  • Hank Roberts // August 25, 2008 at 1:44 am

    > He audits, using his specialty

    Chuckle. But look at his publications.
    Look at the people replicating his work and citing it.

  • Lazar // August 25, 2008 at 3:37 am

    Carl,

    That’s not what he does. He audits, using his specialty to do so. It’s not his job to release reconstructions; it’s the paleoclimatologists’ job to do that.

    Some people collect stamps.

    Once the rockets are up, who cares where they come down
    “That’s not my department!” says Wernher von Braun – Tom Lehrer

    In a facile manner (sorry), I’m trying to say the view is too narrow.

    If you want to improve things, and your strategy doesn’t work, you change strategy.

  • george // August 25, 2008 at 5:58 am

    “Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it…”

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

    While that may be the perception that some try to encourage, I think the truth is actually a bit different.

    I suspect that some scientists simply could not be bothered giving McIntyre and some others data because

    1) the scientists do not see these people as serious about doing real science (investing the time in properly understanding the issues, attending scientific conferences, publishing in peer reviewed journals, etc)
    2) the scientists have been turned off by the modus operandi of such people (with all that entails)
    3) the scientists have better/more important things to do with their time

    Admittedly, this interpretation of reality is a little more mundane than the alternative — not quite as fraught with high drama, mystery and intrigue:

    No conspiracies to hide data
    No “Piltdown man” frauds
    No “greatest hoax ever perpetrated on the American public”s
    No efforts to squelch whistle-blowers
    etc, etc

    But reality is often a little less exciting than we might have it.

  • Ray Ladbury // August 25, 2008 at 12:40 pm

    George, one thing to add to your list: science doesn’t audit. It replicates independently. If I wonder about a colleague’s data, I don’t ask him for the data and try to redo his analysis. I gather data myself and look to see if it is consistent with my colleague’s data. Scientists are not bean counters, or bristlecone counters. The problem scientists have with McIntyre is that his whole attitude and oeuvre betray a flawed understanding of the scientific method.

  • Dano // August 25, 2008 at 12:40 pm

    george @ August 25, 2008 at 5:58 am:

    Bingo.

    Good to see echoes of Dano this long after the events.

    Best,

    D

  • Hank Roberts // August 25, 2008 at 12:48 pm

    There’s a large literature available to which those folks could contribute if they worked out their method and described it so others could use it. The fact that they don’t makes people think it’s mostly grandstanding.

    Examples of doing it right:
    http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VD0-4SNHP07-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_version=1&_urlVersion=0&_userid=10&md5=84644c690299e23cdb381c93c323c336

  • Hank Roberts // August 25, 2008 at 12:50 pm

    better link:
    http://dx.doi.org/10.1016/j.im.2008.03.004
    general search:
    http://scholar.google.com/scholar?num=100&hl=en&lr=&newwindow=1&safe=off&scoring=r&q=spreadsheet+errors&as_ylo=2008

  • Hank Roberts // August 25, 2008 at 1:10 pm

    And, on why journals are better than blogs for making real contributions:
    http://ars.userfriendly.org/cartoons/?id=20080825

  • Lazar // August 25, 2008 at 3:18 pm

    Steve Reynolds,

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

    I don’t mean a genuine audit, finding genuine errors. I mean doing a hack PR analysis on the released data, e.g. conflating weather with climate to unfairly disparage the reliability of climate models, and spreading this confusion through the gullible media. Scientists are not going to want to release data if that’s the way the ‘auditors’ audit.

  • Lazar // August 25, 2008 at 3:34 pm

    … AGW is a serious issue. Scientists have responsibilities a) as citizens and b) as scientists. When data is abused for PR purposes, scientists have a responsibility to clear the mess up, which means dropping what they’re doing. If CA cannot act responsibly, they undermine the cause of open access. What do they want? Is it open access, or is it PR?

  • apolytongp // August 25, 2008 at 3:34 pm

    Ray:

    Mann did not gather data. What he did was put together an algorithm. An equation. A statistical machine that crunched input and generated output. A math function (or relationship, if you are pedantic) in the very broadest math sense. Examination of the algorithm, by running the same data through variants of the algorithm or other data through the same algorithm (MM, Huybers, WA, Wegman, VS-Z, etc.), is a reasonable and insightful thing to do.

    My issue with McIntyre is that he confounds the “message broadcasting” (mostly on his controlled blog, no less) so much with his analysis (doing isolated cases for effect, dotcom stock example, etc., only reporting the things that make MBH look bad, not quantifying things that sound bad but have major impact, etc.) that we get little real understanding of the algorithm-data couple. And his acolytes listening to him are not really curious either. Heck, I know I’m clueless…but I at least sorta have a feel for that–am not out in the Rumsfeldian unknown-unknown land. The amen choir just enjoys the social/political frolic and doesn’t try to think. (Zorita, Burger…even Mosher, JohnV…heck even bender occasionally) show more real curiosity.

    Mann is defensive and not thoughtful either…very ego-driven “young Turk” type scientist, rather than curious mathematician. Unwilling to show work so it can be examined, writing atrociously, not sharing all details of his math method, and seeing comments as a battle for PR rather than exploration of phenomena. While I understand that you all are politically sympathetic with him (e.g. Daily Kos discussion, liking Obama, etc.), I am cheered when I see Gavin’s Pussycat (e.g.) thinking independently, giving Tammy a check every now and then.

    P.s. (Pre-emptive strike) Please spare me any blather about how I don’t understand scientists, how great they are either. I’ve name-dropped enough to show that I have some experience. And I find that (e.g.) the union-card-holders most proud of their status are the ones who are the most minimal.

  • apolytongp // August 25, 2008 at 3:35 pm

    Crap, stupid wordpress: Should be under TCO (not a sock-puppet, but wordpress won’t let me have the nickname TCO. Someone already had it.)

  • apolytongp // August 25, 2008 at 3:37 pm

    Lazar: Scientists should release data and methods regardless of whether their opponents will misuse them. And it is only common sense to worry about people finding mistakes. Most publications have some. And most scientists accept a standard that is well shy of the attitude of mathematicians toward theorems. God knows, someone could come after me and find things I did wrong. It’s natural to not like that.

  • apolytongp // August 25, 2008 at 3:51 pm

    I’m missing an “or” and it should be “minor impact”. Sheesh, think I’m turning dyslexic.

  • Ray Ladbury // August 25, 2008 at 5:02 pm

    TCO/apolytongp, What would be learned by running the same data through the same algorithm? The most you can hope to catch with that is a simple error–or, if you think it’s occurring, outright fraud. That’s not how science works. Rather, the way a scientist would “replicate” the result would be to develop an algorithm and dataset independently and see if it produced similar results. That way, you test not just the data or the algorithm, but also the assumptions behind the analysis. If there is disagreement, it usually gets hashed out at conferences. McIntyre’s methodology is fundamentally unscientific. It is my impression that he doesn’t really have the discipline to submit to the usual peer-review process.

  • TCO // August 25, 2008 at 5:22 pm

    Ray:

    It’s a control. I just talked about how it’s interesting to vary the algorithm and/or the data set to explore the impact. I’m in shock that you would need to ask that.

    Oh, and given how complicated the algorithms (and datasets) are it’s doubly important to do a control. Heck, Wegman, MM, WA all have been expected to first demonstrate ability to replicate. And it was not trivial. Heck, Steve still doesn’t know how to get the error bars (the math equation is not shared in the paper).

  • TCO // August 25, 2008 at 5:24 pm

    I agree that McI lacks discipline and that going through peer review would be beneficial (it would tighten his wandering logic and explication), but he is too lazy or tendentious to do so.

  • george // August 25, 2008 at 6:29 pm

    the way a scientist would “replicate” the result would be to develop an algorithm and dataset independently and see if it produced similar results.

    I think what John Van Vliet did is a good example of this.

    His effort provided far more insight into whether the NASA GISTEMP algorithms and implementation (i.e., code) are doing what NASA claims — than simply recompiling and re-running the NASA code with the same (or even similar) data would have done.

    That is especially true when the algorithm and/or computer code might be less than transparent, as some have complained about (incessantly) in the case of GISTEMP.

    If the people who are supposed to be “repeating the experiment” have trouble even compiling the code, should I really trust that they got the implementation of Hansen’s algorithm right?
    Give me one good reason why.

    If they do manage to eliminate all the compiler errors and end up getting a different result from Hansen when they run the code on the NASA dataset, is it due to some error in the original algorithm? In the implementation? Or in the effort to replicate? Other than having yet a third party attempt to repeat the effort precisely, how does one decide?

    Like it or not, this is where expertise is highly relevant.

    Sorry, but I, for one, would have very little faith that someone who seemed to have so much trouble compiling would be able to implement an algorithm correctly. (I’m saying that based on my years doing scientific programming)

  • Ray Ladbury // August 25, 2008 at 6:39 pm

    TCO, you are missing the point. All you can answer by redoing an analysis is: Did they do it right? That’s not all that interesting, and depending on the disposition of the auditor, you tend to get a predictable antagonistic or sympathetic bias. Such an audit or control would be appropriate within a collaboration–after all, they have access to the data, code and people on a daily basis.
    Once the results are published, the time for audits is over. Then, the work must be independent. That’s how science works: audits are internal; replication is external and independent.

  • Steve Reynolds // August 25, 2008 at 8:45 pm

    Lazar: “I don’t mean a genuine audit, finding genuine errors. I mean doing a hack PR analysis on the released data, e.g. conflating weather with climate…”

    While I have not seen any evidence of McIntyre doing that, it still does not matter. Scientists should not withhold data that public funding has paid for (except possibly some military application data).

    At least for me, the negative effects on credibility associated with withholding data and methods are much worse than what any ‘hack analysis’ can generate.

  • Steve Reynolds // August 25, 2008 at 9:05 pm

    Ray: “Once the results are published, the time for audits is over. Then, the work must be independent. That’s how science works: audits are internal; replication is external and independent.”

    I’ve never seen that stated as part of the scientific method before. Is that Ladbury’s Law?

    Is that how the error in the satellite temperature measurements was resolved? Did RSS get their own satellite?

  • nanny_govt_sucks // August 25, 2008 at 9:06 pm

    All you can answer by redoing an analysis is: Did they do it right? That’s not all that interesting,

    Are you serious? What if the answer is “THEY DIDN’T DO IT RIGHT”. Are you saying you would be uninterested in this result?

  • apolytongp // August 25, 2008 at 9:35 pm

    No, I’m NOT missing the point. You can do more than that. You can explore parameter space and have a control.

  • MrPete // August 25, 2008 at 9:41 pm

    I can appreciate both sides of this internal/external tiff.

    I think it is worth reminding ourselves that falsification is a rather important element of science.

    The problem here is that so much of the controversial science work is qualitatively different from what we’re used to. In essence, it is statistical analysis of data sets.

    When both the data and the analysis are opaque, what does “falsification” mean in practical terms? Kinda hard to falsify when the data and analysis being compared are opaque.

    I can see why some of these things have emerged over time (e.g. good dendrochronology doesn’t depend on explaining all the data), and I can see that a new generation of dendros is/will take things to a new level.

    At this early stage in the development of a science, I think “did they do it right?” is a valid, even important question.

  • David B. Benson // August 25, 2008 at 9:47 pm

    Arthur Andersen & Co. used to audit. Do you know why they don’t anymore? And why this might be relevant to one of the comment items just now?

  • David B. Benson // August 25, 2008 at 9:49 pm

    Steve Reynolds // August 25, 2008 at 9:05 pm — In at least parts of physics and chemistry, nobody accepts the results (except maybe the authors) in a paper until the effect has been independently replicated.

    Cold fusion, anyone?

  • Lazar // August 25, 2008 at 10:08 pm

    TCO;

    Scientists should release data and methods regardless of whether their opponent will misuse it.

    That is ethics. But how to pragmatically achieve open access given human nature; how do scientists respond to misuse/PR spin of data and methods? If open access is what CA/Stockwell/Watts really, really want, they need to drop the PR. Show responsibility. Gain trust. Otherwise, they’re just harming the cause.

    From a selfish point of view (I’d like access), and from a societal point of view of greater technological progress, I’d agree in the general case that they “should” release data and code where copyright and grant terms allow.
    For this particular issue, at this particular time, I’d side with scientists selectively releasing their data and code until the dust has settled. That point of view is entirely due to political and corporate corruption and the pressing nature of the issue(s).

  • Ray Ladbury // August 25, 2008 at 10:56 pm

    Steve Reynolds, One of the precepts of the scientific method is INDEPENDENT verification. How do you remain independent if you are sharing data, codes, ideas, etc.? These things get shared internally within a research group. The methodology is summarized in the paper. If reviewers do not understand how the research was done from the description, they will ask for clarification. Once they are satisfied, the research is published, and subsequent efforts have to be independent.

    The reason why “did they do it right?” is not all that interesting is that the answer emerges in the process of independent replication, and by that time, any incorrect research will likely have been supplanted.

    MrPete says: “The problem here is that so much of the controversial science work is qualitatively different from what we’re used to. In essence, it is statistical analysis of data sets. ”

    Huh? Exactly how is statistical analysis of datasets anything new in science?

    And falsification? Dude, science has SO moved beyond Karl Popper! There are information theoretic approaches, Bayesian approaches… Yes, falsification is an important aspect of science, but it is not the entire story. Do you really expect evolution to be falsified? Gravity? Climate science is over a century and a half old. I rather doubt the basic model of Earth’s climate will look dramatically different in 100 years. Details will change. We’ll understand inter-relations between forcers and feedbacks better, and we may even find a few new forcers, but the outline of the theories would likely be recognizable to a climate scientist of our time.
    I really don’t think anything I’ve said is all that controversial among people who actually DO science.

  • HankRoberts // August 25, 2008 at 11:11 pm

    “… On a non-drought related point – it’s fascinating that sceptics like to use, when it suits their purpose, the same temperature series they discredit to prove their latest “cooling” idea. Surely they can’t have it both ways. The excellent Wood-for-Trees website allows one to plot, compare and contrast, in as many ways as you can imagine, the relative differences between the two ground-based and two satellite temperature analyses. The trends are very similar, with the major differences being in how GISTEMP treats averaging of stations across the Arctic, and in the different baseline periods used to compute the ‘temperature anomaly’….”

    Found here:
    http://bravenewclimate.com/2008/08/24/dr-jennifer-marohasy-ignores-the-climate-science/

    Points to here:
    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.146/mean:12/plot/uah/from:1979/mean:12/plot/rss/from:1979/mean:12/plot/gistemp/from:1979/offset:-0.238/mean:12
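    [The plot operators in that Wood-for-Trees URL (mean:12 and the two offsets) boil down to two small operations: smooth each series with a 12-month mean, then shift it onto a common baseline. A minimal sketch of both, using synthetic numbers rather than real HadCRUT/GISTEMP data — only the offset values and series names come from the linked plot; everything else below is made up for illustration:]

```python
# Sketch of the two Wood-for-Trees operations (mean:12, offset:X),
# applied to synthetic numbers, not real temperature data.

def running_mean(series, window=12):
    """Trailing mean over `window` samples, one value per full window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def rebaseline(series, offset):
    """Shift a whole series by a constant onto a common baseline."""
    return [x + offset for x in series]

# Two hypothetical monthly anomaly series with the same trend but
# different baselines (the second runs ~0.24 degrees "warmer"):
series_a = [0.40 + 0.01 * i for i in range(24)]
series_b = [0.64 + 0.01 * i for i in range(24)]

# After smoothing and shifting onto a common baseline, the curves agree:
a = running_mean(rebaseline(series_a, 0.0))
b = running_mean(rebaseline(series_b, -0.24))
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-9)  # True
```

    [The point of the quoted paragraph survives the sketch: the offsets only remove baseline-period differences; they do not change the trends, which is why the four analyses look so similar once aligned.]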

  • Steve Reynolds // August 25, 2008 at 11:46 pm

    Lazar: “…from a societal point of view of greater technological progress, I’d agree in the general case that they “should” release data and code where copyright and grant terms allow.”

    If I can believe what I read at CA, most ‘grant terms’ not only allow, but require sharing data and methods.

    Do you think scientists concerned about negative PR should be able to use that as an excuse to violate their grant terms?

  • David B. Benson // August 25, 2008 at 11:55 pm

    Ray Ladbury // August 25, 2008 at 10:56 pm — Nothing you’ve written is in the least controversial for actual scientists.

    That said, with ever greater use of computational experimentation, in some research specialties there is a movement towards making the computer programs available to the audience of the journal accepting the (summarizing) paper. The advantages of doing so are most unclear to me; I have enough trouble re-reading my old research codes when I need to go back to them; I certainly don’t want to have to look at someone else’s.

  • Ray Ladbury // August 26, 2008 at 1:58 am

    David Benson, When I have developed an analysis technique, I am more than happy to share it. If people call, I can help them through the steps of the analysis, but typically, unless it’s a pretty straightforward analysis, I prefer to let them reproduce it independently, with guidance only as needed. In part, this is because I know it can always be improved upon, and what they come up with might be much better. I don’t see much advantage to passing code between groups.

  • apolytongp // August 26, 2008 at 5:20 am

    Well let me fill you in, Ray. The advantage is that the algorithm is exactly described by the code, but generally NOT properly described in the paper. For instance in MBH98, the acentric standardization was NOT listed in the paper. Capisce?
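    [The standardization point can be illustrated with a toy example — my own sketch, not Mann’s actual procedure or code. Centering a trending series on a short calibration subperiod, instead of its full-period mean, inflates the squared deviations that a PCA-type step would then operate on:]

```python
# Toy illustration of short-segment ("acentric") standardization:
# centering on a subperiod mean instead of the full-period mean
# inflates the sum of squared deviations of a trending series.

def center(series, lo, hi):
    """Subtract the mean of series[lo:hi] from every element."""
    m = sum(series[lo:hi]) / (hi - lo)
    return [x - m for x in series]

def sum_sq(series):
    return sum(x * x for x in series)

n = 100
trend = [0.01 * i for i in range(n)]      # a steadily rising series

full = center(trend, 0, n)                # conventional full-period centering
short = center(trend, n - 20, n)          # centered on the last 20 points only

# The full-period mean minimizes the sum of squares, so any other
# centering inflates it -- trending series whose calibration-period
# mean sits far from their full mean get amplified.
print(sum_sq(full) < sum_sq(short))  # True
```

    [That inflation is exactly why the choice of centering period is a methods detail worth stating in a paper, not just in the code.]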

  • apolytongp // August 26, 2008 at 5:24 am

    For instance, the error bars are a mystery. The paper doesn’t give an exact algorithm or equation that allows for reproducing the lines in the figure.

  • Philippe Chantreau // August 26, 2008 at 5:27 am

    TCO, I agree with Lazar in his 10.08 post. When you see how some individuals (for lack of a better word) can torture data, it’s probably better that these data are not released to them.

    You say “opponent” but do not emphasize that the opponent must have certain qualifications and ethics in order to make data release truly productive from a scientific point of view. Ray nicely summarized why that is: real expertise is needed.

    Some “skeptics” (Watts comes to mind) have shown incompetence and bias. They do not deserve to be given data if all they’re going to do is to foster talking points with no regard for proper analysis. Even from a truly skeptical point of view (as in striving to understand reality), that will do more harm than good.

    Nincompoops who believe that they know better than anyone else and the complacent media who give them a voice have forever made data and method release an impossible dilemma for real researchers. That’s a shame.

    Hence, it is understandable, and possibly beneficial, that some are reluctant to release anything.

  • tamino // August 26, 2008 at 8:45 am

    I’m traveling today, and will be on the road all week, so moderation may be a bit slow.

  • MrPete // August 26, 2008 at 12:09 pm

    Ray sez several things, including: “The reason why ‘did they do it right?’ is not all that interesting is that the answer emerges in the process of independent replication, and by that time, any incorrect research will likely have been supplanted.”

    Several things have been noted that are quite correct in theory, but (surprisingly?) not in today’s reality. In the current compressed environment (so to speak ;)), we’re trying to act on data before anything truly independent is done, and before incorrect research is supplanted. Instead, with falsification considered boring if not offensive, we assume the validity of what was published yesterday, and build on top of that.

    Ignoring the dreck, there are multiple “deniers” who are rather well qualified to comment in their areas of expertise, which others are simply reluctant to accept. Sure, they get upset and comment outside their expertise. It’s a rare person who sticks 100% to their knitting.

    Re the stability of science. I’m amazed that you present such a…boringly stable… perspective. How big a shift does it take for you to think something quite different has been learned? Sure, the “basic” outline is known. But do we really know enough to know that action X is needed, and will help?

    Just consider how much our understanding of methane has shifted in the last few decades. Ruminant sources (1970’s?) Trees moving from carbon sequestration to possible source of GHG (2006). Oceans as methane source (2008). These are pretty significant items.

    Trees are a huge topic. How long have we “more or less known the answer?” They thought we did only a decade ago at Kyoto. Now we ask questions like: do we grow more to sequester more carbon, grow less to avoid methane release? Do we replace forests with farming to produce biofuel? Or does it even matter? Those have been, and in some cases still are, pretty good questions with the potential to not just shift our understanding by 5% somewhere, but change our actions.

    Bottom line: I think “getting it right” is still a pretty important question.

    Nincompoops, complacent media, and passionate campaigners exist everywhere. Hiding is not going to solve those problems; it’s a false dilemma to think hiding data and methods is gonna help. Sunlight exposes the truth. Both internally and externally. It’s a lesson learned well in many arenas of science, math, technology of all kinds. It’s time for this arena to join the fun.

  • apolytongp // August 26, 2008 at 1:17 pm

    If you only reveal secrets selectively, that makes me believe your claims less. A Feynman would not be scared.

  • Ray Ladbury // August 26, 2008 at 1:21 pm

    TCO/apolytongp, sorry, but the error bars need to be determined independently as well. You can argue that the reviewers should have demanded clarification of the procedure, but once it’s published, the analysis stands or falls on its own. There is no benefit in repeating an analysis that has been published previously. If there are problems reproducing the results, that comes out of the independent efforts of other groups. Science says: Do your own freakin’ analysis. Don’t rely slavishly on what was done before, since this only slows progress.

  • kevin // August 26, 2008 at 1:24 pm

    Steve Reynolds: I haven’t been reading CA, so I don’t know exactly what was said; please clarify what you mean by “sharing”: Are they saying that the terms of most research grants require that data be made *publicly* available, i.e. to anyone and everyone?

    I have no experience with climate science research grants. But in my (admittedly limited) experience with grant funded research in psychology, I’ve never seen a requirement that the raw data be made public.

    I’m not saying you definitively can’t believe what you’ve read, but I wouldn’t recommend uncritically accepting it, either. A skeptic wants evidence for claims he would like to believe as well as claims he does not want to believe, eh?

  • HankRoberts // August 26, 2008 at 5:59 pm

    > The paper doesn’t

    You know how science is done.

    Look in the journals citing that 20-year-old paper for any comments on it. Look at subsequent work by the authors. See if they have (as one would expect in the usual course) improved their methodology and presentation over time.

    Possibilities could include:

    – they persist repeatedly in doing exactly the same thing, and nobody complains about it in the journals.

    Conclusion, you’re the crank.

    – their work and presentation changed over time, along lines suggested in the journal comments.

    Conclusion, you’re not looking past the first paper. You should look for normal developments, not fix your attention on one 20-year-old paper, or you’re the crank.

    – their work and presentation changed over time, without incorporating suggestions made as comments in the journals. Commenters there continue to advise other methods or explanations.

    Conclusion, you’re at least turning in the same direction as those in the field who are particularly qualified to comment on the work, and you’re a cheerleader.

    – the authors’ later work never changes along the lines you wish their first paper had been done, and nobody except McI and CA continue to comment on the 20-year-old paper.

    Conclusion: beware being a fanboy.

    – the authors’ later work never changes along the lines you wish their first paper had been done, and nobody anywhere comments in agreement with you.

    Conclusion: you’re a genius, and an LP!
    Play on, brother!
    http://en.wikipedia.org/wiki/Eppur_Si_Muove

  • Gavin's Pussycat // August 26, 2008 at 6:17 pm

    apowhatever:

    I am cheered when I see Gavin’s Pussy (e.g.) thinking independently, giving Tammy a check every now and then.

    Poppycock. I correct Tamino’s typos as a maintenance contribution to a great resource that I use in my own teaching (currently in a summer school in Iceland, well received, thank you Tamino!)

    Lot to learn, you have.

  • carl // August 26, 2008 at 6:49 pm

    lazar - He audits the scientific community in a way that no one else does. He has consistently provided great critique of paleoclimatic reconstructions, doing all of the climate change community a service that they should already have done for themselves. McIntyre does work that no one responsible for the field will do on their own. He does work that he doesn’t have to do, and he is a tremendous help to the community. How about the people actually responsible for paleoclimate make a reconstruction without fatal errors?

  • matt // August 27, 2008 at 1:47 am

    Philippe Chantreau: TCO, I agree with Lazar in his 10.08 post. When you see how some individuals (for lack of a better word) can torture data, it’s probably better that these data are not released to them.

    Do you really believe that because some people might take information and use it for their own agenda that information should be withheld? Are you kidding me? Is this true for wars? Is this true for governments? Is this true for big corporations that are leasing the resources of this country?

    Do you really mean this?

    Could we extend it further and state that if a fact exists that might harm a cause you believe is just, it is OK to withhold or suppress that truth?

    Your statements scare the hell out of me, honestly. I hope you think about them and retract them.

  • dhogaza // August 27, 2008 at 3:36 pm

    Do you really believe that because some people might take information and use it for their own agenda that information should be withheld?

    If I knew that the recipient were going to use the data to, in essence, lie, why yes. Why cooperate with a liar?

    Could we extend it further and state that if a fact exists that might harm a cause you believe is just, it is OK to withhold or suppress that truth

    It’s not truth that annoys scientists, matt.

  • apolytongp // August 27, 2008 at 4:07 pm

    Ray,

    1. Having the exact equation (and data) is useful if one is doing a study, comparing how variants of the “algorithm-data” combination perform. It’s a CONTROL.

    2. Mann’s work was analytical and mathematical in nature. He did not gather data. He did a meta-analysis. To a great extent, his work product IS the algorithm. If he had just drawn the charts and had NO methods section, he would have never gotten published. So why justify a poor methods section?

    3. A lot of time has already been spent trying to dig out exactly what Mann did, which was wasted time. For instance, Mann does not disclose the acentric standardization.

  • Philippe Chantreau // August 27, 2008 at 4:44 pm

    Matt, please add a little more grandiloquent drama to your self-righteousness. I’m about to be moved, I swear.
    I ain’t retracting nothing so long as clowns like Watts or the buffoons of “CO2 Science”, etc are out there trying to manipulate the masses.

    You guys don’t even think about examining statistical evidence used by the pharmaceutical industry and would never propose to hold it to the same “standards” that you whine so loudly about for climate science. How about chemical applications? Do you ask for data release there? If not, why not?
    How about botched or suppressed environmental assessments? Where is the “skeptic crowd” drama to defend the scientists who want to have their data made public but can’t because officials distort or drop their work altogether? Read what Pat Neuman has to say on RC then come back with your self-righteous salad. I’ll buy it when you’re all no longer so selective in your whining.

    Climate science has much more information available out there than many or most fields. Funny you mention corporations leasing resources, when the policies for them were decided by a secret “Task Force.” Is the skeptic crowd whining for Cheney to release that info? If not, why not?

    You prove my point: you want only a certain type of info released so that you can attack the conclusions drawn from it because you don’t like them. That’s what the big outcry about Hansen’s code was about and now that it’s out, nobody talks about it any more. Could it be because there is really nothing there to talk about?
    Were you all outraged that the administration wanted to silence Hansen? I do not recall a blog post by you saying as much, but I might have missed it. Did I?
    You’re all about freedom, access and transparency, sure.

    And here I am, stating my opinion, and first thing you do is try to make me retract it. I did not propose any legislation, or general rule, or even any action, but you immediately want me to shut up. Funny.

    I’m sure you’ll have another very moving comeback to this, but I won’t engage in a blogging match with you. I have a job and a life and blogging is very low on my priority list, so don’t waste too much time trying to make me look wrong, I don’t care.

  • HankRoberts // August 27, 2008 at 5:56 pm

    > He audits the scientific community in
    > a way that no one else does.

    And no one else does it this way because, er, um, ah …. why again?

    > because some people might take
    > information and use it for their own
    > agenda that information should be
    > withheld?

    And where do you keep your data files?
    Just curious to have a look. Trust me.

  • MrPete // August 27, 2008 at 8:44 pm

    Hank, interesting that your possibilities list presumes papers accepted by insiders are correct, and critics are wrong.

    You’ve never seen an accepted conclusion eventually overturned? Twenty years is a nit.

  • george // August 28, 2008 at 12:56 am

    Truly “independent” approaches, algorithms, computer code, etc. are usually best when it comes to “replicating” a scientist’s results (not the same as “duplication”: replication uses the same data but need not use the same methods).

    Even claims of independence have to be looked at carefully.

    There is a famous case of the latter related by Richard Feynman (in “QED: The Strange Theory of Light and Matter”):

    It took two ‘independent’ groups of physicists two years to calculate this next term, and then another year to find out there was a mistake—experimenters had measured the value to be slightly different, and it looked for a while that the theory didn’t agree with experiment for the first time, but no: it was a mistake in arithmetic. How could two groups make the same mistake? It turns out that near the end of the calculation the two groups compared notes and ironed out the differences between their calculations, so they were not really independent.

    But if they had been truly independent, it is unlikely (or at least less likely) that they would have made the very same error.

    Also, it is quite possible to get the description of the “algorithm” that was used right in the methods section, but implement it wrong in the code.

    Second (and this may come as a complete surprise to some) it is also quite possible to compile the very same code on different compilers and/or with different compiler settings and get different answers when one runs the two executables!

    So, just getting someone else’s code to compile does not ensure that one has implemented their algorithm correctly (or even the way that they implemented it, correct or not).

    On the other hand, if you can’t even get Hansen’s code to compile, what does that say? (and don’t give me the lame excuse about his using FORTRAN and assuming a UNIX-like platform. If you are not familiar with those, you have no business complaining)

    Finally, there are lots and lots of cases where a simple explanation of an algorithm is much clearer than the computer code! Look in “Numerical Recipes” some time: some of their code is downright cryptic (especially their short variable names and the way they make something into its inverse without telling anyone) — but their explanations are top notch (Tamino’s explanations are on par with theirs).

    As a computer engineer, I’d have to say that the latter is the rule rather than the exception (unfortunately). I’ve looked at far more than my share of spaghetti code and I have grown to despise it with a passion.
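    [The compiler point above is easy to demonstrate with a minimal sketch — standard Python, nothing climate-specific. Even a plain sum, the simplest “algorithm” there is, can give different floating-point answers depending purely on the order of operations, which is exactly the kind of thing different compilers or optimization flags are free to change:]

```python
import math

# Two "correct" implementations of the same algorithm (summing three
# numbers) that disagree purely because of association order -- the
# sort of reordering different compilers or flags can legally do.
a = [1e16, 1.0, -1e16]

left_to_right = (a[0] + a[1]) + a[2]  # 1e16 absorbs the 1.0, then cancels: 0.0
reordered     = (a[0] + a[2]) + a[1]  # cancel the big terms first: 1.0
exact         = math.fsum(a)          # exactly rounded sum: 1.0

print(left_to_right, reordered, exact)  # 0.0 1.0 1.0
```

    [So two executables built from the same source can both be faithful to the published algorithm and still disagree in the low-order digits — one more reason a clear methods description beats raw code for genuine replication.]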

  • carl // August 28, 2008 at 1:16 am

    Philippe Chantreau -
    Wow, that’s a long explanation all to justify not releasing code that the global economy may make a significant investment on.

  • matt // August 28, 2008 at 2:48 am

    DHogaza: It’s not truth that annoys scientists, matt.

    Yes, as we’ve learned, it is accountability, oversight, and archiving data.

  • matt // August 28, 2008 at 3:24 am

    Phillipe C. I did not propose any legislation, or general rule, or even any action, but you immediately want me to shut up.

    I don’t want you to shut up at all. I just want you to recognize how heavy-handed your statement was when applied across the board.

    Federally funded data SHOULD be free (unless nat security is involved). If you create your own data, on your own dime, then by all means, keep it private if you wish.

  • Steve Reynolds // August 28, 2008 at 4:01 am

    kevin: “I haven’t been reading CA, so I don’t know exactly what was said; please clarify what you mean by “sharing”: Are they saying that the terms of most research grants require that data be made *publicly* available, i.e. to anyone and everyone?”

    This was posted at CA: http://www.climateaudit.org/?p=2237

    “The NSF agencywide policy states that researchers are “expected to share with other researchers, at no more than incremental cost and within a reasonable time, the primary data, samples, physical collections and other supporting materials created or gathered.” (Footnote 13: National Science Foundation, NSF Grant Policy Manual, Arlington, VA, 2005.)”

    Whether ‘other researchers’ includes the public, you will have to ask the NSF. However, I think it has become common practice to archive data on publicly available websites.

  • Ray Ladbury // August 28, 2008 at 1:13 pm

    apolytongp, If you are reproducing somebody else’s analysis, it is not a control, it is an undergraduate science lab. You have said yourself that all Mann did was perform a meta-analysis. What is to stop you from doing the same–from coming up with your own damned equation? Do the words “independent research” mean nothing over at CA? Oh, that’s right. They don’t publish.

    You say: “A lot of time has already been spent trying to dig out exactly what Mann did, which was wasted time. ”

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same. Unless you are doing a history of science paper, you have no business delving into somebody else’s code. And I haven’t seen the braintrust over at CA give anybody serious much reason to cooperate.

  • apolytongp // August 28, 2008 at 2:56 pm

    Philippe, I’m very much in favor of how crystallography data (in chemistry, responding to your point) is required to be put in the ICSD or JCPDF for archiving…and how reviewers will examine the data and the trial structure directly. This has paid huge dividends in clearing out poor structures, cleaning up the literature, etc.

    The rest of the stuff about evil corporations and such sort of went past me. Let’s stick to the science and best practices. Not try to dumb everything down. (BTW, while pharma science reports are known to have a marketing angle…that has been well discussed…the record keeping, statistics, double-blind practices, and controls are better than in climate science.) And they should be. Lots of money in that stuff. And it’s important (lives depend on the insights).

  • Ray Ladbury // August 28, 2008 at 5:05 pm

    Steve Reynolds,
    What is typically done with “data” is that the principal investigator gets a first shot at it for a year or two. It is then made available–often over the Web. Different fields have different policies. NASA is usually particularly accommodating. DOE high-energy physics experiments–well good luck getting any of their data–at least in a usable form.
    However, as pointed out repeatedly, Mann et al. merely did a meta-analysis on publicly available data. No group is under any compulsion to share analysis methods–and as George and I have been saying repeatedly, it’s best if research methods remain independent.

  • t_p_hamilton // August 28, 2008 at 5:48 pm

    Time for facts straight from the horse’s mouth:

    NSF Awards and Conditions July 1, 2008, side by side comparison with previous bulletin, at
    http://www.nsf.gov/pubs/policydocs/rtc/termsidebyside.pdf

    pg 36
    “(c) The Federal Government has the right to:
    (1) Obtain, reproduce, publish or otherwise use the data first produced under an
    award.
    (2) Authorize others to receive, reproduce, publish, or otherwise use such data for
    Federal purposes.
    (d) (1) In addition, in response to a Freedom of Information Act (FOIA) request for
    research data relating to published research findings produced under an award that
    were used by the Federal Government in developing an agency action that has the
    force and effect of law, the Federal awarding agency shall request, and the recipient
    shall provide, within a reasonable time, the research data so that they can be made
    available to the public through the procedures established under the FOIA.”

    I suppose there is an upside for Mann, Hansen etc. since the federal government’s policy has not been taking action on global warming. :)

    pg 37
    “(i) Research data is defined as the recorded factual material commonly accepted in
    the scientific community as necessary to validate research findings, but not any of
    the following: preliminary analyses, drafts of scientific papers, plans for future
    research, peer reviews, or communications with colleagues. This “recorded”
    material excludes physical objects (e.g., laboratory samples).”

    I can tell you that a description of an algorithm is all that is necessary in my field, not the code itself.

  • t_p_hamilton // August 28, 2008 at 5:52 pm

    “MrPete // August 27, 2008 at 8:44 pm

    Hank, interesting that your possibilities list presumes papers accepted by insiders are correct, and critics are wrong.

    You’ve never seen an accepted conclusion eventually overturned? ”

    Not the way “auditors” are going about it. That is what we are trying to get across - what WILL work!

  • apolytongp // August 28, 2008 at 7:50 pm

    Ray:

    1. If I want to study the interaction of data and algorithm (e.g. with red noise), this is an extension in parameter space. A new experiment to learn how the two interact (as expected, or something novel). Running the known data with the known algorithm and checking against the reported output is a control. The same concept applies if I want to look at alternate methods (e.g. Burger and Cubasch 05, WA, Huybers, MM, etc.) with known data. You run a control to help interpret your findings (knowing that the changes you made were in the area of the experiment you intended them to be).

  • apolytongp // August 28, 2008 at 8:09 pm

    Ray:

    I have already stated that I think CA should publish. Don’t think I’m on his “side”. When I criticize Mann for a failing, it does not excuse McI…or vice versa. However, the “lack of good publishing by CA” is no excuse for poor disclosure of methods and/or data. (It may be a mild excuse for not helping them by discussion.)

  • apolytongp // August 28, 2008 at 8:46 pm

    george:

    Good points, but my reply is not to let perfect be the enemy of good. Yes, code is not always executable or easy to follow. That does not mean that there are no insights to be gained from looking at them.

  • apolytongp // August 28, 2008 at 8:49 pm

    Ray:

    I actually agree that MBH is too picked upon, with its faults (implicitly or explicitly) extended by skeptics to larger areas. So what. That still doesn’t mean that the actual criticism of it in isolation is not valid. The problem is that both sides are so tied up in the battle of appearances, of inferences to policy, of appearing to look bad or good or hurt or winning the attack, that they fail to engage with truth the way a mathematician would/should.

  • apolytongp // August 28, 2008 at 8:57 pm

    Ray: I recommend to you chapter 13 of E. Bright Wilson’s AN INTRODUCTION TO SCIENTIFIC RESEARCH. This book is a classic, and the “motherhood” comments on clarity, methods, publishing actual data, disclosing all adjustments, etc. support my point of view.

  • apolytongp // August 28, 2008 at 8:57 pm

    http://www.amazon.com/Introduction-Scientific-Research-Bright-Wilson/dp/0486665453

  • apolytongp // August 28, 2008 at 10:16 pm

    t-p: I’m NOT SURE that that broad language would cover not sharing the code. But in any case, Mann specifically refused to share the algorithm!

  • Ray Ladbury // August 28, 2008 at 10:58 pm

    Apolytongp, If you were part of the research group that had published Mann et al. , AND you had not yet published, I would agree. A control would be fine. However, I don’t believe either of those two conditions are met. Therefore YOUR research needs to be independent of Mann’s and of everybody else who is not collaborating with you.
    By all means, Mann et al. and every other researcher need to make their methods sufficiently clear that the RESULTS can be reproduced–not the analysis. That they came up short in that respect and were still published speaks to:
    1)the novelty of the work as the first successful multi-proxy climate reconstruction on such a scale.
    2)the shortcomings of Mann et al.
    3)the shortcomings of the reviewers.

    However, the analysis was published, and it stands or falls on its own merits. There is zero benefit in helping McI et al. do their undergraduate lab experiment. Feel the pain and let it go.

  • HankRoberts // August 28, 2008 at 11:13 pm

    http://www.watoday.com.au/opinion/who-is-behind-climate-change-deniers-20080802-3ou6.html?page=1

  • David B. Benson // August 29, 2008 at 12:33 am

    Algorithms are now patentable, I believe. Furthermore, I am under the impression that the patent belongs to the university where the invention was made.

  • Lazar // August 29, 2008 at 2:11 am

    The World Avoided by the Montreal Protocol
    Geophys. Res. Lett., 35, L16811
    doi:10.1029/2008GL034590

    Without the Montreal Protocol, the effective equivalent stratospheric chlorine (EESC, combining the effects of chlorine and bromine) could, depending on the scenario chosen, have reached 9 ppbv by ~2025 [WMO, 2007] or even as early as 2002 [Prather et al., 1996] with growth rates typical of the late 1960s and early 1970s. We apply the UK Chemistry and Aerosols (UKCA) climate-chemistry model (section 2) to the problem of how climate would have responded [...]

    Column ozone decreases everywhere, with losses ranging from 5% in the tropics, through mid-latitude losses of 10–15%, to ~30% in Arctic and ~60% in Antarctic spring [...]

    The high-latitude ozone depletion would also have had a large effect on surface
    climate, with a further enhancement of the warming in the lee of the Antarctic Peninsula, similar to the observed surface temperature change, and a strengthening of the SAM. In the Arctic, the avoided ozone loss is associated
    with a warming of the Arctic Ocean and North America, with cooling over Western Europe and Siberia. These predicted changes are comparable to those expected by 2025 due to greenhouse gases [IPCC, 2007].

  • Steve Reynolds // August 29, 2008 at 2:20 am

    Ray Ladbury: “…Mann et al. merely did a meta-analysis on publicly available data.”

    My understanding is that much of that data was not public until forced to be so by McIntyre. Even now some of the data is only available in inconsistent versions, with no documentation as to which version Mann used. Also, the supposedly independent studies that have ‘confirmed’ Mann’s results used mostly the same data and methods, so are not independent.

    Ray: “… as George and I have been saying repeatedly, it’s best if research methods remains independent.”

    Whether that is best is your opinion, not some principle of the scientific method. Why not let each researcher choose what he thinks is the best method? If the process works as you say, the best methods will win out in the end.

  • Barton Paul Levenson // August 29, 2008 at 1:16 pm

    matt writes:

    DHogaza: It’s not truth that annoys scientists, matt.

    Yes, as we’ve learned it is accountability, oversight, and archiving data.

    Darn those rotten old scientists! From now on, let’s have all our science done by crackpot bloggers and right-wing talk-radio show hosts.

  • kevin // August 29, 2008 at 2:54 pm

    Note that the NSF terms posted by Steve R. and T.P. Hamilton are from 2005 and 2008, respectively. I realize that we’re having a mostly philosophical conversation about sharing data and methods, but for those who enjoy picking nits, does anyone know what the NSF official terms (or anything else that might indicate standard practice) were in, say, 1998?

    I’ll try to google it later, but I have an appointment now. BBIAB

  • Luis Dias // August 29, 2008 at 4:41 pm

    Do you really expect evolution to be falsified? Gravity?

    You can equate Climate Science to Gravity and Evolution endlessly, but still you miss the point. Just because something is “scientific”, it doesn’t follow that such something is as rock solid as “Gravity”, or I could say that Psychology is as rock solid as “Gravity”, and everyone a bit intelligent here would chuckle.

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same.

    You aren’t talking about Caspar and Amman paper of 2006, perhaps, now are you? It would be a terrible mistake on your part.

  • Luis Dias // August 29, 2008 at 4:42 pm

    Ooops, sorry meant Ammann and Wahl (2007)

  • dhogaza // August 29, 2008 at 5:42 pm

    You aren’t depending on CA as your news source as to what constitutes a terrible mistake in science, are you?

    If so, it would be a terrible mistake on your part…

  • Ray Ladbury // August 29, 2008 at 6:22 pm

    Luis Dias, Climate science is over a century and a half old. The basic forcers have been known for about a century, and the sensitivity of climate to CO2 doubling is nailed down by multiple lines of evidence. Yeah, I’d say you can probably take that to the bank.

    As to reconstructions, see:
    http://www.realclimate.org/index.php/archives/2006/07/the-missing-piece-at-the-wegman-hearing/langswitch_lang/fr

    and

    http://www.realclimate.org/index.php/archives/2007/05/the-weirdest-millennium/langswitch_lang/fr

  • apolytongp // August 29, 2008 at 6:27 pm

    Ray: Have you read the (classic) Wilson book that I referred to? Have you read NASA SP 7010?

    http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19640016507_1964016507.pdf

  • Chris O'Neill // August 29, 2008 at 6:29 pm

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same.

    You aren’t talking about Ammann and Wahl (2007), perhaps, now are you?

    Well, I wouldn’t be. I would be talking about, e.g.

    “Proxy-Based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity
    to Method, Predictor Network, Target Season, and Target Domain” by S. Rutherford et al, Journal of Climate. Methods completely different from MBH98 are now used for reconstructions. This obsession with an outdated method is astounding. The people with such an interest in it are suffering some sort of obsessive-compulsive disorder.

  • Dano // August 29, 2008 at 6:40 pm

    You aren’t talking about Caspar and Amman paper of 2006, perhaps, now are you? It would be a terrible mistake on your part.

    I like it that this is the best they can do - play ‘informed’ and dumb at the same time.

    Not a new tactic, lad; debunked and put to rest long ago, and it’s rotted enough that your attempt to make it rise, zombie-like, can’t work. Instead, you may want to try trotting out some denialist testable hypotheses, data, models, analyses, body of evidence, list of journal articles to make your case.

    Oh, wait: you can’t. No wonder you’re playing Dr Frankenstein with the rhetoric.

    Best,

    D

  • t_p_hamilton // August 29, 2008 at 7:13 pm

    apolytongp: “But in any case, Mann specifically refused to share the algorithm!” Is that true for all of Mann’s papers?

    Does the following page have code for reconstructing MBH, along with data, or not?

    http://www.cgd.ucar.edu/ccr/ammann/millennium/CODES_MBH.html

  • kevin // August 29, 2008 at 7:16 pm

    Yeah, psychology is not like gravity. In a sense, psychology is more like quantum mechanics, because many human behaviors are predictable in aggregate, or describable with a probability distribution, but accurately predicting the specific behavior of one particular human is pretty much impossible. However, if Luis Dias is snidely implying that there are not solid empirical results in psychological science, this says more about the state of his knowledge than about the state of psychology.

    Sorry for the OT, pet peeve.

  • Ray Ladbury // August 29, 2008 at 7:23 pm

    Actually, Steve, the scientific method does require the RESULTS to be confirmed INDEPENDENTLY. That means:
    1) Gather your own data.
    2) Come up with your own procedure for analyzing that data.
    3) Come up with your own quality controls and error estimations.
    4) Submit to a journal of your choice for publication and let the work stand or fall on its own merits.

    If you don’t follow this, you aren’t really doing science. Repeating somebody else’s analysis with their data is a laboratory exercise for undergrads in a science class, not real science. One exception might be if you had developed an independent algorithm, you might request to run it on someone else’s dataset. However, the emphasis there is not so much on the results as it is on the comparative effectiveness of the algorithms.

    The scientific method has this one pretty much worked out.

  • apolytongp // August 29, 2008 at 9:30 pm

    Ray: your comments at 1923 are in significant contrast to the much more classical views of Katzoff and Wilson (better and more famous researchers than you, btw, and certainly more published on how to report research). They talk a lot about how all details of data, standardization, etc. should be shared. The biggest benefit is to the efficiency of science, because even if part of a work is invalidated, other parts may thus be usable.

  • Hank Roberts // August 29, 2008 at 11:05 pm

    http://arxiv.org/abs/0808.3283v1

    !!!

    It’s not a huge effect, so ignore those who will claim this explains global warming …

    “the fractional difference between the 226Ra counting rates at perihelion and aphelion is 3 × 10−3 …”

  • Ray Ladbury // August 29, 2008 at 11:45 pm

    apolytongp, You know, it’s funny you should mention technical writing, because one of the cardinal rules I learned was that you don’t vector the reader vaguely to some voluminous tome and give no idea what you want them to get out of it. But you know what else? I’m willing to bet that there’s nothing in either of those two references that says it’s a good idea to bestow all your data and algorithms on someone who has zero record of publication in the field just because they ask for it. If there is such a directive, I’d sure like a detailed reference for it.

  • Hank Roberts // August 30, 2008 at 12:10 am

    Just a reminder, a page from the recent history:

    _____excerpt follows________

    The basic conclusion of the 1999 paper by Dr. Mann and his colleagues was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on icecaps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years.

    Based on the analyses presented in the original papers by Mann et al. (1998, 1999) and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium. However, the substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods, and because not all of the available proxies record temperature information on such short timescales. We also question some of the statistical choices made in the original papers by Dr. Mann and his colleagues. However, our reservations with some aspects of the original papers by Mann et al. should not be construed as evidence that our committee does not believe that the climate is warming, and will continue to warm, as a result of human activities.

    Large-scale surface temperature reconstructions are only one of multiple lines of evidence supporting the conclusion that climatic warming is occurring in response to human activities, and they are not the primary evidence. The scientific consensus regarding human-induced global warming would not be substantively altered if, for example, the global mean surface temperature 1,000 years ago was found to be as warm as it is today. This is because reconstructions of surface temperature do not tell us why the climate is changing. To answer that question, one would need to examine the factors, or forcings, that influence the climate system. Prior to the Industrial Revolution, the primary climate forcings were changes in volcanic activity and in the output of the Sun, but the strength of these forcings is not very well known. In contrast, the increasing concentrations of greenhouse gases in the atmosphere over the past century are consistent with both the magnitude and the geographic pattern of warming seen by thermometers.

    One significant part of the controversy on this issue is related to data access. The collection, compilation, and calibration of paleoclimatic proxy data represent a substantial investment of time and resources, often by large teams of researchers. The committee recognizes that access to research data is a complicated, discipline-dependent issue, and that access to computer models and methods is especially challenging because intellectual property rights must be considered.

    Our view is that all research benefits from full and open access to published datasets and that a clear explanation of analytical methods is mandatory. Peers should have access to the information needed to reproduce published results, so that increased confidence in the outcome of the study can be generated inside and outside the scientific community. Paleoclimate research would benefit if individual researchers, professional societies, journal editors, and funding agencies continued their efforts to ensure that existing open access practices are followed.

    So where do we go from here? ….
    ______end excerpt____________

    Answer: they recommend going forward.

    http://www7.nationalacademies.org/ocga/testimony/Surface_Temperature_Reconstructions.asp

  • Steve Reynolds // August 30, 2008 at 12:47 am

    Ray: “If you don’t follow this, you aren’t really doing science.”

    I don’t really care what you call it, if an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data, I want to know about it. I hope most scientists would have similar thoughts.

  • matt // August 30, 2008 at 1:23 am

    Ray: If you don’t follow this, you aren’t really doing science. Repeating somebody else’s analysis with their data is a laboratory exercise for undergrads in a science class, not real science.

    Yes, it’s a job for auditors :) See, the auditors don’t pretend they are engineers. They don’t pretend they are the test pilots. They won’t take your job. They won’t make you look stupid. They will help ensure a mistake isn’t made. Why is that bad?

    Every other profession has oversight and lives with auditors. Why do you detest it so?

    If it takes a year to replicate and reproduce a result, and a few weeks of scrutiny to find an uninitialized variable in 100,000 lines of simulation code that caused a mistake in the results, don’t you think that is a good investment?

    If Mann didn’t have time to run noise through his algorithm, but somebody else did, isn’t that a good thing?

    Is your goal moving the state of knowledge ahead or playing a game of “nobody can come in my fort”?

    Please, not another round of “that’s not how science is done.” Instead, please address if it’s better for the discovery time of errors to be reduced or not.

  • David B. Benson // August 30, 2008 at 1:44 am

    Hank Roberts // August 30, 2008 at 12:10 am — Also, plenty of data to show that regions in the northern hemisphere are now warmer than at any time in the past 5000+ years:

    http://www.npr.org/templates/story/story.php?storyId=914542
    http://www.physorg.com/news112982907.html
    http://news.bbc.co.uk/2/hi/science/nature/7580294.stm
    http://researchnews.osu.edu/archive/quelcoro.htm
    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtm
    http://en.wikipedia.org/wiki/%C3%96tzi_the_Iceman

  • David B. Benson // August 30, 2008 at 1:48 am

    Steve Reynolds // August 30, 2008 at 12:47 am — Almost everybody accepts orbital forcing as explaining the so-called ice ages. However Carl Wunsch has an admirable paper showing that an AR(2) process also can largely duplicate the latter half of the Vostok record after using the first half for training.

    I thought the paper was very well done, but it didn’t cause me to doubt orbital forcing. Do you understand why that is?
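For readers unfamiliar with what “an AR(2) process trained on the first half” means in practice, here is a minimal sketch in Python. The data are synthetic, not the actual Vostok record, and the coefficients 1.2 and −0.4 are arbitrary stationary values chosen for illustration; the point is only the mechanics of fitting the two lag coefficients on the first half by least squares and checking one-step predictions on the held-out second half.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic stand-in for a climate record (NOT the Vostok data):
# an AR(2) process x[t] = 1.2*x[t-1] - 0.4*x[t-2] + noise.
n = 1000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

half = n // 2
train, test = x[:half], x[half:]

# Fit AR(2) coefficients on the first half by ordinary least squares:
# x[t] ~ a1*x[t-1] + a2*x[t-2]
X = np.column_stack([train[1:-1], train[:-2]])
y = train[2:]
a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

# One-step-ahead predictions on the held-out second half
pred = a1 * test[1:-1] + a2 * test[:-2]
resid = test[2:] - pred
print(f"fitted a1={a1:.2f}, a2={a2:.2f}, residual std={resid.std():.2f}")
```

A fit like this “largely duplicating” a record tells you the record’s short-term persistence is AR(2)-like, not that the physical forcing hypothesis is wrong, which is the distinction David is driving at.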

  • Ray Ladbury // August 30, 2008 at 2:00 am

    Steve, if a paper is in error, that will come out as others try to replicate the results. What is more, rather than merely showing that the one result is wrong, another researcher may figure out how to do it right. And as has been pointed out ad nauseam (no, this is not an exaggeration), there have been many subsequent reconstructions that show similar results. None strongly contradicts MBH98. So, other than for the history of science or a high school science project, the fixation of the denialists on this one result is puzzling. See:
    http://www.realclimate.org/index.php/archives/2005/02/dummies-guide-to-the-latest-hockey-stick-controversy/langswitch_lang/sk

    http://www.realclimate.org/index.php/archives/2005/01/on-yet-another-false-claim-by-mcintyre-and-mckitrick/langswitch_lang/sk

    and

    http://www.realclimate.org/index.php/archives/2007/05/the-weirdest-millennium/langswitch_lang/sk

  • Chris O'Neill // August 30, 2008 at 2:12 am

    Steve Reynolds:

    if an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data

    Absolute garbage if you’re talking about MBH98. MBH98’s uncentered method generates a very small hockeystick bias, less than about 0.1 degree C. (Such bias does not exist in up-to-date methods.) This does not amount to “duplicated using random noise”. The real question is why does Steve Reynolds so gullibly believe this.
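The short-centering issue Chris refers to can be illustrated with a toy experiment. This is a minimal sketch, not MBH98’s actual code; the series count, record length, calibration window, and AR(1) coefficient are all made-up values. The idea: centering red-noise proxies on only a final “calibration” segment, rather than the full period, preferentially loads the first principal component with series that happen to drift in that segment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 50 red-noise "proxies", 600 "years", 100-year
# calibration segment at the end (none of this is the real MBH98 setup).
n_series, n_years, calib = 50, 600, 100

proxies = np.zeros((n_years, n_series))
for t in range(1, n_years):
    # AR(1) red noise with persistence 0.7 in each column
    proxies[t] = 0.7 * proxies[t - 1] + rng.standard_normal(n_series)

def pc1(data, mean):
    """First principal component after removing the supplied column means."""
    u, s, _ = np.linalg.svd(data - mean, full_matrices=False)
    return u[:, 0] * s[0]

full_pc = pc1(proxies, proxies.mean(axis=0))            # conventional centering
short_pc = pc1(proxies, proxies[-calib:].mean(axis=0))  # "short" centering

def hockey_stick_index(pc):
    """Offset of the calibration-period mean from the long-term mean,
    in units of the series' standard deviation."""
    return abs(pc[-calib:].mean() - pc.mean()) / pc.std()

print("full centering: ", hockey_stick_index(full_pc))
print("short centering:", hockey_stick_index(short_pc))
```

Whether the resulting shape bias is large or trivial in temperature units, which is the actual point of contention, depends on how the PC feeds into the reconstruction, which is exactly why the magnitude Chris cites matters more than the mere existence of the effect.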

  • Ray Ladbury // August 30, 2008 at 2:19 am

    Apolytongp says, “Ray: your comments at 1923 are in significant contrast to the much more classical views of Katzoff and Wilson …”
    Put your money where your mouth is–produce a quote where any serious scientist advocates nominally independent research groups sharing experimental setups, code, analysis and data. This is a recipe for propagation of systematic error. Now, add to that the fact that those asking for the special access have no publication record in the field, and I think it would be astounding if you could find a serious scientist to support you. Sorry, dude, you’ll have to do a whole lot better than vague references and appeals to authority.

  • Hank Roberts // August 30, 2008 at 2:44 am

    Sigh.

    Might as well create bingo boards assigning numbers to the stock criticisms and the regular few posters about the Hansen paper, because the same people will keep reposting the same talking points as long as there’s an open topic anywhere in the world about climate, eh?

  • apolytongp // August 30, 2008 at 5:05 am

    I didn’t send you to a tome, Ray. I sent you to chapter 13 (of Wilson). That is the chapter on reporting results. Within that, I direct you to the sections on data and methods. (My remarks already refer to this, but if you need explicit, there you go.)

    FYI, Wilson is very famous for discoveries in vibrational spectroscopy and is also the father of a Nobel Prize winner. Although you’re seemingly not familiar with his book, it is considered a classic. It is available in Dover paperback (and by Interlibrary Loan, of course).

    If you don’t know the Katzoff memo, I’m surprised. It’s a classic NASA document. FYI, it is 30 pages long. And widely regarded as a little gem. Not a tome. And I LINKED TO IT!

    ——————————-

    With that further explanation, to respond to your request for easier info, please let me know when you’ve read them and what you take away from them as regards this argument, my outlook, etc.

  • dhogaza // August 30, 2008 at 3:04 pm

    Denialists will still be arguing about MBH in 2100 as they sit sweltering lap-deep in seawater in downtown Manhattan …

  • Gavin's Pussycat // August 30, 2008 at 3:05 pm

    if an analysis can show a
    supposedly important paper’s major result can be duplicated using
    random noise for the data, I want to know about it.

    If it were true, I would want to know about it too!

    …but it isn’t, and we both know that, don’t we Stevie?

    Thanks for moving the goalposts again. It’s so revealing.

  • dhogaza // August 30, 2008 at 3:06 pm

    And, meanwhile, the latest from NSIDC makes it look like 2007’s minimum ice extent record might be safe after all. Buckle your seatbelts, folks, it’s going to be a long winter of our being bombarded by denialists claiming that the fact that 2008 didn’t set a new record means that global cooling is continuing and a new ice age is upon us.

  • george // August 30, 2008 at 6:56 pm

    While I certainly agree that it is best if scientists make their methods clear enough so that someone “skilled in the art” (borrowed from the patent lingo) can repeat their work if they so desire, there is no hard and fast “rule” that says one has to do this.

    I think the scientific community is the best judge of a paper’s merits or lack thereof. If other scientists think you have not demonstrated what you claim, make no mistake, they WILL tell you — if not in person at a conference, then in a journal.

    Scientists who make claims without backing them up are usually not the ones to get the credit for the claims. A very famous example of this is Newton’s law of gravitation. Before Newton published his Principia, Robert Hooke actually surmised that gravity obeyed an inverse square law, but Hooke was not able to show how this would lead to Kepler’s laws and Newton was. The rest, of course, is history.

    All this debate about making data and code available to anyone and everyone is simply silly. Science has never worked that way and probably never will. It makes no sense whatsoever to share data and code with people who have no clue what it means.

    Please explain to me how it is productive to share one’s data and code with the likes of James Inhofe. You can’t, of course, because it is not productive in the least. It’s a total waste of time.

    Unfortunately, the data/code-sharing “debate” has been “framed” from the get-go by those who would have us all believe that

    1) significant numbers of scientists within the climate science community are not releasing their data and/or code to other scientists

    2) those scientists who are not releasing data to every Tom, Dick and Harry who requests it are somehow dishonest, unscientific, have something to hide and/or are perpetrating fraud on the general public (If you selected “all of the above”, you win the “True Skeptic” Award)

    Personally, I feel it is pretty much a waste of time to even argue with people who have framed the debate in such a way.

  • apolytongp // August 31, 2008 at 1:08 am

    Wilson’s book was written in the 50s and even then argues clearly for a practice of archiving details to centralized archives, which already existed then. He also argues for detailed exposition of all aspects of new approaches and standardizations.

    The whole Science/Nature puff piece phenomenon (like a press release, almost) is an abomination. Of course, solid, solid work should be done in the specialist literature to back up “fast breaking news”. But what happens is ego scientists and young Turks don’t bother.

  • apolytongp // August 31, 2008 at 1:35 am

    Chris:

    I AGREE with your point on the impact of the acentricity. Actually I haven’t done the math, but what I agree with is that Steve McI has been dishonest in mushing different issues together and trying to have his “centerpiece” (the undocumented and, according to even many on Mann’s side though not Mike, WRONG acentric standardization) take the load of several other method choices. In contrast, I find that Burger and Cubasch’s approach (a full factorial of method decisions), or Zorita’s, or Huybers’, or Wahl and Ammann’s way of taking things apart and recording the impact, is generally better.

    As far as the concentration on picking on that one paper, I think the defenders are a bit off here as well. If there are faults with the paper, it should be irrelevant what other work has been done in the science. We should be able to judge it on its own as an algorithm. As a method. Mann has been defensive and distasteful when people tried to peel the onion and judge the equation.

  • Ray Ladbury // August 31, 2008 at 2:37 am

    So, apolytongp, how many scientific publications do you have?
    I have asked you for a quote that supports your contention that data, code, etc. should be shared among independent research groups. You have not provided one. You know you can’t, because that is contrary to good scientific practice and besides it is a recipe for propagation of systematic errors.
    I have no objection to archiving code and data–that’s fine. My objection is sharing the SAME code and SAME data with outside groups. Sharing data to be combined with other data into a meta-analysis is OK. Sharing code is not.
    For my PhD research in experimental particle physics, we always had two independent groups looking for the same particles. You could discuss the research, apply the same criteria on the data, even look over each other’s shoulders, but you never shared analysis code beyond a subroutine to do fitting or the like. If a code is sufficiently complicated, it will have errors, and neither you nor anyone else will find them. Share the code and they propagate. I don’t see why you don’t understand this.

  • Steve Reynolds // August 31, 2008 at 2:44 am

    Chris O’Neill: “MBH98’s uncentered method generates a very small hockeystick bias, less than about 0.1 degree C. (Such bias does not exist in up-to-date methods.)”

    Chris, if you want me to take what you say as something more than just your opinion, providing a useful link that I can check is necessary.

  • apolytongp // August 31, 2008 at 3:14 am

    Ray:

    1. In all seriousness, the scientist I refer to is E. Bright Wilson; his noted work, chapter 13, the section on “data” and the section on “method” (a couple pages total). I’ll bet your local library has a copy. If not, have the clerk ILL it.

    Read that and see if (or how much) it backs me up. I’m honestly interested in your reaction.

    2. 10 publications.

  • Steve Reynolds // August 31, 2008 at 3:17 am

    Ray: “…there have been many subsequent reconstructions that show similar results. None strongly contradicts MBH98.”

    You are very concerned about independence. How independent were these?
    Also, your links don’t have much to say about error bars. Is there much reason to think anything would be contradicted, given as much uncertainty as probably exists?

  • Chris O'Neill // August 31, 2008 at 3:28 am

    If there are faults with the paper, it should be irrelevant what other work has been done in the science.

    Sure, the existence of later, correct, papers doesn’t change the existence of faults in an earlier paper. My interest is in the results from correct papers. The papers without the earlier faults say there is a hockeystick. I don’t see any challenge to their methods.

    We should be able to judge it on its own as an algorithm.

    I’m sorry, but I’m just not that interested in papers that don’t use the best methods available.

  • Chris O'Neill // August 31, 2008 at 4:17 am

    Steve Reynolds:

    if you want me to take what you say as something more than just your opinion, providing a useful link that I can check is necessary

    You’re welcome to find the bias in, for example, the REGEM method referred to in “Proxy-Based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity to Method, Predictor Network, Target Season, and Target Domain” by S. Rutherford et al., Journal of Climate. McIntyre hasn’t found the bias; perhaps you can.

    BTW, I notice that your claim:

    an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data

    has vanished from sight.

  • apolytongp // August 31, 2008 at 12:13 pm

    Chris, I actually find that paper fascinating because of the level of complexity of the algorithm. Thought Burger and Cubasch’s full factorial was genius.

  • Gavin's Pussycat // August 31, 2008 at 12:41 pm

    Heck, Ray, to underscore your argument, you’re not even safe sharing compilers…

    http://n2.nabble.com/Re:-Polar-stereographic,different-values-on-different-platforms–td740713.html
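    To make GP’s point concrete: the usual culprit behind “different values on different platforms” is floating-point arithmetic. Mathematically equivalent orderings give different bits, so a compiler that reorders a sum, or keeps intermediates at a different precision, can legitimately change results. A tiny illustration:

```python
# Floating-point addition is not associative: the two groupings below
# are mathematically identical but differ in the last bit, which is
# exactly the kind of reordering an optimizing compiler may perform.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)

print(left == right)  # → False
print(left, right)
```

    Neither answer is "wrong"; both are correctly rounded for their order of operations. That is why bit-identical reproduction across platforms is a much stronger demand than scientific reproduction of a result.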

  • Chris O'Neill // August 31, 2008 at 2:20 pm

    apolytongp:

    I actually find that paper fascinating because of the level of complexity of the algorithm.

    Yes, but it’s amazing how much interest a paper of purely academic and historical significance attracts.

    Thought Burger and Cubasch’s full factorial was genius.

    Maybe, but only of academic interest. It didn’t include an up-to-date method.

  • Ray Ladbury // August 31, 2008 at 2:44 pm

    apolytongp, Frankly, reading what Wilson has to say is pretty low on my list of things to do. I’m all for transparency. I’m not for sharing data and code. When folks call me and ask about an analysis method in one of my papers, I am more than happy to work through the method with them until they understand it. I will not give them my code, because I don’t have 100% confidence that it is error free. If they had trouble reproducing my result, I would go through my code again to look for errors.

    I would be more than happy to meet you half way. If you quote the (short) passages that you believe support sharing of code, I’ll tell you if I agree with that interpretation. However, even with an authority like Wilson, it will not change my opinion, as I’ve seen what happens if you remove the firewall between nominally independent research teams.

  • Ray Ladbury // August 31, 2008 at 2:47 pm

    Steve Reynolds,
    I’m not an expert on all the paleoclimatic reconstructions. However, based on my knowledge, the algorithms are independent. Some of the data are also independent, but not all. If there were a bias in the data, it could contaminate multiple, but probably not all reconstructions. There is no reason to expect such a bias in the data.

  • apolytongp // August 31, 2008 at 6:16 pm

    Chris:

    How is “up to date”-ness something that is so special? If you cited something special about the REGEM method in terms of its performance on noise, in terms of significance tests, in terms of where it works well and doesn’t work well (for instance in a field like biology or sociology), or if you cited theoretical stats methods, all those things would turn me on. It’s like saying someone has come up with a new method of TEM structure solution for complicated crystals and then citing it for a complicated, tricky and debated structure. I really want to see how it does on known cases first.

    What does it mean (in a Bayesian estimation sense) when we read that something can only be detected with very special methods of analysis? Also, the interesting thing about Burger and Cubasch was showing all the switches and how much they change the answer. It seems like the opposite of robust. Seems like something where a very particular method is needed. Plus it seems a lot more encompassing, and even just better stated, than the Rutherford paper.

    None of this is to say that Rutherford’s method is bad, etc. I don’t know enough to judge that. It might be right for all I know. But I know what I would need to check to feel better about it. And it wouldn’t be “newness”.

  • apolytongp // August 31, 2008 at 6:17 pm

    Ray: Understood. No hard feelings.

  • Phil B. // August 31, 2008 at 7:14 pm

    I have been following the proxy-based temperature reconstructions since 2000. The elephant in the room is the proxy linearity and stationarity assumption. Mathematically, P(t) = a + b*T(t) + n(t), where a and b are constants for 1000 years or so, T(t) is annualized temperature and n(t) is noise. This is an extraordinary assumption for tree rings, and I haven’t seen any papers that demonstrate that it is a valid assumption.
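    For readers who want to poke at Phil B.’s model, here is a minimal sketch of that assumption with entirely made-up numbers: generate a temperature series, build a proxy with fixed a and b plus noise, calibrate on the instrumental window as the reconstructions do, and invert the fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "true" temperature anomaly series over a millennium
years = np.arange(1000, 2000)
T = 0.0005 * (years - 1000) + 0.2 * rng.standard_normal(years.size)

# The linearity/stationarity assumption: P(t) = a + b*T(t) + n(t),
# with a and b held fixed for the whole 1000 years.
a_true, b_true = 1.5, 2.0
P = a_true + b_true * T + 0.2 * rng.standard_normal(years.size)

# Calibrate a and b on the instrumental window only (1902 onward),
# then invert the fit to "reconstruct" temperature from the proxy.
cal = years >= 1902
b_hat, a_hat = np.polyfit(T[cal], P[cal], 1)
T_rec = (P - a_hat) / b_hat

# If (and only if) a and b really are constant, the reconstruction
# tracks T outside the calibration window too.
print(round(float(b_hat), 2), round(float(a_hat), 2))
```

    The worry Phil B. raises is exactly the "if and only if" in the last comment: if b drifts over the millennium (nonstationarity) or the response saturates (nonlinearity), the calibration-window fit no longer holds in the pre-instrumental period, and nothing in the fit itself warns you.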

  • Barton Paul Levenson // August 31, 2008 at 10:52 pm

    apolytongp writes:

    Mann has been defensive and distasteful when people tried to peel the onion and judge the equation.

    AGW deniers have been distasteful when they tried to subpoena all of Mann’s paperwork. I’m much more afraid of congressmen and senators attacking scientists than I am of scientists getting something wrong. A scientist getting something wrong is usually caught right away; cf cold fusion. But congress going after people can take years to put right; cf Joe McCarthy and HUAC.

  • Chris O'Neill // September 1, 2008 at 2:22 am

    apolytongp:

    How is “up to date”ness something that is so special?

    It’s very, very special if the results matter and previous methods had shortcomings.

    It seems like the opposite of robust.

    Anyone can choose a set of methods that have a variety of shortcomings. REGEM is nothing like any of the methods considered by Burger and Cubasch.

    None of this is to say that Rutherford’s method is bad, etc.

    There’s a very good reason why there’s so little interest in Rutherford et al’s paper and so much interest in MBH98 among non-climate scientists. Promoting knowledge of Rutherford would blunt the global warming denialists’ message that there is something wrong with reconstructions because of a defect in the method, since they would no longer have a defect to talk about.

  • Gavin's Pussycat // September 1, 2008 at 8:19 am

    BTW found this gem:

    The Lunar Conspiracy.

  • Gavin's Pussycat // September 1, 2008 at 8:28 am

    Phil B., I remember something was said about that in Rutherford et al.

    Theoretically you expect any relationship to be approximately linear for small variations, which is the case here. Empirically, what you do is use other proxies (corals, ice cores, …) besides tree rings (and different types of tree ring data, species, growth location and conditions, …)

    If all that turns out to be reasonably consistent, then I don’t see what your problem is.
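    For what it’s worth, the “approximately linear for small variations” point is just a first-order Taylor expansion of an arbitrary proxy response f about a reference temperature T_0, which lands exactly on the P(t) = a + b*T(t) + n(t) form written out above:

```latex
P(t) = f\bigl(T(t)\bigr) + n(t)
     \approx f(T_0) + f'(T_0)\,\bigl(T(t) - T_0\bigr) + n(t)
     = a + b\,T(t) + n(t),
\qquad a = f(T_0) - f'(T_0)\,T_0, \quad b = f'(T_0).
```

    The approximation is only as good as the smallness of T(t) − T_0, which is why the empirical cross-checks against other proxy types matter.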

  • MrPete // September 1, 2008 at 2:41 pm

    Stopped in for a five minute break [heading down to New Orleans soon to help out...]

    Ray, you said
    If a code is sufficiently complicated, it will have errors, and neither you nor anyone else will find them. Share the code and they propagate. I don’t see why you don’t understand this.

    I tend to agree with you about shared code leading to propagated error… if the purpose of sharing is reuse without QA.

    However, nobody has shown evidence that software QA is happening intra-group, let alone inter-group.

    This is where open-source science can benefit, just as open source software benefits.

    Yes, a common bug will propagate across every email server or domain name server or whatever. However, more eyes find more bugs and squash them much earlier.

    When we’re talking software development — and that’s exactly what this is — there’s plenty of history to say that keeping code hidden does not benefit.

    Best analogy I can think of here is security software. Security by obscurity is not robustly secure. Statistical analysis by obscurity is not robust either.

    OK, five minutes up. Back to packing and prep… :)

  • Hank Roberts // September 2, 2008 at 12:04 am

    For those who may have followed this Google result and not gotten the file, it’s moved.

    Science and politics of global climate change: North on the hockey stick, Sep 4, 2006 … Last week he gave an interesting seminar to our department …

    Description still at the old web page:
    sciencepoliticsclimatechange.blogspot.com/2006/09/north-on-hockey-stick.html

    But if you click the link you get
    File not found:
    http://www.met.tamu.edu/people/faculty/dessler/NorthH264.mp4

    hello. the link is now http://geotest.tamu.edu/userfiles/216/NorthH264.mp4

    unfortunately, the old link is no longer available. spread the word

    (hat tip and thanks for the reply in email from Andrew Dessler)

  • Ray Ladbury // September 2, 2008 at 1:00 pm

    MrPete,
    Good luck in the Gulf states. The thing you seem to neglect about scientific code is its short shelf-life. Scientific coding is usually pretty close to single-use. It is specifically constructed to perform a single analysis, and once that analysis is performed, it gets shelved. Individual modules of the code may be resurrected in new analyses, but these will generally be general-purpose (e.g. sorting, fitting, FT, …) algorithms. Even if the same group were to look at the same data again, they’d likely use a different algorithm, for the simple reason that the analysts will have learned from their previous analysis.

  • P. Lewis // September 2, 2008 at 1:26 pm

    Tee hee. The following should lead to some fun for the next 10 years or so:

    “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia” by Mann, Zhang, Hughes, Bradley, Miller, Rutherford and Ni, @ PNAS (not up as of yet, but see my post @ Deltoid).

    Can we expect more Congressional appearances?

  • P. Lewis // September 2, 2008 at 10:48 pm

    The Mann et al 2008 paper is now available on open access (and the supporting info) at PNAS.

  • David B. Benson // September 3, 2008 at 12:49 am

    “Era of Scientific Secrecy Near End”

    http://www.livescience.com/culture/080902-open-science.html

    discusses pros and cons of ‘open science’.

  • dhogaza // September 3, 2008 at 3:35 am

    Watts, unbelievably, has an Al Gore is fat post up on his blog.

    Oh, wait, I think I meant to say “believably” …

  • apolytongp // September 3, 2008 at 4:14 pm

    E. Bright Wilson, Jr. AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955(!)

    Chapter 13 “Reporting the Results of Research” (section 13.4 “Text”)

    “The Method”: …If new procedures or new variants of old procedures have been employed, these should be described. Ideally, sufficient detail should be given to enable a research worker on another continent to duplicate the method. This may involve detail best relegated to an appendix or in extreme cases to a supplemental report in one of the documentation centers…It is important not only that others be able to duplicate the procedures but also that it be made possible for critics to judge the validity and future readers to correct the results in the light of later discoveries. This means that sources of materials, methods of purification, information on possibly relevant materials, etc. should be given. The standards used for various measurements are particularly important.

    “The Data”: It is vital to publish the actual data on which conclusions are based…Primary measurements should be published and not merely derived quantities. Many magnetic susceptibility data have been published in terms of Weiss magnetons instead of in the units in which they were actually measured. This is an outmoded theoretical concept whose disappearance affects a good number of perfectly good experimental papers. It is worth remembering that good data can easily outlast many successive theories. The data should be presented in their rawest form so that later theorists can use them. If it is impractical to do this, the treatment to which the data have been subjected should be so clearly and completely specified that the original values can be recovered by later readers if needed.

    …the manuscript should be preserved and annotated to show the notebook references…it should be possible at any later date to go backward from the published conclusions all the way to the original notebook entries, experimental photographs, and records. Any processing given to the data should also be available and indexed.

    “Equations”:

    …Sufficient detail (of derivation, TCO) should be given to enable a reader for whom the article is intended to follow the steps himself…one should be conservative in interpreting the word “obvious”…

    …Mathematical papers without misprints and errors are the exception rather than the rule…

  • apolytongp // September 3, 2008 at 6:55 pm

    Tamino:

    I’m trying to think of a post to make that will mix some of the Palin gun-moll Earth Mother fertility meme in. Something that will be sufficiently taunting as to satisfy my desire to tweak blue staters. But still get past your censor. And somehow tie in the climate stuff at least for cover. Any suggestions on how I do that?

    [Response: I'd rather you didn't]

  • Lost and Confused // September 3, 2008 at 10:00 pm

    P. Lewis, an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

  • ChuckG // September 4, 2008 at 12:38 am

    More detailed Pat Frank (pseudo?) science - Gavin math lesson @

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/langswitch_lang/bg#comment-97209

    Sure would like to see comments over here (to keep the decks clear for further combat over there in case it materializes) by those whose math skills are much greater than mine.

  • Ray Ladbury // September 4, 2008 at 1:27 am

    Apolytongp, Your quote of Wilson in no way suggests sharing of code or data–merely publication of sufficient detail. For instance, it simply will not be practical to publish raw data from the experiments at the LHC, which will generate terabytes of data. Likewise, the analysis will be described, but the code will likely remain internal–as it should.

  • dhogaza // September 4, 2008 at 3:29 am

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

    Lost and Confused learns that science marches forward, while McIntyre is convinced that he can overturn the results of thousands of climate science papers by proving that the BCP analysis is dodgy.

    I’m *sure* that L&C thinks that science, marching forward, showing that BCP reconstructions aren’t necessary, is UNFAIR! ANTI-DEMOCRACY! ANTI-CRETIN-DENIALISM!

    L&C: Get a friggin’ grip.

  • Paul Middents // September 4, 2008 at 3:30 am

    ChuckG alerts us to a train wreck that just won’t stop. I still think the rascally rabett is the one to chronicle and comment on this. Gavin is even asking for an intervention.

  • Dano // September 4, 2008 at 5:15 am

    an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    L & C as Dr Frankenstein, desperately trying to resurrect the long-dead argument. If only to justify their chosen self-identity, or maybe self-relevance…

    Thanks for the laugh. Your mommy is calling you to brush your teeth and go to bed.

    Best,

    D

  • apolytongp // September 4, 2008 at 5:55 am

    Ray: But Mann refused (at first) to share his algorithm. And his publication did not disclose parts of the procedure. Wilson addresses the issue of large bodies of information by deferring to archives.

    P.S. There are fundamental issues in the Wilson discussion. I don’t see you addressing them. Just defending wording. I think this will be my last. It is too tedious to engage and refrain from putdowns.

  • apolytongp // September 4, 2008 at 5:56 am

    Feel free to take the last word though. Serious. No hard feelings.

  • Barton Paul Levenson // September 4, 2008 at 11:10 am

    apolyton posts:

    I’m trying to think of a post to make that will mix some of the Palin gun-moll Earth Mother fertility meme in.

    It’s certainly relevant that Governor Palin doesn’t believe global warming is manmade and that creationism should be taught alongside evolution in public school biology classes. I, for one, would not care to have a scientific illiterate in charge of the world’s foremost nation when it comes to science. Just one more reason I’m not voting GOP this year. Or any year.

  • P. Lewis // September 4, 2008 at 12:03 pm

    Lost and Confused wrote:

    P. Lewis, an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

    Why is that an interesting aspect?

  • Ray Ladbury // September 4, 2008 at 12:19 pm

    apolytongp, Maybe I’ll start a conspiracy blog suggesting that the reason GWB hasn’t been at the RNC is because he’s been undergoing hormone replacement therapy and in reality he IS Sarah Palin. You can post your inflammatory rhetoric over there.

  • Ray Ladbury // September 4, 2008 at 12:45 pm

    apolytongp, might I suggest that Congressional subpoenas are not the best way to facilitate scientific openness. I fully agree that Mann et al. could have handled the situation more adeptly–both in terms of politics and in terms of some of his analysis. However, the level of personal attacks and invective heaped upon him after MBH98 was bound to generate a siege mentality. The fact of the matter is that MBH98 is of interest now only for historical reasons–it was the first successful multiproxy study with such an ambitious scale. Like many pioneering studies, it had its flaws, and these flaws were addressed in subsequent efforts–which largely verified the results of MBH98.
    Note that Wilson says: “Ideally, sufficient detail should be given to enable a research worker on another continent to duplicate the method.”
    That does not mean releasing the algorithm. Indeed, I would take issue with Wilson’s contention that the goal is duplication of the method. The goal is verification of the results by a sufficiently similar method. Researchers and reviewers may also disagree about how much detail is actually needed. However, you need to realize that the people seeking to reproduce your results are your rivals, not your friends. Scientific openness does not require sharing code, and it certainly does not require “audits”.

  • Lost and Confused // September 4, 2008 at 1:41 pm

    P. Lewis, I apologize for that. My comment assumes a certain degree of background knowledge, which was wrong of me to do. The reason that comment is interesting is MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became an issue of a fairly large amount of controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.”

  • Lost and Confused // September 4, 2008 at 1:48 pm

    Ray Ladbury, I have to disagree when you say, “The fact of the matter is that MBH98 is of interest now only for historical reasons…” Certain aspects of MBH98 have been reused in a number of other papers in the last decade.

    An example of this is the MBH98 PC1. It has been reused in a number of papers. If one decides the MBH98 PC methodology was flawed, more papers than just MBH98 would be affected. Clearly the MBH98 is still important.

  • t_p_hamilton // September 4, 2008 at 2:42 pm

    “Apolytongp, Your quote of Wilson in no way suggests sharing of code or data–merely publication of sufficient detail. For instance, it simply will not be practical to publish raw data from the experiments at the LHC, which will generate terabytes of data. Likewise, the analysis will be described, but the code will likely remain internal–as it should.”

    But, but, E. BRIGHT WILSON! E. BRIGHT WILSON!

    And then the noise machine putters off into the sunset, thinking he/she has made some point. Wilson was saying nothing more than what we all know is the ideal. Papers are written for a certain audience, and knowledge is assumed on the part of the readership. If it turns out that the information given in the paper is not adequate to figure out what Mann etc. did (and the reviewers did not specify more details - hey it happens), then his colleagues will ask him politely for those details or else do it their own way, and either get results that confirm or deny the original paper. This is NOTHING out of the ordinary for a scientific paper. Note that this sequence of events is even better than “auditing”, which actually accomplishes nothing except PR for an agenda trying to cast doubts on the conclusions.

  • Hank Roberts // September 4, 2008 at 3:13 pm

    What scientific research needs these days is a regular FAQ for each paper — so the script kiddies who copypaste questions can be pointed to answers without giving them the pleasure of wasting the researchers’ time and clogging discussions with same old same old stuff.

  • Chris O'Neill // September 4, 2008 at 3:49 pm

    apolytongp:

    E. Bright Wilson, Jr. AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955

    That’s nice. BTW, let us know if MBH98 ever regains any practical significance.

  • Ray Ladbury // September 4, 2008 at 3:59 pm

    Lost and Confused, Re: PC1 in MBH98, see:

    http://www.realclimate.org/index.php/archives/2005/02/dummies-guide-to-the-latest-hockey-stick-controversy/langswitch_lang/in

    This has been discussed ad nauseam. The fact is that the current methods are more skillful, more robust and STILL show the same thing–namely: It’s freakin’ hot out there!

  • Chris O'Neill // September 4, 2008 at 4:22 pm

    Certain yet Lost and Confused:

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.”

    As has been pointed out to you, you are basing your certainty that Mann lied on your disputed interpretation of his words. This is blatantly dishonest.

  • Trying_to_make_sense // September 4, 2008 at 4:57 pm

    \\Lost and Confused 1:41 PM

    Although the attacks on MBH98 kept changing, I thought the BCP attack was that if you remove bristlecone pines the hockey stick disappears. The statement now is that you can remove all tree ring proxies and the result remains. I don’t see how this statement says the attack was correct. I am under the impression that MBH98 was based on many tree ring proxies (including BCP), so of course if you remove all tree ring proxies MBH98 cannot be replicated. Am I missing something here?

  • Dano // September 4, 2008 at 6:18 pm

    What’s good is that all of the Dr Frankensteinian reviving of long-dead arguments is not going on in the offices of decision-makers. It is only going on in comment threads. By folks who should reply ‘answered over and over years ago. We’ve moved on.’

    The world’s societies are discussing how to adapt and mitigate, not whether a first paper should be perfect in the minds of ideologues ardently holding down a scruffy fort located in the far reaches of the denialist fringe.

  • Gavin's Pussicat // September 4, 2008 at 6:33 pm

    Libelous and Clintonian, did you notice this in the press release:

    Results of this study without tree-ring data show that for the Northern Hemisphere, the last 10 years are likely unusually warm for not just the past 1,000 as reported in the 1990s paper and others, but for at least another 300 years going back to about A.D. 700 without using tree-ring data.

    Mann is referring to A.D. 700 to now, with not just BCP removed but all tree rings. In 1998/1999 they barely made A.D. 1000, with tree rings. Apples and oranges.

    The reason that comment is interesting is MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became an issue of a fairly large amount of controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Again, apples and oranges. It was never in question that removing all tree ring data made the 1998 (and 1999) reconstruction next to worthless, at least the interesting early part. But removing contentious proxies like BCP did not, as has been demonstrated to the hilt for those impressionable by factual evidence.

    As we say out here, you’re reading the press release like the Devil reads the Bible :-)

  • Lost and Confused // September 4, 2008 at 8:31 pm

    Ray Ladbury, that link is completely irrelevant to your point. The validity of your point is not tied to the validity of the criticisms of MBH PC methodology. The issue was whether MBH was of interest for more than “historical reasons” and clearly it is.

    Chris O’Neill, if you read my post again you should see I made no accusations or even comments regarding whether or not Mann lied. I am attempting to avoid any discussion of people or motives now, and I would appreciate it if you would not misrepresent my posts.

    Trying_to_make_sense, you are largely correct as to what the criticisms said. As you point out, the statement in the press release says now that criticism would be untrue. However, the statement does say a decade ago removal of tree ring proxies was not possible. This resolves a controversy where people had said you could remove all tree rings proxies and still get the same result. I had never heard anyone raise the point you raise here (that some tree ring proxies could be removed, but not all), so I had not considered it when making that post. I always heard the defenses raised refer to all tree rings, similar to the portion quoted from MBH.

  • David B. Benson // September 4, 2008 at 9:13 pm

    apolytongp // September 3, 2008 at 4:14 pm wrote “…Mathematical papers without misprints and errors are the exception rather than the rule…” and I assume he was quoting from E. Bright Wilson, Jr., AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955.

    This is false now and it was false then, at least regarding mathematical papers written by mathematicians, physicists and astronomers.

    Probably chemists, too, but I don’t read chemistry much.

  • t_p_hamilton // September 4, 2008 at 10:27 pm

    Lost and Confused: “The issue was whether MBH was of interest for more than “historical reasons” and clearly it is.”

    Since subsequent papers have been published with more data, clearly presented supplementary information, and numerous statistical methods, with resulting HIGHER RESOLUTION, why would the first paper be of anything but historical interest?

  • Hank Roberts // September 5, 2008 at 3:20 am

    “… only one of these series … exhibits a significant correlation with the time history of the dominant temperature pattern of the 1902-1980 calibration period. Positive calibration variance scores for the NH series cannot be obtained if this indicator is removed from the network …”

    Let’s put that in context:

    ——excerpt follows——-

    … Further consistency checks are required. The most basic involves checking the potential resolvability of long-term variations by the underlying data used. An indicator of climate variability should exhibit, at a minimum, the red noise spectrum the climate itself is known to exhibit, see Mann and Lees, 1996 and references therein. A significant deficit of power relative to the median red noise level thus indicates a possible loss of true climatic variance, with a deficit of zero frequency power indicative of less trend than expected from noise alone, and the likelihood that the longest “secular” timescales under investigation are not adequately resolved. Only 5 of the indicators including the ITRDB PC1, Polar Urals, Fennoscandia, and both Quelccaya series are observed to have at least median red noise power at zero frequency for the pre-calibration AD 1000-1901 period. It is furthermore found that only one of these series — PC 1 of the ITRDB data — exhibits a significant correlation with the time history of the dominant temperature pattern of the 1902-1980 calibration period. Positive calibration variance scores for the NH series cannot be obtained if this indicator is removed from the network of 12 in contrast with post-AD 1400 reconstructions for which a variety of indicators are available which correlate against the instrumental record. Though, as discussed earlier, ITRDB PC 1 represents a vital region for resolving hemispheric temperature trends, the assumption that this relationship holds up over time nonetheless demands circumspection. Clearly, a more widespread network of quality millennial proxy climate indicators will be required for more confident inferences.
    ——end excerpt———-

    You know how to find the source.
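
    The consistency check described in the excerpt (does a series retain at least the low-frequency power red noise would give it?) can be sketched roughly as follows. This is my own toy illustration, not MBH's actual procedure; the function names are invented and spectral normalization conventions are glossed over:

```python
import numpy as np

def ar1_spectrum(phi, sigma2, f):
    # Theoretical power spectrum of an AR(1) ("red noise") process
    # with coefficient phi and innovation variance sigma2.
    return sigma2 / (1.0 + phi**2 - 2.0 * phi * np.cos(2.0 * np.pi * f))

def low_freq_power_vs_red_noise(x):
    """Compare a series' lowest-frequency periodogram power against the
    power an AR(1) process fitted to the same series would show there.
    A large deficit suggests the series resolves less long-term
    variability than red noise alone would produce."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
    sigma2 = x.var() * (1.0 - phi**2)             # implied innovation variance
    freqs = np.fft.rfftfreq(n)[1:]                # skip f = 0 (mean removed)
    pgram = (np.abs(np.fft.rfft(x)) ** 2 / n)[1:]
    return pgram[0], ar1_spectrum(phi, sigma2, freqs[0])
```

    Calling this on a proxy series gives an (observed, expected) pair of low-frequency powers; the paper's criterion compares against the median red-noise level rather than this single point estimate.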

  • Barton Paul Levenson // September 5, 2008 at 11:02 am

    Lost and Confused, unbelievably, posts:

    The reason that comment is interesting is MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became an issue of a fairly large amount of controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, …”

    Lost, READ what you’re quoting! Saying the trend is “robust to” the tree evidence means the trend is still there even without the tree evidence.

  • Barton Paul Levenson // September 5, 2008 at 12:13 pm

    Hank, the FAQ for a paper is a pretty darned good idea — can you write to those organizations that are responsible for several journals and suggest this? I’d be willing to sign letters.

  • Dano // September 5, 2008 at 8:00 pm

    Hank, the FAQ for a paper is a pretty darned good idea — can you write to those organizations that are responsible for several journals and suggest this? I’d be willing to sign letters.

    I’m not sure doing it for the reason Hank gave is a good enough reason. There are numerous journals that give brief synopses with applications for practice that get at what Hank is suggesting.

    I used to discuss with Chris Mooney way back about what science needed to communicate better, and while FAQs are a decent idea, if the researcher who writes them is disconnected from the lay public, the FAQ won’t do much good.

    Systemically, we need to require a communications class at the undergrad level for science majors, and a track where those who don’t want to be lab jockeys but communicate well can have a niche explaining the relevance of papers. This has come up in one form or another before, but there has been little progress at the Uni level.

    Best,

    D

  • David B. Benson // September 5, 2008 at 10:41 pm

    Here is a link Timo found to a paper doing a borehole-based temperature reconstruction of the last 20,000 years. The most recent 2,000 years are of particular interest; see Figure 1 in the paper:

    http://www.geo.lsa.umich.edu/~shaopeng/2008GL034187.pdf

  • HankRoberts // September 6, 2008 at 12:38 am

    > FAQs for science papers

    Maybe one of the science journalism programs could make a project of that sort of thing. Idea’s free for the taking.

  • Chris O'Neill // September 6, 2008 at 9:00 am

    Lost and Confused:

    if you read my post again you should see I made no accusations or even comments regarding whether or not Mann lied. I am attempting to avoid any discussion of people or motives now, and I would appreciate it if you would not misrepresent my posts.

    It would help if you withdrew your implication that Mann lied in MBH98 with your interpretation of MBH98:

    MBH98 .. says this (1400 AD) reconstruction is robust to the removal of dendroclimatic indicators (tree rings).

  • Barton Paul Levenson // September 6, 2008 at 11:10 am

    Maybe not a FAQ per se but a sort of translation into layman:

    WHAT THE PAPER MEANS

    We used statistics to test whether the Arctic ice cap was melting at a faster and faster rate or not. We found that we couldn’t tell. We did not find that it was not melting; it is. Just that it doesn’t appear to be speeding up yet.

    Or something of the sort.
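
    The statistical idea behind that plain-language summary (fit a trend, add a curvature term, ask whether the curvature is distinguishable from noise) can be sketched in a few lines. This is my own toy illustration, not the method of any particular paper; the function name and data are made up:

```python
import numpy as np

def acceleration_tstat(y):
    """Fit y = a + b*t + c*t^2 by least squares and return the
    t-statistic for c.  |t| well below ~2 means no detectable
    acceleration in the trend, which is the situation the
    plain-language summary describes."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, t * t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(y) - 3
    s2 = resid @ resid / dof                      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    return coef[2] / np.sqrt(cov[2, 2])
```

    On a purely linear series plus noise the statistic stays small; only genuine curvature pushes it well past the usual significance thresholds.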

  • Hank Roberts // September 6, 2008 at 4:38 pm

    Yep. Simpler helps. I was thinking ‘ FAQ’ to collect the frequently asserted questions that pop up over and over wherever climate is mentioned, so each paper could gather up the list of copypaste stuff pertinent to it.

    I suppose that would only encourage more of it.

  • Hank Roberts // September 7, 2008 at 4:24 am

    http://viridiandesign.org/

  • Barton Paul Levenson // September 7, 2008 at 10:34 am

    TCO posts:

    “The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years.”

    TCO, 200 million people DID starve to death in the last ten years! Go read some WHO statistics about malnutrition and famine. Keep in mind that something like 15 million people die of all causes every year.

  • Barton Paul Levenson // September 7, 2008 at 10:46 am

    Oops! I did it again — answered an old post (by TCO) because I was confused about the dates I was reading. Sorry about that.

  • apolytongp // September 7, 2008 at 3:34 pm

    “per year in the next ten years” NOT EQUAL to “in ten years”. It’s a difference of ten times.

  • Lazar // September 7, 2008 at 4:46 pm

    Re Mann et al 2008, the supp info pdf has interesting plots such as the effect of removing dendro proxies. But the scan at PNAS is blurred for some (e.g. fig S5 is barely legible at 400% zoom). Go here instead :)

  • Arch Stanton // September 7, 2008 at 6:52 pm

    Hi guys, I’m having a discussion with someone who claims that climate “anomaly” data is derived solely from low temp data. I have been unable to find anything about it. Is there any truth to it?

    Thanks, Arch

  • David B. Benson // September 7, 2008 at 9:55 pm

    Arch Stanton // September 7, 2008 at 6:52 pm — Your discussant is terribly confused.

  • grobblewobble // September 8, 2008 at 8:38 am

    I’d like to continue here with a discussion in the ‘(more) less ice’ thread, since it was getting rather off-topic.

    Barton Paul Levenson wrote:
    [quote]You are conflating the people who do this full-time with those who happened to be convinced by them. The latter indeed deserve help and not derision; but the former simply have to be stopped.[/quote]
    Sir, I beg to differ on this matter. First, I wonder if such a distinction can really be made. It requires an understanding of the motives of other people, which is risky business at best.

    Secondly, how could they be ’stopped’? In the field of science, if someone is misinterpreting observations or using faulty logic, his work is prevented from being published through the process of peer review.
    However, in everyday life such a thing does not and should not exist. The right of free speech demands that every lunatic can spread as much disinformation as he desires.

    It is sad and it bothers me that this is getting in the way of making a complicated scientific finding clear to the general public - especially as it is something that many people hope to be false or at least exaggerated.

    Frustrating as this may be, ’stopping’ people from denying the truth sounds to me like a cure worse than the disease. IMHO, only an open minded exchange of knowledge can eventually make more people see the truth.

  • Ian Jolliffe // September 8, 2008 at 9:36 am

    Apologies if this is not the correct place to make these comments. I am a complete newcomer to this largely anonymous mode of communication. I’d be grateful if my comments could be displayed wherever it is appropriate for them to appear.

    It has recently come to my notice that on the following website, related to this one, my views have been misrepresented, and I would therefore like to correct any wrong impression that has been given.
    http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/

    An apology from the person who wrote the page would be nice.

    In reacting to Wegman’s criticism of ‘decentred’ PCA, the author says that Wegman is ‘just plain wrong’ and goes on to say ‘You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.’ It is flattering to be recognised as a world expert, and I’d like to think that the final sentence is true, though only ‘toy’ examples were given. However there is a strong implication that I have endorsed ‘decentred PCA’. This is ‘just plain wrong’.

    The link to the presentation fails, as I changed my affiliation 18 months ago, and the website where the talk lived was closed down. The talk, although no longer very recent – it was given at 9IMSC in 2004 – is still accessible as talk 6 at http://www.secamlocal.ex.ac.uk/people/staff/itj201/RecentTalks.html
    It certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago. Furthermore, the talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. It is said that for all these different centrings ‘it’s less clear what we are optimising and how to interpret the results’.
    I can’t claim to have read more than a tiny fraction of the vast amount written on the controversy surrounding decentred PCA (life is too short), but from what I’ve seen, this quote is entirely appropriate for that technique. There are an awful lot of red herrings, and a fair amount of bluster, out there in the discussion I’ve seen, but my main concern is that I don’t know how to interpret the results when such a strange centring is used. Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.
    I am by no means a climate change denier. My strong impression is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. Misrepresenting the views of an independent scientist does little for their case either. It gives ammunition to those who wish to discredit climate change research more generally. It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp, but I remain sceptical.

    Ian Jolliffe

    [Response: I apologize for having misrepresented your opinion, but I hope you realize that it was an honest statement of my interpretation of your presentation, in no way was it a deliberate attempt to misrepresent you.

    In your presentation you state: "It seems unwise to use uncentred analysis unless the origin is meaningful." I took this to mean that you endorse uncentered analysis when the origin is meaningful. If you disagree, I accept your disagreement, but it seems to me that I can hardly be blamed for thinking so. It also seems to me (and I'm by no means the only one) that the origin in the analysis of MBH98 is meaningful.

    I certainly agree with this statement from your comment: "... the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence ..."]

  • Arch Stanton // September 8, 2008 at 3:10 pm

    David B Benson

    >my discussant

    I thought so.

    Thanks

  • Lost and Confused // September 8, 2008 at 11:47 pm

    t_p_hamilton you say, “Since subsequent papers have been published with more data, clearly presented supplementary information, and numerous statistical methods, with resulting HIGHER RESOLUTION, why would the first paper be of anything but historical interest?” As I already said, parts of MBH98 have been reused in a number of these subsequent papers. As long as parts of it are still being used, it is still of interest.

    Barton Paul Levenson, I do not understand your post. You say, “Lost, READ what you’re quoting! Saying the trend is ‘robust to’ the tree evidence means the trend is still there even without the tree evidence.” That is exactly how I interpreted the comment, so I do not know why you told me I need to “READ” the quote. Previously it was claimed the trend existed without “tree evidence.” Mann has now said “tree evidence” could not have been thrown away a decade ago. Could you explain what was so unbelievable about my post?

  • HankRoberts // September 8, 2008 at 11:55 pm

    Speaking of ‘twisted’:
    http://bravenewclimate.com/2008/09/04/twisted-the-distorted-mathematics-of-greenhouse-denial/#

  • TCO // September 9, 2008 at 12:16 am

    Tamino, as stated before, the most damning thing is that an expert on PCA can’t really even follow what Mann is doing, let alone opine on if it is right/wrong. We will have dhogaza along in a second to say “well he didn’t say it was for sure wrong”. But that’s not even the point. The point is that someone who is an expert has significant questions. How are we supposed to evaluate Mann’s analysis given the difficulties an expert has with it?

  • TCO // September 9, 2008 at 12:24 am

    Also my clear implication from Jolliffe originally and then especially given the recent comments is that off-centering is a sometime thing requiring some justification and still to be looked at curiously. Given that Mann didn’t even cite that he had DONE THIS, perhaps he did not do what he should have?

    Tammy, you’re like one of my favorite libs, so how about breaking ranks with the cabal and at least say that Mann should have noted that he did the particular normalization within his description of methods? It’s such a minor point. Doesn’t require you to trade in the NASA pass, the Hybrid, the cabal mailing list, what have you. Just a little teensy minor point for proper documentation.

    ;-)

    [Response: I do agree that Mann et al. should have noted the conventions used for their analysis. I don't believe it was in any way an attempt to deceive.]

  • pough // September 9, 2008 at 12:45 am

    Ian, if it means anything, my reading of that post didn’t lead me to think you specifically endorsed anything. In fact, I was assuming what it turns out had happened: that you hadn’t been consulted, just that your work (as interpreted by Tamino) seemed to be backing up usage of uncentered analysis in certain circumstances.

  • TCO // September 9, 2008 at 12:46 am

    Actually I lied. BigCityLiberal is my favorite. You’re on the list though.

  • Timothy Chase // September 9, 2008 at 1:53 am

    TCO wrote:

    Also my clear implication from Jolliffe originally and then especially given the recent comments is that off-centering is a sometime thing requiring some justification and still to be looked at curiously. Given that Mann didn’t even cite that he had DONE THIS, perhaps he did not do what he should have?

    I agree with both you and Tamino on this point, of course. But my personal view is that Michael Mann was probably writing for fellow climatologists who probably wouldn’t bat an eye at seeing or identifying the use of de-centered PCA. So, much like a calculus professor might have skipped steps 1-10 because they were obvious to him (and he just naturally assumed they were obvious to everyone else), Mann omitted the obvious. And as I and others have noted, it gets used in a variety of disciplines and has been since the 1970s.

    [Response: I'll disagree. I don't think the use of decentered PCA is one of those "obvious" steps, and it should have been mentioned.]

  • TCO // September 9, 2008 at 2:24 am

    Cool. I don’t think it was an attempt to hide either. Sorry, you’re still behind BCL, though.

    [Response: I can accept that.]

  • george // September 9, 2008 at 3:07 am

    With all due respect, Dr. Jolliffe, based on your presentation alone, it would be difficult if not impossible for me (or anyone else) to know that Tamino had “misrepresented your views”.

    And under the circumstances, I think “misinterpreted” (rather than “misrepresented”) might have been a better word for you to have used here.

    I think it is important to view Tamino’s statement in its full context because doing so makes it clear that

    1) when Tamino commented that Wegman was “just plain wrong”, he was specifically referring to this statement by Wegman:

    Centering the mean is a critical factor in using the principal component methodology properly.

    Perhaps it was not your intention to do so in your presentation, but you did seem to imply that using uncentered PCA might be warranted in certain case(s ) — specifically, as you said, when “the origin is meaningful”

    Forgive me, but your implication (intentional or not) does seem to stand in direct conflict with Wegman’s categorical claim that

    Centering the mean is a critical factor in using the principal component methodology

    2) When Tamino said

    “You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe”

    it seems quite likely that he was actually referring back to his immediately preceding sentence:

    Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average.

    Again, perhaps it was not your intent to give this impression to those reading your presentation, but I too can see how your statement in your presentation

    “It seems unwise to use uncentred analysis unless the origin is meaningful”

    might be interpreted as Tamino interpreted it.

    I actually think it is unfair of you to hold Tamino completely responsible for any misinterpretation of your views on the subject of uncentered PCA.

    If you really did not believe that uncentered PCA was warranted when you made that presentation, perhaps you should have made that perfectly clear in your original presentation.

    Perhaps your view on the subject has evolved since then?

    Full text of Tamino post below:

    First let’s dispense with the last claim, that non-centered PCA isn’t right. This point was hammered by Wegman, who was recently quoted in reader comments thus:

    “The controversy of Mann’s methods lies in that the proxies are centered on the mean of the period 1902-1995, rather than on the whole time period. This mean is, thus, actually decentered low, which will cause it to exhibit a larger variance, giving it preference for being selected as the first principal component. The net effect of this decentering using the proxy data in MBH and MBH99 is to produce a “hockey stick” shape. Centering the mean is a critical factor in using the principal component methodology properly. It is not clear that Mann and associates realized the error in their methodology at the time of publication.”

    Just plain wrong. Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average. In fact it has distinct advantages IF the origin has particular relevance to the issue at hand. You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.
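
    Tamino’s point that PCs “based on variation from any ‘origin’” are perfectly well defined is easy to make concrete. In this toy sketch of mine (the names and data are invented; this is not Mann’s actual procedure), deviations from a chosen origin are fed to an SVD, and the leading right singular vector is the first “PC” about that origin:

```python
import numpy as np

def first_pc(X, origin):
    """Leading 'principal component' direction of X, taking deviations
    from an arbitrary origin (a row vector) instead of the column means."""
    _, _, Vt = np.linalg.svd(X - origin, full_matrices=False)
    return Vt[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

pc_centred   = first_pc(X, X.mean(axis=0))          # ordinary column-centred PCA
pc_uncentred = first_pc(X, np.zeros(5))             # origin at the measured zero
pc_decentred = first_pc(X, X[-50:].mean(axis=0))    # a sub-period ("calibration") mean
```

    All three are valid decompositions of the data; the dispute in this thread is over which deviations are meaningful to maximize, not over whether the arithmetic can be done.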

  • Timothy Chase // September 9, 2008 at 3:22 am

    Tamino wrote:

    Response: I’ll disagree. I don’t think the use of decentered PCA is one of those “obvious” steps, and it should have been mentioned.

    Not a problem. In this area and a great many others (no doubt) I would strongly recommend that people give your views considerably more credence than mine. However, perhaps you will consider this: sometimes you yourself have questions that, unlike so many nowadays, cannot be answered on the web or in the privacy of your own mind. And perhaps this is one of those times.

  • Patrick Hadley // September 9, 2008 at 5:51 am

    George says that it would have been impossible for Tamino or anyone else to know that he had misrepresented Jolliffe’s views. If you look back at the thread you will see that this was pointed out by several posters, who were roundly abused for their pains. It was patently obvious from his presentation that Jolliffe did not give carte blanche for the use of decentred PCAs.

    What about these comments by Jolliffe (who is certainly no climate change denier) that “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA” and “It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.”

    Surely it is time to admit that the hockey stick and all its later reincarnations are utterly bogus artifacts and that defending it gives ammunition to those who wish to discredit climate research more generally.

    [Response: Of course Jolliffe didn't give carte blanche for the use of uncentered or decentered PCA. Neither did he make a blanket condemnation of those procedures. From his latest comment it's evident that he didn't address the issue of decentered (as opposed to uncentered) PCA at all. It appears he now discredits decentering, and he's entitled to his opinion. But the hockey stick remains when using centered PCA, and when using no PCA at all. The claim that it's nothing but "utterly bogus artifacts" is what's really bogus.

    The case for global warming rests on a mountain of evidence, of which the hockey stick is only a small (and far from crucial) part. It's the denialists who focus on the hockey stick to the exclusion of all else, in an attempt to discredit climate science in general.]

  • Gavin's Pussycat // September 9, 2008 at 6:08 am

    Tamino:

    It also seems to me (and I’m by no means the only one) that the origin in the analysis of MBH98 is meaningful.

    FWIW that’s how I understood the whole point of Tamino’s PCA posts.

  • mikep // September 9, 2008 at 7:42 am

    Here is what McIntyre wrote in 2005, in response to initial comments by Mann using Jolliffe as an authority:

    “The second presentation cited by Mann is a Powerpoint presentation on the Internet by Jolliffe (a well known statistician).
    Jolliffe explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means. Use of non-centered PCA thus emphasized, as was desired, changes in past centuries relative to the 20th century calibration period. (http://www.realclimate.org/index.php?p=98)
    In fact, Jolliffe says something quite different. Jolliffe’s actual words are:
    “it seems unwise to use uncentered analyses unless the origin is meaningful. Even then, it will be uninformative if all measurements are far from the origin. Standard EOF analysis is (relatively) easy to understand –variance maximization. For other techniques it’s less clear what we are optimizing and how to interpret the results. There may be reasons for using no centering or double centering but potential users need to understand and explain what they are doing.”
    Jolliffe presents cautionary examples showing that uncentered PCA gives results that are sensitive to whether temperature data are measured in Centigrade rather than Fahrenheit, whereas centered PCA is not affected. Jolliffe nowhere says that an uncentered method is “the” appropriate one when the mean is “chosen” to have some special meaning; he states, in effect, that having a meaningful origin is a necessary but not sufficient ground for uncentered PCA. But he points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered, and he warns that the results are very hard to interpret. Finally, Jolliffe states clearly that any use of uncentered PCA should be clearly understood and disclosed - something that was obviously not the case in MBH98. In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series. Even if Mann et al. felt that it was the most appropriate method, it should have had warning labels on it.”
    Jolliffe has specifically confirmed that McIntyre’s interpretation of what he said is correct. So someone at least could interpret what Jolliffe wrote correctly. The crucial mistake some readers seem to have made is to confuse a necessary condition with a sufficient condition. Blaming Jolliffe for being insufficiently clear is ungracious in the extreme. Jolliffe was far, far clearer than the 1998 MBH article, which failed to mention the use of non-centering at all, and, contrary to what is said above, non-centering is very non-standard and would not be assumed by the ordinary Nature reader. It’s a very eccentric thing to do. Can’t we just accept that uncentred PCA requires exceptional justification if it is to be used in this area (beginning by telling people it’s being used in the first place)?
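
    McIntyre’s Centigrade/Fahrenheit point can be reproduced on toy data. In this sketch (mine, with invented data, not taken from any of the papers discussed), the uniform 9/5 scaling leaves the centred leading direction unchanged, while the +32 offset drags the uncentred leading direction toward the origin-to-mean axis:

```python
import numpy as np

def leading_direction(X, centre=True):
    """First singular direction of X, with or without column centring,
    sign-fixed so directions can be compared across datasets."""
    D = X - X.mean(axis=0) if centre else X
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    v = Vt[0]
    return v * np.sign(v[np.argmax(np.abs(v))])

rng = np.random.default_rng(1)
# four "stations" sharing a signal with alternating sign, means near 0 degC
signal = rng.normal(0.0, 3.0, size=(300, 1)) * np.array([1.0, -1.0, 1.0, -1.0])
celsius = signal + rng.normal(0.0, 1.0, size=(300, 4))
fahrenheit = celsius * 9.0 / 5.0 + 32.0

# Centred PCA: the offset cancels, the scaling is uniform -> same direction.
same_centred = np.allclose(leading_direction(celsius),
                           leading_direction(fahrenheit))
# Uncentred PCA: the +32 offset dominates, so the leading directions diverge.
dot_uncentred = abs(leading_direction(celsius, centre=False)
                    @ leading_direction(fahrenheit, centre=False))
```

    Here same_centred comes out True while dot_uncentred falls well below 1: the uncentred “component” of the Fahrenheit data mostly points at the large mean rather than at the variation.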

  • Ian Jolliffe // September 9, 2008 at 9:10 am

    Thanks for the apology, Tamino.
    Some further clarification: a lot of the confusion seems to have arisen because of the terminology. Uncentred PCA and decentred PCA are completely different animals. My presentation dealt only with uncentred PCA (and doubly centred PCA). I’ve just looked at it again and it seems completely unambiguous that this is the case. Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else. Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.
    A response from Timothy Chase (thanks for giving a name - I may be old-fashioned but I prefer to know who I’m talking to) suggests that decentred PCA ‘gets used in a variety of disciplines and has been since the 1970s’. I’m aware of uses of uncentred and doubly-centred PCA, but not of decentred PCA. I’d be grateful for the references.
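
    For anyone else finding the terminology slippery, the variants under discussion differ only in what gets subtracted before the eigen-decomposition. A toy illustration (the variable names and the choice of sub-period are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))            # rows = times, columns = variables

col_centred = X - X.mean(axis=0)         # ordinary PCA: subtract column means
uncentred   = X                          # origin = the measured zero of each variable
doubly_centred = (X - X.mean(axis=0)
                    - X.mean(axis=1, keepdims=True)
                    + X.mean())          # subtract row and column means, add grand mean
decentred = X - X[-30:].mean(axis=0)     # subtract the mean of a sub-period only
```

    doubly_centred has zero row and column means; decentred subtracts only a sub-period (calibration) mean, which is the variant Jolliffe says his talk never addressed.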

  • Barton Paul Levenson // September 9, 2008 at 9:43 am

    grobble,

    I wasn’t advocating prior restraint. I was advising handling our own public relations efforts so that people stop listening to liars and crackpots.

  • null{} // September 9, 2008 at 12:13 pm

    Tamino said:

    “I certainly agree with this statement from your comment:”

    “It therefore seems crazy . . . and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. “

  • dean_1230 // September 9, 2008 at 12:30 pm

    Tamino,

    Can we expect to see you revisit your tutorial with Jolliffe’s correction in mind?

  • Chris O'Neill // September 9, 2008 at 1:10 pm

    Certain despite being Lost and Confused:

    Mann has now said “tree evidence” could not have been thrown away a decade ago.

    Not just now, but nine years ago also:

    In using the sparser dataset available over the entire millennium, only a relatively small number of indicators are available in regions (e.g. western North America) where the primary pattern of hemispheric mean temperature variation has significant amplitude, and where regional variations appear to be closely tied to global-scale temperature variations in model-based experiments. THESE FEW INDICATORS THUS TAKE ON A PARTICULARLY IMPORTANT ROLE (in fact, as discussed below, ONE SUCH INDICATOR, PC#1 of the ITRDB data, IS FOUND TO BE ESSENTIAL)

    This is very, very old news.

  • dhogaza // September 9, 2008 at 1:22 pm

    Can we expect to see you revisit your tutorial with Jolliffe’s correction in mind?

    The tutorial doesn’t change, only the reference to Jolliffe.

    Null{}: Quote-mining is a sin.

  • AndyL // September 9, 2008 at 2:05 pm

    Tamino,
    In response to Ian Jolliffe you say: “I certainly agree with this statement from your comment: ‘… the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence …’”

    To be sure there is no further misunderstanding between you and Jolliffe, can you confirm you agree that IPCC and Gore should not have given such prominence to the Hockey Stick?

    Further, do you agree with the remainder of his statement, “it is crazy … that a group of influential climate scientists have doggedly defended a piece of dubious statistics”?

    [Response: No I do *not* agree that "IPCC and Gore" should not have given such prominence to the hockey stick. Your question is itself dishonest; it's the denialist camp which has focused too much attention on the hockey stick, painting it as a crucial centerpiece of climate science, which it is not.]

  • AndyL // September 9, 2008 at 2:42 pm

    Tamino,

    thanks for your reply.

    My question was not dishonest. I wanted to draw out what you meant - which you have clarified.

    However, you claim to agree with Jolliffe. It is not clear whether your statement agrees or disagrees with what Jolliffe meant. It appears to me that you may have misinterpreted him again.

  • Ray Ladbury // September 9, 2008 at 3:12 pm

    Andy L.,
    While I would agree that there are some aspects of MBH98 that 10 years down the road are difficult to defend, I don’t think anyone is trying to defend them. Rather, members of the climate science community are defending the character of good scientists against calumny by the denialists. They are also pointing out that none of the errors in MBH98 substantively affect the basic conclusion: It is hotter now than it has been in a very, very long time. It would seem that the denialists are so eager to attack the characters of M, B and H precisely to divert attention away from the second point.

  • Timothy Chase // September 9, 2008 at 3:15 pm

    Ian Jolliffe wrote:

    A response from Timothy Chase (thanks for giving a name - I may be old-fashioned but I prefer to know who I’m talking to) suggests that decentred PCA ‘gets used in a variety of disciplines and has been since the 1970s’. I’m aware of uses of uncentred and doubly-centred PCA, but not of decentred PCA. I’d be grateful for the references.

    Here are a few that P. Lewis dug up while Tamino was going through his explanation of PCA, centered and non-centered. And there are more. Then I have run into multi-scale principal component analysis, non-linear principal component analysis, kernel principal component analysis, etc.. The latter is getting some use in the identification of climate modes where positive and negative phases aren’t simply negative images of one another. It seems to have a number of variations — which get used in a large variety of disciplines, including image and sound processing, facial recognition, ecological studies, medicine, genetics, economics, etc.. Google and Google Scholar bring up a fair amount.

  • Timothy Chase // September 9, 2008 at 3:16 pm

    In any case, you might check out Tamino’s presentation on principal component analysis…

    PCA, part 1
    http://tamino.wordpress.com/2008/02/16/pca-part-1/

    PCA, part 2
    http://tamino.wordpress.com/2008/02/20/pca-part-2/

  • Timothy Chase // September 9, 2008 at 3:17 pm

    Practical PCA
    http://tamino.wordpress.com/2008/02/21/practical-pca/

    PCA part 4: non-centered hockey sticks
    http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/

    PCA part 5: Non-Centered PCA, and Multiple Regressions
    http://tamino.wordpress.com/2008/03/19/pca-part-5-non-centered-pca-and-multiple-regressions

    He expresses some reservations with respect to how it was performed in the original paper by Mann. But he also points out that you get essentially the same results if you use other methods, including centered principal component analysis, as is demonstrated by other studies of temperature proxies.
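    For readers wondering what “centered” vs. “non-centered” means in practice, here is a minimal numpy sketch on synthetic data (not real proxies; the function and variable names are my own, and plain uncentered PCA stands in here for the non-centered variants — MBH98’s “decentred” version actually subtracted a sub-period mean). It illustrates the point above: when the data carry a strong common pattern, the leading component comes out essentially the same either way.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "proxy network": 50 noisy series, 200 time steps, all sharing
    # one common low-frequency pattern (illustrative data only, not real proxies).
    t = np.linspace(0.0, 1.0, 200)
    signal = np.sin(2 * np.pi * t) + 2.0 * t
    X = np.outer(signal, rng.normal(1.0, 0.3, 50)) + rng.normal(0.0, 0.5, (200, 50))

    def first_pc(data, center=True):
        """Leading principal component via SVD, with optional mean-centering."""
        M = data - data.mean(axis=0) if center else data
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        pc = U[:, 0] * s[0]
        # SVD signs are arbitrary; orient each PC the same way for comparison.
        return pc if np.corrcoef(pc, signal)[0, 1] >= 0 else -pc

    pc_centered = first_pc(X, center=True)
    pc_uncentered = first_pc(X, center=False)

    # The two leading PCs trace essentially the same shape for data like this.
    r = np.corrcoef(pc_centered, pc_uncentered)[0, 1]
    print(round(r, 2))
    ```

    The correlation printed is close to 1 for data of this kind, which is the sense in which the choice of centering convention does not change the recovered pattern.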

  • Timothy Chase // September 9, 2008 at 3:18 pm

    Ian Jolliffe,

    One question. You write:

    Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else.

    Wouldn’t this depend upon the coordinate system? Such that, by choosing a different coordinate system, you could make all the variables equal zero at whatever point you like? I hope that by “being meaningful” you mean something more restrictive than this. At least I would prefer something a little more restrictive, such as especially meaningful within the historical context of the problem or given the available data, such that the choice is not arbitrary.

  • Bill // September 9, 2008 at 4:50 pm

    dhogaza, I wouldn’t call Tamino a sinner for taking this quote:

    It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

    And reducing it to:

    It therefore seems crazy that the MBH hockey stick has been given such prominence…

    especially since the part that was removed seems to refer to Tamino. Of course, since this is Tamino’s blog, we should heed his instructions to trust the source, who states this:

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

  • dhogaza // September 9, 2008 at 5:23 pm

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

    And people have analyzed the data without doing so, and get the hockey stick.

    Mann et al. added a very large number of new proxies, analyzed the set without using any type of PCA, and got the hockey stick.

    On and on, ad infinitum.

  • george // September 9, 2008 at 5:39 pm

    One thing is very interesting in this whole hockey stick debate:

    While many of the experts in various disciplines related to the debate have been able to view the whole hockey stick controversy in context for what it really means, some people (on both “sides”) have a very hard time letting MBH98 go.

    Wegman criticized Mann’s statistics, but nonetheless said that the case for global warming did not rest on Mann’s results and that it was “time to put the ‘hockey stick’ controversy behind us and move on.”

    “We do agree with Dr. Mann on one key point: that MBH98/99 were not the only evidence of global warming. As we said in our report, ‘In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record since 1850 clearly indicates an increase in temperature.’ We certainly agree that modern global warming is real. We have never disputed this point. We think it is time to put the ‘hockey stick’ controversy behind us and move on.”

    The NRC issued a report that concluded that some of Mann’s claims (particularly about individual years in the 90’s being the hottest in the last 1000 years) were not supported with any certainty, but nonetheless stated quite unambiguously that the case for warming did not depend on Mann’s results.

    Dr. Jolliffe clarifies above (thank you, Dr. Jolliffe) that “It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it” and that “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA”, but he also says: “I am by no means a climate change denier. My strong impression is that the evidence rests on much, much more than the hockey stick.”

  • Gaelan Clark // September 9, 2008 at 5:47 pm

    If the hockey stick is not important, then why are we concerned over what has been termed—because of the hockey stick alone—”unprecedented warming” in the last few decades?

  • Timothy Chase // September 9, 2008 at 6:36 pm

    Bill quotes:

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

    It seems like a rather odd thing to say, as PCA gets used in the processing of sound, in economic analysis (which is pretty much all dynamic), in the study of climate modes (oscillations, which are by definition dynamic), etc.

    It is such a widely used technique, but given this statement, it is beginning to sound like it shouldn’t be used at all.

  • johnG // September 9, 2008 at 7:02 pm

    Can you or your readers recommend any good references for understanding astronomical forcing?

    I’m trying to build a presentation for my astronomy club on astronomical forcing, but I also want to put the subject in the context of paleoclimate evidence and current greenhouse gas theory, and to construct some very simple models that illustrate changes in insolation with changes in orbit.

    My community is a hotbed of global warming denial, and so I’m hoping that my presentation will allow me to get some of the fine discussion I see on this and other climate-related blogs into places where it’s badly needed.

    Thanks in advance,
    jg
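    [A very simple starting point for the kind of model jg describes, offered as a sketch rather than a recipe: combine the inverse-square law with the standard result that the annual-mean flux over a Keplerian orbit scales as 1/sqrt(1 - e²). The solar constant and the eccentricity values below are round illustrative numbers; note that precession and obliquity, which redistribute insolation by season and latitude, matter far more for Milankovitch forcing than the tiny annual-mean change shown here.]

    ```python
    import math

    S0 = 1361.0  # W/m^2: approximate present-day solar constant at 1 AU

    def perihelion_aphelion_flux(e, S0=S0):
        """TOA flux at perihelion and aphelion: distance is a(1-e) and a(1+e),
        and flux falls off as the inverse square of distance."""
        return S0 / (1.0 - e) ** 2, S0 / (1.0 + e) ** 2

    def annual_mean_flux(e, S0=S0):
        """Time-averaged TOA flux over one Keplerian orbit: S0 / sqrt(1 - e^2)."""
        return S0 / math.sqrt(1.0 - e * e)

    # Today's eccentricity vs. a roughly maximal Milankovitch value.
    for e in (0.0167, 0.058):
        peri, ap = perihelion_aphelion_flux(e)
        print(f"e={e}: perihelion {peri:.0f} W/m^2, aphelion {ap:.0f} W/m^2, "
              f"annual mean {annual_mean_flux(e):.2f} W/m^2")
    ```

    Even at high eccentricity the annual mean barely moves (a fraction of a percent), while the perihelion-aphelion swing grows large — which is why the seasonal and latitudinal redistribution, not the annual mean, drives the ice-age cycles.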

  • L Miller // September 9, 2008 at 7:08 pm

    “My strong impression is that the evidence rests on much, much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.”

    Dr Jolliffe

    I’m a rather infrequent poster here, but since I’ve already seen your post here linked on three separate sites, I thought I would give you some feedback on the nature of this debate.

    While I certainly agree that the evidence for climate change rests on much more than the hockey stick, the hockey stick itself rests on much more than a single paper published in 1998. Since then, more than a dozen papers have reproduced the same result without PCA, and it has been demonstrated that neither the centering nor the use of PCA has any impact on the final outcome of MBH98.

    It isn’t at all uncommon for less-than-perfect choices to be made in first-of-its-kind papers like the one in question. The ultimate test isn’t whether such flaws exist but whether the results hold up when those flaws are fixed in later papers, and the hockey stick certainly has held up. It’s not surprising, therefore, that climate scientists should defend it.

    While I think it’s clear you are addressing your comments toward a specific part of one paper, that isn’t the claim being made by those who typically bring this topic up. I’ve already seen people linking to your post here, claiming it as “proof” that the hockey stick shape doesn’t exist at all and that the issues you point out mean that every paper which yields the same result as MBH98 should be dismissed. I know that sounds ridiculous, but it truly is the line being spread about the 1998 paper and your comments on it.

  • Gavin's Pussycat // September 9, 2008 at 8:22 pm

    Gaelan Clark:

    what has been termed — because of the hockey stick alone — “unprecedented warming” in the last few decades

    Stop lying.

  • pough // September 9, 2008 at 8:39 pm

    If the hockey stick is not important, then why are we concerned over what has been termed—because of the hockey stick alone—”unprecedented warming” in the last few decades?

    I’m not entirely sure, but I think you’re referring to two things with one name (an unfortunately easy and common confusion). There is “the hockey stick” that is sometimes one paper, MBH98, and there is “the hockey stick” that is a number of papers all showing a similar shape.

    MBH98 is not alone in showing unprecedented warming in the last few decades. For that reason (and because it was done so long ago and has been superseded) it is no longer important.

    Also keep in mind that “unprecedented” doesn’t just refer to temperature level, but also to rate of increase. I like to say that slowing from 100km/h to zero is nothing particularly interesting unless you happen to do it in the space of one meter.
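    [pough’s car analogy can be made quantitative with one line of kinematics: stopping from 100 km/h within one metre requires a deceleration of roughly 39 g, via v² = 2ad. A back-of-envelope check:]

    ```python
    # Stopping from 100 km/h over 1 metre, using v^2 = 2*a*d.
    v = 100 / 3.6              # 100 km/h converted to metres per second
    d = 1.0                    # stopping distance in metres
    a = v ** 2 / (2 * d)       # constant deceleration required
    print(round(a / 9.81, 1))  # -> 39.3, i.e. about 39 g
    ```

    The endpoints (100 km/h and 0) are unremarkable; the rate at which you move between them is what breaks things.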

  • Pete // September 9, 2008 at 9:41 pm

    L Miller, Jolliffe has simply admonished Tamino for misrepresenting his views as supporting the use of decentered PCA as used in MBH98. It seems that he has never seen that paper or any of the others claimed to have used this methodology. Perhaps Wegman was right that it’s long overdue for this field to use world-class statisticians, given the importance being claimed for this research. It would be interesting to see Dr Jolliffe’s take on MBH98 and the papers you allude to, but he must be pretty busy not to have even noticed them, given their high profile.

  • None // September 9, 2008 at 9:56 pm

    dhogaza,
    Have there been ANY non-PCA multiproxy studies which get a hockey stick WITHOUT relying on the Gaspe series and the extremely contentious bristlecone pine series?

  • David B. Benson // September 9, 2008 at 10:59 pm

    johnG // September 9, 2008 at 7:02 pm — I recommend W.F. Ruddiman’s “Earth’s Climate: Past and Future” as a good starter. You also should consider David Archer’s “The Long Thaw” or else papers available on his publications web page.

    For some mathematical treatments, it seems that Wikipedia is not a bad place to begin.

  • Dean P // September 9, 2008 at 11:18 pm

    Pough,

    One thing to keep in mind is that only GISS shows an “unprecedented” rate of change since 1979. If you use the HadCRU data, the rate of change at the end of the 20th century is almost identical to the rate of change between 1910 and 1940, which, as I understand it, was due to totally natural causes.

    And since neither of these records goes back past the 1800s, it may be vain to say that it has “never” happened before. Never is a very long time.

    [Response: The early 20th-century warming is not attributed entirely to natural causes. And the warming rate according to HadCRU data is greater for the late 20th century than for 1910-1940, although the difference is not statistically significant.]

  • David B. Benson // September 10, 2008 at 12:20 am

    Dean P // September 9, 2008 at 11:18 pm — The rate of change for the last century is roughly comparable to the recovery, in central Greenland, from the 8.2 kybp event (a bit faster) and to the recovery from the Younger Dryas (maybe a bit slower).

    However, this compares the global temperatures of HadCRUTv3 with the regional temperature of Greenland; it is not really fair to imply that global temperatures went up that fast at those pre-(Holocene climatic optimum) times. In particular, the Younger Dryas does not show up at all in the Antarctic and Patagonian paleodata.

  • cce // September 10, 2008 at 12:20 am

    RSS shows slightly more warming since 1979 than GISTEMP. 0.17 degrees per decade vs 0.16 degrees per decade (as of August ‘08).

    http://cce.890m.com/gistemp-vs-rss.jpg

    The differences between RSS, GISTEMP, and HadCRUT are negligible.

    http://cce.890m.com/giss-vs-all.jpg
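    [The per-decade figures cce quotes are ordinary least-squares trends fitted to the monthly anomaly series. A minimal sketch on synthetic data — the 0.17 °C/decade slope and the noise level below are made up purely for illustration, not taken from any real record:]

    ```python
    import numpy as np

    def decadal_trend(anomalies, start_year):
        """Ordinary least-squares trend of a monthly anomaly series,
        returned in degrees per decade."""
        years = start_year + np.arange(len(anomalies)) / 12.0
        slope, _ = np.polyfit(years, anomalies, 1)
        return slope * 10.0

    # Synthetic 30-year monthly series with a built-in 0.17 deg/decade trend.
    rng = np.random.default_rng(42)
    t_years = np.arange(12 * 30) / 12.0
    series = 0.017 * t_years + rng.normal(0.0, 0.1, t_years.size)
    print(round(decadal_trend(series, 1979), 2))
    ```

    The recovered slope lands close to the built-in 0.17, with scatter of a few hundredths of a degree per decade — which is also roughly why a 0.16 vs. 0.17 difference between datasets is negligible.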

Leave a Comment