Open Mind

Open Thread #5

August 10, 2008 · 559 Comments

For discussion of things global-warming related, but not pertinent to existing threads.

Categories: Global Warming

559 responses so far ↓

  • Bella Green // August 10, 2008 at 3:40 am

    This is the perfect place to thank you again for the endless time, effort and patience you put into this blog and to encourage you to continue. I am using your brains to help me write my lectures for the class I’ll be “teaching” on climate science this fall for Senior University (Southwestern University in Georgetown, Texas). Between your site and the fine gentlemen at RealClimate, I’m confident that I’ll be able to communicate the basics and answer all their questions. I don’t think I’d have the nerve to do this without having real scientists as backup, though that won’t stop me from having a shaking fit after every lecture and having a glass of Scotch when it’s over! Public speaking is *not* easy for me. And I want to remind you that, even if it often doesn’t seem like it, you really are making a difference “out here”.

    Ya know, I think there should be a different, more dignified name for blogs like yours that challenge us and increase our knowledge — a title that separates them from girls going on about makeup, for example (though those blogs do have their uses when one’s 12 yr old daughter is discovering makeup (I’m SO not ready for this!))

    Ah yes – my cat Greymantle is reading this (really) and says hello, and asks your Blueberry to please avoid walkabouts longer than overnight, and also strongly warns against getting into fights with anything that can get one’s entire head into its maw. Cheers, mate!

    [Response: Best of luck with the class. If specific questions arise, feel free to ask them here, but be advised that the folks at RealClimate know a lot more than I do. By any chance are you an Aussie?]

  • Paul Middents // August 10, 2008 at 6:38 am

    Bella,

    Press on fearlessly in your teaching. You will touch a few and one of them might make a difference. This website and RealClimate are great resources. There are lots of others. Check out the septic sites too. Every once in a while a wise a** in your class will pipe up, and you need to be prepared.

    I taught Astronomy (among lots of other things) for ten years in a community college (1991-2001). Global warming was becoming an issue. I was skeptical at first out of sheer ignorance. A little study of the physics and history quickly converted me. I regret that I did not give the issue enough prominence in my classes.

    You can make a difference, one student at a time. They remember you and they remember what you say. It’s a little scary.

    Paul

    [Response: Which reminds me: Spencer Weart's The Discovery of Global Warming is a resource of tremendous value, one of the best.]

  • michel // August 10, 2008 at 8:45 am

    http://www.guardian.co.uk/environment/2008/aug/10/climatechange.arctic

    Is this true?

  • Lazar // August 10, 2008 at 11:13 am

    Why the Climate Audit / David Stockwell attack on CSIRO “Drought Exceptional Circumstances Report” is wrong.

    The CSIRO report predicts increasing frequency and severity of exceptional temperature and rainfall events, over all seven regions of Australia for temperature, and three of seven regions for rainfall (no discernible changes in the others). An exceptional temperature event, in the context of drought, is an annual average temperature above the 95th percentile of observed temperatures during 1910-2007. An exceptional rainfall event is likewise a total annual rainfall below the 5th percentile for 1900-2007. This difference in periods is due to the availability of reliable observational data for temperature and rainfall. Severity is measured as the area affected during an exceptional event. Predictions were made using an ensemble of 13 GCMs.
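    For concreteness, the percentile definitions can be sketched in a few lines (the data below are synthetic stand-ins, not the observational records the report uses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the observational records described above
# (values and record lengths are illustrative, not CSIRO data).
temps = rng.normal(loc=21.0, scale=0.6, size=98)    # annual mean temp, 1910-2007
rain = rng.gamma(shape=8.0, scale=60.0, size=108)   # annual total rain, 1900-2007

# Exceptional thresholds in the report's terms:
hot_threshold = np.percentile(temps, 95)  # above this = exceptional temperature year
dry_threshold = np.percentile(rain, 5)    # below this = exceptional rainfall year

hot_years = np.sum(temps > hot_threshold)
dry_years = np.sum(rain < dry_threshold)

# By construction, roughly 5% of years in each record qualify as exceptional.
print(hot_years, len(temps))
print(dry_years, len(rain))
```

    So "exceptional" is a statement about the tail of the observed distribution, not a fixed physical threshold.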

    David Stockwell claims

    all climate models failed standard internal validation tests for regional droughted area in Australia over the last century

    The tests David Stockwell employed were…

    … correlating model predictions for individual years of exceptional rainfall with observed years of exceptional rainfall! This ignores noise (internal variability in the climate system and GCM climate simulations) and the fact that the CSIRO report predicted frequency. Steve McIntyre and the auditors repeat this mistake here, with the obligatory snark from Steve (“Even for Michael Mann, a correlation of -0.013 between model and observation wouldn’t be enough. For verification, he’d probably require at least 0.0005.”) and a 100-word paragraph about the trouble involved in untarring a .tar archive.

    … comparing trends from linear regression. For each year of modelled (mean of 13 GCMs) and observed data he took the area affected, but for years when there was no exceptional event (i.e. most years) he used an ‘area affected’ value of zero, with the result that the residuals are not even close to normally distributed. Still, he applied a t-test to the difference in observed and modelled trends. But the error term was calculated only as the standard deviation of the 13 GCM modelled trends. He ignored the error in estimating a trend itself, which, when taken into account, renders the observed and modelled trends statistically insignificant (not different from zero) — unsurprising given the treatment of years not containing an exceptional event.

    … he claims to test “The probability of significance of the difference between the observed trend and mean trend projected for the return period (returnp-p), the mean time between successive droughts at the given level” and concludes “This indicates the frequency of droughts in the models has no relationship to the actual frequency of droughts”. What he actually did was compare the mean, over the entire period 1900-2007, of the number of years between exceptional events for modelled and observed data. Not trends.

    … he completely ignores the analysis of exceptional temperature events in the CSIRO report which incidentally show much better correlations between model and observed.

    … he claims that GCMs are calibrated on regional precipitation data: “Standard tests of model skill are either internal (in-sample) validation, where skill is calculated on data used to calibrate the model, or external (out-of-sample) validation, where skill is calculated on held-back data. As external validation is the higher hurdle, poor internal validation blocks further use of the model. Here internal validation is performed on the thirteen models over the period 1900 to 2007 for each of the seven Australian regions.” They are not.

    This is the first time I am actually angry about…
    Denialists pestering scientists.
    Producing disinformation.
    And setting themselves as auditors in order to sell that disinformation.

    “Key claims of the CSIRO report do not pass obvious statistical test for “significance”.” — Steve McIntyre.

    “Studies of complex variables like droughts should be conducted with statisticians to ensure the protocol meets the objectives of the study.” — David Stockwell

    “I don’t think its fair to single out CSIRO. You need to identify the enemy — IMO bias and pseudoscience. There are targets for review everywhere. The public face of science has shifted from atom splitters to GHG accounting.” — David Stockwell

    For a reasonable model-observation comparison, do read the CSIRO report especially figures 8 and 10.

    Thanks (I think) to ST for pointing this out.

  • Allen // August 10, 2008 at 1:10 pm

    Lazar,

    Thanks for the review. I took your advice and downloaded the CSIRO report and looked at the figures you suggested. I also read Stockwell’s report.

    He says as his #1 critique: “…While drought area decreased in the last century in all regions of Australia except for Vic&Tas and SWWA, the models simulated increase in droughted area in all regions. The Vic&Tas region has very low observed trend (+1% per year) in droughted area. This means the climate models are significantly biased in the opposite direction to observed drought trends…”

    Your recommended CSIRO Figure 10 seems to bear him out. The actual data seems to decrease over time in all but two areas– while the models show an increase over time in all the areas. That is, the model trends are opposite the actual data trends even in the calibration period.

    Also, the CSIRO report authors’ Summary indicates “…the qualitative assessment that the temperature data have the lowest uncertainty, that there is higher uncertainty with the rainfall data, and that the soil moisture data – being derived from a combination of rainfall data, low resolution observations of evaporation, and modelling – are the least reliable…”

    So, even the CSIRO authors themselves seem to be saying something similar to what Stockwell said — that is, they caution regarding drought aspects of the report.

    I am perplexed. There does not seem to be much of a disconnect on the content of Figure 10 and its import regarding the model predictions.

    Anyhow, I’ve bookmarked this site (my first visit) as it seems to provide more depth than many.

  • dhogaza // August 10, 2008 at 1:58 pm

    Is this true?

    Is what true? The reported observation that melting of the arctic ice cap accelerated in mid-July, thus putting things back on track to meet or break last year’s low ice extent record? Yes, that’s true.

  • Lazar // August 10, 2008 at 4:32 pm

    Allen,

    Thanks for the response.

    He says as his #1 critique: “…While drought area decreased in the last century in all regions of Australia except for Vic&Tas and SWWA, the models simulated increase in droughted area in all regions. The Vic&Tas region has very low observed trend (+1% per year) in droughted area. This means the climate models are significantly biased in the opposite direction to observed drought trends…”

    First off, talk of “droughted area” instead of ‘area affected by extreme rainfall’ is wrong (the CSIRO report does not talk of “droughted area”) and elides the role of temperature and its analysis in the CSIRO report. Drought is multiply defined and is affected by a combination of rainfall, temperature, and wind speed, and not just their total or average amounts but also their timing (what time of year).

    A better understanding can be found in…

    Cai, W., and T. Cowan (2008)
    Dynamics of late autumn rainfall reduction over southeastern Australia
    Geophys. Res. Lett., 35
    doi:10.1029/2008GL033727

    and

    Cai, W., and T. Cowan (2008)
    Evidence of impacts from rising temperature on inflows to the Murray-Darling Basin
    Geophys. Res. Lett., 35
    doi:10.1029/2008GL033390.

    And here (also read Luke’s comments). The situation in the MDB is critical.

    The CSIRO report analyses extreme events of temperature and precipitation. A single extreme temperature or precipitation event is not of itself sufficient for the Australian National Rural Advisory Council and the Minister for Agriculture, Fisheries and Forestry to issue an exceptional circumstances (aka drought) declaration. Equating an extreme precipitation event with drought is simply wrong.

    Stockwell claimed that the data and his analysis showed, among other results, that “the climate models are significantly biased in the opposite direction to observed drought [not "drought" -- Lazar] trends”. That claim and others are demonstrably (above) false.

    The claim that the graphs show modelled and observed trends of opposite sign is your claim. Eyeballing graphs is not reliable, though. The data used to produce the graphs are 10-year moving averages and therefore highly autocorrelated. You would need to test for significance and account for autocorrelation to make a solid claim.
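    To see why eyeballing smoothed curves misleads, consider what a 10-year moving average does to pure white noise (a toy example, not the CSIRO data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure white noise: no trend, no persistence (a stand-in for yearly data).
noise = rng.normal(size=500)

# A 10-year moving average, as used for the figures in the report.
window = 10
smooth = np.convolve(noise, np.ones(window) / window, mode="valid")

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(round(lag1_autocorr(noise), 2))   # near 0 for the raw noise
print(round(lag1_autocorr(smooth), 2))  # near 0.9 after smoothing
```

    The smoothed series drifts in long, trend-looking excursions even though the underlying process has no trend at all, which is why a significance test that accounts for the induced autocorrelation is essential.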

    Anyway, I hope you stick around this site.

  • Lazar // August 10, 2008 at 4:57 pm

    Allen,

    trends even in the calibration period

    GCMs are not statistical models. The atmosphere is divided into parcels, boundary conditions are applied (surface topography, oceans, forcings), the exchange of energy, mass, and momentum between parcels is calculated under well-defined physical laws with conservation enforced, and the whole thing swirls into motion.
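    A toy illustration of the distinction, with everything invented for the sketch: parcels on a ring exchanging energy under a simple conservative transport rule, rather than a curve fitted to data. (A real GCM is vastly more elaborate; this only shows the *kind* of model a GCM is.)

```python
import numpy as np

# A ring of "parcels" exchanging energy diffusively with their neighbours.
# Nothing is fitted to observations; the evolution follows from the rule
# plus the initial condition.
n = 50
energy = np.zeros(n)
energy[0] = 100.0  # initial condition: all energy in one parcel
k = 0.25           # exchange coefficient (stable for k <= 0.5)

total_before = energy.sum()

for step in range(1000):
    # Each parcel exchanges energy with its two neighbours (periodic ring).
    flux = k * (np.roll(energy, 1) - 2 * energy + np.roll(energy, -1))
    energy += flux

total_after = energy.sum()

# The rule conserves energy by construction: the state spreads out toward
# uniformity, but the budget closes at every step.
print(round(total_before, 6), round(total_after, 6))
```

    Calibration in the statistical sense never enters: you cannot "tune" this model to reproduce a particular year, only check that its conserved quantities and statistics behave physically.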

  • John Mashey // August 10, 2008 at 6:35 pm

    Although the following doesn’t fit Open Mind’s focus on fine technical analyses, since this thread has had some material on disinformation:

    Synopsis of Naomi Oreskes, You CAN argue with the Facts – Full Talk, April 17, 2008 – Stanford U – 40 minutes.

    Naomi is an award-winning geoscientist/science historian, a Professor at UCSD and, as of July, promoted to Provost of the Sixth College there. She is also a meticulous researcher, as seen from her past books, and from my having reviewed a few chapters of the book she mentions in the talk. She unearthed some fascinating memos, although it was of course impossible to replicate the exhaustive database of tobacco documents.

    If you haven’t seen her earlier 58-minute video, “The American Denial of Global Warming”, you might watch that first. Its first half is a longer version of the development of climate science, and the second half is about the George C. Marshall Institute.

    This talk has about 10 minutes of background, and the rest is new material on the Western Fuels Association.

    The video production isn’t flashy, but it’s good enough. This, of course, is an informal seminar talk – for the thorough documentation, you’ll have to await the book.

    ======SUMMARY=====
    00:00 Background [fairly familiar, some overlap with earlier talk]

    10:30 1988, Hansen in Congress, IPCC starts

    11:05 “Tobacco strategy” to challenge science

    I.e., use of similar techniques, sometimes by same people

    14:50 Western Fuels Association (Powder River coal companies)

    Sophisticated marketing campaign in test markets

    17:20 1991 – WFA creates ICE – Information Council for Environment

    ICE ~ Tobacco Industry Research Council (TIRC) –
    See Allan M. Brandt, “The Cigarette Century”

    21:00 WFA print campaign

    23:00 Scientists are more believable than coal people, so use scientists, create memes

    25:30 WFA produces video “The Greening of Earth”, provides many copies

    The Greening Earth Society (astroturf); more CO2 is good for the whole Earth. Excerpts from video.

    30:00- Video shows the Sahara turning completely green

    32:20- “Plants have been eating CO2 and they’re starved”
    Discussion of circumstances under which CO2 does help and illustration of marketing tactics, cherry-picking, etc. I.e., how does one use a few tidbits of real science to create an impression very different from the overview? Are there lessons for scientists?

    40:00 end

  • Hank Roberts // August 10, 2008 at 7:48 pm

    > GCMs are not statistical models

    Yep. Spelled out here:

    http://www.thebulletin.org/print/web-edition/roundtables/the-uncertainty-climate-modeling?order=asc

    —excerpt—–
    … this problem is not fundamental to climate models, but is a symptom of something more general: how scientific information gets propagated beyond the academy. What we have discussed here can be broadly described as tacit knowledge–the everyday background assumptions that most practicing climate modelers share but that rarely gets written down. It doesn’t get into the technical literature because it’s often assumed that readers know it already. It’s not included in popular science summaries because it’s too technical. It gets discussed over coffee, or in the lab, or in seminars, but that is a very limited audience. Unless policy makers or journalists specifically ask climate modelers about it, it’s the kind of information that can easily slip through the cracks.

    Shorn of this context, model results have an aura of exactitude that can be misleading. Reporting those results without the appropriate caveats can then provoke a backlash from those who know better, lending the whole field an aura of unreliability.

    So, what should be done? Exercises like this discussion are useful, and should be referenced in the future. But there’s really no substitute for engaging more directly with the people that need to know.
    —–end excerpt——–

  • Lazar // August 10, 2008 at 8:29 pm

    E.g., Allen, here is a plot of data from the Murray-Darling Basin from the CSIRO report. The black are observations, red are model values. Years without extreme precipitation are treated as missing values, and model values are the mean of the 13 GCMs, which is why the peaks of observational data are higher and the number of missing values greater than for model data. The model trend is up and the observed trend down, but neither trend is significant, and the observed is within the confidence interval of the modelled.

  • Brian D // August 10, 2008 at 11:23 pm

    This is a little off-topic for the proto-discussion forming here, but DeSmogBlog’s linked Open Mind. Seeing as WordPress isn’t particularly well-organized for browsing one blog by topic (and Tamino’s blogged nearly everything under a single “global warming” tag), I submitted a small selection of some of the more pertinent topics there (based on which inactivist arguments show up in the comments most commonly). The reaction from the resident denialists is, in a word, amusing (provided one can bring oneself to laugh at gross, irresponsible idiocy).

  • Luke // August 10, 2008 at 11:37 pm

    A challenge offered – (not by me – but $1000 up for grabs)

    http://www.jennifermarohasy.com/blog/archives/003315.html

    [Response: That's the same Jennifer Marohasy who recently posted We Aren’t Responsible for Rising Atmospheric Carbon Dioxide: A Note from Alan Siddons. 'Nuff said.]

  • Luke // August 11, 2008 at 12:20 am

    Of course – but $1000 quick bucks for any triers offered in that thread by Michael Duffy who runs a sceptic show on Australian ABC radio. Just FYI.

  • Allen // August 11, 2008 at 12:26 am

    Lazar,

    Thanks for the constructive reply.

    As a relative newcomer to “Climate Science” issues, I am having the probably-normal difficulties assessing conflicting viewpoints. A superficial look at (logical) arguments is not adequate, I find.

    Therefore, I floated my observation, hoping for a constructive reply — and I got one. Your reply gives me something to study and think about that should improve my understanding — once I do my “homework”.

    I’ll followup your references.

  • Allen // August 11, 2008 at 12:35 am

    Hank Roberts,

    Your “excerpt” hits one of the nails on the head. I find that, for the most part, articles and reports (pro and con) that I have scanned in the last few weeks leave out a lot of detail that I think should be in there — if the authors want to be convincing to an outsider. Sometimes, merely defining jargon in a glossary (or abbreviations at their first use) would help. Moreover, on the few occasions when I have dug down to the references (pro and con), they left out scientific steps necessary to be convincing to an ignorant but interested third party. Just an observation.

  • Bella Green // August 11, 2008 at 12:38 am

    Paul, thanks for the encouragement. I’ve done a couple of presentations and encountered the obligatory smart-a–, and didn’t lose my temper. So far so good…

    HB, I’ve recommended my students read Dr. Weart’s book before we start. And no, I’m not an Aussie, I just like words, and ‘walkabout’ is one of my favorites. I’m so very glad your cat survived his extended ‘walkabout’.

  • Hank Roberts // August 11, 2008 at 12:46 am

    Brian, I hope you asked our host’s okay before doing that. Else you’ll just increase his shitload — remember he’s got to shovel the stuff from people here who don’t come here to learn.

    A plea to web-competent folks — come up with something like a killfile or blackhole list or spam filter to which blog hosts can submit the IP addresses of persistent time wasters, to share flagging the copypasters. It’s not censorship to identify the sources, especially when they’re sockpuppeting. I suspect there are a lot fewer of them than the userids would indicate, from the amount of pure repetition. Google the obvious phrases, the ones that are the tastiest bait. They repeat themselves.

    Hosts, don’t bite. Their goal is to waste your time and delay, delay, delay your real work.

  • Luke // August 11, 2008 at 12:57 am

    But more importantly on the drought issue:

    Messy stuff – for starters the process of drought declaration and revocation needs to be modelled properly. For example I think in the state of Queensland if you declared drought on 12 month percentile 5 rainfall you might think that you end up in drought 5% of the time i.e. 5 years in 100 on “average”. But bad droughts are multi-year in nature. They persist. They persist as a “break” doesn’t occur. So if you use a revocation rule of reaching median rainfall you end up in drought 23% of the time (from memory of a Qld example). Much longer than 5%.
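    That declaration/revocation asymmetry is easy to see in a toy simulation (all parameters invented for illustration, not fitted to Queensland data): declare drought when annual rainfall falls below its 5th percentile, but revoke only when rainfall returns to the median.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 100_000

# Mildly persistent annual rainfall anomalies (a simple AR(1) process);
# the persistence parameter is illustrative only.
phi = 0.3
rain = np.empty(years)
rain[0] = rng.normal()
for t in range(1, years):
    rain[t] = phi * rain[t - 1] + rng.normal()

declare = np.percentile(rain, 5)   # declare drought below this
revoke = np.percentile(rain, 50)   # revoke only at/above the median

in_drought = False
drought_years = 0
for r in rain:
    if not in_drought and r < declare:
        in_drought = True
    elif in_drought and r >= revoke:
        in_drought = False
    if in_drought:
        drought_years += 1

# Far more than 5% of years end up "declared", because revocation
# waits for a return to median rainfall.
print(drought_years / years)
```

    Even though only 5% of years trigger a declaration, the fraction of time spent declared is roughly double that or more, in the same spirit as the Queensland figures quoted above.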

    From a severity point of view temperatures are up compared to previous droughts – supposedly making droughts worse. But if the southern circulation effects are to produce more high pressure systems over the continent, then wind may be less. And in the formulation of evaporation – solar radiation, wind and vapour pressure are more important than temperature. Having said that – (and speculating now) – high soil temperatures affect the vapour transport in soils making situations worse (So I’m told). So we don’t have evaporation sorted out in the modelling process.

    On the wind issue – http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2FJCLI4181.1

    I wouldn’t expect CSIRO to get the trend exactly right in their 1900-2000 runs. The years won’t match up. What they should try to get right is the spectral components of year to year and decadal variability. Something like a weather generator would do – the statistical properties should be correct but it won’t match any particular year.

    And there appear to be decadal, interdecadal and quasi-decadal modes in regional rainfall. Tamino will have to help me here as I’m on a statistical hiding to nothing, but there is some feeling that AGW may be slowing down quasi-decadal variability – all speculation from myself as an agriculturalist, but here you are:

    Rainfall Variability at Decadal and Longer Time Scales: Signal or Noise?

    http://www.bom.gov.au/bmrc/clfor/cfstaff/sbp/journal_articles/holger_jclim_2005.pdf

    http://jedac.ucsd.edu/PROJECTS/PUBLISHED/GDR_PREDICTION/GDR_Prediction.pdf

    McPhaden, M. J., and D. Zhang, 2002: Slowdown of the meridional overturning circulation in the upper Pacific Ocean. Nature, 415, 603–608.
    Allan, R.J. (1985). The Australasian Summer Monsoon, Teleconnections, and Flooding in the Lake Eyre Basin. Royal Geographical Society of Australasia.

    So how good is the modelling of all the decadal influences …. hmmmm …

    So IMO – not enough depth by CSIRO – a fairly modest analysis of a serious issue by Stockwell and hope we don’t throw the baby out with the bathwater. The Australian Government Treasury’s problem is that they have been shelling out billions of dollars in drought aid for decades. Some landholders may have had 200 years worth of support by now. So it’s a very reasonable question as to whether the probability distribution has changed. And multiple interactions abound.

  • Joseph // August 11, 2008 at 1:15 am

    That’s the same Jennifer Marohasy who recently posted We Aren’t Responsible for Rising Atmospheric Carbon Dioxide: A Note from Alan Siddons. ‘Nuff said.

    That’s amazing. How do they do that? I’ve looked at what I think is equivalent data, and the pattern is really clear and undeniable (graph of detrended series here). I can’t believe they wouldn’t see this. I can only suppose there’s some intentional obfuscation there.

    [Response: Ya think?]

  • trevor // August 11, 2008 at 4:14 am

    Lazar and Luke. Why don’t you pop over to David Stockwell’s blog where he has just posted a more detailed discussion on the CSIRO report. I am sure that he will be happy to engage with you.

    http://landshape.org/enm/cherry-picking-in-australia/

  • Hank Roberts // August 11, 2008 at 4:42 am

    Chuckle. Or wait till he can get his thesis published in a refereed journal, whichever makes more sense to you as a way to evaluate scientific claims.

    There’s always E’n’E.

  • Luke // August 11, 2008 at 5:00 am

    Argh – should have checked instead of using memory – my bad – the 24% was for decile one declaration scenario (percentile 10)

    Declared percentile 5 annual rainfall – revoked at percentile 30 rainfall – 8.2% area of the state of Queensland on average drought declared; revoke at percentile 50 rainfall 13.3% on average declared

    for simulated pasture instead of rainfall the percentages were 12.4% 17.8% respectively.

    from: National Drought Forum 2003: Science for Drought: Brisbane Australia pp 141-151

    Day et al.

    Simulating historical droughts: some lessons for drought policy

    1964-2003

  • michel // August 11, 2008 at 7:19 am

    The reported observation that melting of the arctic ice cap accelerated in mid-July, thus putting things back on track to meet or break last year’s low ice extent record? Yes, that’s true.

    See, this is what puzzled me. The article is at

    http://www.guardian.co.uk/environment/2008/aug/10/climatechange.arctic

    and it referenced an organization whose site is at

    http://nsidc.org/arcticseaicenews/

    where I can’t find any reference to dramatic events of the week before the dateline of the piece. The dramatic events seem not to have been reported any place else. The piece is datelined August 10, so I was expecting something to have happened in the first week in August. But not only did it not seem to be on the site, the charts in the above link don’t seem to show 2008 catching up with 2007. I can’t find any of the quotes from the article or anything approximating them, either.

    So what gives? Is it real?

    [Response: Perhaps Maslowski and/or Serreze are including up-to-date data on sea ice thickness (which isn't readily available as far as I know). The news story may be based in part on a presentation by Maslowski in June. Other experts don't agree, Stroeve expects arctic sea ice to last until about 2030.

    But the sea ice extent for this year is on track to be the 2nd-lowest all-time but not to break the all-time low observed last year. However, there's another month of the melt season yet to come, and they may have information that leads them to believe it'll break last year's record -- the ice is a lot thinner this year than last according to all reports I've seen. Extent only covers 2 dimensions, thickness is the missing 3rd dimension, so extent data alone don't tell the whole story.

    In general, it's wise not to take reports in newspapers too seriously; journalists have a habit of pronouncing every scientist's opinion as the latest authoritative truth, and of blowing things out of proportion. They also have a habit of emphasizing the dramatic, at the sacrifice of perspective and rigor. Websites run by scientists (like RealClimate or Cryosphere Today) are a better bet for reliable information than newspaper articles.]

  • Matt // August 11, 2008 at 8:20 am

    Hank: Chuckle. Or wait til he he can get his thesis published in a refereed journal, whichever makes more sense to you to evaluate scientific claims.

    There’s always E’n’E.

    And let me guess, a similarly phrased paragraph about would result in yet another copy/paste soliloquy from you on “trolls.” Note that while I do respect but often disagree with the intellectual capabilities of the first 3, I cannot support you on KennyG.

    But of course, YOU aren’t ever one of those copy/paste trolls. Are you. It’s always the person you don’t agree with that is the troll. Perhaps the label “troll” is just a crutch to help you deal with things that are distasteful to you.

    It reminds me that those that are usually the first to beg for tolerance are usually the least tolerant. And those that beg for giving are usually the most stingy.

    Ironic, is all.

  • Petro // August 11, 2008 at 1:58 pm

    michel asked:
    “So what gives? Is it real?”

    As it was explained to you by dhogaza above, the behaviour of the Arctic has been atypical since mid-July. If you do not believe the scientists at NSIDC or commenters here, you can always turn to the primary data. From the link below:
    http://rapidfire.sci.gsfc.nasa.gov/realtime/2008224/
    you can access satellite photos of the Earth since April 2001. Identify the relevant Arctic pictures and compare them between the years. It is evident even to a layman that the Arctic ice this year is different.

  • Hank Roberts // August 11, 2008 at 2:02 pm

    Matt, read David Brin’s piece.
    Yes, I know it bothers you that people don’t consider bloggers reliable sources of information.

    But there are very few bloggers who can cite sources, read science papers knowledgeably, and have a track record of being able to teach well.

    So I rely on refereed journals because while that’s not sufficient to know someone really knows what they’re talking about, it is at least a first hurdle passed and they are participating in a forum where knowledgeable people will correct their mistakes _in_the_journal_.

    If there’s no publication record, and nobody I consider trustworthy vouches for the blogger, then it’s just another blogger.

    No offense, man, but I don’t consider you a trustworthy source about published science, I don’t know you, I don’t know what if anything you’ve published, we haven’t any friends in common, and all I see is your opinion.

    Point to science journals and I’ll look at what you refer to. Point to bloggers and, yawn, maybe, but life is short and there’s plenty of good stuff to read.

  • dhogaza // August 11, 2008 at 2:24 pm

    Here’s the NSIDC graph that may’ve triggered that story. As you can see the decrease in ice extent accelerated slightly about the first week of the month, while last year at about this time we saw the curve starting to flatten a bit. I was wrong when I said mid-July, it wasn’t that early…

    However now it’s flattened out a bit again.

    A few days ago, some people were speculating that the acceleration in melting might continue and that the two lines (last year and this) might cross in September after all.

    I think what you’re seeing is some people obsessing over short-term (days!) fluctuations in the rate at which the arctic ice cap is melting. It’s like a horse race – who will win, 2007 or 2008? Treat it as fun, nothing more.

    Note that the latest piece on the NSIDC site is dated August 1, before that little uptick in the rate of melt. Obviously they themselves didn’t see it as being worthy of comment, as you’ve noted. And, it’s not, really, unless you’ve got a bet out as to whether or not the 2008 minimum will beat last year’s (and there are some people out there with public bets, so, sure, they’re going to be keenly interested).

  • Dano // August 11, 2008 at 2:29 pm

    Matt:

    your head fake fails to distract away from the fact that denialists cannot discuss “their” “ideas” in refereed journals.

    Best,

    D

  • Hank Roberts // August 11, 2008 at 2:34 pm

    PS, Matt, if this was supposed to have some extra words in it, and was something relevant, try again:
    “a similarly phrased paragraph about would result in yet another copy/paste soliloquy from you on ‘trolls.’ Note that while I do respect but often disagree with the intellectual capabilities of the first 3, I cannot support you on KennyG.”

    I assumed that was failed snark and ignored that figuring you just dropped some words editing, but coming back, was it supposed to mean something serious?

    Try again if so. You’re coming up

  • Hank Roberts // August 11, 2008 at 2:42 pm

    Hm. WP does seem to be dropping edits.
    And Brin’s website isn’t responding.
    Bugs in the intartubes again?

    Anyhow, Matt, look this one up. He’s seriously addressing what’s missing in blogging compared to older areas where people disagree, and talks about why science done in the journals works:

    David Brin’s article ‘Disputation Arenas: Harnessing Conflict and … It was lead article in the American Bar Association’s Journal on Dispute Resolution …
    http://www.davidbrin.com/disputationarticle1.html

  • Hank Roberts // August 11, 2008 at 3:11 pm

    Here, save the trouble of reading it all, this is the core from Brin’s piece, and why blogging isn’t capable of resolving scientific issues (yet)

    ——excerpt—–
    What each of the older accountability arenas has — and today’s Internet lacks — is centripetal focus. A counterbalancing inward pull. Something that acts to draw foes together for fair confrontation, after making their preparations in safe seclusion.

    No, I’m not talking about goody-goody communitarianism and “getting along.” Far from it. Elections, courtrooms, retail stores and scientific conferences all provide fierce testing grounds, where adversaries come together to have it out… and where civilization ultimately profits from their passion and hard work.

    This process may not be entirely nice. But it is the best way we ever found to learn, through fair competition, who may be right and who is wrong.

    Yes, counter to the fashion of postmodernism, I posit the existence and pertinence of “true and false” — better and worse — needing no more justification than the pragmatic value these concepts have long provided. In science you compare theory to nature’s laws. … In a myriad fields, this process slowly results in better theories, notions, laws and products. Again, it is murky and inefficient… and it works.

    My point is that today’s Internet currently lacks good processes for drawing interest groups — many of them bitterly adversarial — out of those passworded castles to arenas where their champions can have it out, where ideas may be tested and useful notions get absorbed into an amorphous-but-growing general wisdom.

    Some claim that such arenas do exist on the Net — in a million chat rooms and Usenet discussion groups — but I find these venues lacking in dozens of ways. Many wonderful and eloquent arguments are raised, only to float away like ghosts, seldom to join any coalescing model. Rabid statements that are decisively refuted simply bounce off the ground, springing back like the undead. Reputations only glancingly correlate with proof or ability. Imagine anything good coming out of science, law, or markets if the old arenas ran that way!

    … I am selfish and practical. I want something more out of all the noise.

    Eventually, I want good ideas to win. …

    —–end excerpt——

    That’s why science is done in refereed journals.

  • Lazar // August 11, 2008 at 4:14 pm

    Atmospheric Warming and the Amplification of Precipitation Extremes

    Richard P. Allan and Brian J. Soden
    Science DOI: 10.1126/science.1160787

    Abstract:

    “Climate models suggest that extreme precipitation events will become more common in an anthropogenically warmed climate. However, observational limitations have hindered a direct evaluation of model projected changes in extreme precipitation. Here, we use satellite observations and model simulations to examine the response of tropical precipitation events to naturally driven changes in surface temperature and atmospheric moisture content. These observations reveal a distinct link between rainfall extremes and temperature, with heavy rain events increasing during warm periods and decreasing during cold periods. Furthermore, the observed amplification of rainfall extremes is found to be larger than predicted by models, implying that projections of future changes in rainfall extremes due to anthropogenic global warming may be underestimated.”

    (h/t abelard)

  • Joseph // August 11, 2008 at 5:13 pm

    Some claim that such arenas do exist on the Net — in a million chat rooms and Usenet discussion groups — but I find these venues lacking in dozens of ways.

    I had the feeling that online article predated blogs. It’s from 2000, so that’s pretty much the case.

  • Hank Roberts // August 11, 2008 at 5:46 pm

    http://www.gebco.net/data_and_products/gebco_world_map/images/gda_world_map_small.jpg

    Good bathymetric (depth) map of the Arctic, helps make clear that it’s a deep bowl with relatively narrow, and shallow, connections to the rest of the world’s oceans.

  • Hank Roberts // August 11, 2008 at 7:17 pm

    Joseph wrote:

    > that article predated blogs …

    Which have even less ability to handle bogus crap than the old Usenet newsgroups (which deprecated crossposting copypasted stuff).

    You understand he’s pointing out how it took centuries to make the other fora capable of sorting out and disposing of the crap, right? And how science does it?

    You know the need to look for subsequent references to material. Here:

    http://davidbrin.blogspot.com/2006/12/todays-centrifugal-net-is-not-arena-or.html
    Brin says, more recently:

    ——excerpt follows——-

    “Some of you have read my extensive essay – written for the American Bar Association – about the underlying common traits of markets, science, courts and democracy — the “accountability arenas” that have empowered free individuals to compete and create without tumbling quickly into repression and outrage…. for the first time, ever. Alas, over the years since, I have found that people have trouble perceiving some of what the paper describes… or why today’s internet just does not yet have what it takes to empower us with a “fifth arena.”

    … needed tools are absolutely missing.

    Oh, our would-be masters want it this way. Those who would return us to a style of feudalism. They would let us wrangle and spume and EXPRESS ourselves, endlessly online….
    —-end excerpt——

  • Joseph // August 11, 2008 at 11:07 pm

    You understand he’s pointing out how it took centuries to make the other fora capable of sorting out and disposing of the crap, right? And how science does it?

    The other fora have limitations too. There’s plenty of poor research that passes peer-review. I can provide a number of examples.

    Of course, peer-review is a good thing. It lends itself to some challenges, though, like accusations of establishment bias.

    Blogs don’t have anything like peer-review, but there are some areas where they are clearly an innovation, e.g. in how quickly feedback and corrections can be produced. A blog with an open comment policy can have rapid response reader-review, which is also open, as opposed to peer-review.

    In my experience, a lot of times you can tell which blogs are crank blogs by the way they arbitrarily delete comments, by the way they deal with corrections, and so on. Granted, there’s no objective way at the moment to tell a good blog from a bad blog.

    The scientific literature is for scientists. Blogs, on the other hand, can be for scientists but also lay people. I don’t have any hard evidence of this, but recently there was one of those polls in a blog I frequent which asked the following question (paraphrasing from memory).

    “Do you feel that the scientific community has done a good job of communicating the safety of vaccines?”

    The answer that won overwhelmingly was this:

    “No, but blogs like Orac’s help.”

    (Orac’s blog is a different blog to the one that had the poll).

    So I think that at least among blog readers, blogs have a lot of swaying power, more so than standard scientific authority a lot of times. That’s just how things go. Innovations are invented, and they can revolutionize the way we do things.

  • Hank Roberts // August 12, 2008 at 12:21 am

    re Lazar’s posting above, I’ve mentioned this one before; it’s a good place to start to find other paleo links to rainfall changes the last time there was a huge greenhouse gas excursion in a short period of time.

    It’s one of them feedbacks — huge rainfalls, extreme erosion, lots of fresh carbonate rock exposed, more rainfall, more extreme weathering, biogeochemical cycling.

    The lesson from the past is: don’t go there.

    http://ic.ucsc.edu/~jzachos/eart120/readings/Schmitz_Puljate_07.pdf

  • Matt // August 12, 2008 at 4:34 am

    Hank: assumed that was failed snark and ignored that figuring you just dropped some words editing, but coming back, was it supposed to mean something serious?

    Yeah, it ate a bit about if someone made the comment you made but instead of your target submitted Gavin, Mann or KennyG, you would have freaked out and called them a troll. Alas, the moment is lost. Not sure if you like KennyG or not. I don’t, so I’m pretty sure you do. :)

  • Hank Roberts // August 12, 2008 at 7:27 pm

    I don’t even know what ‘KennyG’ is!
    So I probably don’t want to know …

    One for Tamino:

    “… powerful statistical tools that allow scientists to run approximations of a climate model many times extremely quickly, providing … a large set of results with which to calculate probabilities…. known in the trade as ‘emulators’ …”
    at p. 18 of the PDF file:

    http://www.nerc.ac.uk/publications/planetearth/2008/summer/sum08-rapid.pdf

    http://www.nerc.ac.uk/publications/planetearth/2008/summer/
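    For anyone wondering what an “emulator” looks like in practice: the idea is to run the expensive model only a handful of times, fit a cheap statistical surrogate to those runs, then evaluate the surrogate as many times as you like. A toy sketch follows; the one-parameter “model” is invented purely for illustration, and a plain polynomial stands in for the Gaussian-process emulators typically used in the trade:

```python
import numpy as np

# Hypothetical expensive "climate model": one input (a forcing parameter),
# one output (a warming response). Stand-in for a run that takes weeks.
def expensive_model(x):
    return 1.2 * x + 0.3 * np.sin(x)

# Run the full model only a few times...
train_x = np.linspace(0.0, 4.0, 9)
train_y = expensive_model(train_x)

# ...then fit a cheap surrogate (here a quintic polynomial; real emulators
# are usually Gaussian processes, which also supply uncertainty estimates).
coeffs = np.polyfit(train_x, train_y, deg=5)
emulator = np.poly1d(coeffs)

# The surrogate can now be evaluated a hundred thousand times in an instant,
# which is what makes probability calculations over many runs feasible.
test_x = np.linspace(0.0, 4.0, 100_000)
approx = emulator(test_x)
exact = expensive_model(test_x)
print(np.max(np.abs(approx - exact)))  # approximation error stays small
```

    The trade-off is the usual one: the surrogate is only trustworthy inside the region covered by the training runs.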

  • David B. Benson // August 12, 2008 at 9:49 pm

    Tree-ring based reconstructions of northern Patagonia precipitation since AD 1600

    http://hol.sagepub.com/cgi/content/abstract/8/6/659

    Seems to be behind a paywall for me, but the abstract is interesting.

  • Hank Roberts // August 13, 2008 at 12:48 am

    http://features.csmonitor.com/environment/2008/08/12/are-they-really-going-to-gut-the-endangered-species-act/#comment-2456
    ——-excerpt——-

    … the proposed rules would prohibit federal agencies from assessing the greenhouse gas emissions from construction projects.


    After the AP broke the story, the Department of the Interior released a statement describing the proposed changes as “narrow.”

  • Duae Quartunciae // August 13, 2008 at 3:18 am

    Does this blog have a feed?

    [Response: I don't know! Anyone?]

  • Hank Roberts // August 13, 2008 at 4:43 am

    > feed

    Usual caveat, I know nothing, Nothing about this.

    I looked it up:
    http://codex.wordpress.org/WordPress_Feeds

  • Hank Roberts // August 13, 2008 at 4:47 am

    Oh, and here’s a site whose author figured it out (a tech book writer); here’s her explanation:
    http://www.mariasguides.com/2007/11/16/site-topics-available-as-rss-feeds-and-e-mail-subscriptions/
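    In case it helps anyone else: per the Codex page above, a WordPress blog serves its feed as RSS 2.0 XML at the site address plus /feed/. Reading one needs nothing beyond the standard library. A minimal sketch against a made-up sample document (a live feed would just swap the string for a urllib fetch):

```python
import xml.etree.ElementTree as ET

# Made-up sample of the RSS 2.0 format WordPress serves at <site>/feed/
sample_rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Open Mind</title>
    <item><title>Open Thread #5</title><link>http://example.org/open-thread-5</link></item>
    <item><title>Another Post</title><link>http://example.org/another-post</link></item>
  </channel>
</rss>"""

def feed_titles(rss_text):
    """Return the post titles listed in an RSS 2.0 feed document."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

print(feed_titles(sample_rss))  # ['Open Thread #5', 'Another Post']
```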

  • Duae Quartunciae // August 13, 2008 at 5:25 am

    Thanks… as my father says: When all else fails, read the manual.

    I have found your feed, and added it to my reader. The link I used for your feed is tamino’s RSS feed.

  • cce // August 13, 2008 at 7:36 am

    Not to cause controversy (actually, yes), the Auditors are working Wahl and Ammann (mostly Ammann) pretty hard lately. Many accusations and numbers thrown about, e.g. calibration and verification. I understand about 2% of this stuff, and I question the objectivity of the source. A post, perhaps?

  • David B. Benson // August 14, 2008 at 12:59 am

    “These data demonstrate that the MWP and LIA are global climate events, not only restricted to the Northern Hemisphere.”

    from

    http://www.cosis.net/abstracts/EGU2007/01568/EGU2007-J-01568.pdf

    a two page abstract.

  • Barton Paul Levenson // August 14, 2008 at 1:13 pm

    I can’t resist writing this in here, even though I just wrote it in at RealClimate. Call me a spammer.

    I found another mistake by Miskolczi. His equation (4) is:

    AA = SU × A = SU × (1 − TA) = ED

    where

    AA = Amount of flux Absorbed by the Atmosphere
    SU = Upward blackbody longwave flux = sigma Ts^4
    A = “flux absorptance”
    TA = atmospheric flux transmittance
    ED = longwave flux downward

    These are simple identity definitions. I do wonder why Miskolczi used the upward blackbody longwave for the amount emitted by the ground when he should have used the upward graybody longwave — he’s allegedly doing a gray model, after all. Apparently he forgot the emissivity term, which is about 0.95 for longwave for the Earth. One more hint that he doesn’t really understand the distinction between emission and emissivity.

    Note that he seems to be saying the downward flux from the atmosphere (ED) must be the same as the total amount of longwave absorbed by the atmosphere (AA).

    The total inputs to Miskolczi’s atmosphere are AA, K, P and F, which respectively stand for the longwave input from the ground, the nonradiative input (latent and sensible heat) from the ground, the geothermal input from the ground, and the solar input. P is negligible and I don’t know why he even puts it in here unless he’s just trying to be complete. He’s saying, therefore, if you stay with conservation of energy, that

    AA + K + F = EU + ED

    Now, from Kiehl and Trenberth’s 1997 atmospheric energy balance, the values of AA, K, and F would be about 350, 102, and 67 watts per square meter, respectively, for a total of 519 watts per square meter. EU and ED would be 195 and 324, total 519, so the equation balances.

    But for Miskolczi’s equation (4) to be true, since AA = ED, we have

    K + F = EU

    That is, the sum of the nonradiative fluxes and the absorbed sunlight should equal the atmospheric longwave emitted upward. For K&T97, we have 102 + 67 = 195, or 169 = 195, which is an equation that will get you a big red X from the teacher.

    There is no reason K + F should equal EU, therefore Miskolczi’s equation (4) is wrong. Q.E.D.
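    The flux bookkeeping above is easy to verify mechanically. A minimal Python sketch (variable names follow the comment; the numbers are the Kiehl & Trenberth 1997 values quoted above, in W/m²):

```python
# Kiehl & Trenberth (1997) global-mean fluxes quoted above, W/m^2
AA = 350.0  # longwave flux absorbed by the atmosphere
K  = 102.0  # nonradiative (latent + sensible) flux from the ground
F  = 67.0   # solar flux absorbed by the atmosphere
EU = 195.0  # atmospheric longwave emitted upward
ED = 324.0  # atmospheric longwave emitted downward

# Conservation of energy (geothermal P ~ 0 neglected): inputs = outputs
assert AA + K + F == EU + ED  # 519 = 519, balances

# Miskolczi's Eq. (4) claims AA = ED; substituting that into the balance
# would force K + F = EU, which the observed fluxes contradict:
print(K + F, EU)  # 169.0 vs 195.0 -- a 26 W/m^2 discrepancy
```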

  • Petro // August 15, 2008 at 10:58 pm

    Since there are several denialists in this site among the commenters, I would like to ask you a couple of questions:

    What evidence do you have that Anthony Watts is telling the truth?

    Why do you consider him a better source of knowledge on climate science than the climate scientists?

    These questions have puzzled me for a long time. Please help me to understand!

  • Barton Paul Levenson // August 16, 2008 at 1:37 pm

    Correction — in the next to last paragraph, “upward” should read “downward.”

    *sigh*

  • Hank Roberts // August 16, 2008 at 4:02 pm

    Talk of the Nation, August 15, 2008 · Scientists studying many different parts of the planet’s ecosystems are warning that Earth may be on the verge of a sixth major mass extinction event.

    http://www.npr.org/templates/story/story.php?storyId=93636633

    “I have had scientists who have pulled me over to the side and said in private much the, what you’re saying, ‘The situation is much worse than we are willing to talk about in public … we don’t want to scare people’”
    – Ira Flatow at 05:55

    “I remember the way the country organized after Pearl Harbor…Americans changed their entire economy in a year…. You can do it if you have the right incentive, and fear is, ought to be a great incentive if we care anything about children and grandchildren … there’s a lot of reason to be scared for them…. Have you heard anything about ecosystem services?”
    – Paul Ehrlich

  • Hank Roberts // August 16, 2008 at 4:18 pm

    Here’s the link to the PNAS article talked about in the NPR Science Friday audio file:

    http://www.pnas.org/content/early/2008/08/08/0801911105.abstract

    Here’s Wired online:
    NOTE, GOOD: links to the above and many other related science papers
    (pointing out how poorly this story is being covered)
    http://blog.wired.com/wiredscience/2008/08/the-sixth-extin.html

  • Hank Roberts // August 16, 2008 at 4:50 pm

    Here’s a collection of presentations from the National Academy on the current extinction:
    http://www.nasonline.org/site/PageNavigator/SACKLER_biodiversity_program

    Amphibians, where climate change has correlated with a fungus problem:

    http://progressive.atl.playstream.com/nakfi/progressive/Sackler/sackler_12_07_07/david_wake/david_wake.html

  • TCO // August 16, 2008 at 4:51 pm

    Paul Ehrlich? He’s been wrong on the disaster predictions before.

  • Dano // August 16, 2008 at 4:52 pm

    Hank,

    I’m an urban ecology guy. Green infrastructure, ecosystem services’ CBAs, built environment greening, nearby nature. I speak nationally several times a year on the topic of how to do green infra.

    What you quoted from Ira Flatow and Paul Ehrlich is absolutely correct. PNAS has a recent special feature on ecosystem services, and here is Paul’s latest, with a hopeful note at the end, once the reader has digested this passage:

    Yet despite a ballooning number of publications about biodiversity and its plight, there has been dispiritingly little progress in stanching the losses—so little that some commentators have characterized applied ecology as ‘‘an evermore sophisticated refinement of the obituary of nature’’ (18). As conservation-oriented scientists, we are responsible for biodiversity. Its loss is our failure.

    [ pp 11579-80. emphasis added, footnotes omitted]

    “Having an ecological education means living in a world of wounds.” — Aldo Leopold

    ——–

    WRT feeds, if one is using FireFox, one can see the RSS feed logo in the browser window. Clicking on the logo allows a subscription. Yet another reason to use Moe-ziller.

    Best,

    D

  • Hank Roberts // August 16, 2008 at 5:29 pm

    TCO, we are currently IN the disaster Ehrlich was worrying about 40 years ago. And the trend is awful. Go talk to the nearest ecologist about it.

    The same bullshitters had a stable of lying crap artists working to fool people then as now.

    Look at the numbers.

  • Hank Roberts // August 16, 2008 at 5:31 pm

    Here, TCO. I realize how incredibly hard it is for people to understand this and how hard it is to believe that their personal experience isn’t telling them the state of the whole world.
    http://thingsbreak.wordpress.com/2008/08/12/a-case-of-the-mondays/

  • Hank Roberts // August 16, 2008 at 6:09 pm

    http://www.sfgate.com/cgi-bin/blogs/green/detail?blogid=49&entry_id=29113
    ____excerpt____

    … to comment on the Bush administration’s attempt to gut the Endangered Species Act …, turns out … the Fish and Wildlife Service is no longer accepting comments by email (H/T Grist). (It seems to have something to do with the 600,000 comments they got about protecting polar bears—the very thing they’re trying not to do.)

    That’s right, they want you to waste some paper trying to speak out for the environment…. And by name: They’ll be posting all the personal information you provide on their web page, which they apparently know how to use even though they choose not to.
    ————————–

    http://switchboard.nrdc.org/blogs/awetzler/bush_administration_decides_to.html

  • Petro // August 16, 2008 at 6:10 pm

    TCO tells:

    “Paul Ehrlich? He’s been wrong on the disaster predictions before.”

    Give us justification for your opinion.

  • TCO // August 16, 2008 at 9:14 pm

    [edit] Predictions and Quotes
    “In ten years all important animal life in the sea will be extinct. Large areas of coastline will have to be evacuated because of the stench of dead fish.” Paul Ehrlich, Earth Day 1970

    “Population will inevitably and completely outstrip whatever small increases in food supplies we make, … The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years.” Paul Ehrlich in an interview with Peter Collier in the April 1970 issue of the magazine Mademoiselle.

    “By…[1975] some experts feel that food shortages will have escalated the present level of world hunger and starvation into famines of unbelievable proportions. Other experts, more optimistic, think the ultimate food-population collision will not occur until the decade of the 1980s.” Paul Ehrlich in special Earth Day (1970) issue of the magazine Ramparts.

    “The battle to feed humanity is over. In the 1970s the world will undergo famines . . . hundreds of millions of people (including Americans) are going to starve to death.” (Population Bomb 1968)

    “Smog disasters” in 1973 might kill 200,000 people in New York and Los Angeles. (1969)

    “I would take even money that England will not exist in the year 2000.” (1969)

    “Before 1985, mankind will enter a genuine age of scarcity . . . in which the accessible supplies of many key minerals will be facing depletion.” (1976)

    “By 1985 enough millions will have died to reduce the earth’s population to some acceptable level, like 1.5 billion people.” (1969)

    “By 1980 the United States would see its life expectancy drop to 42 because of pesticides, and by 1999 its population would drop to 22.6 million.” (1969)

    “Actually, the problem in the world is that there is much too many rich people…” – Quoted by the Associated Press, April 6, 1990

    “Giving society cheap, abundant energy would be the equivalent of giving an idiot child a machine gun.” – Quoted by R. Emmett Tyrrell in The American Spectator, September 6, 1992

    “We’ve already had too much economic growth in the United States. Economic growth in rich countries like ours is the disease, not the cure.” – Quoted by Dixy Lee Ray in her book Trashing the Planet (1990)

    ————————-

    http://en.wikipedia.org/wiki/Paul_R._Ehrlich

  • Matt // August 16, 2008 at 9:20 pm

    Petro: Give us justification for your opinion.

    http://en.wikipedia.org/wiki/Ehrlich-Simon_bet

  • Matt // August 16, 2008 at 9:27 pm

    Hank: TCO, we are currently IN the disaster Ehrlich was worrying about 40 years ago. And the trend is awful. Go talk to the nearest ecologist about it.

    Ehrlich predicted half our species would be lost by 2000, and that all would be lost between 2010 and 2025.

    I don’t think you can claim things are playing out as he has predicted.

    The man gets an F- for prediction accuracy.

  • dhogaza // August 16, 2008 at 9:37 pm

    Ehrlich predicted half our species would be lost by 2000, and that all would be lost between 2010 and 2025.

    I’d like a direct citation to where he said all life on earth would be extinct by 2025.

  • dhogaza // August 16, 2008 at 9:38 pm

    Hank’s comment doesn’t declare that Ehrlich was right regarding the timeframe, but he was certainly right about the shape of the curve.

    We are in the midst of a major extinction event, and the pace is accelerating.

  • Hank Roberts // August 16, 2008 at 11:10 pm

    Matt:

    http://scienceblogs.com/intersection/jackson%282008%29.jpg

    Look at it.

    Do you feel anything, knowing these numbers?

    Do you feel anything, knowing you’ve been wrong?

  • TCO // August 17, 2008 at 12:06 am

    Yeah, like I said. He’s been wrong on the predictions before.

    He’s a nutter. A touchstone. An alarmist version of a skeptic kook. Or like what a real socialist is to a liberal fellow traveler. Like Che of t-shirt fame. Yum, yum….

  • dhogaza // August 17, 2008 at 12:26 am

    I wouldn’t call him a nutter … and yes, he’s been hyperbolic but you do also realize that quote-mining a few hundred words from several books doesn’t necessarily paint a portrait, I should think …

  • dhogaza // August 17, 2008 at 12:36 am

    But let’s see, what lesson is there for climate science skeptics, here?

    Paul Ehrlich is an extremely good scientist. His dabbling in predicting future food supplies and the like have been wrong, no doubt about it. You’d *expect* skeptics to take to heart the lesson that an expert in one field may well fall short when dabbling in another. For instance, McIntyre in climate science. Or, say Lomborg about anything other than political science.

    His predictions regarding extinction rates aren’t really wrong in the same sense. He should’ve not staked himself to a timeframe. Here we sit in 2008, with increasing evidence that half of the world’s species may be committed to eventual extinction today. It may take the rest of the century for the story to play out, but the gist of the story is no different than the story told by Ehrlich: we’re dooming an inordinate percentage of our biological heritage to extinction.

    Of course, while Ehrlich was wrong about the scale of famine in the world in the 1970s and 1980s, those who were promising that technology would end world hunger in a similar timeframe were just as wrong. But we don’t hear about that so much from the right, do we?

  • Hank Roberts // August 17, 2008 at 12:47 am

    And yet, the cold numbers say this:

    http://scienceblogs.com/intersection/jackson%282008%29.jpg

  • dhogaza // August 17, 2008 at 3:02 am

    Oh, but Ehrlich did talk about extinction. Hank, what you’re linking supports the notion of “commitment to extinction”.

    In the long term view there’s no distinction. But in efforts to debunk Ehrlich, it’s everything, like a few centuries vs. his two or three decades makes a difference.

    Wrong, from all we know, but hmmm … trivially true.

  • Dano // August 17, 2008 at 5:39 am

    The thing wingnuts don’t want to consider is this: if Ehrlich was off by, say, 40-50 years, what’s that as a % of wrongness?

    IOW: the denialists are grasping at straws.

    Best,

    D (whose grad advisor was postdoc in Ehrlich’s lab and who has been lucky enough to have Paul explain this stuff in person, face-to-face).

  • TCO // August 17, 2008 at 2:05 pm

    Dhog/Hank: It’s a balance. If you make less definite predictions, or more tentative ones, then you lose all the excitement level. If Ehrlich really believed the extreme predictions that were wrong, he had a wrong world view and should learn from his mistake. If he didn’t believe them….well that’s just demagoguery. In any case, I see a conjunction of science with PR…where the science suffers. Kinda reminds me of Climate Audit.

  • dhogaza // August 17, 2008 at 3:52 pm

    Well, Ehrlich openly admits that many (not all) of his predictions didn’t come to pass.

    Unlike the guy who runs Climate Audit, he is able to admit when he’s been wrong …

  • Dano // August 17, 2008 at 4:02 pm

    If Ehrlich really believed the extreme predictions that were wrong, he had a wrong world view and should learn from his mistake. If he didn’t believe them….well that’s just demagoguery. In any case, I see a conjunction of science with PR…where the science suffers. Kinda reminds me of Climate Audit.

    In 2-4 generations (IMHO), folk will look back at statements like these and shake their heads in wonder, asking why so few were listening and why that society decided that timelines being a few decades off made the information not worth heeding.

    I’d call it pathetic, but it is more accurately the human condition – calling the human condition pathetic doesn’t do anything useful.

    Best,

    D

  • Hank Roberts // August 17, 2008 at 4:18 pm

    TCO, you’re hockey sticking again.

    Look at the extinction numbers now, after 20 years of study. Don’t blow off what’s known now because the 20 year old early work, when the concern was first raised, was imperfect.

    Look at the world and how much is already lost.
    Did you read those numbers linked above? Can you imagine how an ecology can work with such losses?

    “And then . . . they came for me . . . And by that time there was no one left to speak up.”

  • TCO // August 17, 2008 at 4:27 pm

    You’re living a meme, Hank. The guy’s attention-getting, publicity-grabbing predictions were wrong. If he had predicted the truth and taken away all the England-will-be-gone-by-2000 silliness, he would not have had any notoriety.

  • Lee // August 17, 2008 at 6:36 pm

    In practical terms, Ehrlich does not matter. What Ehrlich said back then DOES NOT MATTER.

    What matters is what has actually happened over the last 10, 50, 100 years – which is INDEPENDENT of what Ehrlich predicted.

    And what is happening, is that we are seeing ecosystem and ecosystem service collapses on massive scales – that list that just got posted is chilling. Massive worldwide fisheries collapses, oceanic dead zones, tropical forest removal and collapse, and on and on. We are seeing extinction, commitment to extinction, simplification of ecosystems, alteration even of species – N.A. cod, for example, have under massive fishing pressure evolved into a smaller, earlier-reproducing, and likely shorter-lived form.

    And that just scratches the surface.

    On top of all this already observed shit, we are INCREASING the stresses we put on ecosystems. We are co-opting even more resources, even more surface area, even more of the worlds freshwater supplies, to human uses. We have created a society and economy that is dependent on the behaviors that are causing those stresses.

    CO2-induced warming and ocean acidification is just one more set of pressures on already badly damaged and stressed ecosystems and ecosystem services – but they would likely be huge all on their own. We aren’t adding them on their own – we are adding them on top of all this other damage we’ve done to the services and systems that support our way of life on this planet.

    And yet, in the face of this documented pattern of damage or collapse of ecosystem service after ecosystem service, of increasing anthropogenic pressure on the natural structures that support our cultures and societies, we somehow aren’t engaging the hard conversations about how we diminish our impact on those damaged systems, how we mitigate and ameliorate that damage in ways that can continue to give us acceptable and good standards of living on this planet.

    Instead we are engaged in this pitiful argument about whether we are actually having an impact at all, while the increasing evidence for an increasing rate of increasingly heavy damage piles up around us.

    What Ehrlich said 30 years ago doesn’t alter any of this, not one iota.

  • cce // August 17, 2008 at 8:14 pm

    Although I think it’s obvious to the point of being “fact” that we’re in the midst of an extinction event, Ehrlich’s predictions were clearly hyperbole. Unfortunately, “The Boy Who Cried Wolf” ends with the flock being eaten up.

  • Deech56 // August 17, 2008 at 8:32 pm

    To follow up on cce’s post of August 13, 2008/ 7:36 am : http://bishophill.squarespace.com/blog/2008/8/11/caspar-and-the-jesus-paper.html

    If there’s anything posted elsewhere, a pointer would be helpful. Thanks.

  • Hank Roberts // August 17, 2008 at 10:45 pm

    http://www.cgd.ucar.edu/ccr/ammann/millennium/

    Paleoclimate Reconstructions (UNDER REVISION)

    An evaluation using real world and “pseudo” proxies based on coupled GCM output

    Collaboration between:

    NCAR CGD Paleo //
    NCAR IMAGe //
    NCAR Assessment Initiative

    Goals
    (1) Provide transparent multi-platform code of past climate reconstruction techniques to the community.
    (2) Use state-of-the-art coupled Atmosphere-Ocean General Circulation Model output to test reconstruction techniques used in context with proxy data.

    Two of the four sections are hyperlinked to date:

    http://www.cgd.ucar.edu/ccr/ammann/millennium/AW_supplement.html

    http://www.cgd.ucar.edu/ccr/ammann/millennium/SignificanceThresholdAnalysis/

  • Hank Roberts // August 17, 2008 at 10:48 pm

    It’s not hyperbole, it’s range of error. Forty years ago almost nobody had _heard_ of ecology outside of biology departments. Pick any other field and look at what they expected to happen over the same time span.

    Got your flying car yet?

    The optimistic mistakes are too bad.
    The pessimistic mistakes are still pretty bad.

    Look at that table again, look at the numbers.

  • Matt // August 17, 2008 at 11:09 pm

    Hank: Do you feel anything, knowing these numbers?

    Do you feel anything, knowing you’ve been wrong?

    Alas, the “fake but accurate” argument again.

    Hank, if you want me to pat Ehrlich on the back for guessing the sign of the first derivative of species growth, then I’ll give him that much: He got the sign right.

    But getting the sign right and the magnitude very wrong isn’t enough. We rely on scientists and engineers to get both the sign right and the magnitude close. Ehrlich was WAAAAAY off on the magnitude (1000X). Do you acknowledge that?

    Can you show me the text in which Ehrlich was very clear that he meant “committed to extinction” versus “extinct”? Those are very easy concepts to grasp, and I don’t see him being clear on the difference in his scary writings in the 60’s and 70’s.

    I’m growing oh so tired of scientists failing to stand by previous predictions with after-the-fact corrections on what they meant. Remember the “business as usual” debate? Same thing.

  • Hank Roberts // August 17, 2008 at 11:17 pm

    PS — you all realize this is not Paul Ehrlich’s paper?
    Don’t confuse the NPR radio interview linked earlier with this work.

    You should at least look at it and look it up:

    http://scienceblogs.com/intersection/jackson(2008).jpg

  • Matt // August 17, 2008 at 11:27 pm

    Lee: And what is happening, is that we are seeing ecosystem and ecosystem service collapses on massive scales – that list that just got posted is chilling.

    Yes, the list is very important and saddening. But we must condemn those who try to help their cause by overstating the truth. I’m sure folks who made the case for the Iraq war also believed they were helping. But we cannot have “experts” bully us and circumvent checks and balances by stretching truths–even if those experts believe they are taking us to a “better place.”

    Hank posted a comment from Ehrlich above, which was:


    “You can do it if you have the right incentive, and fear is, ought to be a great incentive if we care anything about children and grandchildren … there’s a lot of reason to be scared for them…. Have you heard anything about ecosystem services?”

    Here we get a peek into Ehrlich’s mind, and combined with his track record of overstating extinction rates, we might be able to guess that the man believes lying is OK if it helps mankind.

    This is why people are so distrustful of the current predictions from scientists about warming.

    And when I read about the behind-the-scenes tactics here in preparing the last IPCC report, I get even more distrustful. FWIW, this stuff will play very, very poorly in Peoria. If this tale were picked up by 20/20 and turned into a 30-minute story, it’d be devastating to the cause.

    http://bishophill.squarespace.com/blog/2008/8/11/caspar-and-the-jesus-paper.html

  • dhogaza // August 18, 2008 at 12:28 am

    Can you show me the text in which Ehrlich was very clear that he meant “committed to extinction” versus “extinct”?

    Actually, I said that, and I didn’t say that Ehrlich said it. Please re-read what I said.

    And, don’t take Bishop Hill’s blog as gospel. You’re going to be sadly disappointed, eventually, if you do.

  • Hank Roberts // August 18, 2008 at 1:18 am

    > committed to extinction

    http://books.google.com/books?id=yMAP4DAL9A4C&pg=PA328&dq=ehrlich+extinction+%2B%22committed+to+extinction%22&lr=&sig=ACfU3U2hcAmkXmCqmNQmytvEjnsBitczBg

  • MrPete // August 18, 2008 at 1:22 am

    Interesting Ehrlich discussion. Dano, thanks for the link to his new paper; I’ve passed it on to my wife, who also heard much of this in person from Ehrlich back in the 70’s.

    30 years of osmosis tells me to greatly respect the problems caused by our massive pressure on habitat and ecosystems, and also to remain hopeful (if/when we wake up to the stupidity of many of our actions) by respecting nature’s unbelievable resilience.

  • Hank Roberts // August 18, 2008 at 1:28 am

    By the way, the same deception is operating throughout the denial process. No mechanism. No proof of extrapolation from present knowledge. If CO2 has almost doubled, why hasn’t temperature almost doubled? If what they said 20, 30, 40 years ago wasn’t exactly right, how can we think we know anything more today?

    Has anyone found signs of intelligent life in the universe yet?

  • MrPete // August 18, 2008 at 1:44 am

    Qualitatively different situation, Hank.

    In terms of population/biomass loss, we have pretty good modern day measurement numbers and there’s not much uncertainty about the sign. We’ve seen species go extinct under human-caused pressure. And we know that certain actions reduce the pressure.

    For climate, the assumptions are orders of magnitude larger and broader. We’re trusting the GCM’s more than the current measurements. We’re making big guesses about major influences. And we assume we can take action to fix the problem (and not cause even more harm in the process.)

  • MrPete // August 18, 2008 at 1:48 am

    I just read a bit of the back-discussion about the data we collected last year. Interesting to see the clamor for “the rest of the graphs.”

    Expectations appear to be confused. Here’s some light on the subject. For a familiar context, I’ll organize this according to traditional data releases/archives, such as the older data that we extended. (e.g. Google ITRDB CO524)

    There are five potential sets of data, to put it most generally. Without getting into the validity of each category:

    1) Easily crossdated samples with the “desired signal” (not my definition; others call it that)
    2) Easily crossdated samples without “signal”
    3) Manually crossdated samples with “signal”
    4) Manually crossdated samples, without “signal”
    5) Provenance details for all the above

    #5 is generated at time of collection
    #1,#2 take a short amount of time to generate
    #3, #4 take much longer, often a few years. Some scientists set the samples aside forever, others keep picking at it until most/all are dated.

    For the data collected in the 1980’s:
    #1 is available
    #2,3,4,5 were never released and cannot be found.

    For our data:
    #5 was released immediately
    #1, #2 were released immediately
    #3, #4 do not yet exist

    Bottom line: We’ve already released more data than many others release, even decades after the fieldwork is complete. What some here are clamoring for goes way beyond what many scientists ever provide. In this case it is not provided simply because it does not yet exist.

    TCO’s preference that data be withheld until everything is complete would ordinarily be logical as far as I’m concerned. In this case it is incompatible with our goal of transparency. If transparency makes some suspicious, so be it.

  • MrPete // August 18, 2008 at 2:03 am

    Ray L suggests the data we’ve released can’t be critiqued because it is not “published” and means bupkis because it’s not in a peer journal.

    Interesting. Quite a few scientists have critiqued various aspects already. And others have gladly received the data already available. Sure, it would be nice if it gets into a journal someday. Not my big dream. The data is already available; I expect it will also make its way into ITRDB sooner rather than later. Not sure where the extensive provenance belongs. I haven’t seen an archive for 2-D (available now) let alone 3-D (one of these days) dendro images. Personally, I’m happy when good data is made available. The data can speak for itself.

    (I have a comment in response to the “where is the data” questions; it won’t yet post. Patience please.)

  • dhogaza // August 18, 2008 at 3:26 am

    For climate, the assumptions are orders of magnitude larger and broader.

    Strange. For the basic GHG hypothesis, we have lab measurements.

    Yet, for the human-forced extinction stuff, we have no lab measurements, yet, the GHG stuff is less rigorous.

    Strange.

    What are these “assumptions” you are talking about?

  • dhogaza // August 18, 2008 at 3:32 am

    And others have gladly received the data already available.

    So what has happened to the Great Left Wing Conspiracy Against Truth that is pretty much the entire raison d’etre for CA? Gosh, could it be that scientists are interested in science, after all?

    I almost hope that, after a decade+ of trying to debunk the hockey stick, you’ll succeed (not that I think you will). It’s irrelevant. “Oh, we defeated an early paper, while science blitzkriegs onwards”.

    Really, when it boils down to it – you folks have *nothing*. The most you can prove is that climate science is correct, even if one early paper is subject to review.

    It’s a bit like saying that Galileo was wrong for not accurately modeling the different rate of fall for two cannonballs of differing weight.

  • Rattus Norvegicus // August 18, 2008 at 3:35 am

    MrPete,

    I think that you put too much faith in the “incredible resilience of nature”. Inevitably the loss of species leads to simplification of ecosystems and increased instability. This jeopardizes ecosystem services and makes maintaining our civilization more difficult.

    I remember learning a lot of this stuff in the early to mid ’70’s when my dad was taking graduate classes in ecology by going on most of the class field trips with him. It was fun and educational, I just didn’t realize at the time that it was cutting edge science.

  • Hank Roberts // August 18, 2008 at 4:27 am

    Clue:
    http://www.sciencemag.org/cgi/content/abstract/319/5860/192

    Look again at that list. What’s missing? What’s broken because of what’s missing?

  • matt // August 18, 2008 at 4:34 am

    Hank: If what they said 20, 30, 40 years ago wasn’t exactly right, how can we think we know anything more today?

    “They” don’t need to be exactly right. But if someone’s prediction is 3 orders of magnitude off from the actual, don’t you think it’s fair to scrutinize the next “sky is falling” pronouncement?

    You seem to think there’s very little consequence to being very, very, very wrong.

    If scientists want to be taken seriously, there must be an element of accountability. Accountability means there’s pain if someone is wrong–even if they meant no harm and tried their best. That is how the real world works. If scientists don’t want to deal with accountability, then kick the problem over to engineers and bean counters. They deal with it all the time.

    But we can’t have people with zero accountability scaring the world.

    [Response: It's really not correct to compare the overstatements of an individual, or even a small group, to the clear and overwhelming consensus of the climate science community. As for accountability, there's a tremendous amount of review and a very *conservative* summary in the assessment reports of the IPCC. Comparing one scientist's overstatements to the global warming consensus is mistaken.

    And it's downright foolish to focus on the negative consequences of the extremely unlikely event that they're wrong while ignoring the vastly greater negative consequences of the extremely likely event that they're right.]

  • MrPete // August 18, 2008 at 4:38 am

    R.N… except for the “too much faith” part, I agree 100% with what you say. My “resilience” statement relates to how well nature comes back from a variety of disasters, whether near-extinctions, horrible fires, etc etc. And how well it fights off many of our inane attempts to put a leash on natural processes. Living along the coasts and watching what happens when people try to control beach shifts, or keep rebuilding homes on the top of the cliffs…(or fighting bentonite clay in Colorado) you just gotta laugh or else you’ll cry.

    You’re 100% correct: once life is gone, it’s gone. And reduced biodiversity hurts more than we can imagine. We’ve got to learn how to be better caretakers of our home. (Anyone here enjoy Pollan, BTW?)

  • tamino // August 18, 2008 at 4:44 am

    If you submit a comment and it seems to disappear, that’s probably because it’s been sent to the spam queue. It still gets reviewed for approval, so there’s no need to re-submit, especially multiple times.

  • MrPete // August 18, 2008 at 5:00 am

    dhogaza, I’ll just agree to disagree with you, since you seem to want to argue more than look into the real issues.

    Let’s assume you are correct, that the HS is irrelevant. If so, then it doesn’t matter that W&A’s confirmation has been debunked. It doesn’t matter about sbBCP.

    Thus, we can go through the current crop of “team” papers, removing any that use MBH-related stats methods and any that use sbBCP. And it will make no difference because it is all immaterial?

    I submit it is more impactful than you suspect. And from the conversations I’ve had, serious dendros are aware and are quietly working on some radically new and hopefully better methodologies and data processes.

    Let’s revisit this question in a few years. I have no clue about the policy outcome, but have confidence dendro best practices will be significantly different, and SteveM nicely vindicated, in not too many years.

    What’s being proven is not that “Climate Science” is correct, nor that it is incorrect! What’s being proven is that “Climate Science” knows a lot less than is claimed… that we’re overconfident of our understanding, and overconfident of our ability to manage climate by whatever means.

    If you want to help this whole thing move forward, start ignoring all the attitudes and see what you can learn from the various people involved in this. Including Tamino and Schmidt, and also SteveM, GBrowning and others. They’re all pretty smart people with a lot of strengths.

    In the meantime, we’re accomplishing little through my responses to the various razzes. Someone will just seek another way to denigrate rather than become a serious inquirer or expositor.

  • Lazar // August 18, 2008 at 9:35 am

    MrPete,

    Lazar, AFAIK, your graph link is meaningless to your quest.

    You still did not answer the question…

    “Do you still have doubts that MBH used precipitation series to reconstruct temperature?”

    … the plot shows the temperature reconstruction produced by the MBH98 algorithm changes if precipitation proxies are deleted from the original input file. Are you seriously maintaining MBH98 did not use precipitation proxies to reconstruct temperature?

    If they’re all temp or precip proxies, they should correlate as such.

    Correlate with what? And who is maintaining that they ought to be “all temp” or “all precip”, and why?

    You continue to ignore a reasonable suggestion: additional updated data is available for both the Sheep Mountain area and the Almagre area

    I’m interested in whether the data used in MBH98 correlates with local climate.

    Still waiting for those missing functions. temp = f(precip)

    Temperature of what, precipitation of what, and why does it matter to MBH98 and the assumptions therein?

    PS
    Verification is complete.
    The model passed with flying colors!
    Results up soon.

  • Deech56 // August 18, 2008 at 12:18 pm

    RE: Hank Roberts // August 17, 2008 at 10:45 pm

    Thanks, but I was wondering about a counter to the claims that the R^2 values are some kind of smoking gun and that there were shenanigans involved in the W&A paper and its publication. Unfortunately, I don’t have the mathematical background to properly evaluate the CA claims beyond my normal skepticism of claims from that site.

  • Ray Ladbury // August 18, 2008 at 12:42 pm

    Matt accuses: “You seem to think there’s very little consequence to being very, very, very wrong. ”

    Actually, that is not correct. Being wrong does decrease a scientist’s credibility, but not nearly so much as being perceived as “having an agenda”. Ehrlich’s reputation has suffered among scientists precisely because he is perceived as pushing an agenda–and this despite the fact that most scientists think he’s right on the science. Carl Sagan suffered from some of the same bad press, despite being one of the most brilliant and creative astronomers of his day. Sagan’s goal was popularizing science, but his advocacy of arms control and anti-nuclear positions sometimes seeped into these popularizations. I think James Hansen’s reputation has suffered despite the fact that most climate scientists agree with him. Scientists react poorly to other scientists as advocates even when they agree with the agenda the scientists may be pushing. Scientists who are advocates (on the left or the right) do pay a price for that advocacy.
    Ultimately, a lot of the venom from denialist circles comes from the fact that they simply don’t understand how science is done.

  • Gavin's Pussycat // August 18, 2008 at 2:07 pm

    Ray:

    Actually, that is not correct. Being wrong does decrease a scientist’s credibility,

    I expected it to continue “…but being right does not increase it.”

    Silly me. Case in point: Hansen 1988.

  • Dano // August 18, 2008 at 2:11 pm

    Lazar:

    Instead we are engaged in this pitiful argument about whether we are actually having an impact at all, while the increasing evidence for an increasing rate of increasingly heavy damage piles up around us.

    What Ehrlich said 30 years ago doesn’t alter any of this, not one iota.

    Sadly, this is incorrect.

    See, folks need to believe they are not fouling their nest. Opportunities where someone says – even incorrectly – that all of Ehrlich’s statements are wrong because one was wrong need to be jumped on and exploited.

    This is human nature.

    Most folks need to be distracted. They need to believe something else. This is the challenge.

    Best,

    D

  • Gavin's Pussycat // August 18, 2008 at 3:10 pm

    Deech56, the Wahl-Ammann manuscript

    http://www.cgd.ucar.edu/ccr/ammann/millennium/refs/Wahl_ClimChange2007.pdf

    pretty much gives the counter you’re asking for, in section 2.3 and appendix 1.

    It’s not easy reading though.

  • Hank Roberts // August 18, 2008 at 3:31 pm

    Ray, I think you’re way off into personal opinion and confusing ‘an agenda’ with sharing knowledge of the public health implications emerging from one’s work before anyone much wants to hear it.

    Ozone layer
    Vaccination
    Lead
    Tobacco
    Tributyl tin
    Trawling
    Roundworms
    Yellow fever

    Scientists and doctors speak up when they are obliged to.

    That’s not an agenda. That’s responsibility.

  • matt // August 18, 2008 at 3:35 pm

    Ray: Actually, that is not correct. Being wrong does decrease a scientist’s credibility, but not nearly so much as being perceived as “having an agenda”.

    I think your entire post is spot-on.

  • dhogaza // August 18, 2008 at 4:12 pm

    Thus, we can go through the current crop of “team” papers, removing any that use MBH-related stats methods

    So, let’s see, above you’ve shown that you don’t understand the MBH-related stats, because by implication you state that it boils down to cherry-picking. We’re really supposed to agree with a proposal by you that a tool be tossed out even though you demonstrate ignorance about it?

  • Ray Ladbury // August 18, 2008 at 6:11 pm

    Hank, Perhaps I was not clear. I was rather lamenting the fact that a scientist’s reputation as a scientist suffers when he or she feels the need to speak out. Carl Sagan today is only known for Cosmos and some of his popular writing, but his contributions to planetary physics were also notable. By and large, scientist expect to do science and leave policy to politicians, engineers, economists, etc. However, when the latter still don’t understand the threat, scientists have to weigh whether to wade into politics–a toxic environment for most scientists–or let society go to hell in its own handbasket. The fact is that most scientists do not speak up, and many resent those who do even when they agree with what they are saying. Scientists should speak up–it’s the courageous thing to do–but they have to realize that they will likely take fire from behind them as well as in front.

    Science tends to be very conservative in the sense that unless an effect (threat) can be shown to be significant, it doesn’t generate much activity. That’s inconsistent with politics–where threats compete for attention–and with engineering–where the worst-case threat is assumed to ensure the system remains viable. What’s broken here is not the science. Science has shown beyond doubt that there is a significant threat. What is broken is the political response–which has been nonexistent–and has necessitated scientists venturing well outside their comfort zones.

  • Hank Roberts // August 18, 2008 at 10:07 pm

    Then we agree (and I expect Matt disagrees).

    Case in point:
    http://pubs.acs.org/subscribe/journals/esthag-w/2006/aug/policy/pt_santer.html

  • MrPete // August 19, 2008 at 12:42 am

    Lazar, sorry, spent 20 mins searching past threads. I know I saw one of your graphs showing data with/without precip, but now cannot find the link. Any hints? (Then again, sounds like you’re close to putting the whole shebang together.)

    Can’t comment usefully w/o that.

    My (overly abbreviated) temp=f(precip) , temp=f(etc) comment is simply this: if what we want to reconstruct is temperature, then other elements must be factored out, whether by valid PCA or otherwise. If growth is connected to more than one variable, we’ve got to do “something” to reduce the physical equation to become a function of the one variable of interest.

    In even more-simplified layman’s terms:

    * if warm+dry growth is different from warm+wet growth, we need a way to distinguish the two to properly estimate what happened temp-wise

    * Likewise, if warm+stormy (bark-stripping) produces radically different growth from warm+calm (no bs :)), then again we need a way to distinguish the two.

    It’s good, satisfying fun to do the analysis and see what correlations can be found. At the same time, the stats/analysis needs to connect to physical reality.

    I’ll sneak in to read/say more after I have a link to the results you mention (about removal of MBH precip proxies causing significant change to the results.)

    (Oh, you said “I’m interested in whether the data used in MBH98 correlates with local climate.” Great! So we have 25 more years of local climate data, and 25 more years of exact-same-tree data for some of these key MBH98 proxies, that nicely fits the previously collected samples.)

  • Luke // August 19, 2008 at 2:07 am

    Just FYI – a new climate change blog by the Director of the Research Institute for Climate Change and Sustainability at the University of Adelaide, South Australia.

    http://bravenewclimate.com/

  • ChuckG // August 19, 2008 at 2:33 am

    Open Thread on Open Mind. So discuss please. Math. Not hand waving.

    Pat Frank (Skeptic article) versus Gavin Schmidt:

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/langswitch_lang/bg#comment-95633

  • Hank Roberts // August 19, 2008 at 3:16 am

    http://www.agu.org/pubs/crossref/2008/2007JD009295.shtml

    The correlation between temperature and precipitation arises because lighter oxygen-16 isotopes make lighter water molecules, which evaporate preferentially, increasing the amount of oxygen-16 in rainfall/snowfall (and in material built up from that water in annual bands).

  • dhogaza // August 19, 2008 at 4:03 am

    * if warm+dry growth is different from warm+wet growth, we need a way to distinguish the two to properly estimate what happened temp-wise

    First you have to show that warm+dry is actually a historical possibility in the Great Basin.

    If it isn’t, you can, of course, simply dismiss the warm+dry scenario …

  • dhogaza // August 19, 2008 at 4:06 am

    ChuckG … it’s obvious that Schmidt knows math. Rather than ask us to discuss it, why don’t you show us why Schmidt is wrong? Pat Frank’s errors seem easy enough to understand, so please educate us as to why Schmidt’s rebuttal is wrong.

  • dhogaza // August 19, 2008 at 4:07 am

    I mean like, isn’t Pat Frank some two-bit weather type guy and Gavin Schmidt some PhD math type?

    I mean … why should I reject the argument of a professional, trained mathematician like Gavin?

  • Lazar // August 19, 2008 at 10:01 am

    Results!
    The tree-ring – climate model passes verification with significance at alpha = 0.01.
    Fig. 10.
    Conclusion: bristlecone-pine tree-ring growth depends on autumn temperature and precipitation, and winter precipitation.

    Constructed a network of temp and precip records back to 1889 (map: net1, yellow pins).
    Temperature records are unreliable prior to 1900 (Fig. 9a).
    Rather than chuck out data, I relied on the mean to eliminate some of the error.
    The model was calibrated over 1940:1980, and verification done over 1889:1939. It passed verification, with significance at alpha=0.05 and an r-squared of 0.08. The model performed excellently except over the first seven years of data, where, although clearly still responsive (peaks and troughs), there is a divergent trend almost certainly due to inhomogeneity in the early portion of the temperature records. Chucking out the first seven years gave an r-squared in verification of 0.21, significant at alpha=0.01.
    There was a residual positive trend approximately 1/3rd of the magnitude of the trend in tree-ring growth. Having consistently found similar results with other data, although the trend is not significant, it is likely real and likely due to co2 fertilization. Detrending over 1848-1980 and running the regression gave a marginally improved r-squared of 0.23 in verification, and the residuals gave a much improved fit to a normal distribution.
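
    The split-period procedure above can be sketched on synthetic data. To be clear, the series and coefficients below are invented for illustration; this is not Lazar’s actual network or code, just the general calibrate-then-verify pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic annual series, 1889-1980 (illustrative only)
    years = np.arange(1889, 1981)
    temp = 0.005 * (years - years[0]) + rng.normal(scale=0.3, size=years.size)
    rings = 1.0 + 0.8 * temp + rng.normal(scale=0.4, size=years.size)

    calib = years >= 1940   # 1940:1980 calibration period
    verif = ~calib          # 1889:1939 verification period

    # Fit rings = a*temp + b on the calibration period only
    a, b = np.polyfit(temp[calib], rings[calib], 1)

    # Verification r^2: does the calibrated model track the withheld data?
    pred = a * temp[verif] + b
    r2_verif = np.corrcoef(pred, rings[verif])[0, 1] ** 2
    print(round(r2_verif, 2))
    ```

    The point of withholding 1889:1939 is that a model can always fit the period it was tuned on; only skill on unseen data counts.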

  • Gavin's Pussycat // August 19, 2008 at 10:03 am

    Deech56,

    having thought about your question a bit more, and re-read W&A, I now understand that their explanation, while right, is precisely the way not to explain it, making a very simple matter overly complicated.

    The matter is really very simple: the r2 test is about detecting the existence of a functional relationship between a variable x and a variable y, like in (restricting ourselves to the linear case):

    y = ax + b + noise (1).

    It doesn’t matter what a and b are, all the test says is that x and y “co-vary”: if x goes up, so does y. If x goes down, so does y. It makes no difference if the average level of y is completely different from that of x; it doesn’t matter if the size of the swings in y is very different from that of the swings in x.

    In fact, you may apply any linear transformation to y: if you write

    Y = p y + q,

    you will have (easy to show)

    Y = A x + B,

    with A, B different from a, b but computable from them and from p, q.

    Now, the r2 value of Y against x will be identical to that of y against x.

    What we want to test in the case of climate field reconstruction is not that y is functionally related to x, but that y is a reconstruction of x:

    y = x + noise (2).

    This calls for an entirely different kind of test (like perhaps the RE test).

    Using the r2 test blindly is not just a blunt instrument, it is the wrong instrument. You are not testing the conjecture you’re supposed to test. W & A give some nice examples, with plots, of how the r2 test can both reject perfectly good reconstructions and swallow junk…

    I used to think that McI was pretty sharp — evil, dishonest but sharp. But now I see that his insistence that the Hockey Team should present r2 test results, and his implication that not doing so is somehow fraudulent, is, well, plain dumb.

    BTW in my understanding tests like r2 are “lightweight” tests, typically only used as a first cut at deciding if “there is something to it”. It’s also fairly easy to cheat. A more industrial-strength test in the linear regression case would be to just compute the regression trend a and its standard deviation, construct a confidence interval, and see if the value 0 — or whatever your null hypothesis is — lies inside it.

    More generally, like in the climate field reconstruction case, you should formulate a realistic error model for your data (proxies), propagate this through the computation to obtain an error model for your unknowns — the temperature reconstruction — and judge that against your null hypothesis — like, “the 20th century is nothing exceptional”. This is how to get the grey zones you see in the IPCC hockey plots BTW.

    Hope this helps. I am surprised Tamino hasn’t yet written about this — perhaps too simple, below his dignity ;-)
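
    A toy numerical sketch of the distinction (my own illustration, not from W&A): r2 only detects co-variation, so it is blind to any linear transformation of the reconstruction, while an RE-style skill score compares actual errors against a naive baseline and collapses when the reconstruction is off in level or scale.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "True" temperature series x and a reconstruction y = x + noise, as in (2)
    x = np.cumsum(rng.normal(size=200))
    y = x + rng.normal(scale=0.5, size=200)

    def r2(a, b):
        """Squared Pearson correlation: detects co-variation only."""
        return np.corrcoef(a, b)[0, 1] ** 2

    def re(truth, recon):
        """Reduction of Error: squared errors of the reconstruction relative
        to a naive constant (mean) reconstruction. 1 is perfect, <0 is worse
        than the naive baseline."""
        return 1 - np.sum((truth - recon) ** 2) / np.sum((truth - truth.mean()) ** 2)

    # Apply an arbitrary linear transformation Y = p*y + q
    Y = 3.0 * y + 10.0

    print(r2(x, y), r2(x, Y))  # identical: r2 is blind to scale and offset
    print(re(x, y), re(x, Y))  # RE collapses for the transformed series
    ```

    Y co-varies perfectly well with x, so its r2 is unchanged, yet as a reconstruction of x it is useless, which is exactly what RE flags.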

  • Gavin's Pussycat // August 19, 2008 at 11:47 am

    Ah, the r2 is described here:

    http://en.wikipedia.org/wiki/Coefficient_of_determination

    …and the RE (and a lot more) here:

    http://books.google.fi/books?id=zr8Ucld6FYcC&pg=PA181&lpg=PA181&dq=%22reduction+of+error%22+%22RE+statistic%22&source=web&ots=ZgAnXuJOMF&sig=lujR9mBVKwXh36m_UvsweazHDXw&hl=fi&sa=X&oi=book_result&resnum=1&ct=result#PPA183,M1

  • Dano // August 19, 2008 at 2:26 pm

    BTW in my understanding tests like r2 are “lightweight” tests, typically only used as a first cut at deciding if “there is something to it”. It’s also fairly easy to cheat. A more industry-strength test in the linear regression case would be to just compute the regression trend a and its standard deviation, construct a confidence interval, and see if the value 0 — or whatever your null hypothesis is — lies inside it.

    Exactly.

    When first scanning a paper to see if it is worth your time, the r^2 is where you start, then you look over your preferred stat measurement to see if you should delve further. If the numbers are high enough, you read the paper.

    When I was doing my microecon and urbecon series, I had a hard time reading their papers, as the r^2s and Ts were lower than what I was used to from the natural sciences.

    Best,

    D

  • Tom Woods // August 19, 2008 at 4:36 pm

    Just something to ponder…

    Recent studies have shown a doubling of stratospheric water vapour, likely from increasing atmospheric heights due to global warming, overshooting thunderstorm tops from stronger tropical cyclones and mesoscale convective systems etc…

    Since sulfur dioxide reacts with water vapour in the stratosphere to form sulfuric acid droplets, would SO2 flux from volcanic activity cause even greater swings in global temperatures?

    I would assume that the increase in stratospheric water vapour would make for a thicker veil of sulfuric acid given a large volcanic eruption. Even a smaller eruption that manages to have an eruptive plume that reaches the stratosphere could very well have greater implications for global temperatures if there’s more water vapour for SO2 to react with.

    Perhaps in the future a large volcanic eruption (VEI 5-6 or greater) may cause 1-2°C swings in global temperatures as they rise further, as we go from an enhanced greenhouse effect to enhanced reductions in insolation from thicker sulfuric acid veils.

    I bring this up due to the eruption of the Kasatochi volcano, which had an estimated 1.5Tg flux of SO2. This is only around 10% of the SO2 flux from Pinatubo but it got me thinking…

    Anyone with any input on this I’d like to hear from.

  • TCO // August 19, 2008 at 7:24 pm

    Pussy: Wegman said that R2 was the wrong metric to look at what’s going on effectively. The problem with Steve is that he is so confounded with PR and math exploration that he neglects to really think about how different algorithms interact with different data sets in a curious manner.

  • Deech56 // August 19, 2008 at 7:25 pm

    Gavin’s Pussycat and Dano – Thanks for the information and the links. I will read this over. My own stats and linear regression coursework was from back in the Reagan era, and my experience with r^2 comes from running standard curves for lab measurements, where anything less than 0.9 results in frowns.

    What disturbs me is the way that this is used to discredit the 10-year old work by MBH, and by implication any subsequent confirmations. Unfortunately, the “circling the wagons” meme plays well among the general public; meanwhile, the ice is melting and flora and fauna are migrating as nature responds to the effects of rising CO2.

    Deech

  • tamino // August 19, 2008 at 7:57 pm

    A note to readers: I’ve suffered a back injury which makes it very difficult to get around, and I’ve been taking it as easy as possible. Thank goodness my wife (the finest woman in the world) is taking excellent care of me. But it’s been nearly a week since the last post, and may be several days until the next. In the meantime, I’m glad discussion continues apace.

    Carry on.

  • Gavin's Pussycat // August 19, 2008 at 9:15 pm

    TCO: thanks, wasn’t aware of that (my neglect, haven’t been taking Wegman very seriously — life’s too short).

    Tamino get well soon! We need you ;-)

  • george // August 19, 2008 at 9:29 pm

    Being wrong does decrease a scientist’s credibility,

    I don’t think being wrong in itself necessarily decreases credibility — at least not among one’s fellow scientists.

    Some of the greatest scientists in history were wrong from time to time.

    First, it is rare for a scientist to get it “right” the very first time. Even Einstein was “wrong” in his first attempts at a theory of gravity. In fact, it took him over ten years of hard work (and several mistakes) to get it right!

    Second, there are different degrees of “wrongness”.

    Technically, Niels Bohr was “wrong” when he had the electron moving as a point particle about the nucleus in a well defined “orbit” much like a planet around the sun.

    His initial model may have been wrong, but “wrong” is really a relative concept in science. In fact, the Bohr model of the atom was “righter” than all of the others out there at that time. Same with Einstein’s theory of gravitation. It may only be “right” within a certain domain, for example (not unlike Newton’s laws). It may be invalid at very small scales.

    When it comes right down to it, no one is really “right” in an absolute sense. That’s not to claim that all models and theories are created equal (or any such nonsense), merely to say that all efforts to describe nature are imperfect approximations.

    I think the only ones who would “downgrade” a scientist’s credibility in response to his/her being “wrong” are those who do not understand how science works.

  • Hank Roberts // August 19, 2008 at 9:50 pm

    Yeow. If it’s lower back/muscle spasm, I can recommend Maggie’s Back Book.
    http://openlibrary.org/b/OL4901783M

    Shorter: sketches of positions that stretch the problem out gently, stop the pain, let the inflammation reduce. Simple stuff.
    Works.

    Take it easy, if this is new to you it’s real easy to be overconfident and tweak it again.

    [Response: It is new to me, so I'll be aware and try to avoid overexertion through ignorance.]

  • Gavin's Pussycat // August 19, 2008 at 9:56 pm

    TCO do you have a link?

  • Lazar // August 19, 2008 at 10:32 pm

    I’ve suffered a back injury which makes it very difficult to get around

    Oh dear.
    I know it can be extremely painful.
    From personal experience, two weeks minimum before it’s safe.

  • Hank Roberts // August 19, 2008 at 10:52 pm

    Yeek. I’ve had back trouble since I was a youngster and I’m almost 60. Reaffirming, Maggie’s got _great_ advice illustrating positions that will avoid pain, both for stretching and for sleeping.

    This will brighten your day:
    http://blogs.nature.com/climatefeedback/2008/08/more_for_the_annals_of_climate_1.html

  • george // August 19, 2008 at 11:06 pm

    Hope your back problem is muscle related and not serious.

  • TCO // August 20, 2008 at 12:04 am

    Gavin: just searched on the web and could not find it. As I recall, it was in testimony, in response to a question about r2.

  • Dano // August 20, 2008 at 12:21 am

    Hank, we’ve been deconstructing see-oh-too’s mendacity for years. Years. At least newer blog posts can cut-paste old work already done.

    Best,

    D

  • MrPete // August 20, 2008 at 1:03 am

    tamino — good luck. (My half-bit of experience-based wisdom: we’ve found it’s usually the day _after_ exertion/stress when you wipe out your back. Something about everything being loosened up. “All I did was…” and wham. Now that you’ve had one, being extra careful after a good workout day is gonna help.)

  • Dave Rado // August 20, 2008 at 1:11 am

    Gavin’s Pussycat writes re. Wahl-Ammann
    manuscript "It’s not easy reading though." (in context of Bishop Hill and McIntyre accusations).

    I do hope Tamino will post about this when he’s feeling better – it would be good
    if there were an article one could link to that shows the latest disinformation for what it is, in a way that is
    intelligible to laymen.

  • Gavin's Pussycat // August 20, 2008 at 1:34 am

    TCO, couldn’t find it either… that’s when I decided to ask ;-)

    Seriously, don’t doubt it’s true. I’d like to see his argument…

  • MrPete // August 20, 2008 at 1:45 am

    Lazar — interesting preliminary results. Sounds about right.

    A nit you may want to check: don’t know how you placed the pins on the map; I’m quite certain your c0524 location is way off to the southeast. Perhaps ddmm.mmm (or ddmmss) was assumed to be dd.ddd? co524 is on Almagre a couple of km SE of Pike’s Peak and SW of CoSpgs… not in the flatland halfway between CoSpgs and Pueblo.
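The ddmm.mmm-vs-dd.ddd mix-up MrPete suspects is easy to reproduce. A sketch with hypothetical numbers (not the actual c0524 coordinates), showing how large the resulting offset can be:

```python
def ddmm_to_decimal(value):
    """Convert a ddmm.mmm coordinate (degrees * 100 + decimal minutes) to decimal degrees."""
    degrees = int(value) // 100
    minutes = value - degrees * 100
    return degrees + minutes / 60.0

raw = 3850.0                      # hypothetical reading: 38 degrees, 50.0 minutes
correct = ddmm_to_decimal(raw)    # ~38.833 degrees
misread = raw / 100.0             # 38.5 degrees if ddmm.mmm is taken as dd.ddd
print(correct, misread)
```

The one-third-of-a-degree difference is roughly 37 km of latitude — easily enough to move a map pin from the mountains out into the flatland.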

  • Deech56 // August 20, 2008 at 2:08 am

    And Tamino – get well soon.

  • ChuckG // August 20, 2008 at 3:07 am

    dhogaza // August 19, 2008 at 4:06 am

    The brevity of my post has led you to misunderstand me. You have made an assumption which may be implicit in my post, in which case I am sorry, but clearly is not explicit.

    So I withdraw the request rather than flesh it out. Why waste BW?

  • TCO // August 20, 2008 at 3:36 am

    Gavin’s Pussy: I’m sure I recall it, but I just scanned the testimony and don’t see it. Might have been some other discussion, not the testimony? Or a slight chance it might not have been Wegman but still someone else on “my side”. Pretty sure it was Weg though. But my recollection is that there wasn’t much followup to explain why he felt that way.

  • Barton Paul Levenson // August 20, 2008 at 12:27 pm

    tamino writes:

    I’ve suffered a back injury which makes it very difficult to get around, and I’ve been taking it as easy as possible.

    I’m very sorry to hear that. I will pray for healing for you.

    -BPL

  • Hank Roberts // August 20, 2008 at 3:26 pm

    Dano, yep, just pointing out Nature’s blog had noticed the smell of the pond scum there. Reminded me of when Judith Curry posted at CA what she thought on first reading their twisted text.

  • Gavin's Pussycat // August 20, 2008 at 4:43 pm

    TCO: “my side”
    did you ever get the feeling that there is something messed up about your loyalties?
    What about making the truth, no matter what, “your side”? You are almost there already. It is an honourable side to be on.

  • Dave Rado // August 20, 2008 at 4:50 pm

    More re. Gavin’s Pussycat’s post – I think the paper you linked to is the one that Bishop Hill attacked in his post, referring to it as the “CC paper”?

  • MrPete // August 20, 2008 at 7:41 pm

    G.P., a lot of us are seeking truth. And nobody on this planet appears to have a monopoly. :)

  • Paul Middents // August 20, 2008 at 9:13 pm

    Mr. Pete,

    I think real scientists doing real science as their life’s work come much closer to a monopoly on “truth” than a bunch of amateur auditors with a bristle cone pine obsession.

  • TCO // August 20, 2008 at 9:34 pm

    Gavin: My hopes are more with reforming the reformers. Or working with those that are more questioning of everything (Mosh-pit, JohnV, Zorita). I can’t really get good curious type conversations going with the RC types (too controlling, too shutting the discussion down, too “Herr Doktor Professor”). Bit more free play here and at Lambert’s site. And I will give you credit that I have heard you a couple times challenge your own side and drive better thinking in the process. On the warmer side, Annan and Atmoz seem driven by curiosity, also. Cheers.

  • TCO // August 20, 2008 at 9:59 pm

    Mr. Pete:

    When that BCP coffee expedition was done (and blogged on), we were shown initial data with a “more to come” message. Now it seems from your response that we may perhaps not get ANY more. I want a full report. The selective release of the data is both amateurish and manipulative. If you “broke” or mislabelled or whatever some of the cores, fess up. Also release all the raw data.

  • TCO // August 20, 2008 at 10:01 pm

    Also give us a much more deliberate explanation of what is going to be done in terms of the manual dating. Who’s going to do it or not, etc. If the answer is “I don’t know” or “someone will date it if they ever feel like it”, then we need to take the results to date as the finished product and judge both the expedition and the tree climate behavior based on what was found out and reported.

  • Ray Ladbury // August 21, 2008 at 1:19 am

    TCO, I beg your pardon, but I don’t know of a single scientist doing real research who is not “curiosity driven”. If I were not curiosity driven, I could go to work for a hedge fund and make one helluva lot more than I do now. However, don’t you think it makes more sense to be curious about what is not well known rather than what is well known? To me, it makes a whole lot more sense to be curious about those aspects of climate science that are still uncertain, rather than the role of CO2 which is tightly nailed down.
    Finally, to have respect for the expertise of folks like Gavin Schmidt, who herd the cats at Realclimate is not so much respect for authority as it is respect for expertise, achievement and patience. The goal of Realclimate is to teach people about climate science–and it does that very well. The goal of this site has less to do with climate science and more to do with proper analysis of data–a goal it accomplishes quite well.

  • george // August 21, 2008 at 1:33 am

    I can’t really get good curious type conversations going with the RC types…

    I suspect that Albert Einstein was not curious about whether Arthur Eddington was hiding key data from the 1919 eclipse under his pillow that disproved General Relativity, either. :)

  • TCO // August 21, 2008 at 4:58 am

    The “teaching” style of RC is not beneficial to really digging into things. I would contrast say Volokh.com which has brilliant intellects at the helm, but has probing interactions with the commenters as well.

  • Gavin's Pussycat // August 21, 2008 at 5:11 am

    TCO:
    >And I will give you credit that I have heard you a couple times
    >challenge your own side and drive better thinking in the process.

    Huh? I vehemently deny that ;-)

  • TCO // August 21, 2008 at 5:13 am

    Ray:

    I got my union card, too. So I’ve seen science, seen finance. Seen different kinds of cats in both. It’s not like scientists are something that I read about or see on TV, that I need your special help to have a feel for.

    Science is actually big business in some ways, if you look at all the people in it, all the government dollars. Please spare me any dreaming on how much you think you could make if you sold out, btw, it’s a competitive market there too. But I think your quality of life factoring in work load, pleasant travel to conferences, cost of living in Manhattan, feeling of social utility, job security, and…yes…interest makes it very likely that you’re better off doing science than solving modified heat flux diff e q’s for Goldman. IOW, yes, I acknowledge an interest driving choices…but no, I actually happen to know that the vast majority of union card awardees could not cut it to get that rocket scientist job. (Some can of course…and the very best will be Feynmans and Lisa Randalls and the like…and it’s better for the world that they don’t sell out…but don’t forget all the also-rans either.)

    Scientists come in a lot of different flavors and they vary in intellect and inquisitiveness. I am always happy to meet one with the real blazing Feynman-like curiosity and ability. But it’s the minority (they like to learn sure…and they want to get discoveries and papers, sure…but genuine probing curiosity comes in different grades, just as brains do.)

  • Gavin's Pussycat // August 21, 2008 at 5:17 am

    Dave Rado, yes apparently it is. And its history may explain why it contains this extensive explanation on suitability of test metrics.

  • MrPete // August 21, 2008 at 6:18 am

    Paul M, does this mean we therefore should have less respect for the 13 year old whose science experiment was published in JAMA? Or that the only truth is published truth? C’mon, let’s not go down that path.

    TCO, your questions have mostly been answered already. Sorry if you don’t like the answers. Understanding that it can be hard to search, I’ll answer again.

    There’s been no selective release of data. Anything that crossdated has been released, no matter what the data “says”. The undated data is still at the lab today. (Frankly, the project has been out of sight out of mind for a few months. Yes we’re “slow;” are the pro’s any faster?) When SteveM gets back from current travel he’ll head over to pick up whatever there is; hopefully we can make the scans accessible sooner than later. Has anyone else _ever_ done this? Not that I know of.

    You want “raw” data. For cores, normally that’s the crossdated ring widths, which are available now. We also hope to make the scans available online, which as noted may be a first.

    I also answered on the manual dating. AFAIK, we are about to receive the core scans, enabling anyone to manually date if they are willing to download the images. We can’t exactly duplicate the physical cores :). Suggestions on making that process more productive and/or more accessible are most welcome. I for one am quite motivated to work on the problem in my copious (hah) spare time. It is a great puzzle: why do some cores auto-match while others do not? So far, I don’t see obvious reasons. My guesses: current techniques depend on variable growth. “Boring” growth can’t be auto-identified. And spongy/rotty rings also cause havoc. (No, we didn’t break or mislabel. A faint chance we may have something scientific to say about that some day. I need some research time.)

    Finally TCO, I’m curious about your perspective on this:
    1) What’s the basis for your “amateurish, manipulative” claim? Have you actually examined the data? It’s been available for quite a long time. If you have suggestions for improvement, I’m all ears.

    2) If what we’ve released (i.e. all data generated to date, with comprehensive metadata) is “amateurish and manipulative” I suppose that goes double for those whose work we replicated, who have released 40 samples from 28 trees after coring 60+ trees at the site 25 years ago, and explicitly state that cores without “signal” are trashed? I humbly accept the “amateurish” label, particularly if such work by others is similarly understood, and even more so if you can point me to a more professionally collected and documented dendro data set that can be highlighted as a model of better practice. Honest, I’m all ears. We have no illusions about the quality of our field work; if it is any good at all I consider that a minor miracle :-D.

    Oh, and I place myself squarely in the camp of just wanting to know what the data says. I don’t care WHAT it says; I just don’t want peoples’ biases coloring the results. And yes that includes whatever bias I may have.
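The auto-matching MrPete describes boils down to sliding a sample series along a master chronology and looking for a correlation spike. A toy sketch with synthetic data (not the Almagre cores):

```python
import numpy as np

def best_lag(master, sample):
    """Slide `sample` along `master`; return (lag, r) of the best correlation."""
    best = (0, -2.0)
    for lag in range(len(master) - len(sample) + 1):
        r = np.corrcoef(master[lag:lag + len(sample)], sample)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

rng = np.random.default_rng(2)
master = rng.normal(size=120)                       # variable "ring widths": easy to match
sample = master[40:80] + rng.normal(scale=0.2, size=40)
lag, r = best_lag(master, sample)
print(lag, round(r, 2))
```

A flat, low-variance sample gives weak correlations at every lag, which is one plausible reason “boring” growth defeats the auto-matcher while variable growth crossdates easily.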

  • MrPete // August 21, 2008 at 6:27 am

    Ray L, “don’t you think it makes more sense to be curious about what is not well known rather than what is well known?”

    I agree. I also think it makes sense to be curious when there’s a significant apparent disagreement among scientists about what is “well known.”

    To me, what McKitrick has documented about the assessment of uncertainty in forcings is quite interesting. At the end of the AR4 scientific input, 7.5 of 15 forcing topics were gauged least-certain. Subsequent editing (without review by the scientific community) produced a slightly different result: 0 of 8 were least-certain.

    A change from 50% at the worst level of uncertainty to none, is to me a matter of valid curiosity. Particularly since that’s my own major question: how certain are we, really?

  • Gavin's Pussycat // August 21, 2008 at 7:15 am

    TCO, this seems pertinent:
    http://www.realclimate.org/index.php/archives/2005/12/how-to-be-a-real-sceptic/

  • Ray Ladbury // August 21, 2008 at 12:57 pm

    MrPete, the uncertain forcers in climate are known–clouds and aerosols. The others are pretty well understood, and CO2 is among the most tightly constrained. You of course are welcome to try and construct a climate model that is consistent with the data and has a low CO2 sensitivity. It would be a very interesting beast. Nobody has succeeded so far.

  • george // August 21, 2008 at 1:02 pm

    Being a true skeptic involves being skeptical of people who perceive a need to portray themselves as a “skeptic”.

    Skepticism used to be a mindset. In recent times it seems to have become more of a uniform to inspire awe: “Never fear. Skepticman here!”

    Would probably make a good cartoon.

  • Ray Ladbury // August 21, 2008 at 1:43 pm

    TCO, You claim to be a scientist, but I see little or no understanding of the motivations of scientists in your post. I particularly like your assumptions about my workload. I work about 80 hours a week, except when I travel to conferences. Then I work more. About 75% of what I do is bureaucratic BS. I do it because the other 25% is fascinating. Scientists do what they do because they want to understand how things work. There is no way you could pay them enough to work the hours they work. Now, there are some of us who are also curious about things other than our own narrow spheres of research, and yes, they are a minority.
    As to RC, perhaps its teaching style is not effective for you. I know I’ve learned a helluva lot from it.

  • Hank Roberts // August 21, 2008 at 3:14 pm

    George, look up Doonesbury +”Teach the Controversy” — doonesbury/2006/03/05/

  • Paul Middents // August 21, 2008 at 3:21 pm

    Mr. Pete,

    Nice sidestep to the 13 year old girl publishing in JAMA. Are you referring to 9 year old Emily Rosa by any chance? She coauthored with her mother a take down of healing touch–somewhat akin to shooting fish in a barrel.

    Do you find “truth” on teener Kristen Byrnes’ Ponder the Maunder site? She now has a foundation named after her. That must increase her truthiness.

    Your original comment referred to a “monopoly on truth”. Clearly no individual has a monopoly but given a choice on who is more likely to have a handle on climate science “truth”, I’ll put my money on the pro’s publishing their work in the peer reviewed literature.

  • Paul Middents // August 21, 2008 at 3:37 pm

    ChuckG,

    Pat Frank is a PhD Chemist with 50 peer reviewed publications, but none relating to climate.

    I would be very interested in a discussion of the Pat Frank/Gavin Schmidt exchange. Gavin reluctantly spent a great deal of time extracting from Frank the basis for his “model”.

    Please do tell us what you found unconvincing or inconsistent in Schmidt’s responses to Frank.

  • TCO // August 21, 2008 at 4:25 pm

    Mr. Pete: Thanks for trying to answer me. I still think that this expedition and its partial results were over-touted and under-delivered.

  • Rainman // August 21, 2008 at 4:30 pm

    Tamino: Back issues are not pleasant. (I’ve tweaked my back a few times in Aikido.)

    Find a good bio-physics certified chiropractor. There are some butchers out there, but a good one can work miracles.

  • TCO // August 21, 2008 at 5:32 pm

    Ray:

    I spent several weeks at Langley. Most PIs were driving out across the airbase before 1700. The place was a graveyard on weekends. Nothing lazy. But not a pressure cooker. Maybe your 80-hour weeks are not representative.

    P.s. I did not claim to be a scientist.

    [Response: I think 80-hr weeks are actually pretty typical. But it's not because managers are looking over our shoulders cracking a whip. It's because when a problem or question gets hold of you, it won't let go. My wife complains that when I get that "glazed" look in my eye ... she knows I'm in another universe, and it's not easy to pull me back.

    Yeah we work hard, and we work long hours. But we do it because we love it, and we can't *not* do it.]

  • Deech56 // August 21, 2008 at 7:37 pm

    I would add to Tamino’s comment regarding what drives scientists – with grad school and post doc training, a real job may not be a reality until one is in his or her early-to-mid 30s. It’s a fun life, but the lentil soup and PB&J routine gets old (especially if you have a family). In what other field can you say that you are the first to discover something? If you don’t have a driving curiosity there’s really no reason to go through the hassle.

  • Deech56 // August 21, 2008 at 7:55 pm

    Oh, and going through the “Caspar and Jesus paper” post – the math may be challenging, but in reading the description of the publication of the papers, it seems that the author does not have a strong handle on the foibles of manuscript publication.

    For example: not every revised MS is sent back to the reviewers, a rejected paper usually does find another home, and difficulties with sequential publishing and cross referencing do exist. To automatically assume that there is a great conspiracy is a stretch, IMHO.

    The paper should be judged on its merits, not assumptions regarding its submission history. Does the recent von Storch, et al. manuscript provide additional confirmation? (I know he indicated before Congress that redoing the MBH analysis according to the suggested methods led to – a hockey stick.)

    von Storch, H., E. Zorita and J.F. González-Rouco, 2008: Assessment of three temperature reconstruction methods in the virtual reality of a climate simulation. International Journal of Earth Sciences (Geol. Rundsch.) DOI 10.1007/s00531-008-0349-5

  • David B. Benson // August 21, 2008 at 9:18 pm

    TCO // August 21, 2008 at 5:32 pm — Tamino’s response suggests that science is a mental addiction for (some) scientists. :-)

    Writing computer programs can be like that as well.

  • george // August 21, 2008 at 9:41 pm

    “I am always happy to meet one with the real blazing Feynman-like curiosity and ability.”

    Feynman was genuinely curious — as are (I think) most scientists. Feynman may have been more curious about a wider range of subjects than some scientists, but based on my personal experience working with scientists, I can say with some confidence that he had no monopoly on that trait by any means.

    The operative word above is “genuinely”. I’m not sure Feynman would have had much (or any) patience for a lot of the “science” that is pursued these days (on blogs and elsewhere) in the name of “curiosity.”

    I never met the man, but based on what I have read about him, I suspect he might even have had some rather unkind words to say about some of it.

  • TCO // August 21, 2008 at 9:58 pm

    I’ve seen both sides of the fence, guys, and would be wary of the tautologies, of the self-licking ice cream cones. I’ve spent significant time at a couple national labs as well. We’re not talking i-banker level of hours there. It’s much more a 9 to 5. And lots of people even just in middle manager business jobs work hard, guys. Don’t be so quick to paint yourself as saints…

  • David B. Benson // August 21, 2008 at 10:38 pm

    george // August 21, 2008 at 9:41 pm — We already known what Feynman would call it: cargo cult science.

    TCO // August 21, 2008 at 9:58 pm — Are saints mentally addicted too? :-)

  • Ray Ladbury // August 22, 2008 at 1:18 am

    TCO, You often do not see me at work on weekends–rather, you’re likely to find me with my face bathed in the glow from my laptop. Or if I’m testing, I’ll be at the accelerator for 16-20 hours a day (weekends are typically the only time we can get beam). I would contend that you really aren’t going to learn how science works from a visit to Langley–even one that lasts “a few weeks”.
    I agree with Tamino, my job is also my hobby–but don’t ever let anybody try to tell you it ain’t work.

  • ChuckG // August 22, 2008 at 1:29 am

    Paul Middents // August 21, 2008 at 3:37 pm
    Reread my post. It is supposed to be neutral in tone. I only wished for the Frank/Gavin exchange to be fleshed out over here on an Open Thread rather than clutter up that thread. Their latest exchange has fleshed it out.

    I knew who Pat Frank was. (dhogaza doesn’t) Frank clearly had better cred than Lord Moncton. Which is what piqued my interest.

    My math skills are old and weak. As am I. Probably the last time I questioned someone’s math was in early 1964. Sometimes I can even follow the very clear Tamino posts!

    Phenology clearly supports GW. And there is no reason why I shouldn’t assume AGW is the cause.

  • MrPete // August 22, 2008 at 1:46 am

    Paul M – with your more nuanced response, I agree with you, except that I’ve yet to find any pockets of folk who tie dollars and truthiness together.

    (I too have marveled at a teen having a “foundation” — your mention prompted me to actually check it out. The no-surprise part: it’s to allow contributions to her college education. The boring part: it is not what most think of as a real foundation: not a non-profit org’n, not reg’d w/ IRS (or at least not in the main online database of same.) I wouldn’t expect this to go further than a young woman taking advantage of her 15 minutes of fame to pay for a college education. Here today, gone to maui.)

  • MrPete // August 22, 2008 at 2:00 am

    Tamino sez: “My wife complains that when I get that “glazed” look in my eye … she knows I’m in another universe, and it’s not easy to pull me back.”

    Heh. FWIW, it’s called flow. Lots of us have the bug. Your site is one of the places I go for an intentional flow-break ;).

    Some studies have given helpful insights about flow. Arrange work to minimize flow breaks: a two-minute distraction can easily cost 15 minutes of flow. (Nice intro by the guy who first wrote about it in 1991: http://psychologytoday.com/articles/index.php?term=19970701-000042&page=1)

    [Response: How true! My wife has finally figured out that if she interrupts me for one minute, it can sabotage a train of thought which has been proceeding for a lot longer than that. When I'm really in the groove, I try to isolate myself; it's not always easy.]

  • Gavin's Pussycat // August 22, 2008 at 9:58 am

    Re: flow.

    My wife has finally figured out that if she interrupts me for one minute, it can sabotage a train of thought which has been proceeding for a lot longer than that.

    Same here. So true, so true.

    BTW this seems to be typical for the Asperger syndrome, found a lot in both scientists and IT people. I wonder how many of us have it?

  • Ray Ladbury // August 22, 2008 at 12:23 pm

    Apropos of Flow:

    A doctor, lawyer and physicist are talking about whether it’s better to have a wife or a mistress.

    “Much better to have a mistress,” says the doctor emphatically. “You can have fun with her as long as she looks good and then you can dump her.”
    “Woah,” says the Lawyer, “that’s dangerous. You could get hit with a palimony suit and lose half of everything you own. It’s much better to have a wife. It’s a contractual, legal arrangement. Everybody knows what’s expected. Much better to have a wife.”
    “You’re both wrong,” says the physicist. “It’s better to have both.”
    “Whoa, dude!” say Lawyer and Doctor simultaneously.
    “Yeah,” says the physicist, “that way, when it’s 11:00 and you’re not home, your wife thinks you’re with your mistress. Your mistress thinks you’re with your wife, and you can be at the lab getting some real work done.”

    [Response: That's hilarious! I guess I'm gonna have to get a mistress...]

  • george // August 22, 2008 at 4:02 pm

    A wife who is herself a scientist is best of all (and orders of magnitude simpler)

    Then, when you are at the lab, you know that she is also at the lab (not necessarily the same one) and you don’t need to worry about her gallivanting about cheating on you.

    And what could be more distracting to your “flow” of thoughts than having to worry about two people (wife and mistress) gallivanting about cheating on you?

    BTW
    If this thread is for “discussion of things global-warming related, but not pertinent to existing threads”, maybe you need to start a “pertinent” thread.
    Or maybe “impertinent” would be a better description. I hope (for your sake) that your wife does not read this stuff.

  • Hank Roberts // August 22, 2008 at 6:41 pm

    Chuckle. I pointed it out to my wife.

    She was deep into a complicated Excel spreadsheet, set up to lay out and explain knitting patterns to friends who are having trouble following a pattern that came with poor written instructions (most of which are pretty poorly written, like “for the other sleeve reverse the previous steps”).

    > Asperger’s
    Ding!

  • Hank Roberts // August 22, 2008 at 6:54 pm

    One last thought on this tangent and I’ll leave it — somewhere recently I noticed research saying that when people are presented with two different conversations or audio programs, some of us can manage to follow both of them at the same time; other people consistently find the situation intolerable because of interference. The story speculated this may have to do with the rate at which the brain hemispheres exchange information from the two ears.

    Earlier I recall much study of how oral input can displace visual; how visual imagery can displace input from the eyes; and so forth.

    Didn’t turn up a cite in a quick search — just to say this may well have a whole lot of factors involved.

    But, hey, before I married, I had at times dated women who basically thought of only one thing at a time. They found me incomprehensible because I do branching trees and parallel threads in everything I think about and much of what I talk about.

    Got lucky at last. Grateful. Not complaining.
    (Hi honey!)

    [Response: My wife is like you: a multitasker par excellence. I'm a single-thread guy; when I get on a train of thought the rest of the universe had better not interfere! We complement each other nicely.]

  • Jason Bint // August 22, 2008 at 7:26 pm

    For those wondering, the Colorado tree samples are here:

    http://www.climateaudit.org/data/colorado/

    The NAS panel report is what is being thought of, I believe. It has a discussion of r2 and other issues starting on page 92:

    VALIDATION AND THE PREDICTION SKILL OF THE PROXY RECONSTRUCTION

    And starting on page 112

    Criticisms and Advances of Reconstruction Techniques

    With what is being thought of here on page 113:

    Regarding metrics used in the validation step in the reconstruction exercise, two issues have been raised (McIntyre and McKitrick 2003, 2005a,b). One is that the choice of “significance level” for the reduction of error (RE) validation statistic is not appropriate. The other is that different statistics, specifically the coefficient of efficiency (CE) and the squared correlation (r2), should have been used (the various validation statistics are discussed in Chapter 9). Some of these criticisms are more relevant than others, but taken together, they are an important aspect of a more general finding of this committee, which is that uncertainties of the published reconstructions have been underestimated. Methods for evaluation of uncertainties are discussed in Chapter 9.

    Reference:
    http://books.nap.edu/openbook.php?record_id=11676&page=

  • Paul Middents // August 22, 2008 at 7:28 pm

    ChuckG // August 22, 2008 at 1:29 am

    Thank you for pointing out the latest exchange (Aug 21) between Frank and Schmidt. Should Frank respond, it will be interesting to see if his reply addresses the use/abuse of logarithms.

    Eli Rabett is the man to chronicle and comment on a train wreck of this length and magnitude. It is reminiscent of the epic exchanges on Dot Earth between Arthur Smith and Gerhard Kramm defending the Gerlich and Tscheuschner paper, which in essence claims to disprove the entire greenhouse effect. Gerlich and Tscheuschner themselves eventually entered the fray.

    http://rabett.blogspot.com/2008/02/all-you-never-wanted-to-know-about.html

    The parallels between Frank and Gerlich and Tscheuschner are striking. Grandiosity comes to mind first. Gerlich and Tscheuschner title their work:

    “Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics”

    Frank’s subtitle states:

    “The claim that anthropogenic CO2 is responsible for the current warming of Earth climate is scientifically insupportable because climate models are unreliable”

    It doesn’t get much grander than that. It is also noteworthy that neither found a home for their work in the peer-reviewed literature.

    All the protagonists boast advanced degrees in the physical sciences. Frank had a very aggressive defender in the early exchanges with Schmidt: Gerald Browning, a retired atmospheric scientist. Gerlich and Tscheuschner had Gerhard Kramm, atmospheric scientist, University of Alaska.

    Both episodes required tenacious pursuit by very knowledgeable people (Schmidt and Smith) before the essential flaws in the scientists’ reasoning could become apparent to non-specialists.

    We owe professionals like Gavin Schmidt, Arthur Smith and our lop-eared friend at the “run” a debt of gratitude for their willingness to confront, at great length, the most pernicious arguments in support of denying and delaying. These are the credentialed scientists who really believe they have found physically based reasons that we have nothing to worry about.

    Tamino is equally heroic in his on-going confrontation via this blog of the pseudo-scientific underbelly—those second tier amateur auditors who would lie with statistics. These are the folks who, via their blogs, really seem to have the ear of and provide the ammunition for the skeptic crowd.

  • MrPete // August 22, 2008 at 7:36 pm

    Hank, that’s a known test. Typically men are one kind (single-focus) while women are the other (multi-focus). Definitely not universal. The typical test is two simultaneous audio streams. Result: single-focus people are able to listen to a single source. Multi-focus people go nuts because they can’t tune out one of the sources.

    Me? I get in the flow and tune everything out :)

    I dunno about Aspies, but we were guardians for an ADHD girl for a while. Horrible at school work but Taco Bell loved her: she could operate all seven work stations simultaneously. She was their best night-time closer by far. ADHD is not worse, just different :)

  • Jason Bint // August 22, 2008 at 7:48 pm

    Or possibly this finding in the Wegman report:

    Based on discussion in Mann et al. (2005) and Dr. Mann’s response to the letters from Chairman Barton and Chairman Whitfield, there seems to be at least some confusion on the meaning of R2. R2 is usually called the coefficient of determination and, in standard analysis of variance, it is computed as 1 – (SSE/SST). SSE is the sum of squared errors due to lack of fit (of the regression or paleoclimate reconstruction) while SST is the total sum of squares about the mean. If the fit is perfect the SSE would be zero and R2 would be one. Conversely, if the fit of the reconstruction is no better than taking the mean value, then SSE/SST is one and the R2 is 0. On the other hand, the Pearson product moment correlation, r, measures association rather than lack of fit. In the case of simple linear regression, R2 = r2. However, in the climate reconstruction scenario, they are not the same thing. In fact, what is called β in MBH98 is very close to what we have called R2.
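The distinction Wegman draws can be checked numerically. The sketch below is illustrative only (it is not code from the Wegman report or any of the papers discussed): it computes R2 as 1 – (SSE/SST) and the squared Pearson correlation r2 for an ordinary least-squares simple linear fit, where the two coincide, and then for a constant-shifted prediction standing in for a biased reconstruction, where the association (r2) is unchanged but the lack-of-fit statistic (R2) drops.

```python
import random

def r_squared(obs, pred):
    # Coefficient of determination, 1 - (SSE/SST): measures lack of fit
    mean = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean) ** 2 for o in obs)
    return 1.0 - sse / sst

def pearson_r2(obs, pred):
    # Squared Pearson product-moment correlation: measures association only
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

# Synthetic data and an ordinary least-squares simple linear fit
random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]
y = [2 * xi + random.gauss(0, 1) for xi in x]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
fit = [my + slope * (a - mx) for a in x]

# For a simple OLS regression the two statistics coincide (R2 == r2).
# Shift the prediction by a constant, as a crude stand-in for a biased
# reconstruction: the association (r2) is unchanged, but R2 drops.
shifted = [f + 3.0 for f in fit]
```

With the shift, R2 can even go negative when the prediction fits worse than simply using the mean, which is exactly why a high r2 alone says nothing about calibration.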

  • Gavin's Pussycat // August 22, 2008 at 9:05 pm

    Jason, thanks, that is it.

    Wahl and Ammann show that r2 is inappropriate — which it obviously is. R2 makes sense also for reconstructions. But how does it differ from RE? Does it?

  • Gavin's Pussycat // August 22, 2008 at 9:54 pm

    Jason, the answers are in W & A. R2 and RE are the same (or nearly so). Should have RTFP :-)
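On the "how does R2 differ from RE?" question, the standard definitions (as discussed in Chapter 9 of the NAS report; this code is an illustrative sketch, not taken from W&A or the report) differ only in the reference mean: RE compares the reconstruction's squared error against the calibration-period mean, CE against the verification-period mean, and R2 about the verification mean is the same quantity as CE. So when the two period means coincide, the statistics coincide.

```python
def verification_stats(obs, pred, calib_mean):
    # Standard verification statistics for a reconstruction over a
    # verification period. RE uses the calibration-period mean as the
    # null predictor; CE uses the verification-period mean. CE equals
    # R2 computed as 1 - SSE/SST about the verification mean.
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    verif_mean = sum(obs) / len(obs)
    re = 1 - sse / sum((o - calib_mean) ** 2 for o in obs)
    ce = 1 - sse / sum((o - verif_mean) ** 2 for o in obs)
    return re, ce

# Tiny made-up verification series (hypothetical numbers, for shape only)
obs = [0.1, -0.2, 0.3, 0.0, -0.2]
pred = [0.0, -0.1, 0.2, 0.1, -0.2]
m = sum(obs) / len(obs)  # pretend calibration mean equals verification mean
re, ce = verification_stats(obs, pred, m)
```

Because the verification mean minimizes the sum of squared anomalies, RE is always at least as large as CE; a calibration mean far from the verification mean inflates RE, which is one reason the choice of statistic matters.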

  • ChuckG // August 22, 2008 at 11:11 pm

    Paul Middents // August 22, 2008 at 7:28 pm

    Climate Progress is visited right after Rabett is visited right after Open Mind which is visited right after RC. Six days a week. Even retirees need a day off. RC since June ‘06.

    I am intimately familiar with the potential weaknesses displayed by physicists and lesser mortals when they stray much out of their core competence(s), having had to deal with it on and off over the years before I retired in ’93.

    So my expectation was that Frank would be discredited.

  • Steve Reynolds // August 22, 2008 at 11:43 pm

    “I would contend that you really aren’t going to learn how science works from a visit to Langley–even one that lasts “a few weeks”.”

    One more data point: From when I worked at JPL, the majority of the scientists (certainly not all) that I worked with were 9 to 5 types.

  • Jason Bint // August 23, 2008 at 12:03 am

    GP: “Jason, the answers are in W & A. R2 and RE are the same (or nearly so).”

    I believe if you look in “W&A’s” SI for their Climatic Change paper, they come up with a figure of .52 for the RE. How that corresponds to R2 or r2, or that R2 and r2 are only the same in simple linear regressions, I have no idea: I’m a librarian not a statistician. So your point is lost on me.

    All I can say is that the NAS said “uncertainties of the published reconstructions have been underestimated” in relation to the issues, that the “choice of ’significance level’ for the reduction of error (RE) validation statistic is not appropriate” and “different statistics, specifically the coefficient of efficiency (CE) and the squared correlation (r2), should have been used” and Wegman said ” In the case of simple linear regression, R2 = r2. However, in the climate reconstruction scenario, they are not the same thing. In fact, what is called β in MBH98 is very close what we have called R2.”

    I am not and was not editorializing, just pointing out the references that may have been what you were looking for.

  • Ray Ladbury // August 23, 2008 at 2:04 am

    Steve, what division was that? All I can say is that my own experience is not consistent with that. Even those scientists who do go to hearth and home are usually buried in their research ’til the wee hours. Yes, there are 9-5ers in science. Usually not for long, though.

  • TCO // August 23, 2008 at 2:53 am

    2 months at Langley
    4 months LANL
    3 months at a military lab
    several years in and around R&D within F500
    4 years getting the union card

    Contrasted with several years in business/marketing/military/consulting/engineering

    My take: R&D very 9 to 5ish. The only guys in the lab at 2300 are graduate students in universities. PIs are not (in academia, government, or industry).

    BTW, I would “James Annan Bayesian BET” your 80 hour weeks are BOTH non-representative and exaggerated. It’s a common pattern for people to over-report workload (even in law and consulting and I-banking and military service, where there REALLY are some sweat shops). It’s a commonly written-about phenomenon. Check this out:

    http://www.google.com/search?q=exaggeration+of+hours+worked+per+week&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

    P.s. Yeah, I’ve swung through JPL for a few days too. Felt just like Langley in terms of work pace!

    [Response: Math is my work. It's the 1st thing I think of when I get up in the morning; my biggest problem falling asleep at night is that I'm still thinking about it. I don't go to the toilet without my notebook so I can scribble more equations while taking a crap. When my wife and I go on vacation, she sometimes slaps my hand when I pull out the notebook at the dinner table in the fancy restaurant. But even if she manages to get me to put away the pen and paper, I can still work on it; eventually you develop the ability to do it in your head. It's a passion, I love it, but it's still work and it's still my job. And 80 hours a week is an UNDERestimate of how much I do it.

    Based on my experience, I'm hardly the only one. Many don't spend more than 40 hr/week officially "at work," in fact many are in the office/lab/classroom a lot less than that -- because all the distractions can interrupt doing *real* work. If you estimate how hard and how long we work by how long the guys you've worked with are in the office or the lab, either your estimate is way off or you've been looking at the wrong guys.

    I don't know why you insist on calling us liars. Maybe you're just jealous of the fact that we love our work so much we can't get enough of it.]

  • Steve Reynolds // August 23, 2008 at 3:26 am

    “…what division was that?”

    Sorry, that was too long ago (1976-1977) to remember. The slow-paced environment was one reason I left, though. I remember, after telling management that my project had progressed as far as it could until some higher level decisions were made, being advised by a co-worker not to do that. He said I should keep a project going until I was sure what would replace it.

  • Gavin's Pussycat // August 23, 2008 at 5:48 am

    I am not and was not editorializing, just pointing out the references
    that may have been what you were looking for.

    Yes, and I was thanking you because indeed the second one was. RTFP was to myself, thinking aloud.

  • Gavin's Pussycat // August 23, 2008 at 6:08 am

    > eventually you develop the ability to do it in your head

    Yes, sure. The main risk is losing good ideas. I resisted for many many years getting a mobile phone; now that I have one (forced by my boss), I notice that it is worth its weight in gold as a primitive notebook ;-)

    No, doesn’t do equations, but it remembers for me.

    As to work, this seems relevant:

    http://www.paulgraham.com/opensource.html

    “How scientists work” would be worth its own post/thread.

  • Deech56 // August 23, 2008 at 10:24 am

    TCO, it’s not the hours spent “at work” that define the passion; it’s the whole path to get there and having to deal with failure (I worked R&D for a biotech company; most projects fail at some point). Since scientists get paid to do thinking, the places in which “thinking” can happen are not bound by the walls of the lab. Besides, I thought the original point was whether scientists were curiosity-driven.

    Oh, and in case anyone’s missed it, John Mashey has a great post over at Deltoid. I guess one can say he’s thought a little about the subject.

  • TCO // August 23, 2008 at 2:50 pm

    Practicing scientists may be more curiosity driven than some other fields (manufacturing, marketing) but not so much as they pat themselves on the back for and not so much as the image/stereotype.

  • Ray Ladbury // August 23, 2008 at 6:01 pm

    TCO, you make a lot of assumptions–all they do is show you don’t have many close interactions with scientists. It is often all my wife can do to keep me from taking my laptop to a party.
    It seems important to you that you can believe this. Fine, I’m not here to disillusion you–only to say that this is inconsistent with my experience.

  • george // August 23, 2008 at 6:07 pm

    Science is really a lifestyle rather than a job.

    The whole “how many hours do you work thing” is overblown anyway — absurd, really.

    It’s not the number of hours you work but what you accomplish.

    As with other very creative careers like art, the real measure of a good scientific career is certainly not the “number of hours worked.”

    I’ve worked with lots of scientists over the years and I’d say there is a significant difference between a scientific career and many other careers — ie, it’s not simply an imagined or “stereotyped” difference.

    the difference is this: scientists never really “go home” from work. They are always thinking about the latest problem, even in their dreams.

  • Hank Roberts // August 23, 2008 at 6:07 pm

    TCO, your basic point seems to be that everybody lies and they started doing it first.

    This is a world view of sorts. Is it your best?

  • MrPete // August 23, 2008 at 6:08 pm

    Curious: is John Mashey here the SGI John Mashey?

  • Deech56 // August 23, 2008 at 6:32 pm

    MrPete – the John Mashey who posted at Deltoid is. I might assume he’s the one who also posts here.

  • dhogaza // August 23, 2008 at 7:44 pm

    Yes, he is.

    Regarding Pat Frank, I was confusing him with Pat Michaels …

    Unless I’m confusing Pat Michaels with someone else, which, given that I’m easily confused …

  • Paul Middents // August 23, 2008 at 8:07 pm

    MrPete,

    Read Mashey’s entire post on Deltoid. It is the best “Climate Science How-To” ever written. Along the way you will find the answer to your question revealed.

    Paul

  • MrPete // August 23, 2008 at 10:05 pm

    Thanks, yes the Deltoid article “tells all.” Good timing for my question :)

  • Lazar // August 24, 2008 at 1:06 am

    TCO;

    reforming the reformers

    If they are reformers, are they doing something which isn’t being done, and/or improving on what is being done?

    The Changing Character of Precipitation, Trenberth et al., BAMS

    Some excerpts…

    The diurnal cycle in precipitation is particularly pronounced over the United States in summer (Fig. 3) and is poorly simulated in most numerical models.

    [...] some models are wrong everywhere.

    [...] The foremost need is better documentation and processing of all aspects of precipitation.

    [...] a need for improved parameterization of convection

    [...] the improvement of “triggers”

    They’re not whining…

    [...] Accordingly, at NCAR we have established a “Water Cycle Across Scales” initiative to address the issues outlined above, among others.

    not just one voice either (I liked this but it wasn’t in the search.)

    So we have the audit of MBH98: Steve raises some theoretically valid criticisms, and a similar conclusion about the reliability of MBH98 could be gained from comparison with other reconstructions, only with less effort and time. The auditing approach, which shows that a methodology is approximately wrong, cannot show that it is approximately right. Follow-up/replication can show that the results are reasonable, or questionable, but not the methodology, which may produce the right answer for the wrong reasons (providing the answer is right, I don’t see that as a great problem). Replications can tell us what a reasonable result looks like, and the errors involved, and can be productive of future work.

    I think it would be good if Steve did a reconstruction. Show the world how to do it right. Join the Team!

    Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it, and have that spread by the international media, or by unreasonable personal attacks (e.g., and particularly, on Lonnie Thompson).

  • Lazar // August 24, 2008 at 1:23 am

    … on research scientists. Those I know do not work as much as 80 hours, but certainly more than 40. Every one takes their work home. There are fewer distractions outside the lab, and working at night is the best. When they’re not doing formally recognizable work, they’re thinking. The intensity of the work is great. I have never met a research scientist who did it for the fabulous money and cushy workload.

  • TCO // August 24, 2008 at 2:49 am

    I think most business professionals think about work at home.

    Laz: I think there is some utility to Steve giving the system a little bit of a kick in the ass. But at this point, fixes/improvements are very unlikely to come from him or his ilk. He has not published for 3 years now. Set aside even the idea of reconstructions… in many cases, he doesn’t even define the EXTENT (numeric extent) of the flaws that he sees. Like he blathers about rain series or something in MBH, but does not say how much it changes the answer to switch them out. He’s all about PR… and very little about mapping parameter space… about understanding sensitivity. I contrast this with Burger and Cubasch’s full factorial analysis. At this point, the main bad thing is that a lot of my fellow conservatives are sitting in echo chambers and listening to Steve and being amen choirs… and thinking that “the man” is not listening to Steve… when Steve is not even clearly making points.

    I think the ideas themselves are fascinating. But it is a shame to see them approached so tendentiously.

    For instance, the red noise nature of simulation. Steve has avoided (in a John Edwards/Bill Clinton manner) coming to grips with defining how his VERY SAMPLE-DEPENDENT “red noise” gives more of an effect than simple red noise. He ought to at LEAST show both cases. As it is, it’s probably circular logic.

  • Lazar // August 24, 2008 at 10:21 am

    MrPete, thanks… I use Steve’s network details for coordinates. Google automatically converts to ddmmss. Any ideas?

  • Lazar // August 24, 2008 at 12:20 pm

    Hank,

    feedbacks — huge rainfalls, extreme erosion, lots of fresh carbonate rock exposed, more rainfall

    That seems to run contrary to

    The long-term carbon cycle, fossil fuels and atmospheric composition
    Robert A. Berner
    Nature 2003

    The deposition of carbonates derived from the weathering of carbonates is not shown because these processes essentially balance one another over the long term

    Weathering of silicates -> deposition of carbonates is a potential negative feedback
    Decomposition of deposited carbonates is positive.
    Weathering of organic sediments (kerogen) is positive.

  • Hank Roberts // August 24, 2008 at 6:02 pm

    Lazar, rate of change.

    Did you read the paper and look at the illustrations? Follow up the footnotes and check citing articles?

    On the longer time scale, the entire PETM is just a little blip.

    On the human time scale, a few decades or centuries of extreme precipitation events is a disaster.

    Look at the rainfall in the American Southwest recently — there are large alluvial fans of debris below mountain valleys on which people have built houses, relying on the geologists’ evidence that no flash floods have reached that far down the drainage since the last big episode of extreme precipitation around the end of the last ice age. Remember how we’ve had a period of ten thousand years of unusually stable climate?

    Those areas have had floods recently again, from unusually large thunderstorms.

    Look again at the paper I cited:
    http://ic.ucsc.edu/~jzachos/eart120/readings/Schmitz_Puljate_07.pdf

  • Steve Reynolds // August 24, 2008 at 7:15 pm

    “Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it…”

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

  • carl // August 24, 2008 at 8:07 pm

    Lazar says,
    “I think it would be good if Steve did a reconstruction. Show the world how to do it right. Join the Team!”

    That’s not what he does. He audits, using his specialty to do so. It’s not his job to release reconstructions; it’s the paleoclimatologists’ job to do that.

  • MrPete // August 24, 2008 at 8:47 pm

    Lazar: this is a typical data format/conversion challenge. The data you’re using is
    ddd.mm -ddd.mm
    38.46 -104.59

    Change to the following for Google Maps:
    ddd mmN ddd mmW
    38 46n 104 59w (those are spaces)

    How to know if the data is ddd.dd or ddd.mm? See if any fractions are higher than 59 — minutes can’t be :)
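MrPete's recipe can be sketched in a few lines. This is a hypothetical helper written for illustration (the name `ddmm_to_google` and the error check are mine, not MrPete's); it splits each ddd.mm value into whole degrees and whole minutes and emits the space-separated form Google Maps accepts, refusing values whose fraction exceeds 59 since those must be decimal degrees.

```python
def ddmm_to_google(lat, lon):
    """Convert a ddd.mm pair (degrees.minutes, NOT decimal degrees)
    to the 'ddd mmN ddd mmW' form Google Maps accepts."""
    def split(v):
        deg = int(abs(v))
        minutes = round((abs(v) - deg) * 100)
        if minutes > 59:
            # Per MrPete's heuristic: a fraction above 59 can't be minutes,
            # so the input is probably decimal degrees instead.
            raise ValueError("fraction > 59: looks like ddd.dd, not ddd.mm")
        return deg, minutes

    lat_d, lat_m = split(lat)
    lon_d, lon_m = split(lon)
    ns = "N" if lat >= 0 else "S"
    ew = "E" if lon >= 0 else "W"
    return f"{lat_d} {lat_m}{ns} {lon_d} {lon_m}{ew}"

print(ddmm_to_google(38.46, -104.59))  # -> 38 46N 104 59W
```

The example reproduces MrPete's worked case: 38.46 -104.59 becomes "38 46N 104 59W".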

  • Gavin's Pussycat // August 24, 2008 at 9:05 pm

    carl,

    there are cats like that, that know precisely what to do, and are explicit about it, but after a visit to the vet cannot do it themselves any more :-)

    They are called ‘consultancy cats’. Steve is a bit like that.

    He has no excuse, having worked with CFR software and produced results (well, there were ‘issues’, but fixable if he wants to.)

    TCO sees through his game, while being ideologically motivated not to; all it takes is opening your eyes.

  • Lazar // August 24, 2008 at 9:59 pm

    MrPete,

    Great. Thanks!

  • Lazar // August 24, 2008 at 11:15 pm

    TCO;

    I think there is some utility to Steve giving the system a little bit of a kick in the ass.

    Okay.

    But at this point, fixes/improvements are very unlikely to come from him or his ilk. [...]

    Agreed.

    [...] At this point, the main bad thing is that a lot of my fellow conservatives are sitting in echo chambers and listening to Steve and being amen choirs

    The psychology of denial…
    The online types who clutter the forums or write political blogs are mostly activists/pamphleteers who have been spoonfed whatever the most recent, practical policy implications of conservatism were as an ideology labelled ‘conservatism’, without really understanding either the underlying philosophy nor the practical circumstances surrounding those policy choices. E.g. they repeat 80s policy as mantra without addressing new problems, or they imagine those policies are practical solutions. A mantra is small government- and market- fundamentalism. Any proposed ‘fact’ in agreement is taken as gospel. They have the ideology bug, it won’t be shifted by facts or questioning or argumentation ’till you’re blue in the face. They are not intellectually curious. They are best ignored as they are increasingly irrelevant. When the online conservative world read CA, they received great comfort. When I read CA, I was scared. I’m an old type of, very, very, very traditional conservative (read Edmund Burke, John Adams). Trust in and respect for authority / the experts / the village elders… because that is what works (mostly). We live in advanced technological societies experiencing unprecedented rates of technological and cultural change… the possibility that the elders are idiots means the train comes off the rails. It scared me more than the worst implications of AGW did. So I read around Steve, I read what the experts were saying. I don’t expect the pamphleteers to do so. Observe there is an increasing disparity between the pamphleteers and the conservative masses, including the base. In the primaries, the pamphleteers supported Fred Thompson and despised McCain. The base, even the base, trust the scientists according to recent polls. Ordinary conservatives, ordinary people of whatever political stripe, have great instincts. Politics is a sewer. ‘Trust the people’ — Churchill. Not pseudo-intellectuals, too much noise in their heads. 
The pamphleteers are not your companions, TCO, they can’t follow where you’re going and you can’t take them. Perhaps you’ll find better companionship here? Many if not most are liberals/lefties, but what does it matter? Man is not a political animal. Everything does not reduce to a political question. That’s a Marxist point of view. That’s the frame through which the pamphleteers view things. Politics sucks, man.

  • Lazar // August 25, 2008 at 12:35 am

    Hank,

    On the human time scale, a few decades or centuries of extreme precipitation events is a disaster.

    … I’m certainly not disputing, ditto for rates of change. I’ll do more reading to try and see where you’re going with the feedback thingy.

  • Lazar // August 25, 2008 at 12:56 am

    TCO;

    Steve has avoided (in a John Edwards/Bill Clinton manner) coming to grips with defining how his VERY SAMPLE-DEPENDENT “red noise” gives more of an effect than simple red noise.

    I note he gave a nod to that issue in his first post on the most recent Wahl & Amman release… but only perceptible to those previously aware of the issue. I agree he needs to address it head-on.

  • Hank Roberts // August 25, 2008 at 1:44 am

    > He audits, using his specialty

    Chuckle. But look at his publications.
    Look at the people replicating his work and citing it.

  • Lazar // August 25, 2008 at 3:37 am

    Carl,

    That’s not what he does. He audits, using his specialty to do so. It’s not his job to release reconstructions; it’s the paleoclimatologists’ job to do that.

    Some people collect stamps.

    Once the rockets are up, who cares where they come down
    “That’s not my department!” says Wernher von Braun – Tom Lehrer

    In a facile manner (sorry), I’m trying to say the view is too narrow.

    If you want to improve things, and your strategy doesn’t work, you change strategy.

  • george // August 25, 2008 at 5:58 am

    “Data access reform…
    They are not helping the cause of open access by demanding data so they can dump on it…”

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

    While that may be the perception that some try to encourage, I think the truth is actually a bit different.

    I suspect that some scientists simply could not be bothered giving McIntyre and some others data because

    1) the scientists do not see these people as serious about doing real science (investing the time in properly understanding the issues, attending scientific conferences, publishing in peer reviewed journals, etc)
    2) the scientists have been turned off by the modus operandi of such people (with all that entails)
    3) the scientists have better/more important things to do with their time

    Admittedly, this interpretation of reality is a little more mundane than the alternative — not quite as fraught with high drama, mystery and intrigue:

    No conspiracies to hide data
    No “Piltdown man” frauds
    No “greatest hoax ever perpetrated on the American public”s
    No efforts to squelch whistle-blowers
    etc, etc

    But reality is often a little less exciting than we might have it.

  • Ray Ladbury // August 25, 2008 at 12:40 pm

    George–one thing to add to your list: Science doesn’t audit. It replicates independently. If I wonder about a colleague’s data, I don’t ask him for the data and try to redo his analysis. I gather data myself and look to see if it is consistent with my colleague’s data. Scientists are not bean counters–or bristle-cone counters. The problem scientists have with McIntyre is that his whole attitude and oeuvre betray a flawed understanding of the scientific method.

  • Dano // August 25, 2008 at 12:40 pm

    george @ August 25, 2008 at 5:58 am:

    Bingo.

    Good to see echoes of Dano this long after the events.

    Best,

    D

  • Hank Roberts // August 25, 2008 at 12:48 pm

    There’s a large literature available to which those folks could contribute if they worked out their method and described it so others could use it. The fact that they don’t makes people think it’s mostly grandstanding.

    Examples of doing it right:
    http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VD0-4SNHP07-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_version=1&_urlVersion=0&_userid=10&md5=84644c690299e23cdb381c93c323c336

  • Hank Roberts // August 25, 2008 at 12:50 pm

    better link:
    http://dx.doi.org/10.1016/j.im.2008.03.004
    general search:
    http://scholar.google.com/scholar?num=100&hl=en&lr=&newwindow=1&safe=off&scoring=r&q=spreadsheet+errors&as_ylo=2008

  • Hank Roberts // August 25, 2008 at 1:10 pm

    And, on why journals are better than blogs for making real contributions:
    http://ars.userfriendly.org/cartoons/?id=20080825

  • Lazar // August 25, 2008 at 3:18 pm

    Steve Reynolds,

    The concern that others will find something wrong with his data seems to me the worst possible excuse for a true scientist to withhold data.

    I don’t mean a genuine audit, finding genuine errors. I mean doing a hack PR analysis on the released data, e.g. conflating weather with climate to unfairly disparage the reliability of climate models, and spreading this confusion through the gullible media. Scientists are not going to want to release data if that’s the way the ‘auditors’ audit.

  • Lazar // August 25, 2008 at 3:34 pm

    … AGW is a serious issue. Scientists have a responsibility a) as citizens and b) as scientists. When data is abused for PR purposes, scientists have a responsibility to clear the mess up, which means dropping what they’re doing. If CA cannot act responsibly, they undermine the cause for open access. What do they want? Is it open access, or is it PR?

  • apolytongp // August 25, 2008 at 3:34 pm

    Ray:

    Mann did not gather data. What he did was put together an algorithm. An equation. A statistical machine that crunched input and generated output. A math function (or relationship if you are pedantic) in the very broadest math sense. Examination of the algorithm, by running the same data through variants of the algorithm or other data through the same algorithm (MM, Huybers, WA, Wegman, VS-Z, etc.), is a reasonable and insightful thing to do.

    My issue with McIntyre is that he confounds the “message broadcasting” (mostly on his controlled blog, no less) so much with his analysis (doing isolated cases for effect, dotcom stock example, etc., only reporting the things that make MBH look bad, not quantifying things that sound bad but have minor impact, etc.) that we get little real understanding of the algorithm-data couple. And his acolytes listening to him are not really curious either. Heck, I know I’m clueless… but I at least sorta have a feel for that–am not out in the Rumsfeldian unknown-unknown land. The amen choir just enjoys the social/political frolic and doesn’t try to think. Zorita, Burger… even Mosher, JohnV… heck, even bender occasionally, show more real curiosity.

    Mann is defensive and not thoughtful either… very ego-driven “young Turk” type scientist, rather than curious mathematician. Unwilling to show work so it can be examined, writing atrociously, not sharing all details of his math method, and seeing comments as a battle for PR rather than exploration of phenomena. While I understand that you all are politically sympathetic with him (e.g. Daily Kos discussion, liking Obama, etc), I am cheered when I see Gavin’s Pussycat (e.g.) thinking independently, giving Tammy a check every now and then.

    P.s. (Pre-emptive strike) Please spare me any blather about how I don’t understand scientists, how great they are either. I’ve name-dropped enough to show that I have some experience. And I find that (e.g.) the union-card-holders most proud of their status are the ones who are the most minimal.

  • apolytongp // August 25, 2008 at 3:35 pm

    Crap, stupid wordpress: Should be under TCO (not a sock-puppet, but wordpress won’t let me have the nickname TCO. Someone already had it.)

  • apolytongp // August 25, 2008 at 3:37 pm

    Lazar: Scientists should release data and methods regardless of whether their opponents will misuse them. And it is also perfectly natural to worry about people finding mistakes. Most publications have some. And most scientists accept a standard that is well shy of the attitude of mathematicians with theorems. God knows, someone could come after me and find things I did wrong. It’s natural to not like that.

  • Ray Ladbury // August 25, 2008 at 5:02 pm

    TCO/apolytongp, What would be learned by running the same data through the same algorithm? The most you can hope to catch with that is a simple error, or, if you think it’s occurring, outright fraud. That’s not how science works. Rather, the way a scientist would “replicate” the result would be to develop an algorithm and dataset independently and see if it produced similar results. That way, you test not just the data or the algorithm, but also the assumptions behind the analysis. If there is disagreement, it usually gets hashed out at conferences. McIntyre’s methodology is fundamentally unscientific. It is my impression that he doesn’t really have the discipline to submit to the usual peer-review process.

  • TCO // August 25, 2008 at 5:22 pm

    Ray:

    It’s a control. I just talked about how it’s interesting to vary the algorithm and/or the data set to explore the impact. I’m in shock that you would need to ask that.

    Oh, and given how complicated the algorithms (and datasets) are it’s doubly important to do a control. Heck, Wegman, MM, WA all have been expected to first demonstrate ability to replicate. And it was not trivial. Heck, Steve still doesn’t know how to get the error bars (the math equation is not shared in the paper).

  • TCO // August 25, 2008 at 5:24 pm

    I agree that McI lacks discipline and that going through peer review would be beneficial (it would tighten his wandering logic and explication), but he is too lazy or tendentious to do so.

  • george // August 25, 2008 at 6:29 pm

    the way a scientist would “replicate” the result would be to develop an algorithm and dataset independently and see if it produced similar results.

    I think what John Van Vliet did is a good example of this.

    His effort provided far more insight into whether the NASA GISTEMP algorithms and implementation (i.e., code) are doing what NASA claims than simply recompiling and re-running the NASA code with the same (or even similar) data would have done.

    That is especially true when the algorithm and/or computer code might be less than transparent, as some have complained about (incessantly) in the case of GISTEMP.

    If the people who are supposed to be “repeating the experiment” have trouble even compiling the code, should I really trust that they got the implementation of Hansen’s algorithm right?
    Give me one good reason why?

    If they do manage to eliminate all the compiler errors and end up getting a different result from Hansen when they run the code on the NASA dataset, is it due to some error in the original algorithm, in the implementation, or in the effort to replicate? Other than having yet a third party attempt to repeat the effort precisely, how does one decide?

    like it or not, this is where expertise is highly relevant.

    Sorry, but I, for one, would have very little faith that someone who seemed to have so much trouble compiling would be able to implement an algorithm correctly. (I’m saying that based on my years doing scientific programming)

  • Ray Ladbury // August 25, 2008 at 6:39 pm

    TCO, you are missing the point. All you can answer by redoing an analysis is: Did they do it right? That’s not all that interesting, and depending on the disposition of the auditor, you tend to get a predictable antagonistic or sympathetic bias. Such an audit or control would be appropriate within a collaboration–after all, they have access to the data, code and people on a daily basis.
    Once the results are published, the time for audits is over. Then, the work must be independent. That’s how science works: audits are internal; replication is external and independent.

  • Steve Reynolds // August 25, 2008 at 8:45 pm

    Lazar: “I don’t mean a genuine audit, finding genuine errors. I mean doing a hack PR analysis on the released data, e.g. conflating weather with climate…”

    While I have not seen any evidence of McIntyre doing that, it still does not matter. Scientists should not withhold data that public funding has paid for (except possibly some military application data).

    At least for me, the negative effects on credibility associated with withholding data and methods are much worse than what any ‘hack analysis’ can generate.

  • Steve Reynolds // August 25, 2008 at 9:05 pm

    Ray: “Once the results are published, the time for audits is over. Then, the work must be independent. That’s how science works: audits are internal; replication is external and independent.”

    I’ve never seen that stated as part of the scientific method before. Is that Ladbury’s Law?

    Is that how the error in the satellite temperature measurements was resolved? Did RSS get their own satellite?

  • nanny_govt_sucks // August 25, 2008 at 9:06 pm

    All you can answer by redoing an analysis is: Did they do it right? That’s not all that interesting,

    Are you serious? What if the answer is “THEY DIDN’T DO IT RIGHT”. Are you saying you would be uninterested in this result?

  • apolytongp // August 25, 2008 at 9:35 pm

    No, I’m NOT missing the point. You can do more than that. You can explore parameter space and have a control.

  • MrPete // August 25, 2008 at 9:41 pm

    I can appreciate both sides of this internal/external tiff.

    I think it is worth reminding ourselves that falsification is a rather important element of science.

    The problem here is that so much of the controversial science work is qualitatively different from what we’re used to. In essence, it is statistical analysis of data sets.

    When both the data and the analysis are opaque, what does “falsification” mean in practical terms? Kinda hard to falsify when the data and analysis being compared are opaque.

    I can see why some of these things have emerged over time (e.g. good dendrochronology doesn’t depend on explaining all the data), and I can see that a new generation of dendros is/will take things to a new level.

    At this early stage in the development of a science, I think “did they do it right?” is a valid, even important question.

  • David B. Benson // August 25, 2008 at 9:47 pm

    Arthur Andersen & Co. used to audit. Do you know why they don’t anymore? And why this might be relevant to one of the comment items just now?

  • David B. Benson // August 25, 2008 at 9:49 pm

    Steve Reynolds // August 25, 2008 at 9:05 pm — In at least parts of physics and chemistry, nobody accepts the results (except maybe the authors) in a paper until the effect has been independently replicated.

    Cold fusion, anyone?

  • Lazar // August 25, 2008 at 10:08 pm

    TCO;

    Scientists should release data and methods regardless of whether their opponent will misuse it.

    That is ethics. But how to pragmatically achieve open access given human nature; how do scientists respond to misuse/PR spin of data and methods? If open access is what CA/Stockwell/Watts really, really want, they need to drop the PR. Show responsibility. Gain trust. Otherwise, they’re just harming the cause.

    From a selfish point of view (I’d like access), and from a societal point of view of greater technological progress, I’d agree in the general case that they “should” release data and code where copyright and grant terms allow.
    For this particular issue, at this particular time, I’d side with scientists selectively releasing their data and code until the dust has settled. That point of view is entirely due to political and corporate corruption and the pressing nature of the issue(s).

  • Ray Ladbury // August 25, 2008 at 10:56 pm

    Steve Reynolds, One of the precepts of the scientific method is INDEPENDENT verification. How do you remain independent if you are sharing data, codes, ideas, etc.? These things get shared internally within a research group. The methodology is summarized in the paper. If reviewers do not understand how the research was done from the description, they will ask for clarification. Once they are satisfied, the research is published, and subsequent efforts have to be independent.

    The reason why “did they do it right?” is not all that interesting is that the answer emerges in the process of independent replication, and by that time, any incorrect research will likely have been supplanted.

    MrPete says: “The problem here is that so much of the controversial science work is qualitatively different from what we’re used to. In essence, it is statistical analysis of data sets. ”

    Huh? Exactly how is statistical analysis of datasets anything new in science?

    And falsification? Dude, science has SO moved beyond Karl Popper! There are information theoretic approaches, Bayesian approaches… Yes, falsification is an important aspect of science, but it is not the entire story. Do you really expect evolution to be falsified? Gravity? Climate science is over a century and a half old. I rather doubt the basic model of Earth’s climate will look dramatically different in 100 years. Details will change. We’ll understand inter-relations between forcers and feedbacks better, and we may even find a few new forcers, but the outline of the theories would likely be recognizable to a climate scientist of our time.
    I really don’t think anything I’ve said is all that controversial among people who actually DO science.

  • HankRoberts // August 25, 2008 at 11:11 pm

    “… On a non-drought related point – it’s fascinating that sceptics like to use, when it suits their purpose, the same temperature series they discredit to prove their latest “cooling” idea. Surely they can’t have it both ways. The excellent Wood-for-Trees website allows one to plot, compare and contrast, in as many ways as you can imagine, the relative differences between the two ground-based and two satellite temperature analyses. The trends are very similar, with the major differences being in how GISTEMP treats averaging of stations across the Arctic, and in the different baseline periods used to compute the ‘temperature anomaly’….”

    Found here:
    http://bravenewclimate.com/2008/08/24/dr-jennifer-marohasy-ignores-the-climate-science/

    Points to here:
    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.146/mean:12/plot/uah/from:1979/mean:12/plot/rss/from:1979/mean:12/plot/gistemp/from:1979/offset:-0.238/mean:12
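    [Editor's note: the woodfortrees URL above chains two operations, a constant baseline offset and a 12-month mean. A minimal sketch of both, using invented monthly series (the numbers are hypothetical stand-ins, not real GISTEMP/HadCRUT data; the 0.238 offset merely mimics the one in the URL):]

```python
import numpy as np

def running_mean(x, window=12):
    """12-sample running mean, like the mean:12 operation in the plot URL."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Two hypothetical monthly anomaly series sharing the same underlying trend
# but computed against different baseline periods, hence a constant shift.
rng = np.random.default_rng(1)
months = 360
trend = np.linspace(0.0, 0.5, months)
series_a = trend + rng.normal(0, 0.1, months)
series_b = trend + 0.238 + rng.normal(0, 0.1, months)  # shifted baseline

# A constant offset (like offset:-0.238 in the URL) puts both series on a
# common baseline, so the smoothed curves can be compared directly.
smoothed_a = running_mean(series_a)
smoothed_b = running_mean(series_b - 0.238)
```

    Once the offset is removed, any remaining divergence reflects the analyses themselves rather than the arbitrary choice of baseline period.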

  • Steve Reynolds // August 25, 2008 at 11:46 pm

    Lazar: “…from a societal point of view of greater technological progress, I’d agree in the general case that they “should” release data and code where copyright and grant terms allow.”

    If I can believe what I read at CA, most ‘grant terms’ not only allow, but require sharing data and methods.

    Do you think scientists concerned about negative PR should be able to use that as an excuse to violate their grant terms?

  • David B. Benson // August 25, 2008 at 11:55 pm

    Ray Ladbury // August 25, 2008 at 10:56 pm — Nothing you’ve written is in the least controversial for actual scientists.

    That said, with ever greater use of computational experimentation, in some research specialties there is a movement towards making the computer programs available to the audience of the journal accepting the (summarizing) paper. The advantages of doing so are most unclear to me; I have enough trouble re-reading my old research codes when I need to go back to those; certainly don’t want to have to look at someone else’s.

  • Ray Ladbury // August 26, 2008 at 1:58 am

    David Benson, When I have developed an analysis technique, I am more than happy to share it. If people call, I can help them through the steps of the analysis, but typically, unless it’s a pretty straightforward analysis, I prefer to let them reproduce it independently, with guidance only as needed. In part, this is because I know it can always be improved upon, and what they come up with might be much better. I don’t see much advantage to passing code between groups.

  • apolytongp // August 26, 2008 at 5:20 am

    Well let me fill you in, Ray. The advantage is that the algorithm is exactly described by the code, but generally NOT properly described in the paper. For instance in MBH98, the acentric standardization was NOT listed in the paper. Capisce?
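    [Editor's note: the "acentric standardization" at issue means centering each proxy series on a sub-period rather than on the full record before extracting principal components. A toy sketch with invented red-noise data, not MBH98's actual code or data:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy proxy matrix: 100 "years" by 5 "proxies" of red noise.
# (Invented data, purely to illustrate the centering choice.)
proxies = np.cumsum(rng.standard_normal((100, 5)), axis=0) * 0.1

def standardize(X, rows):
    """Center and scale each column using only the given rows:
    slice(None) is conventional full-period centering; a sub-period
    slice is the 'short' (acentric) centering being discussed."""
    mu = X[rows].mean(axis=0)
    sd = X[rows].std(axis=0, ddof=1)
    return (X - mu) / sd

full = standardize(proxies, slice(None))      # center on the full record
short = standardize(proxies, slice(78, 100))  # center on a late sub-period only

# Leading pattern (first right singular vector) under each choice; under
# short centering, a series whose sub-period mean departs from its long-term
# mean keeps a large offset over the rest of the record, inflating its weight.
pc_full = np.linalg.svd(full, full_matrices=False)[2][0]
pc_short = np.linalg.svd(short, full_matrices=False)[2][0]
```

    The point of the sketch is only that the centering choice is part of the algorithm: two implementations that differ on it can produce different leading patterns from identical data, which is why leaving it out of a methods section matters.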

  • apolytongp // August 26, 2008 at 5:24 am

    For instance, the error bars are a mystery. The paper doesn’t give an exact algorithm or equation that allows for reproducing the lines in the figure.

  • Philippe Chantreau // August 26, 2008 at 5:27 am

    TCO, I agree with Lazar in his 10.08 post. When you see how some individuals (for lack of a better word) can torture data, it’s probably better that these data are not released to them.

    You say “opponent” but do not emphasize that the opponent must have certain qualifications and ethics in order to make data release truly productive from a scientific point of view. Ray nicely summarized why that is: real expertise is needed.

    Some “skeptics” (Watts comes to mind) have shown incompetence and bias. They do not deserve to be given data if all they’re going to do is to foster talking points with no regard for proper analysis. Even from a truly skeptical point of view (as in striving to understand reality), that will do more harm than good.

    Nincompoops who believe that they know better than anyone else and the complacent media who give them a voice have forever made data and method release an impossible dilemma for real researchers. That’s a shame.

    Hence, it is understandable, and possibly beneficial, that some are reluctant to release anything.

  • tamino // August 26, 2008 at 8:45 am

    I’m traveling today, and will be on the road all week, so moderation may be a bit slow.

  • MrPete // August 26, 2008 at 12:09 pm

    Ray sez several things, including: “The reason why ‘did they do it right?’ is not all that interesting is that the answer emerges in the process of independent replication, and by that time, any incorrect research will likely have been supplanted.”

    Several things have been noted that are quite correct in theory, but (surprisingly?) not in today’s reality. In the current compressed environment (so to speak ;)), we’re trying to act on data before anything truly independent is done, and before incorrect research is supplanted. Instead, with falsification considered boring if not offensive, we assume the validity of what was published yesterday, and build on top of that.

    Ignoring the dreck, there are multiple “deniers” who are rather well qualified to comment in their areas of expertise, which others are simply reluctant to accept. Sure, they get upset and comment outside their expertise. It’s a rare person who sticks 100% to their knitting.

    Re the stability of science. I’m amazed that you present such a…boringly stable… perspective. How big a shift does it take for you to think something quite different has been learned? Sure, the “basic” outline is known. But do we really know enough to know that action X is needed, and will help?

    Just consider how much our understanding of methane has shifted in the last few decades. Ruminant sources (1970’s?) Trees moving from carbon sequestration to possible source of GHG (2006). Oceans as methane source (2008). These are pretty significant items.

    Trees are a huge topic. How long have we “more or less known the answer?” They thought we did only a decade ago at Kyoto. Now we ask questions like: do we grow more to sequester more carbon, grow less to avoid methane release? Do we replace forests with farming to produce biofuel? Or does it even matter? Those have been, and in some cases still are, pretty good questions with the potential to not just shift our understanding by 5% somewhere, but change our actions.

    Bottom line: I think “getting it right” is still a pretty important question.

    Nincompoops, complacent media, and passionate campaigners exist everywhere. Hiding is not going to solve those problems; it’s a false dilemma to think hiding data and methods is gonna help. Sunlight exposes the truth. Both internally and externally. It’s a lesson learned well in many arenas of science, math, technology of all kinds. It’s time for this arena to join the fun.

  • apolytongp // August 26, 2008 at 1:17 pm

    If you only reveal secrets selectively, that makes me believe your claims less. A Feynman would not be scared.

  • Ray Ladbury // August 26, 2008 at 1:21 pm

    TCO/apolytongp, sorry, but the error bars need to be determined independently as well. You can argue that the reviewers should have demanded clarification of the procedure, but once it’s published, the analysis stands or falls on its own. There is no benefit in repeating an analysis that has been published previously. If there are problems reproducing the results, that comes out of the independent efforts of other groups. Science says: Do your own freakin’ analysis. Don’t rely slavishly on what was done before, since this only slows progress.

  • kevin // August 26, 2008 at 1:24 pm

    Steve Reynolds: I haven’t been reading CA, so I don’t know exactly what was said; please clarify what you mean by “sharing”: Are they saying that the terms of most research grants require that data be made *publicly* available, i.e. to anyone and everyone?

    I have no experience with climate science research grants. But in my (admittedly limited) experience with grant funded research in psychology, I’ve never seen a requirement that the raw data be made public.

    I’m not saying you definitively can’t believe what you’ve read, but I wouldn’t recommend uncritically accepting it, either. A skeptic wants evidence for claims he would like to believe as well as claims he does not want to believe, eh?

  • HankRoberts // August 26, 2008 at 5:59 pm

    > The paper doesn’t

    You know how science is done.

    Look in the journals citing that 20-year-old paper for any comments on it. Look at subsequent work by the authors. See if they have (as one would expect in the usual course) improved their methodology and presentation over time.

    Possibilities could include:

    – they persist repeatedly in doing exactly the same thing, and nobody complains about it in the journals.

    Conclusion, you’re the crank.

    – their work and presentation changed over time, along lines suggested in the journal comments.

    Conclusion, you’re not looking past the first paper. You should look for normal developments, not fix your attention on one 20-year-old paper, or you’re the crank.

    – their work and presentation changed over time, without incorporating suggestions made as comments in the journals. Commenters there continue to advise other methods or explanations.

    Conclusion, you’re at least turning in the same direction as those in the field who are particularly qualified to comment on the work, and you’re a cheerleader.

    – the authors’ later work never changes along the lines you wish their first paper had been done, and nobody except McI and CA continue to comment on the 20-year-old paper.

    Conclusion: beware being a fanboy.

    – the authors’ later work never changes along the lines you wish their first paper had been done, and nobody anywhere comments in agreement with you.

    Conclusion: you’re a genius, and an LP!
    Play on, brother!
    http://en.wikipedia.org/wiki/Eppur_Si_Muove

  • Gavin's Pussycat // August 26, 2008 at 6:17 pm

    apowhatever:

    I am cheered when I see Gavin’s Pussy (e.g.) thinking independantly, giving Tammy a check every now and then.

    Poppycock. I correct Tamino’s typos as a maintenance contribution to a great resource that I use in my own teaching (currently in a summer school on Iceland, well received, thank you Tamino!)

    Lot to learn, you have.

  • carl // August 26, 2008 at 6:49 pm

    lazar – He audits the scientific community in a way that no one else does. He has consistently provided great critique of paleoclimatic reconstructions, doing the whole climate change community a service it should already have performed for itself. McIntyre does the work that those responsible will not do themselves. He does the work that he doesn’t have to do and he is a tremendous help to the community. How about the people actually responsible for paleoclimate make a reconstruction without fatal errors?

  • matt // August 27, 2008 at 1:47 am

    Phillippe Chantreau: TCO, I agree with Lazar in his 10.08 post. When you see how some individuals (for lack of a better word) can torture data, it’s probably better that these data are not released to them.

    Do you really believe that because some people might take information and use it for their own agenda that information should be withheld? Are you kidding me? Is this true for wars? Is this true for governments? Is this true for big corporations that are leasing the resources of this country?

    Do you really mean this?

    Could we extend it further and state that if a fact exists that might harm a cause that you believe is just, it is OK to withhold or suppress that truth?

    Your statements scare the hell out of me, honestly. I hope you think about them and retract them.

  • dhogaza // August 27, 2008 at 3:36 pm

    Do you really believe that because some people might take information and use it for their own agenda that information should be withheld?

    If I knew that the recipient were going to use the data to, in essence, lie, why yes. Why cooperate with a liar?

    Could we extend it further and state that if a fact exists that might harm a cause you believe is just, it is OK to withhold or suppress that truth

    It’s not truth that annoys scientists, matt.

  • apolytongp // August 27, 2008 at 4:07 pm

    Ray,

    1. Having the exact equation (and data) is useful if one is doing a study, comparing how variants of the “algorithm-data” combination perform. It’s a CONTROL.

    2. Mann’s work was analytical and mathematical in nature. He did not gather data. He did a meta-analysis. To a great extent, his work product IS the algorithm. If he had just drawn the charts and had NO methods section, he would have never gotten published. So why justify a poor methods section?

    3. A lot of time has already been spent trying to dig out exactly what Mann did, which was wasted time. For instance, Mann does not disclose the acentric standardization.
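    [Editor's note: the "control" idea in point 1 is just the experimental-design sense of the word. A toy sketch with an invented analysis and invented data (nothing here is anyone's actual method): replicate the published algorithm-data pair exactly as the reference point, then vary one factor at a time.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a published analysis.
def reconstruct(data, centering="full"):
    """Toy analysis: center the series, report the late-period mean."""
    mu = data.mean() if centering == "full" else data[:50].mean()
    return (data - mu)[-10:].mean()

published_data = rng.standard_normal(100).cumsum() * 0.05  # invented "published" series
alternate_data = rng.standard_normal(100).cumsum() * 0.05  # invented alternative series

# Exact replication of the published algorithm-data pair is the control;
# each run below changes exactly one factor so its effect can be isolated.
control = reconstruct(published_data, "full")
vary_method = reconstruct(published_data, centering="short")
vary_data = reconstruct(alternate_data, "full")
```

    Without the control run, a difference between `vary_method` and the published figure could be blamed on the replication itself rather than on the methodological variant being tested.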

  • Philippe Chantreau // August 27, 2008 at 4:44 pm

    Matt, please add a little more grandiloquent drama to your self-righteousness. I’m about to be moved, I swear.
    I ain’t retracting nothing so long as clowns like Watts or the buffoons of “CO2 Science,” etc. are out there trying to manipulate the masses.

    You guys don’t even think about examining statistical evidence used by the pharmaceutical industry and would never propose to hold it to the same “standards” that you whine so loud about for climate science. How about chemical applications? Do you ask for data release there? If not, why not?
    How about botched or suppressed environmental assessments? Where is the “skeptic crowd” drama to defend the scientists who want to have their data made public but can’t because officials distort or drop their work altogether? Read what Pat Neuman has to say on RC, then come back with your self-righteous salad. I’ll buy it when you’re all no longer so selective in your whining.

    Climate science has much more information available out there than many or most fields. Funny you mention corporations leasing resources, when the policies for them were decided by a secret “Task Force.” Is the skeptic crowd whining for Cheney to release that info? If not, why not?

    You prove my point: you want only a certain type of info released so that you can attack the conclusions drawn from it because you don’t like them. That’s what the big outcry about Hansen’s code was about and now that it’s out, nobody talks about it any more. Could it be because there is really nothing there to talk about?
    Were you all outraged that the administration wanted to silence Hansen? I do not recall a blog post by you saying as much, but I might have missed it. Did I?
    You’re all about freedom, access and transparency, sure.

    And here I am, stating my opinion, and first thing you do is try to make me retract it. I did not propose any legislation, or general rule, or even any action, but you immediately want me to shut up. Funny.

    I’m sure you’ll have another very moving come back to this, but I won’t engage in a blogging match with you. I have a job and a life and blogging is very low on my priority list, so don’t waste too much time trying to make me look wrong, I don’t care.

  • HankRoberts // August 27, 2008 at 5:56 pm

    > He audits the scientific community in
    > a way that no one else does.

    And no one else does it this way because, er, um, ah …. why again?

    > because some people might take
    > information and use it for their own
    > agenda that information should be
    > withheld?

    And where do you keep your data files?
    Just curious to have a look. Trust me.

  • MrPete // August 27, 2008 at 8:44 pm

    Hank, interesting that your possibilities list presumes papers accepted by insiders are correct, and critics are wrong.

    You’ve never seen an accepted conclusion eventually overturned? Twenty years is a nit.

  • george // August 28, 2008 at 12:56 am

    Truly “independent” approaches, algorithms, computer code, etc. are usually best when it comes to “replicating” a scientist’s results (not the same as “duplication”: replication uses the same data but need not use the same methods).

    Even claims of independence have to be looked at carefully.

    There is a famous case of the latter, related by Richard Feynman (in “QED: The Strange Theory of Light and Matter”):

    It took two ‘independent’ groups of physicists two years to calculate this next term, and then another year to find out there was a mistake—experimenters had measured the value to be slightly different, and it looked for a while that the theory didn’t agree with experiment for the first time, but no: it was a mistake in arithmetic. How could two groups make the same mistake? It turns out that near the end of the calculation the two groups compared notes and ironed out the differences between their calculations, so they were not really independent.

    But if they had been truly independent, it is unlikely (or at least less likely) that they would have made the very same error.

    Also, it is quite possible to get the description of the “algorithm” that was used right in the methods section, but implement it wrong in the code.

    Second (and this may come as a complete surprise to some) it is also quite possible to compile the very same code on different compilers and/or with different compiler settings and get different answers when one runs the two executables!

    So, just getting someone else’s code to compile does not ensure that one has implemented their algorithm correctly (or even the way that they implemented it, correct or not).

    On the other hand, if you can’t even get Hansen’s code to compile, what does that say? (and don’t give me the lame excuse about his using FORTRAN and assuming a UNIX-like platform. If you are not familiar with those, you have no business complaining)

    Finally, there are lots and lots of cases where a simple explanation of an algorithm is much clearer than the computer code! Look in “Numerical Recipes” some time: some of their code is downright cryptic (especially their short variable names and the way they make something into its inverse without telling anyone), but their explanations are top notch (Tamino’s explanations are on par with theirs).

    As a computer engineer, I’d have to say that cryptic code is the rule rather than the exception (unfortunately). I’ve looked at far more than my share of spaghetti code and I have grown to despise it with a passion.
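    [Editor's note: the compiler point above comes down to floating-point arithmetic not being associative, so any compiler or optimization setting that reorders a summation can legitimately change the result. A minimal sketch:]

```python
import math

# Floating-point addition is not associative: the order in which terms
# are combined changes which low-order bits survive rounding.
vals = [1e16, 1.0, -1e16]

left_to_right = (vals[0] + vals[1]) + vals[2]  # the 1.0 is absorbed into 1e16
reordered = (vals[0] + vals[2]) + vals[1]      # the large terms cancel first

exact = math.fsum(vals)  # compensated summation recovers the exact sum
```

    Two builds of the same source that evaluate a long reduction in different orders (say, one vectorized, one not) can therefore produce bit-different results even though both are faithful to the written code.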

  • carl // August 28, 2008 at 1:16 am

    Philippe Chantreau –
    Wow, that’s a long explanation all to justify not releasing code that the global economy may make a significant investment on.

  • matt // August 28, 2008 at 2:48 am

    DHogaza: It’s not truth that annoys scientists, matt.

    Yes, as we’ve learned it is accountability, oversight, and archiving data.

  • matt // August 28, 2008 at 3:24 am

    Phillipe C. I did not propose any legislation, or general rule, or even any action, but you immediately want me to shut up.

    I don’t want you to shut up at all. I just want you to recognize how heavy-handed your statement was when applied across the board.

    Federally funded data SHOULD be free (unless nat security is involved). If you create your own data, on your own dime, then by all means, keep it private if you wish.

  • Steve Reynolds // August 28, 2008 at 4:01 am

    kevin: “I haven’t been reading CA, so I don’t know exactly what was said; please clarify what you mean by “sharing”: Are they saying that the terms of most research grants require that data be made *publicly* available, i.e. to anyone and everyone?”

    This was posted at CA: http://www.climateaudit.org/?p=2237

    “The NSF agencywide policy states that researchers are “expected to share with other researchers, at no more than incremental cost and within a reasonable time, the primary data, samples, physical collections and other supporting materials created or gathered.”1313National Science Foundation, NSF Grant Policy Manual, (Arlington, VA, 2005). ”

    Whether ‘other researchers’ includes the public, you will have to ask the NSF. However, I think it has become common practice to archive data on publicly available websites.

  • Ray Ladbury // August 28, 2008 at 1:13 pm

    apolytongp, If you are reproducing somebody else’s analysis, it is not a control, it is an undergraduate science lab. You have said yourself that all Mann did was perform a meta-analysis. What is to stop you from doing the same–from coming up with your own damned equation? Do the words “independent research” mean nothing over at CA? Oh, that’s right. They don’t publish.

    You say: “A lot of time has already been spent trying to dig out exactly what Mann did, which was wasted time. ”

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same. Unless you are doing a history of science paper, you have no business delving into somebody else’s code. And I haven’t seen the braintrust over at CA give anybody serious much reason to cooperate.

  • apolytongp // August 28, 2008 at 2:56 pm

    Philippe, I’m very much in favor of how crystallography data (in chemistry, responding to your point) is required to be put in the ICSD or JCPDF for archiving…and how reviewers will examine the data and the trial structure directly. This has paid huge dividends in clearing out poor structures, cleaning up the literature, etc.

    The rest of the stuff about eefill corporations and stuff sorta went past me. Let’s stick to the science and best practices. Not try to dumb everything down. (BTW, while pharma science reports are known to have a marketing angle…has been well discussed…the record keeping, statistics, double-blind practices, controls…are better than climate science.) And they should be. Lots of money in that stuff. And it’s important (lives depend on the insights).

  • Ray Ladbury // August 28, 2008 at 5:05 pm

    Steve Reynolds,
    What is typically done with “data” is that the principal investigator gets a first shot at it for a year or two. It is then made available–often over the Web. Different fields have different policies. NASA is usually particularly accommodating. DOE high-energy physics experiments–well good luck getting any of their data–at least in a usable form.
    However, as pointed out repeatedly, Mann et al. merely did a meta-analysis on publicly available data. No group is under any compulsion to share analysis methods–and as George and I have been saying repeatedly, it’s best if research methods remain independent.

  • t_p_hamilton // August 28, 2008 at 5:48 pm

    Time for facts straight from the horse’s mouth:

    NSF Awards and Conditions July 1, 2008, side by side comparison with previous bulletin, at
    http://www.nsf.gov/pubs/policydocs/rtc/termsidebyside.pdf

    pg 36
    “(c) The Federal Government has the right to:
    (1) Obtain, reproduce, publish or otherwise use the data first produced under an
    award.
    (2) Authorize others to receive, reproduce, publish, or otherwise use such data for
    Federal purposes.
    (d) (1) In addition, in response to a Freedom of Information Act (FOIA) request for
    research data relating to published research findings produced under an award that
    were used by the Federal Government in developing an agency action that has the
    force and effect of law, the Federal awarding agency shall request, and the recipient
    shall provide, within a reasonable time, the research data so that they can be made
    available to the public through the procedures established under the FOIA.”

    I suppose there is an upside for Mann, Hansen etc. since the federal government’s policy has not been taking action on global warming. :)

    pg 37
    “(i) Research data is defined as the recorded factual material commonly accepted in
    the scientific community as necessary to validate research findings, but not any of
    the following: preliminary analyses, drafts of scientific papers, plans for future
    research, peer reviews, or communications with colleagues. This “recorded”
    material excludes physical objects (e.g., laboratory samples).”

    I can tell you that a description of an algorithm is all that is necessary in my field, not the code itself.

  • t_p_hamilton // August 28, 2008 at 5:52 pm

    “MrPete // August 27, 2008 at 8:44 pm

    Hank, interesting that your possibilities list presumes papers accepted by insiders are correct, and critics are wrong.

    You’ve never seen an accepted conclusion eventually overturned? ”

    Not the way “auditors” are going about it. That is what we are trying to get across – what WILL work!

  • apolytongp // August 28, 2008 at 7:50 pm

    Ray:

    1. If I want to study the interaction of data and algorithm (e.g. with red noise) this is an extension in parameter space. A new experiment to learn how the two interact (as expected or something novel). Running the known data through the known algorithm and checking against the reported output is a control. The same concept applies if I want to look at alternate methods (e.g. Burger and Cubasch 05, WA, Huybers, MM, etc.) with known data. You run a control to help interpret your findings (knowing that the changes you made were confined to the part of the experiment you intended to change).

  • apolytongp // August 28, 2008 at 8:09 pm

    Ray:

    I have already stated that I think CA should publish. Don’t think I’m on his “side”. When I criticize Mann for a failing, it does not excuse McI…or vice versa. However, the “lack of good publishing by CA” is no excuse for poor disclosure of methods and/or data. (It may be a mild excuse for not helping them by discussion.)

  • apolytongp // August 28, 2008 at 8:46 pm

    george:

    Good points, but my reply is not to let perfect be the enemy of good. Yes, code is not always executable or easy to follow. That does not mean that there are no insights to be gained from looking at them.

  • apolytongp // August 28, 2008 at 8:49 pm

    Ray:

    I actually agree that MBH is too much picked upon, with its faults (implicitly or explicitly) extended by skeptics to larger areas. So what. That still doesn’t mean that the actual criticism of it in isolation is not valid. The problem is that both sides are so tied up in the battle of appearances, of inferences to policy, of appearing to look bad or good or hurt or winning an attack, that they fail to engage with the truth the way a mathematician would/should.

  • apolytongp // August 28, 2008 at 8:57 pm

    Ray: I recommend to you chapter 13 of E. Bright Wilson’s AN INTRODUCTION TO SCIENTIFIC RESEARCH. This book is a classic and the “motherhood” comments on clarity, methods, publishing actual data, disclosing all adjustments, etc. support my point of view.

  • apolytongp // August 28, 2008 at 8:57 pm

    http://www.amazon.com/Introduction-Scientific-Research-Bright-Wilson/dp/0486665453

  • apolytongp // August 28, 2008 at 10:16 pm

    t-p: I’m NOT SURE that that broad language would cover not sharing the code. But in any case, Mann specifically refused to share the algorithm!

  • Ray Ladbury // August 28, 2008 at 10:58 pm

    Apolytongp, If you were part of the research group that had published Mann et al. , AND you had not yet published, I would agree. A control would be fine. However, I don’t believe either of those two conditions are met. Therefore YOUR research needs to be independent of Mann’s and of everybody else who is not collaborating with you.
    By all means, Mann et al. and every other researcher need to make their methods sufficiently clear that the RESULTS can be reproduced–not the analysis. That they came up short in that respect and were still published speaks to:
    1)the novelty of the work as the first successful multi-proxy climate reconstruction on such a scale.
    2)the shortcomings of Mann et al.
    3)the shortcomings of the reviewers.

    However, the analysis was published, and it stands or falls on its own merits. There is zero benefit in helping McI et al. do their undergraduate lab experiment. Feel the pain and let it go.

  • HankRoberts // August 28, 2008 at 11:13 pm

    http://www.watoday.com.au/opinion/who-is-behind-climate-change-deniers-20080802-3ou6.html?page=1

  • David B. Benson // August 29, 2008 at 12:33 am

    Algorithms are now patentable, I believe. Furthermore, I am under the impression that the patent belongs to the university where the invention was made.

  • Lazar // August 29, 2008 at 2:11 am

    The World Avoided by the Montreal Protocol
    Geophys. Res. Lett., 35, L16811
    doi:10.1029/2008GL034590

    Without the Montreal Protocol, the effective equivalent stratospheric chlorine (EESC, combining the effects of chlorine and bromine) could, depending on the scenario chosen, have reached 9 ppbv by ~2025 [WMO, 2007] or even as early as 2002 [Prather et al., 1996] with growth rates typical of the late 1960s and early 1970s. We apply the UK Chemistry and Aerosols (UKCA) climate-chemistry model (section 2) to the problem of how climate would have responded [...]

    Column ozone decreases everywhere, with losses ranging from 5% in the tropics, through mid-latitude losses of 10–15%, to ~30% in Arctic and ~60% in Antarctic spring [...]

    The high-latitude ozone depletion would also have had a large effect on surface
    climate, with a further enhancement of the warming in the lee of the Antarctic Peninsula, similar to the observed surface temperature change, and a strengthening of the SAM. In the Arctic, the avoided ozone loss is associated
    with a warming of the Arctic Ocean and North America, with cooling over Western Europe and Siberia. These predicted changes are comparable to those expected by 2025 due to greenhouse gases [IPCC, 2007].

  • Steve Reynolds // August 29, 2008 at 2:20 am

    Ray Ladbury: “…Mann et al. merely did a meta-analysis on publicly available data.”

    My understanding is that much of that data was not public until forced to be so by McIntyre. Even now some of the data is only available in inconsistent versions, with no documentation as to which version Mann used. Also, the supposedly independent studies that have ‘confirmed’ Mann’s results used mostly the same data and methods, so are not independent.

    Ray: “… as George and I have been saying repeatedly, it’s best if research methods remains independent.”

    Whether that is best is your opinion, not some principle of the scientific method. Why not let each researcher choose what he thinks is the best method? If the process works as you say, the best methods will win out in the end.

  • Barton Paul Levenson // August 29, 2008 at 1:16 pm

    matt writes:

    DHogaza: It’s not truth that annoys scientists, matt.

    Yes, as we’ve learned it is accountability, oversight, and archiving data.

    Darn those rotten old scientists! From now on, let’s have all our science done by crackpot bloggers and right-wing talk-radio show hosts.

  • kevin // August 29, 2008 at 2:54 pm

    Note that the NSF terms posted by Steve R. and T.P. Hamilton are from 2005 and 2008, respectively. I realize that we’re having a mostly philosophical conversation about sharing data and methods, but for those who enjoy picking nits, does anyone know what the NSF official terms (or anything else that might indicate standard practice) were in, say, 1998?

    I’ll try to google it later, but I have an appointment now. BBIAB

  • Luis Dias // August 29, 2008 at 4:41 pm

    Do you really expect evolution to be falsified? Gravity?

    You can equate Climate Science to Gravity and Evolution endlessly, but still you miss the point. Just because something is “scientific”, it doesn’t follow that such something is as rock solid as “Gravity”, or I could say that Psychology is as rock solid as “Gravity”, and everyone a bit intelligent here would chuckle.

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same.

    You aren’t talking about Caspar and Amman paper of 2006, perhaps, now are you? It would be a terrible mistake on your part.

  • Luis Dias // August 29, 2008 at 4:42 pm

    Ooops, sorry meant Ammann and Wahl (2007)

  • dhogaza // August 29, 2008 at 5:42 pm

    You aren’t depending on CA as your news source as to what constitutes a terrible mistake in science, are you?

    If so, it would be a terrible mistake on your part…

  • Ray Ladbury // August 29, 2008 at 6:22 pm

    Luis Dias, Climate science is over a century and a half old. The basic forcers have been known for about a century, and the sensitivity of climate to CO2 doubling is nailed down by multiple lines of evidence. Yeah, I’d say you can probably take that to the bank.

    As to reconstructions, see:
    http://www.realclimate.org/index.php/archives/2006/07/the-missing-piece-at-the-wegman-hearing/langswitch_lang/fr

    and

    http://www.realclimate.org/index.php/archives/2007/05/the-weirdest-millennium/langswitch_lang/fr

  • apolytongp // August 29, 2008 at 6:27 pm

    Ray: Have you read the (classic) Wilson book that I referred to? Have you read NASA SP 7010?

    http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19640016507_1964016507.pdf

  • Chris O'Neill // August 29, 2008 at 6:29 pm

    Well, on that at least we can agree. His analysis has long since been superseded, and guess what, the conclusions are pretty much the same.

    You aren’t talking about Ammann and Wahl (2007), perhaps, now are you?

    Well, I wouldn’t be. I would be talking about, e.g.

    “Proxy-Based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity
    to Method, Predictor Network, Target Season, and Target Domain” by S. Rutherford et al, Journal of Climate. Methods completely different from MBH98 are now used for reconstructions. This obsession with an outdated method is astounding. The people with such an interest in it are suffering some sort of obsessive-compulsive disorder.

  • Dano // August 29, 2008 at 6:40 pm

    You aren’t talking about Caspar and Amman paper of 2006, perhaps, now are you? It would be a terrible mistake on your part.

    I like it that this is the best they can do – play ‘informed’ and dumb at the same time.

    Not a new tactic, lad; debunked and put to rest long ago, and it’s rotted enough that your attempt to make it rise, zombie-like, can’t work. Instead, you may want to try trotting out some denialist testable hypotheses, data, models, analyses, body of evidence, list of journal articles to make your case.

    Oh, wait: you can’t. No wonder you’re playing Dr Frankenstein with the rhetoric.

    Best,

    D

  • t_p_hamilton // August 29, 2008 at 7:13 pm

    apolytongp:”But in any case, Mann specifically refused to share the algorithm!” Is that true for all of Mann’s papers?

    Does the following page have code for reconstructing MBH, along with data, or not?

    http://www.cgd.ucar.edu/ccr/ammann/millennium/CODES_MBH.html

  • kevin // August 29, 2008 at 7:16 pm

    Yeah, psychology is not like gravity. In a sense, psychology is more like quantum mechanics, because many human behaviors are predictable in aggregate, or describable with a probability distribution, but accurately predicting the specific behavior of one particular human is pretty much impossible. However, if Luis Dias is snidely implying that there are not solid empirical results in psychological science, this says more about the state of his knowledge than about the state of psychology.

    Sorry for the OT, pet peeve.

  • Ray Ladbury // August 29, 2008 at 7:23 pm

    Actually, Steve, the scientific method does require the RESULTS to be confirmed INDEPENDENTLY. That means:
    1) Gather your own data.
    2) Come up with your own procedure for analyzing that data.
    3) Come up with your own quality controls and error estimations.
    4) Submit to a journal of your choice for publication and let the work stand or fall on its own merits.

    If you don’t follow this, you aren’t really doing science. Repeating somebody else’s analysis with their data is a laboratory exercise for undergrads in a science class, not real science. One exception might be if you had developed an independent algorithm, you might request to run it on someone else’s dataset. However, the emphasis there is not so much on the results as it is on the comparative effectiveness of the algorithms.

    The scientific method has this one pretty much worked out.

  • apolytongp // August 29, 2008 at 9:30 pm

    Ray: your comments at 1923 are in significant contrast to the much more classical views of Katzoff and Wilson (better and more famous researchers than you btw, and certainly more published on how to report research). They talk a lot about how all details of data, standardization, etc. should be shared. The biggest benefit is to science efficiency, because even if part of a work is invalidated, other parts may thus be usable.

  • Hank Roberts // August 29, 2008 at 11:05 pm

    http://arxiv.org/abs/0808.3283v1

    !!!

    It’s not a huge effect, so ignore those who will claim this explains global warming …

    “the fractional difference between the 226Ra counting rates at perihelion and aphelion is 3 × 10−3 …”

  • Ray Ladbury // August 29, 2008 at 11:45 pm

    apolytongp, You know, it’s funny you should mention technical writing, because one of the cardinal rules I learned was that you don’t vector the reader vaguely to some voluminous tome and give no idea what you want them to get out of it. But you know what else? I’m willing to bet that there’s nothing in either of those two references that says it’s a good idea to bestow all your data and algorithms on someone who has zero record of publication in the field just because they ask for it. If there is such a directive, I’d sure like a detailed reference for it.

  • Hank Roberts // August 30, 2008 at 12:10 am

    Just a reminder, a page from the recent history:

    _____excerpt follows________

    The basic conclusion of the 1999 paper by Dr. Mann and his colleagues was that the late 20th century warmth in the Northern Hemisphere was unprecedented during at least the last 1,000 years. This conclusion has subsequently been supported by an array of evidence that includes both additional large-scale surface temperature reconstructions and pronounced changes in a variety of local proxy indicators, such as melting on icecaps and the retreat of glaciers around the world, which in many cases appear to be unprecedented during at least the last 2,000 years.

    Based on the analyses presented in the original papers by Mann et al. (1998, 1999) and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium. However, the substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods, and because not all of the available proxies record temperature information on such short timescales. We also question some of the statistical choices made in the original papers by Dr. Mann and his colleagues. However, our reservations with some aspects of the original papers by Mann et al. should not be construed as evidence that our committee does not believe that the climate is warming, and will continue to warm, as a result of human activities.

    Large-scale surface temperature reconstructions are only one of multiple lines of evidence supporting the conclusion that climatic warming is occurring in response to human activities, and they are not the primary evidence. The scientific consensus regarding human-induced global warming would not be substantively altered if, for example, the global mean surface temperature 1,000 years ago was found to be as warm as it is today. This is because reconstructions of surface temperature do not tell us why the climate is changing. To answer that question, one would need to examine the factors, or forcings, that influence the climate system. Prior to the Industrial Revolution, the primary climate forcings were changes in volcanic activity and in the output of the Sun, but the strength of these forcings is not very well known. In contrast, the increasing concentrations of greenhouse gases in the atmosphere over the past century are consistent with both the magnitude and the geographic pattern of warming seen by thermometers.

    One significant part of the controversy on this issue is related to data access. The collection, compilation, and calibration of paleoclimatic proxy data represent a substantial investment of time and resources, often by large teams of researchers. The committee recognizes that access to research data is a complicated, discipline-dependent issue, and that access to computer models and methods is especially challenging because intellectual property rights must be considered.

    Our view is that all research benefits from full and open access to published datasets and that a clear explanation of analytical methods is mandatory. Peers should have access to the information needed to reproduce published results, so that increased confidence in the outcome of the study can be generated inside and outside the scientific community. Paleoclimate research would benefit if individual researchers, professional societies, journal editors, and funding agencies continued their efforts to ensure that existing open access practices are followed.

    So where do we go from here? ….
    ______end excerpt____________

    Answer: they recommend going forward.

    http://www7.nationalacademies.org/ocga/testimony/Surface_Temperature_Reconstructions.asp

  • Steve Reynolds // August 30, 2008 at 12:47 am

    Ray: “If you don’t follow this, you aren’t really doing science.”

    I don’t really care what you call it, if an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data, I want to know about it. I hope most scientists would have similar thoughts.

  • matt // August 30, 2008 at 1:23 am

    Ray: If you don’t follow this, you aren’t really doing science. Repeating somebody else’s analysis with their data is a laboratory exercise for undergrads in a science class, not real science.

    Yes, it’s a job for auditors :) See, the auditors don’t pretend they are engineers. They don’t pretend they are the test pilots. They won’t take your job. They won’t make you look stupid. They will help ensure a mistake isn’t made. Why is that bad?

    Every other profession has oversight and lives with auditors. Why do you detest it so?

    If it takes a year to replicate and reproduce a result, and a few weeks of scrutiny to find an uninitialized variable in 100,000 lines of simulation code that caused a mistake in the results, don’t you think that is a good investment?

    If Mann didn’t have time to run noise through his algorithm, but somebody else did, isn’t that a good thing?

    Is your goal moving the state of knowledge ahead or playing a game of “nobody can come in my fort”?

    Please, not another round of “that’s not how science is done.” Instead, please address if it’s better for the discovery time of errors to be reduced or not.

  • David B. Benson // August 30, 2008 at 1:44 am

    Hank Roberts // August 30, 2008 at 12:10 am — Also, plenty of data to show that regions in the northern hemisphere are now warmer than at any time in the past 5000+ years:

    http://www.npr.org/templates/story/story.php?storyId=914542
    http://www.physorg.com/news112982907.html
    http://news.bbc.co.uk/2/hi/science/nature/7580294.stm
    http://researchnews.osu.edu/archive/quelcoro.htm
    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtm
    http://en.wikipedia.org/wiki/%C3%96tzi_the_Iceman

  • David B. Benson // August 30, 2008 at 1:48 am

    Steve Reynolds // August 30, 2008 at 12:47 am — Almost everybody accepts orbital forcing as explaining the so-called ice ages. However Carl Wunsch has an admirable paper showing that an AR(2) process also can largely duplicate the latter half of the Vostok record after using the first half for training.

    I thought the paper was very well done, but it didn’t cause me to doubt orbital forcing. Do you understand why that is?
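    For readers curious what a “train on the first half, see if an AR(2) process mimics the second half” exercise looks like in practice, here is a minimal sketch. The record below is a synthetic stand-in (a slow oscillation plus noise), NOT the Vostok data, and the Yule-Walker fit is the textbook version, not necessarily what Wunsch used:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a long proxy record: slow cycle plus noise.
    t = np.arange(2000)
    record = np.sin(2 * np.pi * t / 400) + 0.3 * rng.standard_normal(t.size)

    # Fit AR(2) coefficients on the first half via Yule-Walker equations,
    # using the lag-0, lag-1 and lag-2 sample autocovariances.
    train = record[: t.size // 2]
    x = train - train.mean()
    c0 = np.dot(x, x) / x.size
    c1 = np.dot(x[:-1], x[1:]) / x.size
    c2 = np.dot(x[:-2], x[2:]) / x.size
    # Solve [[c0, c1], [c1, c0]] @ [a1, a2] = [c1, c2]
    a1, a2 = np.linalg.solve([[c0, c1], [c1, c0]], [c1, c2])

    # Simulate a continuation the same length as the held-out second half.
    sigma = np.sqrt(max(c0 - a1 * c1 - a2 * c2, 1e-12))
    sim = list(x[-2:])
    for _ in range(t.size - t.size // 2):
        sim.append(a1 * sim[-1] + a2 * sim[-2] + sigma * rng.standard_normal())
    sim = np.array(sim[2:])

    print("AR(2) coefficients:", round(a1, 3), round(a2, 3))
    ```

    The point David makes survives the toy setup: a purely stochastic model can reproduce the statistical character of a record without saying anything about the physical forcing behind it.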

  • Ray Ladbury // August 30, 2008 at 2:00 am

    Steve, if a paper is in error, that will come out as others try to replicate the results. What is more, rather than merely showing that the one result is wrong, another researcher may figure out how to do it right. And as has been pointed out ad nauseam (no, this is not an exaggeration), there have been many subsequent reconstructions that show similar results. None strongly contradicts MBH98. So, other than for the history of science or a high school science project, the fixation of the denialists on this one result is puzzling. See:
    http://www.realclimate.org/index.php/archives/2005/02/dummies-guide-to-the-latest-hockey-stick-controversy/langswitch_lang/sk

    http://www.realclimate.org/index.php/archives/2005/01/on-yet-another-false-claim-by-mcintyre-and-mckitrick/langswitch_lang/sk

    and

    http://www.realclimate.org/index.php/archives/2007/05/the-weirdest-millennium/langswitch_lang/sk

  • Chris O'Neill // August 30, 2008 at 2:12 am

    Steve Reynolds:

    if an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data

    Absolute garbage if you’re talking about MBH98. MBH98’s uncentered method generates a very small hockeystick bias, less than about 0.1 degree C. (Such bias does not exist in up-to-date methods.) This does not amount to “duplicated using random noise”. The real question is why does Steve Reynolds so gullibly believe this.
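    For anyone who wants to see the effect Chris describes rather than take anyone’s word for it, here is a rough, self-contained sketch. All parameter values are illustrative, and `pc1`/`step_contrast` are hypothetical helper names (not anything from MBH98’s actual code): centering persistent red noise on a short “calibration” window, then taking the leading principal component, tends to produce a step between the calibration window and the rest, while full-record centering does not.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_series, n_years, calib = 50, 581, 79  # shapes loosely mimic the proxy setup

    # AR(1) "red noise" proxies containing no climate signal at all.
    phi = 0.9
    noise = rng.standard_normal((n_series, n_years))
    proxies = np.empty_like(noise)
    proxies[:, 0] = noise[:, 0]
    for i in range(1, n_years):
        proxies[:, i] = phi * proxies[:, i - 1] + noise[:, i]

    def pc1(data, mean_window):
        # Center each series on the chosen window, take the leading PC (a time pattern).
        centered = data - data[:, mean_window].mean(axis=1, keepdims=True)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]

    full = pc1(proxies, slice(None))           # conventional: center on the full record
    short = pc1(proxies, slice(-calib, None))  # short-centering: calibration window only

    # With short centering, a series whose calibration-period mean happens to differ
    # from its long-term mean acquires extra apparent variance, so the leading PC
    # tends to pick up a step between the calibration window and the rest.
    def step_contrast(pc):
        return abs(pc[-calib:].mean() - pc[:-calib].mean())

    print(step_contrast(short), step_contrast(full))
    ```

    Note this only demonstrates the shape bias on noise; as Chris says, the quantitative question is how large that bias is when real proxies (which do carry signal) are used, and that is where the “duplicated with random noise” claim falls apart.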

  • Ray Ladbury // August 30, 2008 at 2:19 am

    Apolytongp says, “Ray: your comments at 1923 are in significant contrast to the much more classical views of Katzoff and Wilson …”
    Put your money where your mouth is–produce a quote where any serious scientist advocates nominally independent research groups sharing experimental setups, code, analysis and data. This is a recipe for propagation of systematic error. Now, add to that the fact that those asking for the special access have no publication record in the field, and I think it would be astounding if you could find a serious scientist to support you. Sorry, dude, you’ll have to do a whole lot better than vague references and appeals to authority.

  • Hank Roberts // August 30, 2008 at 2:44 am

    Sigh.

    Might as well create bingo boards assigning numbers to the stock criticisms and the regular few posters about the Hansen paper, because the same people will keep reposting the same talking points as long as there’s an open topic anywhere in the world about climate, eh?

  • apolytongp // August 30, 2008 at 5:05 am

    I didn’t send you to a tome, Ray. I sent you to chapter 13 (of Wilson). That is the chapter on reporting results. Within that, I direct you to the sections on data and methods. (My remarks already refer to this, but if you need explicit, there you go.)

    FYI, Wilson is very famous for discoveries on vibrational spectroscopy and also is the father of a Nobel Prize winner. Although you’re seemingly not familiar with his book, it is considered a classic. It is available in Dover paperback (and by Interlibrary Loan, of course).

    If you don’t know the Katzoff memo, I’m surprised. It’s a classic NASA document. FYI, it is 30 pages long. And widely regarded as a little gem. Not a tome. And I LINKED TO IT!

    ——————————-

    With that further explanation, to respond to your request for easier info, please let me know when you’ve read them and what you take away from them as regards this argument, my outlook, etc.

  • dhogaza // August 30, 2008 at 3:04 pm

    Denialists will still be arguing about MBH in 2100 as they sit sweltering lap deep in seawater in downtown Manhattan …

  • Gavin's Pussycat // August 30, 2008 at 3:05 pm

    if an analysis can show a
    supposedly important paper’s major result can be duplicated using
    random noise for the data, I want to know about it.

    If it were true, I would want to know about it too!

    …but it isn’t, and we both know that, don’t we Stevie?

    Thanks for moving the goalposts again. It’s so revealing.

  • dhogaza // August 30, 2008 at 3:06 pm

    And, meanwhile, the latest from NSIDC makes it look like 2007’s minimum ice extent record might be safe after all. Buckle your seatbelts, folks, it’s going to be a long winter of our being bombarded by denialists claiming that the fact that 2008 didn’t set a new record means that global cooling is continuing and a new ice age is upon us.

  • george // August 30, 2008 at 6:56 pm

    While I certainly agree that it is best if scientists make their methods clear enough so that someone “skilled in the art” (borrowed from the patent lingo) can repeat their work if they so desire, there is no hard and fast “rule” that says one has to do this.

    I think the scientific community is the best judge of a paper’s merits or lack thereof. If other scientists think you have not demonstrated what you claim, make no mistake, they WILL tell you — if not in person at a conference, then in a journal.

    Scientists who make claims without backing them up are usually not the ones to get the credit for the claims. A very famous example of this is Newton’s law of gravitation. Before Newton published his Principia, Robert Hooke actually surmised that gravity obeyed an inverse square law, but Hooke was not able to show how this would lead to Kepler’s laws and Newton was. The rest, of course, is history.

    All this debate about making data and code available to anyone and everyone is simply silly. Science has never worked that way and probably never will. It makes no sense whatsoever to share data and code with people who have no clue what it means.

    Please explain to me how it is productive to share one’s data and code with the likes of James Inhofe. you can’t, of course because it is not productive in the least. It’s a total waste of time.

    Unfortunately, the data/code -sharing ” debate ” has been “framed” from the getgo by those who would have us all believe that

    1) significant numbers of scientists within the climate science community are not releasing their data and/or code to other scientists

    2) those scientists who are not releasing data to every Tom, Dick and Harry who requests it are somehow dishonest, unscientific, have something to hide and/or are perpetrating fraud on the general public (If you selected “all of the above”, you win the “True Skeptic” Award)

    Personally, I feel it is pretty much a waste of time to even argue with people who have framed the debate in such a way.

  • apolytongp // August 31, 2008 at 1:08 am

    Wilson’s book was written in the 50s and even then argues clearly for a practice of archiving details to centralized archives, which already existed then. He also argues for detailed exposition of all aspects of new approaches and standardizations.

    The whole Science/Nature puff piece phenom (like a press release almost) is an abomination. Of course, solid, solid works should be done in the specialist literature to back up “fast breaking news”. But what happens is ego scientists and young Turks don’t bother.

  • apolytongp // August 31, 2008 at 1:35 am

    Chris:

    I AGREE with your point on the impact of the acentricity. Actually I haven’t done the math, but what I agree with is that Steve McI has been dishonest in mushing different issues together and trying to have his “centerpiece” (the undocumented and, according to even many on Mann’s side though not Mike, WRONG acentric standardization) take the load of several other method choices. In contrast, I find that Burger and Cubasch’s approach (a full factorial of method decisions) or Zorita or Huybers or Wahl and Ammann’s way of taking things apart and recording the impact is generally better.

    As far as the concentration on picking on that one paper, I think the defenders are a bit off here as well. If there are faults with the paper, it should be irrelevant what other work has been done in the science. We should be able to judge it on its own as an algorithm. As a method. Mann has been defensive and distasteful when people tried to peel the onion and judge the equation.

  • Ray Ladbury // August 31, 2008 at 2:37 am

    So, apolytongp, how many scientific publications do you have?
    I have asked you for a quote that supports your contention that data, code, etc. should be shared among independent research groups. You have not provided one. You know you can’t, because that is contrary to good scientific practice and besides it is a recipe for propagation of systematic errors.
    I have no objection to archiving code and data–that’s fine. My objection is sharing the SAME code and SAME data with outside groups. Sharing data to be combined with other data into a meta-analysis is OK. Sharing code is not.
    For my PhD research in experimental particle physics, we always had two independent groups looking for the same particles. You could discuss the research, apply the same criteria on the data, even look over each other’s shoulders, but you never shared analysis code beyond a subroutine to do fitting or the like. If a code is sufficiently complicated, it will have errors, and neither you nor anyone else will find them. Share the code and they propagate. I don’t see why you don’t understand this.

  • Steve Reynolds // August 31, 2008 at 2:44 am

    Chris O’Neill: “ MBH98’s uncentered method generates a very small hockeystick bias, less than about 0.1 degree C. (Such bias does not exist in up-to-date methods.)”

    Chris, if you want me to take what you say as something more than just your opinion, providing a useful link that I can check is necessary.
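    The “hockeystick bias” of an uncentered (short-centered) method, as debated above, can be demonstrated on pure noise: subtracting only the calibration-period mean before computing principal components tends to promote series whose calibration-period mean departs from their long-term mean. A minimal sketch, with all sizes and the AR(1) coefficient chosen arbitrarily, not taken from MBH98:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 581, 50   # hypothetical network: 581 years, 50 series
phi = 0.7                      # hypothetical lag-1 autocorrelation ("red noise")

# Trendless AR(1) pseudo-proxies: any hockey stick in PC1 is a method artifact.
shocks = rng.standard_normal((n_years, n_proxies))
proxies = np.zeros_like(shocks)
for t in range(1, n_years):
    proxies[t] = phi * proxies[t - 1] + shocks[t]

# "Short-centering": subtract only the mean of the final 79 "calibration"
# years instead of the full-record mean before computing principal components.
cal = slice(n_years - 79, n_years)
short_centered = proxies - proxies[cal].mean(axis=0)

# Leading principal component via SVD. Series whose calibration-period mean
# departs most from their long-term mean receive the largest loadings, which
# biases PC1 toward a hockey-stick shape even on pure noise.
_, _, vt = np.linalg.svd(short_centered, full_matrices=False)
pc1 = short_centered @ vt[0]
```

    How large that bias is in temperature units, once the reconstruction is calibrated, is exactly the quantitative question being argued over in this thread; the sketch only shows the mechanism, not its magnitude.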

  • apolytongp // August 31, 2008 at 3:14 am

    Ray:

    1. In all seriousness, the scientist I refer to is E. Bright Wilson, his noted work, chapter 13, the section on “data” and the section on “method”. (couple pages total). I’ll bet your local library has a copy. If not, have the clerk ILL it.

    Read that and see if (or how much) it backs me up. I’m honestly interested in your reaction.

    2. 10 publications.

  • Steve Reynolds // August 31, 2008 at 3:17 am

    Ray: “…there have been many subsequent reconstructions that show similar results. None strongly contradicts MBH98.”

    You are very concerned about independence. How independent were these?
    Also, your links don’t have much to say about error bars. Is there much reason to think anything would be contradicted, given as much uncertainty as probably exists?

  • Chris O'Neill // August 31, 2008 at 3:28 am

    If there are faults with the paper, it should be irrelevant what other work has been done in the science.

    Sure, the existence of later, correct, papers doesn’t change the existence of faults in an earlier paper. My interest is in the results from correct papers. The papers without the earlier faults say there is a hockeystick. I don’t see any challenge to their methods.

    We should be able to judge it on its own as an algorithm.

    I’m sorry, but I’m just not that interested in papers that don’t use the best methods available.

  • Chris O'Neill // August 31, 2008 at 4:17 am

    Steve Reynolds:

    if you want me to take what you say as something more than just your opinion, providing a useful link that I can check is necessary

    You’re welcome to find the bias in, for example, the REGEM method referred to in “Proxy-Based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity to Method, Predictor Network, Target Season, and Target Domain” by S. Rutherford et al., Journal of Climate. McIntyre hasn’t found the bias; perhaps you can.

    BTW, I notice that your claim:

    an analysis can show a supposedly important paper’s major result can be duplicated using random noise for the data

    has vanished from sight.

  • apolytongp // August 31, 2008 at 12:13 pm

    Chris, I actually find that paper fascinating because of the level of complexity of the algorithm. Thought Burger and Cubasch’s full factorial was genius.

  • Gavin's Pussycat // August 31, 2008 at 12:41 pm

    Heck, Ray, to underscore your argument, you’re not even safe sharing compilers…

    http://n2.nabble.com/Re:-Polar-stereographic,different-values-on-different-platforms–td740713.html

  • Chris O'Neill // August 31, 2008 at 2:20 pm

    apolytongp:

    I actually find that paper fascinating because of the level of complexity of the algorithm.

    Yes, but it’s amazing how much interest a paper of purely academic and historical significance attracts.

    Thought Burger and Cubasch’s full factorial was genius.

    Maybe, but only of academic interest. It didn’t include an up-to-date method.

  • Ray Ladbury // August 31, 2008 at 2:44 pm

    apolytongp, Frankly, reading what Wilson has to say is pretty low on my list of things to do. I’m all for transparency. I’m not for sharing data and code. When folks call me and ask about an analysis method in one of my papers, I am more than happy to work through the method with them until they understand it. I will not give them my code, because I don’t have 100% confidence that it is error free. If they had trouble reproducing my result, I would go through my code again to look for errors.

    I would be more than happy to meet you half way. If you quote the (short) passages that you believe support sharing of code, I’ll tell you if I agree with that interpretation. However, even with an authority like Wilson, it will not change my opinion, as I’ve seen what happens if you remove the firewall between nominally independent research teams.

  • Ray Ladbury // August 31, 2008 at 2:47 pm

    Steve Reynolds,
    I’m not an expert on all the paleoclimatic reconstructions. However, based on my knowledge, the algorithms are independent. Some of the data are also independent, but not all. If there were a bias in the data, it could contaminate multiple, but probably not all reconstructions. There is no reason to expect such a bias in the data.

  • apolytongp // August 31, 2008 at 6:16 pm

    Chris:

    How is “up to date”-ness something that is so special? If you cited something special about the REGEM method in terms of its performance on noise, in terms of significance tests, in terms of where it works well and doesn’t work well (for instance in a field like biology or sociology), or if you cited theoretical stats methods, then all those things would turn me on. It’s like saying someone has come up with a new method of TEM structure solution for complicated crystals and then cites it for a complicated, tricky and debated structure. I really want to see how it does on known cases first.

    What does it mean (in a Bayesian estimation sense) when we read that something can only be detected with very special methods of analysis? Also, the interesting thing about Burger and Cubasch was showing all the switches and how much they change the answer. It seems like the opposite of robust. Seems like something where a very particular method is needed. Plus it seems a lot more encompassing, and even just better stated, than the Rutherford paper.

    None of this is to say that Rutherford’s method is bad, etc. I don’t know enough to judge that. It might be right for all I know. But I know what I would need to check to feel better about it. And it wouldn’t be “newness”.

  • apolytongp // August 31, 2008 at 6:17 pm

    Ray: Understood. No hard feelings.

  • Phil B. // August 31, 2008 at 7:14 pm

    I have been following proxy-based temperature reconstructions since 2000. The elephant in the room is the assumption that proxies are linear and stationary. Mathematically, P(t) = a + b*T(t) + n(t), where a and b are constants for 1000 years or so, T(t) is annualized temperature, and n(t) is noise. This is an extraordinary assumption for tree rings, and I haven’t seen any papers that demonstrate it is valid.
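    The linear, stationary proxy model above can be made concrete with a toy calibration exercise. All numbers here (record length, AR(1) temperature model, true a and b, noise level, 100-year calibration window) are hypothetical, chosen only to illustrate how the assumption is used:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical 1000-year record

# Synthetic temperature anomaly T(t) as AR(1) red noise (illustrative only).
T = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    T[t] = 0.8 * T[t - 1] + 0.2 * eps[t]

# Proxy generated exactly under the stated assumption: P(t) = a + b*T(t) + n(t),
# with a and b constant over the whole millennium.
a_true, b_true = 0.5, 2.0
P = a_true + b_true * T + 0.3 * rng.standard_normal(n)

# Calibrate a and b on the final 100 "instrumental" years, then invert the
# linear model over the full record, as a reconstruction implicitly does.
b_hat, a_hat = np.polyfit(T[-100:], P[-100:], 1)
T_recon = (P - a_hat) / b_hat

# If a or b in fact drifted over time (non-stationarity), this inversion
# would silently mis-scale the pre-instrumental part of the reconstruction.
```

    The sketch shows why the assumption matters: a and b are estimated only where instrumental temperatures exist, yet they are applied to the entire pre-instrumental record.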

  • Barton Paul Levenson // August 31, 2008 at 10:52 pm

    apolytongp writes:

    Mann has been defensive and distasteful when people tried to peel the onion and judge the equation.

    AGW deniers have been distasteful when they tried to subpoena all of Mann’s paperwork. I’m much more afraid of congressmen and senators attacking scientists than I am of scientists getting something wrong. A scientist getting something wrong is usually caught right away; cf cold fusion. But congress going after people can take years to put right; cf Joe McCarthy and HUAC.

  • Chris O'Neill // September 1, 2008 at 2:22 am

    apolytongp:

    How is “up to date”ness something that is so special?

    It’s very, very special if the results matter and previous methods had shortcomings.

    It seems like the opposite of robust.

    Anyone can choose a set of methods that have a variety of shortcomings. REGEM is nothing like any of the methods considered by Burger and Cubasch.

    None of this is to say that Rutherford’s method is bad, etc.

    There’s a very good reason why there’s so little interest in Rutherford et al’s paper and so much interest in MBH98 among non-climate scientists. Promoting knowledge of Rutherford would blunt the denialist message that there is something wrong with reconstructions because there is something wrong with the method used; they wouldn’t have a defect to talk about.

  • Gavin's Pussycat // September 1, 2008 at 8:19 am

    BTW found this gem:

    The Lunar Conspiracy.

  • Gavin's Pussycat // September 1, 2008 at 8:28 am

    Phil B., I remember something was said about that in Rutherford et al.

    Theoretically you expect any relationship to be approximately linear for small variations, which is the case here. Empirically, what you do is use other proxies (corals, ice cores, …) besides tree rings (and different types of tree ring data, species, growth location and conditions, …)

    If all that turns out to be reasonably consistent, then I don’t see what your problem is.

  • MrPete // September 1, 2008 at 2:41 pm

    Stopped in for a five minute break [heading down to New Orleans soon to help out...]

    Ray, you said
    If a code is sufficiently complicated, it will have errors, and neither you nor anyone else will find them. Share the code and they propagate. I don’t see why you don’t understand this.

    I tend to agree with you about shared code leading to propagated error… if the purpose of sharing is reuse without QA.

    However, nobody has shown evidence that software QA is happening intra group, let alone inter-group.

    This is where open-source science can benefit, just as open source software benefits.

    Yes, a common bug will propagate across every email server or domain name server or whatever. However, more eyes find more bugs and squash them much earlier.

    When we’re talking software development — and that’s exactly what this is — there’s plenty of history to say that keeping code hidden does not benefit.

    Best analogy I can think of here is security software. Security by obscurity is not robustly secure. Statistical analysis by obscurity is not robust either.

    OK, five minutes up. Back to packing and prep… :)

  • Hank Roberts // September 2, 2008 at 12:04 am

    For those who may have followed this Google result and not gotten the file, it’s moved.

    Science and politics of global climate change: North on the hockey stick, Sep 4, 2006 … Last week he gave an interesting seminar to our department …

    Description still at the old web page:
    sciencepoliticsclimatechange.blogspot.com/2006/09/north-on-hockey-stick.html

    But if you click the link you get
    File not found:
    http://www.met.tamu.edu/people/faculty/dessler/NorthH264.mp4

    hello. the link is now http://geotest.tamu.edu/userfiles/216/NorthH264.mp4

    unfortunately, the old link is no longer available. spread the word

    (hat tip and thanks for the reply in email from Andrew Dessler)

  • Ray Ladbury // September 2, 2008 at 1:00 pm

    MrPete,
    Good luck in the Gulf states. The thing you seem to neglect about scientific code is its short shelf-life. Scientific coding is usually pretty close to single-use. It is specifically constructed to perform a single analysis, and once that analysis is performed, it gets shelved. Individual modules of the code may be resurrected in new analyses, but these will generally be general-purpose (e.g. sorting, fitting, FT,…) algorithms. Even if the same group were to look at the same data again, they’d likely use a different algorithm, for the simple reason that the analysts will have learned from their previous analysis.

  • P. Lewis // September 2, 2008 at 1:26 pm

    Tee hee. The following should lead to some fun for the next 10 years or so:

    “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia” by Mann, Zhang, Hughes, Bradley, Miller, Rutherford and Ni, @ PNAS (not up as of yet, but see my post @ Deltoid).

    Can we expect more Congressional appearances?

  • P. Lewis // September 2, 2008 at 10:48 pm

    The Mann et al 2008 paper is now available on open access (and the supporting info) at PNAS.

  • David B. Benson // September 3, 2008 at 12:49 am

    “Era of Scientific Secrecy Near End”

    http://www.livescience.com/culture/080902-open-science.html

    discusses pros and cons of ‘open science’.

  • dhogaza // September 3, 2008 at 3:35 am

    Watts, unbelievably, has an Al Gore is fat post up on his blog.

    Oh, wait, I think I meant to say “believably” …

  • apolytongp // September 3, 2008 at 4:14 pm

    E. Bright Wilson, Jr. AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955(!)

    Chapter 13 “Reporting the Results of Research” (section 13.4 “Text”)

    “The Method”: …If new procedures or new variants of old procedures have been employed, these should be described. Ideally, sufficient detail should be given to enable a research worker on another continent to duplicate the method. This may involve detail best relegated to an appendix or in extreme cases to a supplemental report in one of the documentation centers…It is important not only that others be able to duplicate the procedures but also that it be made possible for critics to judge the validity and future readers to correct the results in the light of later discoveries. This means that sources of materials, methods of purification, information on possibly relevant materials, etc. should be given. The standards used for various measurements are particularly important.

    “The Data”: It is vital to publish the actual data on which conclusions are based…Primary measurements should be published and not merely derived quantities. Many magnetic susceptibility data have been published in terms of Weiss magnetons instead of in the units in which they were actually measured. This is an outmoded theoretical concept whose disappearance affects a good number of perfectly good experimental papers. It is worth remembering that good data can easily outlast many successive theories. The data should be presented in their rawest form so that later theorists can use them. If it is impractical to do this, the treatment to which the data have been subjected should be so clearly and completely specified that the original values can be recovered by later readers if needed.

    …the manuscript should be preserved and annotated to show the notebook references…it should be possible at any later date to go backward from the published conclusions all the way to the original notebook entries, experimental photographs, and records. Any processing given to the data should also be available and indexed.

    “Equations”:

    …Sufficient detail (of derivation, TCO) should be given to enable a reader for whom the article is intended to follow the steps himself…one should be conservative in interpreting the word “obvious”…

    …Mathematical papers without misprints and errors are the exception rather than the rule…

  • apolytongp // September 3, 2008 at 6:55 pm

    Tamino:

    I’m trying to think of a post to make that will mix some of the Palin gun-moll Earth Mother fertility meme in. Something that will be sufficiently taunting as to satisfy my desire to tweak blue staters. But still get past your censor. And somehow tie in the climate stuff at least for cover. Any suggestions on how I do that?

    [Response: I'd rather you didn't]

  • Lost and Confused // September 3, 2008 at 10:00 pm

    P. Lewis, an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

  • ChuckG // September 4, 2008 at 12:38 am

    More detailed Pat Frank (pseudo?) science – Gavin math lesson @

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/langswitch_lang/bg#comment-97209

    Sure would like to see comments over here (to keep the decks clear for further combat over there in case it materializes) by those whose math skills are much greater than mine.

  • Ray Ladbury // September 4, 2008 at 1:27 am

    Apolytongp, Your quote of Wilson in no way suggests sharing of code or data–merely publication of sufficient detail. For instance, it simply will not be practical to publish raw data from the experiments at the LHC, which will generate terabytes of data. Likewise, the analysis will be described, but the code will likely remain internal–as it should.

  • dhogaza // September 4, 2008 at 3:29 am

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

    Lost and Confused learns that science marches forward, while McIntyre is convinced that he can overturn the results of thousands of climate science papers by proving that the BCP analysis is dodgy.

    I’m *sure* that L&C thinks that science marching forward to show that BCP reconstructions aren’t necessary is UNFAIR! ANTI-DEMOCRACY! ANTI-CRETIN-DENIALISM!

    L&C: Get a friggin’ grip.

  • Paul Middents // September 4, 2008 at 3:30 am

    ChuckG alerts us to a train wreck that just won’t stop. I still think the rascally rabett is the one to chronicle and comment on this. Gavin is even asking for an intervention.

  • Dano // September 4, 2008 at 5:15 am

    an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    L & C as Dr Frankenstein, desperately trying to resurrect the long-dead argument. If only to justify their chosen self-identity, or maybe self-relevance…

    Thanks for the laugh. Your mommy is calling you to brush your teeth and go to bed.

    Best,

    D

  • apolytongp // September 4, 2008 at 5:55 am

    Ray: But Mann refused (at first) to share his algorithm. And his publication did not disclose parts of the procedure. Wilson addresses the issue of large information by deferring to archives.

    P.S. There are fundamental issues in the Wilson discussion. I don’t see you addressing them, just defending wording. I think this will be my last. It is too tedious to engage and refrain from putdowns.

  • apolytongp // September 4, 2008 at 5:56 am

    Feel free to have the last word though. Serious. No hard feelings.

  • Barton Paul Levenson // September 4, 2008 at 11:10 am

    apolyton posts:

    I’m trying to think of a post to make that will mix some of the Palin gun-moll Earth Mother fertility meme in.

    It’s certainly relevant that Governor Palin doesn’t believe global warming is manmade and that creationism should be taught alongside evolution in public school biology classes. I, for one, would not care to have a scientific illiterate in charge of the world’s foremost nation when it comes to science. Just one more reason I’m not voting GOP this year. Or any year.

  • P. Lewis // September 4, 2008 at 12:03 pm

    Lost and Confused wrote:

    P. Lewis, an interesting aspect in regards to that paper can be found in the press release, where Mann makes the comment:

    “Ten years ago, we could not simply eliminate all the tree-ring data from our network because we did not have enough other proxy climate records to piece together a reliable global record…”

    Why is that an interesting aspect?

  • Ray Ladbury // September 4, 2008 at 12:19 pm

    apolytongp, Maybe I’ll start a conspiracy blog suggesting that the reason GWB hasn’t been at the RNC is because he’s been undergoing hormone replacement therapy and in reality he IS Sarah Palin. You can post your inflammatory rhetoric over there.

  • Ray Ladbury // September 4, 2008 at 12:45 pm

    apolytongp, might I suggest that Congressional subpoenas are not the best way to facilitate scientific openness. I fully agree that Mann et al. could have handled the situation more adeptly–both in terms of politics and in terms of some of his analysis. However, the level of personal attacks and invective heaped upon him after MBH98 was bound to generate a siege mentality. The fact of the matter is that MBH98 is of interest now only for historical reasons–it was the first successful multiproxy study with such an ambitious scale. Like many pioneering studies, it had its flaws, and these flaws were addressed in subsequent efforts–which largely verified the results of MBH98.
    Note that Wilson says: “Ideally, sufficient detail should be given to enable a research worker on another continent to duplicate the method.”
    That does not mean releasing the algorithm. Indeed, I would take issue with Wilson’s contention that the goal is duplication of the method. The goal is verification of the results by a sufficiently similar method. Researchers and reviewers may also disagree about how much detail is actually needed. However, you need to realize that the people seeking to reproduce your results are your rivals, not your friends. Scientific openness does not require sharing code, and it certainly does not require “audits”.

  • Lost and Confused // September 4, 2008 at 1:41 pm

    P. Lewis, I apologize for that. My comment assumes a certain degree of background knowledge, which was wrong of me to do. The reason that comment is interesting is MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became an issue of a fairly large amount of controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.”

  • Lost and Confused // September 4, 2008 at 1:48 pm

    Ray Ladbury, I have to disagree when you say, “The fact of the matter is that MBH98 is of interest now only for historical reasons…” Certain aspects of MBH98 have been reused in a number of other papers in the last decade.

    An example of this is the MBH98 PC1. It has been reused in a number of papers. If one decides the MBH98 PC methodology was flawed, more papers than just MBH98 would be affected. Clearly MBH98 is still important.

  • t_p_hamilton // September 4, 2008 at 2:42 pm

    “Apolytongp, Your quote of Wilson in no way suggests sharing of code or data–merely publication of sufficient detail. For instance, it simply will not be practical to publish raw data from the experiments at the LHC, which will generate terabytes of data. Likewise, the analysis will be described, but the code will likely remain internal–as it should.”

    But, but, E. BRIGHT WILSON! E. BRIGHT WILSON!

    And then the noise machine putters off into the sunset, thinking he/she has made some point. Wilson was saying nothing more than what we all know is the ideal. Papers are written for a certain audience, and knowledge is assumed on the part of the readership. If it turns out that the information given in the paper is not adequate to figure out what Mann etc. did (and the reviewers did not specify more details – hey it happens), then his colleagues will ask him politely for those details or else do it their own way, and either get results that confirm or deny the original paper. This is NOTHING out of the ordinary for a scientific paper. Note that this sequence of events is even better than “auditing”, which actually accomplishes nothing except PR for an agenda trying to cast doubts on the conclusions.

  • Hank Roberts // September 4, 2008 at 3:13 pm

    What scientific research needs these days is a regular FAQ for each paper — so the script kiddies who copypaste questions can be pointed to answers without giving them the pleasure of wasting the researchers’ time and clogging discussions with same old same old stuff.

  • Chris O'Neill // September 4, 2008 at 3:49 pm

    apolytongp:

    E. Bright Wilson, Jr. AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955

    That’s nice. BTW, let us know if MBH98 ever regains any practical significance.

  • Ray Ladbury // September 4, 2008 at 3:59 pm

    Lost and Confused, Re: PC1 in MBH98, see:

    http://www.realclimate.org/index.php/archives/2005/02/dummies-guide-to-the-latest-hockey-stick-controversy/langswitch_lang/in

    This has been discussed ad nauseam. The fact is that the current methods are more skillful, more robust and STILL show the same thing–namely: It’s freakin’ hot out there!

  • Chris O'Neill // September 4, 2008 at 4:22 pm

    Certain yet Lost and Confused:

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network, suggesting that potential tree growth trend biases are not influential in the multiproxy climate reconstructions.”

    As has been pointed out to you, you are basing your certainty that Mann lied on your disputed interpretation of his words. This is blatantly dishonest.

  • Trying_to_make_sense // September 4, 2008 at 4:57 pm

    \\Lost and Confused 1:41 PM

    Although the attacks on MBH98 kept changing, I thought the BCP attack was that if you remove bristlecone pines the hockey stick disappears. The statement now is that you can remove all tree ring proxies and the result remains. I don’t see how this statement says the attack was correct. I am under the impression that MBH98 was based on many tree ring proxies (including BCP), so of course if you remove all tree ring proxies MBH98 cannot be replicated. Am I missing something here?

  • Dano // September 4, 2008 at 6:18 pm

    What’s good is that all of the Dr Frankensteinian reviving of long-dead arguments is not going on in the offices of decision-makers. It is only going on in comment threads. By folks who should reply ‘answered over and over years ago. We’ve moved on.’

    The world’s societies are discussing how to adapt and mitigate, not whether a first paper should be perfect in the minds of ideologues ardently holding down a scruffy fort located in the far reaches of the denialist fringe.

  • Gavin's Pussicat // September 4, 2008 at 6:33 pm

    Libelous and Clintonian, did you notice this in the press release:

    Results of this study without tree-ring data show that for the Northern Hemisphere, the last 10 years are likely unusually warm for not just the past 1,000 as reported in the 1990s paper and others, but for at least another 300 years going back to about A.D. 700 without using tree-ring data.

    Mann is referring to A.D. 700 through now, with not just BCP removed but all tree rings. In 1998/1999 they barely made A.D. 1000, with tree rings. Apples and oranges.

    The reason that comment is interesting is MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became an issue of a fairly large amount of controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Again, apples and oranges. It was never in question that removing all tree ring data made the 1998 (and 1999) reconstruction next to worthless, at least the interesting early part. But removing contentious proxies like BCP did not, as has been demonstrated to the hilt for those impressionable by factual evidence.

    As we say out here, you’re reading the press release like the Devil reads the Bible :-)

  • Lost and Confused // September 4, 2008 at 8:31 pm

    Ray Ladbury, that link is completely irrelevant to your point. The validity of your point is not tied to the validity of the criticisms of MBH PC methodology. The issue was whether MBH was of interest for more than “historical reasons” and clearly it is.

    Chris O’Neill, if you read my post again you should see I made no accusations or even comments regarding whether or not Mann lied. I am attempting to avoid any discussion of people or motives now, and I would appreciate it if you would not misrepresent my posts.

    Trying_to_make_sense, you are largely correct as to what the criticisms said. As you point out, the statement in the press release says now that criticism would be untrue. However, the statement does say a decade ago removal of tree ring proxies was not possible. This resolves a controversy where people had said you could remove all tree rings proxies and still get the same result. I had never heard anyone raise the point you raise here (that some tree ring proxies could be removed, but not all), so I had not considered it when making that post. I always heard the defenses raised refer to all tree rings, similar to the portion quoted from MBH.

  • David B. Benson // September 4, 2008 at 9:13 pm

    apolytongp // September 3, 2008 at 4:14 pm wrote “…Mathematical papers without misprints and errors are the exception rather than the rule…” and I assume he was quoting from E. Bright Wilson, Jr. AN INTRODUCTION TO SCIENTIFIC RESEARCH, 1955.

    This is false now and it was false then, at least regarding mathematical papers written by mathematicians, physicists and astronomers.

    Probably chemists, too, but I don’t read chemistry much.

  • t_p_hamilton // September 4, 2008 at 10:27 pm

    Lost and Confused: “The issue was whether MBH was of interest for more than “historical reasons” and clearly it is.”

    Since subsequent papers have been published with more data, clearly presented supplementary information, and numerous statistical methods, with resulting HIGHER RESOLUTION, why would the first paper be of anything but historical interest?

  • Hank Roberts // September 5, 2008 at 3:20 am

    “… only one of these series … exhibits a significant correlation with the time history of the dominant temperature pattern of the 1902-1980 calibration period. Positive calibration variance scores for the NH series cannot be obtained if this indicator is removed from the network …”

    Let’s put that in context:

    ——excerpt follows——-

    … Further consistency checks are required. The most basic involves checking the potential resolvability of long-term variations by the underlying data used. An indicator of climate variability should exhibit, at a minimum, the red noise spectrum the climate itself is known to exhibit, see Mann and Lees, 1996 and references therein. A significant deficit of power relative to the median red noise level thus indicates a possible loss of true climatic variance, with a deficit of zero frequency power indicative of less trend than expected from noise alone, and the likelihood that the longest “secular” timescales under investigation are not adequately resolved. Only 5 of the indicators including the ITRDB PC1, Polar Urals, Fennoscandia, and both Quelccaya series are observed to have at least median red noise power at zero frequency for the pre-calibration AD 1000-1901 period. It is furthermore found that only one of these series — PC 1 of the ITRDB data — exhibits a significant correlation with the time history of the dominant temperature pattern of the 1902-1980 calibration period. Positive calibration variance scores for the NH series cannot be obtained if this indicator is removed from the network of 12, in contrast with post-AD 1400 reconstructions for which a variety of indicators are available which correlate against the instrumental record. Though, as discussed earlier, ITRDB PC 1 represents a vital region for resolving hemispheric temperature trends, the assumption that this relationship holds up over time nonetheless demands circumspection. Clearly, a more widespread network of quality millennial proxy climate indicators will be required for more confident inferences.
    ——end excerpt———-

    You know how to find the source.
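    The criterion in that excerpt trades on a basic property of an AR(1) “red noise” process: its spectrum is maximal at zero frequency and falls off monotonically, so a proxy with less secular (zero-frequency) power than its fitted red-noise level has lost long-timescale variance. A minimal sketch of that property (my own illustration; the function name and parameter values are mine, and this is not the actual Mann and Lees 1996 procedure):

```python
import numpy as np

def ar1_spectrum(f, r, s2=1.0):
    """Spectral density of an AR(1) ('red noise') process with lag-1
    autocorrelation r and variance s2, at frequency f (cycles per step)."""
    return s2 * (1 - r**2) / (1 + r**2 - 2 * r * np.cos(2 * np.pi * f))

f = np.linspace(0.0, 0.5, 1000)   # zero frequency up to Nyquist
S = ar1_spectrum(f, r=0.7)

assert np.argmax(S) == 0          # power peaks at zero frequency
assert np.all(np.diff(S) < 0)     # and decays monotonically toward Nyquist
print(round(S[0] / S[-1], 2))     # prints 32.11, i.e. ((1 + r) / (1 - r)) ** 2
```

    So a series whose estimated spectrum is flat, or depleted, at the lowest frequencies relative to its fitted red-noise curve shows exactly the “deficit of zero frequency power” the excerpt describes.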

  • Barton Paul Levenson // September 5, 2008 at 11:02 am

    Lost and Confused, unbelievably, posts:

    The reason that comment is interesting is that MBH98 was criticized by people saying without bristlecone proxies the “hockey stick” disappeared. This became a matter of considerable controversy. Now Mann has stated the MBH98 reconstruction was dependent upon tree rings. This effectively resolves that particular controversy.

    Consider this passage from MBH98, “[T]he long-term trend in NH is relatively robust to the inclusion of dendroclimatic indicators in the network,

    Lost, READ what you’re quoting! Saying the trend is “robust to” the tree evidence means the trend is still there even without the tree evidence.

  • Barton Paul Levenson // September 5, 2008 at 12:13 pm

    Hank, the FAQ for a paper is a pretty darned good idea — can you write to those organizations that are responsible for several journals and suggest this? I’d be willing to sign letters.

  • Dano // September 5, 2008 at 8:00 pm

    Hank, the FAQ for a paper is a pretty darned good idea — can you write to those organizations that are responsible for several journals and suggest this? I’d be willing to sign letters.

    I’m not sure doing it for the reason Hank gave is a good enough reason. There are numerous journals that give brief synopses with applications for practice that get at what Hank is suggesting.

    I used to discuss with Chris Mooney way back about what science needed to communicate better, and while FAQs are a decent idea, if the researcher who writes them is disconnected from the lay public, the FAQ won’t do much good.

    Systemically, we need to require a communications class at the undergrad level for science majors, and a track where those who don’t want to be lab jockeys but communicate well can have a niche explaining the relevance of papers. This has come up in one form or another, but there has been little progress at the Uni level.

    Best,

    D

  • David B. Benson // September 5, 2008 at 10:41 pm

    Here is a link Timo found to a paper doing a borehole-based temperature reconstruction of the last 20,000 years. The most recent 2,000 years are of particular interest; see Figure 1 in the paper:

    http://www.geo.lsa.umich.edu/~shaopeng/2008GL034187.pdf

  • HankRoberts // September 6, 2008 at 12:38 am

    > FAQs for science papers

    Maybe one of the science journalism programs could make a project of that sort of thing. Idea’s free for the taking.

  • Chris O'Neill // September 6, 2008 at 9:00 am

    Lost and Confused:

    if you read my post again you should see I made no accusations or even comments regarding whether or not Mann lied. I am attempting to avoid any discussion of people or motives now, and I would appreciate it if you would not misrepresent my posts.

    It would help if you withdrew your implication that Mann lied in MBH98 with your interpretation of MBH98:

    MBH98 .. says this (1400 AD) reconstruction is robust to the removal of dendroclimatic indicators (tree rings).

  • Barton Paul Levenson // September 6, 2008 at 11:10 am

    Maybe not a FAQ per se but a sort of translation into layman’s terms:

    WHAT THE PAPER MEANS

    We used statistics to test whether the Arctic ice cap was melting at a faster and faster rate or not. We found that we couldn’t tell. We did not find that it was not melting; it is. Just that it doesn’t appear to be speeding up yet.

    Or something of the sort.

  • Hank Roberts // September 6, 2008 at 4:38 pm

    Yep. Simpler helps. I was thinking ‘ FAQ’ to collect the frequently asserted questions that pop up over and over wherever climate is mentioned, so each paper could gather up the list of copypaste stuff pertinent to it.

    I suppose that would only encourage more of it.

  • Hank Roberts // September 7, 2008 at 4:24 am

    http://viridiandesign.org/

  • Barton Paul Levenson // September 7, 2008 at 10:34 am

    TCO posts:

    “The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years.”

    TCO, 200 million people DID starve to death in the last ten years! Go read some WHO statistics about malnutrition and famine. Keep in mind that something like 15 million people die of all causes every year.

  • Barton Paul Levenson // September 7, 2008 at 10:46 am

    Oops! I did it again — answered an old post (by TCO) because I was confused about the dates I was reading. Sorry about that.

  • apolytongp // September 7, 2008 at 3:34 pm

    “per year in the next ten years” NOT EQUAL to “in ten years”. It’s a difference of ten times.

  • Lazar // September 7, 2008 at 4:46 pm

    Re Mann et al 2008, the supp info pdf has interesting plots such as the effect of removing dendro proxies. But the scan at PNAS is blurred for some (e.g. fig S5 is barely legible at 400% zoom). Go here instead :)

  • Arch Stanton // September 7, 2008 at 6:52 pm

    Hi guys, I’m having a discussion with someone who claims that climate “anomaly” data is derived solely from low temp data. I have been unable to find anything about it. Is there any truth to it?

    Thanks, Arch

  • David B. Benson // September 7, 2008 at 9:55 pm

    Arch Stanton // September 7, 2008 at 6:52 pm — Your discussant is terribly confused.

  • grobblewobble // September 8, 2008 at 8:38 am

    I’d like to continue here with a discussion in the ‘(more) less ice’ thread, since it was getting rather off-topic.

    Barton Paul Levenson wrote:
    [quote]You are conflating the people who do this full-time with those who happened to be convinced by them. The latter indeed deserve help and not derision; but the former simply have to be stopped.[/quote]
    Sir, I beg to differ on this matter. First, I wonder if such a distinction can really be made. It requires an understanding of the motives of other people, which is risky business at best.

    Secondly, how could they be ‘stopped’? In the field of science, if someone is misinterpreting observations or using faulty logic, his work is prevented from being published through the process of peer review.
    However, in everyday life such a thing does not and should not exist. The right of free speech demands that every lunatic can spread as much disinformation as he desires.

    It is sad and it bothers me that this is getting in the way of making a complicated scientific finding clear to the general public – especially as it is something that many people hope to be false or at least exaggerated.

    Frustrating as this may be, ‘stopping’ people from denying the truth sounds to me like a cure worse than the disease. IMHO, only an open-minded exchange of knowledge can eventually make more people see the truth.

  • Ian Jolliffe // September 8, 2008 at 9:36 am

    Apologies if this is not the correct place to make these comments. I am a complete newcomer to this largely anonymous mode of communication. I’d be grateful if my comments could be displayed wherever it is appropriate for them to appear.

    It has recently come to my notice that on the following website, related to this one, my views have been misrepresented, and I would therefore like to correct any wrong impression that has been given.
    http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/

    An apology from the person who wrote the page would be nice.

    In reacting to Wegman’s criticism of ‘decentred’ PCA, the author says that Wegman is ‘just plain wrong’ and goes on to say ‘You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.’ It is flattering to be recognised as a world expert, and I’d like to think that the final sentence is true, though only ‘toy’ examples were given. However there is a strong implication that I have endorsed ‘decentred PCA’. This is ‘just plain wrong’.

    The link to the presentation fails, as I changed my affiliation 18 months ago, and the website where the talk lived was closed down. The talk, although no longer very recent – it was given at 9IMSC in 2004 – is still accessible as talk 6 at http://www.secamlocal.ex.ac.uk/people/staff/itj201/RecentTalks.html
    It certainly does not endorse decentred PCA. Indeed I had not understood what MBH had done until a few months ago. Furthermore, the talk is distinctly cool about anything other than the usual column-centred version of PCA. It gives situations where uncentred or doubly-centred versions might conceivably be of use, but especially for uncentred analyses, these are fairly restricted special cases. It is said that for all these different centrings ‘it’s less clear what we are optimising and how to interpret the results’.
    I can’t claim to have read more than a tiny fraction of the vast amount written on the controversy surrounding decentred PCA (life is too short), but from what I’ve seen, this quote is entirely appropriate for that technique. There are an awful lot of red herrings, and a fair amount of bluster, out there in the discussion I’ve seen, but my main concern is that I don’t know how to interpret the results when such a strange centring is used. Does anyone? What are you optimising? A peculiar mixture of means and variances? An argument I’ve seen is that the standard PCA and decentred PCA are simply different ways of describing/decomposing the data, so decentring is OK. But equally, if both are OK, why be perverse and choose the technique whose results are hard to interpret? Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.
    I am by no means a climate change denier. My strong impression is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. Misrepresenting the views of an independent scientist does little for their case either. It gives ammunition to those who wish to discredit climate change research more generally. It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp, but I remain sceptical.

    Ian Jolliffe

    [Response: I apologize for having misrepresented your opinion, but I hope you realize that it was an honest statement of my interpretation of your presentation, in no way was it a deliberate attempt to misrepresent you.

    In your presentation you state: "It seems unwise to use uncentred analysis unless the origin is meaningful." I took this to mean that you endorse uncentered analysis when the origin is meaningful. If you disagree, I accept your disagreement, but it seems to me that I can hardly be blamed for thinking so. It also seems to me (and I'm by no means the only one) that the origin in the analysis of MBH98 is meaningful.

    I certainly agree with this statement from your comment: "... the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence ..."]

  • Arch Stanton // September 8, 2008 at 3:10 pm

    David B Benson

    >my discussant

    I thought so.

    Thanks

  • Lost and Confused // September 8, 2008 at 11:47 pm

    t_p_hamilton you say, “Since subsequent papers have been published with more data, clearly presented supplementary information, and numerous statistical methods, with resulting HIGHER RESOLUTION, why would the first paper be of anything but historical interest?” As I already said, parts of MBH98 have been reused in a number of these subsequent papers. As long as parts of it are still being used, it is still of interest.

    Barton Paul Levenson, I do not understand your post. You say, “Lost, READ what you’re quoting! Saying the trend is ‘robust to’ the tree evidence means the trend is still there even without the tree evidence.” That is exactly how I interpreted the comment, so I do not know why you told me I need to “READ” the quote. Previously it was claimed the trend existed without “tree evidence.” Mann has now said “tree evidence” could not have been thrown away a decade ago. Could you explain what was so unbelievable about my post?

  • HankRoberts // September 8, 2008 at 11:55 pm

    Speaking of ‘twisted’:
    http://bravenewclimate.com/2008/09/04/twisted-the-distorted-mathematics-of-greenhouse-denial/#

  • TCO // September 9, 2008 at 12:16 am

    Tamino, as stated before, the most damning thing is that an expert on PCA can’t really even follow what Mann is doing, let alone opine on whether it is right/wrong. We will have dhogaza along in a second to say “well he didn’t say it was for sure wrong”. But that’s not even the point. The point is that someone who is an expert has significant questions. How are we supposed to evaluate Mann’s analysis given the difficulties an expert has with it?

  • TCO // September 9, 2008 at 12:24 am

    Also my clear implication from Jolliffe originally, and then especially given the recent comments, is that off-centering is a sometime thing requiring some justification and still to be looked at curiously. Given that Mann didn’t even cite that he had DONE THIS, perhaps he did not do what he should have?

    Tammy, you’re like one of my favorite libs, so how about breaking ranks with the cabal and at least say that Mann should have noted that he did the particular normalization within his description of methods? It’s such a minor point. Doesn’t require you to trade in the NASA pass, the Hybrid, the cabal mailing list, what have you. Just a little teensy minor point for proper documentation.

    ;-)

    [Response: I do agree that Mann et al. should have noted the conventions used for their analysis. I don't believe it was in any way an attempt to deceive.]

  • pough // September 9, 2008 at 12:45 am

    Ian, if it means anything, my reading of that post didn’t lead me to think you specifically endorsed anything. In fact, I was assuming what turns out had happened: that you hadn’t been consulted, just that your work (as interpreted by Tamino) seemed to be backing up usage of uncentered analysis in certain circumstances.

  • TCO // September 9, 2008 at 12:46 am

    Actually I lied. BigCityLiberal is my favorite. You’re on the list though.

  • Timothy Chase // September 9, 2008 at 1:53 am

    TCO wrote:

    Also my clear implication from Jolliffe originally and then especially given the recent comments is that off-centering is a sometime thing requiring some justification and still to be looked at curiously. Given that Mann didn’t even cite that he had DONE THIS, perhaps he did not do what he should have?

    I agree with both you and Tamino on this point, of course. But my personal view is that Michael Mann was probably writing for fellow climatologists, who wouldn’t bat an eye at seeing or identifying the use of de-centered PCA. So much like your calculus professor who might have skipped steps 1-10 because they were obvious to him (and he just naturally assumed they were obvious to everyone else), Mann omitted the obvious. And as I and others have noted, it gets used in a variety of disciplines and has been since the 1970s.

    [Response: I'll disagree. I don't think the use of decentered PCA is one of those "obvious" steps, and it should have been mentioned.]

  • TCO // September 9, 2008 at 2:24 am

    Cool. I don’t think it was an attempt to hide either. Sorry, you’re still behind BCL, though.

    [Response: I can accept that.]

  • george // September 9, 2008 at 3:07 am

    With all due respect Dr. Jolliffe, based on your presentation alone, it would be difficult if not impossible for me (or anyone else) to know that Tamino had “misrepresented your views”.

    And under the circumstances, I think “misinterpreted” (rather than “misrepresented”) might have been a better word for you to have used here.

    I think it is important to view Tamino’s statement in its full context because doing so makes it clear that

    1) when Tamino commented that Wegman was “just plain wrong”, he was specifically referring to this statement by Wegman:

    Centering the mean is a critical factor in using the principal component methodology properly.

    .

    Perhaps it was not your intention to do so in your presentation, but you did seem to imply that using uncentered PCA might be warranted in certain case(s ) — specifically, as you said, when “the origin is meaningful”

    Forgive me, but your implication (intentional or not) does seem to stand in direct conflict with Wegman’s categorical claim that

    Centering the mean is a critical factor in using the principal component methodology

    2) When Tamino said

    “You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe”

    it seems quite likely that he was actually referring back to his immediately preceding sentence:

    Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average.

    Again, perhaps it was not your intent to give this impression to those reading your presentation, but I too can see how your statement in your presentation

    “It seems unwise to use uncentred analysis unless the origin is meaningful”

    might be interpreted as Tamino interpreted it.

    I actually think it is unfair of you to hold Tamino completely responsible for any misinterpretation of your views on the subject of uncentered PCA.

    If you really did not believe that uncentered PCA was warranted when you made that presentation, perhaps you should have made that perfectly clear in your original presentation.

    Perhaps your view on the subject has evolved since then?

    Full text of Tamino post below:

    First let’s dispense with the last claim, that non-centered PCA isn’t right. This point was hammered by Wegman, who was recently quoted in reader comments thus:

    “The controversy of Mann’s methods lies in that the proxies are centered on the mean of the period 1902-1995, rather than on the whole time period. This mean is, thus, actually decentered low, which will cause it to exhibit a larger variance, giving it preference for being selected as the first principal component. The net effect of this decentering using the proxy data in MBH and MBH99 is to produce a “hockey stick” shape. Centering the mean is a critical factor in using the principal component methodology properly. It is not clear that Mann and associates realized the error in their methodology at the time of publication.”

    Just plain wrong. Centering is the usual custom, but other choices are still valid; we can perfectly well define PCs based on variation from any “origin” rather than from the average. In fact it has distinct advantages IF the origin has particular relevance to the issue at hand. You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.
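    For what it’s worth, the mechanism being argued over is easy to reproduce numerically. The following toy (entirely synthetic data of my own construction; it is not MBH98’s actual proxy network, standardization, or selection rules) builds fifty noise “proxies”, gives one of them a late-period ramp, and extracts the first PC with the series centered either on the full-period mean or only on the late “calibration” mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 581, 50              # loose analogue of an AD 1400-1980 network
X = rng.normal(size=(n_years, n_proxies))

# Give proxy 0 a "hockey stick" shape: noise plus a ramp over the last 79 steps
ramp = np.zeros(n_years)
ramp[-79:] = np.linspace(0.0, 4.0, 79)
X[:, 0] += ramp

def pc1_loadings(X, rows):
    """Absolute first-PC loadings after centering on the mean of `rows`."""
    Xc = X - X[rows].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return np.abs(Vt[0])

full = pc1_loadings(X, slice(None))       # conventional full-period centering
late = pc1_loadings(X, slice(-79, None))  # "decentered" on the late-period mean

# Under decentering, PC1 is dominated by the hockey-stick proxy
print(int(np.argmax(late)), round(float(late[0]), 2), round(float(full[0]), 2))
```

    In this toy, the ramped series’ squared departures from the late-period mean swamp everything else, so the decentered PC1 locks onto it almost exclusively, while under full-period centering it is merely one contender among the noise directions. None of this, of course, settles whether decentering was appropriate for the actual proxy data.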

  • Timothy Chase // September 9, 2008 at 3:22 am

    Tamino wrote:

    Response: I’ll disagree. I don’t think the use of decentered PCA is one of those “obvious” steps, and it should have been mentioned.

    Not a problem. In this area and a great many others (no doubt) I would strongly recommend that people give your views considerably more credence than mine. However, perhaps you will consider this: sometimes you yourself have questions that unlike so many nowadays cannot be answered on the web or in the privacy of your own mind. And perhaps this is one of those times.

  • Patrick Hadley // September 9, 2008 at 5:51 am

    George says that it would have been impossible for Tamino or anyone else to know that he had misrepresented Jolliffe’s views. If you look back at the thread you will see that this was pointed out by several posters who were roundly abused for their pains. It was patently obvious from his presentation that Jolliffe did not give carte blanche for the use of decentred PCAs.

    What about these comments by Jolliffe (who is certainly no climate change denier) that “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA” and “It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.”

    Surely it is time to admit that the hockey stick and all its later reincarnations are utterly bogus artifacts and that defending it gives ammunition to those who wish to discredit climate research more generally.

    [Response: Of course Jolliffe didn't give carte blanche for the use of uncentered or decentered PCA. Neither did he make a blanket condemnation of those procedures. From his latest comment it's evident that he didn't address the issue of decentered (as opposed to uncentered) PCA at all. It appears he now discredits decentering, and he's entitled to his opinion. But the hockey stick remains when using centered PCA, and when using no PCA at all. The claim that it's nothing but "utterly bogus artifacts" is what's really bogus.

    The case for global warming rests on a mountain of evidence, of which the hockey stick is only a small (and far from crucial) part. It's the denialists who focus on the hockey stick to the exclusion of all else, in an attempt to discredit climate science in general.]

  • Gavin's Pussycat // September 9, 2008 at 6:08 am

    Tamino:

    It also seems to me (and I’m
    by no means the only one) that the origin in the analysis of
    MBH98 is meaningful.

    FWIW that’s how I understood the whole point of Tamino’s PCA posts.

  • mikep // September 9, 2008 at 7:42 am

    Here is what McIntyre wrote in 2005, in response to initial comments by Mann using Jolliffe as an authority:

    “The second presentation cited by Mann is a Powerpoint presentation on the Internet by Jolliffe (a well known statistician).
    Jolliffe explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means. Use of non-centered PCA thus emphasized, as was desired, changes in past centuries relative to the 20th century calibration period. (http://www.realclimate.org/index.php?p=98)
    In fact, Jolliffe says something quite different. Jolliffe’s actual words are:
    “it seems unwise to use uncentered analyses unless the origin is meaningful. Even then, it will be uninformative if all measurements are far from the origin. Standard EOF analysis is (relatively) easy to understand –variance maximization. For other techniques it’s less clear what we are optimizing and how to interpret the results. There may be reasons for using no centering or double centering but potential users need to understand and explain what they are doing.”
    Jolliffe presents cautionary examples showing that uncentered PCA gives results that are sensitive to whether temperature data are measured in Centigrade rather than Fahrenheit, whereas centered PCA is not affected. Jolliffe nowhere says that an uncentered method is “the” appropriate one when the mean is “chosen” to have some special meaning; he states, in effect, that having a meaningful origin is a necessary but not sufficient ground for uncentered PCA. But he points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered, and he warns that the results are very hard to interpret. Finally, Jolliffe states clearly that any use of uncentered PCA should be clearly understood and disclosed – something that was obviously not the case in MBH98. In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series. Even if Mann et al. felt that it was the most appropriate method, it should have had warning labels on it.”
    Jolliffe has specifically confirmed that McIntyre’s interpretation of what he said is correct. So someone at least could interpret what Jolliffe wrote correctly. The crucial mistake some readers seem to have made is to confuse a necessary condition with a sufficient condition. Blaming Jolliffe for being insufficiently clear is ungracious in the extreme. Jolliffe was far far clearer than the 1998 MBH article, which failed to mention the use of non-centering at all, and, contrary to what is said above, non-centering is very non-standard and would not be assumed by the ordinary Nature reader. It’s a very eccentric thing to do. Can’t we just accept that uncentred PCA requires exceptional justification if it is to be used in this area (beginning by telling people it’s being used in the first place)?
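    Jolliffe’s Centigrade/Fahrenheit caution, quoted above, is simple to demonstrate: changing the measurement origin moves the origin that uncentred PCA respects, so its leading direction shifts, while centred PCA is immune. A toy sketch (synthetic data and function names of my own; not taken from any of the papers under discussion):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
base = rng.normal(size=n)

# Two correlated "temperature" series in degrees Celsius, then in Fahrenheit
c = np.column_stack([10 + base + 0.3 * rng.normal(size=n),
                     15 + 2 * base + 0.3 * rng.normal(size=n)])
f = 1.8 * c + 32.0

def pc1(X, center):
    """Leading principal direction, with or without column centering."""
    Xw = X - X.mean(axis=0) if center else X
    _, _, Vt = np.linalg.svd(Xw, full_matrices=False)
    v = Vt[0]
    return v * np.sign(v[np.argmax(np.abs(v))])   # fix the arbitrary sign

# Centered PCA: the same direction regardless of the unit system
assert np.allclose(pc1(c, True), pc1(f, True))

# Uncentered PCA: the leading direction moves when the origin moves
angle = np.degrees(np.arccos(np.clip(pc1(c, False) @ pc1(f, False), -1.0, 1.0)))
print(round(angle, 1))                            # several degrees of rotation
```

    The centred result is unit-independent because subtracting the column means removes the additive offset and a uniform rescaling leaves the eigenvectors unchanged; the uncentred result is pulled toward whichever mean vector the chosen units produce.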

  • Ian Jolliffe // September 9, 2008 at 9:10 am

    Thanks for the apology, Tamino.
    Some further clarification: a lot of the confusion seems to have arisen because of the terminology. Uncentred PCA and decentred PCA are completely different animals. My presentation dealt only with uncentred PCA (and doubly centred PCA). I’ve just looked at it again and it seems completely unambiguous that this is the case. Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else. Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.
    A response from Timothy Chase (thanks for giving a name – I may be old-fashioned but I prefer to know who I’m talking to) suggests that decentred PCA ‘gets used in a variety of disciplines and has been since the 1970s’. I’m aware of uses of uncentred and doubly-centred PCA, but not of decentred PCA. I’d be grateful for the references.

  • Barton Paul Levenson // September 9, 2008 at 9:43 am

    grobble,

    I wasn’t advocating prior restraint. I was advising handling our own public relations efforts so that people stop listening to liars and crackpots.

  • null{} // September 9, 2008 at 12:13 pm

    Tamino said:

    “I certainly agree with this statement from your comment:”

    “It therefore seems crazy . . . and that a group of influential climate scientists have doggedly defended a piece of dubious statistics. “

  • dean_1230 // September 9, 2008 at 12:30 pm

    Tamino,

    Can we expect to see you revisit your tutorial with Jolliffe’s correction in mind?

  • Chris O'Neill // September 9, 2008 at 1:10 pm

    Certain despite being Lost and Confused:

    Mann has now said “tree evidence” could not have been thrown away a decade ago.

    Not just now, but nine years ago also:

    In using the sparser dataset available over the entire millennium, only a relatively small number of indicators are available in regions (e.g. western North America) where the primary pattern of hemispheric mean temperature variation has significant amplitude, and where regional variations appear to be closely tied to global-scale temperature variations in model-based experiments. THESE FEW INDICATORS THUS TAKE ON A PARTICULARLY IMPORTANT ROLE (in fact, as discussed below, ONE SUCH INDICATOR, PC#1 of the ITRDB data, IS FOUND TO BE ESSENTIAL)

    This is very, very old news.

  • dhogaza // September 9, 2008 at 1:22 pm

    Can we expect to see you revisit your tutorial with Jolliffe’s correction in mind?

    The tutorial doesn’t change, only the reference to Jolliffe.

    Null{}: Quote-mining is a sin.

  • AndyL // September 9, 2008 at 2:05 pm

    Tamino,
    In response to Ian Jolliffe you say ” I certainly agree with this statement from your comment: “… the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence …” ”

    To be sure there is no further misunderstanding between you and Jolliffe, can you confirm you agree that IPCC and Gore should not have given such prominence to the Hockey Stick?

    Further, do you agree with the remainder of his statement “it is crazy …that a group of influential climate scientists have doggedly defended a piece of dubious statistics”

    [Response: No I do *not* agree that "IPCC and Gore" should not have given such prominence to the hockey stick. Your question is itself dishonest; it's the denialist camp which has focused too much attention on the hockey stick, painting it as a crucial centerpiece of climate science, which it is not.]

  • AndyL // September 9, 2008 at 2:42 pm

    Tamino,

    thanks for your reply.

    My question was not dishonest. I wanted to draw out what you meant – which you have clarified.

    However you claim to agree with Jolliffe. It is not clear whether your statement agrees or disagrees with what Jolliffe meant. It appears to me that you may have misinterpreted him again.

  • Ray Ladbury // September 9, 2008 at 3:12 pm

    Andy L.,
    While I would agree that there are some aspects of MBH98 that 10 years down the road are difficult to defend, I don’t think anyone is trying to defend them. Rather, members of the climate science community are defending the character of good scientists against calumny by the denialists. They are also pointing out that none of the errors in MBH98 substantively affect the basic conclusion: It is hotter now than it has been in a very, very long time. It would seem that the denialists are so eager to attack the characters of M, B and H precisely to divert attention away from the second point.

  • Timothy Chase // September 9, 2008 at 3:15 pm

    Ian Jolliffe wrote:

    A response from Timothy Chase (thanks for giving a name – I may be old-fashioned but I prefer to know who I’m talking to) suggests that decentred PCA ‘gets used in a variety of disciplines and has been since the 1970s’. I’m aware of uses of uncentred and doubly-centred PCA, but not of decentred PCA. I’d be grateful for the references.

    Here are a few that P. Lewis dug up while Tamino was going through his explanation of PCA, centered and non-centered. And there are more. Then I have run into multi-scale principal component analysis, non-linear principal component analysis, kernel principal component analysis, etc. The latter is getting some use in the identification of climate modes where positive and negative phases aren’t simply negative images of one another. It seems to have a number of variations — which get used in a large variety of disciplines, including image and sound processing, facial recognition, ecological studies, medicine, genetics, economics, etc. Google and Google Scholar bring up a fair amount.

  • Timothy Chase // September 9, 2008 at 3:16 pm

    In any case, you might check out Tamino’s presentation on principal component analysis…

    PCA, part 1
    http://tamino.wordpress.com/2008/02/16/pca-part-1/

    PCA, part 2
    http://tamino.wordpress.com/2008/02/20/pca-part-2/

  • Timothy Chase // September 9, 2008 at 3:17 pm

    Practical PCA
    http://tamino.wordpress.com/2008/02/21/practical-pca/

    PCA part 4: non-centered hockey sticks
    http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/

    PCA part 5: Non-Centered PCA, and Multiple Regressions
    http://tamino.wordpress.com/2008/03/19/pca-part-5-non-centered-pca-and-multiple-regressions

    He expresses some reservations about how it was performed in the original paper by Mann. But he also points out that you get essentially the same results if you use other methods, including centered principal component analysis, as is demonstrated by other studies of temperature proxies.

  • Timothy Chase // September 9, 2008 at 3:18 pm

    Ian Jolliffe,

    One question. You write:

    Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else.

    Wouldn’t this depend upon the coordinate system? By choosing a different coordinate system, you could make all the variables equal zero at whatever point you like. I hope that by “being meaningful” you mean something more restrictive than this, or at least I would prefer something a little more restrictive, such as a point that is especially meaningful within the historical context of the problem or given the available data, such that the choice is not arbitrary.

  • Bill // September 9, 2008 at 4:50 pm

    dhogaza, I wouldn’t call Tamino a sinner for taking this quote:

    It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.

    And reducing it to:

    It therefore seems crazy that the MBH hockey stick has been given such prominence…

    especially since the part that was removed seems to refer to Tamino. Of course, since this is Tamino’s blog, we should heed his instructions to trust the source, who states this:

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

  • dhogaza // September 9, 2008 at 5:23 pm

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

    And people have analyzed the data without doing so, and get the hockey stick.

    Mann et al have added a very large number of new proxies, analyze the set without using any type of PCA, and get the hockey stick.

    On and on, ad infinitum.

  • george // September 9, 2008 at 5:39 pm

    One thing is very interesting in this whole hockey stick debate:

    While many of the experts in various disciplines related to the debate have been able to view the whole hockey stick controversy in context for what it really means, some people (on both “sides”) have a very hard time letting MBH98 go.

    Wegman criticized Mann’s statistics, but nonetheless said that the case for global warming did not rest on Mann’s results and that it was time to put the “hockey stick” controversy behind us and move on:

    “We do agree with Dr. Mann on one key point: that MBH98/99 were not the only evidence of global warming.
    As we said in our report, “In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record since 1850 clearly indicates an increase in temperature.” We certainly agree that modern global warming is real. We have never disputed this point. We think it is time to put the “hockey stick” controversy behind us and move on.”

    The NRC issued a report that concluded that some of Mann’s claims (particularly about individual years in the 90’s being the hottest in the last 1000 years) were not supported with any certainty, but nonetheless stated quite unambiguously that the case for warming did not depend on Mann’s results.

    Dr. Jolliffe clarifies above (thank you, Dr. Jolliffe) that “It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it” and “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA”, but he also says:
    “I am by no means a climate change denier. My strong impression is that the evidence rests on much much more than the hockey stick.”

  • Gaelan Clark // September 9, 2008 at 5:47 pm

    If the hockey stick is not important, then why are we concerned over what has been termed—because of the hockey stick alone—”unprecedented warming” in the last few decades?

  • Timothy Chase // September 9, 2008 at 6:36 pm

    Bill quotes:

    Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

    Seems like a rather odd thing to say, as PCA gets used in the processing of sound, economic analysis (which is pretty much all dynamic), climate modes (oscillations, which are by definition dynamic), etc.

    It is such a widely used technique, but given this statement, it is beginning to sound like it shouldn’t be used at all.

  • johnG // September 9, 2008 at 7:02 pm

    Can you or your readers recommend any good references for understanding astronomical forcing?

    I’m trying to build a presentation to my astronomy club on astronomical forcing, but also want to put this subject in the context of paleoclimate evidence, current greenhouse gas theory, and be able to construct some very simple models that illustrate changes in insolation with changes in orbit.

    My community is a hotbed of global warming denial, and so I’m hoping that my presentation will allow me to get some of the fine discussion I see on this and other climate-related blogs into places where it’s badly needed.

    Thanks in advance,
    jg
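    [Ed.: jg’s request for “very simple models that illustrate changes in insolation with changes in orbit” can be sketched with the standard daily-mean top-of-atmosphere insolation formula. This is an editor’s illustration, not from any comment above; the solar-constant value and the example latitudes are illustrative choices. Orbital (Milankovitch) changes enter through the declination (obliquity) and the Sun–Earth distance (eccentricity and precession).]

    ```python
    import math

    S0 = 1361.0  # solar "constant" in W/m^2 (a typical modern value; an assumption)

    def daily_mean_insolation(lat_deg, decl_deg, dist_au=1.0):
        """Daily-mean top-of-atmosphere insolation (W/m^2) at latitude lat_deg
        for solar declination decl_deg and Sun-Earth distance dist_au (in AU).
        Orbital changes enter through decl_deg and dist_au."""
        phi = math.radians(lat_deg)
        delta = math.radians(decl_deg)
        x = -math.tan(phi) * math.tan(delta)  # cosine of the sunset hour angle
        if x >= 1.0:       # sun never rises: polar night
            return 0.0
        h0 = math.pi if x <= -1.0 else math.acos(x)  # polar day vs. normal day
        return (S0 / math.pi) / dist_au ** 2 * (
            h0 * math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.sin(h0))

    # Equator at equinox (S0/pi, about 433 W/m^2), and the classic
    # Milankovitch target: 65N at June solstice (declination ~ +23.44 deg).
    print(round(daily_mean_insolation(0.0, 0.0), 1))
    print(round(daily_mean_insolation(65.0, 23.44), 1))
    ```

    [Varying `dist_au` over an orbit with different eccentricities is then enough to show how precession redistributes insolation between seasons.]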

  • L Miller // September 9, 2008 at 7:08 pm

    “My strong impression is that the evidence rests on much much more than the hockey stick. It therefore seems crazy that the MBH hockey stick has been given such prominence and that a group of influential climate scientists have doggedly defended a piece of dubious statistics.”

    Dr Jolliffe

    I’m a rather infrequent poster here, but since I’ve already seen your post here linked on three separate sites, I thought I would give you some feedback on the nature of this debate.

    While I certainly agree that the climate change evidence rests on much more than the hockey stick, the hockey stick itself rests on much more than a single paper published in 1998. Since then, more than a dozen papers have found the same result without PCA, and it’s been demonstrated that neither the centering nor the use of PCA has any impact on the final outcome of MBH98.

    It isn’t at all uncommon for less-than-perfect choices to be made in first-of-its-kind papers like the one in question. The ultimate test isn’t whether such flaws exist but whether the results hold up when those flaws are fixed in later papers, and the hockey stick certainly has held up. It’s not surprising, therefore, that climate scientists should defend it.

    While I think it’s clear you are addressing your comments toward a specific part of one paper, that isn’t the claim being made by those who typically bring this topic up. I’ve already seen links to your post here claiming it as “proof” that the hockey stick shape doesn’t exist at all and that the issues you point out mean that every paper which yields the same result as MBH98 should be dismissed. I know that sounds ridiculous, but it truly is the line being spread about the 1998 paper and your comments on it.

  • Gavin's Pussycat // September 9, 2008 at 8:22 pm

    Gaelan Clark:

    what has been termed — because of the hockey stick alone — “unprecedented warming” in the last few decades

    Stop lying.

  • pough // September 9, 2008 at 8:39 pm

    If the hockey stick is not important, then why are we concerned over what has been termed—because of the hockey stick alone—”unprecedented warming” in the last few decades?

    I’m not entirely sure, but I think you’re referring to two things with one name (an unfortunately easy and common mistake). There is “the hockey stick” that is one paper, MBH98, and there is “the hockey stick” that is a number of papers all showing a similar shape.

    MBH98 is not alone in showing unprecedented warming in the last few decades. For that reason (and because it was done so long ago and has been superseded) it is no longer important.

    Also keep in mind that “unprecedented” doesn’t just refer to temperature level, but also to rate of increase. I like to say that slowing from 100km/h to zero is nothing particularly interesting unless you happen to do it in the space of one meter.

  • Pete // September 9, 2008 at 9:41 pm

    L Miller, Jolliffe has simply admonished Tamino for misrepresenting his views as supporting the use of decentered PCA as used in MBH98. It seems that he has never seen that paper or any of the others claimed to have used this methodology. Perhaps Wegman was right that it is long overdue for this field to use world-class statisticians, given the importance being claimed for this research. It would be interesting to see Dr Jolliffe’s take on MBH98 and the papers you allude to, but he must be pretty busy not to have even noticed them, given their high profile.

  • None // September 9, 2008 at 9:56 pm

    dhogaza,
    Have there been ANY non-PCA multiproxy studies which get a hockey stick WITHOUT reliance on the Gaspé and the extremely contentious bristlecone pine series?

  • David B. Benson // September 9, 2008 at 10:59 pm

    johnG // September 9, 2008 at 7:02 pm — I recommend W.F. Ruddiman’s “Earth’s Climate: Past and Future” as a good starter. You also should consider David Archer’s “The Long Thaw” or else papers available on his publications web page.

    For some mathematical treatments, it seems that Wikipedia is not a bad place to begin.

  • Dean P // September 9, 2008 at 11:18 pm

    Pough,

    One thing to keep in mind is that only GISS shows an “unprecedented” rate of change since 1979. If you use the HadCRU data, the rate of change at the end of the 20th century is almost identical to the rate of change between 1910 and 1940, which, as I understand it, was due to totally natural causes.

    And since neither of these records goes back past the 1800s, it may be vain to say that it has “never” happened before. Never is a very long time.

    [Response: The early 20th-century warming is not attributed entirely to natural causes. And the warming rate according to HadCRU data is greater for the late 20th century than for 1910-1940, although the difference is not statistically significant.]
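    [Ed.: the point in the response above — that two trend estimates can differ without the difference being statistically significant — can be illustrated with a toy calculation. This is an editor’s sketch on synthetic data, NOT the actual GISS or HadCRU records, and it assumes white-noise residuals, which understates the true uncertainty of real temperature trends.]

    ```python
    import numpy as np

    def ols_trend(t, y):
        """Return (slope, stderr) for an OLS fit y = a + b*t.
        White-noise residuals are assumed; real temperature residuals are
        autocorrelated, which makes the true uncertainties larger."""
        t = np.asarray(t, float)
        y = np.asarray(y, float)
        n = len(t)
        X = np.column_stack([np.ones(n), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)          # residual variance
        cov = s2 * np.linalg.inv(X.T @ X)     # parameter covariance matrix
        return beta[1], float(np.sqrt(cov[1, 1]))

    # Two synthetic 30-year "warming episodes" with the same true trend
    # (0.015 deg/yr) plus noise -- stand-ins for 1910-1940 and the late
    # 20th century, not the real data.
    rng = np.random.default_rng(0)
    t = np.arange(30.0)
    b1, se1 = ols_trend(t, 0.015 * t + rng.normal(0.0, 0.1, 30))
    b2, se2 = ols_trend(t, 0.015 * t + rng.normal(0.0, 0.1, 30))
    # The difference is "significant" only if it is large compared to the
    # combined standard error of the two slope estimates.
    se_diff = float(np.hypot(se1, se2))
    print(f"{b1:.4f}+/-{se1:.4f} vs {b2:.4f}+/-{se2:.4f}; |diff|/se = {abs(b1 - b2) / se_diff:.2f}")
    ```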

  • David B. Benson // September 10, 2008 at 12:20 am

    Dean P // September 9, 2008 at 11:18 pm — The rate of change for the last century is roughly comparable to the recovery, in central Greenland, from the 8.2 kybp event (a bit faster), and to the recovery from the Younger Dryas (maybe a bit slower).

    However, this is a comparison between the global temperatures of HadCRUT3 and the regional temperature of Greenland; it is not really fair to imply that global temperatures went up that fast at those pre-(Holocene climatic optimum) times. In particular, the Younger Dryas does not show up at all in the Antarctic and Patagonian paleodata.

  • cce // September 10, 2008 at 12:20 am

    RSS shows slightly more warming since 1979 than GISTEMP. 0.17 degrees per decade vs 0.16 degrees per decade (as of August ‘08).

    http://cce.890m.com/gistemp-vs-rss.jpg

    The differences between RSS, GISTEMP, and HadCRUT are negligible.

    http://cce.890m.com/giss-vs-all.jpg

  • dhogaza // September 10, 2008 at 3:27 am

    Have there been ANY non-PCA multiproxy studies which get a hockeystick WITHOUT reliance the Gaspe and extremely contentious Bristlecone pine series ?

    That would be sorta the point of the latest Mann et al paper.

  • Raven // September 10, 2008 at 7:46 am

    Tamino,

    Take a look at AR4 Chapter 9, Fig. 9.5.

    They have two plots: with and without anthropogenic forcings. The plots prior to 1940 are identical which indicates that the IPCC believes that the rise until 1940 is almost entirely natural. This is clear evidence that the rise from 1960 until today is not unprecedented and that natural causes cannot be excluded as a contributor to the most recent rise.

    [Response: While not "wildly divergent," the two are *not* identical before 1940. Put your glasses on.]

  • Gary Moran // September 10, 2008 at 8:34 am

    [quote="dhogaza"]
    And people have analyzed the data without doing so, and get the hockey stick.
    [/quote]

    Yes, splicing the instrumental record on the end and ignoring the divergence problem is the usual technique. ;-)

    More seriously, I can’t recall any papers post-MBH98 with so straight a handle; and once you factor in the uncertainties in the reconstructions, I believe most are unable to demonstrate that the late 20th century is anomalous based purely on proxies (Moberg is a good example). It is for that reason that MBH98 was such a poster child, and the selective myopia exhibited around this issue is interesting.

  • grobblewobble // September 10, 2008 at 10:49 am

    Barton Paul Levenson:
    My apologies for misinterpreting your words.

    JohnG:

    This looks like a good source:

    A. Berger. Milankovitch theory and climate. Review of Geophysics, 26 (4): 624-657, 1988.

  • dhogaza // September 10, 2008 at 12:41 pm

    Yes, splicing the instrumental record on the end and ignoring the divergence problem is the usual technique.

    Which flavor is that kool-aid you’ve been drinking?

  • chopbox // September 10, 2008 at 4:14 pm

    Tamino,
    I am puzzled by your reluctance to accept that early 20th century warming is natural. My puzzlement does not stem from holding a contrary point of view, but rather from the fact that the position over at RealClimate (see example below) is that they DO accept that early 20th century warming is natural. I can quite accept that you’re right, and I can quite accept that Gavin is right, but it would seem now that I can’t accept that you’re both right. Oh, oh, what am I to do?
    (RealClimate example: Fred Staples posts “I am, incidentally, pleased to see that the rise in temperature from the Little Ice Age to the forties is now accepted on the site, without any attribution to AGW.” and Gavin replies [Response: “now”? please find some cites that indicates that our explanations have changed in any major respect. - gavin]
    Here’s the link: http://www.realclimate.org/index.php/archives/2008/09/simple-question-simple-answer-no#comment-97964)

    [Response: It's my opinion that the warming from 1910 to 1940 is not entirely natural. CO2 levels were significantly higher because of human emissions, and the radiative forcing of that extra CO2 is certainly not zero. Also, the graphs mentioned in the IPCC report, comparing model output with and without anthropogenic factors, are not exactly the same. But according to those same model runs, the warming from 1910 to 1940 is mostly natural -- just not entirely.

    I don't interpret Gavin's statement as a claim that there's no anthropogenic component to early 20th-century warming at all, just that it's small compared to the natural component, so it's safe to say that warming "from the little ice age to the forties" can be attributed to natural causes. The only thing I was pointing out is that the anthropogenic contribution is not zero. If Gavin truly believes otherwise, then I'd say he knows a lot more about the subject than I do.]

  • Hank Roberts // September 10, 2008 at 4:40 pm

    Remember, about half the fossil fuel burned to date was burned before 1970 — that’s the era of dirty coal before the Clean Air Act — and the other half since 1970.

    Biogeochemical cycling did handle much of the fossil carbon burned, for a while!

    Include the rate of change, the background level of CO2, and aerosols when you compare the warming early in the century and that after 1970. Up to 1970, over several centuries, other factors were stronger.

    In the latter much shorter time period an equal amount of fossil fuel was burned, while aerosols were cut back, and CO2 had already gone well above its prior level. The human-caused forcings are very different.

  • Otto Kakashka // September 10, 2008 at 5:20 pm

    Re early 20th C warming, chopbox:

    on Real Climate Gavin also says:
    “CO2 (and other GHGs) increases up to 1945 are a significant forcing but not substantially larger than other forcings on a decadal time scale (solar, volcanic etc.). The GHG signal only starts to be dominant by around 1980, but that isn’t the same as saying it had no effect before. ”

    http://www.realclimate.org/index.php/archives/2008/09/how-much-will-sea-level-rise/#comment-98150

  • Bill // September 10, 2008 at 7:14 pm

    So, the current thought at RealClimate appears to be that warming in the early half of the 20th century occurred without any attribution to AGW. The HadCRU data indicate that there is no statistical difference between the warming in the early half of the 20th century and that in the latter half. Should care be used when declaring that warming in the latter half of the 20th century is unprecedented?

  • Frank // September 10, 2008 at 7:47 pm

    chopbox // September 10, 2008 at 4:14 pm Response:
    “But according to those same model runs, the warming from 1910 to 1940 is mostly natural — just not entirely.”

    Has the “natural” source of this warming been identified and documented?

    [Response: One of the factors is an unusual lull in volcanic activity during the 1st half of the 20th century. Because of the long response time of the oceans, this leads to notable warming *if* the lull is long-lived (which it was). The other factor usually given is a slight increase in solar output during that time period. However, some solar physicists (most notably Dr. Svalgaard) don't believe that the proxy reconstructions are correct, and maintain that there was no notable change in average solar forcing in the early 20th century.]
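    [Ed.: the “lull plus long ocean response time” mechanism in the response above can be sketched with a one-box energy-balance toy model. Every number below (heat capacity, feedback parameter, eruption size and spacing, lull dates) is an illustrative assumption, not a fitted value; the only point is that suspending episodic negative volcanic forcing for several decades produces gradual warming.]

    ```python
    # One-box energy balance: C * dT/dt = F(t) - lam * T.
    # All constants are illustrative assumptions, not fitted values.
    C = 8.0      # effective heat capacity, W yr m^-2 K^-1 (mixed-layer ocean)
    lam = 1.2    # climate feedback parameter, W m^-2 K^-1

    def integrate(forcing, dt=1.0):
        """Step the box forward one year at a time; return the temperature path."""
        T, path = 0.0, []
        for F in forcing:
            T += dt * (F - lam * T) / C
            path.append(T)
        return path

    years = 200
    # Stylized volcanic history: a -2 W/m^2 eruption year every 20 years...
    volcanic = [-2.0 if y % 20 == 10 else 0.0 for y in range(years)]
    # ...and the same history with a 60-year eruption lull late in the run.
    lulled = [0.0 if 120 <= y < 180 else F for y, F in enumerate(volcanic)]

    T_normal = integrate(volcanic)
    T_lull = integrate(lulled)
    # With the negative spikes suspended, the box drifts warm relative to
    # the run with continued eruptions -- slowly, because of the heat capacity.
    print(f"warming attributable to the lull by year 179: {T_lull[179] - T_normal[179]:+.3f} K")
    ```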

  • Raven // September 10, 2008 at 7:48 pm

    Tamino,

    Try printing out the graphs from AR4 Chapter 9, Fig. 9.5, and overlaying them. You will find the pre-1940 trends match almost perfectly except for a slight (<0.05 degC) deviation in 1940; however, the curves reconnect by 1945.

    Obviously there were emissions during that period but the IPCC analysis clearly shows that they had no observable influence on the temperature trend during that period.

  • george // September 10, 2008 at 8:33 pm

    pough said:

    Also keep in mind that “unprecedented” doesn’t just refer to temperature level, but also to rate of increase. I like to say that slowing from 100km/h to zero is nothing particularly interesting unless you happen to do it in the space of one meter.

    Yes, I can see why you would refer to your two different acceleration scenarios as

    “unpressed-and-dented”

  • Timothy Chase // September 11, 2008 at 12:39 am

    Tamino wrote:

    I don’t interpret Gavin’s statement as a claim that there’s no anthropogenic component to early 20th-century warming at all, just that it’s small compared to the natural component so it’s safe to say that warming “from the little ice age to the forties” can be attributed to natural causes.

    Hey guys…

    Don’t forget the reflective aerosols. The negative forcing due to anthropogenic reflective aerosols largely cancelled the forcing due to anthropogenic greenhouse gases. But both were anthropogenic. Saying that the trend that would have resulted strictly from natural forcings and the trend that resulted from natural plus anthropogenic forcings are similar in no way implies that the forcing due to anthropogenic greenhouse gases (where methane would have been the main culprit at the time) was small. It implies only that the remainder, after adding the negative forcing due to reflective aerosols (and land use, I might add) to the positive forcing due to greenhouse gases, was small, or at least small enough that the graphs are fairly similar.

    And as a matter of fact, according to Gavin’s own model, forcing due to anthropogenic greenhouse gases has exceeded forcing due to solar variability for every year since 1880 with the exception of one: 1881.

    Please see the graphs at:

    Forcings in GISS Climate Model
    http://data.giss.nasa.gov/modelforce

    … as well as the data at:

    Global Mean Effective Forcing (W/m2)
    http://data.giss.nasa.gov/modelforce/RadF.txt

    … and of course the technical papers which are linked to at the bottom of the first.

  • Chris O'Neill // September 11, 2008 at 10:35 am

    Don’t forget the reflective aerosols. The negative forcing due to anthropogenic reflective aerosols largely cancelled the forcing due to anthropogenic greenhouse gases.

    Keep in mind that a fundamental characteristic of the aerosols associated with fossil fuel burning is that their negative radiative forcing has a half-life of decades at most, while the positive radiative forcing of CO2 has a half-life of centuries.

    So fossil fuel burning initially causes a cooling effect from the aerosols and a later (and much longer-lasting) warming effect from CO2.

    So if the world has a sudden increase in fossil fuel burning (such as in developing countries) then the immediate effect could be to reduce or slow down the rise in global average temperature.
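    [Ed.: the two-timescale point above can be made concrete with a toy forcing model. The time constants and amplitudes below are loose illustrative stand-ins (real sulfate aerosol washes out in weeks, with ongoing emissions keeping the forcing topped up, and CO2 removal is multi-timescale rather than a single exponential); the sketch only shows the sign structure: cooling first, then a long warming tail.]

    ```python
    import math

    # Toy two-timescale net forcing from a unit pulse of fossil-fuel burning.
    # All four constants are illustrative assumptions, not measured values.
    TAU_AEROSOL = 5.0     # years
    TAU_CO2 = 300.0       # years
    F_AEROSOL = -1.0      # W/m^2 at t=0
    F_CO2 = 0.6           # W/m^2 at t=0

    def net_forcing(t):
        """Net radiative forcing t years after the pulse: a short-lived
        negative aerosol term plus a long-lived positive CO2 term."""
        return (F_AEROSOL * math.exp(-t / TAU_AEROSOL)
                + F_CO2 * math.exp(-t / TAU_CO2))

    # Net cooling at first, then a long-lasting warming tail once the
    # short-lived aerosol term has decayed away.
    for t in (0, 5, 20, 100, 300):
        print(t, round(net_forcing(t), 3))
    ```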

  • JimV // September 11, 2008 at 1:52 pm

    Chris O’Neill: your September 11, 10:35 am comment made me think of this:

    http://commons.wikimedia.org/wiki/Image:Pollution_over_east_China.jpg

    However, doing the Google search for it led me to some articles which stated that pollutants such as black carbon actually increase global warming (maybe by lowering albedo, although the articles I skimmed didn’t say).

  • Hank Roberts // September 11, 2008 at 3:09 pm

    JimV, compare the results you get from Google Scholar. Below I’ve pasted in a string from your comment as the query (you can do much better by refining your question of course).

    You’ll still find nonsense using Scholar, but less of it than using Google.

    http://scholar.google.com/scholar?q=black+carbon+actually+increase+global+warming

  • chopbox // September 11, 2008 at 3:43 pm

    I am not one of those who believes that diversity of thought is an indication of weakness, so personally, I am glad to see these different impressions coming from the warmer camp. In fact, one of my worries is that this debate has become so tribalized that even those who LEAD the debate (and yes, Tamino, I’ll put you in the leader category) get caught up in it and sacrifice the truth for a sort of consensus think.
    Look what we ask of these leaders. First, they must know the details of each and every aspect of the debate. Second, they must hold to the truth as they see it and communicate it to us (which, among other things, requires that they admit to mistakes). And third, they must rally to whichever side we think we’re on, as if we’re at a football match (which is what I’m referring to as tribalism). Now obviously, these demands, taken together, are inconsistent.
    It is precisely for this reason that I find it refreshing to hear Tamino say BOTH why he may or may not believe what Gavin has been quoted as saying or what others are saying the IPCC charts are showing, AND add the quiet caveat that he really is not an expert in this sub-area.
    It would have been so much easier to simply look up the group-think and say “This is the truth.”, and Tamino didn’t do that. So, Tamino, thank you.

  • Ray Ladbury // September 11, 2008 at 5:02 pm

    Chopbox, Consensus doesn’t depend on what anybody thinks. It depends on what they publish and what is subsequently cited and reused, etc. Consensus occurs when one side stops having anything of relevance to say in peer-reviewed journals. With regard to the role of CO2, we’re there. The 90% CL for CO2 sensitivity spans just over a factor of 2. We have a way to go on clouds and aerosols. There is plenty of debate. That’s scientific consensus–when the facts are so indisputable that even bitter rivals have to accept them in order to make further progress.

  • Frank // September 11, 2008 at 6:10 pm

    Thanks for the info. I googled around for historical volcano info but could not find anything about the lull in activity you referred to. I did find the Smithsonian Global Volcanism Program web site
    http://www.volcano.si.edu/faq/index.cfm?faq=06
    with graphs and text which, they say, indicate nearly constant volcanic activity for the past century. Will you please provide a link for your source? I would appreciate it.
    Thanks,
    Frank

    [Response: The page you link to is mainly about the number of volcanic eruptions above a given threshold of intensity. But the climate impact of volcanic activity depends on a lot more than the number of eruptions. It depends on the strength of the eruptions, on the height to which they inject material into the atmosphere, and on the latitude at which they occur (which influences the degree to which ejecta is spread planet-wide or hemisphere-wide). The thesis of that link is that historical records of volcanic activity are not indicative of actual activity changes because of constantly increasing human population and interest in recording eruptions. That may well be true, but to a considerable degree the level of volcanic activity with significant climate impact does not depend on historical observation; it can be reconstructed by the residue observed in ice core analysis.

    The GISS forcing estimates can be seen graphed here, and the data are available here (the impact of volcanism is through stratospheric aerosols). Much the same story is told by the estimates of Crowley (2000). You can find further information at the Climate Forcing Data page of the WDC for paleoclimatology.]

  • Bill // September 11, 2008 at 6:14 pm

    Chopbox, you are right, Tamino does deserve a few kudos. But also remember that Tamino had many people telling him beforehand what Jolliffe ended up posting here. If you look at Tamino’s original post, there were three different people who replied that Tamino had misunderstood Jolliffe (one of them got a couple of “bullshits” for their trouble). The evil Steve M had said the same thing years ago.

    Even after Jolliffe made his post, Tamino did some quote mining of his own. Notice who got called out for cherry-picking quotes… The problem is that, as a leader, Tamino has gotten to the point where he wants to win an argument just as much as the next guy. That means that everyone here who follows him has decided that what is important is winning the argument. How many times on this blog have we read that MBH98 has had its conclusions verified by other, different studies? Great. So why do people spend so much time and effort trying to prove that MBH98 is the right way to do things? I think that not only does Tamino want to win, he wants certain other people to lose…

    [Response: I'm not convinced that I did misrepresent Jolliffe. I don't think I claimed that he endorsed the analysis of MBH98, only that his presentation refutes Wegman's claim that centering is absolutely essential for PCA. Clearly Jolliffe's presentation does refute that claim. If I gave the impression that Jolliffe had endorsed the MBH98 analysis, that was not my intention, and I have already apologized for any misimpression I may have given. Clearly Dr. Jolliffe does not endorse the methodology of MBH98, and at the time I quoted him as stating that he hadn't studied it sufficiently to comment. If he wishes to offer further opinions on the subject, they'll be welcome here.

    Your claim that I "want to win" and that I "want other people to lose" is unfair and unwelcome.]

  • Frank // September 11, 2008 at 7:33 pm

    Thank you.

  • george // September 11, 2008 at 7:47 pm

    Here’s what McIntyre says on CA regarding the Tamino/Ian Jolliffe issue:

    Ian Jolliffe, a noted principal components authority, has posted a comment at Tamino’s which repudiates Tamino’s (and Mann’s) citation of Jolliffe as a supposed authority for Mannian PCA. He wrote to me separately, notifying me of the posting, authorizing me to cross-post his comment, and stating that we had correctly understood and described his comments in our response here:

    [Jolliffe quote]
    I looked at the reference you made to my presentation at http://www.uoguelph.ca/~rmckitri/research/MM-W05-background.pdf *after* I drafted my contribution and I can see that you actually read the presentation. You have accurately reflected my views there, but I guess it’s better to have it ‘from the horse’s mouth’.

    The following is from the McIntyre presentation about which Jolliffe commented (“I can see that you actually read the presentation. You have accurately reflected my views there”):

    [quote from McIntyre paper]
    Jolliffe presents cautionary examples showing that uncentered PCA gives results that are sensitive to whether temperature data are measured in Centigrade rather than Fahrenheit, whereas centered PCA is not affected. Jolliffe nowhere says that an uncentered method is “the” appropriate one when the mean is “chosen” to have some special meaning; he states, in effect, that having a meaningful origin is a necessary but not sufficient ground for uncentered PCA. But he points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered, and he warns that the results are very hard to interpret. Finally, Jolliffe states clearly that any use of uncentered PCA should be clearly understood and disclosed – something that was obviously not the case in MBH98. In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series. Even if Mann et al. felt that it was the most appropriate method, it should have had warning labels on it.

    There is something I find confusing (and, quite frankly, a bit inconsistent) about those comments and what Dr. Jolliffe says above.

    Dr. Jolliffe comments above that

    Some further clarification: a lot of the confusion seems to have arisen because of the terminology. Uncentred PCA and decentred PCA are completely different animals. My presentation dealt only with uncentred PCA (and doubly centred PCA). I’ve just looked at it again and it seems completely unambiguous that this is the case. Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else. Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.

    Jolliffe stated to McIntyre that

    I can see that you actually read the presentation. You have accurately reflected my views there

    Really?

    In the paper in question (in which McIntyre refers to Jolliffe’s presentation), McIntyre uses de-centered and uncentered PCA as if they meant the same thing. In fact, it is quite clear that McIntyre is under the impression that they are the same thing.

    That’s understandable (many other people made the same assumption), but it is nonetheless curious (and a bit inconsistent) that Jolliffe would not have pointed this out to McIntyre, as he was careful to do here in his comments above; or, if Jolliffe did point it out, that McIntyre did not mention this.

    Also, in the McIntyre quote above (taken from the same paper), McIntyre says this:

    In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series.

    Again, this confuses what Jolliffe was referring to in his presentation (uncentered PCA) with decentered PCA.

    Does Dr. Jolliffe nonetheless believe that the last statement by McIntyre accurately represents his views?

    Even if it represents his current views, it seems quite unlikely that it could have “accurately represented” the views of his presentation, when he was not even talking about decentered PCA in that presentation, and, as Jolliffe says above:

    It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.

    Seemingly, neither could he have been criticizing it in that presentation.

  • t_p_hamilton // September 11, 2008 at 8:05 pm

    The link to the estimates on the magnitudes of forcing since 1880 will answer JimV’s question about how important black carbon is quantitatively, and how important it is relative to greenhouse gases. It will also answer other comments claiming that the rise in temperature in the first half of the century was not attributed to increase in greenhouse gases.

  • Peter C. Webster // September 11, 2008 at 8:38 pm

    I agree with you Tamino, I don’t see you trying to pick winners or losers here. I find your analysis insightful and usually even-handed.

    But let me give you my impression of your quote as written. Just the feeling I got; you said you were not implying this, but I certainly inferred it.

    “Centering is the usual custom, but other choices are still valid”
    We can do it other ways; they’re all equally valid. So the MBH98 method is valid.

    “we can perfectly well define PCs based on variation from any “origin” rather than from the average.”
    Where we put the origin matters little, we’ll still get good PCs.

    “In fact it has distinct advantages IF the origin has particular relevance to the issue at hand.”
    MBH98 gets this advantage because its origin has particular relevance.

    “You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe”
    Ian Jolliffe, an expert on PCA, agrees with this assessment. Therefore, Wegman is wrong.

    That all said, it’s not what you said, but how you said it.

    I do however disagree that Jolliffe refuted Wegman; saying that ‘rare specific occasions of not using column centered PCA might be justified, if fully explained as to reason and method’ doesn’t refute Wegman. Wegman was specifically talking about the case of MBH98, the document he was tasked with reviewing.

    Then you have Jolliffe further clarifying that PCA might not be appropriate at all, with “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.” Which if anything, seems to be refuting MBH98 rather than Wegman.

    You also have the two statements:

    Wegman: “Centering the mean is a critical factor in using the principal component methodology properly. It is not clear that Mann and associates realized the error in their methodology at the time of publication.”

    Nothing about PCA in general, but about MBH; their methodology, in this case, in particular.

    Jolliffe: “It is possible that there are good reasons for decentred PCA to be the technique of choice for some types of analyses and that it has some virtues that I have so far failed to grasp.”

    Or in other words, he doesn’t know of any good reasons for using decentred PCA as a technique of choice.

    To say that Wegman said centering is “absolutely essential for PCA” is incorrect. He said that for MBH98, centering it on the mean of 1902-1995 decentered the mean low, so in that case, the larger variance gave it a preference to be chosen as the first PC.

    [Response: You certainly make an important point, that what people mean to say, and the way their statements are interpreted by others, can be wildly divergent. Perhaps that's why Galileo suggested giving every author the benefit of the best possible interpretation of what has been written.]

  • Peter C. Webster // September 11, 2008 at 9:06 pm

    George: Not sure. If it’s centered and you de-center it so it’s no longer centered, isn’t it uncentered, and not centered? You’re right, it’s confusing. In your quotes above, the only person who used decentered is Jolliffe, and I’m unsure of the context of what he was referring to. Perhaps he meant he was unaware that MBH98 did a decentering rather than an uncentering.

    However; “Seemingly, neither could he have been criticizing it in that presentation.”

    He didn’t as far as I can tell from the presentation in question.

    Temperature data – uncentred ‘covariance’ analysis

    -We are now looking at directions with the maximum variation with respect to the origin, rather than with respect to the mean. Hence the mean itself often determines the form of the first (frequently very dominant) ‘component’
    -In this example, PC1 & PC2 have similar loadings to those in the column-centred analysis, but the first PC is a much more dominant source of variation and a seasonal cycle is now apparent in PC1 reflecting the annual cycle in the means

    Temperature data – uncentred ‘covariance’ analysis II

    -Results are not invariant to choice of scale
    -Because values for Fahrenheit are further from the origin than Celsius, the PC1 is even more dominant (99.95% of ‘variation’ for °F; 99.73% for °C; 74.0% for column-centred)
    -Also loadings in PC1 are less variable for °F than for °C in uncentred analysis
    -It seems unwise to use uncentred analyses unless the origin is meaningful. Even then, it will be uninformative if all measurements are far from the origin

    Temperature data – uncentred ‘correlation’ analysis

    -Not invariant to choice of scale, but PC1 is very close to an equally weighted combination of all variables in both cases
    -PC2 is also quite similar in both cases – seasonal cycle again
    -Larger numbers for °F so more extreme behaviour (99.94% compared to 99.5% for PC1; greater uniformity of loadings in PC1)

    ——

    Final Remarks

    -Standard EOF analysis is (relatively) easy to understand – variance maximisation
    -For other techniques it’s less clear what we are optimising, and how to interpret the results
    -There may be reasons for using no centering or double centering, but potential users need to understand and explain what they are doing
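
    (An editorial illustration of the scale-dependence these slides describe. The sketch below is my own, in Python/NumPy, not from Jolliffe’s presentation: the “stations,” values, and random seed are all made-up. It shows PC1 dominating the uncentred analysis, depending on the °F vs °C choice, while the column-centred fraction is identical for both scales.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "monthly temperature" data: 12 months x 8 stations,
    # values well above zero (Fahrenheit-like), sharing a seasonal cycle.
    months = np.arange(12)
    seasonal = 15 * np.sin(2 * np.pi * months / 12)
    fahrenheit = 50 + seasonal[:, None] + rng.normal(0, 5, size=(12, 8))
    celsius = (fahrenheit - 32) * 5 / 9

    def pc1_fraction(X):
        # Fraction of total squared 'variation' captured by the leading
        # singular direction of X (no centring is applied inside here).
        s = np.linalg.svd(X, compute_uv=False)
        return s[0] ** 2 / np.sum(s ** 2)

    # Uncentred analysis: the mean's distance from the origin dominates,
    # and the answer depends on the temperature scale.
    f_unc = pc1_fraction(fahrenheit)
    c_unc = pc1_fraction(celsius)

    # Column-centred analysis: subtract each station's mean first; the
    # fraction is then identical for the two scales.
    f_cen = pc1_fraction(fahrenheit - fahrenheit.mean(axis=0))
    c_cen = pc1_fraction(celsius - celsius.mean(axis=0))

    print("uncentred:", f_unc, c_unc)  # near 1, and scale-dependent
    print("centred:  ", f_cen, c_cen)  # smaller, and scale-independent
    ```

    This mirrors the slides’ point: uncentred ‘variation’ is mostly the mean itself, so the °F numbers (further from the origin) inflate PC1 even more than the °C numbers do.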

  • JimV // September 11, 2008 at 9:58 pm

    Hank Roberts, thanks for the additional sources. Meanwhile, I dug a little deeper in my original search (to the source for the article which stated that reducing China’s pollution would reduce global warming), and found this, which I think is a good layperson summary:

    http://www.climatescience.gov/Library/sap/sap3-2/final-report/sap3-2-brochure.pdf

    (Sulfate pollution cools, black carbon pollution probably warms, there are still some uncertainties, current pollution sources contain more warming than cooling particles.)

  • Bill // September 11, 2008 at 10:44 pm

    Tamino:

    [Response: You certainly make an important point, that what people mean to say, and the way their statements are interpreted by others, can be wildly divergent. Perhaps that's why Galileo suggested giving every author the benefit of the best possible interpretation of what has been written.]

    Given that, would you instead say that Wegmans critique of MBH98 that you quoted here: http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/
    was correct? Rather than concentrating on one sentence in the middle that is subject to interpretation, his gist seems to be that MBH98 incorrectly used the PCA technique. According to Jolliffe, it seems questionable if the technique should have been used in the first place.

    Instead, we should concentrate on those studies that don’t use PCA and look at those results to see if we should be using words like ‘unprecedented’ or not.

  • george // September 11, 2008 at 11:32 pm

    Peter Webster said:

    the only person who used decentered is Jolliffe, and I’m unsure of the context of what he was referring to.

    Actually, McIntyre also used the term “de-centered” in this quote (from above)

    But he points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered and he warns that the results are very hard to interpret.

    And you can see the context of Dr. Jolliffe’s statement (in which he uses the word de-centered) in what I quoted from above

    Some further clarification: a lot of the confusion seems to have arisen because of the terminology. Uncentred PCA and decentred PCA are completely different animals. My presentation dealt only with uncentred PCA (and doubly centred PCA). I’ve just looked at it again and it seems completely unambiguous that this is the case. Thus when I talked about the ‘origin’ being meaningful I meant the point at which all the variables as originally measured are zero, and nothing else. Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.

    Finally, my final comment in the post above

    “Seemingly, neither could he have been criticizing it in that presentation.”

    was merely intended to point out that Jolliffe was certainly not criticizing de-centered PCA in that presentation (not even talking about it, in fact)

    The only reason that I brought this whole issue up is that it certainly appears that McIntyre is using what I quoted above as some sort of vindication for what he wrote in his paper (the one that refers to Jolliffe’s presentation).

    It should be clear to anyone who reads what McIntyre wrote and what Jolliffe wrote that they were not even referring to the same thing: Jolliffe to uncentered PCA and McIntyre to decentered PCA.

    When McIntyre referred to what Mann did as uncentered PCA, it appears that he was simply mistaken. And when McIntyre took what Jolliffe said in his presentation about “uncentered PCA” and used that to criticize what Mann had done, that appears to have been a mistake as well.

    One is certainly not justified in using a statement or statements about uncentered PCA to criticize decentered PCA if, as Jolliffe says, “Uncentred PCA and decentred PCA are completely different animals”.

    If Tamino has mixed apples and oranges, then so has McIntyre. Misinterpretation works both ways.

    I am just a bit surprised that Dr. Jolliffe would have said that McIntyre “accurately reflected my views there” (in McIntyre’s paper) when it appears that McIntyre did not even appreciate the difference between un-centered and decentered PCA and nonetheless was talking about both as if they were what Jolliffe had referred to in his paper — and, worst of all, using what Jolliffe had said about uncentered PCA to criticize Mann’s work.

  • David B. Benson // September 11, 2008 at 11:41 pm

    Bill // September 11, 2008 at 10:44 pm — “Unprecedented”? How about now warmer in British Columbia than at any time in the last 7000 years?

    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtml

  • george // September 12, 2008 at 12:48 am

    Please ignore the first of my last two posts (stamped 11:29) and post the later one.

    I messed up the blockquotes.

    Thanks.

    And thanks again for all the work you do. You are performing a very valuable public service (though I’m sure it seems thankless at times)

  • Timothy Chase // September 12, 2008 at 1:32 am

    Peter C. Webster wrote:

    Then you have Jolliffe further clarifying that PCA might not be appropriate at all, with “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.” Which if anything, seems to be refuting MBH98 rather than Wegman.

    Yes, he said that it is questionable whether any sort of PCA should be applied to dynamical systems — with no argument offered whatsoever. And there appear to be armies of intellectuals who aren’t paying him any attention:

    Google: pca “dynamical systems”
    http://www.google.com/search?hl=en&q=pca+“dynamical+systems”&btnG=Google+Search&aq=f&oq=

    19100 results as of September 11th

    Google Scholar: pca “dynamical systems”
    http://scholar.google.com/scholar?hl=en&q=pca%20“dynamical%20systems”&um=1&ie=UTF-8&sa=N&tab=ws

    1880 results as of September 11th

  • Timothy Chase // September 12, 2008 at 1:34 am

    Ps

    You will have to copy and paste the text, as the website seems to have interpreted only part of the strings as the link addresses.

  • apolytongp // September 12, 2008 at 3:23 am

    I remember wondering about the no center and double center when I looked at the Jolliffe PPT. I think I even asked about it. (Too lazy to dig, feel free.) I think we all need to be a little more careful about assuming we know things, about depending on slender reeds (I mean a PowerPoint? Sheesh.) Jolliffe, with his statements that true understanding would require weeks doing calculations, access to data/code and EVEN to the authors (questions), is much more humble… and not just that… much more likely to be correct. That’s how this problem “feels to me”. That’s one of the reasons why I was disappointed with the big buildup from Tamino with the PCA tutorial… and then doing a superficial job of analyzing the complicated methodology (regurgitating a view of his side rather than actually thinking it through on his own for inspection).

    BTW, before the CAers get carried away, Jolliffe suspects that there are some flaws with MM as well as with MBH. Which is how I feel as well. He hasn’t done all the work to nail this down (remember it’s weeks of work), but he knows enough to see some loose ends that are suspicious. I like that style of thinking. It’s how I tend to think about it. (Except Jolliffe is capable of running this to ground if he wanted to and I’m not, without a bunch of tutorial stuff.)

    What I think would be really cool would be a panel of Huybers, Zorita, Jolliffe, Wahl/Amman, and Burger… all chatting through the hockey stick. It really is a fascinating algorithm in terms of its complexity (some of it not even documented… so it was a puzzle to figure out).

  • apolytongp // September 12, 2008 at 3:28 am

    Fairness moment (or perhaps a TCO versus the world moment): I have also been irked OFTEN by Ross or others at CA giving PCA tutorials in response to questions I asked about a very specific aspect (and these tutorials did not address the nuance that I was asking about… so they ended up as a lot of work that wasn’t even sufficient).

  • Pete // September 12, 2008 at 8:07 am

    Apolytongp: ” Jolliffe suspects that there are some flaws with MM as well as with MBH. Which is how I feel as well. He hasn’t done all the work to nail this down (remember it’s weeks of work), but he knows enough to see some loose ends that are suspicious”. Sounds like you know or are working with Jolliffe, is that correct? Do you know that he is actually looking into this controversy?

  • Patrick Hadley // September 12, 2008 at 10:06 am

    Peter Webster and George: It was Wegman who used the word “decentred” to criticise MBH98. This was quoted by Tamino in his PCA Part 4, and Wegman’s criticism was described by Tamino as “just plain wrong”, with Ian Jolliffe named in support of that opinion. In his reply to Tamino, IJ described Tamino’s use of his words in this way:
    Quote: In reacting to Wegman’s criticism of ‘decentred’ PCA, the author says that Wegman is ‘just plain wrong’ and goes on to say ‘You shouldn’t just take my word for it, but you *should* take the word of Ian Jolliffe, one of the world’s foremost experts on PCA, author of a seminal book on the subject. He takes an interesting look at the centering issue in this presentation.’ It is flattering to be recognised as a world expert, and I’d like to think that the final sentence is true, though only ‘toy’ examples were given. However there is a strong implication that I have endorsed ‘decentred PCA’. This is ‘just plain wrong’. (End Quote)

    On the issue of Jolliffe’s comment that “given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA” I would welcome an explanation of what Tamino thinks Jolliffe means by this. Was the data non-stationary? If so does that mean PCA might not be appropriate?

    [Response: I'm one of those who confused "uncentered" and "decentered."

    The data do indeed appear to be nonstationary, although there are those who oppose the reality of anthropogenic global warming who claim that it is stationary in order to argue that modern warming is only due to random fluctuations of the climate system; in that case we would expect such excursions to have happened before (without human intervention) and to happen again, but how often they would happen and how far apart is an open question.

    Frankly, I'm not sure why PCA would be inappropriate for a nonstationary time series. I am sure that Dr. Jolliffe knows a great deal more about the subject than I do. That doesn't mean I would necessarily agree with him on all opinions, but I'd be very interested to know his reasoning.]

  • Ian Jolliffe // September 12, 2008 at 10:44 am

    Thanks to Timothy for attempting to provide some references to decentred principal (not principle) component analysis, but it’s not clear to me that any of those provided deal with decentred PCA. Despite what I said in my last posting (‘Uncentred PCA and decentred PCA are completely different animals’) the confusion persists. It’s not always possible to tell from the abstracts, but as far as I can see all the references supplied pertain to uncentred PCA (i.e. no centring), not to decentred PCA, where centring is done, but using the mean of a subset of the data. I am well aware of instances of uncentred PCA – Section 14.2.3 of my book gives references in ecology, geology and chemistry, and it is currently used for genetic microarray data – but I have yet to see a pre-MBH example of centring using something other than column means, row means (or neither or both). I have no real excuse for not reading MBH when I first heard of it, but my lame excuse is that I assumed from reports I’d seen that it was another instance of uncentred PCA. So if any of the references supplied, or any others, use a different centring as in MBH I’d be delighted to hear of them – if a third edition of my book ever appears I’d like to include the topic and attempt to clear up some of the confusion that clearly exists. Incidentally, thanks to george for pointing out that I hadn’t read the McIntyre document as carefully as I should and was guilty myself of confusing decentred and uncentred (decentred is not a name I particularly like, but I’d seen it used, so adopted it to distinguish from uncentred), so some clear account is certainly needed.

    Timothy also referred to Tamino’s 5-part PCA tutorial. I’d previously skimmed 1-3 and looked in detail at 4, but I have to confess that I hadn’t really looked at 5. Although I didn’t find any decentred PCA references, there is some interesting algebra there that I’d like to digest and discuss. However, discussing algebra is hardly an exciting thing for a forum like this, so I’d be really pleased if Tamino would identify himself so that we can conduct a dialogue on this.

    In response to pough and L Miller, distinguishing between the hockey stick and the MBH hockey stick is the key issue. The latter is where the problem lies because of what I deemed ‘dubious statistics’. It is this one particular paper, and in particular the defence of the technique used as recently as this year, which has caused so much grief. I agree with the quote from Wegman

    “We do agree with Dr. Mann on one key point: that MBH98/99 were not the only evidence of global warming. As we said in our report, “In a real sense the paleoclimate results of MBH98/99 are essentially irrelevant to the consensus on climate change. The instrumented temperature record since 1850 clearly indicates an increase in temperature.” We certainly agree that modern global warming is real. We have never disputed this point. We think it is time to put the “hockey stick” controversy behind us and move on.”

    The only reason I got involved is because the ‘dubious statistics’ were still being defended this year and my name was being used in support. If there now are people out there claiming that my first post undermines the whole global warming argument, tell me where and I’ll refute this misrepresentation as well. Almost any decent statistical model-fitting will give the upward trend at the end of the series, but more importantly there are all the climate models, based mainly on physics rather than statistics, that provide convincing evidence of climate change and the reasons for it. As a statistician, on principle I don’t believe anything is absolutely certain, but my view is that the chance of all the climate models having got things completely wrong and that by 2030 the Earth is cooler than in 1950 is of the same order of magnitude as the chance that the USA will decide that independence was a bad idea and ask to be taken back as a British colony by the same date. Not impossible, but I personally wouldn’t bet on it.

  • Bill // September 12, 2008 at 1:50 pm

    David B. Benson,

    I’m not sure what you mean by unprecedented here. Certainly, the temperature is not unprecedented. 7000 years ago it was warm enough in BC for a forest to grow. Is it warm enough for a forest to grow in that spot now?

    I’m certainly not an expert on glaciers, but I have read that some glaciers are bad proxies for global climate. But, they are good proxies for regional climate. I don’t know which kind this glacier is.

    Perhaps you are referring to the speed with which the glacier retreated? I guess I can’t answer. The fact that the article implies that the trees were in pretty good shape under that glacier implies that it advanced pretty quickly. Maybe glaciers in that part of the world just move quickly, either way. But, that is just conjecture.

    One thing I thought was weird… The article states that the trees lived at the end of the last ice age. However, since that time (the end of the last ice age), they have been covered in ice.

    So, if you were asking me if this is unprecedented, I would have to reply that it depends on what you mean by unprecedented.

    Did you have theories of your own why you thought this would be unprecedented? I’d be interested in hearing them.

  • george // September 12, 2008 at 2:00 pm

    Timothy Chase:

    From my work, I am aware of quite successful applications of PCA to a dynamical (chemical) system (i.e., one whose data change over time): namely, monitoring the progress of chemical reactions (specifically, applied to Raman spectra of the chemical reactants acquired over time). Here’s just one abstract that refers to this method, but it is by no means the only one.

    It’s not just a method of academic interest. Hardly. There are many companies (pharmaceutical and petroleum, for example) currently using this technique. Perhaps they are just wasting their time and money, but I seriously doubt that is the case. In fact, I know it is not. The results have been shown to be consistent with those obtained with independent methods, at any rate.
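
    (As a hedged editorial illustration of the kind of application george describes — a toy sketch, not any specific published method — one can synthesize time-evolving two-component “spectra” and check that the leading column-centred PC score tracks the reaction’s conversion. All peak positions, rates, and noise levels below are invented.)

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "Raman-like" spectra (all peak positions and rates are
    # invented): a reactant band decays and a product band grows as a
    # first-order reaction proceeds.
    x = np.linspace(0.0, 100.0, 200)
    reactant = np.exp(-0.5 * ((x - 30.0) / 3.0) ** 2)
    product = np.exp(-0.5 * ((x - 70.0) / 3.0) ** 2)

    times = np.linspace(0.0, 5.0, 40)
    conversion = 1.0 - np.exp(-times)  # fraction converted at each time
    spectra = (np.outer(1.0 - conversion, reactant)
               + np.outer(conversion, product)
               + rng.normal(0.0, 0.01, size=(40, 200)))

    # Ordinary column-centred PCA via the SVD.
    centred = spectra - spectra.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    score1 = u[:, 0] * s[0]  # scores on the first principal component

    # The leading PC score tracks the reaction's progress almost
    # perfectly (up to an arbitrary sign).
    r = np.corrcoef(score1, conversion)[0, 1]
    print("correlation between PC1 score and conversion:", r)
    ```

    The data here change over time, yet centred PCA recovers the single underlying degree of freedom cleanly — which is the point george is making about dynamical systems.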

    But given the interpretation mixup over the terms “uncentered” and “decentered”, I’d say that one has to be especially careful with terminology (especially in this case!)

    Scientists and mathematicians (and even scientists in different disciplines) sometimes attach different meanings to the very same words, and the technical meanings can certainly be very different than the ones used in ordinary usage.

    It is possible that Jolliffe has intended some meaning of the word “stationary” that is different from the traditional one (“non-dynamic”) when he says

    given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA.

    Then again, it is also possible that Jolliffe is simply not aware of the successful application of PCA to dynamical systems (if that is indeed his meaning).

    Even experts can not be expected to know everything there is about their field and just because a particular application may be “non-standard” (as judged by experts in the field) does not necessarily mean that it is wrong. Of course, it may be wrong, but not necessarily. Successful new applications of old methods often (usually?) come from the “outside”, in fact.

    There is quite an irony in this whole discussion about “what Jolliffe meant.” In the most important regard, the entire conversation has essentially been sidetracked by something that most scientists (and probably most people here and on Climate Audit as well ) would probably normally criticize: “appeal to authority”.

    Most of us would probably agree that when it comes to science, it is not who makes the argument , but the argument itself that is important — ie, whether the argument stands on its own.

    That is not to say that all opinions on a subject (those of an expert and those of a novice) are equal. Just to say that in the final analysis, what matters most is the argument itself (not what some one person says).

  • cce // September 12, 2008 at 2:06 pm

    In addition to enhancing the PCA series of posts with Dr. Jolliffe’s concerns, it would be great if tamino and Jolliffe collaborated on a post about the R squared vs. RE bugaboo. That is, one that is independent of McIntyre.

  • Spence_UK // September 12, 2008 at 2:22 pm

    Re: george, 11:32 pm (and subsequent post from Ian Jolliffe)

    I don’t see where McIntyre has confused uncentred and decentred in that quote you give.

    McIntyre’s quote is carefully nuanced when read correctly. I’ll try and spell it out in more detail to make it clear.

    McIntyre notes that in uncentred analysis, one of the difficulties in interpretation that Dr Jolliffe refers to in his book is that interpreting the principal components can be particularly difficult if the mean of the series is some distance from the origin.

    McIntyre notes that under certain circumstances, decentred PCA can suffer from the same underlying problem (depending on the structure of the data – giving the Bristlecone pines as an example of the type of data that would cause this). He does not associate this assertion with Dr. Jolliffe, but merely notes that if it is a problem for uncentred PCA, it is going to be a problem for decentred PCA as well.

    There is no mixing up of terms there. The two are quite clearly distinct; McIntyre is merely noting that under certain circumstances, both uncentred and decentred PCA can suffer from a particular underlying problem, which Dr Jolliffe notes causes difficulties in interpretation for the former, and which McIntyre infers will cause difficulties in interpretation for the latter. This seems quite reasonable to me.

    For what it is worth, McIntyre has used other terms for this as well; he has referred to it in the past as “short-segment centred PCA” which is a little cumbersome on the keyboard but somewhat clearer as to what is taking place.
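
    (One way to see the underlying problem being discussed is a toy experiment: apply short-segment (“decentred”) centring to pure trendless red noise and compare with ordinary column centring. The sketch below is my own minimal illustration, assuming simple AR(1) “proxies” and a crude hockey-stick index; it is not MBH98’s actual algorithm, and the parameters are arbitrary.)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_years, n_series, calib = 200, 100, 30  # last 30 "years" = calibration

    # Trendless red-noise "proxies": AR(1) series with no signal in them.
    phi = 0.7
    eps = rng.normal(size=(n_years, n_series))
    proxies = np.zeros_like(eps)
    for t in range(1, n_years):
        proxies[t] = phi * proxies[t - 1] + eps[t]

    def pc1(X):
        # Time series of scores on the leading principal direction of X.
        u, s, vt = np.linalg.svd(X, full_matrices=False)
        return u[:, 0] * s[0]

    # Short-segment ("decentred") centring: subtract the mean of the
    # calibration period only, instead of each column's full mean.
    pc_dec = pc1(proxies - proxies[-calib:].mean(axis=0))
    pc_cen = pc1(proxies - proxies.mean(axis=0))

    def hsi(pc):
        # Crude "hockey stick index": offset of the calibration-period
        # mean from the earlier mean, in overall standard deviations.
        return abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc.std()

    print("decentred HSI:", hsi(pc_dec))  # typically much larger
    print("centred HSI:  ", hsi(pc_cen))
    ```

    Because every decentred column has zero mean over the calibration segment but a nonzero mean elsewhere, the leading direction tends to pick up that step-like offset pattern even though the inputs are pure noise — the “mining” effect at issue.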

  • Bill // September 12, 2008 at 2:33 pm

    george, you make an interesting point, and if Jolliffe is still around, perhaps he will comment on his use of the word ’stationary’.

    I’m not sure I understand what you mean by dynamic. My understanding of stationary refers to the dependence of data on time. For example, I can mix ethanol and ethyl acetate and they will react with each other. That reaction does not depend on when I mix them. That is, the reaction is independent of time. However, if it is 60 degrees F this morning, it may rain today. It can be 60 degrees F tomorrow morning, and it may not rain.

    I found this http://www.investopedia.com/articles/trading/07/stationary.asp and I think this definition of stationary is what Jolliffe is referring to (I use the rule: if the mean of a process changes over time, it is not stationary). I think that this applies to the study of climate. The expectation is, at least, that the mean temperature of the planet is different now than it was 400 years ago. This makes the temperature data non-stationary.

    However, I’m not an expert, so I would love to learn if I am correct on this or not.

    [Response: In statistics we discriminate two types of stationarity. A series is strongly stationary if the joint probability distribution of any set of values is invariant under a time shift. It's weakly stationary (usually just called "stationary") if the mean of the series, and the covariances between any two values, are invariant under a time shift.]
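
    (A small editorial sketch of the weak-stationarity idea in the response above — my own toy example, with arbitrary parameters: a random walk violates the constant-variance requirement, while a stable AR(1) process satisfies it. Ensemble variances at two times make the contrast visible.)

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps = 500, 2000
    eps = rng.normal(size=(n_paths, n_steps))

    # Non-stationary example: random walks. Var(x_t) grows like t, so
    # the constant-mean/constant-covariance condition fails.
    walks = np.cumsum(eps, axis=1)

    # Weakly stationary example: AR(1) with |phi| < 1, whose variance
    # settles down to sigma^2 / (1 - phi^2).
    phi = 0.6
    ar1 = np.zeros_like(eps)
    for t in range(1, n_steps):
        ar1[:, t] = phi * ar1[:, t - 1] + eps[:, t]

    # Ensemble variance across the 500 paths at an early and a late time:
    print("walk: ", walks[:, 100].var(), walks[:, 1900].var())
    print("AR(1):", ar1[:, 100].var(), ar1[:, 1900].var())
    ```

    The walk’s variance at the late time is many times its early value, while the AR(1) variances are roughly equal — the property a time shift must preserve for weak stationarity.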

  • apolytongp // September 12, 2008 at 2:53 pm

    George: Good example. I think PCA is very useful for pattern recognition, for sensing. For Nate Lewis chemical noses and the like. I’m more leery when we get into factor analysis that then gets fed into regressions and calibrations and significance tests.

    Also, we’re still left with basically no academic usage (and certainly no definition of the properties and patterns) of short-centering (TCO patented term, although Huybers may have used it as well).

    I’m also still amazed that someone (Mann, Tammy?) was citing a Powerpoint (wtf?) as establishing a technique as proven and understood and normal usage.

  • Bill // September 12, 2008 at 3:20 pm

    cce, I think you are referring to Mann’s calculations in MBH98? If so, I actually would vote against something like that (if not, then I apologize). Personally, I think that Jolliffe and Wegman have pretty much put MBH98 to rest. At this point, we are all best served by looking at papers that aren’t based on “dubious statistics”. As mentioned, there are lots of other papers that generate hockey sticks. Those are probably much more interesting at this point.

  • Patrick Hadley // September 12, 2008 at 3:45 pm

    George, I had never even heard of non-stationary PCA until the other day, but from what you write and what I have managed to understand (I hope) from other sites, it seems that it is possible to do PCA when the relationship is non-stationary (i.e. changes over time) when there is some way of calculating what the changes are. If you can produce a function to describe the changing relationship then you can do non-stationary PCA.

    That kind of makes sense to me. However, if in the climate proxies the relationship is non-stationary over time, there might be no way of creating a function to describe this change. Is the problem Ian Jolliffe is suggesting about PCA with climate proxies that he is not sure we can ever know just what relationship the temperature has with the proxy along the whole length of the record? I could, of course, be completely misunderstanding the concept, so I would welcome some informed criticism.

  • george // September 12, 2008 at 5:20 pm

    Spence_UK says:

    I don’t see where McIntyre has confused uncentred and decentred in that quote you give.

    I really don’t wish to belabor this any further (This whole conversation about “what he meant when he said” probably means very little at this point), but if nothing else is clear at this point, one thing is perfectly clear (at least to me)

    McIntyre almost certainly did confuse decentered and uncentered in the paper that I referenced above.

    First, unless I am completely mistaken, in which case I will shut up and never comment again (maybe), I take Jolliffe’s comment above that

    Using anything other than column means or row means to centre the data wasn’t even on my radar. It was only fairly recently that I realised the exact nature of decentred PCA so I couldn’t have endorsed it.

    to mean that what Mann did was “decentered PCA” — or at the very least, not uncentered PCA

    I didn’t quote the full comment but you can see the context by reading it above. Jolliffe was noting the confusion about the terminology decentered vs uncentered, and noted that his presentation (which McIntyre quoted) did NOT deal with decentered PCA (only uncentered, doubly centered and centered).

    I just noticed that Jolliffe has also commented immediately above (Thanks again, Dr. Jolliffe for all the clarification)

    I have no real excuse for not reading MBH when I first heard of it, but my lame excuse is that I assumed from reports I’d seen that it was another instance of uncentred PCA

    //end Jolliffe quote

    The implication is clear: MBH98 is not “uncentered PCA”.

    In light of Jolliffe’s above comments, the simple fact that McIntyre refers to the method of Mann as “uncentered” demonstrates (without any doubt) McIntyre’s confusion between the two terms:

    The following (that I also quoted above) is from McIntyre’s paper, wherein McIntyre uses Jolliffe’s comments on uncentered PCA to criticize Mann’s method (decentered PCA):

    McIntyre says:

    he [Jolliffe] points out that uncentered PCA is not recommended “if all measurements are far from the origin”, which is precisely the problem for the bristlecone pine series once the mean is de-centered, and he warns that the results are very hard to interpret. Finally, Jolliffe states clearly that any use of uncentered PCA should be clearly understood and disclosed – something that was obviously not the case in MBH98. In the circumstances of MBH98, the use of an uncentered method is absolutely inappropriate, because it simply mines for hockey stick shaped series. Even if Mann et al. felt that it was the most appropriate method, it should have had warning labels on it.

    Incidentally, the terminology (uncentered, decentered, doubly centered, centered, noncentered?) could have been a bit better, IMHO. Now, if there is a God, I just pray she does not make me write uncentered and decentered again in the same sentence — ever.
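    [For what it’s worth, the distinction can be made concrete with a small synthetic experiment. This is my own toy sketch, NOT the MBH98 algorithm or the North American tree-ring network: centre a matrix of white-noise series on the full record versus on a short late sub-period, and see which series PC1 picks out.]

    ```python
    import numpy as np

    # 20 synthetic series of white noise, one of which is given a
    # late-period "blade" (an upward step in its last 20 time steps).
    rng = np.random.default_rng(0)
    n_t, n_series = 100, 20
    X = rng.normal(size=(n_t, n_series))
    X[80:, 0] += 3.0  # series 0 gets the blade

    def pc1_loadings(X, ref_rows):
        """|loading| of each series on PC1, centring on the rows in ref_rows."""
        Xc = X - X[ref_rows].mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return np.abs(vt[0])

    centred = pc1_loadings(X, slice(None))    # conventional: full-record means
    short = pc1_loadings(X, slice(80, None))  # "short-centred": last-20 means

    # Under short-centring, the blade series tends to dominate PC1, because
    # subtracting its late-period mean inflates its apparent variance.
    blade_dominates = int(np.argmax(short)) == 0
    ```

    The point of the sketch is only terminological: “uncentered” PCA subtracts no mean at all, while “decentered” (short-centered) PCA subtracts the mean of a sub-period, and the two behave differently.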

  • Ray Ladbury // September 12, 2008 at 5:52 pm

    George: ” Now, if there is a God, I just pray she does not make me write uncentered and decentered again in the same sentence — ever .”

    Amen. Having to do so repeatedly could render you unbalanced… Debalanced???

  • Patrick Hadley // September 12, 2008 at 7:05 pm

    In the hope that Professor Jolliffe is still reading this thread I wonder if he knows that Michael Mann himself argued from IJ’s authority in support of MBH98.
    Quote from http://www.realclimate.org/index.php?p=98:

    Contrary to MM’s assertions, the use of non-centered PCA is well-established in the statistical literature, and in some cases is shown to give superior results to standard, centered PCA. … For specific applications of non-centered PCA to climate data, consider this presentation provided by statistical climatologist Ian Jolliffe, who specializes in applications of PCA in the atmospheric sciences, having written a widely used text book on PCA. In his presentation, Jolliffe explains that non-centered PCA is appropriate when the reference means are chosen to have some a priori meaningful interpretation for the problem at hand. In the case of the North American ITRDB data used by MBH98, the reference means were chosen to be the 20th century calibration period climatological means.

    End quote.

    Looks like another apology might be needed. Why are some people still supporting this dubious use of statistics?

  • David Holland // September 12, 2008 at 8:15 pm

    We are now all better educated on the proper terms to use in relation to centring as it applies to PCA, but I still think that, for a layman, the question I asked Dr Jolliffe early in 2005 accurately describes what Dr Mann had done. I asked:

    “Given that the result of his reconstruction shows a dramatic trend in one part of the time frame does it not appear questionable to have centred his data over that same period?”

    Now that he has had a look at this issue and, so far as I can tell, sided firmly with Wegman et al., I would be interested to know what Dr Jolliffe thinks of another statistical dispute on the ‘hockey stick’, namely: what is the proper significance level for RE using Dr Mann’s 1998/9 methodology?

    The IPCC retained the Mann et al. studies, despite expert reviewers’ criticisms, based partly on the then unpublished Wahl and Ammann (2007) which in turn relied upon the also unpublished Ammann and Wahl (2007), and which claimed to have validated Dr Mann’s RE tests. However the details were only published in July this year and were promptly dismissed by Steve McIntyre as the Texas Sharpshooter Fallacy.

    Again, as a layman, this seems clear cut to me, and since Mann et al. (2008) cite Wahl and Ammann (2007), it is not only of historic interest. Could Dr Jolliffe look at how Ammann and Wahl justify their claim to have validated Mann et al.’s RE test?

  • Hank Roberts // September 12, 2008 at 8:48 pm

    Wegman said some years ago that it was time to move on.

    Why not ask about _current_ science? You do realize that early work gets superseded, and that’s how science works? The farther we get from the early work, the more error-riddled it can be seen to be.

    Early work doesn’t have to be _right_ to be useful. It has to lead to interesting areas of investigation from which we learn new things.

  • apolytongp // September 12, 2008 at 8:58 pm

    George:

    I’ve been trying to resist the “I told you so”, but… the flesh is weak… and I am drawn in by all the comments saying how easy it is to misread things. If you look at my comments (as TCO) on March 9th, ~6 PM, you will see me asking Tammy for clarification of the centering labels.

    It was not that hard to come up with these concerns/questions on the terminology. I mean I know little to no formal stats or linear algebra. I just ask a question when I don’t understand. Just like a curious high school math/science student. That’s what we all need to do. You, Ray, Tammy, Steve McI etc. (I think mosh-pit and JohnV already do so.) It’s what Eli (I hope) teaches in freshman chem class. It’s not even conservative or liberal or AGW or denialist.

    Heck, engagement on a concept is more vital than term recognition! So, if there is a possibility of confusion on a cryptic term, NAIL IT DOWN! That’s why the CAers hate me… or think me tedious… or “stupid”. But you know, when ya ask things like that you find little nuggets (like the red noise in SM’s simulations, which is really a reflection of the sample, not “noise”; SM got very uncomfortable when I probed on that, and WA nailed him for it in their paper).

    I think Tammy has been hella gracious, so I want to take it easy on him. But my point going forward is that we need to read stuff (everything) with a questioning, “do I really understand this”, point of view. More so than a “this will support my POV”, point of view.

    Also, even WITHIN that presentation, which was being touted as the support for the supposedly well-established strange centering (even if we conflate uncentered and short-centered), Jolliffe has a lot of concerns EVEN JUST WITH uncentered PCA. SM’s point is relevant, as the cautions on uncentered apply in kind to short-centered.

    Look, I’m the first to beat him up. But let’s not assume that he was confusing things because you all did. The insight of issues even with just uncentered will apply (certainly in the limit) with short-centered as well.

    We’ve just had a demonstration from your side that y’all were confusing things. And it’s not at all certain that he was confusing things. I don’t have the same sense that he is so quickly confused by terms as you. Wait for a smoking gun, given the recent victory by my side over yours. Then, of course, beat the shit out of us.

    Oh…and (broken record), I still can’t get over relying on a POWERPOINT as opposed to a textbook, a journal article, even a white paper.

    And a PPT that was actually composed after the MBH short-centered method.

    Which, by the way, wasn’t even NOTED in the paper’s text as a difference from normal PCA when it was written in 1998 (or 1999).

    And BTW, it DOES have an impact on at least an intermediate part of the paper’s text (the commentary that PC1 had the “dominant mode of variation”). So even if you do the Preisendorfer thingie, put in more PCs, use the PC4 as well as the PC1 in the regression, and get a similar hockey stick… the comments in the paper about PC1 showing a hockey stick would still be wrong were the PCA done with conventional centering, not short centering.

  • David B. Benson // September 12, 2008 at 9:44 pm

    Ian Jolliffe // September 12, 2008 at 10:44 am — I, for one, would be pleased to have the ‘algebra’ discussion between you and Tamino conducted on this thread (or one devoted to the purpose) so that I (and I suspect many others) could (attempt to) follow along.

  • David B. Benson // September 12, 2008 at 9:52 pm

    Bill // September 12, 2008 at 1:50 pm — Let’s drop the ‘unprecedented’ for warmth, keep it for rate-of-change; the rapidity, globally, is about the same as in Greenland at the 8.2 kybp event recovery and the Younger Dryas recovery before that. I doubt that in those two instances the global temperature changed so quickly.

    I just thought that article might help clarify some matters for you; however the reporter’s writing ‘end of the last ice age’ is wrong; that forest grew in the mid-Holocene, well past the end of the last ‘ice age’.

    All glaciers gain or lose mass according to local conditions only. When enough of the world’s glaciers are in retreat, one calls it global; that is what is occurring.

  • cce // September 12, 2008 at 10:53 pm

    Bill,

    Although the PCA stuff is an interesting sideshow, everyone agrees that it makes very little difference to the final reconstruction. The more relevant issue is the “verification statistics” which is also tied up with Wahl and Ammann and Ammann and Wahl. Few can understand the litany of accusations, much less (potentially) rebut them.

  • Patrick Hadley // September 12, 2008 at 11:15 pm

    David B. Benson says that the rapidity of the global rate of change is unprecedented since the Greenland 8.2 kybp event.

    Perhaps he should look at Tamino’s latest article, “Don’t Be Fooled Again” which shows how easy it is to be misled by relatively short-term trends.

    I am not sure which period he is referring to as having the unprecedented rapidity of rise. If it is the last 30 years, then he should know that the rate of temperature rise over the last 30 years is almost the same as it was in the 30 years before the Second World War, according to HadCRU.

    If he is talking about the last 60 years then the rate is roughly half what it was during the 30 years before it.

    If he is talking about the entire HadCRU instrumental record then there has been a rise of about 1C in 158 years.

    We simply do not have enough precise detail in the reconstructions, taking into account any reasonable margin of error, to say with any confidence that the rate of rise over the last 30 years, or the last 100 years, has been greater than in every period in the past 1500 years.
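    [For readers who want to check rate-of-rise claims like these themselves, the usual yardstick is an ordinary-least-squares slope over the window in question. A minimal sketch on synthetic annual anomalies follows; the numbers are invented stand-ins, NOT the actual HadCRU series, which you would have to download and substitute.]

    ```python
    import numpy as np

    def decadal_trend(y, years):
        """OLS slope of y against years, expressed in degrees per decade."""
        return np.polyfit(years, y, 1)[0] * 10.0

    # Synthetic annual anomalies: 0.005 degC/yr warming plus noise.
    rng = np.random.default_rng(2)
    years = np.arange(1850, 2008)
    anoms = 0.005 * (years - 1850) + 0.1 * rng.normal(size=years.size)

    rate_last30 = decadal_trend(anoms[-30:], years[-30:])  # last 30 years
    rate_full = decadal_trend(anoms, years)                # full record
    ```

    Comparing such window trends is exactly where short-window noise bites: the 30-year slope has a much larger standard error than the full-record slope, which is the caution Tamino’s “Don’t Be Fooled Again” post makes.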

  • george // September 13, 2008 at 12:19 am

    TCO says

    SM’s point is relevant, as the cautions on uncentered apply in kind to short-centered.

    Look, I’m the first to beat him up. But let’s not assume that he was confusing things because you all did.

    We’ve just had a demonstration from your side that y’all were confusing things. And it’s not at all certain that he was confusing things.

    If you actually read the McIntyre quotes that I posted above (from McIntyre’s paper) and what Dr. Jolliffe commented above you will see that it’s not a matter of “assuming that McIntyre was confusing things”. McIntyre’s own words indicate that he was confusing things.

    But I can’t force you to read anything, of course.

    That would involve doing what you suggested above: being curious and asking questions.

  • apolytongp // September 13, 2008 at 12:50 am

    I read them just fine. Oh… and I read the damn Jolliffe PowerPoint (!) well enough to ask the critical questions in March. Did you go back and check that out? If so, let me know the timestamp from when I asked the critical questions on the terminology.

  • David B. Benson // September 13, 2008 at 1:05 am

    Patrick Hadley // September 12, 2008 at 11:15 pm — Pick 30 or 100 or 140; I tried them all. But I am comparing the HadCRUTv3 global temperatures with the GISP2 central Greenland temperatures. For a more global reconstruction using boreholes, see Figure 1 in

    http://www.geo.lsa.umich.edu/~shaopeng/2008GL034187.pdf

    for anomalous rates in the last 100 (or so) years.

  • Bill // September 13, 2008 at 4:49 am

    David B. Benson,

    You said that the rate of change in temperature is what was unprecedented. Then you referred me to two periods roughly 8000 years ago. Then you said that you doubted that the rate was as high then. Why? What was the rate of change during those periods? What is the current period that you are going to compare to those periods? I just said that we should be careful when using words like ‘unprecedented’. I’m sure you have been careful, and so maybe you could point me at your specifics.

    You replied to Patrick Hadley that we should check Figure 1 for rates in the last 100 or so years. I looked at figure 1, and I have to be honest, my eyes are good enough to read that figure with that kind of accuracy. There are 20,000 years packed into a pretty small number of pixels, I don’t think I could pull any one 100 year period out of that figure. What is the rate of change in the last 100 years according to that figure?

    I would have to say that it seems odd that you would compare global temps to temps from central Greenland and be able to infer anything about anything unprecedented globally from that comparison. Is there some precedent for using central Greenland as a proxy for global temps? Is there any reason why you wouldn’t just compare the HADCRU data with itself?

    I also noticed on page 4 of that paper, the authors note that their method loses the ability to track relatively rapid events in the distant past. I would hesitate to use this paper to make claims like ‘unprecedented’ when comparing current climate with past climate.

    I’ll confess that there may be more going on here than I can follow, but I don’t see anything that makes me want to claim that current climate is unprecedented. That doesn’t mean that you couldn’t, but I would just want to see some reason that I don’t see right now.

    As an aside note, you will see that on page 4 the authors point out that their reconstruction indicates broad cooling around 1800 years ago, followed by warming up to around 600 years ago (the MWP), subsequent cooling to around 200-300 years ago (LIA) followed by warming to the present. Their MWP isn’t as warm as current temps, but nonetheless, not very hockey sticky. Also, in their conclusions, the authors state that because of ‘noise’, this reconstruction had to exclude data about the 20th century and so you can’t use this reconstruction to compare changes in the 20th century to changes in the MWP. I wonder if the same holds true for the entire period of the reconstruction, or only for the MWP.

  • Bill // September 13, 2008 at 4:53 am

    correction…. when I said “my eyes are good enough”, I meant “my eyes are not good enough”. Sorry.

  • Bill // September 13, 2008 at 5:04 am

    I am prompted to ask, based on that borehole reconstruction: is there any validity to the question of which reconstruction is wrong? That is, there are reconstructions out there that do not use PCA (presumably) and show a hockey stick. There is also this borehole reconstruction, which most certainly does not show a hockey stick (it does indicate recent warming). Well, there either was a hockey stick or there was not. Is the climate science community working to figure out which reconstructions are wrong, what is wrong with them, and what we can learn from those mistakes?

  • Jeff Id // September 13, 2008 at 5:16 am

    Ian Jolliffe

    I am fairly new to the AGW science world but not to science. In my digging regarding PCA, I happened across the anonymous review comments, posted on Climate Audit, on a paper submitted to Nature by McIntyre & McKitrick (MM) in 2004. The reviewer’s comments appear to be remarkably similar to your recent post on this blog.

    http://www.climateaudit.org/correspondence/nature.referee.reports.htm

    Most notably referee #1 in the first submission states an expertise in PCA. This of course narrows the field considerably.

    The comment which really struck me was this

    “I think I understand better than before what the MBH98 PCA is doing, namely centering the data about the mean of the 1902-80 period rather than of the whole series. The question is why, and what properties and interpretation does such a procedure have?”

    In my admittedly limited understanding of MBH98 PCA, this seems to be the same criticism in 2004 that you have posted here. Besides the comments about using a partial centering period, the phrasing of the “what” in the post here on this blog is a big flag.

    “I don’t know how to interpret the results when such a strange centring is used? Does anyone? What are you optimising? A peculiar mixture of means and variances?”

    So the easy question is:

    Were you in fact the reviewer of the MM paper to nature in 04?

    and more importantly:

    Now that you have had more time, has your position changed regarding the review?

    Thanks in advance for your reply.

    Jeff Id

    I left the link out in my first post, please delete it.

    [Response: Anonymous peer review is supposed to remain anonymous. Asking a scientist if he was the reviewer in a given case, strikes me as similar to asking your priest what you said in the confessional last week.]

  • Jeff Id // September 13, 2008 at 1:10 pm

    Tamino,

    I have been reading and enjoying your blog for a while now. It seems quite clear from the link in my previous post that Jolliffe was the reviewer, I was more interested in his current opinion.

    If Ian doesn’t want to respond he doesn’t have to, I think you’ll agree that he has shown the ability to take care of himself. Besides, the anonymity isn’t that important anymore given that the review took place over 4 years ago!

    I just have to add, comparison of a review of the statistics process in a Nature paper to confession is way over the top!

    [Response: Whether or not anonymity is still important, even after 4 years, isn't up to you or me. It's up to the reviewer whether or not he wishes to maintain anonymity.]

  • Hank Roberts // September 13, 2008 at 1:47 pm

    > borehole
    http://www.google.com/search?q=huang+pollack+shen+realclimate

    finds not only the rc thread but comments about it elsewhere from, er, various perspectives.

  • Jeff Id // September 13, 2008 at 4:37 pm

    Thanks for the reply,

    I agree that if Ian Jolliffe doesn’t want to explain his position he doesn’t have to. As I stated in my last post,

    “If Ian doesn’t want to respond he doesn’t have to”

    But as a skeptic (not a denier) I would like to hear from the expert on this matter. It is fairly important, as RealClimate and others put a great deal of emphasis (celebration) on the fact that this particular paper by MM was denied publication.

    Since the reviewer is likely (p<0.1) Ian Jolliffe, I think many of your readers who haven’t got a ‘horse in the race’ want to understand his position.

    Thank you again.

    [Response: Anonymous peer review is a matter of confidentiality. In my opinion, only the reviewer has the right to decide when that confidentiality is no longer necessary, and asking outright in a public forum is not right. Obviously you don't agree.]

  • Jeff Id // September 13, 2008 at 6:13 pm

    Tamino,

    I appreciate your publishing of my questions, I had originally feared you might not.

    Of course I don’t agree. Especially on a matter as important as this.

    There should be no emotion in this subject; it either is or isn’t the correct analysis method. The suggestion that one shouldn’t ask implies such a high degree of aloofness that it leaves me in shock. Once you have a PhD and review a paper, you are beyond question? These papers have a direct effect on millions of lives, something which scientists often miss.

    Of course we should ask, the question is reasonable and completely up to Ian Jolliffe how to answer.

    He has gone against his own stated belief in AGW on this one topic, showing the ability to divorce himself from the emotion and, tenuously, from the politics. To me this gives him the highest degree of credibility, and he deserves our accolades.

    For the rest of us mere mortals, we just want to know which experts and papers to follow. I followed much of your own very convincing analysis on this topic and I find myself now in a state of confusion.

    So again I unabashedly put forth my original questions.

    [Response: The truth or falsehood of AGW, and the correctness or faultiness of MBH98, are not properly subjects of emotional involvement. But attempting to unmask an anonymous reviewer in a public forum is, in my opinion, a breach of ethics.

    Also, there is no conflict between belief that MBH98 is faulty and belief that AGW is correct; the latter does not depend upon the former. Jolliffe has certainly not "gone against his own stated belief in AGW." That statement is nonsense.]

  • Jeff Id // September 13, 2008 at 8:18 pm

    I apologize for the less than clear comment about AGW.

    You are correct of course about AGW, my statement was unclear. I should have written it to say that he recognized the key support of this PCA type of analysis in the AGW community and still chose to represent math ahead of belief. This is what I meant.

    We disagree about the ethics of my question, I hope you will leave it to Ian Jolliffe to make the decision.

    Thank you again.

  • mikep // September 13, 2008 at 9:29 pm

    It’s time to move on here. Everyone should now be clear that there was no dominant pattern in the generality of MBH proxies. We know that what dominated what Mann called his first principal component was the bristlecone pine series, which are regionally specific (supported by the Gaspe cedars if necessary). What the MBH method did was to efficiently find hockey stick shaped patterns in the data. The instrumental global average temperature record is moderately hockey stick shaped over the period where proxies and instrumental methods overlap. A regression analysis (of some variety) then links a proxy largely based on the bristlecones to temperature, allowing backward “prediction” of temperatures using the regression coefficients. Essentially the same sort of results can be achieved without any use of principal components of any kind by using crucial series, e.g. the bristlecones, directly. So the big issue, which affects not just MBH1998 but virtually all the recent paleoclimate reconstructions, is the quality of the proxies. Can we all agree on this?

    Assuming agreement here, I have concerns around the following points on bristlecones, some of which extend more widely to other much used proxies.

    Are there physical/biological reasons for expecting bristlecone pines to be linked not to local temperatures but to global average temperatures, or were they chosen as proxies on opportunistic correlation grounds?

    The bristlecone pine data appears not to have been collected as a sample testing a relationship with global temperature, but instead to illustrate a presumed link with carbon fertilisation. How do such sampling procedures affect the use of the proxies for temperature reconstructions?

    There appears to be evidence that the anomalous growth of these samples may simply be caused by the stress of losing much of their bark, and have nothing to do with either temperature or carbon. Then any relationship with global temperature in the instrumental period would be a fluke just like the correlation of high tech stock prices with global temperature.

    Finally, supporting the fluke interpretation, recent resampling of the bristlecone trees and sites suggests that ring growth in these trees has not continued to be anomalously high in the post 1980s period when global temperatures have been rising.

    Surely these are the issues that are of relevance now, not any discussion of principal components (provided we do not repeat mistakes made and pointed out earlier).

    [Response: Considering that in the latest paleoclimate reconstruction the hockey stick appears clearly even with no tree ring data at all, I'd say "bristlecone pine" is rather irrelevant. It certainly is time to move on.]

  • ChrissyStarr // September 13, 2008 at 9:36 pm

    Tamino,

    Are you going to have that algebra discussion with Ian Jolliffe?

    [Response: We're in communication by email. It's a point which is, I assure everyone, irrelevant to the issue of MBH98 and to global warming.]

  • Gavin's Pussycat // September 14, 2008 at 9:33 am

    Jeff Id:

    Once you have a PhD and review a paper you are beyond question? These papers have a direct effect on millions of lives, something which scientists often miss.

    Oh no they don’t… scientists have children too. Hansen doesn’t miss it.

    …but what you’re missing is that no individual paper has such a direct effect. It is the body of the science to which the paper belongs, and of which the paper is only a small part. It either confirms, and is confirmed by, many other results in the same field, or it doesn’t — in which case it is soon forgotten.

    MBH98/99 has become a high profile paper partly because it was the first of its kind — deservedly — but also for political reasons — less so. But it’s not the Bible! Science doesn’t work that way.

    The history of science has shown what is the best — the only — way to figure out how things really are. It is through replication and peer review. That contains two words, ‘peer’ and ‘review’. Both are needed. You’re not a ‘peer’ of M, B or H, and won’t become one until you start publishing in the field. ‘Review’, like the demand for replication, is motivated by an awareness of one’s own limitations in both knowledge and emotion, even when having studied a subject for a lifetime. You cannot undermine that process without paying the price (yes, “millions of lives”). Your attempt to “out” Jolliffe counts for me as a small attempt to do precisely that. Fie!

  • apolytongp // September 14, 2008 at 12:02 pm

    Pussy: That would be fine if we had agreement on the faults of MBH. And then when denialists picked on it, alarmists could feel that they were picking on a corpse and what’s the point? But we have people like Mike Mann and Gavin who won’t even concede obvious points, like that the short-centering should have been described in the methods! People are too invested in the meta-debate to concede points. Which is NOT GOOD PRACTICE of science.

  • Jeff Id // September 14, 2008 at 3:22 pm

    Let me ask a different way then which should satisfy everyone involved.

    Ian Jolliffe,

    Can you please take a moment to review the comments made by Referee #1 in the first submission, and Referee #2 in the second, of the Nature paper submitted by MM as a criticism of MBH98/99?

    Link here
    http://www.climateaudit.org/correspondence/nature.referee.reports.htm

    Do you find yourself in general agreement with the comments made by this referee and can you expand on the comments made?

    I have also included a link to the final submission by MM to nature for your reference. It is quite well known and I feel many are now familiar with it and its criticisms.
    http://www.uoguelph.ca/~rmckitri/research/fallupdate04/MM.short.pdf

    It would be very helpful to the community in general and further the understanding of the science.

    Thank you in advance for your consideration,

    Jeff Id

  • Bill // September 14, 2008 at 5:04 pm

    Tamino,

    [Response: Considering that in the latest paleoclimate reconstruction the hockey stick appears clearly even with no tree ring data at all, I'd say "bristlecone pine" is rather irrelevant. It certainly is time to move on.]

    If you were to consider the borehole reconstruction recently referenced here on your blog, which clearly shows no hockey stick, how would that affect your statement on hockey sticks? Now, I have only heard of that paper on this blog and I have never read any other commentary on it (thanks to a previous poster for including a link, but I haven’t had time to follow up). But the paper was published by the AGU and it is pretty recent, and that seems to me to make it worth considering.

    I asked a previous question about what is being done when conflicting results occur and didn’t see a response. Personally, I think the only valid response is “We don’t know if there was a hockey stick or not.” Conflicting data leaves us no other choice. Please take this in the best possible light. I haven’t yet read anything that discredits that reconstruction (no one has done so here, yet), so if it has been discredited, I apologize. But if it is still considered accurate, then regardless of how many reconstructions show a hockey stick, if one shows no hockey stick, then something must be wrong. They can’t all be right.

  • Bill // September 14, 2008 at 5:13 pm

    Hank Roberts,

    I apologize if I am misreading something, but many of the links returned by that search are discussions of a previous study by those authors. The study in question was published in July of 2008, and many of those threads on RC and CA are from several years ago. Do you have some specific links to discussion about this particular paper? Thanks in advance.

  • Ray Ladbury // September 14, 2008 at 5:36 pm

    Bill, at most, the borehole data would just be one more series to be considered with all the others. For what it is worth, it would probably be given good weight, as it seems to reproduce the strong upsurge in temperatures we know to have occurred in the last century. It is that which is more relevant to our current situation than whether there is a “hockey stick”.

    I do not understand why people fixate on irrelevant details–you on hockey sticks, Jeff Id on the fate of a single particular paper in peer review. What matters for the science is whether an approach is fruitful or not. Right now all of the fruitful approaches indicate that temperatures are at their highest level in at least 2000 years, that the rise has been extraordinarily rapid and that anthropogenic ghg are to blame.

  • Ray Ladbury // September 14, 2008 at 5:39 pm

    Jeff Id: Hmm, given that any climate scientist who plays the CA game is likely to be subpoenaed to appear before some Congressional “Audit”, can you think of any reason why folks might be just a wee bit cautious about responding to your request? There is a reason why peer review is anonymous, and you guys are it.

  • Gavin's Pussycat // September 14, 2008 at 6:51 pm

    Yes Bill, it is well worth reading. There’s a copy on the first author’s website, link (posted earlier by David Benson)

    http://www.geo.lsa.umich.edu/~shaopeng/2008GL034187.pdf

    > which clearly shows no hockey stick

    Get new glasses. It replicates the Mann et al. hockeystick, see Figure 2.

    Do remember however that these boreholes are all on land (“from all continents except Antarctica”).

  • David B. Benson // September 14, 2008 at 10:47 pm

    Bill — Comparing temperature change rates in central Greenland to global ones is certainly only indicative of what else might be considered. The fast increases in the Holocene (central Greenland) were the end of the Younger Dryas (about 11,500 ybp) and the recovery from the similar, but much smaller, 8.2 kybp event. The GISP2 data ends at 100 ybp = 1850 CE, so there is no data for later dates; too bad.

    The 8.2 kybp event is of especial interest because it appears to have affected even the Pacific Warm Pool. This suggests that the changes then were common to at least the northern hemisphere. By way of contrast, the much larger Younger Dryas did not affect Patagonia or the Antarctic, although there is ample data to show that, at a minimum, the entire far north was affected.

    The ice cores from central and northern Greenland clearly show regional extreme changes in temperature, depending upon the SST of the far North Atlantic. So one way of connecting the paleorecord to the present would be to look at North Atlantic temperature products.

    But that the warming is in an important sense unprecedented is indicated by glacial melt; here is British Columbia:

    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtml

    Here are the Alps, with meltback to 5200–5500 years ago:

    http://news.bbc.co.uk/2/hi/science/nature/7580294.stm
    http://researchnews.osu.edu/archive/quelcoro.htm
    http://en.wikipedia.org/wiki/%C3%96tzi_the_Iceman

    Some of James Hansen’s papers make reference to the data showing that the Pacific Warm Pool is now warmer than at any time in the past hundreds of thousands of years.

    With regard to

    http://www.geo.lsa.umich.edu/~shaopeng/2008GL034187.pdf

    reread paragraph [26]; the 20th century is indeed included. I agree that borehole data does not resolve rapid changes in far past times; that is why I illustrate with ice core data from Greenland.

  • Hank Roberts // September 15, 2008 at 1:41 am

    Bill, narrow your search once you have the specific article identified. Examples:

    Citing papers:
    http://www.nature.com/cited/cited.html?doi=10.1038/35001556

    Searching on the DOI number will turn up discussion citing it properly (rare but useful):
    http://www.google.com/search?q=doi%3A10.1029%2F2008GL034187

    Scholar with “Recent” search set to the year of publication is often useful, but it takes some months before you’ll find much.

  • Bill // September 15, 2008 at 2:00 am

    Ray Ladbury,

    It doesn’t seem fair of you to point out how people are fixating. Tamino is our host here. He spent quite a bit of time discussing that paper that you refer to as “a single particular paper”. This is, after all, an open thread that our host created for the explicit purpose of discussing things global warming related. It doesn’t seem fair for you to try to embarrass him like that.

    Now, for my part, I agree with you. What is important now is identifying approaches that lead us to truth. I would imagine that a reconstruction would be important because of its ability to correctly reproduce temperatures during the time frame under question. I’m not all that impressed by a 20,000 year reconstruction that gets temperatures in the last 150 years correct (albeit one that cautions against comparing those recent results against results from the past). You say that it should get strong weight because “it seems to reproduce the strong upsurge in temperatures we know to have occurred in the last century”. Um, no offense to the authors, but big deal. I would like to think that it should get weight because it correctly reproduces temps during the last 20K years. One way I have of checking that is to compare it to other studies. So, I just decided to compare Figure 2 in that paper with Figure 5 from here: http://holocene.meteo.psu.edu/shared/articles/JonesMannROG04.pdf

    Now, I’m only visually comparing graphs, but despite my bad eyes, one of those graphs shows that past temperature had great variability. The other shows that it did not. One paper describes in great detail the changes starting 1800 years ago and goes out of its way to use terms like Medieval Warm Period and Little Ice Age. Other than the fact that it is warming now, these reconstructions have little in common. The big up and down and up swing shown in the borehole paper does not appear in Mann/Jones (although the borehole reconstruction looks like it might fall within the 2 sigma error boundary of the Mann/Jones paper). So, lots of people keep talking about hockey sticks (believe it or not, not just me). Was there a hockey stick or wasn’t there? Not everyone can be right. I happen to think that figuring out which ones are not right is a good step towards figuring out which ones are fruitful.

  • Bill // September 15, 2008 at 2:10 am

    Gavin, thanks for the tip that I should read the paper. I thought that by quoting it, I would be tipping people off that I had read it. Sorry for the confusion.

    I got real close to the computer screen, but I compared Figure 2 with Figure 5 from here: http://holocene.meteo.psu.edu/shared/articles/JonesMannROG04.pdf
    and Figure 3 from here: http://www.realclimate.org/index.php?p=10

    I’m sorry, but maybe we just have a different definition of “hockey stick”. I think it means the relatively flat lines from those latter two papers, which show that temps in the last 600 years didn’t vary by much more than .4 degrees until recent warming. You must think it means that graph from Figure 2 of the borehole paper, which shows temps in the last 800 years going from just slightly below the “zero mark” down to 1.0 degrees below zero and back up to the current warming. Depending on which of the borehole authors’ reconstructions you choose, the variation is even greater. If that is what you mean by hockey stick, then I guess we do agree.

  • Jeff Id // September 15, 2008 at 2:11 am

    Ray Ladbury said =

    “I do not understand why people fixate on irrelevant details–you on hockey sticks, Jeff Id on the fate of a single particular paper in peer review.”

    This strikes me as a bit false Ray, I’m sure you have noticed a small group of people who have spent years proving over and over that this exact paper had this exact statistical problem.

    Perhaps you noticed the repeated rejections of their submissions and the overwhelming circular celebrations of these failures at real climate and other places.

    Perhaps you noticed the lengthy posts by Tamino, which were excellent but they were refuting the same hockey stick we’re talking about.

    Sure, you can change direction and say what about this other evidence, but that is not currently the question. What we’re staring in the face here is a potential vindication of the CA guys’ entire focus.

    I, for one, would like to know who to believe. This issue needs to be put to bed!

  • Bill // September 15, 2008 at 2:18 am

    David B. Benson, I reread paragraph 26. That appears to be referring to a different study (HPS00) which is based on an entirely different set of data. I took the wording of paragraph 25 to be discussing the data that was originally used for HPS97, but then reused for this study. If that interpretation was not correct, mea culpa.

  • Bill // September 15, 2008 at 2:25 am

    Gavin, to save both our eyes a little bit, I took this bit of text from the borehole study:

    The LIA temperature minimum shows an amplitude about 1.2 K below the MWP maximum, and about 1.7 K below present-day temperatures.

    Now, they have the LIA at about 600 years ago. If you look at either that Mann/Jones graph or the Realclimate graphs, they don’t have anywhere near that level of variability. Both show a maximum of about .4 degrees below the reference, not 1.0 degrees. If there is another “hockey stick” that shows temps dropped and then rose by 1.0 and 1.7 degrees respectively, please send me a link.

  • Jeff Id // September 15, 2008 at 3:59 am

    Bill,

    Check out this plot on my blog. Of the next paper I am fixated on :)

    You might see something interesting and relevant to your questions.

    And Please don’t worry too much about my embarrassment. Like Ian, I can take care of myself.

    http://noconsensus.wordpress.com/2008/09/15/mann-08-the-missing-data/

  • Gavin's Pussycat // September 15, 2008 at 11:10 am

    Bill,

    I think I see the cause of your confusion. As I pointed out, “Do remember however that these boreholes are all on land”.

    Reading your Mann/Jones link, I see that that isn’t even the whole story: the original borehole curve was produced by arithmetic averaging and doesn’t even take the latitude effect (“polar amplification”) into account.

    See Figure 3 of Mann/Jones. Doing this properly (i.e., accounting for both latitude and “spatial variability”, presumably capturing the land/ocean amplification) reduces the borehole data variability quite a lot, as shown by the red curve.

    I am not an expert on these things but this much was clear to me :-)

    BTW also consider that the “zero line” is not the same in all graphs you are comparing.

    [Response: One should also remember that borehole reconstructions have a larger uncertainty as one goes more than a few centuries back in time, and as one goes further back that uncertainty grows rapidly. My understanding is that they're reliable for the last few centuries, but beyond that the probable error becomes much larger than that of other proxies. They also have very large uncertainties in the *time* for which the reconstructed temperature applies.]

  • Bill // September 15, 2008 at 5:38 pm

    Gavin,

    A couple small things… First, the Mann paper notes that the borehole paper from 2000 had the error and Rutherford and Mann corrected for it. I’m not convinced that HSP08 has the same error. Based on Tamino’s proposition that we give authors of the written word the best possible interpretation, I’m not willing to assume that they made the same mistake a second time. Granted, they are using the same dataset, but the authors explicitly state that they are generating a new reconstruction. Like I said, I haven’t spent loads of time researching the discussions on this paper. If you have found something that I haven’t seen yet, I’d love it if you shared…

    Regarding the centering… I’m not sure I understand that. My point was the great variability. I understood the defining characteristics of a hockey stick to be the long flat handle followed by the sharp uptick at the blade. Assuming that the borehole reconstruction doesn’t use PCA, its center shouldn’t affect the magnitude of its variability or the non-hockey-stickness of the handle. At the very least, we are talking about the time frame in which the borehole reconstruction would be considered accurate (per Tamino).

  • Hank Roberts // September 15, 2008 at 6:16 pm

    http://www.realclimate.org/index.php/archives/2008/08/north-pole-notes-continued/langswitch_lang/tk#comment-98548

  • Lazar // September 15, 2008 at 7:48 pm

    Bill;

    Was there a hockey stick or wasn’t there? Not everyone can be right. I happen to think that figuring out which ones are not right is a good step towards figuring out which ones are fruitful.

    Being or not being a ‘hockey stick’ is so far a subjective, qualitative binary. How do you then determine and justify a precise, quantitative threshold? How do you determine that a reconstruction is empirically ‘right’ or ‘wrong’ from that arbitrary choice? I would much rather think along the lines of degree of ‘hockey-stickness’, and that all reconstructions are wrong to differing degrees. And using reconstructions to construct an average with error bars, if necessary discarding outliers. Or taking a Bayesian approach. I think this is much more empirically sound and informative than the binary ‘hockey stick’ or not, therefore ‘right’ or ‘wrong’.

  • apolytongp // September 15, 2008 at 7:57 pm

    Bill: In general, the boreholes only go back about 400 years. So they don’t shed light on the MWP versus recent warming question.

  • Lazar // September 15, 2008 at 7:58 pm

    Bill;

    If there is another “hockey stick” that shows temps dropped and then rose by 1.0 and 1.7 degrees respectively, please send me a link

    Mann et. al 2008, supplemental information Fig. S5(e); EIV global land. LIA minimum ~ 1.0 deg C below MWP maximum, and ~ 1.5 deg C below present.

  • David B. Benson // September 15, 2008 at 9:58 pm

    Bill // September 15, 2008 at 2:18 am — Yes, paragraph 25.

    Bill // September 15, 2008 at 2:25 am — The borehole temperature difference from MWP maximum to LIA minimum approximately agrees with GISP2 central Greenland; there the change was about 1+ K.

  • Gavin's Pussycat // September 15, 2008 at 10:01 pm

    Bill,

    now you made me again read the HSP08 manuscript :-) I don’t find anything in it suggesting that they have taken either latitude or spatial location into account in their processing, rather to the contrary — they base themselves heavily on their earlier (1997, 2000) results. The extra processing involved would be nontrivial, so I would certainly expect them to describe it had they done so.

    Also, the authors themselves are clearly of the opinion that their results are “generally consistent” with those of other proxies as summarised in the latest IPCC report — they state so repeatedly. Who are we to question them?

    BTW for me “hockey-stick-ness” is defined by late 20th century decadal-plus being clearly warmer than whatever MWP-like feature the data turns up. This property is shared by the various Mann et al. results and HSP08, among many others.

  • Gavin's Pussycat // September 15, 2008 at 10:27 pm

    …and for absolute clarity (did I point this out already?) I believe that HSP, in their claim that they are compatible with other proxies, refer to “land only” reconstructions, not “land + ocean”. The former have a much larger variability (IIRC as much as twice) and are the proper target for comparison with “land only” boreholes.

    I remember a long discussion with one Rod B on realclimate who claimed that one press release by the American Met Society was grossly exaggerated, and in the end it turned out that the AMS release spoke about the global land-only record and Rod had been staring at land + ocean… it is very easily missed.

  • Hank Roberts // September 15, 2008 at 10:42 pm

    Bill, see William:

    Category: climate science
    Posted on: September 15, 2008 4:38 PM, by William M. Connolley

    By S. P. Huang, H. N. Pollack, and P.-Y. Shen, also known as HPS. This is a very interesting paper. To understand why, you’ll need to at least browse The borehole mystery and More boring.
    (links in original, see below)

    http://scienceblogs.com/stoat/2008/09/a_late_quaternary_climate_reco.php?utm_source=hotness&utm_medium=link

  • Bill // September 15, 2008 at 10:53 pm

    apolytongp,

    Well, now I’m confused. The authors (as well as any reviewers from AGU) of HPS08 seem to think that boreholes are good to 20,000 years. Tamino says 2000 and you say 400. Anyone else got any bids?

    [Response: I don't recall saying 2000.

    The thing to keep in mind is that borehole reconstructions become less accurate, the further back in time you go. So it may be "good to 20,000" but only with an accuracy of +/- a few degrees, even though it's "good to 400" within a few tenths (I don't know the actual numbers).]

  • Hank Roberts // September 15, 2008 at 10:59 pm

    Argh. Didn’t know the header had gotten caught in the Stoat clip; you can delete/ignore down to the “Posted on” line.

  • Bill // September 15, 2008 at 11:15 pm

    Well, a great deal of the recent discussion here looks like it boils down to what people mean by the words “hockey stick”. I guess I felt that it meant that temperatures for the last couple thousand years (or less) were pretty static around a mean a half degree or so below the 1960-1990 average. Then, in the last century (or less) they took a sharp upward turn. It seems that most people here seem to think that a hockey stick doesn’t have much of a handle, and that the only part of import is the blade (i.e. that upturn at the end).

    Well, if that is the definition we are going to use, then fine. But, it sure doesn’t seem like that is much to get excited about. Who cares if a reconstruction shows that? We have temperature records that show that! When people talked about the hockey stick of MBH98 being replicated in many other studies, I thought that it was that long handle part made famous in AIT that they were talking about. Hell, sorry if I have been making much ado about nothing. Of course the temperature has been rising for the past 100 years or so. We even agreed here that HADCRU data had shown that.

    David Benson, I have to admit. I’m confused. It seems to me that you keep comparing global data with information from central Greenland. I’m not sure why. I’ve tried to follow your links, but the only thing I can figure out is that you are claiming that because central Greenland happens to match global data in some key places, that it is safe to do so in all cases, so that is what you are doing and you are considering comparing global data and Greenland data “apples to apples”. Well, that just seems wrong to me. I must have misunderstood you.

    [Response: What about the graphs in this post? Are they "hockey stick" shaped? Are they anything to get excited about?]

  • apolytongp // September 16, 2008 at 1:13 am

    Jolliffe-stuff: I made a comment on a relevant thread of RC, related to the concept of showing the methods (the short-centering). It was not allowed on. Tammy, appreciate that you allow more free discussion with opponents.

    New stuff: Steve M. has a long post about “pea under thimble”, which could much more easily be summarized as “Mann’s method is robust to removal of a single factor of concern, but not two factors of concern”. Maybe it even should be. (Robust to those two, that is. Or maybe either of them should not be in there for other reasons.) But, my basic point is that McI describes stuff with an intricate narrative that is not needed. The concept is really very clear. When I see him do this, he comes across as argumentative. As lawyerly. As blog-warriorly. As sophomore in college evasivish. Rather than as cut to the chase, simplify things with intellectual insight…ish.

  • apolytongp // September 16, 2008 at 1:15 am

    McI post referred to above: http://www.climateaudit.org/?p=3642

    (wish we could edit posts on this board.)

  • apolytongp // September 16, 2008 at 3:01 am

    I think for the sort of thing we are talking about (a few tenths degrees), the boreholes are good to about 400 years. There is a decent review by Mann on boreholes. Also some good stuff by Zorita.

    Boreholes are kind of a wonderful measurement in that they are direct and lack the confounding of precip and the like from tree rings. However, they have poor dating, poor resolution (don’t record annual variation)…and (TCO thing), I sorta really wonder if they can work, can really give a picture of temp with all the time and conduction and the like.

    I am not a proxy expert. Just a climate fan of the blogs, but the proxy I’ve seen that impressed me the most was coral. It had very nice “wiggle matching” calibration to temperature excursions during the instrumented period. Much better than tree rings.

    [Response: Another confounding thing about boreholes is that they don't indicate surface air temperature, but the *ground* surface temperature. But they are a direct indicator of temperature.]
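
    The “time and conduction” worry above can be made concrete with the textbook half-space conduction result: a surface temperature step ΔT applied t seconds ago appears at depth z as ΔT·erfc(z / (2·sqrt(κ·t))). The smearing grows as sqrt(t), which is why short-lived excursions far in the past are essentially unrecoverable from a depth profile. A minimal sketch, with illustrative numbers (κ = 1e-6 m²/s is a typical rock diffusivity; the helper name is made up):

```python
# Why borehole resolution degrades back in time: a surface temperature step
# diffuses downward as erfc(z / (2*sqrt(kappa*t))), smoothing out fast
# wiggles. Illustrative numbers; kappa ~ 1e-6 m^2/s is typical for rock.
import math

kappa = 1.0e-6                    # thermal diffusivity, m^2/s
year = 3.15576e7                  # seconds per year

def step_response(z, t_years, dT=1.0):
    """Subsurface anomaly at depth z (m) from a dT surface step t_years ago."""
    t = t_years * year
    return dT * math.erfc(z / (2.0 * math.sqrt(kappa * t)))

# A 100-year-old step still varies sharply over tens of metres of depth...
print(step_response(20, 100))
# ...while a 10,000-year-old one is smeared over hundreds of metres.
print(step_response(20, 10000), step_response(500, 10000))
```

    Running it shows the century-old step decaying noticeably by 20 m depth, while the ten-millennium-old step is still more than half its surface amplitude at 500 m — smooth profiles like that are why fast variation far back cannot be inverted out.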

  • Hank Roberts // September 16, 2008 at 3:57 am

    I keep tellin’ people, okay, there’s that little bump in the middle/Medieval, so it’s not a hockey stick.

    And we know we haven’t seen more than a portion of the ‘blade’ show up in the picture yet.

    _This_ is what it resembles, as we get the picture developing:

    http://content.answers.com/main/content/img/Gardeners/f0226.jpg

  • Hank Roberts // September 16, 2008 at 4:11 am

    Following up the earlier pointer, more detail, thanks and a hat tip to Timothy Chase who followed up there:
    http://www.realclimate.org/index.php/archives/2008/08/north-pole-notes-continued/langswitch_lang/tk#comment-98583

    I can’t post the actual pointer, apparently — I just get a message saying “discarded” when I try.
    Don’t let the facts spoil a good story
    by Ben Goldacre
    The Guardian, Saturday September 13 2008

  • Gavin's Pussycat // September 16, 2008 at 4:49 am

    apocalypsenow: nah… there’s enough historical revisionism as it is.

    Bill: “None so blind as he who will not see”

  • Ian Jolliffe // September 16, 2008 at 8:24 am

    I was clearly naïve to think that my foray into this forum would be brief. Since I last looked at the website my views on two topics have been asked for and I also owe email replies to two people regarding issues raised. [Having drafted this and gone back to the website again, I see there is a third issue but that can wait]

    The two topics for which my opinion is sought are PCA and non-stationary data, and RE vs R-squared. I feel reasonably well qualified to comment on the first, but not the second.

    My original comment: ‘Of course, given that the data appear to be non-stationary, it’s arguable whether you should be using any type of PCA’ was something of a throwaway remark. In its purest form PCA is relevant to (multivariate) data that are independent and identically distributed. In particular the latter implies that all data points have the same mean and the same covariance matrix. There are statisticians who would insist that these are the only circumstances in which PCA is relevant and indeed would go further and require multivariate Gaussianity as well. I am much more pragmatic than that.

    Using PCA on time series data violates either the independence assumption or the identical distribution assumption or both. So what should we do about it? One approach is to find alternative methodologies that acknowledge the time series nature of the data. Chapter 12 of my book describes quite a few of these though, given the large number of disciplines in which PCA is used, it is inevitable that I missed some, and others have been suggested since 2002. For example, Hannachi (2007) Int. J. Climatol, 1-15, suggests a variant of PCA that is geared towards looking for trends in time series.

    The other possibility is to go ahead and use ‘ordinary’ PCA on the time series data. As noted above, I’m a pragmatic statistician, and believe this is OK, provided that there is at least some understanding of what is being done and whether PCA might do unexpected things because of the structure in the data. The problem is that rather than using PCA to explain the main sources of variation in a covariance matrix that is common to all the data, we are now describing the main sources of variation in a complex covariance structure that might have contributions from trend, different groups of observations with different covariances, and so on. Ideally, the best way forward would be to model the mean and covariance structure in data in some way, but if all that is wanted is a low-dimensional descriptive summary of the data then PCA will do this just fine. However, if we then go on to interpret the PCs or use them in a non-descriptive way we need to be very careful that we have understood the implications of the underlying covariance structure – for example, might it produce PCs that give undue weight to particular data points, or that emphasise some parts of the structure but hide others that are no less interesting?

    R-squared and RE: Sorry but I don’t have an opinion on this. Shortly before I entered this discussion I read through Ammann & Wahl (2007) in an attempt to bring myself more up-to-date with matters related to ’short segment centring’ (I agree that this is a better description than ‘decentring’ – it’s a shame it’s such a mouthful) and related hockey stick matters. It soon became apparent that to get close to understanding the details of the current debate (seeing the wood for the trees) I would need access to a non-trivial number of other papers, not to mention supplementary material. Simply getting hold of all of these in my retirement would be a challenge, whilst reading them all would need a large investment of time. So I do not expect to have an informed opinion on this in the near future.

    Finally, I’m still waiting to hear about references which use centrings other than column-centring, row-centring, double-centring or no centring, especially pre MBH. If they are out there I would really really like to know about them.

    [Response: It's the nature of forums like this, that if once you open pandora's box it won't be closed again; your comments were bound to elicit more questions about more topics, some more relevant than others.

    We're all grateful for your contributions! But I hope such queries are viewed as opportunities rather than obligations.]
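
    The centring question in the discussion above is easy to demonstrate qualitatively on synthetic data. The sketch below is not MBH’s actual algorithm or data — the helper name and all numbers are made up for illustration. It centres the same noise matrix two ways and compares the leading PC’s loading on a series that happens to have a late-period excursion:

```python
# Sketch: contrast full-period ("column") centring with short-segment
# centring in PCA on synthetic series. One noise series is given a
# late-period step; short centring tends to promote it to PC1.
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 10
X = rng.normal(size=(T, N))          # ten "proxy" series of white noise
X[-50:, 0] += 2.0                    # series 0 steps up in the last 50 steps

def leading_pc_weights(data, center_rows):
    Xc = data - data[center_rows].mean(axis=0)   # centre on chosen sub-period
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return np.abs(vt[0])             # loadings of the first PC

full = leading_pc_weights(X, slice(None))        # centre on the full period
short = leading_pc_weights(X, slice(-50, None))  # centre on the last 50 only

print("weight on stepped series, full centring :", full[0])
print("weight on stepped series, short centring:", short[0])
```

    With full-period centring the stepped series is just one modestly inflated column among ten; centred only on the late sub-period, its large offset over the rest of the record dominates the leading PC. That is the general effect being debated, stated without taking a side on any particular reconstruction.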

  • Ray Ladbury // September 16, 2008 at 2:20 pm

    Ian Jolliffe,
    You can count me among those who appreciate your input.

  • apolytongp // September 16, 2008 at 2:41 pm

    Jolliffe:

    I enjoy your coming by. Also respect Tammy for having you on here as well.

    I’m also “TCO”. How come no “love” for me? I asked in that PCA part 4 thread (halfway through) if your work was being properly interpreted. I think I hit the nubbin of what we needed to be careful about when the Mannites were citing your Powerpoint (aside: a Powerpoint! as a reference?) as justification for short-centering. I don’t know matrix theory or linear algebra…so the partisans on both sides try to squash me. But I think I home in on some loose threads in both the defender and attacker camps’ analyses.

    Really respect your position, that understanding the arguments is not trivial, would require reading papers and source codes and SIs and the like. But that does leave the question of where the climate (or even stats) field should be in evaluating this stuff. I think it would be neat to hear a science interview from you on the more “philosophy of science” issues here. For instance, how should results be published (or blogs desirable). Does Wegman have a valid point about disconnect of climate from mainstream stats workers (this is an old topic going back to a famous Hotelling article on the role of the academic statistician.) Should full disclosure of methods be made (exact algorithms/code/data)?

    Personally, I find it very unfortunate that McI does not publish his work, does not really try to probe for revealing insight…into the nature of the algorithms…but instead looks too much for “gotchas” and writes them up in a manner that does not make math clearer…but is more designed to try to make Mann look bad.

    Similarly, Mike Mann gives me the “smell” of a young Turk academic. Hopping onto a hot field. Jumping from school to school for promotions. Too defensive of his work. And skirting the boundaries a bit in terms of reading things into the data by complicated methods. Just my feel.

  • kevin // September 16, 2008 at 2:50 pm

    Hank, I’ve also been telling people the shape is more like the tool you suggest than like a hockey stick. I try to resist making any symbolic point out of it, since that really could be seen as ‘alarmist,’ but the shape’s a good match.

  • Jeff Id // September 16, 2008 at 3:00 pm

    Thank you Ian,

    Your response is much appreciated.

  • Hank Roberts // September 16, 2008 at 5:36 pm

    Jeff ID writes above:
    > I, for one, would like to know who to believe.

    Jeff ID is proprietor of a website declaiming “noconsensus”

    Teach the controversy, eh?

  • Lazar // September 16, 2008 at 5:44 pm

    Bonus stats question (Bayesian estimator for Binomial proportion, anyone who can help pleeeease)…
    The MSE for the estimator theta_hat of the population parameter Theta;
    MSE(theta_hat) = Bias^2(theta_hat) + Var(theta_hat)

    y, the number of successes in n trials, has a Binomial(n, Pi) distribution.

    The prior distribution of Pi is Beta(1, 1), giving a Beta(1 + y, 1 + n – y) distribution for the posterior.

    The Bayesian point estimator for Pi is the mean of the posterior distribution, pi_hat, which is just the mean of Beta(1 + y, 1 + n – y) and works out as…
    pi_hat = (y + 1)/(n + 2)

    … that’s for one random draw.
    Over all possible draws, the mean of the sampling distribution of pi_hat is obtained by substituting n * Pi for y in the above.

    The bias part of the MSE is simply the square of (mean of sampling distribution of pi_hat – true value Pi), which equals
    Bias^2(pi_hat) = ((n * Pi + 1)/(n + 2) – Pi)^2

    Now the Variance of pi_hat over all possible samples…
    I know the answer is…
    ((1/(n + 2))^2) + n * Pi * (1 – Pi)
    … but I haven’t the foggiest idea how to get there.
    The n * Pi * (1 – Pi) part is the variance of y over all possible samples of size n if that helps?
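
    The mean and bias pieces above are easy to sanity-check by simulation — a sketch with arbitrary illustrative values of n and Pi, verifying that the sampling mean of pi_hat matches (n * Pi + 1)/(n + 2):

```python
# Monte Carlo check of the sampling mean of pi_hat = (y + 1)/(n + 2)
# for y ~ Binomial(n, Pi). n and Pi are arbitrary illustrative values.
import random

random.seed(0)
n, Pi, trials = 50, 0.3, 100_000

pi_hats = [(sum(random.random() < Pi for _ in range(n)) + 1) / (n + 2)
           for _ in range(trials)]
mean_hat = sum(pi_hats) / len(pi_hats)

expected = (n * Pi + 1) / (n + 2)   # substitute E[y] = n * Pi, as above
bias_sq = (expected - Pi) ** 2      # the squared-bias term of the MSE
print(mean_hat, expected, bias_sq)
```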

  • apolytongp // September 16, 2008 at 6:32 pm

    For Pete: Has the tree-ring data gathered from the CA car-bashing-up, coffee-drinking expedition been filed with an archive?

  • Hank Roberts // September 16, 2008 at 8:38 pm

    > Hannachi 2007

    adsabs.harvard.edu/abs/2007IJCli..27.1119H

    Empirical orthogonal functions and related techniques in atmospheric science: A review
    A Hannachi, IT Jolliffe, DB Stephenson …

    International Journal of Climatology, vol. 27, issue 9, pp. 1119-1152. Publication Date: 07/2007

    Cited by quite a few, including familiar names (more such found by searching in Scholar than by simply checking “Cited by” links)

  • Hank Roberts // September 16, 2008 at 8:40 pm

    P.S., you’ll have to supply your own
    aitch tee teep ee colon slash slash
    before the “adsabs” line there — the improved spam filter appears to reject anything that includes the preface usually used to create a clickable link.

  • David B. Benson // September 16, 2008 at 9:33 pm

    Bill — The borehole temperature reconstruction is some form of average over many boreholes. Each borehole only changes temperature in response to local conditions.

    I just noted that central Greenland has temperature changes from MWP to LIA which are in rough agreement with the land-only borehole reconstruction (as opposed to the land+sea reconstructions).

    Earlier I used central Greenland to attempt to illustrate that the warming of the last 100+ years is extremely rapid, if not entirely unprecedented.

    Apologies for not making the motivations more clear.

  • Lazar // September 17, 2008 at 2:58 am

    Wrong…
    The variance of pi_hat is
    (n * Pi * (1 – Pi))/(alpha + beta + n)^2
    … where alpha and beta are the parameters of the Beta(1,1) prior hence…
    Var(pi_hat) = (n * Pi * (1 – Pi))/(n + 2)^2
    … dividing the frequentist variance of y by the squared sum of the prior parameters and the sample size. Sorry to bug everyone, but can anyone explain why?
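
    The “why” is just linearity of the estimator in y: for constants a and b, Var(a*y + b) = a^2 * Var(y), and here pi_hat = (y + 1)/(n + 2) has a = 1/(n + 2) = 1/(alpha + beta + n). Hence Var(pi_hat) = Var(y)/(n + 2)^2 = n * Pi * (1 – Pi)/(n + 2)^2. A quick simulation sketch with illustrative n and Pi confirming:

```python
# Verifying Var(pi_hat) = n*Pi*(1 - Pi)/(n + 2)^2: pi_hat is linear in y,
# so its variance is Var(y) scaled by (1/(n + 2))^2. Illustrative n, Pi.
import random

random.seed(1)
n, Pi, trials = 40, 0.6, 100_000

ys = [sum(random.random() < Pi for _ in range(n)) for _ in range(trials)]
pi_hats = [(y + 1) / (n + 2) for y in ys]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

theory = n * Pi * (1 - Pi) / (n + 2) ** 2
print(var(pi_hats), theory)
```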

  • Timothy Chase // September 17, 2008 at 3:30 am

    Hank Roberts wrote:

    adsabs.harvard.edu/abs/2007IJCli..27.1119H

    Empirical orthogonal functions and related techniques in atmospheric science: A review
    A Hannachi, IT Jolliffe, DB Stephenson …

    International Journal of Climatology, vol. 27, issue 9, pp. 1119-1152. Publication Date: 07/2007

    Looks fascinating. It includes some of the history of developments, compares different approaches (e.g., oblique and orthogonal rotation, the synchronic and diachronic, and a whole host of things I haven’t the foggiest as of yet) — and shows how EOFs are used in the identification of propagating waves and climate oscillations as well.

    I don’t believe I am up to speed for this paper — but I believe it would be something worth aiming for.

  • Timothy Chase // September 17, 2008 at 3:47 am

    A sincere thank you from me as well, Ian.

    And my apologies for having misunderstood you earlier — frankly I embarrassed myself — which is why I thought that the most productive thing I could do for a time was simply to bow out. But I believe I now have a better understanding of the points you were making — and questions of my own.

    Incidentally, I looked at the first few links for decentered PCA and found that they may very well be simply noncentered, not decentered, and this may be the case with all of them. (Might “uncentered” be a better term than “noncentered”? It might convey that centering was simply skipped better than “noncentered” does.) Difficult to tell simply from looking at the abstract. But there are a couple of libraries I can check when I get the chance.

  • Ian Jolliffe // September 17, 2008 at 8:01 am

    After some thought, I have decided to ‘come out’, for three reasons. The first is that it is fairly obvious I was the Nature reviewer, and the second is that I’d like to think that when I write a review, there is nothing in it that I can’t defend. A third reason is to warn others about how memory can let you down, especially when you are not as young as you used to be.
    I see nothing in the two reviews (Reviewer 1 first submission; Reviewer 2 second submission) that I would change with hindsight. Indeed some of my recent comments are remarkably similar despite not having read through these reviews in detail for 4 years.

    Looking back, I was interested to note the chronology. The first review was written in February 2004 and it is clear that I didn’t understand what MBH were doing at that time. The second review was in July 2004 when I said I thought I did understand, but the notorious Powerpoint presentation was in May 2004 when I had yet to see the light.

    Now for the scary memory bit. In July 2004 I clearly thought I understood what MBH were doing, though not why or how to interpret it. However, I must have felt that other things, involving less investment of time, were more interesting and moved on. Apart from another reviewing task a year later, I was unaware of the fierce controversy raging, until earlier this year when a co-worker and I started investigating algebraic relationships between PCAs with different centrings. Looking for examples, we revisited MBH, and I was genuinely surprised when my co-worker told me that MBH had not done an uncentred analysis, but something else. I had forgotten what I learnt four years earlier. So I was wrong in saying in an earlier posting ‘it was only fairly recently that I realised the exact nature of decentred PCA’; rather it was a case of being reminded. As I said, scary!

  • Barton Paul Levenson // September 17, 2008 at 1:02 pm

    apolytongp writes:

    Mike Mann gives me the “smell” of a young Turk academic. Hopping onto a hot field. Jumping from school to school for promotions. Too defensive of his work. And skirting the boundaries a bit in terms of reading things into the data by complicated methods. Just my feel.

    apolytongp gives me the “smell” of an AGW denier. Hopping onto a denialist strawman. Jumping from blog to blog for publicity. Too defensive of his ignorance. And skirting the boundaries a bit in terms of reading ad hominems into the personality of a complete stranger whom he knows nothing about. Just my feel.

  • dhogaza // September 17, 2008 at 3:26 pm

    Jumping from school to school for promotions.

    Is there something wrong about accepting promotions?

    Triple-A player to boss … “no, sir, I do not accept being promoted to the Major League club, nor do I accept the 10x salary increase that comes with it …”

    That’s one of the stranger ad homs I’ve read in my life.

  • Hank Roberts // September 17, 2008 at 3:33 pm

    Thanks, Dr. Jolliffe.

    Yep, memory fails. I think you’d enjoy this, from another longtime scientist who recently started blogging.

    Add the usual aitch tee tee pee colon slash slash, which the spam filter doesn’t allow this week:

    moregrumbinescience.blogspot.com/2008/09/shared-knowledge-and-sources.html

    “… Science is not only about knowledge, but about shared and sharable knowledge.

    If someone can’t share the source or support for their knowledge, it isn’t science for you. They might be right. But without that sharability, it isn’t science. The more interesting, surprising, or important the point is, the more important you be able to follow up someone’s first comment with a source….”

  • David B. Benson // September 17, 2008 at 9:30 pm

    Tamino — Time for Open Thread #6, methinks.

    [Response: Right you are. And so it's done.]

  • Like gas stations in rural Texas after 10 pm, comments are closed.