Open Mind

Open Thread #7

October 23, 2008 · 213 Comments

Open thread #6 is getting pretty big, which can make the page load slowly for some users. So here’s open thread #7.

Categories: Global Warming


  • johnG // October 23, 2008 at 2:48 pm

    Tamino,
    Thank you for your blog, and in general, thanks to all of the scientists/bloggers for this free college education you offer.
    Question: I’m creating an animated illustration of the Milankovitch cycles and am having trouble understanding the orbital characteristics that correspond to the precession values I find under this topic in Wikipedia. For example, Wikipedia shows precession changing from -0.06 to 0.06. What does this number describe? (I understand tilt and eccentricity, but my artist background fails at precession).

    Thanks in advance.

    [Response: The quantity called "precession" in climate is the product of the eccentricity and the sine of the angle between the longitude of perihelion and the longitude of the equinox; it's an indicator of the magnitude of the impact on high-latitude climate forcing due to precession of the equinoxes (in the astronomical sense). There's more about the topic here and here.]
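
    In symbols, the climatic precession index is e·sin(ϖ): eccentricity times the sine of that angle. Here is a minimal numerical sketch (the function name and the present-day example values, e ≈ 0.0167 and ϖ ≈ 102.9 degrees, are illustrative assumptions, not from the response above):

    import numpy as np

    def precession_index(eccentricity, lon_perihelion_deg):
        # Climatic precession index e*sin(varpi): eccentricity times the
        # sine of the angle between the longitude of perihelion and the
        # longitude of the equinox.
        return eccentricity * np.sin(np.radians(lon_perihelion_deg))

    # Present-day orbit: e ~ 0.0167, perihelion ~102.9 deg past the equinox.
    print(precession_index(0.0167, 102.9))  # about 0.016

    Since eccentricity never exceeds about 0.06 over the Milankovitch cycles, the product is bounded by roughly ±0.06, which is why the Wikipedia curve runs between those values.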

  • Ben Lankamp // October 23, 2008 at 6:27 pm

    A few weeks ago I was looking for globally averaged radiosonde temperature data from the various homogenization efforts available, on a monthly timescale. Although this data is available, I found it is not in a ready-to-use format, e.g. for importing into R. So I decided to gather the raw data and create a data file with the five major homogenizations of recent years. I have put the file online for download (see link) for anyone who might be interested in it as well. You may notice that RATPAC is not in the file; I am working on that. Notice #1: there is no guarantee this compilation file will be updated in the future; right now data are available up to June 2008 (HadAT2). Notice #2: there is no guarantee this file is without errors, but I did find long-term trends were not different from those mentioned in the related articles.
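
    For anyone who wants the same thing in Python rather than R, a minimal sketch (the file name and column names are placeholder assumptions; check the actual file’s header first):

    import numpy as np
    import pandas as pd

    # Assumed layout: whitespace-delimited text with a header row of
    # year, month, then one anomaly column per homogenization (e.g. HadAT2).
    sondes = pd.read_csv("radiosonde_compilation.txt", sep=r"\s+")

    sub = sondes.dropna(subset=["HadAT2"])
    t = sub["year"] + (sub["month"] - 0.5) / 12.0   # decimal years
    slope, intercept = np.polyfit(t, sub["HadAT2"], 1)
    print(f"HadAT2 long-term trend: {10 * slope:.3f} deg C per decade")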

  • chriscolose // October 24, 2008 at 3:57 am

    Two things Tamino

    Could you briefly reply to Lucia’s new post on the Santer paper
    http://rankexploits.com/musings/2008/santer-method-applied-since%20-jan-2001-average-trend-based-on-38-ippc-ar4-models-rejected/

    I know it’s Lucia, but I wonder about her “merging” of three different data sets with different base periods and her discussion of volcanic forcing.

    Secondly, do you have any primary references on discussion of Milankovitch cycles from a more mathematical basis?

    C

  • David B. Benson // October 24, 2008 at 7:09 pm

    chriscolose // October 24, 2008 at 3:57 am —

    http://en.wikipedia.org/wiki/Milankovitch_cycles

    has lots of references. Try some.

  • David B. Benson // October 25, 2008 at 12:32 am

    I just remembered that LeGrande et al., with Gavin Schmidt as one of the co-authors, did a model study of the 8.2 kybp event, considering this event to have enough proxies to be well characterized. I obtained a copy of

    Consistent simulations of multiple proxy responses
    to an abrupt climate change event
    A. N. LeGrande, G. A. Schmidt, D. T. Shindell, C. V. Field, R. L. Miller, D. M. Koch, G. Faluvegi, and G. Hoffmann

    from

    http://pubs.giss.nasa.gov/abstracts/2006/LeGrande_etal.html

    To the extent that I can follow this paper, it appears that the event onset (cooling) proceeded at over -1 K per decade in the North Atlantic and around -0.2 K per decade at Amersee, Germany. (See Figure 1). The GISP2 ice core temperature proxy gives about -3 K total excursion, but (possibly due to smearing on the record) over much longer than three decades.

    In the GISP2 proxy, the subsequent warming (the point here) proceeds at about 3/4 to 1/2 the rate of the prior cooling; the record is similar in shape to the expected response to volcanic forcing, but both larger in magnitude and over a longer time.

    While it is certainly correct to state that the warming over the past century (or even 140 years) is the largest fast excursion since the recovery from the 8.2 kybp event, from this model study it appears that the high rates in the GISP2 data are only representative of the North Atlantic, not the global surface temperature. So it is probably the case that the current warming is proceeding faster, on the centennial scale, than any warming since (at least) the recovery from the Younger Dryas. (But the Younger Dryas wasn’t global either, so I now have no precedent more recent than about 74–71 kybp, the recovery from the Mt. Toba super-eruption.)

  • Hank Roberts // October 25, 2008 at 1:08 am

    One of two cites I posted at RC in a reply to Ike, arguing that temperate forests rebuild topsoil and so even a “mature” forest is not carbon neutral and ripe for clearcutting (as the industry has been arguing):
    http://dx.doi.org/10.1016/S0378-1127(01)00558-8

  • nanny_govt_sucks // October 25, 2008 at 10:10 am

    Less Ice In Arctic Ocean 6000-7000 Years Ago
    http://www.sciencedaily.com/releases/2008/10/081020095850.htm

    “ScienceDaily (Oct. 20, 2008) — Recent mapping of a number of raised beach ridges on the north coast of Greenland suggests that the ice cover in the Arctic Ocean was greatly reduced some 6000-7000 years ago. The Arctic Ocean may have been periodically ice free.”

  • Hank Roberts // October 25, 2008 at 4:15 pm

    http://www.agu.org/pubs/crossref/2008/2008GL035333.shtml

  • David B. Benson // October 25, 2008 at 9:52 pm

    “Part of the problem is refereeing. Many (I think most) papers in refereed journals are not refereed. There is a presumptive referee who looks at the paper, reads the introduction and the statements of results, glances at the proofs, and, if everything seems okay, recommends publication. Some referees do check proofs line-by-line, but many do not. When I read a journal article, I often find mistakes. Whether I can fix them is irrelevant. The literature is unreliable.”

    Melvyn B. Nathanson, from “Desperately Seeking Mathematical Truth” in the August 2008 issue of the Notices of the American Mathematical Society (as Opinion, not an Editorial).

    His speciality must have lower standards than the parts of mathematics I have read. I recall no mistakes at all, beyond some easily fixable typos.

  • Rick Brown // October 26, 2008 at 6:26 pm

    Hank, the timber industry’s arguments in favor of cutting mature and old-growth forests fail on many points. My slightly longer post at RC hasn’t shown up, but you can download my paper “Implications of Climate Change for Conservation, Restoration and Management of National Forests” at http://www.defenders.org/climatechange/forests. Pages 13-19 address some of the issues around carbon accounting for forests, including the fallacies of the industry’s arguments.

  • Hank Roberts // October 26, 2008 at 7:16 pm

    Thank you Rick. Great resource, delighted to see you here and at RC.

  • dhogaza // October 26, 2008 at 7:53 pm

    Rick Brown! Who’s been fighting the old growth wars in the PNW for what, 30 years, man?

    Welcome aboard!

  • Rick Brown // October 26, 2008 at 10:22 pm

    I’m not sure I wanted to be reminded how long it’s been, but thanks dhogaza, the same one. I spend more time than I probably should at RC, but check in here only sporadically I’m afraid. And thanks Hank, I’m glad you find the paper useful.

  • dhogaza // October 27, 2008 at 4:13 am

    I spend more time than I probably should at RC

    Good folk, there, but you can add considerably to any analysis of consequences in the biosphere, especially regarding ecosystems of the PNW of course.

    I think the time’s well-spent. The press pays attention, [edit]

    Glad to see you’re well and still in the fight.

  • David B. Benson // October 27, 2008 at 9:51 pm

    I notice that Nanny hasn’t been around lately.

    Stock market?

  • Hank Roberts // October 28, 2008 at 3:55 pm

    In science, a good paper is one that leads others into productive areas for further research — whether the first paper got it right, or just got lucky:

    http://scienceblogs.com/builtonfacts/2008/10/are_you_feeling_lucky_punk.php

  • HankRoberts // October 29, 2008 at 1:33 am

    PCA in use:

    http://www.nature.com/nature/journal/v455/n7216/abs/nature07366.html

    http://www.nytimes.com/2008/10/28/science/space/28obgala.html?_r=1&ref=science

    ________excerpt follows_____
    The researchers looked at six properties of the galaxies, including luminosity, mass of hydrogen, inclination and optical radius, the measure of part of the galaxy responsible for a certain percentage of its light. Many of these properties were already known to be related to one another, and the researchers discovered a new correlation, between luminosity and optical radius. But the bulk of their work, which is published in Nature, consisted of a statistical study of all the correlations using a technique called principal component analysis.

    The analysis shows that these properties are controlled by a single parameter. From their data, the researchers cannot say for certain what this parameter is….
    ——————-
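
    For anyone wondering what the PCA step does mechanically, a minimal sketch (synthetic stand-in data, not the paper’s measurements): with real, strongly correlated galaxy properties the first eigenvalue dominates, which is what “controlled by a single parameter” means.

    import numpy as np

    # Rows = galaxies, columns = six standardized properties (luminosity,
    # hydrogen mass, inclination, optical radius, ...). Random stand-ins here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each column

    # Principal components = eigenvectors of the correlation matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    explained = np.sort(eigvals)[::-1] / eigvals.sum()
    print(explained)  # fraction of variance captured by each component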

  • HankRoberts // October 29, 2008 at 11:42 pm

    Rick Brown — this might merit a RC comment thread or a comment in one

    http://climateforests.blogspot.com/2008/10/initiative-founded-to-deliver-bi_27.html/

    (hat tip to http://thingsbreak.wordpress.com/ )

  • Hank Roberts // October 30, 2008 at 3:04 am

    PS, Rick Brown — this thread also I hope will interest you; it’s also forest-research-related:
    http://moregrumbinescience.blogspot.com/2008/10/discussion-role-for-atmospheric-co2-in.html

  • Hank Roberts // October 30, 2008 at 3:00 pm

    And one more for Rick Brown and others who care about habitat loss:
    http://www.kevinandkell.com/2008/strips/kk20081030.gif

  • 1luke // November 2, 2008 at 10:44 am

    Tamino - you’ve been honoured over at http://jennifermarohasy.com/blog/2008/11/ten-worst-blog-posts-a-note-from-cohenite/#comment-69193 as having the worst blog post of all time. Cohenite just has a sour puss after being recently skewered by Eli Rabett over at Rabett Run on the Miskolczi business. LOL

    Tell you what - there’s something about your lay Aussie contrarians that makes them more ornery. Think it’s the convict stock down here.

    [Response: If Cohenite (via Marohasy) ranks me worst of all time, I must be doing something right.]

  • David B. Benson // November 3, 2008 at 10:54 pm

    Tamino — Congratulations on your #1 spot!

  • David B. Benson // November 4, 2008 at 10:23 pm

    Tamino — Somebody posting on DotEarth is claiming that an ENSO-detrended analysis of the last eleven years shows a negative temperature trend. I linked to your ‘Don’t be Fooled Again’, but he claims that your analysis has nothing to do with climate, since climate doesn’t have noise. I doubt I can sway him.

    But the question I have is: is it possible to ENSO-detrend in any statistically significant way? I’ve never seen an actual paper which does this.

    [Response: No noise in the climate system? There are none so blind as those who will not see; don't waste your time on the intransigent.

    As for removing the impact of ENSO on global temperature, I don't see why it can't be done. But since that's part of the climate system, and it appears to be random (certainly unpredictable) and therefore qualifies as "noise," one wonders why he/she feels it necessary to remove it.]

    [Response 2: I ran a multiple regression of GISS temperature against time, and the MEI (multivariate ENSO index). Result: from 1997 to the present, the time trend rate is strongly positive, at 0.016 +/- 0.006 deg.C/yr; from 1975 to the present the time trend rate is 0.017 +/- 0.002 deg.C/yr. The error ranges are not (yet) corrected for autocorrelation, but clearly the trend rate is NOT negative. Not even close.]
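
    For the curious, a minimal sketch of such a regression (the series below are synthetic stand-ins built with the quoted 0.017 deg C/yr trend; substitute the real GISS anomalies and MEI values):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1975, 2008.75, 1 / 12.0)           # monthly decimal years
    mei = rng.normal(size=t.size)                    # stand-in ENSO index
    temp = 0.017 * (t - 1975) + 0.1 * mei + rng.normal(scale=0.1, size=t.size)

    # Design matrix: intercept, linear time, ENSO index.
    X = np.column_stack([np.ones_like(t), t - 1975, mei])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    print(f"trend = {beta[1]:.4f} deg C/yr, ENSO coefficient = {beta[2]:.3f}")
    # As noted in the response, the error ranges would still need an
    # autocorrelation correction (e.g. for AR(1) residuals).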

  • David B. Benson // November 5, 2008 at 7:28 pm

    Tamino — Thanks for the run. I’ll make use of it.

    I want to link
    http://www.huffingtonpost.com/2008/11/05/michael-crichton-dies-une_n_141457.html

    without further comment.

  • Ray Ladbury // November 5, 2008 at 10:31 pm

    Re: the death of Michael Crichton

    We will hope for the sake of the departed that getting past the pearly gates does not involve a science exam or a test of the ability to write in simple declarative sentences.

  • Lazar // November 6, 2008 at 5:46 pm

    Question… does spectral overlap of different molecular species need to be accounted for in line-by-line radiative integration, or is the resolution enough for lines to be separated (i.e. is it only a broadband problem)?

    Thanks.

  • Lazar // November 9, 2008 at 5:21 pm

    Anyone know where to get representative (annual and global average) vertical profiles for temperature, pressure, water vapor and ozone abundances up to about 40 km? Tabulated numerical values, online? Apparently the standard reference is…

    R. A. McClatchey et al. 1972
    Optical properties of the atmosphere
    Environmental Research Papers, 411

    … it seems to be unavailable.

    Thanks.

  • Lazar // November 9, 2008 at 10:19 pm

    Aha… this looks good.
    Tamino, suggest you add this to your climate data links?

  • Gavin's Pussycat // November 10, 2008 at 5:24 am

    Lazar, come across any CO2 mixing ratio vertical profiles?

  • Curious // November 10, 2008 at 10:15 am

    Steve McIntyre is continuing to tear Michael Mann’s stuff apart. Looks decidedly dodgy. Yet no fearless defenders emerge. Not here, not RC, apparently not anywhere. You guys conceding the game??

  • Barton Paul Levenson // November 10, 2008 at 1:34 pm

    Curious — no, we’re not conceding the game. We’re just ignoring the rantings of an incompetent. Maybe if we stop responding to Steve McIntyre every time he puts up another wrong argument, people will stop listening to him. Worth a try, anyway.

  • Andrew Dodds // November 10, 2008 at 4:14 pm

    Curious -

    Which journal(s) is this takedown published in?

  • Lazar // November 10, 2008 at 11:12 pm

    GP,

    I haven’t looked. Sounds interesting. Do you need something like zonal averages up to 40 km or something more local? If you let me know I’ll email someone I know studying carbon fluxes/budgets. My vague recollection is that most of the data compares e.g. the boundary layer above forest and grassland.

  • ChuckG // November 11, 2008 at 12:30 am

    Barton Paul Levenson // November 10, 2008 at 1:34 pm

    Curious — no, we’re not conceding the game. We’re just ignoring the rantings of an incompetent. Maybe if we stop responding to Steve McIntyre every time he puts up another wrong argument, people will stop listening to him. Worth a try, anyway.

    I visited McI and Watts every day till about nine months ago. I concur with BPL.

  • John Finn // November 11, 2008 at 12:36 am

    Maybe if we stop responding to Steve McIntyre every time he puts up another wrong argument, people will stop listening to him. Worth a try, anyway.

    Steve McIntyre and one or two others have just discovered something in the GISS global temp record. I was always prepared to give GISS the benefit of the doubt, but it now seems that those responsible for data quality are at best incompetent and at worst complete idiots.

  • David B. Benson // November 11, 2008 at 1:35 am

    John Finn // November 11, 2008 at 12:36 am — And yet the three major global surface temperature products agree fairly well:

    http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/

    I think you are allocating your ‘dontism’ to the wrong party.

  • Ray Ladbury // November 11, 2008 at 1:56 am

    John Finn, Right. Let me know when they publish.

  • dhogaza // November 11, 2008 at 4:54 am

    Steve McIntyre and one or two others have just discovered something in the GISS global temp record

    Next, he’ll discover things in the Gospel that prove Jesus wanted to kill all Democrats, or some such crap.

    Let him do science, if he thinks he’s single-handedly overturned science.

    Am I the only one here who thinks his megalomania is getting worse as time goes on?

  • Alan Woods // November 11, 2008 at 5:54 am

    Davy B, for October this year GISS doesn’t agree with RSS and MSU - it’s way off beam.

    It’s probably just a glitch that would’ve been corrected in due course, but it’s definitely a SNAFU.

    Ray - just because it ain’t published doesn’t mean it ain’t right. Why do you read this site? It’s unpublished as well.

  • Duane Johnson // November 11, 2008 at 6:11 am

    Re: GISS and the October Anomaly.

    Ray,

    “John Finn, Right. Let me know when they publish.”

    It seems that GISS has repeated most of Russia’s September temperature data for October. It would seem likely that temperatures decreased a wee bit at least. My Gawd!

    How can such a boner be justified! Or at least explained with a note at time of release. Publication has not a damn thing to do with it!

  • Duane Johnson // November 11, 2008 at 6:49 am

    Unless there is an appropriate mea culpa issued by GISS, before the MSM pushes their erroneous info, any remaining credibility for GISS has been irrevocably lost as far as I am concerned.

    [Response: Have you even considered the possibility that the "fault" may not lie with GISS, but with GHCN? The folks who collect the data? Maybe even somewhere else?

    It sounds to me like you're ready to "lynch" GISS, not because a mistake has been made, but because that's what you wanted to do all along.]

  • John Finn // November 11, 2008 at 8:12 am

    Ray

    They won’t need to publish. The GISS errors in October speak for themselves. How on earth GISS didn’t pick up on the fact that their Oct temps for Russia were about 12 deg too high is astonishing - and deeply worrying. I (along with others) have also checked UK temps and the same thing has happened there.

    Sorry - it’s just occurred to me you may not realise what’s going on. GISS released an Oct anomaly which at 0.78 was the warmest ever. Anomaly maps suggest huge areas of Russia/Asia were the main cause. It now seems that GISS has used Sept temperatures for Oct. October is the time when temps drop dramatically across Siberia in particular. Most stations have never recorded temps above freezing in Oct, yet this year some have temps of +8 deg, which is total cobblers.

    The same thing has happened with the UK temps, though with less dramatic effect, obviously. Here we have stations where a relatively cold October has temperatures the same as a recent average September. But perhaps more worrying is the fact that in the one or two places that don’t have duplicates, the recorded temperatures are higher than those officially recorded by the UK Met Office. What’s more, I suspect the Sept values are a bit high as well.

  • Duane Johnson // November 11, 2008 at 4:00 pm

    If “amateurs” oft derided on this blog can spot the error so quickly, is it too much to expect the responsible professionals to do the same before publishing the data, regardless of the source of the raw data? Surely, there will be a prompt correction, both to the published results, and to the procedures that permitted such an error to go undetected.

  • cce // November 11, 2008 at 4:07 pm

    The RSS data was wrong for the entire year of 2007, which McIntyre had no problem using. The UAH data was dramatically wrong and probably still significantly wrong for the entire satellite era, yet it is the “skeptics’” best friend. The GISTEMP “Y2K” error was comparatively miniscule, yet that was flogged far and wide by the skeptical media. “1934 was warmer” and all that.

    And as to how “obvious” this problem is, the flawed October anomaly is still not as warm as January 2007, which didn’t have this problem.

  • Barton Paul Levenson // November 11, 2008 at 6:55 pm

    Duane Johnson writes:

    If “amateurs” oft derided on this blog can spot the error so quickly, is it too much to expect the responsible professionals to do the same before publishing the data, regardless of the source of the raw data? Surely, there will be a prompt correction, both to the published results, and to the procedures that permitted such an error to go undetected.

    Probably. If you’re not aware of the fact, monthly climate data gets misreported every so often, and corrections are generally made after the result is published if the source people weren’t aware of the error when they published. Why you and John Finn et al. think this matters is beyond me. They found errors in HIPPARCOS parallax measurements, too, and don’t get me started on what happens to the economic time series every few years. Scientific data are rarely considered canonical. They’re always subject to revision as new information comes in or as mistakes are detected.

  • David B. Benson // November 11, 2008 at 7:44 pm

    Alan Woods // November 11, 2008 at 5:54 am & Duane Johnson — RealClimate now has a post up, “Mountains and Molehills”, discussing the data repeat error.

  • nanny_govt_sucks // November 11, 2008 at 8:44 pm

    I notice that Nanny hasn’t been around lately.

    Stock market?

    Tamino, are you going to let that go unanswered?

    [Response: Go ahead. But I warn you: your repeated attempts to smear Obama with every criticism you can conceive of really offend me. That will not be tolerated.]

  • Ray Ladbury // November 11, 2008 at 9:52 pm

    Boy, you can sure tell who hasn’t worked much with big datasets, can’t you? I don’t think I’ve ever worked with a dataset that didn’t have errors, and what’s more it never occurred to me to rush out and tell the person I got the data from, “This is inexcusable. Harumph, harumph, harumph…”
    Did it ever occur to you gentlemen that GISS is understaffed? It certainly is if it’s anything like the rest of NASA. Do you really expect them to look at every region on Earth and see if it looks “normal”? Can you imagine the clucking we’d get from the denialosphere if they instituted lots of checks and the data started being late? What is more, a 0.78 anomaly, while large, is not out of the question. Guys, the only way they would have caught this error is if they had compared October’s data to September’s and set some threshold for concern over repeating values. Since they had not seen this sort of error before, that’s not bloody likely. Do you have any idea how silly you guys look getting this bent out of shape over a routine error?

  • David B. Benson // November 11, 2008 at 10:11 pm

    Ray Ladbury // November 11, 2008 at 9:52 pm — I am sure they have no conception whatsoever.

  • John Finn // November 11, 2008 at 10:25 pm

    Anyone defending GISS is missing the point. Of course errors will happen and sometimes those errors will slip through even the most rigorous and sophisticated checks. This, though, is incomprehensible. That such laughably and obviously incorrect data has actually been released into the public domain beggars belief. The global anomaly alone ought to have triggered a few alarm bells.

    Have any of you actually looked at the affected data sets? We’re not talking the odd degree or two here.

    If the GISS quality checks have missed the Russian (and UK & Ireland) errors, what confidence can we have that data from less well monitored regions has been handled correctly?

    As I said earlier I’ve always backed GISS (despite my views on AGW) but this latest episode is so pathetically incompetent you have to question their authority and credibility.

    DJ summed it up. “Amateurs” picked up the errors (and the source(s)) within hours of the data being released.

    [Response: Truly you are being absurd. The error was made by NOAA, not GISS. If you insist that GISS perform quality checks on NOAA data processing before releasing summary results, then I suggest you lobby for additional funding for GISS.

    Your characterization is so pathetically incompetent it undermines your credibility and objectivity.]

  • David B. Benson // November 11, 2008 at 10:51 pm

    John Finn // November 11, 2008 at 10:25 pm — One can always check the various surface temperature products against each other:

    http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/

  • Lazar // November 11, 2008 at 11:14 pm

    John Finn,

    Anyone defending GISS is missing the point.

    Not “the point”. Your point, that you wish everyone would throw out the temperature record, bury their heads in the sand.

    you have to

    I do not “have to”. Nobody who is serious will.

  • nanny_govt_sucks // November 12, 2008 at 1:49 am

    I notice that Nanny hasn’t been around lately.

    Stock market?

    Tamino has been censoring many of my posts lately. Even posts unrelated to Obama.

    [Response: Your numerous attempts to malign the president-elect have gone to the trash bin. And you're still trying. Give it up or take a hike.]

  • TCO // November 12, 2008 at 2:45 am

    McI has not finished his analysis of the Mann paper. He says his posts are a “lab notebook”. You should not jump to conclusions until seeing the final result. In particular, it is important to quantify the impact of errors found. (I think some algorithm mistakes were found.) One can also discuss the issue of whether methods are fair (for instance smooths and such), which is more debatable as to right/wrong. Many of the hoi polloi who think McI is blowing Mann out of the water don’t realize that McI has a lot of repetition, that he also segues and retells previous paper issues…and that he mixes in adjectival silliness (stupid pet tricks…calling Mann a dog). If you actually distilled the real insights, the result would be much smaller, simpler, etc. Unfortunately, Steve is lazy…also he enjoys the blog cheering…and the free ride to pound his opponent on his own site and to basically hawk PR continuous with doing analysis. There are still very fundamental things, in terms of quantifying the impact of contested methods, that McI has not done with the MBH98 papers, which I pointed out to him years ago and which he agreed were important.

    Really, the guy only has one real paper (GRL05). He claims to have 5, but two of those are in EE (a cold fusion type journal, not abstracted, not held by science libraries) and the other two are replies to comments on the GRL paper.

    My advice is to blow Steve off until he writes up his criticisms clearly and professionally. At least in a well done white paper (better, in a real published paper). Unfortunately, if you’ve seen Steve’s drafts of talks and posters and such, you realize that the guy is a very disorganized writer.

  • Lee // November 12, 2008 at 4:27 am

    I keep reading people saying such things as:

    “GISS… Oct temps.”

    “GISS has repeated most of Russia’s September temperature data for October”

    Many, many people who make criticism of GISS a routine activity, keep attributing the temperature data to GISS.

    This tells me that either those people are remaining aggressively ignorant of the actual sources of the data and the procedure for compiling the GISS anomaly - despite their repeated criticism of that process - or that they are dishonest. I am hard put to think of other possibilities, but I’m open to suggestions.

  • Gavin's Pussycat // November 12, 2008 at 4:56 am

    Lazar,
    global average / standard model. An argument / challenge over at RC ;-)

  • Lazar // November 12, 2008 at 1:12 pm

    McNeil & Matear (2008) Southern Ocean acidification: A tipping point at 450-ppm atmospheric CO2. PNAS

  • nanny_govt_sucks // November 12, 2008 at 11:10 pm

    Truly pathetic.

  • Lazar // November 13, 2008 at 12:32 am

    Ellingson et al. 1991 has temp, pressure, water vapor and ozone vertical profiles at 1 km resolution up to 25 km, then at 5 km up to 50 km, for the tropical, midlatitude, and subarctic bands. Separate summer & winter profiles for the midlatitudes and subarctic. No CO2 or other standard-model gases, unfortunately.

    Ellingson et al. 1991
    The intercomparison of radiation codes used in climate models: Long wave results
    Journal of Geophysical Research, Volume 96, Issue D5, p. 8929-8953

  • John Finn // November 13, 2008 at 1:01 pm

    Not “the point”. Your point, that you wish everyone would throw out the temperature record, bury their heads in the sand.

    GISS released figures which were clearly rubbish. Lee: It doesn’t matter where these figures originated. GISS, if it intends to release data into the public domain, has a duty to ensure it has at least performed a basic quality audit on the data.

    Gavin on RC reckons that there is only about 0.25 FTE employed for data gathering. This equates to somewhere between 30 and 40 hours per month. Pay me 20 hours per month and I’ll ensure cock-ups like this don’t happen again.

    Mistakes will always happen. This is not the issue. What is the issue (or point) is that the mistakes got as far as they did, and it suggests a laxity in procedures which in my view reflects on the credibility of GISS.

  • John Finn // November 13, 2008 at 1:16 pm

    oh and by the way, Lazar.

    At no time on this or any other blogs will you find me questioning GISS data - before now, that is, so I’m not “looking to throw out the temperature record and bury my head in the sand” as you put it.

    As far as I’m concerned the 0.17 deg/decade trend is no big deal. After all we had a similar trend between 1915 and 1944. I think if we’d had a network of thermometers around the world 200, 300, 500 or 1000 years ago we’d have seen numerous periods of similar rates of warming (and cooling).

  • Ray Ladbury // November 13, 2008 at 1:24 pm

    John Finn, you really don’t have a clue how science is done, do you? You wouldn’t have the foggiest notion of how to do data quality on a large dataset, and all you do by making such inflated claims and feigning such outrage is reveal yourself for the poseur that you are.

  • Lazar // November 13, 2008 at 1:58 pm

    John Finn

    GISS released figures which were clearly rubbish.

    No. The erroneous global land+ocean anomaly 0.78 C was corrected to 0.58 C. The two-sigma distribution of monthly anomalies over the period Oct 2006 – Oct 2008 is 0.21 < T < 0.81.
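
    A minimal sketch of that band computation (the values below are synthetic stand-ins for the 25 GISTEMP monthly anomalies, Oct 2006 – Oct 2008):

    import numpy as np

    # Stand-in anomalies; substitute the real GISTEMP monthly values.
    rng = np.random.default_rng(2)
    anoms = rng.normal(loc=0.51, scale=0.15, size=25)

    mu, sd = anoms.mean(), anoms.std(ddof=1)
    print(f"2-sigma band: {mu - 2 * sd:.2f} < T < {mu + 2 * sd:.2f}")
    # With the real series this comes out near 0.21 < T < 0.81, so the
    # erroneous 0.78 was high, but inside the band.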

    Pay me 20 hours per month

    Hubris.

  • Gavin's Pussycat // November 13, 2008 at 2:01 pm

    > Pay me 20 hours per month and I’ll ensure cock-ups like this don’t happen again.
    It would almost be worth the money to see you fail ;-)

  • Barton Paul Levenson // November 13, 2008 at 2:03 pm

    Lazar writes:

    Ellingson et al. 1991 has temp, pressure, water vapor and ozone vertical profiles at 1 km resolution up to 25 km, then at 5 km up to 50 km, for the tropical, midlatitude, and subarctic bands. Separate summer & winter profiles for the midlatitudes and subarctic. No CO2 or other standard-model gases, unfortunately.

    CO2 is a well-mixed gas all through the troposphere and stratosphere. It will only vary by a few percent, at most, from place to place or level to level.

  • Barton Paul Levenson // November 13, 2008 at 2:05 pm

    John Finn writes:

    As far as I’m concerned the 0.17 deg/decade trend is no big deal. After all we had a similar trend between 1915 and 1944. I think if we’d had a network of thermometers around the world 200, 300, 500 or 1000 years ago we’d have seen numerous periods of similar rates of warming (and cooling).

    What you think doesn’t seem to match any of the painstaking reconstructions anyone has actually done of those periods.

  • Ray Ladbury // November 13, 2008 at 2:32 pm

    John Finn: Pay me 20 hours per month and I’ll ensure cock-ups like this don’t happen again.

    GP: It would almost be worth the money to see you fail ;-)

    Hmm, want to take up a collection? 20 hours a month times $6.55/hr minimum wage…

  • Phil. // November 13, 2008 at 3:28 pm

    Lazar // November 6, 2008 at 5:46 pm

    Question… does spectral overlap of different molecular species need to be accounted for in line-by-line radiative integration, or is the resolution enough for lines to be separated (i.e. is it only a broadband problem)?

    It does, see Clough & Iacono, but the resolution does allow many lines to be separated that are not at lower resolution.

  • Hank Roberts // November 13, 2008 at 4:26 pm

    > truly pathetic

    Damned straight. Read the abstract at least, anyone who missed the link above:

    McNeil & Matear (2008) Southern Ocean acidification: A tipping point at 450-ppm atmospheric CO2. PNAS

    Southern Ocean acidification via anthropogenic CO2 uptake is expected to be detrimental to multiple calcifying plankton species by lowering the concentration of carbonate ion (CO32-) to levels where calcium carbonate (both aragonite and calcite) shells begin to dissolve. Natural seasonal variations in carbonate ion concentrations could either hasten or dampen the future onset of this undersaturation of calcium carbonate. We present a large-scale Southern Ocean observational analysis that examines the seasonal magnitude and variability of CO32- and pH. Our analysis shows an intense wintertime minimum in CO32- south of the Antarctic Polar Front and when combined with anthropogenic CO2 uptake is likely to induce aragonite undersaturation when atmospheric CO2 levels reach ~450 ppm. Under the IPCC IS92a scenario, Southern Ocean wintertime aragonite undersaturation is projected to occur by the year 2030 and no later than 2038. Some prominent calcifying plankton, in particular the Pteropod species Limacina helicina, have important veliger larval development during winter and will have to experience detrimental carbonate conditions much earlier than previously thought, with possible deleterious flow-on impacts for the wider Southern Ocean marine ecosystem. Our results highlight the critical importance of understanding seasonal carbon dynamics within all calcifying marine ecosystems such as continental shelves and coral reefs, because natural variability may potentially hasten the onset of future ocean acidification.

  • John Finn // November 13, 2008 at 6:02 pm

    What you think doesn’t seem to match any of the painstaking reconstructions anyone has actually done of those periods.

    I had anticipated this response. Rather than providing a list of reasons why proxy reconstructions are simply unable to capture the true variability of the climate (remember, the 1915-44 warming can’t be due to CO2), I’ll just ask this:

    Which proxies do you think best represent temperature with respect to both magnitude and sensitivity?

  • John Finn // November 13, 2008 at 6:07 pm

    No. The erroneous global land+ocean anomaly 0.78 C was corrected to 0.58 C

    Lazar (and Ray)

    Have you actually cast your eye over any of the corrupted data sets?

  • Ray Ladbury // November 13, 2008 at 7:55 pm

    John Finn, Unlike you, I do not consider a single month’s data important, and moreover, since, again unlike you, I don’t assume that the people involved are morons, I am confident that errors will be corrected. That is because I’ve actually worked with data before. You?

  • David B. Benson // November 13, 2008 at 10:22 pm

    John Finn // November 13, 2008 at 1:16 pm — The warming in the first half of the twentieth century is, in part, due to increased concentrations of CO2.

    Using 1850 CE as the base year (288 ppm), the standard formula for warming due to CO2 gives good agreement with the decadal average for the 1950s, with CO2 concentration of 315 ppm in 1958 CE.

  • Lazar // November 13, 2008 at 11:04 pm

    Have you

    Yes.

  • Lazar // November 13, 2008 at 11:34 pm

    Thanks Phil!

  • TCO // November 13, 2008 at 11:38 pm

    Steve McI seems like a really brittle, pompous sort. He is babbling away about shell games and plagiarism. What a wimp. He does something interesting (or a poster of his does) with finding an error. Then he lards it up with this fever swamp bullshit. [edit]

  • Lazar // November 13, 2008 at 11:52 pm

    Phil, re your comment in the CO2 blip thread.

    The NOAA screening failed because neighbouring station data were affected similarly, thus the QC algorithm assumed the anomalies were real.

    A 12 C difference in temperatures of a single station for successive Octobers in that region, although excessive, is not greatly so. 3-sigma is 13 C, 2-sigma is 9 C. 8 and 9 C differences are surprisingly common.

  • John Finn // November 14, 2008 at 12:40 am

    That is because I’ve actually worked with data before. You?

    Yes, Ray I have worked with data before.

    The warming in the first half of the twentieth century is, in part, due to increased concentrations of CO2.

    The warming began in ~1915. What were CO2 concentrations then? What were the concentrations in 1930 when more than half the warming had already happened? What about the so-called lag between CO2 in the atmosphere and temperature change? By which I mean the lag that means we’ve still got 0.5 deg warming “in the pipeline” according to RC. Or perhaps you’re saying that temperatures responded immediately to CO2 in the early 20th century but the physics changed around 1950.

    David B - CO2 had nothing to do with the 1915-44 warming. If you disagree, show me the numbers, i.e. the forcings and the resultant temperatures.

    Lazar

    Spot the odd one out: -11, -4.8, -7.9, -5.5, -4.1, -4, -4.3, -7.9, -3, +8.1

    There are times when common sense can be just as effective as statistics.

  • Lazar // November 14, 2008 at 12:51 am

    Some perspective…
    The global anomaly was not unusual.
    The regional anomaly may not have been unusual.
    Checking individual station data for outliers must be done by an automated qc algorithm given the large number of stations.
    If a value for one station appears abnormal, that is checked against values from neighbouring stations.
    In this instance, the readings for one month for a large number of neighbouring stations were simultaneously copied over into the next month. The mistake by GISTEMP and NOAA was a failure of foresight.
    And for that, some people with hindsight wish to trash the entire GISTEMP temperature record via aspersions on the competence of those who manage GISTEMP.
    That is nonsense.
    The value of GISTEMP and the competence of its creators is as good as its agreement with all other surface-based and satellite reconstructions.
    The error resulted in a minor adjustment to one month of data.
    The ones who are complaining about the competence of GISTEMP are not the ones who found the error.
    Those who work on GISTEMP have about 40 man-hours to create the analysis for that month, including spotting errors.
    There are likely hundreds, maybe thousands, of people outside of GISTEMP who scrutinize the data product every month.
    Well done to the person who spotted the error.
    To the others: we heard you the first time that you have no faith in GISTEMP, now can we move on please?

  • Ray Ladbury // November 14, 2008 at 1:04 am

    Re: Michel’s comment on defending the indefensible.

    Actually, there’s nothing to defend. There were errors in the dataset. Now surely, one could check for such errors. One quarter-time researcher cannot check, however, for all possible errors–at least he/she can’t do so and deliver the dataset on time. So on rare occasions, errors do get through–and are immediately found by one of the many, many users (many being much greater than 1/4 FTE) of the dataset and corrected. That is how science works.

    Of course, maybe they should start introducing errors on a more regular basis. It gives the denialist community such joy and allows them to go parading around in their latest proctological haberdashery.

  • David B. Benson // November 14, 2008 at 1:08 am

    John Finn // November 14, 2008 at 12:40 am — Even the increase from 1750 CE’s 280 ppm to 1850 CE’s 288 ppm resulted in about 0.06 K of warming, hidden in the climate variability noise.

    As for the increase from 1850 CE to the 1950s, I’ve posted about it here on Tamino’s Open Mind several different times and am not inclined to do so again. Either find my previous comments or else work it out yourself. Here is the formula:

    http://forecast.uchicago.edu/samples.htm

    I used a climate sensitivity of 3 K. However, only 60% of that is immediate, the rest taking a long, long time to equilibrate. So whatever number you obtain, multiply by 0.6. Then compare with the decadal averages from HadCRUTv3:

    http://tamino.files.wordpress.com/2008/04/10yave.jpg

    If you do this correctly, you will discover that the fit is fairly good for the 1950s.
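
    A minimal sketch of that arithmetic (the helper function is illustrative; the logarithmic forcing form, the 3 K sensitivity and the 60% transient fraction are the numbers given above):

    import math

    def realized_warming(c, c0=288.0, sensitivity=3.0, transient=0.6):
        # Equilibrium warming per the standard logarithmic CO2 formula,
        # scaled by the fraction assumed to appear quickly.
        return transient * sensitivity * math.log(c / c0) / math.log(2.0)

    print(f"288 -> 315 ppm: {realized_warming(315.0):.2f} K")
    print(f"280 -> 288 ppm: {realized_warming(288.0, c0=280.0):.2f} K")
    # The second line gives about 0.07 K, close to the ~0.06 K cited
    # earlier for the 1750-1850 increase.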

  • Phil. // November 14, 2008 at 4:57 am

    Lazar // November 13, 2008 at 11:52 pm

    Phil, re your comment in the CO2 blip thread.

    The NOAA screening failed because neighbouring station data were affected similarly, thus the QC algorithm assumed the anomalies were real.

    A 12 C difference in temperatures of a single station for successive Octobers in that region, although excessive, is not greatly so. 3-sigma is 13 C, 2-sigma is 9 C. 8 and 9 C differences are surprisingly common.

    Yes, but NOAA say that they do both time-series and spatial consistency tests, so you’d think that should still have found it.

  • Lee // November 14, 2008 at 5:38 am

    John Finn:

    I am fascinated at your claim. I’d love to hear more about your strategy for implementing a QC process that will catch all data collation errors of any kind, anticipated errors or otherwise, going forward.

    Remember, your procedure has to catch ALL errors whether currently known and anticipated or not - in 20 hours a month - and it can’t introduce bias. Just flagging outliers ain’t gonna do it. What if a month of high values got replaced by a month from an earlier year, for example, that looked in line with expectations?

    I’m all ears.

  • dhogaza // November 14, 2008 at 10:19 am

    From the other thread…

    You’re defending the indefensible. It’s easy to write screens for the kind of errors we are talking about. It would take a student half a day, if that, to write, and a couple of seconds to run once written. All it has to do is flag up a case for possible manual verification. But so what if it took a student a week? What’s so hard about that? You’re saying they really cannot do sanity checks once a month on a few hundred numbers because it’s too hard or would take too long? It’s totally nuts.

    Someone thoroughly underestimates the creative ability of the human race to screw up, and thoroughly overestimates the creative ability of other members of the species to anticipate, in advance, just how others will screw up.

    In other words, the fact that the existing sanity checks coded into the GISS stuff didn’t catch the specific screw-up of a large set of stations simply repeating last month’s data simply means that the crew at GISS didn’t predict that such a mistake would be made, with the resulting regional temp still lying within the bounds of natural variability.

    So, now they’ve been presented with a new form of screw-up by one of the groups that delivers the data they assimilate into their monthly calculations. A blindingly obvious error in retrospect, but most errors are much more obvious in retrospect. If that weren’t true, every engineering design would be perfect.

    This is nothing more than politically-driven slander. And I think Finn and the others know it.

  • John Finn // November 14, 2008 at 10:37 am

    I used a climate sensitivity of 3 K. However, only 60% of that is immediate, the rest taking a long, long time to equilibrate. So whatever number you obtain, multiply by 0.6. Then compare with the decadal averages from HadCRUTv3:

    OK, so what were the CO2 forcings in (i) 1915; (ii) 1930; (iii) 1944?

    Lazar:

    The global anomaly was not unusual

    Well, it would have been the warmest October ever recorded, and at a time when we have ENSO-neutral conditions I would have thought it might have caused the odd raised eyebrow. It certainly did among the ignorant rabble of deniers.

    The regional anomaly may not have been unusual.

    [snigger]

    Checking individual station data for outliers must be done by an automated qc algorithm given the large number of stations.

    The problem being what, exactly? After all, there are algorithms which claim to deal with UHI and missing data. Although in this particular case spotting the problem only required eyesight and a rudimentary knowledge of temperature data. Which of these attributes are the GISS data “professionals” lacking?

    Those who work on GISTEMP have about 40 man-hours to create the analysis for that month, including spotting errors.

    Those who spotted the errors found them AND the source of the errors in about 20 mins.

  • John Finn // November 14, 2008 at 11:05 am

    Even the increase from 1750 CE’s 280 ppm to 1850 CE’s 288 ppm resulted in about 0.06 K of warming, hidden in the climate variability noise.

    David B

    Could you just define “climate variability noise” as you understand it? Provide a few examples, e.g. ocean circulation, with some estimate of their effect.

    Incidentally I can’t access your first link.

  • Uli // November 14, 2008 at 11:24 am

    Hello Gavin’s Pussycat, hello Lazar,
    maybe one of these sites helps, but I haven’t found vertical CO2 data yet.
    http://www.esrl.noaa.gov/gmd/ccgg/carbontracker/index.html
    http://gems.ecmwf.int/

  • Sekerob // November 14, 2008 at 11:46 am

    an ‘l’ behind the .htm link makes it work :D

    Think the AGGI index is brilliant: http://www.esrl.noaa.gov/gmd/aggi/
    There’s 1 Watt per square meter added since 1979. Nearly matches the difference between a solar cycle minimum and maximum.

  • TCO // November 14, 2008 at 2:57 pm

    It’s just amazing the amount of text (3+ posts, plus hundreds of comments) and the unprofessionalism (credibility crunch, watch the pea, etc.) from SM and his CA fever swampers. Makes me feel bad to have such sorts on my side.

    Go back and crunch some more numbers, Steve. That’s what you’re good at.

    Steve, learn to make simple, non-editorialized, non-meandering reports. Can’t believe you tout your business experience and never learned to give tight reports. It’s really gross, what you put up on Climate Audit. And seeing your overweight posters and first drafts of papers. Really disgusting. Science benefits from clear communication. It makes it easier to home in on and prove/disprove assertions. The endless Clintonian word-parsing games, failure to take stands, veiled comments that are not explicit (so you get impact without having to stand behind an assertion) is just disgusting to me. Americans have fighting men in harm’s way taking bullets. And you, you don’t even have the balls to make definite assertions on technical matters. I really hate this crap and how bad it makes my side look.

  • Ray Ladbury // November 14, 2008 at 3:23 pm

    TCO says “Makes me feel bad to have such sorts on my side. ”

    We-eeeelllll, I know how to remedy that.

  • TCO // November 14, 2008 at 4:10 pm

    Maybe I should form a new skeptic community. We will have Zorita, Von Storch and Burger as our scientists. Myself and Mosh-pit will be the hoi polloi.

  • t_p_hamilton // November 14, 2008 at 4:41 pm

    “I am fascinated at your claim. I’d love to hear more about your strategy for implementing a QC process that will catch all data collation errors of any kind, anticipated errors or otherwise, going forward.”

    As a matter of fact, if you can do that it would be like minting money. How much money do you think companies would pay for a computer program that guarantees elimination of all possible data errors, even those that have not been seen yet?

  • conard // November 14, 2008 at 5:29 pm

    TCO: Speaking of balls, why vent here and not at CA?

  • Hank Roberts // November 14, 2008 at 5:29 pm

    > feel bad to have such sorts on my side.

    Ahem.

    “Just because you’re on their side doesn’t mean they’re on your side.”

    What you’re feeling bad about is the notion that you’re on their side, and that it’s a binary choice.

    No need to stay on the sidelines with them.

  • Hank Roberts // November 14, 2008 at 5:33 pm

    > would you please define …?

    Change “htm” to “html”

    The standard definition is easy to find and clear, eh?

    http://scholar.google.com/scholar?q=climate+variability+noise

  • cce // November 14, 2008 at 5:48 pm

    This is how McIntyre responds to criticism of using faulty data.

    http://www.climateaudit.org/?p=2648

    See, it’s not his fault because it’s someone else’s data. It’s OK because he immediately corrected the problem when it was pointed out to him. And he complains that RSS should have issued a “proper notice of the error on their public web pages.”

    On the other hand, if it’s not GISTEMP’s data it’s still GISTEMP’s fault. If they corrected it as soon as possible that still isn’t OK, presumably because the error was “obvious” (assuming you looked at individual stations). Of course, so was RSS’ error which lasted an entire year, but never mind that. GISTEMP issued a “proper notice of the error on their public web pages,” but that also doesn’t matter because the real point is to bash Hansen.

  • Former Skeptic // November 14, 2008 at 5:53 pm

    TCO:

    You’re right. McI is doing no favors to his side with his incoherent ramblings. His pitiful recent “woe-is-me” post when Santer declined his “request” made me cringe. It’s even worse when someone like RPJr. fell for this like a swooning girl on a first date and offered him the blog equivalent of a hug. Notice he (and Mr. Prometheus) did not include details of how Santer was demonized by the GCC ~1995 as a reason for why McI’s request was turned down. Now that’s true Clintonian truth economics.

    I used to lurk for years at CA. Once I saw through the mist and realized that McI was just a vacuous person with no intention of advancing the state of climate knowledge, I decided to leave. I’m certain I’m not the only one. Heh.

  • Lazar // November 14, 2008 at 6:02 pm

    John Finn,

    I would have thought

    Answered previously.

    [snigger]

    Time-waster.
    No more responses from me.
    Rest binned unread.

  • Ray Ladbury // November 14, 2008 at 6:03 pm

    Former Skeptic, You have grasped what it’s all about–advancing the state of climate knowledge. Do that and you’re part of the consensus. Fail to do that and you’re a wannabe–or worse.

  • Lazar // November 14, 2008 at 6:24 pm

    Phil.,

    NOAA say that they do both time-series and spatial consistency

    That is precisely the problem; spatial consistency checks are used to validate outliers that are flagged by time series analysis.

    See Peterson et al. 1998
    Global Historical Climatology Network (GHCN) Quality Control of Monthly Temperature Data
    International Journal of Climatology 18: 1169–1179

    In particular,

    While a data point may be extreme from a time series perspective, it may also be completely valid; i.e. it was just an exceptionally cold or warm month in that area that year. So simply flagging a data point from a time series perspective is not sufficient to judge the validity of the data point. If the climate of the region was exceptionally cold that month, nearby stations should confirm that assessment. Therefore it is necessary to integrate a spatial QC into the total assessment and use the spatial QC to determine if the ‘suspect’ flag should be removed from 2.5σ outliers or not. Similar to many other QC schemes, this provides a second independent test (Smith, 1991). Unfortunately, errors can be spatially homogeneous, such as when one month a nation reports over the Global Telecommunications System that all its stations have the decimal point in the wrong place. This problem can cause false positives with any kind of spatial QC. Fortunately, such cases are fairly rare and the results are often extreme enough that data points are gross outliers. That is why data points greater than 5σ were not accepted no matter what the spatial QC might indicate.
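
    A schematic of that logic in code (not the operational NOAA software; the 2.5σ and 5σ thresholds come from the quoted passage, the neighbour test is a simplification, and the example values echo John Finn’s “odd one out” list above):

    import numpy as np

    def qc_flag(value, station_clim, neighbor_anoms):
        # station_clim: this station's past values for the same calendar
        # month; neighbor_anoms: same-month anomalies at nearby stations.
        mu, sd = station_clim.mean(), station_clim.std(ddof=1)
        z = (value - mu) / sd
        if abs(z) > 5:
            return "reject"            # gross outlier, no spatial appeal
        if abs(z) > 2.5:
            # Spatial QC: do neighbours show a similar anomaly?
            if np.median(np.abs(neighbor_anoms - (value - mu))) < sd:
                return "accept (spatially confirmed)"
            return "suspect"
        return "accept"

    clim = np.array([-5.5, -4.8, -7.9, -4.1, -4.0, -4.3, -7.9, -3.0, -11.0])
    print(qc_flag(8.1, clim, neighbor_anoms=np.array([12.0, 11.5, 12.3])))
    # Prints "reject": +8.1 is more than 5 sigma from this station's
    # history, so no amount of neighbour agreement can save it.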

  • Lazar // November 14, 2008 at 6:29 pm

    Uli, thanks for the links!

  • David B. Benson // November 14, 2008 at 8:26 pm

    John Finn // November 14, 2008 at 10:37 am — Here is the correct link:

    http://forecast.uchicago.edu/samples.html

    The CO2 forcings are obtained by plugging the CO2 concentrations into the formula there. I don’t have the concentrations for those particular years of some (strange) interest to you. With some effort, these can be closely estimated from knowing the annual emissions:

    http://cdiac.ornl.gov/trends/emis/tre_glob.htm

    and the 1958 CE level of 315 ppm, but you might want the entire Keeling curve:

    http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html

    Easier, and almost as good, is to fit an exponential through the 1850 CE value of 288 ppm, the 1958 CE value of 315 ppm, and the current value of 387 ppm. Then pick off the years of interest to you.

    For these purposes, I include as climate variability noise all other ‘forcings’ such as TSI and volcanoes, and the variability arising from ocean oscillations, etc.
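
    A minimal sketch of that interpolation (the a + b·exp(k·t) form with a floating baseline is one reasonable reading of “fit an exponential”, not the only one; it takes “current” to mean 2008 and uses SciPy’s root finder):

    import numpy as np
    from scipy.optimize import brentq

    # Fit C(t) = a + b*exp(k*t), t in years since 1850, exactly through
    # (1850, 288), (1958, 315) and (2008, 387) ppm.
    t1, t2 = 108.0, 158.0
    ratio = (387.0 - 315.0) / (315.0 - 288.0)
    k = brentq(lambda k: (np.exp(k * t2) - np.exp(k * t1))
                         / (np.exp(k * t1) - 1.0) - ratio, 1e-4, 0.1)
    b = (315.0 - 288.0) / (np.exp(k * t1) - 1.0)
    a = 288.0 - b

    def co2(year):
        return a + b * np.exp(k * (year - 1850.0))

    for year in (1915, 1930, 1944):        # the years asked about above
        print(year, round(co2(year), 1), "ppm")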

  • John Finn // November 14, 2008 at 10:54 pm

    Lazar

    Apologies for earlier post. The ‘snigger’ comment wasn’t intended to be as dismissive as it may have come across.

    The two-sigma distribution of monthly anomalies over the period Oct 2006 – Oct 2008 is 0.21 < T < 0.81.

    But if the anomaly had turned out to be 0.24 (i.e. 0.03 above the lower limit rather than 0.03 below the upper) would the figures still have been released? I suppose we’ll never know. But either case warrants further investigation.

    Your 2-sigma distribution comes from data which are the result of widely different known conditions (i.e. El Niño in 2006; La Niña in 2007; neutral in 2008); plus, of course, it is based on just 3 readings.

    I’m not suggesting that the data should necessarily have been pulled on a dodgy-looking global anomaly alone, but a quick look at the anomaly maps PLUS the size of the anomalies ought to have set alarm bells ringing. Checking a few stations within the hot spot region would then have confirmed that a problem existed.

    Note - no QC program required. The whole process, from suspicion to confirmation, would be almost as quick as it took me to describe it.

    In fact about as quick as it took those bloggers.

  • Dave A // November 14, 2008 at 11:49 pm

    Ray

    Boy, you can sure tell who hasn’t worked much with big datasets, can’t you? I don’t think I’ve ever worked with a dataset that didn’t have errors

    Now we’re getting close to the meat. Big datasets always have errors, don’t they Ray? So do climate models because they too are basically big datasets. Yet on the basis of these error prone data sets we are supposed to take policy decisions that will cost trillions and impose fundamental changes on our societies.

    Don’t you think such decisions should be based on something better?

  • Hank Roberts // November 15, 2008 at 12:37 am

    > AGGI index
    Hmm! the little blip in the CO2 rate of change associated with the collapse of the USSR seems to correlate with the more marked change in the methane rate.

    I wonder how leaky the old USSR gas system was, and how much of it was shut down and rebuilt before the various national governments that inherited it started putting gas into it again.

    Anyone recall if there’s information on the C13/C12 ratio of atmospheric methane? Makes me wonder if a change from fossil fuel gas could be distinguished from a change from biological turnover.

    Maybe not since — I’d guess — permafrost carbon all has fossil-age isotope ratios anyhow.

  • Lazar // November 15, 2008 at 12:47 am

    John Finn,

    No problem.
    I apologize for my hostility earlier.

    if the anomaly had turned out to be 0.24 (i.e. 0.03 above the lower limit rather than 0.03 below the upper) would the figures still have been released

    Probably. January 2008 anomaly was 0.14 C. February 2008 was 0.25 C.

    just 3 readings.

    25.

    Deseasonalized anomalies. Same distribution. ‘Warmest October’ doesn’t mean much in climatological terms.

    a quick look at the anomaly maps PLUS the size of the anomalies ought to have set alarm bells ringing

    Probably, but that’s in hindsight; I don’t think it’s necessarily obvious. It was a spatially homogeneous anomaly. Regions typically vary between -8 C and +9 C on the monthly anomalies. January 2007: an 11.6 C anomaly, roughly equivalent in size and location to red October. February 2005: 11 C.

    as quick as it took those bloggers

    As noted earlier, you have many people scrutinizing the product, far more than create it. The time taken for the first person to spot an error decreases as the number scrutinizing the data increases. Infinite number of monkeys, etc.

  • John Finn // November 15, 2008 at 1:46 am

    Oops, bit of a misunderstanding on my part.

    Lazar was clearly talking about the whole period between Oct 2006 and Oct 2008 for his 2-sigma distribution. I interpreted this as just the October anomalies (silly - too much wine) but my argument still stands, in that there were known climatic/weather changes. Consider this:

    Oct 2006-Sep 2007 - an established El Nino faded and early phases of the recent La Nina began.

    Oct 2007-Sep 2008 - La Nina strengthened and then faded to leave neutral ENSO conditions later in the period.

    There is justification, therefore, to consider the 2 periods separately.

    For Oct 2006-Sep 2007 the 2-sigma distribution is 42 < T < 80 (anomalies here in hundredths of a degree), so an anomaly of 78 just about makes it NOT significant.

    For Oct 2007-Sep 2008 the 2-sigma distribution is 14 < T < 68. This time, given the ENSO conditions, an anomaly of 78 is significant.

    I did the SD calculations by hand so, by all means, check them out, but I think they’ll support what simple common sense tells us. That is, a global anomaly of 78 is an ‘outlier’ (in this particular period at least) in its own right, even before considering regional anomalies.
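
    Anyone who wants to check the arithmetic can do so in a few lines. A minimal sketch in Python (the function and series names are placeholders; anomalies are in hundredths of a degree, as above):

        import numpy as np

        def two_sigma_band(anomalies):
            """Mean +/- 2 sample standard deviations of a 1-D anomaly series."""
            a = np.asarray(anomalies, dtype=float)
            return a.mean() - 2 * a.std(ddof=1), a.mean() + 2 * a.std(ddof=1)

        # Hypothetical usage, e.g. with the 12 monthly anomalies of Oct 2007-Sep 2008:
        # lo, hi = two_sigma_band(monthly_anomalies)
        # print(lo <= 78 <= hi)   # False would flag an anomaly of 78 as outside 2 sigma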

  • Philippe Chantreau // November 15, 2008 at 2:13 am

    Dave A, big datasets and models used in economics and finance have errors too. What’s the cost of the world’s credit crunch so far? What is it going to be in total?

  • Ray Ladbury // November 15, 2008 at 2:28 am

    Dave A. posits: “Big datasets always have errors, don’t they Ray? So do climate models because they too are basically big datasets.”

    WRONG. The models are based on the known physics, not the data. Parameters in the models are determined from independent data. There is a huge difference between a dynamical model and a statistical model.

    And then Dave says: “Yet on the basis of these error prone data sets we are supposed to take policy decisions that will cost trillions and impose fundamental changes on our societies.”

    Wrong again, Dave. We know the globe is warming. We know it must warm given the physics of greenhouse gasses unless the physics magically changes between 280 and 380 ppmv. We know from paleoclimatic studies that things can get significantly warmer than they are now, and that this tends to bring on nastiness like mass extinctions.

    It is the models that tell us that CO2 sensitivity is more likely to be 3 degrees per doubling, rather than 6 degrees per doubling. We might just be able to deal with the former, whereas the latter would demand we keep CO2 around 280 ppmv or we are cooked. If you want to avoid desperate measures, Dave, you had better pray the models are reliable.

    We make policy based on models all the time. They are often the best tools we have. You keep trying to hide behind uncertainty. You really need to learn that when you are talking about credible threats, uncertainty is not your friend.

  • thingsbreak // November 15, 2008 at 2:51 am

    @Hank Roberts:

    Hmm! the little blip in the CO2 rate of change associated with the collapse of the USSR seems to correlate with the more marked change in the methane rate.

    I wonder how leaky the old USSR gas system was, and how much of it was shut down and rebuilt before the various national governments that inherited it started putting gas into it again.

    Anyone recall if there’s information on the C13/C12 ratio of atmospheric methane? Makes me wonder if a change from fossil fuel gas could be distinguished from a change from biological turnover.

    Maybe not since — I’d guess — permafrost carbon all has fossil-age isotope ratios anyhow.

    Bousquet et al., 2006 and Lassey et al., 2007 look relevant.

  • Hank Roberts // November 15, 2008 at 5:37 am

    > relevant
    Thank you! Yum, weekend reading.

  • Philippe Chantreau // November 15, 2008 at 7:54 am

    The more aggressive fringe of denialists has found a new outlet for its energy: the war on error

  • John Finn // November 15, 2008 at 12:29 pm

    Wrong again, Dave. We know the globe is warming. We know it must warm given the physics of greenhouse gasses unless the physics magically changes between 280 and 380 ppmv

    I don’t believe we can a) quantify the warming - many are suggesting that it’s much lower than 3 K per CO2 doubling, which can only be confirmed by the data. b) Do we really know where that warming will become evident? For example, is it not possible that any additional energy absorbed by the extra 100 ppm CO2 will simply increase the rate of convection?

  • Ray Ladbury // November 15, 2008 at 12:49 pm

    John Finn, We can see the warming. Even if we didn’t have a global temperature record, winter comes later and ends earlier, tropical and subtropical species are extending their domain, ice is melting. Again, you are trying to preserve ignorance because you think you can hide behind it.
    The aggregate of the data favor 3 K per doubling. It is simply not possible for it to be much below 2 K, and it is much more likely to be above 3 K than below.
    Your assertion re: convection doesn’t hold water. That’s physics, and we know how to do physics. The only way you get rid of energy is via radiation of IR, and the only way we radiate more IR is if we warm up. Why don’t you at least learn enough about the physics to pose suggestions that make sense?

  • Julian Flood // November 15, 2008 at 1:30 pm

    Ray Ladbury // November 15, 2008 at 12:49 pm wrote
    “Your assertion re: convection doesn’t hold water. That’s physics, and we know how to do physics. The only way you get rid of energy is via radiation of IR, and the only way we radiate more IR is if we warm up. Why don’t you at least learn enough about the physics to pose suggestions that make sense?”

    I’ve been puzzled about the physics business. If ‘we’ know the physics so well, how come the models output anomalies and not absolute temps? Or do they give absolutes that get converted for some obscure reason? Do you know?

    Re the radiation of energy vs convection and then radiation: my favourite quote of all time was the climate guru who stated that yes, the climate was warming, but the extra warming was being radiated away into space and that’s why it was cooling.

    Re the methane: I know that acid rain suppresses methane production from bogs, tundra etc. Maybe the suppression of SO2 production is just beginning to show.

    JF

  • Hank Roberts // November 15, 2008 at 3:28 pm

    Julian asks
    > If ‘we’ know the physics so well
    > how come the models output anomalies and
    > not absolute temps?

    You’re thinking of a machine model — wind it up and it runs to completion. The climate models aren’t like that.

    Look at how they work:
    http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php

  • EliRabett // November 15, 2008 at 4:30 pm

    Big data sets always have inconsequential errors. It’s the nature of the thing.

  • dhogaza // November 15, 2008 at 4:40 pm

    I’ve been puzzled about the physics business. If ‘we’ know the physics so well, how come the models output anomalies and not absolute temps?

    This has nothing to do with physics, per se.

  • Barton Paul Levenson // November 15, 2008 at 5:05 pm

    Dave A writes:

    Big datasets always have errors, don’t they Ray? So do climate models because they too are basically big datasets. Yet on the basis of these error prone data sets we are supposed to take policy decisions that will cost trillions and impose fundamental changes on our societies.

    The error is in your second sentence. Global Climate Models are NOT programmed with climate statistics. The only climate data used is the surface albedo, mean cloud cover, elevation, etc. of each grid square. The rest is pure physics.

  • Barton Paul Levenson // November 15, 2008 at 5:16 pm

    My post seems to have disappeared. I’ll try again. Apologies to all if this is a duplicate.

    John Finn writes:

    I don’t believe we can a) quantify the warming - many are suggesting that it’s much lower than 3 K per CO2 doubling, which can only be confirmed by the data.

    “Many” doesn’t really count — how about the mean and standard deviation? Eliminate the hyphen below and have a look at the 61 estimates for CO2 climate sensitivity I’ve looked at:

    http://www.geoci-ties.com/bpl1960/ClimateSensitivity.html

    b) Do we really know where that warming will become evident?

    Yeah, pretty much.

    For example, is it not possible that any additional energy absorbed by the extra 100 ppm CO2 will simply increase the rate of convection?

    Probably not by enough to matter. Note that surface heat loss from conduction/convection is only about 24 watts per square meter, whereas incoming sunlight is 168 W/m^2 and back-radiation from the atmosphere is 324 W/m^2. Even if you add in the cooling from latent heat (evaporation) = 78 W/m^2, you’re still talking 102 W/m^2 compared to 492.
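
    As a sketch of the arithmetic (the figures quoted above appear to be the Kiehl & Trenberth (1997) global means):

        # Global-mean surface energy budget terms quoted above, in W/m^2
        sensible = 24          # conduction/convection (sensible heat)
        latent = 78            # evaporation (latent heat)
        solar = 168            # sunlight absorbed at the surface
        back_radiation = 324   # downwelling longwave from the atmosphere

        non_radiative = sensible + latent       # 102 W/m^2
        radiative_in = solar + back_radiation   # 492 W/m^2
        print(non_radiative / radiative_in)     # ~0.21, so convection is a small lever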

  • David B. Benson // November 15, 2008 at 5:22 pm

    John Finn // November 15, 2008 at 12:29 pm — Charney sensitivity of about 3 K is established by looking at times in the past when the climate was in equilibrium, or nearly so; the LGM is one such time. The paper by Annan & Hargreaves is excellent; study it.

    Julian Flood // November 15, 2008 at 1:30 pm — GCMs do physics; it is a convenience to the reader to produce graphs of anomalies.

  • John Finn // November 15, 2008 at 8:44 pm

    John Finn, We can see the warming. Even if we didn’t have a global temperature record, winter comes later and ends earlier,

    When I was much younger than I am now, I was regularly told by my elders that winters seemed never-ending and that summers were colder and wetter than when they were young. Going further back over the centuries, there is plenty of evidence from poets and writers that climate has gone through any number of changes. Proxy reconstructions aren’t going to change the fact that populations were constantly on the move due to the climatic conditions of the day. Warming and cooling will always be with us. There is no such thing as a stable climate.

    An old argument I know but true nevertheless.

  • Julian Flood // November 15, 2008 at 9:01 pm

    David B. Benson // November 15, 2008 at 5:22 pm wrote

    ” GCMs do physics; it is a convenience to the reader to produce graphs of anomalies.”

    Do they publish the actual output anywhere? Presumably they’re getting pretty close to the measured data — I’m just wondering how close.

    Also, I don’t see how the three replies above are reconcilable: if the models don’t do physics ‘per se’, then how do they actually compute all the interactions? And if a model doesn’t actually finish a run, then I find the idea of snipping out a bit in the middle that matches what you’re trying to prove to be a bit dubious. What happens if you run the model outside the set time bracket? Does it run away to unrealistic figures? If so, how can we then claim that we have a good handle on the physics?

    Does increased computer power make it easier to answer these questions?

    JF

  • David B. Benson // November 15, 2008 at 9:43 pm

    Julian Flood // November 15, 2008 at 9:01 pm wrote “Do they publish the actual output anywhere?” Yes, somewhere. But Chapter 8 of IPCC AR4 does model assessment; I suggest you start there. More about model development is in “The Discovery of Global Warming” by Spencer Weart:

    http://www.aip.org/history/climate/index.html

    Review of above:

    http://query.nytimes.com/gst/fullpage.html?res=9F04E7DF153DF936A35753C1A9659C8B63

    and still more in a book with a title something like “History of General Circulation Model Development”.

    The GCMs run until one has told the program to stop. A typical assessment run uses various initial conditions for a period of known external forcings; the outputs are compared to collected data.

    As I explained earlier, all GCMs conserve enthalpy so the models never ‘run away’; see the cited “Discovery”.

  • John Finn // November 16, 2008 at 1:50 am

    “Many” doesn’t really count — how about the mean and standard deviation? Eliminate the hyphen below and have a look at the 61 estimates for CO2 climate sensitivity I’ve looked at:

    Ok - I’ve looked at them and so what? I notice Stern is included in the list - would this be Nicholas Stern? If so, why? Why is his eatimate any more valid than, say, Lindzen’s - or mine for that matter?

  • Hank Roberts // November 16, 2008 at 2:16 am

    > … why? Why is his eatimate any more valid …

    Attention to detail?

  • Julian Flood // November 16, 2008 at 9:38 am

    Re David B. Benson // November 15, 2008 at 9:43 pm

    quote “Do they publish the actual output anywhere?” Yes, somewhere. unquote

    I was hoping for a slightly more informative answer! Thanks for the references, but I don’t flatter myself that I’m up to understanding a dense exposition. Maybe there’s a blog somewhere which covers this for the layman? I’ve Googled for it but had no luck.

    Does anyone know of a nice simple graph which shows what actual temps (not anomalies) the models output? If we’ve really got a grip of atmospheric physics then they’ll be within a few decimals of a degree (or much less considering that we’re talking about an AGW component of .6 deg/century). Discrepancies of more than a few tenths would not be reassuring about the accuracy of modelling assumptions.

    Unless, of course, problems of initial conditions make getting a good final absolute figure impossible.

    JF

  • John Finn // November 16, 2008 at 10:48 am

    Note that surface heat loss from conduction/convection is only about 24 watts per square meter,

    How has this value been determined? I only ask because I was trying to imagine a world without convection. Sunny days would obviously be much hotter - even in regions with temperate/cool climates. Think of sitting in a car (or greenhouse) in direct sunlight. The poles would presumably be much colder in their respective winters. Calculating a global figure for average energy loss via convection would appear to be quite a complex problem. I am a bit suspicious that the 24 W/m^2 is just a convenient fudge factor (a bit like aerosols).

    With respect to convection from additional ghg warming - why would this affect the surface energy budget?


    > … why? Why is his eatimate any more valid …

    Attention to detail?

    Stern is an economist. As such, one might ask why, a few years back, he hadn’t come up with a model which forecast the current financial catastrophe that’s about to be unleashed on the developed world. But not Nick: he’s predicted the increase in global temperatures due to CO2 doubling in several decades’ time. Brilliant!

    This bozo didn’t spot an imminent crisis in what is his supposed area of expertise. With all due respect, I’d give more credibility to the opinion of the bloke who cleans my windows.

    [Response: Perhaps before you crucify "Stern" for his estimate of climate sensitivity, you should absolutely confirm that 1) the "Stern" in question is the economist you're thinking of, and 2) the estimate of climate sensitivity is his, he's not just quoting some other source.]

  • John Finn // November 16, 2008 at 11:24 am

    From the UK Telegraph

    A GISS spokesman lamely explained that the reason for the error in the Russian figures was that they were obtained from another body, and that GISS did not have resources to exercise proper quality control over the data it was supplied with. This is an astonishing admission: the figures published by Dr Hansen’s institute are not only one of the four data sets that ….

    If true GISS can no longer be considered a reliable source of temperature data.

    [Response: Bullshit.

    YOU can no longer be considered a reliable source of information or opinion on this, or probably any, subject.]

  • Sekerob // November 16, 2008 at 12:34 pm

    Just fell off my chair

    John Finn // November 15, 2008 at 8:44 pm wrote

    “Proxy reconstructions aren’t going to change the fact that populations were constantly on the move due to the climatic conditions of the day.”

    Over 50% of the global population lives in cities. Where should they move? To Amish Land? 6.74 billion today. 9.2 billion predicted now for 2050.

  • Lazar // November 16, 2008 at 2:34 pm

    Julian Flood,

    Does anyone know of a nice simple graph which shows what actual temps (not anomalies) the models output? If we’ve really got a grip of atmospheric physics then they’ll be within a few decimals of a degree

    GISS Model E; homepage,
    and documentation.

    Table 3 gives annual mean diagnostics comparing model output with observations.

    Modelled mean global SAT for three configurations; 14.4, 14.5, and 14.3 C. Observed is 14.0 C.

    You can find documentation for GCMs using google and google scholar.

    (or much less considering that we’re talking about an AGW component of .6 deg/century)

    Nope. Different sources of error.

  • John Finn // November 16, 2008 at 3:22 pm

    [Response: Perhaps before you crucify "Stern" for his estimate of climate sensitivity, you should absolutely confirm that 1) the "Stern" in question is the economist you're thinking of, and 2) the estimate of climate sensitivity is his, he's not just quoting some other source.]

    Tamino

    1. I did ask if it was THE Nicholas Stern
    2. It was BPL who attributed the climate sensitivity value to Stern, not me.

    Over 50% of the global population lives in cities. Where should they move? To Amish Land? 6.74 billion today. 9.2 billion predicted now for 2050.

    Yep, and less than 1.5 billion in 1900. This is one of the things that puzzles me. That is, during this period of unprecedented warming the human race seems to have been a remarkably successful species. Not only that, but we’re apparently going to increase our population growth as we head into the “catastrophic warming” era.

    [Response: Bullshit.

    YOU can no longer be considered a reliable source of information or opinion on this, or probably any, subject.]

    I wasn’t aware that I ever was. GISS data, on the other hand, is widely cited in numerous academic studies.

  • dhogaza // November 16, 2008 at 5:54 pm

    If true GISS can no longer be considered a reliable source of temperature data.

    It’s never been a reliable source of temperature data.

    It’s a reliable analyzer of OTHER PEOPLE’S temperature data.

    Really, the Q/A shoe is on the collector’s foot.

    GISS does automated Q/A on the data (despite what the newspaper quote you provide says), because of various issues with the data they receive. They just didn’t happen to catch *this particular error* immediately.

    They’ve now added an additional check for this particular problem.
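
    The check they added isn’t spelled out anywhere I’ve seen, but the general idea is cheap to sketch. A hypothetical duplicate-month screen in Python (names made up for illustration):

        import numpy as np

        def carried_forward(prev_month, this_month):
            """Flag stations whose values this month are identical to last
            month's -- the failure mode behind the October incident."""
            prev = np.asarray(prev_month, dtype=float)
            curr = np.asarray(this_month, dtype=float)
            return curr == prev   # duplicated records are bit-identical

        # Stations where many values repeat exactly warrant a closer look:
        # suspect = carried_forward(september_values, october_values)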

  • Ray Ladbury // November 16, 2008 at 6:37 pm

    My heart goes out to people like John Finn. On the one hand, he makes it clear he has no understanding of climate science, but he’s sure it must be wrong, based on… who knows what. The only way he’d ever be able to make an intelligent contribution to the subject would be to actually learn the science. On the other hand, the science clearly supports the anthropogenic causation he opposes. His resolution: pontificate from the pulpit of ignorance. Good luck with that, John.

  • TCO // November 16, 2008 at 6:46 pm

    What is so bad about global warming? I hate winter.

  • TCO // November 16, 2008 at 7:20 pm

    This looks cool. Seriously. Speed up the movie. I want to watch.

    http://www.worlddreambank.org/D/DUBIA.HTM

  • Lazar // November 16, 2008 at 8:03 pm

    Math help needed, please…

    The definite integral between 0 and 1 of u du, where u=cos x
    I think that’s -1/2, right?
    As in, cos^2(-pi/2) - cos^2(0)

    Thanks.

    [Response: I think it’s +1/2, whether u is cos x or not.]
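
    Worked through, since the sign is easy to get backwards:

        \int_0^1 u \, du \;=\; \left[ \frac{u^2}{2} \right]_0^1 \;=\; \frac{1}{2}

    With u = cos x, the limit u = 0 corresponds to x = -pi/2 and u = 1 to x = 0, so evaluating cos^2(-pi/2) - cos^2(0) takes the limits in the wrong order, which is where the stray minus sign comes from.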

  • Lazar // November 16, 2008 at 8:05 pm

    … the last bit, divided by 2.

  • Lazar // November 16, 2008 at 8:26 pm

    Thank you.

  • Hank Roberts // November 16, 2008 at 8:28 pm

    John Finn, if you paid attention to the economists you’d know some of the predictions made. The “it can’t possibly happen” argument is well made in the first video clips here.
    http://econvideo.blogspot.com/2008/11/peter-schiff-retrospective-on-fox-news.html

    2005 academic work:
    http://www.ess.ucla.edu/faculty/sornette/prediction/images/20050819_FigVAY_pop.gif

    Current links here:
    http://www.er.ethz.ch/publications/finance/asset_pricing

    Now, can you let go of the notion that economists can’t forecast crashes and begin considering the scenarios from the climatologists?

    Both groups are ignored by people making money from unsustainable conditions as long as they can prolong their opportunity.

  • Lazar // November 16, 2008 at 8:43 pm

    Final question, promise…
    The integral of u e^(-1/u) does not have a closed form solution?

  • John Finn // November 17, 2008 at 1:01 am

    hank

    the video you link to was made in what ? 2006? 2007 ?

    I live in the UK and occasionally go for a few beers to a small insignificant little club that no-one outside the local area will have heard of.

    At least 4 years ago - most of the guys over the age of 50 who used the club were adamant that people - the young in particular - were borrowing too much money and that this was pushing house prices to unsustainable levels (my words - not theirs). They were convinced it would all “end in tears”.

    They were right about that despite having no background whatsoever in economics. Few of them have any knowledge of statistics or climate physics but they all think (human-induced) global warming is a crock. I’m beginning to suspect they may be right about that too.

    You see, Hank, they are able to compensate for their total lack of expertise in areas of science and finance by using 2 under-rated but far more useful attributes - i.e. experience and common sense.

  • Philippe Chantreau // November 17, 2008 at 1:05 am

    John Finn in all his skeptical glory. And skeptics keep accusing their opponents of name-calling…
    I understand you’d call Stern names, but really, the guy who cleans your windows? That’s unfair. Who knows, perhaps the “bloke” does agree with you, which would probably unbloke him in your view.

  • Ray Ladbury // November 17, 2008 at 2:24 am

    John Finn,
    Ah, yes, the common myth that expertise and concerted study only get in the way of “common sense”. Common sense only works when one is thoroughly familiar with the system. As you and your drinking buddies are not, by your own admission, anything like experts, your efforts at “common sense” really are pretty laughable.
    Like it or not, John, science works, and it works where common sense fails.
    John, there were plenty of economists who called this one right. That the bubble would burst was utterly predictable, but people couldn’t resist trying to squeeze just a little more out of the good times. Prosperity seems to make us stupid.

    As for you, I think it speaks volumes about you that you take the opinion of other alcohol-soaked ignoramuses above that of people who have devoted their lives to studying the climate. You seem to have adopted the equivalence of ignorance and bliss as an axiom. I can see where a couple of pints might help there.

  • michel // November 17, 2008 at 7:37 am

    This is nothing more than politically-driven slander. And I think Finn and the others know it.

    It’s neither. It’s not politically driven, and it’s not slander.

    The correct response, the only response, to the detection of a quality problem in a data series, is to just say, yes there is a problem, yes the existence of the problem does show we have to look at our processes.

    They only have to get a few hundred numbers right once a month. They are spending almost nothing on it, by their own admission. Just fix it. Why do so many people find this so hard to accept? Why do they react with such hostility to the idea that there is any room for improvement?

    It seems that for lots of people, any suggestion that GISS is anything but the best of all possible data generators feels like an attack on everything they hold dear. It’s not; it’s just a suggestion that their data handling methods could be improved. Which they obviously could. Get used to it.

    [Response: First: the mistake wasn't made by GISS, it came from NOAA. But the criticism has been levelled against GISS, not NOAA. Why do you suppose that is? My opinion: it's because of the desire to personally discredit James Hansen. That's politically-driven slander.

    Second: it's most definitely not just a case of folks saying "data handling methods could be improved." It's the ridiculous claim that GISS estimates are worthless. And John Finn (the subject of the comment) is STILL at it. His desire to trash GISS is nothing short of character assassination.

    Of course data handling methods can be improved -- in this particular case by checking for mistakes made by *other* organizations.]

  • dhogaza // November 17, 2008 at 9:58 am

    Few of them have any knowledge of statistics or climate physics but they all think (human-induced) global warming is a crock. I’m beginning to suspect they may be right about that too.

    And a few of them probably think the earth is 6,000 years old, too.

    Why would anyone rational care?

  • John Finn // November 17, 2008 at 10:20 am

    Final question, promise…
    The integral of u e^(-1/u) does not have a closed form solution?

    Has this been answered? I assume that by “closed form solution” you mean it cannot be solved analytically (i.e. using pencil and paper).

    It doesn’t look as though it can be solved using the most obvious approach, i.e. integration by parts, but I’m way too rusty to know for sure.

    There are often neat little tricks to solve integrals like this, but again rustiness is a problem. I tried using the series expansion for e^x [i.e. e^x = 1 + x + x^2/2 + ...] so the expression above would become

    u {1 - 1/u + 1/(2u^2) - ...} but this doesn’t seem to lead to anything.

    One long shot (i.e. I’m not sure it’s valid) is to try this approach:

    y = u e^(-1/u)

    So ln(y) = ln(u) - 1/u, then integrate both sides - no, forget it!!! As I was writing this I’ve realised it’s rubbish. I was hoping we would end up with an expression for the integral of y whereas, at best, we’ll only end up with an expression for y, i.e. back where we started.

    Sorry about that - but if anyone does have a solution I’d like to hear it, because it’s going to annoy me for the rest of the day now.
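
    For what it’s worth, the antiderivative is not elementary, but it can be written using the exponential integral Ei. Below is a sketch in Python/SymPy that checks a candidate closed form (my own integration-by-parts working, so treat it with suspicion) by differentiating it back to the integrand:

        import sympy as sp

        u = sp.symbols('u', positive=True)
        integrand = u * sp.exp(-1 / u)

        # Candidate antiderivative: (1/2)(u^2 - u) e^(-1/u) - (1/2) Ei(-1/u)
        F = (sp.Rational(1, 2) * (u**2 - u) * sp.exp(-1 / u)
             - sp.Rational(1, 2) * sp.Ei(-1 / u))

        # d/du F should equal the integrand, so expect 0 here
        print(sp.simplify(sp.diff(F, u) - integrand))

    So it has a closed form only if you admit Ei; in elementary functions alone it does not.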

  • John Finn // November 17, 2008 at 10:22 am

    Common sense only works when one is thoroughly familiar with the system.

    Do you mean like when you get a temperature anomaly 2 or 3 sigma above normal?

  • dumskalle // November 17, 2008 at 1:03 pm

    –Do you mean like when you get a temperature anomaly 2 or 3 sigma above normal?

    Well, I guess you know how often that will happen? You know, statistical sense.

  • Gavin's Pussycat // November 17, 2008 at 1:08 pm

    Maxima sez int(u*exp(1/u))du = (1/2)*int((1/u)*exp(1/u))du + (1/2)*(u^2+u)*exp(1/u). And then it doesn’t know what to do with the integral on the right hand side :-(

  • Gavin's Pussycat // November 17, 2008 at 1:16 pm

    Lazar, sorry minus sign, not plus. Maxima is a fine piece of software.

  • Gavin's Pussycat // November 17, 2008 at 1:24 pm

    John Finn, no, the Stern BPL is quoting is David Stern — as it says in the link you provide.

  • Ray Ladbury // November 17, 2008 at 1:52 pm

    John Finn, No, I mean knowing that if you have a 2-3 sigma error, it’s likely to be caught in fairly short order–as this one was, and as it would have been anyway as soon as somebody looked in any detail at the output for Siberia.

    John, you seem to be satisfied with half an understanding. You are willing to attribute the current financial meltdown to “easy credit” fueling a real-estate bubble. It doesn’t seem to occur to you to ask why credit was easy, or why this particular bubble has burst globally rather than causing a few small, local banks to fail. That’s where the story gets interesting.

    Likewise, you assume that humans cannot affect climate primarily because you have a picture of it as BIG and of human impacts as small. You haven’t bothered to look into the real science–and that, also is where things get interesting. Unfortunately, the inescapable implication of our understanding of climate is that humans are indeed changing it. There really isn’t an alternative. It’s just a question of how much things will change and how bad that will make things as we try to support 9 billion people on the globe.

  • JCH // November 17, 2008 at 2:32 pm

    People claim to think - I do not believe they are being truthful when they make the claim - that humans cannot change climate, but has anybody else noticed how quickly they glom onto any and every hare-brained geo-engineering solution to climate change?

  • Hank Roberts // November 17, 2008 at 3:34 pm

    > geoengineering

    That’s the “hey, let’s try giving 700 billion dollars to our friends and the people we used to work with and live around and went to school with and who have kids our kids can marry, and see if that helps” approach to problem-solving.

    Same reason there’s far more interest in adding huge expensive carbon capture devices to the huge expensive coal-fueled utility system, rather than funding the development and use of individually owned, widely distributed small solar and wind collectors — concentrate, centralize, and control the money.

    Next up: the utility would like to take control of the sunlight falling on the property you’re paying the mortgage on, to capture your rooftop’s solar potential (or have the government nationalize it if necessary to get control of it). Then they can place their solar collectors there.

    Goal?? Continue to have only a few companies own everything from the coal mines out to the fuse box, and only a few banks own all the mortgages from the fuse box on in.

    It’s a problem.

  • John Finn // November 17, 2008 at 3:57 pm

    John Finn, no, the Stern BPL is quoting is David Stern — as it says in the link you provide.

    GP, old bean, I didn’t provide any link. It was BPL who provided the link; I simply asked whether it was Nicholas Stern, the economist who did actually produce a GW-related report at about the same time.

    re: the integral - does anyone use Mathematica anymore? That used to be pretty impressive at solving complex integrals and suchlike.

  • David B. Benson // November 17, 2008 at 7:26 pm

    Julian Flood // November 16, 2008 at 9:38 am — You should be able to read “The Discovery of Global Warming” with profit; it is a work of history.

  • Dave A // November 17, 2008 at 10:43 pm

    David B Benson,

    Are you sure you’re not a pseudonym for Spencer Weart?

  • Ray Ladbury // November 18, 2008 at 1:42 am

    Dave A, I know Spencer. Dave B. ain’t him. Maybe if you read Spencer’s “Discovery…” you’d understand why we all recommend it so heavily.

  • HankRoberts // November 18, 2008 at 1:45 am

    http://www.newscientist.com/article/dn16060-eastern-bloc-collapse-boosts-climate-outlook.html

    “The world is on track to meet its Kyoto protocol target, according to very preliminary data released for the first time today.”

  • HankRoberts // November 18, 2008 at 1:47 am

    See also:

    http://www.newscientist.com/article/dn16058-prophesy-of-economic-collapse-coming-true.html

  • Gavin's Pussycat // November 18, 2008 at 6:16 am

    > has anybody else noticed how quickly they glom onto any and
    > every hair-brained geo-engineering solution to climate change?
    Yeah… another example of the Rabett’s “Believing ten impossible things before breakfast”
    http://rabett.blogspot.com/2008/10/believing-ten-impossible-things-before.html
    Like, “it ain’t happening and is caused by the Sun, and BTW is good for us”.

    John Finn old bean, you copied the link (my bad, didn’t spot that) but didn’t bother to look behind it, which would have answered your question :-)

  • Sekerob // November 18, 2008 at 8:28 am

    Meeting Kyoto? The US alone consumes 1 million barrels of oil a day less since the credit crunch came to full bloom… so without signing they might get close if not “achieve” the target.

    Certainly, reading some of the loose press, there is some interesting accounting going on to get to the paper targets. Specifically, there were notes a while back on how the UK did it without doing much at all.

  • Hank Roberts // November 18, 2008 at 11:10 am

    John Finn, ‘bucket shop’ contracts were illegal in most if not all of the US states and illegal under federal law for almost all of the 20th century, until the very end when someone (unidentified) slipped a preemption of state law and an explicit permission at the federal level into a huge federal budget bill at the conference committee stage. No hearings, no record, no information.

    It took eight years for the economy to collapse with 30x leverage deals, once the regulation and prohibitions were thrown out.

    This is not a case of new stupid. This is old, proven, widely recognized stupid returned.

    Haven’t read about it? Read:
    http://www.cbsnews.com/stories/2008/10/26/60minutes/main4546199.shtml?source=RSS&attr=_4546199

    Or try the videos.
    Linked here:
    http://www.distressedvolatility.com/2008/11/credit-default-swaps-and-derivatives.html

    “… some videos from 60 Minutes discussing credit default swaps and derivatives and thought I’d present them on this blog. These contracts have had a huge role in today’s financial crisis …”

    No, it wasn’t the Community Reinvestment Act; those loans have performed very well.

    No, it wasn’t Freddie and Fannie.

    You can look this stuff up. Don’t rely on some guy on a blog. Read and think for yourself. Check the primary sources. Else you’re just gossipmongering, spreading PR for others for free.

  • David B. Benson // November 18, 2008 at 7:13 pm

    Dave A // November 17, 2008 at 10:43 pm — I am absolutely positive I am not.

    :-)

  • Ray Ladbury // November 18, 2008 at 7:26 pm

    Hank, You’re taking all the fun out of revisionist history. ;-)

  • John Finn // November 19, 2008 at 12:29 am

    John Finn old bean, you copied the link (my bad, didn’t spot that) but didn’t bother to look behind it, which would have answered your question :-)

    Fair comment. I could have investigated a bit further, but to be fair it only started as a casual, throwaway question which just sort of snowballed. Whatever - it appears to have set Hank off on a mission.

    The fact still remains though, Nicholas Stern did produce a recent widely publicised report dealing with the economics of climate change which at no point mentioned the possibility of a significant downturn in the near or even medium term future.

  • HankRoberts // November 19, 2008 at 2:22 am

    > at no point mentioned the possibility of a
    > significant downturn

    Scenarios. That’s what they’re like.
    They’re not predictions.
    They’re “if this goes on” documents.

  • TCO // November 19, 2008 at 2:55 am

    I like global warming because I enjoy tropical climate.

  • Gavin's Pussycat // November 19, 2008 at 3:43 am

    > The fact still remains though, Nicholas Stern did produce a recent
    > widely publicised report dealing with the economics of climate
    > change which at no point mentioned the possibility of a
    > significant downturn in the near or even medium term future.
    No… because that possibility doesn’t exist. We live in a physical, not a magical, world. He studied the science before basing his own work on it: a feature, not a bug.

  • Hank Roberts // November 19, 2008 at 6:33 am

    Which ‘downturn’ are you talking about there?
    We do have a reduction — comparable to the one that happened in the years after the USSR collapsed — in fossil fuel use showing up right now. It’s a backassward way of meeting the Kyoto numbers but it looks like it’s happening at this point. Early days of course.

  • michel // November 19, 2008 at 7:16 am

    No, it wasn’t the Community Reinvestment Act; those loans have performed very well.

    No, it wasn’t Freddie and Fannie.

    Well, it was partly Fannie and Freddie, or they would not now be requiring bailout. But this is right, the main cause was elsewhere. The essential cause was the Fed. When a central bank sets interest rates below the rate of inflation, this results in borrowers being paid to borrow. When you do this, they do indeed borrow. Since we lowered the interest rate to banks, it was banks that borrowed and subsequently lent what they had borrowed.

    If you like, this was a way of appearing to multiply bank capital and so lending capacity. We then permitted securitization. This was only possible because of the failure of the rating agencies, and this in turn was only possible because the two rating agencies are run as a government sponsored cartel and paid by issuers of financial instruments. We were now in a situation in which bank reserves appeared to have been multiplied, but the extra reserves were somewhere in the air, floating as it were, and when called on, would turn out not to exist. The loans were not AAA, the counterparties could not really insure.
    When this came to light, as it eventually had to, the crunch was inevitable.

    We now have a problem which however complicated it may appear, comes down to one thing: huge quantities of bad debt and to go with it, huge quantities of failed investments in the real world. All this has to be written off.

    An increase in lending capacity on this scale has only happened once before in recent history, and that was the huge increase produced by the lowering of reserve requirements that attended the formation of the Fed.

    As BPL is fond of saying on other topics, do the work for yourself and find the key data. The key indicator is the amount of growth in the monetary aggregates post Volcker as compared to the growth in the real economy.

    Sow the wind, reap the whirlwind. Greed and wickedness existed and were the transmission. The engine was the government, and the Fed was the accelerator. This was not the failure of unregulated markets. This was the failure of a certain kind of regulation.

  • nanny_govt_sucks // November 19, 2008 at 8:29 am

    I like global warming because I enjoy tropical climate.

    Yes. Hawaii is commonly considered a paradise, yet if the world were to become more Hawaiian it would somehow be a disaster. Go figure.

  • John Finn // November 19, 2008 at 9:00 am

    GP, Hank

    I think we can probably agree to differ on this. But Stern produced a report which was little more than speculation since it dealt with economic scenarios over the timescale of decades. The current downturn has the potential (if only a possibility) to dwarf any of his projected economic effects from climate change.

    In other words his paper (if some of Hank’s kinks ares correct) is obsolete and irrelevant - and it’s only about a year or two old.

    I’m trying to think of an analogy. Let’s say we knew the Sun was going to diminish by 30% in the next decade. No-one is going to be bothered funding an expensive study on the effects of CO2 over the next century. Not a great analogy but you get the idea.

  • michel // November 19, 2008 at 9:44 am

    You could also blame the Commodity Futures Modernization Act of 2000, which was a very important element in allowing the nefarious exploitation of the flood of money created by low interest rates. Without that, the whole derivatives business would never have taken off.

  • P. Lewis // November 19, 2008 at 9:44 am

    Things that the Nicholas Stern Review report also did not mention:

    1. Volcanic activity on the scale of what laid down the Deccan Traps, if repeated, will lead to a recession lasting 100s of years, global cooling, changes to weather/climate patterns and major species extinctions.

    2. A bolide on the scale of what rearranged the face of the Earth at Chicxulub will result in global cooling lasting decades, changes to weather/climate patterns, an unparalleled drop in economic activity, wholesale species extinctions and possibly set off volcanic activity on the scale of what laid down the Deccan Traps (see 1).

    3. A scientific and technological breakthrough due in 2010 means that nuclear fusion power generation, which was once (in the 50s or 60s) just 50 years from providing unlimited cheap energy is now really just 50 years from realising that aim, thus allowing oil to be used for what it should be: a chemical feedstock.

    4. Father Christmas is a myth! :-O Nobody reads the letters you send him every year, not even his helper elves; they’re a myth, too! (See 3).

    [What beats me, though, is how I get that present I want every year (especially as I haven’t sent him a letter in “2 or 3 years” now): that aftershave I never use (I don't) and that hair gel of which I have no need. Weird! But it's the thought that counts, I guess.]

    5. Idiots abound (see most posts by “so-called sceptics” for affirmation of that axiom). Theirs are the thoughts that don’t count, here or anywhere.

  • John Finn // November 19, 2008 at 11:52 am

    Sorry about the typos in the previous post particularly the one about “Hank’s kinks” which should, of course, be “links”.

  • JCH // November 19, 2008 at 4:17 pm

    “Well, it was partly Fannie and Freddie, or they would not now be requiring bailout. …”

    This is not really true. Fannie and Freddie were forced to buy up exotic/predatory subprime by the Bush Administration. That has nothing at all to do with the CRA and CRA lending. The CRA lenders had standards.

    GWB had a goal, and he meant every word of it - 5.5 million houses to be sold to minority families by 2010. That meant their toxic financing had to be bought up by the secondary market to keep the goal on target, and he made certain it was.

  • Hank Roberts // November 19, 2008 at 5:38 pm

    > finance
    http://calculatedrisk.blogspot.com/
    It’s all there.

    > CO2 change
    Compare the collapse of the USSR; we’re not close to that yet. The economic blip shows it’s possible, but certainly doesn’t devalue the projections made. Remember ocean pH change is the first disaster in the pipeline, and it’s simple chemistry, nobody’s arguing against it.

  • Julian Flood // November 19, 2008 at 5:42 pm

    Lazar wrote

    quote Modelled mean global SAT for three configurations; 14.4, 14.5, and 14.3 C. Observed is 14.0 C. unquote

    Very impressive! Thanks.

    JF

  • Philippe Chantreau // November 19, 2008 at 11:24 pm

    Funny comment about the tropical climate, TCO and NGS; let’s examine a few other tropical locations (i.e. located near the tropics):
    Fderik (Mauritania), Taoudeni (Mali), Gouro (Chad), Riyadh, Medina (Saudi Arabia), Windhoek (Namibia), Tshane (Botswana), Alice Springs (Australia).

    We could expand our search, which would lead us to notice that most deserts are located near the tropics. Nice climate enjoyed in these locations.

  • David B. Benson // November 20, 2008 at 12:17 am

    Philippe Chantreau // November 19, 2008 at 11:24 pm — The climatological tropics are the two Hadley cells, one on each side of the ITCZ. North and south of the Hadley cells are the semi-tropics.

    As you note, those regions tend to be desert.

  • David B. Benson // November 20, 2008 at 12:31 am

    Forgot to mention: with AGW, I would suppose that the Hadley cells are expanding somewhat. This drives the semi-tropics further north and south in the respective hemisphere. Then the westerlies, still further north and south in the respective hemisphere are pushed towards the respective pole.

    Notice what this does in Australia, for example?

  • Philippe Chantreau // November 20, 2008 at 2:33 am

    Thanks for the precision, David.

  • Philippe Chantreau // November 20, 2008 at 5:42 am

    Michel, I think I get your point, but I have to somewhat disagree. All this could easily be interpreted as the failure of the government to say no to private interests, resulting in protections eroded by a thousand cuts. One could argue that this is a reason for the Fed to be entirely public, instead of partially so. Although the intent behind them was different, TILA and HMDA could have made it possible to put limits on the frenzy leading to the recent crash.

    The Fed could have gone the wise way. Instead, it yielded to all sorts of pressures acting to facilitate the making of a fast buck by some parties. That is in part a result of the free-market fundamentalist ideologies driving some players in the economic world that are way too big.

    One problem is that the Constitution long predates the existence of the enormous private interest influences seen since the 2nd half of the 20th century. No element should ever have that kind of influence or power in a representative republic.

    The regional telecom monopolies, the size of the oil majors, the merging of airlines leaving just a few players, the enormity of certain banks, all this works against what the end result of a free market should be. In essence, it makes it no longer free.

    The free market can, and in the long term will, work ONLY if it serves the greater good. The people’s representatives have the responsibility of tailoring regulations that will make that possible. Corruption is a major hurdle. You can say they have failed, and it would be difficult to disagree. That does not mean we shouldn’t try harder.

    “Democracy is the worst possible system of government, except for all the others.”

    Democracy needs to evolve to integrate the power and influence of economics. In my opinion, it has not yet done that for real.

  • Lazar // November 20, 2008 at 1:39 pm

    “Taking into account the recent global financial market crisis, how important is it for your government to take action to combat global warming?”

    (h/t BigCityLib)

  • HankRoberts // November 20, 2008 at 8:31 pm

    Global and regional drivers of accelerating CO2 emissions
    Michael R. Raupach, Gregg Marland, Philippe Ciais, Corinne Le Quéré, Josep G. Canadell, Gernot Klepper, and Christopher B. Field
    PNAS, June 12, 2007, vol. 104, no. 24, pp. 10288-10293, doi:10.1073/pnas.0700609104

    http://www.pnas.org/content/104/24/10288.long

  • TCO // November 20, 2008 at 11:21 pm

    Phil: Interested in your comments on deserts versus tropics. I know the only equatorial desert (not the same as tropical, of course) is Somalia. And there are deserts well north of Cancer and south of Capricorn. Have you perhaps plotted percent land desert versus latitude (it would prove/disprove your assertion)? Also, my understanding is that AGW proponents generally take as the simplest case that relative humidity is constant (amplification). So, not so sure that an AGW world is more arid.

    Let’s discuss.

  • Anna // November 21, 2008 at 12:52 am

    Hank and others, I need your help over at, ahem, my blog - when faced with an intelligent journalist asking “why *not* give a Heartland scientist airtime?”, how would you put the case?

  • David B. Benson // November 21, 2008 at 12:59 am

    TCO // November 20, 2008 at 11:21 pm — Meteorologically the globe is divided into Hadley cells, from the thermal equator to about 30 degrees latitude; the semi-tropics and prevailing westerlies, from 30 degrees to 60 degrees; and the polar cells the rest.

    The Tropics of Cancer and Capricorn are astronomical and have less to do with the general structure of the atmosphere.

    Nobody is claiming that the AGW world, as a whole, will be more arid; on the contrary, more precipitation (which may or may not translate into more soil moisture) is anticipated, but it seems not yet observed.

    What matters for agriculture is that the temperate zones are moving further poleward. It is becoming more arid in regions with already established agricultural infrastructure. (It also may be the case that regions dependent upon a monsoon may well find that the rain is too intense.)

  • TCO // November 21, 2008 at 1:17 am

    I was addressing Phil’s remark where he DID discuss more deserts.

  • HankRoberts // November 21, 2008 at 1:45 am

    Dunno, Anna, they’re a belief tank that’s currently claiming they are a peer-reviewed science journal.

    http://www.google.com/search?q=Heartland+“peer+review”+science

    This might help:
    http://www.desmogblog.com/heartland-debate-challenge-update-may-3-2007

  • Hank Roberts // November 21, 2008 at 4:03 am

    OMG, Tamino, Anna’s onto something here.

    http://www.agu.org/pubs/crossref/2008/2008GL035209.shtml

    It appears the problem started with the introduction of the Gregorian Calendar.

    Testable?

  • Lazar // November 21, 2008 at 2:29 pm

    Gauss Hermite quadrature for the integration of g(x) dx over [-inf,+inf], and g(x)=exp((-(u-x)^2)/v)

    Express g(x)=w(x)f(x)
    For GHQ the weighting function is w(x)=exp(-(x^2))
    Factor out exp(-(x^2)) gives…
    f(x)=exp((x^2)-((u-x)^2)/v)
    Obtain the weights w[i] and nodes n[i] from standard table
    Sum w[i]f(n[i])
    … right?

    PS I’m working on a line-by-line radiative transfer code for thermal IR as a simple, educative, derived-from-first-principles demonstration of the GHG effect. The code will be open-source and hopefully cross-platform. All help with my clunky maths is gratefully received, and will be going toward a good cause… if it works!

    PPS, for an integral over [-inf,+inf] and for the same number of nodes, is GHQ always more accurate than Gauss Legendre quadrature over an appropriate finite interval?
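
    One sanity check on the mechanics: factoring out exp(-(x^2)) directly, as above, leaves f growing like exp((1-1/v)x^2), which a fixed-order rule can handle slowly when v > 1; rescaling the variable so the Gauss-Hermite weight matches the Gaussian exactly makes the rule exact, and the Gaussian integral’s closed form sqrt(pi*v) gives a handy test. A minimal sketch in Python/NumPy (parameter values made up for illustration):

        import numpy as np
        from numpy.polynomial.hermite import hermgauss

        u, v = 0.5, 2.0                  # made-up line-centre and width parameters
        nodes, weights = hermgauss(20)   # 20-point Gauss-Hermite rule

        # Substitute x = u + sqrt(v)*t:
        #   integral exp(-((u-x)^2)/v) dx = sqrt(v) * integral exp(-t^2) dt,
        # so f(t) = sqrt(v) is constant and the rule is exact.
        approx = np.sum(weights * np.sqrt(v))

        exact = np.sqrt(np.pi * v)       # closed form of the Gaussian integral
        print(approx, exact)             # both ~2.5066 for v = 2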

  • Lazar // November 21, 2008 at 3:07 pm

    … when the function is continuous.

  • Lazar // November 21, 2008 at 3:21 pm

    … non-zero.

  • TCO // November 21, 2008 at 8:56 pm

    Now McI is babbling on about “data deletion”

    http://www.climateaudit.org/?p=4414#comments

    but he refers to a whole other post, which is not billed as data deletion, nor does it at the front tell you it is about data deletion. What a meandering meathead. He embarrasses me. [edit] And his writing and speaking are disorganized, overtime, and meandering. Too bad my side seems to specialize in nutter, old retired types who make little internet communities.

    At least we have real people like Zorita, who is skeptical, but not a PR oriented type. One who writes real papers…including ones that support AGW if that’s how the data cut, and the reverse when it doesn’t. But with McI, you get a totally skewed view. I feel bad for the little hoi polloi sucking down his view and just slapping each other on the back…instead of thinking critically.

  • David B. Benson // November 21, 2008 at 10:00 pm

    TCO // November 21, 2008 at 1:17 am — Oh. If the future is going to be like the last 50 years, then more desert area. For whatever reason.

  • Dave A // November 21, 2008 at 10:35 pm

    TCO,

    Re your obsession with Steve M - don’t you know it’s not good for you to let someone else rule your life?

  • TCO // November 21, 2008 at 11:45 pm

    Voodoo doll? Vendetta? Stop reading the net?

  • Hank Roberts // November 22, 2008 at 4:03 am

    I keep tellin ya’, man, you’re not on their side.

    There are intelligent, competent, people who insist on proof for claims everywhere. Thin and far between in some areas, yes. Hard on each other, yes. But they get along better with facts than they do with other people’s beliefs.

    You can’t be on anyone’s side well enough to get the believers on your side, because you’ll end up asking them to prove something.

    Occupational hazard of thinking. Love it while it lasts, few of us manage to retain the standoffish insistence on proof all our lives.
    We can hope.

  • TCO // November 22, 2008 at 6:26 am

    Volokh had a post on this. How much of the discussion on blogs is social and confirmatory and how re-examining the beliefs of your own side is challenged and unappreciated. Part of our monkey nature.

  • Philippe Chantreau // November 22, 2008 at 9:12 am

    TCO, my post about the tropics was not really intended to be much more serious than yours. I do recall a few things from old readings, however.

    DB reminded us that the Hadley cell circulation keeps dry air around 30 degrees latitude and promotes large persistent high-pressure systems. Both hemispheres have “desert belts” located approximately between 15 and 35 degrees.

    However, other factors are necessary. The Sahara is robbed of moisture by the Hadley cells, but is also so large that most of its interior is simply too far away from any source of moisture. The central Asian deserts (Gobi, Taklamakan) are remote interior basins, also too removed from moisture sources.

    Coastal deserts (Atacama, Namib, Baja) owe some of their dryness to cold ocean currents that limit evaporation and cloud formation.

    North American deserts all have a strong “rain shadow” component. The rain shadow can keep some areas very dry regardless of latitude, but closer to the tropics the evaporation will be such as to severely limit soil moisture (Sonora, Somalia).

    At any latitude, you won’t really have a desert if there is warm water nearby. Florida is at the same latitude as Baja but surrounded by warm water. Take a chunk of the Sahara, anchor it in the Bahamas, and it will eventually become like the rest of the islands. However, if you anchor it at the same latitude off the Skeleton Coast, it won’t change as much.

    I realize that this is quite simplified and all from memory from a long time ago, so feel free to correct/elaborate.

  • TCO // November 22, 2008 at 2:14 pm

    Desserts are cool too. But anyhow, if I can make the ocean warmer, things will be like Miami, not like Cabo.

  • Gavin's Pussycat // November 22, 2008 at 3:57 pm

    Hank, not really due to the establishment of the Gregorian calendar, but due to the fact of it being in use.
    Testable? Certainly. Compare leap years with non-leap years as a group. Finding a systematic difference, even a statistically significant one, is unsurprising.
    The question to ask, however, and having not read the paper I don’t know if the authors ask it, is “what is the effect on long-term trends?”
    Now if that were anywhere near significant, I would be surprised.

  • Gavin's Pussycat // November 22, 2008 at 4:19 pm

    Lazar, I’m not sure you’re on the right track here.
    I assume that your g(x) is a single line profile? Do you want to integrate it on its own from -inf to +inf? It is called the Gaussian integral and has a closed solution, sqrt(pi*v). See Wikipedia. Am I missing something?

  • george // November 22, 2008 at 10:52 pm

    Forget the potential threat of climate change for the moment.

    With actual terrorists bent on attacking Americans here at home, it’s really comforting to know that some of the officials charged with protecting us are focusing instead on climate change activists. Great use of limited resources.

    “Police spy on climate activist while global warming goes unarrested”
    Police spied on activist Mike Tidwell for months as a ’suspected terrorist’. — From Grist, part of the Guardian Environment Network

  • Lazar // November 22, 2008 at 10:59 pm

    GP,

    I’m not sure you’re on the right track here.

    I think you’re right there. There ought to be an (x-w)^-2 term in the integral. It’s the Voigt profile, described as a convolution between a Gaussian and a Lorentzian. Apparently, Hermite quadrature will not converge near the line center. I need to look at sommat called the Humlicek algorithm.

    An Efficient Method for Evaluation of the Complex Probability Function: The Voigt Function and its Derivatives
    J. Humlicek
    J. Quant. Spectrosc. Radiat. Transfer, 21, pp. 309-313
    1979

    Optimized Computation of the Voigt and Complex Probability Functions
    J. Humlicek
    J. Quant. Spectrosc. Radiat. Transfer, 27, 4, pp. 437-444
    1982
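
    For anyone following along: the quantity Humlicek approximates is the Faddeeva (“complex probability”) function w(z), and SciPy ships an implementation as scipy.special.wofz, from which the Voigt profile follows directly. A minimal sketch (the example widths are made up) that can serve as a cross-check for hand-rolled line-shape code:

        import numpy as np
        from scipy.special import wofz   # Faddeeva function w(z)

        def voigt(x, sigma, gamma):
            """Voigt profile at x: a Gaussian of standard deviation sigma
            convolved with a Lorentzian of half-width gamma, centred on 0."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

        # Example evaluation near the line centre
        x = np.linspace(-2.0, 2.0, 5)
        print(voigt(x, sigma=0.5, gamma=0.1))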

  • michel // November 23, 2008 at 9:44 am

    All this could easily be interpreted as the failure of the government to say no to private interests, resulting in protections eroded by a thousand cuts.

    Philippe, yes, it depends on how you look at it. The facts are that the various agencies acted in ways which, in combination, created an environment in which bad debt multiplied. We can point to the specific acts that contributed.

    Now your suggestion is that this is not best described as a government-sponsored credit boom, but rather as a failure to regulate.

    There is no doubt that part of it was a failure to regulate, or rather, a positive dismantling of existing regulation. It can be argued that this reflected a mistaken belief in the benevolent effects of free markets, rather than a deliberate boom-creating initiative. But the case where I think your argument does not apply is the lowering of interest rates under Greenspan. That was, beyond question, action by a government agency directly on the markets. You could argue that the deregulation which led to the growth of the CDS and CDO markets, and their combination into that lethal pathogen the synthetic CDO, was a failure of regulation rather than a positive act of government. But the interest rate cuts were not.

    It seems, if you look at history, that there are two ingredients: one is always present in every period; the other, when the government supplies it, leads to a credit bubble and then a bust. We always have greed and ambition and stupidity; it is a bit like peasants always trying to plant the most profitable crop. Most of the time, this works out more or less OK.

    Sometimes, however, the government does something which floods the economy with credit. In the 1920s it was the founding of the Fed and the consequent changes in reserve requirements that multiplied banks’ ability to lend. In the early 2000s it was the dramatic lowering of interest rates below inflation, which led participants to perceive that money was free, or better still, that they were being paid to borrow it. The South Sea and Mississippi bubbles had similar causes and effects.

    It is analogous to the EC Common Agricultural Policy. Farmers were always ready to plant wheat and grow grapes. When the EC raised demand to previously unknown levels, and made that demand certain, they grew more, and the result was wheat mountains and wine lakes.

    Bankers were the same. They were always ready to lend and borrow wildly. It just took the Fed to give them the money.

    So nationalizing the banks or the Fed or Fannie and Freddie will not in itself help. What is needed is an almost religious prohibition on governments creating credit bubbles. How you enforce this I do not know. But what I am sure of is that just putting everything into the government will not do it. Greed, ambition and stupidity exist just as much there, and it is as hard, or harder, to protect against them there as in the private sector.

  • Ray Ladbury // November 23, 2008 at 2:23 pm

    Michel, the thing you are ignoring is that the Fed was not the only source of credit. The Chinese were eager to lend money, and later the petro-kleptocracies were looking for places to stash their cash. One must ask when you think Greenspan should have raised interest rates. In 1998, during the Asian Tiger meltdown? We faced a real risk of deflation then, as indicated by the historic lows in the prices of gold, platinum and oil. After 9/11/2001? Nope. In 2003? Can you imagine the screams from the incumbent administration? In 2005, then? Maybe, but inflation at the time seemed tame, and Greenspan was probably reluctant to take actions that would impact his successor.

    Yes, prosperity makes us stupid. Sometimes we are prosperous nonetheless. It’s regulation that must limit our stupidity. The US is much less the engine of growth for the global economy than it was even a decade ago. We’ll have to get used to outside influences pumping money into our economy and just as quickly pulling it out.

  • TCO // November 23, 2008 at 3:18 pm

    The solution to a bubble is to let it pop. Most of the bubble was not because of government action OR inaction. It was just a market miscalculation. Let the bubble pop and let financiers take losses. This whole bailout is hurting more than helping. It will socialize the whole economy and then we will really have a depression.

  • Hank Roberts // November 23, 2008 at 5:20 pm

    TCO, eight years ago a protection in the law of most states and of the federal government was cancelled by a last-minute addition to a huge federal budget bill, passed in the lame-duck session after the 2000 election, before the new president was inaugurated.

    That removed a hundred years of smart law that had made “bucket shop” side bets on the market illegal — contracts with nothing deliverable, where anyone could put in a little money betting that someone else’s deal would go up or down, and maybe make a lot of money.

    These became illegal in the late 1800s because they were responsible for financial panics unrelated to real business and production.

    Eight years after those protections were deleted from the law, these bucket-shop side bets amounted to something like 60x the amount of money in the world — unregulated and unreported; nobody knew for sure who held what contract, to be triggered by what change.

    http://www.google.com/search?q=bet+blew+up+wall+street

    Mathematician Mark Chu-Carroll at Google summed it up well. His piece from his blog is reposted in context with other related stuff here:
    http://www.vcresearch.info/open/forums.asp?TopicId=12378&ForumId=119

    When you don’t notice someone changed the rules and tilted the field, you may imagine your sudden loss is due to fair play.

    ‘Tweren’t.

  • tamino // November 23, 2008 at 5:42 pm

    Since this thread is getting large, I’ve started open thread #8. Please continue discussions there.
