Open Mind

Open Thread #6

September 17, 2008 · 326 Comments

Because Open Thread #5 is getting full.

Categories: Global Warming

326 responses so far ↓

  • Dean P // September 17, 2008 at 11:38 pm

    dhogaza

    It all depends on why people move. If they move often and don’t seem to be advancing, then theres a real chance that they’re running from their failures.

    If, however, they move often and at each position gain more and more power, then there’s a clear signal of competence.

    Using your baseball analogy, imagine a Triple-A player that’s been on 12 different Triple-A teams? It wouldn’t surprise me if the consensus among general managers is that he’s not Major League material…

  • dhogaza // September 18, 2008 at 3:22 am

    Using your baseball analogy, imagine a Triple-A player that’s been on 12 different Triple-A teams? It wouldn’t surprise me if the consensus among general managers is that he’s not Major League material…

    TCO’s claim was “moving for promotion”. Moving amongst 12 different AAA teams is not analogous, they’re all at the same level. No promotion. It’s something one would do if they’re incapable of promotion. Actually, I suspect no such player exists, baseball gives up on minor league players before they have a chance to play for 12 different AAA teams.

    But, regardless, TCO didn’t say “moves parallel because he can’t advance”, but rather “moves for promotion”. Totally effing different.

    Thank you for playing.

  • Magnus W // September 18, 2008 at 5:41 am

    Of topic but I might as well ask here when I have the chance if you have time take a look at this weird global warming theory.

    [Response: Now I've seen it all. Is there a crack in every pot?]

  • chopbox // September 18, 2008 at 5:53 am

    An interesting post, Dr. Jolliffe, about your writing the two reviews for Nature. I am completely sympathetic to the tricks your memory has played on you; mine these days is moving from non-reliable to dangerous.
    About your original reviews, what’s done is done, of course, but I do wonder how things would have played out had you known (and written) then what you know now about the MBH98 short-segment centring PCA procedure.
    Would you consider taking a turn at it now? Speaking certainly of myself, and probably for many others, you certainly have an interested audience. Some might say it would be about 4.5 years too late, but I would disagree. Others might also say that science is self-correcting, but I think a more reasonable view of the world may be found in Burke’s quotation that all that is required for the triumph [of evil] is that good men do nothing. (Before I get jumped on here, I am NOT saying that anybody is evil. Burke was talking about evil; I think the quote works just as well in talking about bad science. That is, if it gets corrected at all, it is only through the efforts of the “good men (or women)” who do the correcting.)

  • Nick Barnes // September 18, 2008 at 7:53 am

    An open thread seems like a reasonable place to announce my Clear Climate Code project. Rewriting GISTEMP in Python, to make it clearer.

    [Response: A noble effort, and a lot of work.]

  • Spence_UK // September 18, 2008 at 8:12 am

    To george, from open thread #5

    McIntyre almost certainly did confuse decentered and uncentered in the paper that I referenced above.

    I originally just responded to the quote you gave, but looking at the paper the use of the words is not so distinct. McIntyre is not confused between the two though - he understands the difference between short-segment centring and no centring quite clearly. The confusion is in the terminology - but the source of this confusion is not McIntyre. If you look at McIntyre’s paper, the uncentred comment actually derives from a quote by Mann from the RealClimate blog. Of course, in the original paper Mann claimed that “conventional” PCA was applied, and then in his RealClimate response argued uncentred PCA was justified - neither of which (by the terminology adopted here) are correct. McIntyre uses the uncentred terminology in his response because he has to - he is answering this claim by Mann, but at least McIntyre tries to highlight the difference between uncentred and the decentred methods.

    I don’t see any confusion here in McIntyre’s comments. I do see that it could be confusing to a reader, mainly caused by the need to respond to criticisms by RealClimate that originally invoked the confusing terminology.

  • Deech56 // September 18, 2008 at 9:44 am

    Are we still talking about Dr. Mann? After he received his PhD he went to UVa for 6 years and then to Penn State. That’s one move, one promotion. If this is a big deal, I think we are in an alternative universe.

  • cougar // September 18, 2008 at 5:29 pm

    So we have SIX open threads now! That’s 10% MORE open threads than last year’s thread minimum. So threads are increasing now, not decreasing! There is NO open thread crisis. All those global thread-ists are going to have to sing another tune. Al Gore is a fraud. The economy is fundamentally sound.

    OK, I think that should set the proper tone of things going forward! You are entirely welcome.

    cougar
    act fast | decide fast

    [Response: I try to make a habit of closing the previous open thread to comments, when I open a new one. The purpose is to prevent any single thread from becoming excessively long.]

  • apolytongp // September 18, 2008 at 7:36 pm

    TCO (apolytongp) still needs “love”.

  • Atmoz // September 18, 2008 at 11:44 pm

    In the “new” Douglass and Christy “paper”, they claim that the UAH satellite temperature is more accurate than RSS. No real surprise there, but it once again ignores the fact that RSS and UAH temperature are almost exactly the same, except for a step change in 1992, and a cyclic artifact in recent years (for the global monthly data, probably due to changes in the UAH temps).

    I used PCA on the 5 major indices (GISS, HadCRUt3, UAH, RSS, NCDC), standardized so they all had a mean of zero and a variance of one.

    The first PC is obviously the global warming signal. The second PC picks up the bias between the 3 surface derived temperatures and the 2 satellite derived temperatures.

    The third, fourth, and fifth PCs (combined, and degenerate) seem to pick up the differences between the RSS and UAH temperaturs. Not being a real statistician, I don’t know the proper way (if any exists) to combine them in a meaningful manner.

    And if such a method exists, what can be said about the other 3 temperatures based upon that combined time series.

    I was expecting the first 2 PCs, but does the 3-5 actually mean anything significant?

  • Steve Bloom // September 19, 2008 at 12:01 am

    FYI, chopbox, “auditing” is not how science advances. Papers hardly ever get formally refuted, rather they more or less just fade away. Consider that time spent on refutation is time not spent on doing useful new work. I’m afraid this way of proceeding does tend to make science less interesting as a spectator sport.

    As Gerry North has commented on several occasions, it’s often the case that scientific reputations are made on being willing to take the conclusions of a paper a bit farther than other scientists might. The trick is doing that and getting it right as MBH did.

  • Jeff Id // September 19, 2008 at 12:24 am

    Ian,

    I waited a bit for the immediate comments to subside to thank you again for being completely indifferent to the pressures of each side of this discussion.

    Your uncolored and open remarks are always welcome to my ears. As you now realize, there is a much larger audience here than the numerous comments suggest.

    One thing the rest of us have learned for sure, Ian Jolliffe will tell us his mind whether or not it is expedient, easy or convenient. It doesn’t matter, for that he has earned my respect.

    [Response: He will also do so dispassionately and politely, even more reason he deserves our respect.]

  • Jeff Id // September 19, 2008 at 3:04 am

    Tamino,

    Nice little dig there.

    I enjoy your site, and I will accept lectures on statistics from you. You are clearly very knowledgeable. I can even accept the fact that my questions were too blunt for many in the comfortable academic world. However in reading your site and enduring some of your responses, you are quite unqualified to make points to me about passion! It is my hope that this is the last we’ll hear of it.

    [Response: You may not believe this, but I didn't have you or any of your comments in mind when I made my response. I have noted that Dr. Jolliffe is a model of dispassionate politeness, unafraid to speak his mind but never hostile or rude. That's a quality I admire, and he's better at it than I am. My only intention was to praise his demeanor.]

    As Dr. Jolliffe said, it was rather obvious that he was the reviewer anyway. From his comments to you, he struck me as the kind of person who wouldn’t worry so much about an old review and wouldn’t mind enlightening us about his current views. Looks like I was right.

    Another thing I was right about is the enormous interest level of your readers as to his opinions in your area of expertise, and how it relates to the endless hockey stick proxies we are currently forced to accept.

    After all, contradictions often lead to understanding.

    Let’s put this behind us and move on to more interesting matters.

    I call my blog noconsensus, not for the clear AGW meaning but rather for the more general implication that excessive agreement and group think often leads to false conclusion. I post on all kinds of science related issues, recently though I have been working on the latest hockey stick.

    Are you aware that Mann 08 used extrapolated proxies prior to calibration. They were extended using a RegEM method designed to extend the proxies according to the trends of more complete proxy series. These more complete series amounted to less than 5% of the total dataset? More than 60 other full length series (another 5%) were examined and scrapped without clear mention in his latest paper.

    I am curious as to what your thoughts are on infilling proxies with extrapolation algorithms prior to significance comparison?

    I would like to invite you to view my post on this subject. To me it seems like an absolutely unreasonable method, I however, am not an expert and am willing to listen. I think everyone could use some clear dispassionate enlightenment.

    http://noconsensus.wordpress.com/2008/09/18/the-all-important-blade-of-the-stick-uses-less-than-5-of-the-data/

  • Ian Jolliffe // September 19, 2008 at 10:52 am

    First, many thanks to those who have said such kind things about me. It encourages me to keep an eye on this site in case there are other things that I might helpfully comment on. However, if I overdo my ‘wise elder’ act and become arrogant, or simply boring, let me know and I’ll pipe down.

    In response to chopbox, I’ve said almost everything I can at present regarding short segment centring. I think there are three stages of understanding to go through in the evaluation of any new statistical techniques: understanding the mathematics behind the technique, understanding the technique in statistical terms and understanding how to interpret the results when the technique is applied to real data. I don’t believe that I have progressed in the last four years regarding the second or third of these so there is really nothing to add to my earlier comments. It may be that someone can provide a good explanation of the second stage, but I haven’t seen it yet. It also seems to me that because the second stage hasn’t been thought through properly, there are deficiencies in how the third stage has sometimes been presented, for example in comparing proportion of variation accounted for in short-segment centring with that from ‘ordinary’ PCA, which is like comparing apples with bananas.

    As an applied statistician, the second and third stages interest me more than the first, but it crucial to know about the first before going further. Also anyone with a mathematical background takes pleasure in elegant mathematics. For this stage I have learnt something recently mainly from the work of a co-worker, who has derived relationships between the results (eigenvalues, eigenvectors and the PCs themselves) for different types of centring. For various reasons, I won’t be going public with these just yet, but in due course we hope to publish something.

    Now for something completely different … in response to atmoz. I can’t say too much without more knowledge of the data, but my suspicion is that the example is a bit like that in Example 3.8.1 of my book (a shameless advert). The 5 variables are presumably all positively correlated so the first PC has coefficients of the same sign on all 5 variables. Given this, the mathematics of PCA means that all other PCs have coefficients with a mixture of positive and negative signs. Example 3.8.1 has this feature but also has subgroups of variables that are more highly correlated within groups than across groups and this leads to a simple structure of coefficients for all PCs. Is this what is happening here with the surface and satellite based measurements?

    One thing to remember about PCA is that it also works in reverse. The first PC is the linear combination of the original variables (standardised in your case) with maximum possible variance, but equally the last PC is the linear combination with minimum possible variance. If a pair of your 5 variables has much higher correlation than other pairs then the difference between them has a very small variance and I predict that PC5 is mainly a difference between those two variables. Sticking my neck out a bit further I predict that two of PC3-PC5 are mainly contrasts between the 3 surface derived temperatures and the other is mainly a difference between the 2 satellite measurements. This prediction may fail if all 3 eigenvalues are close together. If my prediction is right, it may worry you – are the PCs just mathematical artefacts? Not really - they appear because of the special correlation structure. With a different correlation structure the PCs will be different.

    Finally, I’m not sure why you’d want to combine PCs 3-5?

  • Boris // September 19, 2008 at 2:19 pm

    “I think everyone could use some clear dispassionate enlightenment.”

    Upon browsing your “no consensus,” blog I don’t see much promise of “enlightenment” on Mann 2008 considering how you misinform wrt the IPCC reports.

  • Pete // September 19, 2008 at 6:07 pm

    In the light of Ian Jolliffe’s last post, where does this leave MBH98? Is fatally flawed, and should be withdrawn? Is it still of interest but with caveats that the statistical analysis is invalid? Where do we go from here?

    [Response: Dr. Jolliffe has convinced me that applying decentered PCA invalidates the selection rules which are applied when choosing which PCs to include in one's model. But the "relevant" (hockey-stick shaped) PC would have been included anyway, applying valid selection rules to centered PCA. And the PCs which are omitted (because they're suppressed by the method rather than the statistics) don't seem to correlate with temperature in the calibration interval. Therefore it seems to me that the method is flawed, but the flaw has little or no impact on the final result.

    I would also agree with Dr. Jolliffe that the impact of "different centering" on PCA is not yet perfectly clear. There may be disadvantages -- or advantages -- yet to be discovered.]

  • apolytongp // September 19, 2008 at 7:43 pm

    Pete:

    Jolliffe is careful to comment on specific areas. He is also careful to comment and describe his level of knowledge and what he doesn’t know. You should NOT extend that to unsophisticated comments like “MBH98 is fatally flawed”. There is a lot more to that study than the short centering.

    This is not necessarily positive by the way. But neither is it nesesarily negative. Life is like that. We need to do more PCA and mulitple regression within our logical processes. Rather than always thinking single factor.

    It is reasonable to say that Jolliffe is concerned about some of the non-standard methods used by Mann and unwilling to endorse the Mann work until those are verified/calibrated. (Which they weren’t when Mike published…heck they weren’t even all disclosed.) Also that those defending the Mann work should be a bit less strident to do so and should not cite Jolliffe as backing Mann’s short-centering

  • apolytongp // September 19, 2008 at 7:55 pm

    Tammy:

    I would not nescessarily agree that the impact was negligeable. For one thing, Mike could not have touted the temperature signal as “the dominant mode of variation” and the “PC1″.

    Yes, his follow-on training method would still have gotten the same effect using the PC4. This is not nescessarily good that his method digs so hard for signal.

    For one thing, it becomes pretty clear that much of the network is detritus and the bcps are driving the sled. That could be good if bcps really can detect a “world climate pattern”. Of course, it does also raise the danger of data mining, etc.

    ————————

    In addition, the issues that came up with short-centering, should make us more wary of complicated, new methods that are not well pre-understood. Means that we need to look at all of MBH with a bit more wariness.

    Note: I think that McI often overstates his case. So you have to watch him, too.

  • Dave A // September 19, 2008 at 8:50 pm

    Steve Bloom,

    As Gerry North has commented on several occasions, it’s often the case that scientific reputations are made on being willing to take the conclusions of a paper a bit farther than other scientists might. The trick is doing that and getting it right as MBH did.

    So this was all about making reputations and not about real science in the real world. Is that what you are saying?

    BTW MBH did not get it right by any stretch of the imagination.

  • Dano // September 19, 2008 at 11:14 pm

    BTW MBH did not get it right by any stretch of the imagination.

    Boy, Abbott and Costello were the cat’s meow, weren’t they? And Ingrid Bergman, man, what a dame! The 1934 Yankees - why, they were the bee’s knees! And King Kong, gosh that picture show gave me the heebie jeebies, but it sure was swell!

    Best,

    D

  • dhogaza // September 19, 2008 at 11:27 pm

    BTW MBH did not get it right by any stretch of the imagination.

    If it weren’t for the fact that followup work shows hockey stick … hockey stick … hockey stick … you might have a point.

  • Gavin's Pussycat // September 20, 2008 at 6:44 am

    Tamino writes:

    Response: Dr. Jolliffe has convinced me that
    applying decentered PCA invalidates
    the selection rules which are applied
    when choosing which PCs to include in
    one’s model.

    Makes a lot of sense… but isn’t the validation testing — the RE/CE, i.e. R^2, “Variance Explained” testing — of the reconstruction, no matter how it was obtained, the proof of the pudding?

  • chopbox // September 20, 2008 at 6:45 am

    Thank you for your response, Dr. Jolliffe, and for taking the time to talk to us. I look forward to reading more of your comments.

  • mikep // September 20, 2008 at 11:08 am

    What follow up work do you mean? There are ten proxy reconstructions used in IPCC 2007. These tend to share in a small group of proxies, including in some cases the much discussed bristlecone pines. The use of updated data, available to but not included in IPCC 2007, for just three dendro sites used reverses the estimated medieval-modern temperature differential in 9 of these 10 studies. For full discussion see:

    http://www.climateaudit.org/pdf/mcintyre.2008.erice.pdf

  • dhogaza // September 20, 2008 at 1:50 pm

    Let’s just say … if your only exposure to the issues are climate audit, you may be less informed that you think.

    If McI’s work is such a slam-dunk debunking of the entire field of climate science, why isn’t he publishing?

  • Lazar // September 20, 2008 at 2:13 pm

    … the McI paper doesn’t even claim “reverses”. It claims an “impact” but does not quantify, qualify, or provide cites to supporting work.
    Sigh.

  • mikep // September 20, 2008 at 3:55 pm

    Read and learn. The link is to a conference presentation.

  • apolytongp // September 20, 2008 at 4:14 pm

    Burger is better.

  • Lazar // September 20, 2008 at 4:50 pm

    MikeP

    The link is to a conference presentation.

    What does that have to do with the price of fish?
    The paper does not support your claim re “reverses” [...] “in 9 of these 10 studies”.

  • apolytongp // September 20, 2008 at 6:06 pm

    Mike:

    Don’t tell Lazar to read the basics. He has. He’s one of the best amateur followers of the brouhaha. He’s like JohnV. Actually does math.

  • apolytongp // September 20, 2008 at 6:09 pm

    And I agree with Lazar that McI is both a sloppy thinker AND disingenuous in not quantifying impact of flaws in MBH. He’s more like a (very amateur) lawyer or a late night college debator, rather than a real Feynman style scientist. He’s trying to make the other side look bad. Not to clarify the complex.

    Mann is no saint either. He’s a young Turk ego scientist in a politicized field. In physics, they would crucify him for all the fun and games with statistics rather than really figuring things out. Then again, Mann dropped from physics into a softer field.

  • Dave A // September 20, 2008 at 8:01 pm

    Dhogaza

    “If it weren’t for the fact that followup work shows hockey stick … hockey stick … hockey stick … you might have a point.”

    The follow up work has generally been conducted by people who either have a link to MBH or use the same proxies so it is hardly surprising that they reinforce one anothers results.

    If there were problems with the original science, specifically the statistical methods used, then the supposed replication doesn’t actually count for much since the faulty reasoning has merely been continued.

  • dhogaza // September 20, 2008 at 9:04 pm

    Read and learn. The link is to a conference presentation.

    It can’t cover Mann’s latest paper, because McI describes Mann’s publication to be a “rude interruption” suffered as he was getting ready to leave for the Erice conference.

  • apolytongp // September 20, 2008 at 11:27 pm

    That’s a bit rich, that you cite stuff that came out a few weeks ago since it takes a while to pick apart complex work. look how long it took Tammy on the short centering.

  • Hank Roberts // September 21, 2008 at 12:47 am

    > the method is flawed, but the flaw has little or
    > no impact on the final result.

    This can be said for the early papers in most any area of science. That’s how science works. Methods improve over time with effort.

  • John Finn // September 21, 2008 at 12:59 am

    I’m a bit puzzled. There seems to be a general acceptance (on both sides) that the MBH hockey stick graph supports the case for AGW. But does it?

    The graph has a huge, unnatural-looking inflexion just after 1900. This sudden upturn appears to be “unprecedeted” in the previous 900 years (and possibly before that if other reconstructions are to be believed).

    So, according to MBH, there was a substantial global-scale climate shift in 1900 (or 1902 I guess). But what caused it? What can explain such an unprecedented event?

    It can’t have been due to CO2 because CO2 concentrations were only marginally above pre-industrial levels. The officially recorded 295ppm was simply a median value of ice core readings. In any case, we are constantly being told that there is a lag of several decades before the effect of increased ghgs is observed.

    We can only conclude, therefore, that the huge early 20th century increase is due to “natural variability”. In the absence of any other possible cause, the direct or indirect effects of the sun seem to be the most plausible explanation. But, whatever driver is responsible it appears to have been remained fairly constant (according to MBH) between 1000AD and 1900AD.

    I’d just like to point out here that there are a number of long-term temperature records from the NH and NONE of them exhibit the anomalous spike that is seen in MBH. What’s more any early 20th century warming seems to occur around 1910 or later. It’s tempting to think that the MBH uptick is simply an artifact of the statistical methd used. Note the MBH “normailisation period” is 1902-1980.

    Anyway, back to the unprecedented ~1900 climate shift (it surely warrants further study). If it was the sun then this surely explains the late 20th century warming. The 3 most intense solar cycles (19, 21 & 22) recorded all occurred in the second half of the 20th century.

    MBH has provided us with an explanation for all the warming in the past 100 years.

    [Response: Let's dispense with nonsense: your description "huge, unnatural-looking inflexion just after 1900" is exaggerration taken to the extreme. There are excursions of similar magnitude and duration in 1100, around 1350, and around 1450. The only "huge, unnatural-looking" feature is the blade of the hockey stick.

    The warming which did occur in the early 20th century has been discussed here many times. A substantial part of its cause is the lull in volcanic climate influence during that time. An additional cause may have been a slight increase in solar output, although some dispute that. In any case, the early 20th-century warming is not much different from what the reconstruction indicates has happened before on more than one occasion -- so there's no evidence to support a "dramatic shift" theory for either volcanism or solar output, or anything else for that matter.

    The one and only "huge, unnatural-looking" feature is the blade of the hockey stick]

  • L Miller // September 21, 2008 at 1:44 am

    “The follow up work has generally been conducted by people who either have a link to MBH”

    So it’s either a conspiracy?

    “or use the same proxies”

    So the proxies show a hockey stick, therefore new proxies are required?

    “If there were problems with the original science, specifically the statistical methods used, then the supposed replication doesn’t actually count for much since the faulty reasoning has merely been continued.”

    The beauty of truly independent reproduction (as opposed to “auditing”) is that there is little opportunity to repeat mistakes in methodology. If you can do a similar analysis from scratch and get the same result, then it’s very unlikely to an artifact of a mistake made along the way.

  • Jeff Id // September 21, 2008 at 2:46 am

    Are you guys aware that the latest Mann paper used proxies which he chopped short because of divergence and then pasted on other “high correlation” proxy data to 90 percent of the series????

    He then ran his correlation analysis to see if it was temperature!!!!!!

    Whether you agree with McI or not, this kind of science should be stopped.

    You can slam me all day, I don’t care. but look at the data, Mann posted it.

  • Hank Roberts // September 21, 2008 at 4:23 am

    Is a 30-year data set enough to say statistically whether there’s a change in trend?
    http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg

  • Hank Roberts // September 21, 2008 at 5:16 am

    Worth reading:
    http://scienceblogs.com/effectmeasure/2008/09/bpa_causation_and_scientific_r.php?utm_source=readerspicks&utm_medium=link

    —-excerpt follows—-
    … Our view is that there is no way to prove causation but many ways to demonstrate it. Unfortunately this subject quickly gets us into deep water and it can’t be done in a single post. Indeed, since many books have been written on this subject and there is no consensus, many posts won’t do the trick either. So we’ll settle for making a couple of points.

    The most important is that the question of causation and how to demonstrate it is not settled by philosophers of science. The only ones who think it’s settled are scientists and that’s because they aren’t experts in the subject. As one wag once said, expecting a scientist to understand scientific method is like expecting a fish to understand hydrodynamics. Scientist are experts in doing science. But they do not often understand exactly the logic of what they are doing.

    Consider the role of deductive reasoning, which most scientists take to be one of the hallmarks of scientific method. Yet its use is fairly restricted, mainly to constructing mathematical tools. Beyond that it has limited relevance because deductive reasoning requires something we don’t have in empirical science, absolute certainty. Here’s an example from the late ET Jaynes’s book, Probability Theory:…

    —-end of excerpt, click the link for full post—–

  • Philippe Chantreau // September 21, 2008 at 8:51 am

    Dave A, your statement does not seem to make sense. If the problem is with the method, as you say, then other studies using different methods should yield different results. If the problem is with the data, enlarging the range of proxies (as was done by a number of studies), should yield results more and more divergent of the original work, but that’s not the case. If the problem is with both data and method, then there should not be more convergence of results than what random noise would generate and that’s not the case either.

  • apolytongp // September 21, 2008 at 3:30 pm

    Steve McI is going on about a new Gaspe cedar series:

    http://www.climateaudit.org/?p=3731

    You have to read past some of the adjective-laden language. McI seems incapable of just having a topic sentence and then supporting sentences with facts within a paragraph. Instead he needs to beat you over the head with labels on every supporting item.

    If you look at the source, it’s pretty evident that this new series was not at treeline. Was selected for archeology of buildings, for representative lumber, not for temp limited specimens.

    http://www.grdh-dendro.com/fileadmin/user_upload/Rapport_GRDH_D6__ecran_.pdf

    Page 54 is especially good as it compares several cedar series. Evident that they are all different.

    ——————————–

    My first thought is that perhaps the difference is a result of not picking temperature sensitive trees. Interesting that McI does not label this as a possibility. Rather than all the sturm and drang and inferred conspiracies and the like.

  • Lazar // September 21, 2008 at 5:48 pm

    Decreasing snowpack and earlier Snow Driven Runoff in the breadbasket of the world under scenario A2… improved modelling of topography effecting estimates due to snow-albedo feedback.

    Rauscher, S. A., J. S. Pal, N. S. Diffenbaugh, and
    M. M. Benedetti (2008),
    Future changes in snowmelt-driven runoff timing over the western US,
    Geophys. Res. Lett., 35, L16703,
    doi:10.1029/2008GL034424.

    We use a high-resolution nested climate model to investigate future changes in snowmelt-driven runoff (SDR) over the western US. Comparison of modeled and observed daily runoff data reveals that the regional model captures the present-day timing and trends of SDR. Results from an A2 scenario simulation indicate that increases in seasonal temperature of approximately 3 deg C to 5 deg C resulting from increasing greenhouse gas concentrations could cause SDR to occur as much as two months earlier than present. These large changes result from an amplified snow-albedo feedback driven by the topographic complexity of the region, which is more accurately resolved in a high-resolution nested climate model.

    [...]

    For the 25th DQF (the Julian Day on which 25% of that year’s flow has occurred, analogous to
    the spring pulse onset of SDR), the largest changes of 70 days or more are projected to occur in the Sierra Nevada of California, the Cascades of Washington, and in the Bitterroot Range of northeastern Idaho and western Montana. Earlier timing of 20–40 days are projected in the eastern Rocky Mountains in Colorado, the Wasatch Range in northern Utah, and the Sangre de Cristo in southern Colorado and northern New Mexico.

    [...]

    temperature seems to be the dominant factor in
    determining changes in runoff, consistent with observations [Dettinger and Cayan, 1995]. Also, despite the increase in precipitation over the Northwest, accumulated snow decreases in the A2 simulation even at the highest elevations
    of the Cascades, in agreement with GCM simulations

    If anything the effects are likely underestimated…

    in many areas the model lags the observations, especially over northern Nevada, southern Utah, and southern Colorado. These biases can be attributed to a combination of factors which may be operating differently in different regions. First, the RF run displays a negative surface air temperature bias (compared to observations) and a positive precipitation bias during winter and spring (auxiliary material Figure S2), which will tend to increase model snowcover and delay melting.

    [...]

    Reduced snowpack and early SDR are likely to
    result in substantial modifications to the hydrologic cycle, including increased winter and spring flooding;

    … note that DJF runoff is predicted to increase alongside decreasing snowpack…

    changes in lake, stream, and wetland ecology; and reduced riverflow and natural (snow and soil) storage [Cayan et al., 2007]. For example, lower summer soil moisture could increase forest fire frequency and intensity [Westerling et al., 2006]. Moreover, water supplies for sectors including (but not limited to) agriculture [e.g., Purkey et al., 2008], energy
    [e.g., Markoff and Cullen, 2008; Vicuna et al., 2008], and recreational use [e.g., Hayhoe et al., 2004] could be severely affected, necessitating additional reservoirs and/or extended reservoir capacity. These changes to the hydrological
    cycle are likely to result in numerous societal and economic impacts that will pose serious challenges for water and land use management in the future.

  • Dave A // September 21, 2008 at 7:30 pm

    L Miller

    “. If you can do a similar analysis from scratch and get the same result, then it’s very unlikely to an artifact of a mistake made along the way.”

    You know as well as I do that they didn’t do a “similar analysis from scratch” but rather relied on MBH and built on it, and that many of them had links in various ways to MBH. So how much ‘independence’ was involved-very little.

    Phillipe Chantreau,

    The BPCs skewed MBH, along with its inappropriate statistical method, and the BCPs were used by the others, building on MBH’s work. Nobodywithin the climate community took a look at how MBH arrived at a hockey stick and it was outsiders, 5 years after MBH was published, who cried “foul”. Climate scientists just accepted MBH and used it in their own published work.

  • Steve Bloom // September 21, 2008 at 11:31 pm

    John Finn: “There seems to be a general acceptance (on both sides) that the MBH hockey stick graph supports the case for AGW.”

    Where does this zombie idea come from, and why does it keep getting repeated despite a multitude of statements (including on CA) that it’s wrong.

    Repeating: The correctness of MBH (in essence the relative amplitude of the MWP and LIA) has little or nothing to do with the correctness of climate disruption (nee AGW) theory. A significantly larger amplitude would mean much greater climate sensitivity than the middle of the IPCC range, but still within the upper limit (and would be really bad news for the future).

    Looking at this from a different direction, let’s say that the MWP and LIA really had the amplitude that seemed plausible 20 years ago (and that was reflected in the AR1 diagram that IIRC was based largely on Lamb’s prior work). Since those fluctuations didn’t involve a significant anthropogenic GHG forcing, we might reasonably conclude that the same natural forcings (volcanism and/or insolation changes) that did lead to them would have similar effects in the future. Taking that assumption, now throw anthropogenic GHGs into the mix. Is there a physical reason to expect that the anthropogenic effect would be other than additive? No.

    The focus on the hockey stick is mainly political, and has been from the start. The IPCC used it to impress the ignorant (and in particular to avoid wasting time answering the near-meaningless question of how common the late 20th-century temperature excursion was), and IMHO the subsequent non-scientific attacks on it have had a similar motivation.

  • John Finn // September 21, 2008 at 11:52 pm

    Tamino re: your repsonse to my post

    ” Response: Let’s dispense with nonsense: your description “huge, unnatural-looking inflexion just after 1900″ is exaggerration taken to the extreme. There are excursions of similar magnitude and duration in 1100, around 1350, and around 1450. ”

    Are there? Not in the original reconstruction there aren’t. But let’s have a look at the original data. You then say

    ” The only “huge, unnatural-looking” feature is the blade of the hockey stick. ”

    which begins in ~1900 … and is significantly enhanced by superimposing the thermometer record on the reconstruction. A case of comparing apples with pomegranates. Proxy data does not support the late 20th century warming. One of them is wrong. I think proxy data is unrelaible in that it dos not capture the natural variability of climate.

    ” The warming which did occur in the early 20th century has been discussed here many times. A substantial part of its cause is the lull in volcanic climate influence during that time. An additional cause may have been a slight increase in solar output, although some dispute that. In any case, the early 20th-century warming is not much different from what the reconstruction indicates has happened before on more than one occasion — so there’s no evidence to support a “dramatic shift” theory for either volcanism or solar output, or anything else for that matter. ”

    1. There is a dramatic shift as you have already acknowledged.
    2. The fact that you have discussed volcanism doesn’t mean it’s correct. Volcanos can and do cause cooling( e.g. Pinatubo, El Chichon etc) but volcanos of this magnitude are few and far between and, in any case, the cooling effect only lasts a year or two.
    3. Interesting aside about Pinatubo. Hansen claims it caused 0.5 deg cooling worldwide which, if correct, means that without Pinatubo 1992 or 1993 (not 1998) would have been the ‘warmest year on record’.
    4. We haven’t had a major volcano since 1991 yet the rate of warming has slowed (and possibly stopped) but CO2 concentrations have risen more since 1991 than they did in the 150 years between 1750 and 1900 (or between 1900 and 1958).
    5. I don’t pretend to have the answers but ocean circulation possibly driven vy soalr activity might be a good place to start.

    The one and only “huge, unnatural-looking” feature is the blade of the hockey stick]

  • John Finn // September 21, 2008 at 11:57 pm

    Sorry - I should have proof read my previous post.

  • David B. Benson // September 22, 2008 at 1:46 am

    John Finn // September 21, 2008 at 11:52 pm — You do understand that CO2 forcing is logarithm in the concentration?

  • Philippe Chantreau // September 22, 2008 at 1:58 am

    “Climate scientists just accepted MBH and used it in their own published work.”

    I am not sure that this applies to Moberg, Rutherford, and others over the past 5 years.

  • Jeff Id // September 22, 2008 at 4:15 am

    I can’t believe I got nuthin.

    No response whatsoever. Not even Gavin’s cat.

    What are your thoughts on first truncating and then infilling, pasting, tacking on data to the end of 90 percent of proxies prior to correlation. Data which directly contradicts actual measured data. Shweingruber was truncated to 1960 because of a ‘ divergence’ problem, then other less ‘divergent’ data was tacked on. Correlation was performed and low and behold more than 90% of the data correlated with temp!!!

    This is the latest Mann 08 paper, if you can’t agree with it then let the authors know it won’t be accepted!

  • L Miller // September 22, 2008 at 4:18 am

    “You know as well as I do that they didn’t do a “similar analysis from scratch” but rather relied on MBH and built on it, and that many of them had links in various ways to MBH”

    So point us to where they used the PCA technique in question.

    “The BPCs skewed MBH, along with its inappropriate statistical method, and the BCPs were used by the others, building on MBH’s work.”

    If you have a peer reviewed paper that shows the BCP do not produce a valid reconstitution then please provide a link. Otherwise it would seem you are just calling for new proxies because the existing ones don’t give the results you want to see.

  • Barton Paul Levenson // September 22, 2008 at 9:10 am

    John Finn writes:

    the rate of warming has slowed (and possibly stopped)

    No, it hasn’t.

    http://members.aol.com/bpl1960/Ball.html

    http://members.aol.com/bpl1960/Reber.html

  • Boris // September 22, 2008 at 2:01 pm

    Jeff,

    Perhaps you’d get some serious replies if you refrained from such statements as:

    “I can’t prove it yet but this now looks like deliberate manipulation of data to me!”

    (Of course, data is manipulated all the time, but I’m assuming you meant there was dishonest intent here.)

    Even in your post, you plot the proxies that show a blade.

    There is no MWP there. The bump you notice between 400-700 is not the MWP, which is (usually) defined as 800-1200, and, besides, is beyond the range for which Mann claims significance.

    So your claim that Mann deliberately fudged the data is belied by the fact that he didn’t need to to get pretty much the same result.

    As I noted before, your blog is a denialist exercise and I, for one, have no use in getting into a debate with you. You are welcome to the last word.

  • Jeff Id // September 22, 2008 at 5:43 pm

    Boris,

    My last statement above is not related to my blog, but rather to an obviously false manipulation of the data sets. Come on, the guy pasted a temperature curve on the end of his proxies to correct for divergence. This allowed 95 of 105 Schewingruber series to be accepted. He cut them back to 1960 and pasted a temp curve on the end using RegEM!!!!

    Also, I am not a denier I am a skeptic who strongly dislikes bad science. Heck, I don’t have enough experience in climatology modeling (as opposed to paleoclimatology reconstruction) to be a denier. But if you cannot reject obviously bad science, what credibility do you have in defending the rest.

    There is no question that the way the data was processed in this paper leaves a lot of room for manipulation, intentional or otherwise.

    I also have no idea what you mean by your MWP reference. I just plotted the data. The series I showed in the graph from your link were used to paste on the most recent years to a bunch of data. The MWP had no effect on it and is irrelevant to my point.

    A point which also should be yours and everyone else on this blog! You can’t paste on data to the end of proxies prior to correlation!

  • HankRoberts // September 22, 2008 at 6:16 pm

    Increasing Antarctic sea ice under warming atmospheric and oceanic conditions

    Author(s): Zhang JL
    Source: JOURNAL OF CLIMATE Volume: 20 Issue: 11 Pages: 2515-2529 Published: JUN 1 2007
    Times Cited: 1 References: 34

    Abstract:

    Estimates of sea ice extent based on satellite observations show an increasing Antarctic sea ice cover from 1979 to 2004 even though in situ observations show a prevailing warming trend in both the atmosphere and the ocean. This riddle is explored here using a global multicategory thickness and enthalpy distribution sea ice model coupled to an ocean model. Forced by the NCEP-NCAR reanalysis data, the model simulates an increase of 0.20 x 10(12) m(3) yr(-1) (1.0% yr(-1)) in total Antarctic sea ice volume and 0.084 x 10(12) m(2) yr(-1) (0.6% yr(-1)) in sea ice extent from 1979 to 2004 when the satellite observations show an increase of 0.027 x 10(12) m(2) yr(-1) (0.2% yr(-1)) in sea ice extent during the same period. The model shows that an increase in surface air temperature and downward longwave radiation results in an increase in the upper-ocean temperature and a decrease in sea ice growth, leading to a decrease in salt rejection from ice, in the upper-ocean salinity, and in the upper-ocean density. The reduced salt rejection and upper-ocean density and the enhanced thermohaline stratification tend to suppress convective overturning, leading to a decrease in the upward ocean heat transport and the ocean heat flux available to melt sea ice. The ice melting from ocean heat flux decreases faster than the ice growth does in the weakly stratified Southern Ocean, leading to an increase in the net ice production and hence an increase in ice mass. This mechanism is the main reason why the Antarctic sea ice has increased in spite of warming conditions both above and below during the period 1979-2004 and the extended period 1948-2004.

    IDS Number: 177NH
    ISSN: 0894-8755
    DOI: 10.1175/JCLI4136.1

  • HankRoberts // September 22, 2008 at 6:20 pm

    Rapid freshening of Antarctic Bottom Water formed in the Indian and Pacific oceans

    Author(s): Rintoul SR
    Source: GEOPHYSICAL RESEARCH LETTERS Volume: 34 Issue: 6 Article Number: L06606 MAR 24 2007
    Times Cited: 6 References: 31

    Abstract:
    Repeat hydrographic sections occupied in 1995 and 2005 reveal a rapid decline in the salinity and density of Antarctic Bottom Water throughout the Australian Antarctic Basin. The basin-wide shift of the deep potential temperature-salinity (theta - S) relationship reflects freshening of both the Indian and Pacific sources of Antarctic Bottom Water. The theta - S curves diverge for waters cooler than - 0.1 degrees C, corresponding to a layer up to 1000 m thick over the Antarctic continental slope and rise. Changes over the last decade are in the same direction but more rapid than those observed between the late 1960s and the 1990s. When combined with recent observations of similar freshening of North Atlantic Deep Water, these results demonstrate that dense water formed in both hemispheres is freshening in response to changes in the high latitude freshwater balance and rapidly transmitting the signature of changes in surface climate into the deep ocean.

    IDS Number: 149ZT
    ISSN: 0094-8276
    DOI: 10.1029/2006GL028550

  • HankRoberts // September 22, 2008 at 6:24 pm

    A modified method for detecting incipient bifurcations in a dynamical system

    Author(s): Livina VN, Lenton TM
    Source: GEOPHYSICAL RESEARCH LETTERS Volume: 34 Issue: 3 Article Number: L03712 FEB 15 2007
    Times Cited: 0 References: 28

    Abstract:
    We assess the proximity of a system to a bifurcation point using a degenerate fingerprinting method that estimates the declining decay rate of fluctuations in a time series as an indicator of approaching a critical state. The method is modified by employing Detrended Fluctuation Analysis (DFA) which improves the estimation of short-term decay, especially in climate records which generally possess power-law correlations. When the modified method is applied to GENIE-1 model output that simulates collapse of the Atlantic thermohaline circulation, the bifurcation point is correctly anticipated. In Greenland ice core paleotemperature data, for which the conventional degenerate fingerprinting is not applicable due to the short length of the series, the modified method detects the transition from glacial to interglacial conditions. The technique could in principle be used to anticipate future bifurcations in the climate system, but this will require high-resolution time series of the relevant data.

    IDS Number: 138KG
    ISSN: 0094-8276
    DOI: 10.1029/2006GL028672

  • HankRoberts // September 22, 2008 at 6:27 pm

    Understanding public complacency about climate change: adults’ mental models of climate change violate conservation of matter

    Author(s): Sterman, John D., Booth Sweeney, Linda
    Source: CLIMATIC CHANGE Vol. 80 Issue: 3-4 Pages: 213-238 FEB 2007
    Times Cited: 4 References: 66

    Abstract:

    … most Americans believe climate change poses serious risks but also that reductions in greenhouse gas (GHG) emissions sufficient to stabilize atmospheric GHG concentrations can be deferred until there is greater evidence that climate change is harmful. US policymakers likewise argue it is prudent to wait and see whether climate change will cause substantial economic harm before undertaking policies to reduce emissions.

    Such wait-and-see policies erroneously presume climate change can be reversed quickly should harm become evident, underestimating substantial delays in the climate’s response to anthropogenic forcing.

    We report experiments with highly educated adults - graduate students at MIT - showing widespread misunderstanding of the fundamental stock and flow relationships, including mass balance principles, that lead to long response delays.

    GHG emissions are now about twice the rate of GHG removal from the atmosphere. …most subjects believe atmospheric GHG concentrations can be stabilized while emissions into the atmosphere continuously exceed the removal of GHGs from it.

    These beliefs … violate conservation of matter.

    Low public support for mitigation policies may arise from misconceptions of climate dynamics rather than high discount rates or uncertainty about the impact of climate change. …

    IDS Number: 130MI
    ISSN: 0165-0009
    DOI: 10.1007/s10584-006-9107-5

  • Paul Middents // September 22, 2008 at 7:31 pm

    JeffID,

    Eschew the exclamation point. Even single ones cause me to question the content. Multiple ones essentially eliminate any possibility of pursuing your point.

  • dhogaza // September 22, 2008 at 9:21 pm

    The problem is, Jeff, we have no reason to trust anything said by McI or his acolytes. If Mann were really guilty of academic fraud, as McI so repetively insinuates and as you insinuate here, the scientific community would ferret it out.

  • Lazar // September 22, 2008 at 10:54 pm

    Jeff Id

    I can’t believe I got nuthin.

    No response whatsoever.

    People are busy. A thorough assessment of your claims requires a large investment of time. Unless a person’s interest is picking apart methodology, there are more productive ways of learning about climate and moving the science forward.

    Y0u might consider contacting the authors with your concerns. You could try submitting a comment to the journal if you find that their response is wanting.

    A few notes on your claims…

    He cut them back to 1960 and pasted a temp curve on the end using RegEM

    Note that the SI says…

    Dendroclimatic data included a tree ring network of 105 maximum latewood density (“MXD”) gridbox (5° latitude by 5° longitude) tree-ring composite series (Briffa et al, 1998;2001; Rutherford et al, 2005)

    [...]

    MXD data were eliminated for the post-1960 interval. The RegEM algorithm of Schneider (2001) was used to estimate missing values for proxy series terminating prior to the 1995 calibration interval endpoint, based on their mutual covariance with the other available proxy data over the full 1850-1995 calibration interval. No instrumental or historical (i.e., Luterbacher et al) data were used in this procedure.

    another claim…

    Correlation was performed and low and behold more than 90% of the data correlated with temp

    Two calibration periods were used, one covering 1850-1949, the later covering 1896-1995. The basic result of Mann et al. 08 is shown to be insensitive to the calibration period used as well as to the presence of tree-ring data.

  • Steve Bloom // September 23, 2008 at 3:13 am

    OT, but this seemed like a good place to post it:

    Santer, B. D., P. W. Thorne, L. Haimberger, K. E. Taylor, T. M. L. Wigley, J. R. Lanzante, S. Solomon, M. Free, P. J. Gleckler, P. D. Jones, T. R. Karl, S. A. Klein, C. Mears, D. Nychka, G. A. Schmidt, S. C. Sherwood, and F. J. Wentz, submitted:

    Consistency of modelled and observed temperature trends in the tropical troposphere.

    International Journal of Climatology. 3/08.

    Abstract: “A recent report of the U.S. Climate Change Science Program (CCSP) identified a “potentially serious inconsistency” between modelled and observed trends in tropical lapse rates (Karl et al., 2006). Early versions of satellite and radiosonde datasets suggested that the tropical surface had warmed by more than the troposphere, while climate models consistently showed tropospheric amplification of surface warming in response to human-caused increases in well-mixed greenhouse gases. We revisit such comparisons here using new observational estimates of surface and tropospheric temperature changes. We find that there is no longer a serious and ubiquitous discrepancy between modelled and observed trends in tropical lapse rates.

    “This emerging reconciliation of models and observations has two primary explanations. First, because of changes in the treatment of buoy and satellite information, new surface temperature datasets yield slightly reduced tropical warming relative to earlier versions. Second, recently-developed satellite and radiosonde datasets now show larger warming of the tropical lower troposphere. In the case of a new satellite dataset from Remote Sensing Systems (RSS), enhanced warming is due to an improved procedure of adjusting for intersatellite biases. When the RSS-derived tropospheric temperature trend is compared with four different observed estimates of surface temperature change, the surface warming is invariably amplified in the tropical troposphere, consistent with model results. Even if we use data from a second satellite dataset with smaller tropospheric warming than in RSS, observed tropical lapse rates are not significantly different from those in all model simulations.

    “Our results contradict a recent claim that all simulated temperature trends in the tropical troposphere and in tropical lapse rates are inconsistent with observations. This claim was based on use of older radiosonde and satellite datasets, and on two methodological errors: the application of an inappropriate statistical “consistency test”, and the neglect of observational and model trend uncertainties introduced by interannual climate variability. ”

    Unfortunately the full text isn’t available, but with Karl and Solomon as co-auths this looks to me to be the official NOAA shiv to S+C’s vitals (although see below). This is what the latter get for not working and playing well with others. Could there be a funding cut in their near future?

    The new RSS dataset (v. 3.2) that seems likely to be the one discussed in this paper was posted on their site with no fanfare whatsoever (that I can recall) within the last couple of months, along with a separate in-press paper (full text here) analyzing it. Per a note from this second paper:

    “This paper does not describe uncertainty estimates in these datasets, or the lower tropospheric temperature (TLT) datasets constructed by extrapolating MSU2 and AMSU5 lower in the atmosphere (Christy et al., 2003; Mears and Wentz, 2005; Spencer and Christy, 1992). These topics will be addressed in upcoming papers.”

    So perhaps there’s another truck or two headed S+C’s way.

    For those who aren’t already aware of it, RSS maintains a very nice page discussing all things A/MSU.

  • Steve Bloom // September 23, 2008 at 3:16 am

    Just to add that that someone who was really interested in fraud would find a close look at S+C rather more fruitful, yet somehow the auditors find that boring.

  • apolytongp // September 23, 2008 at 3:44 am

    The hoi polloi are kvetching that treeline species are not temp limited now. Funny that the don’t realize the circularity in disproving an opponent’s point while assuming that they have alreadyt done so. But then again, I’ve seen Willis stupid before. Still remember how he tought dividing periods of observation made confidence limits get tighter.

  • Jeff Id // September 23, 2008 at 6:23 am

    Lazar,

    So what you’re implying is that it seems reasonable to you to cut off the ends of divergent temperature series which Mann states was done due to a “lack of sensitivity” and paste on other data so that a highly sensitive correlation could be performed?

    “Because of the evidence for loss of
    temperature sensitivity after 1960 (1), MXD data were eliminated
    for the post-1960 interval.”

    Let’s translate, “Because the up-to-date data I have doesn’t fit the conclusion it was eliminated”

    He then filled in the actual tree data with his own improved version.

    If you believe it didn’t have an effect as he stated in the paper then why would he waste his time filling in over 90% of the series?

  • andy // September 23, 2008 at 11:05 am

    dhogaza; you can yourself check the original sources used by Mann. Eg the finnish lake sediment study clearly states that the sediments were contaminated after ~1750, and even polluted in the 1900’s due the waste water dumping. These sediment series are used four times, and consist 4 of 15 non-tree ring series used in validation. I’ve tried to ask on RealClimate this, but didn’t get through, maybe better luck here: How on earth the waste water amounts of 1900’s is expected to be correlating with MWP temperature reconstructions? I’d suppose this kind of questions could be handled with somehow lghter process than the peer reviewed journal articles. RC is anyway answering or commenting even sillier questions than the above one, and Mann is named as one of the editors of RC, so he could very easy to answer these questions, I suppose.

  • dhogaza // September 23, 2008 at 12:34 pm

    In other words, the fact that some of the potential proxy datasets are incomplete, and end in 1995.

    This leaves Mann with two choices: give up, or work over such datasets and be accused of academic fraud.

    Nice world we live in, thanks to McI and friends.

  • Lazar // September 23, 2008 at 3:49 pm

    Jeff Id

    So what you’re implying is that it seems reasonable to you to cut off the ends of divergent temperature series

    No, I am not implying anything since I have not looked at the raw data nor the papers by Briffa and Rutherford. In general, when faced with data quality issues there are choices between discarding the data, including the data as-is, and including the data with adjustments. There is no hard-and-fast and generally applicable rule. Scientists make the decisions, let it be known what they have done, and let others adjust their opinions accordingly.

    why would he waste his time filling in over 90% of the series

    Before accepting that premise I would need to look at the data and the code, modify the code to run on Octave, and read-up on RegEm. If the permise is accepted, then even more work in thinking about what it all means… which series, by how much, and with what? Then sensitivity analysis… what are the effects of doing things differently? It’s too much. It is not productive work for me personally. I’m 40 miles away from the nearest university library. I’m just a beginner finding my way re statistics and climate science. The point is that it is better for you to raise your concerns with the authors, then consider publishing a comment.

  • Jeff Id // September 23, 2008 at 5:24 pm

    dhogaza,

    You’re missing the point. It isn’t that the sets were incomplete. They were complete but they didn’t match temperature due to a “lack of sensitivity. ”

    So he cut off the inconvenient insensitive data and provided scaling and calibration based on infilled data!

    This is entirely different from your suggestion that he just innocently filled in the missing series (which would also be faulty).

    Can you imagine the effect that scaling a proxy of high correlation data vs the same graph with actual low correlation data through EIV. This makes a huge difference and clearly wouldn’t provide correct scaling for the historic proxy data.

    The big study Lazar suggests would be interesting but on the surface, doesn’t this strike you more like statistical painting than science?

  • wolf // September 23, 2008 at 5:34 pm

    “Therefore it seems to me that the method is flawed, but the flaw has little or no impact on the final result.”

    It seems to me that if the method is flawed - no conclusion can be drawn at all on the final result.

  • HankRoberts // September 23, 2008 at 7:11 pm

    > raise your concerns with the authors,
    > then consider publishing a comment.

    Depends on what he wants. Doing what you suggest would potentially improve the science.

    Not all commenters want to improve the science. Some want to choke it with confusion. Those people post widely on blogs. By their works, and all that.

  • John Finn // September 23, 2008 at 9:04 pm

    David B. Benson // September 22, 2008 at 1:46 am

    John Finn // September 21, 2008 at 11:52 pm — You do understand that CO2 forcing is logarithm in the concentration?

    Yes I do, David. What’s your point?

  • Lazar // September 23, 2008 at 9:46 pm

    Steve Bloom,

    Our results contradict a recent claim that all simulated temperature trends in the tropical troposphere and in tropical lapse rates are inconsistent with observations. This claim was based on use of older radiosonde and satellite datasets, and on two methodological errors: the application of an inappropriate statistical “consistency test”, and the neglect of observational and model trend uncertainties introduced by interannual climate variability

    It must be Douglass, Singer, Pearson and Christy 2007.

    Radiosondes…

    Toward Elimination of the Warm Bias in Historic Radiosonde Temperature Records—Some New Results from a Comprehensive Intercomparison of Upper-Air Data
    Leopold Haimberger, Christina Tavolato, and Stefan Sperka
    Journal of Climate
    Volume 21, Issue 18 (September 2008)
    DOI: 10.1175/2008JCLI1929.1

    Both of the new adjusted radiosonde time series are in better agreement with satellite data than comparable published radiosonde datasets, not only for zonal means but also at most single stations. A robust warming maximum of 0.2–0.3K (10 yr)−1 for the 1979–2006 period in the tropical upper troposphere could be found in both homogenized radiosonde datasets. The maximum is consistent with mean temperatures of a thick layer in the upper troposphere and upper stratosphere (TS), derived from M3U3 radiances. Inferred from these results is that it is possible to detect and remove most of the mean warm bias from the radiosonde records, and thus most of the trend discrepancy compared to MSU LS and TS temperature products.

  • Jeff Id // September 23, 2008 at 9:53 pm

    I did try to publish comments at real climate. They are deleted before they get published.

    I expected some of the experts here to have deeper thought about the problems these methods would create. I see now that it’s not to be.

    I am working on other ways to improve the science which are more subtle and interesting. Hacking away at one paper doesn’t hold much interest for me beyond the obvious flaws in methodology. Flaws which should never be accepted in science.

    Anyway, I give up. You guys win, congrats.

  • John Finn // September 23, 2008 at 9:55 pm

    Never mind, David. I know what point you’re trying to make. However, I know from reading these blogs that you are pretty well informed so I’m guessing you’ve already calculated the relative forcings for the stated periods and found that the forcing for the 1991-to date period is greater than for both 1750-1900 and 1900-1958 periods.

    So we have greater forcing - no volcanos - a positive PDO - 6 years of El Nino type conditions (+ve ONI throughout) and BPL has to drag up the one data set which indicates warming.

    Incidentally, Barton, old bean, I notice you’re still posting those 20 (19, 18, …) year trends which show significant warming for each of the starting years between 1988 and 1994 to date.

    You do know that the Pinatubo eruption occurred in 1991 which would have had an impact on any trend which uses a starting year before ~1994 and would be particularly influential from ~1988 onwards, i.e. the very years that show a significant trend.

  • apolytongp // September 23, 2008 at 10:52 pm

    Steve and his hoi polloi are busy trying to engage on the key issue of temp limited site now. I wonder why they didn’t bother at the beginning of the discussion. They have an annoying tendancy to argue, not against the best characterization of their opponents, but one from their own mind with lots of assumptions of already having proved the argument that itself is in debate.

    And AHA, we hear that the Gaspe site was already characterized as treeline.

    I am constantly amazed by the studity and tediousness of my side.

  • apolytongp // September 23, 2008 at 10:54 pm

    Of course, they don’t agree that it is tree line. But simply that one source called it treeline…brings the issue into much more debate. And when contrasted with the Ile de la Cite cedars that definitely are NOT tree line…it just brings up another area where these guys fail to think multi factor.

    The only thing that occasionaly makes me feel better is the stupidity and dishonesty from the believer side.

  • apolytongp // September 24, 2008 at 1:14 am

    He’s got another thread up:

    http://www.climateaudit.org/?p=3821

    a. Varies choices of correlation test, but does not do a full factorial.

    b. Has cutesy terms like “keno pick two” that are annoying to read through and probably there for his little nitwit cheering section.

  • Phil Scadden // September 24, 2008 at 2:01 am

    Unless your message was also full of invective, the most likely reason for something to be rejected by Real climate is that it ran foul of their spam filters. This gets a lot of complaints. Check you text for any obvious spam red lighters, modify spelling, try again.

  • David B. Benson // September 24, 2008 at 2:11 am

    John Finn // September 23, 2008 at 9:55 pm — Actually, I’ve only done two, using 1850 CE as a base date. One was until 1958 CE, the beginning of the Keeling curve; the other was 2007 CE. In both, using 3 K as climate sensitivity of which 60% is ‘immediate’, I obtained good-enough agreement with decadal averages from HadCRUTv3.

    A bit crude, but illustrates that fairly decent approximations do not require a GCM. Also practice in using the formula in

    http://forecast.uchicago.edu/samples.html

  • dhogaza // September 24, 2008 at 3:29 am

    You’re missing the point. It isn’t that the sets were incomplete. They were complete but they didn’t match temperature due to a “lack of sensitivity. ”

    And you have proof that his claim is wrong?

    Post it, please.

  • dhogaza // September 24, 2008 at 3:31 am

    P.S. I know enough about how science works to know that a claim of “lack of sensitivity”, if simply made to jack around data to fit a preconception, ain’t going to be missed by reviewers. PNAS ain’t E&E, no matter how strongly you wish this to be true.

  • Hank Roberts // September 24, 2008 at 6:36 am

    > Jeff Id // September 23, 2008 at 9:53 pm
    > I did try to publish comments at real climate.

    Try comparing the websites and blogs of scientists who have published in the field to your own. It may help sort out how to draft a letter comment to the journal that published the paper you’re criticizing, without appearing biased.

  • John Finn // September 24, 2008 at 9:56 am

    David B

    Why is 60% immediate? why not 80% or 40% or 100% for that matter.

    This is just curve fitting. You have a fixed parameter, i.e. 3K per doubling and everything else is tweaked to get as close to that target as possible.

    Even then the 1940-75 period is a problem. Of course this will be due to a convenient sprinkling of aerosols - despite logic (and the cooling pattern) suggesting this is nonsense.

    On the 60% point . When does the other 40% come into play. Let’s say we stabilise CO2 concentrations at to-days levels i.e. ~385 ppm when will we see the full effect of the resultant forcing. By “full” effect I really mean “almost all (95%+)”.

    Those who support AGW are quite happy to toss in all sorts of variables to help explain the CO2/temperature link. To be fair this applies to some sceptics as well. But look a Tamino’s response to my first post where he says

    “A substantial part of its cause is the lull in volcanic climate influence during that time. An additional cause may have been a slight increase in solar ”

    So take a few less volcanos, a little bit of solar (not too much we wouldn’t want to give the impressions the sun was a main driver); stir for a few decades then add a large helping of aerosols all the while seasoning the mix with regular additions of CO2 and there you have it.

    This pretty much sums up the IPCC “detection and attribution” research, but no-one wants to acknowledge that the total lack of certainty surrounding the effect (if any) of all the variables (includng solar).

  • Boris // September 24, 2008 at 12:46 pm

    “You do know that the Pinatubo eruption occurred in 1991 which would have had an impact…”

    So, you accept that anthro radiative forcing can be masked by other factors in 1991 but not 2008? Fascinating.

  • Hank Roberts // September 24, 2008 at 2:53 pm

    Sigh. No cites, no sources, no science, just more shovelfuls of warm steaming reprocessed opinion that the science can’t possibly be so inconvenient so it has to be wrong.

  • Bob North // September 24, 2008 at 3:16 pm

    Boris - I think the question is what (if anything), has masked anthropogenic radiative forcing over that past few years? In other words, in face of ever increasing GHG forcings and the lack of any major volcanic eruptions, why has temperature appeared to be flat over the past several years. Is it PDO changes? Is its ENSO? Is the energy all going into the oceans and/or melting of polar ice? Is it just unexplained interannual variability (aka “noise”)?

    Your comment suggests that you think something might be masking the forcing. What is it?

    regards,
    Bob North

  • t_p_hamilton // September 24, 2008 at 4:11 pm

    Bob North:

    ” Boris - I think the question is what (if anything), has masked anthropogenic radiative forcing over that past few years? In other words, in face of ever increasing GHG forcings and the lack of any major volcanic eruptions, why has temperature appeared to be flat over the past several years. Is it PDO changes? Is its ENSO? Is the energy all going into the oceans and/or melting of polar ice? Is it just unexplained interannual variability (aka “noise”)?”

    It is noise. Up to 1990 tropospheric aerosols countered the CO2 effect. I can’t recommend this figure highly enough: http://data.giss.nasa.gov/modelforce/

  • t_p_hamilton // September 24, 2008 at 4:54 pm

    Jeff Id might actually get a response to his comments on Mann’s latest paper if he would actually quote the text that he says he has problems with, and make careful arguments.

    His website looks like crank science - poorly argued and hence not worth the time to debunk.

  • Lazar // September 24, 2008 at 6:23 pm

    The increasing intensity of the strongest tropical cyclones
    James B. Elsner, James P. Kossin & Thomas H. Jagger
    Nature 455, 92-95(4 September 2008)
    doi:10.1038/nature07234

    A follow-up to Kossin et al. 2007 (”A globally consistent reanalysis of hurricane variability and trends”, GRL).

    Cyclone intensities for the period 1981-2006 are reconstructed using satellite observations of brightness (UW/NCDC).

    Whereas this paper estimates annual trends in the wind speed of each quantile, Kossin et al. 2007 focussed on the frequency of cyclones of maximum wind-speeds 2-sigma or more above the mean of the observation period. The other main difference is this paper uses a weaker alpha=0.1 significance level.

    Figure 1b. shows the results for the global average. Above the median, trends in wind speed are shown to be a) positive and b) increasing monotonically by quantile. a) and b) are predicted by theory; the maximum potential intensity increases with increasing SSTs.

    Constraints on significance seem to be less observation error (Kossin et al. 2007 shows excellent agreement between UW/NCDC satellite data and “best track” data despite systematic biases), and more due to internal variability and sample size, which naturally diminishes for upper quantiles.

    Regional differences are shown in Figure 2 (a-f) for each of the six ocean basins studied. Note that trends in the South Pacific Ocean appear to be negative.

    Although not significant, regression against global SST shows the linear trend for the 0.9 quantile to be +6.5 m/s/deg C… roughly a 15 mph increase in wind speed for a 1 deg C rise in SST.

    From the abstract:

    We find significant upward trends for wind speed quantiles above the 70th percentile, with trends as high as 0.36 +/- 0.09 m s-1 yr-1 (s.e.) for the strongest cyclones. We note separate upward trends in the estimated lifetime-maximum wind speeds of the very strongest tropical cyclones (99th percentile) over each ocean basin, with the largest increase at this quantile occurring over the North Atlantic, although not all basins show statistically
    significant increases. Our results are qualitatively consistent with the hypothesis that as the seas warm, the ocean has more energy to
    convert to tropical cyclone wind.

  • HankRoberts // September 24, 2008 at 6:37 pm

    One for Dr. Joliffe — you can’t do anything about this sort of statement attributed to your paper, but so you’re aware that this kind of person is using your name:
    http://dotearth.blogs.nytimes.com/2008/09/10/small-car-house-is-beautiful/#comment-34168

  • dhogaza // September 24, 2008 at 6:46 pm

    This is just curve fitting. You have a fixed parameter, i.e. 3K per doubling and everything else is tweaked to get as close to that target as possible.

    Except, well, no, that’s now how the models are built.

  • dhogaza // September 24, 2008 at 6:48 pm

    *not* how …

  • David B. Benson // September 24, 2008 at 9:02 pm

    John Finn // September 24, 2008 at 9:56 am — Thee 60% comes from a GCM study in a paper by Reito Knutti et al. In the graph of the global warming due to a doubling of CO2, one sees that about 60% of the warming occurs within 7 years, i.e., ‘immediately.

    The equilibrium climate sensitivity of 3 K is the IPCC AR4 most likely value. So in neither case did I choose parameters to fit the HadCRUTv3 decadal averages; I choose 3 K and 60% and then went to do my two calculations.

    The remainder of the warming is the warming of the oceans, which eventually warms the air a bit, which warms the ocean a bit more … This process takes centuries, maybe a dozen for your 95%.

    Over the last 140+ years, the effects of all the other variables just happen to closely cancel each other out; the result is that using the formula for CO2 alone gives a fairly good answer for the warming. This simplified approach won’t work for a mere 35 year period.

  • David B. Benson // September 24, 2008 at 9:10 pm

    Oh, I forgot to mention that James Annan, who ought to know (to put it mildly) agrees that 60% immediate is about right.

  • Michael Hauber // September 24, 2008 at 10:59 pm

    Been looking at UAH temps recently and noticed that 900hp temps have trended up quite significantly in the last 8 years. Stratospheric temps (<100hp) have trended down, both trends being what you’d expect from CO2.

    So if something is masking the CO2 trend, it seems to be masking at the surface level, but not higher up in the atmosphere. This to me would rule out the sun as a significant factor, and implicate ocean temperature variations. Although even then I’d be surprised that ocean factors could affect surface temperature trends strongly, but not affect 900hp temperature trends (gut feeling based on the idea that 9oohp is in the lower level of our cloud systems, so coupled strongly on a day to day basis with the surface). So maybe the difference between surface and 900hp is more random variation than something significant.

  • Dave Rado // September 24, 2008 at 11:01 pm

    Off topic, but as there isn’t a topic I presume that’s okay. I’m confused about CO2 atmospheric residence times and hoped someone here could clarify.

    First of all, I understand that the residence time for an individual CO2 molecule is thought to be around 10 years or less, due to molecules being absorbed by natural carbon sinks, but that this has nothing to do with the atmospheric residence time of CO2, because almost all of the CO2 that is absorbed by carbon sinks is balanced by CO2 being emitted by natural carbon sources. Some disinformation sites seem to intentionally confuse average residence times of an individual CO2 molecule with the atmospheric residence time of CO2. However:

    Press reports frequently quote the atmospheric residence time of CO2 as being 100 years on average - e.g. http://tinyurl.com/45wsfl .

    The IPCC TAR quotes a figure of “5 to 200 years” (without giving an average) at http://tinyurl.com/ar7gl.

    Wikipedia, on the other hand, states at http://tinyurl.com/3jnyxr that “Recent work indicates that recovery from a large input of atmospheric CO2 from burning fossil fuels will result in an effective lifetime of tens of thousands of years” and cites Archer, David (2005), and Caldeira, Ken & Wickett, Michael E. (2005).

    But Global Warming Art states, at http://tinyurl.com/2fe3k5 , that:

    “The dilution of carbon is such that only 15-30% is expected to remain in the atmosphere after 200 years, with most of the rest being either incorporated into plants or dissolved into the oceans. This leads to a new equilibrium being established; however, the total amount of carbon in the ocean-atmosphere-biosphere system remains elevated. To restore the system to a normal level, the excess carbon must be incorporated into carbonate rocks through geologic processes that progress exceedingly slowly. As a result, it is estimated that between 3 and 7% of carbon added to the atmosphere today will still be in the atmosphere after 100,000 years (Archer 2005, Lenton & Britton 2006). This is supported by studies of the Paleocene-Eocene Thermal Maximum, a large naturally occurring release of carbon 55 million years ago that apparently took ~200,000 years to fully return to pre-event conditions (Zachos et al. 2001).”

    How does one make sense of these apparently contradictory yet apparently authoritative statements?

    Another thing that confuses me: according to the ice core records, in the distant past, when temperature has risen, the oceans have apparently become a net source of CO2 (several hundred years after the onset of the warming). Yet they cannot have been completely saturated with carbonic acid, so why did they become a source rather than a sink?

    Dave

  • HankRoberts // September 25, 2008 at 12:07 am

    Dave, you might want to email both Archer and Rohde and invite them to answer you. I don’t see any obvious contradiction there myself, just extreme oversimplification to state a simple range for CO2 without addressing the complexity.

    Biogeochemical cycling is a major research area and very complicated. Scott Saleska, who is an expert in this specific area, had some relevant comments here:
    http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001004less_than_a_quarter_.html

  • David B. Benson // September 25, 2008 at 12:10 am

    Dave Rado // September 24, 2008 at 11:01 pm — I can sorta answer. The excess CO2 in the atmosphere eventually goes to equilibrium with the oceans, terrestrial vegitation and even the soils. But the process is rather complex, so saying that most of it is gone in several hundred years is about right. However, due to ocean chemistry (don’t ask me to explain further), some portion has a very long residence time.

    Your second question involves, again, more chemistry than I really know, but to put it simply, under quasi-equilibrium conditions warming waters express net CO2 and cooling waters absorb net CO2.

  • Boris // September 25, 2008 at 12:21 am

    Bob North,

    Personally, I think the warming is being masked by la Nina and, to a lesser extent , the downtrend in solar irradiance.

    Hank,

    Ah, conspiracy theorists always misrepresent at least one expert to buttress their otherwise unsupported beliefs. What’s amusing is that kim thinks (s)he is winning.

  • Ray Ladbury // September 25, 2008 at 12:59 am

    Dave Rado, I agree that it sounds confusing, but here’s the way I understand it. When a CO2 molecule leaves the atmosphere where does it go? Either to the biosphere or the ocean. But these reservoirs are only temporary. Shallow ocean water mostly stays shallow, so the ocean can cough up the CO2 molecule any time. Likewise, everything dies and eventually yields its carbon back to the atmosphere (unless buried in a coal bed). So, while the CO2 molecule leaves the atmosphere in ~10 years, it doesn’t leave the system. Or rather only SOME of it does–about half every 70-100 years or so.
    Thus, CO2 keeps on giving for a very long time indeed.
    As to how the oceans turn into a source of CO2–you know from opening a soda that CO2’s solubility in water varies inversely with temperature. Raising the temperature shifts the equilibrium enough so that you get more out than goes in.

  • Lab Lemming // September 25, 2008 at 2:34 am

    Arctic ice minimum?

  • Marion Delgado // September 25, 2008 at 3:15 am

    It was totally cold this morning.

    I guess you communists lose again!

    Maybe Fat Al can run for the Politburo of the Sierra Club. In North Korea.

  • Marion Delgado // September 25, 2008 at 3:19 am

    Nick:

    As you may or may not know, Michael Tobis is completely into all of that stuff, he wants better climate software that combines the best of commercial and open source and current academic software, and actually mostly favors python.

    http://initforthegold.blogspot.com/

    No doubt you knew this but it never hurts not to assume, IMO.

  • Barton Paul Levenson // September 25, 2008 at 11:59 am

    John Finn posts:

    Even then the 1940-75 period is a problem. Of course this will be due to a convenient sprinkling of aerosols - despite logic (and the cooling pattern) suggesting this is nonsense.

    There is nothing nonsensical about it. The ’40s began with the world ramping up industry fast to support armies in World War II. Effective pollution controls weren’t put on until the 1970s, which is why you had pollution emergencies like Donora, PA and London in the ’40s and ’50s.

    Those who support AGW are quite happy to toss in all sorts of variables to help explain the CO2/temperature link.

    Nobody in his right mind, except for a few crackpots worried about an imminent ice age, “support[s] AGW.” We support people taking science seriously and dealing with the problem.

    To be fair this applies to some sceptics as well. But look a Tamino’s response to my first post where he says

    “A substantial part of its cause is the lull in volcanic climate influence during that time. An additional cause may have been a slight increase in solar ”

    So take a few less volcanos, a little bit of solar (not too much we wouldn’t want to give the impressions the sun was a main driver);

    Right, they adjust the amount of solar to make it look like a contributor, but not too much of a contributor. Did it ever occur to you that people can attribute such things by means other than the ideological? I mean, maybe they say the sun isn’t a primary driver of the recent warming because the sun isn’t a primary driver of the recent warming. Your conspiracy-theory view of the world seems not to recognize that people can actually do scientific research and test these things.

    This pretty much sums up the IPCC “detection and attribution” research,

    Nope. It just shows your thorough misunderstanding of that research, apparently because you see everything through ideology-colored glasses.

    but no-one wants to acknowledge that the total lack of certainty surrounding the effect (if any) of all the variables (includng solar).

    Maybe they don’t want to acknowledge the total lack of certainty because there isn’t a total lack of certainty.

    There were two schools of thought on propaganda during World War II. For the Nazis, Josef Goebbels promoted the “Big Lie” — just keep repeating something until people believe it; whether it’s true or not is irrelevant.

    The guy in charge of British propaganda — can’t recall his name offhand — had a countervailing theory — that people won’t believe propaganda unless there is a core of truth to it. I think he was right and Goebbels was wrong. That’s why your rhetoric fails. You can’t resist the urge to portray the people you disagree with in the blackest possible terms, and throw in phrases like “total lack of certainty” which are just not believable by anyone familiar with the field.

    Rhetoric is not a substitute for logic. Check with Aristotle.

  • Barton Paul Levenson // September 25, 2008 at 12:01 pm

    Bob North writes:

    why has temperature appeared to be flat over the past several years.

    Because people don’t understand statistics:

    Ball’s errors

    Reber’s errors

  • Lazar // September 25, 2008 at 1:18 pm

    James Hansen is so cool.

    During the eight-day trial, the world’s leading climate scientist, Professor James Hansen of Nasa, who had flown from American to give evidence, appealed to the Prime Minister personally to “take a leadership role” in cancelling the plan and scrapping the idea of a coal-fired future for Britain. Last December he wrote to Mr Brown with a similar appeal. At the trial, he called for an moratorium on all coal-fired power stations, and his hour-long testimony about the gravity of the climate danger, which painted a bleak picture, was listened to intently by the jury of nine women and three men.

    … he undcrstands economics.
    … he understands energy economics.
    … he really understands ownership.
    … political dynamics.
    … the media.
    … yet he’s no enviro-luddite.
    … he’s a pragmatist.
    … thank the Heavens we have James Hansen.

    Testimony available here.
    Another pdf explaining the drive against coal…

    Most remaining oil, much of it in the Middle East, surely will be used with the CO2 injected into the air. Limitations on drilling in the Arctic, off-shore areas, and public lands can help keep exploited reserves closer to the IPCC estimate than the larger EIA estimate, but most readily available oil will end up as CO2 in the air. In contrast, scenarios that keep coal in the ground, or used only where the CO2 is captured, are feasible.

    The upshot is that large climate change, with consequences discussed above, can be avoided only if coal emissions (but not necessarily coal use) are identified for prompt phase-out.

  • apolytongp // September 25, 2008 at 2:56 pm

    Climate Audit appears to be doing some good work, finding and prompting Mike to correct mislabeled data and algorithm glitches wrt the latest Mann paper, SI. It’s a shame that Steve is so snarky in his comments and Mike so defensive/political. But the main thing, is that things are getting fixed and drilled down to exact mathematics.

  • apolytongp // September 25, 2008 at 2:58 pm

    P.s. I still like that Wilson comment about it being the exception, a mathematical paper without a mistake of some sort (notation, etc.). I think Mann’s complicated papers, algorithms, SIs, etc. are good examples of how easy it is to make errors of some sort. And that it is worthwhile to go over things with a fine toothed comb (on one’s own)…and even to have others check the work.

  • Gavin's Pussycat // September 25, 2008 at 3:10 pm

    Been reading up on regEM and Mann et al.’s use of it. Schneider’s 2001 article is utterly clear on that the method, when applied correctly, does not create any information out of nothing. Yes, it fills in missing values, but it also generates a variance-covariance matrix for the complete (filled in) data table where the dependence between the filled-in data items and those that were used in the filling-in, is fully taken into account.

    Now about Mann et al. 2008, it says that 25% of the proxies end by 1979, 50% by 1984, 75% by 1991 and some 90% have ended by 1995. So how much filling in is taking place? I’ll draw a picture.

    ——— 25+25+25+15+10=100%
    1991-1995 F–F–F–F–D
    1984-1991 F–F–F–D–D
    1979-1884 F–F–D–D–D
    until1979 F–D–D–D–D

    Here ‘D’ means that data is available, ‘F’ that fill-in was applied. If you don’t accept filling in, you should throw away the leftmost 25% column and truncate the reconstruction at 1979. So the use of regEM actually makes useful the real data available after 1979 (the six ‘D’s on the right) that would otherwise be thrown away, as well as the 25% proxies that end before 1979.

    …and remember, “making useful” doesn’t mean contributing to the blade of the hockey stick — we know what that looks like already ;-) It means constraining the pre-industrial era. Also time series ending before 1979 help there.

  • Lazar // September 25, 2008 at 3:23 pm

    Assisted migration… last-ditch efforts as rapid climate change drives species into human-made firewalls.

    Experts who once disregarded it as a nutty idea are now working out the nuts and bolts of a conservation taboo: relocating species threatened by climate change.

    [...]

    This August in Milwaukee, Wisconsin, a group of scientists, lawyers, land managers, economists and ethicists gathered to discuss the nuts and bolts of breaking a conservation taboo. Whether called ‘assisted migration’, ‘assisted colonization’ or ‘managed relocation’, the idea of manually relocating species is decidedly controversial, and some in the Milwaukee working group feel it would most likely be a disaster.

    [...]

    Stephen Schneider, a climatologist at Stanford University in California and a key contributor to the Intergovernmental Panel on Climate Change (IPCC), attended the working group. Schneider hopes its output will be similar to IPCC reports in providing information to assist decision-making without giving direct advice on whether or not to relocate threatened species.

  • Bob North // September 25, 2008 at 3:36 pm

    Boris and T_P_Hamiltion - Thank you for your responses. Both are potentially valid answers and may all or in part represent what is actually occurring.

    BPL - perhaps you missed my reference to “unexplained interannual variability” aka noise as one possible answer or to the use of term “appeared”. Rather than simply repeating the mantra that “short-term cooling doesn’t negate the long-term trend”, it may be more appropriate to be exploring whether there are factors that can explain the “apparent” flatline. Although we may not yet have the ability to explain this interannual variability and therefore attribute it to “noise”, since surface temperatures are dependent on a whole host of other factors there is some combination of physical factors (e.g., end of solar cycle, la nina, PDO switch, energy going into melting of ice, non-GHG changes in atmospheric composition, landuse changes, etc.)that are the actual reason for the “apparent flatline”. even “unexplained interannual variability i

  • HankRoberts // September 25, 2008 at 7:13 pm

    > the “apparent” flatline.

    Explained by the astonishing human ability to see patterns even when they don’t exist. Part of Statistics 101.

  • Hank Roberts // September 26, 2008 at 4:29 am

    Oh, please:

    http://imgs.xkcd.com/comics/listen_to_yourself.png

  • Lazar // September 26, 2008 at 8:45 am

    GP,

    Thanks for looking into this.

    Re the MXD data, quoting Briffa et al. 2001, “Low-frequency temperature variations from a northern tree ring density network”;

    The period after 1960 was not used [in calibration] to avoid bias in the regression coefficients that could be generated by an anomalous decline in tree density measurements over recent decades that is not forced by temperature [Briffa et al., 1998b]

    … and they cite Briffa et al. 1998, “Reduced sensitivity of recent tree-growth to temperature at high northern latitudes”, Nature, and Briffa et al. 2002, “Tree-ring width and density data around the Northern Hemisphere: Part 1, local and regional climate signals”, the Holocene, for further detail.

    Plate 2 shows a ubiquitous pan-regional decline in the Northern Hemisphere post 1960… northern Europe, southern Europe, northern Siberia, eastern Siberia, central Asia, the Tibetan Plateau, western North America, north-western North America, and eastern and central Canada.

    Reconstructed temperatures over 1480-1991 pass verification at alpha=0.05, for r^2 and RE statistics, with r^2 values typically around 0.25. Apart from 1667-1682, r^2 values would be significant at alpha=0.01. The verification period is 1871-1905, and calibration is from 1906-1960.

    So I have no problem with Mann et al. 2008 discarding MXD data post 1960.

    I don’t understand a ‘divergence argument’ against using tree-ring data which have a known, i.e. calibrated, verified response to temperature. All instruments diverge, so all measurements are useless?

  • Lazar // September 26, 2008 at 8:58 am

    just to clarify…

    “Reconstructed temperatures over 1480-1991 pass verification [...]”

    is from Briffa et al. 2001.

  • Lazar // September 26, 2008 at 9:11 am

    … to be fair, the possibility that data prior to 1960 have been effected is a fair point… but to exclude tree ring data on that basis is also unreasonable.

  • Bob North // September 26, 2008 at 4:01 pm

    Hank - I’ll take your answer to be that it’s just noise. However, to be a bit more precise, I think your answer should have been something more like “… to see patterns when statistically significant patterns do not exist.”

  • Bob North // September 26, 2008 at 5:21 pm

    Lazar - I came across this very interesting article regarding evaluating the “divergence problem”.

    Lloyd and Bunn 2007

    I think the authors summarize well the potential pitfalls relative to climate reconstruction

    “The finding of widespread temporal instability in climate response clearly poses a significant challenge to attempts to use tree rings to reconstruct climate back in time, as it suggests that the response of many—if not most—boreal trees to climate is quite plastic through time, and statistical models developed for one time period may not adequately describe the response of tree growth to climate during another time period. To some extent, the effects of this temporal instability can be dealt with through careful selection of chronologies (see, e.g., Wilson et al 2007); indeed, climate response at some of our sites was relatively stable through time, and historical climate inferences based on these sites are likely to be comparatively robust. However, if, as our data indicate, plasticity in the face of climate variation is a characteristic of most boreal species, the possibility remains that varying climate responses may have occurred during previous time periods, even at sites at which climate response is stable during the 20th century. ”

    The question is if growth response to temperature varies through time, how well can we reconstruct past temperatures outside of the calibration period since there will be uncertainty in past growth response.

  • Bob North // September 26, 2008 at 6:59 pm

    Lazar - Here is another interesting study ( Wilmking et al 2004 ) that shows both negative and positive growth responses to temperature in treeline stands of white spruce throughout the 20th century.

    Again, if growth response to temperature changes with time or if individual specimens of the same species exhibit highly varibable responses to temperature, this definitely complicates historic temperature reconstructions. From this paper at least, it doesn’t appear that divergence is limited to the latter half of the 20th century.

  • Gavin's Pussycat // September 26, 2008 at 7:03 pm

    Bob, what the authors are saying is actually well known in the business. But what conclusion do you want us to draw: that paleoclimate reconstruction is just hopeless and that we should just as well give up? Of course not.

    If reconstructions from a variety of proxies consistently produce the same results — within their, as we know, generous uncertainty bounds — shouldn’t we then lend credence to them?

    And when we find that proxies behave clearly differently over the late 20th century than over the earlier part of the calibration period — which of these periods do you think resembles more the recent pre-instrumental period(s)? And note that the late 20st century divergence behaviour is consistent — “instability” isn’t the term I would choose :-)

    This is not to deny that uncertainties exist; but there are ways to deal with them. Robust conclusions are
    always based on redundancy and multiple converging lines of evidence. Yours appears to be the common fallacy of confusing uncertainty about the magnitude of a phenomenon with uncertainty on its existence.

  • dhogaza // September 26, 2008 at 8:42 pm

    Thankfully we now have sufficient proxy data to reconstruct paleoclimate without using tree rings at all …

  • Hank Roberts // September 26, 2008 at 8:43 pm

    Bob North writes:
    > to be a bit more precise, I think your answer
    > should have been something more like
    >“… to see patterns when statistically
    > significant patterns do not exist.”

    Nope.

    I know you think that, but you’re wrong — and you’re illustrating that you don’t understand the idea of detecting trends with statistics.

    Try reading Tamino’s threads on trend detection.

  • Hank Roberts // September 26, 2008 at 8:44 pm

    Bob, here’s another example of the problem of understanding trends. See if this helps:

    http://environment.newscientist.com/article/dn14826-cod-delusion-leaves-devastated-stocks-on-the-brink.html?DCMP=ILC-hmts&nsref=news2_head_dn14826

  • Hank Roberts // September 26, 2008 at 10:03 pm

    Bob — here’s the difference.

    In the jungle, either there’s a tiger, or there’s not.

    In statistics, our visual processing system isn’t helpful. We’re dealing with probabilities and the best we can do is assign a probability.

    We use statistics when reality is beyond what our visual system can analyze — and the best we can do is estimate the probability, using multiple observations over a period of time.

    Our visual system works on visual patterns.

    Statistics works on data.

    Charts present raw data points, connects them, gives you a line — in visual form.

    Statistics turns the points into a gray fuzz, a probability range.

    We fool ourselves if we look at a chart and think our visual processing system can detect a trend.

    Wrong tool.

  • Dave A // September 26, 2008 at 10:15 pm

    Hi,

    Can anyone tell me why Michael Mann apparently believes Al Gore’s claims about the disappearing snows on Mount Kilimanjaro?

    Many thanks

  • Dave A // September 26, 2008 at 10:24 pm

    Ooops, you can see the report of Mann’s comments athttp://www.projo.com/news/content/URI_Honors_Colloquium_25_09-25-08_LUBN73R_v12.1607f3c.html

  • TCO // September 26, 2008 at 10:34 pm

    more stream of consciousness posting on CA, now. Guy’s got a whole post about “Esper and Frank 2008″, lacking a citation for the paper he is talking about. You know something with the journal, volume number, etc. But then what do you expect from a fossil-fueled amateur Canadian shell-company stock promoter.

  • Aaron Lewis // September 27, 2008 at 1:02 am

    How long until we will know if the recently reported CH4 in the Arctic is adding to the atmospheric load?

    I look at http://www.esrl.noaa.gov/gmd/aggi/aggi_2008.fig2.png
    and all I see is a “Tiger in the Jungle”.

    However, my only rule of thumb for tigers is that if you can see a tiger in the tall grass, it is too late. What is the rule for seeing tigers in the jungle?

  • Hank Roberts // September 27, 2008 at 1:57 am

    Looks tasty. Almost lifelike.
    Worth a response?
    Nah. You can figure it out for yourself.
    Just look it up.

    Any grade schooler need homework help?
    Here’s a good start:

    http://scholar.google.com/scholar?q=disappearing+snows+on+Mount+Kilimanjaro%3F

    Among the first page of hits, this is excellent:

    http://www.historycooperative.org/journals/eh/12.3/carey.html

    which is an excellent history.

    He references the early history of the concern global change in glaciers:

    http://clinton3.nara.gov/WH/EOP/OVP/speeches/glacier.html

    and the more recent:
    http://news.nationalgeographic.com/news/2003/09/0923_030923_kilimanjaroglaciers_2.html

    Keipper’s photos speak for themselves, dramatic proof of a scientific near-certainty: Kilimanjaro’s glaciers are disappearing. The ice fields Ernest Hemingway once described as “wide as all the world, great, high, and unbelievably white in the sun” have lost 82 percent of their ice since 1912—the year their full extent was first measured.

    If current climatic conditions persist, the legendary glaciers, icing the peaks of Africa’s highest summit for nearly 12,000 years, could be gone entirely by 2020.

    “Just connect the dots,” said Ohio State University geologist Lonnie Thompson. “If things remain as they have, in 15 years [Kilimanjaro's glaciers] will be gone.”

    The Heat Is On

    When Thompson’s reports of glacial recession on Kilimanjaro first emerged in 2002, the story was quickly picked up and trumpeted as another example of humans destroying nature. …

    “There’s a tendency for people to take this temperature increase and draw quick conclusions, which is a mistake,” said Douglas R. Hardy, a climatologist at the University of Massachusetts in Amherst, who monitored Kilimanjaro’s glaciers from mountaintop weather stations since 2000. “The real explanations are much more complex. Global warming plays a part, but a variety of factors are really involved.”

    “Global warming began to take effect in East Africa by the early 20th century.

    “The warming increases humidity, and as the air gets more moist, it hinders evaporation,” Hastenrath explained. “The energy saved from evaporation is instead spent on melting. That might seem like a good thing—to stop evaporation of the glaciers—but it’s certainly not. Melting is eight times more energy-efficient than evaporation, so now, with global warming, the glaciers are disappearing eight times faster than before.” … When it was drilled for ice core samples in 2000, the Furtwängler was completely water-saturated. Some scientists attribute the overflow to volcanic vents, heating the base of the glacier and melting the bottom layer of ice. Others, including Hardy and Lonnie Thompson, who released the 2000 Ohio State University report, believe that colder air surrounding the glacier kept its walls frozen even as portions of the interior melted away.

    “If enough of that water pressure built up, it seems likely that there was enough energy to burst through the frozen glacier wall,” Hardy said.

    The Furtwängler Glacier may continue to disappear in massive chunks, just like the icy boulders Keipper and his colleagues saw tumble down the crater floor. If conditions remain as they have, the rest of Kilimanjaro’s ice will follow suit, but rather than exploding, they will steadily and stealthily evaporate into African air. …”

  • Bob North // September 27, 2008 at 4:43 am

    Boy, lots of points to answer here.

    Gavin’s Pussycat - No, I don’t believe that attempts at paleoclimatic reconstruction are fruitless and that we should give up. Tree rings are an excellent indicator of past growing conditions and, therefore, a reasonable proxy for the overall climactic conditions, including temperature, at a given location. However, we should be ever mindful of 1) the assumptions inherent in making such reconstructions, such as that the response of a proxy to changes in climatic conditions reamins the same through time; 2) the inherent uncertainties in temperature estimates from proxy where temperature may explain only 20-60% of the variation in the proxy; and 3) the additional uncertainty that unknown variations in the other factors that affect our proxies impart to any reconstruction. Certainly, multiple converging lines of evidence provide us a the ability to make a more “robust” conclusion regarding temperature trends, but the underlying confidence limits may still be just as large

    Dhogaza - Yes, I agree that more and potentially better proxy are being investigated and quantified. Some (e.g., dO18,) may be much better than others (e.g., speleothems) . Whether the spatial coverage of these other proxies.

    To both (and Lazar) - the point of the references I posted was in response to Lazar’s post ( 9-26-08 8:45AM and specifically to this statement “I don’t understand a ‘divergence argument’ against using tree-ring data which have a known, i.e. calibrated, verified response to temperature.” From the cited references, it is not so clear that we “know” the precise response through time.

    Hank - I’d like to say that we seem to be talking at cross-purposes or simply past each other. I do agree with some of your points relative to fooling ourselves into seeing something that really isn’t there. For example, I do agree that, relative to correlations between two different variables (e.g., sunspots and temperature), we can often fool ourselves into seeing a higher degree of correlation than actual calculations will tell us. However, your statement “We fool ourselves if we look at a chart and think our visual processing system can detect a trend.” would suggest that I can’t look at a chart of the global mean annual temperature data since 1880 and conclude that, overall, temperatures have risen since that time or look at a chart of the CO2 data from Mauna Loa and easily see that atmospheric CO2 concentrations have increased markedly since 1959. Maybe you are unwilling to conclude that the trends so readily visible in such charts aren’t meaningful until you complete a statistical analysis, but I am willing to conclude that the average global temperature has risen since the 1880s and that atmospheric CO2 concentrations have increased based on such plots without conducting formal trend analysis. I find your resistance to visual analysis interesting since a former of colleague of mine and a very fine statistician always admonished me to plot the data first to help evaluate what types of statistical tests might be appropriate and to serve as a gut check on any conclusions made from the numbers.

    Now, specifically relative to the temperature history over the last several years, my use of the term “apparent flatline” was not meant to imply any definitive change in the longer term trend, but rather to indicate that, for the last 7 to 8 years, there has not been any statistically significant increase or decrease in global temperature anomalies (IIRC, GISS shows a very slight increase while HadCRU has essentially zero trend and the satellite date might be ever so slightly negative). There are many possible reasons for this “apparent flatline” including ENSO, PDO, “it’s just noise and is not meaningful”, etc.

    Finally, the relevence of your link to the “cod delusion” and the apparently poor policy decisions of the Spanish government regarding setting limits on the timing of fishing for other species to the problem of understanding trends we are discussing here escapes me. Any further insights as to what I am missing would be appreciated.

    Regards,
    Bob

  • Raphael // September 27, 2008 at 5:36 am

    Hank Roberts,

    In the jungle, either there’s a tiger, or there’s not.

    In statistics, either there is a statistically significant pattern, or there is not.

    Compare with Bob’s correction, “… to see patterns when statistically significant patterns do not exist.”

  • Gavin's Pussycat // September 27, 2008 at 7:17 am

    Hi,
    Can anyone tell me why Dave A apparently is fat?
    Many thanks

  • Gavin's Pussycat // September 27, 2008 at 9:21 am

    Lazar, most authors of scientific papers will be happy to provide you with a reprint if you ask them politely.

  • Boris // September 27, 2008 at 1:01 pm

    “The question is if growth response to temperature varies through time, how well can we reconstruct past temperatures outside of the calibration period since there will be uncertainty in past growth response.”

    The divergence problem is a real issue. But assuming that the divergence is caused by changes in the trees response to temperature seems to be the least plausible reason for divergence, at least in my view. A more plausible explanation would be regional pollution and global dimming. But, of course, all factors could be involved to different extents.

    Rob Wilson had a good paper on the DP not long ago, so that would be a good place to start for anyone who wants to look at the issue more seriously. I would also note that Wilson has found many series not affected by the DP.

  • Hank Roberts // September 27, 2008 at 3:03 pm

    Raphael, there’s no absolute proof in science. Statistics is about probability, not certainty.
    They say mathemathics allows certain proof — that’s not true for statistics.

    Bob, you can look at a line on a page for a century’s temperature trend and feel confident because you know what the picture represents — extensive scientific work. Your confidence is not from the shape on the page, it’s from your knowledge about the work done to make that picture.

  • Hank Roberts // September 27, 2008 at 3:15 pm

    Bob writes:

    > a very fine statistician always admonished
    > me to plot the data first to help evaluate
    > what types of statistical tests might be
    > appropriate

    Tamino, sanity check please? While it’s been many decades since I took statistics, the final message from our instructor was that we had to decide on the question and the test before collecting data, to avoid fooling ourselves. Yes, I recall and took the advice to plot the results — that can reveal interesting things — but is it done nowadays as Bob says his friend tells him?

  • trrll // September 27, 2008 at 3:27 pm

    It seems fairly clear that a major factor obscuring the long-term warming trend over the past few years is the ENSO. Here is a paper that attempts to correct for this:

    http://www.aussmc.org/documents/waiting-for-global-cooling.pdf

  • Hank Roberts // September 27, 2008 at 4:03 pm

    Try this:
    http://www.edge.org/3rd_culture/taleb08/taleb08_index.html
    —-
    Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the “logic of science”; it is the instrument of risk-taking; it is the applied tools of epistemology; you can’t be a modern intellectual and not think probabilistically—but… let’s not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let’s face it: use of probabilistic methods for the estimation of risks did just blow up the banking system).

    THE FOURTH QUADRANT: A MAP OF THE LIMITS OF STATISTICS [9.15.08]
    By Nassim Nicholas Taleb

  • Dave A // September 27, 2008 at 6:35 pm

    Hi

    Can anyone tell me why Gavin’s Pussycat is blind?

    Many thanks

  • Dave A // September 27, 2008 at 6:43 pm

    TCO,

    I think your fixation with Steve M is becoming abnormal.

    Give it a rest for a while and you may feel a whole lot better.

  • Horatio Algeranon // September 27, 2008 at 6:59 pm

    Who’ll stop the (runaway) train?

  • John Finn // September 27, 2008 at 7:49 pm

    trrll // September 27, 2008 at 3:27 pm

    “It seems fairly clear that a major factor obscuring the long-term warming trend over the past few years is the ENSO. ”

    Not to me it doesn’t. If anything any warming trend should have been amplified. The years 2001-2007 were dominated by El Nino conditions (i.e. warming).
    See

    http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

    The ONI index remained positive throughout the 6 year period and includes 3 El Ninos (2002-2003, 2004-2005 and 2006-2007). The recent La Nina is the first since 2000/01.

  • Paul Middents // September 27, 2008 at 11:12 pm

    John Finn,

    Do you see any significance in the relative lengths and strengths of El Ninos you list for the last few years?

  • Boris // September 28, 2008 at 12:09 am

    “I think your fixation with Steve M is becoming abnormal.”

    Nothing about Steve M’s fascination with Mann? Not to mention Gavin and Hansen. McIntyre is trying to settle some personal vendetta. I like how he said in one of his threads not too long ago that you couldn’t find him saying rude things about the “hockey team.” Of course not: he erases those posts (e.g. the “vicious, little men” post.)

    Just shut up and publish.

  • P. Lewis // September 28, 2008 at 12:36 am

    2001 was not an El Niño year at all. And in 2007, El Niño conditions persisted only in the 3-month “season” centred on the month of January.

    And 2002-2006 were hardly dominated by El Niño conditions either, apart from 2002-03, which was a dominant El Niño, though not on the scale of 1998 (LN = La Niña, N = normal, EN = El Niño):

    2001 LN “months” = 2, N “months” = 10, EN = 0
    2002 LN = 0, N = 4, EN = 8
    2003 LN = 0, N = 9, EN = 3
    2004 LN = 0, N = 6, EN = 6
    2005 LN = 0, N = 10, EN = 2
    2006 LN = 0, N = 7, EN = 5
    2007 LN = 5, N = 6, EN = 1

    Which gives totals of LN = 7, N = 52, EN = 25.

    A positive ONI in any 3-month “season” may well be important locally in terms of a slightly warmer sea, but it doesn’t amount to an El Niño or El Niño conditions (nor the reverse for La Niña); and, so far as I’m aware, such positive (negative) ONIs that don’t meet the criteria for calling an El Niño/La Niña have little or no implications for global weather until they do meet the criteria for classification as an El Niño/La Niña.

    The period 2001 to 2007 was one largely dominated by normal conditions.

  • dhogaza // September 28, 2008 at 12:40 am

    Not to me it doesn’t. If anything any warming trend should have been amplified. The years 2001-2007 were dominated by El Nino conditions (i.e. warming).

    And, gosh, what do we see? 1998 sticks out like a sore thumb. Do you think climate scientists are dumb and are unaware of such things?

    Or do you think that maybe they’ve looked at what happens when you remove ENSO to see if there’s still a statistically significant rising trend?

  • pough // September 28, 2008 at 1:09 am

    The ENSO thing interested me, so I snagged data and fired up OpenOffice. (I haven’t got a clue what I’m doing, so in the end it’s likely more of an exercise in learning how to use Calc than anything else, but it was fun.)

    I plugged in the temp anoms from GISS from 1997-2007 and then the ONI values from NOAA for the same years. Since El Niño and La Niña seem to go from summer to summer I figured that initial point leading up to 1998 might come in handy. And keeping that in mind I did two sets; one averages ONI from Jan-Dec and the other from July-June.

    It’s mildly interesting that averaged over 10 years, the ONI values are pretty much at zero*. And from 2001-2007 - the “dominated by El Niño conditions” years - not much more**.

    * If you don’t use the “summer-before” values, it’s -0.01. Using the “summer-before” values, it’s 0.15 (because it includes the end of 1997 and excludes the end of 2007).

    ** “Summer-before”: 0.27; otherwise, 0.25.

    I suppose one could point out that 2005 and 2007 had much smaller El Niños than 1998, but they still matched for temps. Ah well. I think the next 5 years or so of data will make things much more interesting.

  • P. Lewis // September 28, 2008 at 9:59 am

    I meant N = neutral, not normal

  • John Finn // September 28, 2008 at 10:54 am

    P. Lewis

    The definition of an El Nino is not relevant. It isn’t some magic switch that once a threshold has been reached (i.e. 3 periods of +0.5) then anomalous warming kicks in.

    Between 2001 and 2007, the ONI was predominantly positive (and includes 3 periods which have exceeded a certain threshold). This makes things warmer than they would be if the ONI were negative - REGARDLESS of whether El Nino/ La Nina criteria had been met.

    Now it’s possible the reason the ONI measurements have been positive is due to AGW but that’s another argument.

    The key point here is that conditions were favourable for continued warming (i.e. no volcanos, positive ONI, and let’s not forget “global brightening”).

    Pough:

    Recent Temps matched 1998 only in the GISS record. UAH, RSS and Hadley all show an anomalous spike in 1998 which has not been matched since.

  • Barton Paul Levenson // September 28, 2008 at 10:59 am

    Bob North writes:

    for the last 7 to 8 years, there has not been any statistically significant increase or decrease in global temperature anomalies

    Maybe because the relevant period for statistical certainty of a climate trend is 30 years?

  • Gavin's Pussycat // September 28, 2008 at 2:51 pm

    > why Gavin’s Pussycat is blind?

    Ah… that explains why I have been commenting here using lynx ;-)

    None so blind as he who is dumb — old sumerian proverb

  • John Finn // September 28, 2008 at 3:03 pm

    Correction: I actually mean the 6 years from the end of 2001 to the end of 2007.

  • Dave A // September 28, 2008 at 9:06 pm

    > old sumerian proverb

    Ah….. that explains a lot

    None so dumb as he who is blind — 21st century apophthegm

  • Lazar // September 28, 2008 at 11:17 pm

    Bob North –
    Thanks, interesting papers. One point though, they examine the changing prevalence of types of temperature response within the general population, i.e. the frequency and average strength of positive and negative responses. They’re not directly addressing the loss of sensitivity in individual series, i.e. they do not address whether series with a positive response lose predicitive power because they are becoming negatively correlated, or vice-versa.

  • trrll // September 29, 2008 at 12:52 am

    “The definition of an El Nino is not relevant. It isn’t some magic switch that once a threshold has been reached (i.e. 3 periods of +0.5) then anomalous warming kicks in.”

    Correct. This is why Fawcett & Jones took into account the intensity of the ENSO. When the ENSO effect is removed, there is indeed continued warming over the last few years.

  • Barton Paul Levenson // September 29, 2008 at 9:54 am

    John Finn: The temperature here declined rapidly from noon to midnight last night. Clearly this has interrupted the global warming “trend” and we should prepare for a new ice age.

  • Ray Ladbury // September 29, 2008 at 12:52 pm

    Dave A. and Gavins felinus domesticus,
    Boys, boys, boys…

  • Hank Roberts // September 29, 2008 at 4:54 pm

    CCNet today publishes email from Ferguson at SPPI announcing Monckton’s published w hat he learned at McIntyre’s CA blog.
    ———-
    … we have posted a new paper at SPPI by Christopher Monckton. … No one who reads it will ever again trust the IPCC or the “scientists” and environmental extremists who author its climate assessments.”
    ———

    Those who enjoy reading that kind of thing will know how to find it.

  • pough // September 29, 2008 at 6:59 pm

    No one who reads it will ever again trust the IPCC or the “scientists”

    I’m gonna go ahead and guess that the bulk of them never started doing that in the first place.

  • John Finn // September 29, 2008 at 8:24 pm

    Dear All

    You’re clearly intent on missing the point. I think a pause of few years in warming is perfectly possible and does not invalidate the AGW hypothesis.

    However the pause must be due to a reason. If you don’t know what that reason is then there is a gap in your knowledge.

    BPL - we know why the temp falls at night-time and why temperatures are lower in winter.

    There isn’t, though, a reason for the plateau between 2002 and 2007 (even ignoring the La Nina which took hold late 2007). Simply satying it’s due to natural variability is a fudge. What natural variability? Which bit are we talking about ? Why can’t we quantify it? How long will it continue for ? 2 years ? 10 years ? 100 years ?

    All the natural variability factors were favourable to warming - ONI, volcanic activity, increased insolation at the earths surface - so what is the unknown factor which has suppressed the warming trend?

    Just an aside: In January 2007 the UK Met Office (Climate Change bit) issued a forecast which predicted that 2007 would be the warmest year ever recorded.

    One of the leading Climate Change research groups hadn’t even realised that there was a potential La Nina developing. In April 2007 they came up with some drivel about a new study which shows global warming would stall for a year before resuming in 2009 (or 2010 I can’t remember). They’d probably, by now, noticed that the 2006/07 El Nino had faded and that a La Nina was now a possibility.

    The Met Office basically haven’t got a clue and they’re not alone. Don’t be fooled by the assured confidence with which these various bodies deliver statements about the climate or similar phenomena.

    Didn’t NASA announce the start of solar cycle 24 in 2006?

  • Ian Jolliffe // September 30, 2008 at 8:40 am

    Hank

    ‘Tamino, sanity check please? While it’s been many decades since I took statistics, the final message from our instructor was that we had to decide on the question and the test before collecting data, to avoid fooling ourselves. Yes, I recall and took the advice to plot the results — that can reveal interesting things — but is it done nowadays as Bob says his friend tells him?’

    I’m a big fan of plotting the data. OK, looking at a visual representation of data set can mislead, and any pattern you ‘see’ should be checked out by some independent means before claiming it to be real.

    But not plotting the data is a bad option. Suppose you’re in the ideal situation where you’re able to decide on your question, and choose your test in advance of collecting your data. The test will certainly depend on assumptions about the data to a greater or lesser degree and unless you’ve got large quantities of previous similar data you won’t know whether the assumptions are met for the data you’re about to collect. So you’ve now collected your data. A few simple plots will tell you whether the assumptions that your test requires look reasonable or are clearly wrong. If you go ahead without the plots and the assumptions are invalid, the chances are that you will come up with an erroneous result. At best, you’ll be embarrassed to find out later that you’ve wasted your time, and need to do a different analysis and probably collect more data.

  • Barton Paul Levenson // September 30, 2008 at 9:29 am

    “Dumb blind blind dumb bumble bummy blue.”

    -A three-year-old child.

  • Gavin's Pussycat // September 30, 2008 at 2:33 pm

    John Finn, natural variability is a reality of (climate) life. It shows up in the models as well as in the observations, and the spectral characteristics are very similar. It has a (1/f) type spectrum with an exponent around -0.8, and that over a very wide range of time scales/frequencies. You can see the spectrum in IPCC figure 9.7 (if I remember correctly). Natural variability has no cause. It just happens, following the laws of physics. A bit like the weather (which is also natural variability, but on a shorter time scale). It is chaotic and you cannot predict it over much longer than its own typical time scale — just like you cannot predict the weather over more than a week or so. No surprise if the Met Office got egg on their faces… not the first time for it to happen to meteorological predictions.

  • Hank Roberts // September 30, 2008 at 3:23 pm

    Dr. Joliffe, you misunderstood my question.
    I didn’t say anything about not plotting the data.
    I asked Tamino’s opinion of the actual suggestion above — as I read it, the expert suggested collecting the data before [plotting the data to help in] deciding what test to use.

    Collect the data before deciding what test to use.
    Right?

  • Hank Roberts // September 30, 2008 at 3:26 pm

    And yes, I realize people may find their procedure didn’t acquire anything useful, and may decide from that to use a different test.

    But — again I’m decades past Statistics 101 — I thought once you’d decided you needed to do a different test, you _had_ to collect _more_ data on which to do the work. Not look at the data first then decide what kind of test to use.
    Maybe statistics is more sophisticated now?

  • David B. Benson // September 30, 2008 at 6:57 pm

    Somebody asked about ‘trends’ this century. Might care to learn about ENSO and the other ocean oscillations.

  • Dave A // September 30, 2008 at 9:53 pm

    Gavin’s Pussycat,

    Is BPL saying we are behaving like three year olds? I think I will go and throw a tantrum. LOL

    Why don’t some people understand humour?

  • TCO // October 1, 2008 at 12:03 am

    If I am going to get over my Steve McI obsession, I need Jolliffe to give me some love. After all, I asked the critical questions that Tamino neglected (when considering the ppt (!) as a support for Mannian PCA). And I don’t even know linear algebra. I just ask dumb questions to check on things…

    Love me!

  • Barton Paul Levenson // October 1, 2008 at 11:07 am

    John Finn writes:

    There isn’t, though, a reason for the plateau between 2002 and 2007 (even ignoring the La Nina which took hold late 2007). Simply satying it’s due to natural variability is a fudge. What natural variability? Which bit are we talking about ? Why can’t we quantify it? How long will it continue for ? 2 years ? 10 years ? 100 years ?

    All the natural variability factors were favourable to warming - ONI, volcanic activity, increased insolation at the earths surface - so what is the unknown factor which has suppressed the warming trend?

    If you do the regression starting with the El Nino year of 1998 and ending with the La Nina year of 2007, you get a flat trend. If you leave out 1998 and 2007, you get a significant rising trend. Does that answer your question above?

  • Bob North // October 1, 2008 at 1:10 pm

    Hank - I think my statement regarding using data plots to help decide what type of tests to use may need some clarification. Maybe it would have been better to say something along the lines of using the data plots to clarify underlying assumptions or which analysis might be worth further pursuing.

    Let’s say I thought there might be a correlation between arsenic and lead contamination at a particular superfund site. A quick plot can tell me if it is worth doing further analysis.

    What if I am not sure if my data is normally distributed? A quick plot will tell me if it is close enough to normal to proceed with classic statistics or if I should attempt a data transform or simply use non-parametric techniques. If I am examining a time-series of ground water contamination data, I can use plots to quickly check to see if a trend of sufficient magnitude to be of interest is possibly present. Bottom line, plots are a very useful screening tool and reality check.

    Bob

  • BBP // October 1, 2008 at 3:58 pm

    Bob,
    I think the point Hank is making is that using plots for screening can be tricky. In the example you gave of using a quick plot to look for a correlation between lead and arsenic the danger lies in how many variable pairs you look at and what signifigance is required for a saying ‘we need to look further’. If you look at 100 pairs of random data sets you will expect 5% of them to be correlated at >= 95% confidence. So if you want to look at a lot of data you need to figure out how to weed out the false positives before you look at any plots (or at least be sure you have later tests that can do it).

  • Gavin's Pussycat // October 1, 2008 at 4:38 pm

    Dave, but we are behaving like three year olds, right?

  • Lazar // October 1, 2008 at 4:40 pm

    TCO, Al Gore loves you even if you are a naughty, conservative, skeptical potty-mouth (ref- Frank Zappa, PMRC, Senate hearing, haha).

  • Lazar // October 1, 2008 at 6:03 pm

    Using an F-test on a linear contrast…
    Yet another simple view of global temperatures.
    5-year non-overlapping means that start in 1968 and end on 2007, centered on 1970, 1975, 1980 etc…
    The trend is 0.16 deg C / decade, significant at alpha=0.001 using an F-test with (1,32) degrees of freedom. The assumption of no autocorrelation in the residuals of the mean values is reasonable given the acf for the annual data.

  • David B. Benson // October 1, 2008 at 8:41 pm

    Gavin’s Pussycat — Was it you that commented about the 1/f spectrum of climate variability? I gather you ment 1/(f^0.8)?

    Whatever, is there a link?

  • Dave A // October 1, 2008 at 9:51 pm

    Gavin’s Pussycat,

    You might have been but I was four!

  • John Finn // October 2, 2008 at 7:22 am

    BPL

    If you do the regression starting with the El Nino year of 1998 and ending with the La Nina year of 2007, you get a flat trend. If you leave out 1998 and 2007, you get a significant rising trend. Does that answer your question above?

    No it doesn’t. 2007 wasn’t a La Nina year as such. It began still in an El Nino phase. Note there is a lag of ~3 months before surface temp changes following ENSO events.

    Most of the effect of the 2007/08 La Nina is seen in the 2008 temperatures. In 1999, on the other hand, there was a full blown La Nina which had established itself in 1998 and lasted throughout the year.

  • Ian Jolliffe // October 2, 2008 at 11:11 am

    Hank:

    ‘Maybe statistics is more sophisticated now?’

    Statistical reasoning always was sophisticated, but statistical techniques have increased their sophistication over the past decades largely to the possibilities that increased computer power has opened up. Statistical science, like any other science, moves on.

    With experimental data (e.g. in the laboratory, clinical trials, agricultural field trials) you can and should pose questions before you collect the data, and design the data collection to optimise your chances of answering the questions. But a lot of data (observational data) are not like that. Climate data are an obvious example – you can’t go back and say ‘let’s do the experiment again, change the temperature for 1600, and then go out and measure how the tree rings have changed’.
    Most current mainstream statistics is concerned with statistical model fitting (descriptive techniques like PCA are distinctly unfashionable), whether it’s ARMA models for time series, or PC regression, or much more sophisticated models. The exact form of the model is rarely known (all models are wrong but some are useful – George Box) so that the choice of final model is largely determined by how well the model fits the data. This leads to the possibility of over-fitting.

    What has increased computer power allowed us to do? Among other things it allows more complex models to be fitted. It also allows re-sampling or cross-validation techniques to be implemented that help to reduce the chance of over-fitting. The complex models can incorporate prior knowledge, and/or allow for the incorporation of uncertainty about the fitted model being a correct one, so they too can reduce the over-fitting risk. Sometimes this all gets too complicated, but as the science progresses I’m optimistic that the best ideas will survive.

  • Ian Jolliffe // October 2, 2008 at 11:14 am

    TCO (apolytongp) still needs “love”.
    TCO // March 9, 2008 at 6:50 pm
    You’ve flushed me out on this one. As you can tell I’m not a browser, but I did go back to what you said. Spot on. You were asking all the right questions – I guess you deserve ‘love’ for that, if that’s what you crave. If only someone had contacted me then (or even better when the ppt was first cited – I wouldn’t been hard to find) to ascertain what my views really were we could have had the discussion much earlier.

    I’m all in favour of asking questions – if after some thought you can’t understand, then the questions aren’t ‘dumb’. It’s almost certain that others would like to ask the same questions but worry that they and the questions will be deemed ‘dumb’. I fear that there is more of this going on amongst reviewers of papers than we like to admit.

  • Gavin's Pussycat // October 2, 2008 at 3:33 pm

    David, yes. Now that I look at the IPCC figure 9.7, I see that it’s more like 1/(f^1.0) — but do consider that the periods close to one year are “under-powered” as the graph is based on annual averages. (Note also that the vertical axis unit is wrong, it’s Celcius squared YEAR, (i.e., “per year^-1″), not C^2 yr^-1. It’s power SPECTRAL density in spite of the only horizontal axis being time period.)

    The 0.8 figure comes from http://tinyurl.com/manabestouffer . It’s pretty old and the two are not directly comparable.

  • Hank Roberts // October 2, 2008 at 3:56 pm

    Thank you, very helpful about statistics, and I hope for more.

    Re ‘dumb’ questions I often point to Eric Raymond’s guide — he’s got a good outline of how to prepare:
    http://www.catb.org/~esr/faqs/smart-questions.html

    On notice — how does it work in academic publication? Is it a courtesy to tell someone when you’ve cited a paper? or do people look themselves up in the citation services, or get invited by journals to check claims made? I really don’t know what’s usual.

    Climate bloggers could make the effort for their own posts to email authors when citing them. I’ve assumed that climate scientists making main posts were treating references as though they were in journal articles, however that worked. But it’d be an interesting thing to try to get science bloggers to do as a routine courtesy to scientists they cite. Hmmm.

    Beyond that, maybe asking their readers to do both cite comments and check with authors could elevate the level of discourse (or create a jump in junkmail for the authors, sigh).

    It’s a rare site that even insists on cites for statements from commenters — notably, by: http://moregrumbinescience.blogspot.com/

    Good challenge to us all there. It is a puzzle.

  • Eli Rabett // October 2, 2008 at 10:03 pm

    I have some questions about the new Lean and Rind paper. Fortunately I am away from home and cannot download the paper, but I have read it, and reports are that so has RP Sr. Basically, I was suspicious of the boundary conditions for the fit, esp since the HADCrut series was used which cuts off below 90 degrees latitude. OTOH these are experienced folk. Anyone else had a look

  • Dave A // October 2, 2008 at 10:17 pm

    Anyone care to comment on this piece of eco-fascism emanating from a portion of academia in the UK?

    http://www.guardian.co.uk/environment/2008/sep/30/food.ethicalliving

  • Hank Roberts // October 3, 2008 at 3:44 pm

    I’m sorry, Dave, you’ve got your politics backwards.

    http://farm3.static.flickr.com/2302/2351910375_e29f773bf9.jpg

    http://blog.kir.com/archives/images/rationing.jpg

  • Hank Roberts // October 3, 2008 at 3:53 pm

    Even better:
    http://www.cartoonstock.com/newscartoons/cartoonists/twi/lowres/twin179l.jpg

  • dhogaza // October 3, 2008 at 4:56 pm

    I think we’ll need a precise definition of “eco-fascism” before we can comment. I thought we were communists, not fascists … so perhaps your definition could clarify the difference between eco-communists and eco-fascists?

  • george // October 3, 2008 at 6:27 pm

    TCO(apolytongp)

    As a general rule, I don’t approve of doing things that appear self-flattering, but I think your comment to me on the other thread needs to get a response because you are clearly quite caught up in the “I’m right and you guys were all wrong and I told you so” theme.

    In fact, you are still repeating it above in your “lookin for love” post and I am quite concerned that it is distorting your view of reality.

    Here’s your comment to me from the other open thread in which Jolliffe clarified his position:

    George:

    I been trying to resist the “I told you so”, but…the flesh is weak…and I am drawn in by all the comments saying how easy it is to misread things. If you look at my comments (as TCO) on March 9th, ~6PM, you will see me asking for clarification of the centering labels to Tammy.

    It was not that hard to come up with these concerns/questions on the terminology. I mean I know little to no formal stats or linear algebra. I just ask a question when I don’t understand. Just like a curious high school math/science student. That’s what we all need to do. You, Ray, Tammy, Steve McI etc. (I think mosh-pit and JohnV already do so.) It’s what Eli (I hope) teaches in freshman chem class. ”

    So, TCO, since you were obviously lecturing me directly in that comment, let me bring you back to reality for a moment if I may.

    Had you actually taken the time to read, you would be aware of the comments/questions that I actually posted in the “PCA part 4: Non-centered Hockey sticks thread

    Comment by me:

    First, McIntyre et al claimed that it was decentered-PCA that “created” the hockey stick shape….

    Then, when it was shown that this was not the case — indeed that you can do the analysis without even using PCA (which is probably only a surprise to some statisticians and spectroscopists who treat PCA like it was the greatest thing since sliced bread) — they transitioned smoothly (watch carefully or you might miss it) into “the strip-bark bristlecones are not valid temperature proxies”

    and this one by me

    The only thing that matters is whether his conclusion is correct — and if the analysis done without using PCA also yields a hockey stick (which it does), then whether Mann did the PCA analysis correctly is basically moot as far as the science is concerned.

    and this one by me

    Mr pete:
    Analysis done without using PCA also yields a hockey stick.

    So, how does it matter one way or the other whether Mann was right or wrong with the PCA analysis?

    and this one

    I’m still a little unclear exactly why PCA would be used in this case.

    How (or more to the point is) the answer obtained with PCA somehow superior (more reliable, more robust, etc) than the answer obtained with simpler, more straightforward less manipulative, techniques?

    It strikes me that sometimes itis better to do less data processing than more.

    I guess the main question for me is this:

    Why change the data representation with PCA at all if there is the possibility that it might introduce ambiguity in the final result? (depending on which components are selected, in this case)

    If one has to include a certain (possibly unspecified) number of the components to be certain that one has not missed something important, what has one gained?

    I never got an acceptable answer to the latter question, by the way.

    From my (admittedly limited) experience, I had a hunch when I asked that (and still do) that many of the claims made about PCA are overblown.

    PCA is a way of representing data — period.

    It’s not some magical technique and it is certainly not the only — or best — way to do things in all (or even most) cases.

    I think Tamino made those points on the other thread pretty well.

    So, TCO, spare us all the I’m right and you were wrong BS, will you?

    And, in the future, take the time to actually read people’s comments before you criticize.

  • Gavin's Pussycat // October 3, 2008 at 6:58 pm

    Eli, yes I read it. Seems good to me, but I would have liked to see separate attributions for greenhouse gases and anthropogenic aerosols, which have different temporal signatures.

    The study is at the margin of what is detectable, and we’ll have to see if it holds up. What the authors remark about a possible explanation is valid however: our greatest current uncertainty is cloud feedback, and that includes uncertainty on its possible latitude dependence.

  • David B. Benson // October 3, 2008 at 9:07 pm

    Gavin’s Pussycat // October 2, 2008 at 3:33 pm — Thanks. I find that a rather strange power law; that just means that the climate is somehow behaving differently than many of its fluid dynamic components.

  • Jeff Id // October 3, 2008 at 9:11 pm

    Dave A,

    If the cows are the problem, maybe we should eat them faster.

  • Dave A // October 3, 2008 at 10:32 pm

    Ally the link in my previous post to that here

    http://www.guardian.co.uk/environment/2008/sep/26/ecotowns.ethicalliving

    Welcome to big brother land!

  • Gavin's Pussycat // October 4, 2008 at 5:15 am

    Dave, what eco-fascism? You must have your links mixed up — please let us have the correct one.

  • Gavin's Pussycat // October 4, 2008 at 5:47 am

    As a matter of interest, the misinterpretation of Ian Jolliffe can be found in an RC article by Michael Mann himself,http://www.realclimate.org/index.php?p=98 and thus is probably not Tamino’s invention. Unfortunately comments are closed now.

    It is an oldish article having only nine comments. See especially also comment #5 referring to regEM.

  • Deech56 // October 4, 2008 at 10:51 am

    Hank, it is not customary to notify an author when citing his or her published work. Permission is not needed if the information is publicly available, but to copy figures and tables, one must ask permission of the holder of the copyright (usually the journal).

    I’ve checked out my cites in Google Scholar (prior to that, the Science Citation Index). One important reason for making sure that one is accurately citing a manuscript is that the author of that MS may end up being a reviewer, and an incorrect citation (or failure to cite the author’s work) can work against the submitting author (to put it mildly).

    I hope this helps.

  • Neven // October 5, 2008 at 11:50 am

    It certainly is eco-fascism as it doesn’t deal with the root of the problem: animal concentration camps. Once these get outlawed, which is a perfectly ethical and rational thing to do, meat will stop being so preposterously cheap (and unhealthy) and people will eat less of it.

    Very simple, very beneficial idea, but of course it won’t happen because most people in the West are addicted meat-SUV’s and they are prepared to kill anyone who dares take their burgers from them.

  • Robert Grumbine // October 5, 2008 at 6:22 pm

    Hank: It probably varies by area, but it seems to be quite uncommon professionally to tell people that you’re citing their paper in your journal paper. I’ve never had anyone do so, for instance, and have never done so. The Web of Science does have a service (now) that you can sign up and receive a notice when your paper is cited. I don’t know that many find that very useful. What has happened with more frequency is that someone sends a note asking if they’ve understood your paper correctly. Not terribly common, but happens.

    Where it’s relatively common to send word to an author is when you’re going to publish something highly critical of their published work. But even that’s not a guarantee and depends much on the people involved.
    But that’s also looking at old days.

    For the emerging world of science and science communication, I think it makes sense to send word to scientists whose work you’re commenting on, particularly if critically. It’s now pretty common that they have a blog themselves, or at least read a number and comment from time to time. It’d be a good, new, thing to have scientists discussing their work more directly. Or at least much better informed about how it’s being taken.

  • Lazar // October 5, 2008 at 10:21 pm

    The British government of 1940 instituted rationing on a massive scale because of failure to apply moderate intervention eight years earlier.
    The taxpayer bailout FMA and FMC could have been averted two years ago when the opportunity for moderate regulation presented.
    If the West had acted in the 1980s to head off the combined energy-environment crisis, would we be talking of rationing today?

    When the situation was manageable it was neglected, and now that it is thoroughly out of hand we apply too late the remedies which then might have effected a cure.

    There is nothing new in the story. It is as old as the Sibylline Books. [...] Want of foresight, unwillingness to act when action would be simple and effective, lack of clear thinking, confusion of counsel until the emergency comes, until self-preservation strikes its jarring gong

    – Winston Churchill, “Air Parity Lost”, House of Commons, May 1935

  • TCO // October 6, 2008 at 1:05 am

    George:

    IMPACT OF THE CENTERING CHOICE

    The discussion of whether McI asserted that abnormal centering caused the hockey stick and then later switched to saying that bcps were needed is NON-GERMANE to the question as to whether McI confused the concepts (or misused the terms) of uncentered and short-centered.

    This is a segue, since it’s non-germane, but I’m interested as to when you think the switch occurred? My impression is that McI has for quite a while mentioned that bcps were needed along with the transform.

    I will give you, that McI has certainly exaggerated the impact of the undocumented, abnormal 1920-1980 mean subtraction transform (perhaps not mathematically, but in emphasis). I have bitched at him for this. I think he does this because the centering is so clearly “wrong”. So he wants that “clear mistake” to carry the water of other methods in the paper which are more arguable either way. I have bitched at him for not doing full factorials, for not estimating impacts.

    Basically, McI is a bit lawyerly, a bit high school debate student in trying to get away with these games. I would prefer if he trained his intellect on characterizing the method-data interaction ala Burger05.

    DRAWBACKS OF PCA AND PREFERENCE FOR ANALYSIS OF DATA ITSELF (versus selection of components to keep):

    This is not germane to the issue as to whether McI misunderstood or misused terminology. Aside: I am not an expert, but I agree with your concerns on use of PCA as opposed to the data itself.

    Did McI not understand the Jolliffe PPT or concepts of centering: I find it very unlikely that he did not understand the difference in uncentered and short centered as he is the one WHO FOUND and documented the transform that Mann did.

    As Spence mentioned and you have still not moved forward since (despite lengthy non-germane posts), you have not shown quotes where McI is misunderstanding the terms. You need to do that. I think I know where you are getting confused–it’s a two clause sentence. But I will wait for you to assert your position in minutia before responding.

    —————————-

    Oh…and I very much reject any view that this terminology difference (uncentered and Mann-centered) is so confusing that poor Tammy couldn’t understand it (presumably your comments on McI are meant to show that he didn’t understand the PPT as well). The point is that if you don’t understand something (a term), you ASK AND MAKE SURE. Tammy should have done that first. Instead, he decided to prosyletize for the team, without taking the time on his own to understand what was really in the Jolliffe PPT. Similarly Mann and the RCers citing that PPT (and heck…a POWERPOINT! Can’t we do better than that?) as documentation of Mann-centering were in the WRONG.

  • TCO // October 6, 2008 at 1:13 am

    George:

    1. Btw, I DON’T NEED TO READ all your comments to criticize a specific one. I actually think that you can be right in one area and wrong in another. That is why I butt heads with people on both RC and CA. They are so wrapped in the “winning” and “not looking bad”, that they lose sight of this idea of disaggregation. Also, I’m completely capable of being wrong as well on some specific. That doesn’t reduce me either. I want to know about it and correct it. (I probably need to be shown beyond a reasonable doubt…but I’m capable of learning.) Note: I think it’s safer to start with correcting things that are incontrovertibly wrong, before devolving into things that are more uncertain.

    2. You’re one of the sharpest guys on the site. Keep pushing for insight–thumbs up.

  • Barton Paul Levenson // October 6, 2008 at 9:16 am

    Animal concentration camps? Let’s see, concentration camps… you want to concentrate the animals where you can keep an eye on them, as with the UK camps for the Boers in 1899 or the US camps for Japanese-Americans in the ’40s. Or you want to concentrate them with an eye to exterminating them, as with the Nazi camps, or to keep them away from political participation, as with the Soviet GULAG. Which is our motive for putting animals in concentration camps? Are we bent on wiping out cows and chickens?

  • Dave A // October 6, 2008 at 2:51 pm

    Hank , GP

    I’m all in favour of fairer shares for everyone, and indeed think many people probably eat far too much meat than is good for them.

    What I was referring to was Tara Garnett, the report’s author, who “warned” that campaigns to get people to voluntarily modify their approaches would not work and that therefore government coercion of various kinds would be necessary.

  • george // October 6, 2008 at 4:15 pm

    TCO:

    I found Tamino’s appeal to Jolliffe’s authority on the PCA part 4 thread unimportant.

    So I guess if you want to criticize me for that, you are justified in doing so.

    I’m guilty as sin.

    Show me my cross and i will carry it.

  • Gavin's Pussycat // October 6, 2008 at 6:17 pm

    Dave, context.

    …urged the government to use caps on greenhouse gas emissions and
    carbon pricing to ensure changes were made. [...] “Study upon study
    has shown that awareness-raising campaigns alone are unlikely to work,
    particularly when it comes to more difficult changes.”

    Does the price mechanism constitute “coercion”?

    I’ll grant that the article uses “voluntary” in a muddled way, to mean getting people to change their ways just by talking to them, without even as much as a cost incentive (which would thus be “involuntary”). Not the way I would use it, and no, I agree with the author that isn’t going to help a lot.

    Welcome to big brother land!

    Your credit card company, and various “loyalty card” issuers, know more about you than these folks are trying to find out, and for not remotely as legitimate reasons.

  • Phil. // October 6, 2008 at 8:31 pm

    Lazar, Churchill’s speech to the Commons to which you refer was not in respect of food rationing but rather the failure of the governments of France and Britain to address the problem of Germany’s rearming several years earlier.
    Food rationing was imposed very early in the war (8th Jan 1940) and continued for some items until 1954.

  • AlarmedNotAlarmist // October 6, 2008 at 9:39 pm

    Is there no end to the boneheadedness of Anthony Watts?

    Based on nothing stronger than press reports of an Al Gore speech which describe Gore as blaming climate change for the recent floods in Iowa, Mr Watts bellows …

    “Gore demonstrates he doesn’t understand basic meteorology, much less climate”

    http://wattsupwiththat.com/2008/10/05/gore-demonstrates-he-doesnt-understand-basic-meteorology-much-less-climate/

    Why? Well, the report is from the Des Moines Recorder no less and paraphrases the Nobel laureate thus : “Al Gore attributed the historic floods that devastated Iowa in June to man-made emissions causing more water to evaporate from oceans, increasing average humidity worldwide.”

    But the meteorologist has his rebuttal ready “In my opinion, the biggest error Gore makes is that water vapor in the atmosphere (and water cycle) has a much shorter residence time than his worrisome CO2; days to weeks from evaporation to precipitation, and thus would not be linked to “warming” now, since warming has subsided globally.”

    And he helpfully reproduces the UAH global mean plot to illustrate how much cooler the globe is now than, er, last year.

    Difficult to know where to start. First there’s the lazy journalism. Did Gore really explicitly blame climate change for the floods? Is the author of “Earth in the balance” really unaware of the residence time of H2O? After all he has been studying the topic for decades and he combines this with a lawyer’s training. He knows better than to make any claims that would not ’stand up in court’

    Amother report has a direct quote … “The scientists have warned us for years that the accumulation of global warming pollution in the atmosphere is trapping more of sun’s heat and raising temperatures and in the process evaporation more moisture off the oceans and the warmer air holds more of the moisture,” Gore said. “The average humdity worldwide, everywhere in the world, has gone up dramatically and when storm conditions present themselves more rainfall and snowfall falls at the same time and it causes historic flooding.”

    http://www.radioiowa.com/gestalt/go.cfm?objectid=CB2A235E-F2CB-F0AF-417F75F036B86FEE

    and also an audio file of the speech. It is clear that Gore never explicitly attributes the flooding to GW. He talks about the flooding, he talks about climate change, he correctly says that increased humidity will likely increase precipitation, leading to a greater incidence of serious flooding events. It is the Des Moines Recorder that has joined those dots.

    Secondly, in what way does a record of the global mean temperature disprove the proposal that elevated humidity may have made the floods more probable? Here’s Watts, responding to a poster who points out that humidity has increased steadily for three decades

    “No link, period, between supposed water vapor trend increases since 1975, and the present when the flooding occurred because of two simple facts:

    1) Cooler global temperature since 2007, near zero anomaly i.e. “normal”
    2) Average residence time of water vapor is 10 days ”

    So a 1-year cooling trend in the GLOBAL temperature is enough to disprove any link between water vapour and floods in ONE STATE! Because, do you see? All the ‘extra’ water vapour will be long gone. And anyhow temperatures are back to ‘normal’. (He’s using the UAH data so ‘normal is the 1978-2000 mean).

    Rarely have I seen so much wrongness packed into a single sentence. Remember he is ‘demonstrating’ that Gore does not know basic meteorology. You Americans CAN do irony, then.

    AnA.

  • HankRoberts // October 7, 2008 at 12:18 am

    Dave A, try this:

    http://www.pimco.com/LeftNav/Featured+Market+Commentary/IO/2008/Investment+Outlook+Gross+October+2008+Fear.htm

    “… capitalism is the best and most effective economic system ever devised, but it has a flaw: it is inherently unstable….”

  • TCO // October 7, 2008 at 1:00 am

    George:

    a. That’s non-germane to the horse that I am bloodying the road with, though. ;-)

    b. I don’t have a problem with that. We shouldn’t take people on authority. Instead we should get them to explain things to us.

  • TCO // October 7, 2008 at 1:07 am

    ANA:

    Watts is a numskull. (Not meant as an insult.) Wasting time on him is not worth it. Better to spend time on Mann/SM/Tammy stuff. Watts is just hoi polloi.

  • Bill // October 7, 2008 at 4:13 am

    AlarmedNotAlarmist,

    I certainly won’t try to defend Anthony Watts. Sometimes picking apart his arguments is like shooting fish in a barrel. But I have to say, I can understand his frustration with Al Gore. He chooses the occasion of a political speech to mention Iowa flooding, climate change, increased humidity from climate change, increased precipitation from increased humidity and increased flooding from increased precipitation. You are correct, he never explicitly connects those dots. But then, he is a lawyer and is smart enough not to connect those dots. But how possible is it, do you suppose, that he is standing there, with a bunch of dots in his pocket, just looking for someone who might do the connecting for him? I actually listened to the speech. He never explicitly says anything. But he certainly does leave that to the listener. We may do irony, but it also looks like we aren’t the only ones that do naiveté .

  • Hank Roberts // October 7, 2008 at 4:15 pm

    It’s an election year.
    Bogus is the new normal.

  • HankRoberts // October 7, 2008 at 9:40 pm

    Hat tip to New Scientist:

    http://www.pnas.org/content/early/2008/10/03/0711129105

    Temperature increase of 21st century mitigation scenarios

    1. D. P. Van Vuurena,b,
    2. M. Meinshausenc,
    3. G.-K. Plattnerd,e,
    4. F. Joose,f,
    5. K. M. Strassmanne,
    6. S. J. Smithg,
    7. T. M. L. Wigleyh,
    8. S. C. B. Raperi,
    9. K. Riahij,k,
    10. F. de la Chesnayel,
    11. M. G. J. den Elzena,
    12. J. Fujinom,
    13. K. Jiangn,
    14. N. Nakicenovicm,
    15. S. Paltsevo, and
    16. J. M. Reillyo

    (Affiliations listed by number in original)

    Edited by Stephen H. Schneider, Stanford University, Stanford, CA, and approved August 18, 2008 (received for review November 23, 2007)

  • HankRoberts // October 7, 2008 at 9:43 pm

    TCO, no _single_ event (like no individual person’s actions, hen) can be attributed solely to climate change, let alone to _anthropogenic_ climate change.

    But — point to the real world.

    We expect more problems — like this, and worse — because of the changes we see caused by human activity that we have the ability to control. We should do what we can do.

    That’s appropriate

  • Dave A // October 7, 2008 at 10:01 pm

    Hank,

    I quite agree that there are flaws in capitalism and many aspects of it that I, personally, do not like. But given that any system cannot possibly deliver everything that is necessary for everyone, capitalism does have the ‘virtue’ of providing for more people than any other system we have. We obviously have to curb its excessess, a la the present financial situation, however.

    GP,

    Context, well yes. These steps are the start of a slippery road. Yes my credit card or loyalty card company may know a lot about me but it is not TELLING me what to do nor monitoring me to make sure I conform to certain ways of behaviour.

    But that is precisely what the Sussex scientists and CABE want to do

  • Observer // October 8, 2008 at 9:21 pm

    http://climatesci.org/2008/10/02/an-essay-the-ipcc-report-what-the-lead-authors-really-think/

    Strange. What gives??

  • TCO // October 9, 2008 at 12:10 am

    Hank: I think there’s a mistake in responding to me. Don’t remember that I was talkikng about said subject. (I could be wrong, but am confused if so.)

  • Ray Ladbury // October 9, 2008 at 12:19 pm

    Observer,
    What, pray, is strange about the post. First, Roger will never pass up an opportunity to post something critical about the IPCC, and second, the criticisms are nothing out of the ordinary whenever scientists are called to come to an agreement about what the science supports. What is more, if anything this post emphasizes the urgency of the problem. It should hardly give comfort to complacent apologists for inaction.

  • JimV // October 9, 2008 at 3:29 pm

    A nitpick for Dave A re: “Yes my credit card or loyalty card company may know a lot about me but it is not TELLING me what to do nor monitoring me to make sure I conform to certain ways of behaviour. ”

    Mine is monitoring me. I tried to order a computer monitor (no pun intended) on the Internet recently. It triggered something in their “fraud division”, and despite phoning them and sending them a secure email from their website (and never having failed to pay my balance in full each billing cycle and being at about 6% of my credit limit), I could not get them to authorize the payment. They said they would, but after it was declined for the third time, I gave up and used another credit card to get the monitor which I am using as I type this.

    Most of that was probably just standard bureaucratic error, but they definitely are monitoring me to make sure I confirm to some expectations they have. (Welcome to the 21st century.)

  • Hank Roberts // October 9, 2008 at 3:50 pm

    Yep, I was replying to Bill re Watts.

  • Neven // October 9, 2008 at 11:08 pm

    DaveA,

    Actually your credit card company is telling you what to do (consume and worry about it later) and monitoring you to make sure you conform to certain ways of behaviour (ie pay interest) but that is not the point.

    I agree with you that you cannot TELL people how much meat they can eat. Maybe ‘eco-fascism’ is a bit strong, just like ‘animal concentration camps’ (or for Barton: extermination-reproduction camps ;-) ), but I don’t believe in that kind of coercion either.

    The root of the problem is, like I said, the animal factories where they stuff as many animals they can in cramped spaces. They fill these animals with hormones, antibiotics and genetically manipulated crops from the other side of the world where I don’t know how many football fields per hour is being cleared every hour. I don’t think anybody can deny these animals suffer greatly during their short life and at the moment they’re killed. And for what? For keeping meat at ridiculous low prices so people can eat way too much of it (because they’re addicted to it) and suffer all kinds of health problems that is costing society a lot of money.

    Why not ban these animal factories and get rid of a whole bunch of problems at once? It will increase the price of meat to a more realistic level and then people will automatically eat less of it. Even without the CO2/methane benefit it would be a wise thing to do, IMO.

    I hope I have expressed myself better this time. My father used to own a grill restaurant and I ate (a lot of) meat every day for the first 25 years of my life and it took me a long time of hard thinking to get to the ‘once per week chicken’ I am at right now.

    Anyway, I humbly step back into the shades for the wonderful science discussions I can be no part of. :-)

  • nanny_govt_sucks // October 10, 2008 at 6:28 am

    We obviously have to curb its excessess, a la the present financial situation, however.

    Wow. Someone else blaming capitalism for today’s financial failures? Sheesh. Freddie and Fannie were GSEs, the Fed set conditions for easy money, and the CRA and other “feel good” legislation put people into houses that couldn’t afford them. This is capitalism? Nope. Sorry. This is misguided government force interfering in our free society and mucking things up.

  • David Holland // October 10, 2008 at 11:13 am

    Ian Jolliffe // October 2, 2008 at 11:14 am tells TCO:

    “If only someone had contacted me then (or even better when the ppt was first cited – I wouldn’t been hard to find) to ascertain what my views really were we could have had the discussion much earlier.”

    Well, I think I did:

    —– Original Message —–
    From: David Holland
    To: Ian Jolliffe
    Sent: Thursday, February 24, 2005 12:58 PM
    Subject: Non Centred PCA

    Professor Jolliffe,

    Forgive me for approaching you but you are referenced by Professor Michael Mann as authority for using non centred PCA in his seminal work on historic temperatures. You may have heard him on the Today programme earlier today.

    As the doubt over the validity of his PCA calculations is one of the main arguments of sceptics I am wondering if you looked into the matter and formed a view. I have looked at your brief presentation which Professor Mann cites but do not see the justification. Given that the result of his reconstruction shows a dramatic trend in one part of the time frame does it not appear questionable to have centred his data over that same period?

    David Holland MIEE
    ——————–

    I had not published your reply until this year though I had made it known to several people and attempted to post your views on RealClimate. Indeed in a submission to the Stern Review and letters to Defra I urged that you be consulted on the statistical issues that were in dispute over the “hockey stick”.

    I apologise for publishing your reply, without asking you, which may have brought you here, but I felt resurrecting your alleged support for Mann’s PCA after the Wegman study was inexcusable.

    The real issue that bedevils climate science is the opacity which you and your fellow referees correctly identified. It is a fault for which almost all of us will be paying for years to come in international financial matters. Some of us pressing for transparency in climate science are doing so because we fear equally disastrous consequences if well meaning people refuse to disclose the basis of the science that they say supports their alarming beliefs.

  • Gavin's Pussycat // October 10, 2008 at 12:42 pm

    Rejoice, a sunspot!

    ;-)

  • TCO // October 10, 2008 at 3:27 pm

    Defensive, blog-oriented post by Steve McIntyre at CA and my response to it:

    Post: http://www.climateaudit.org/?p=4064#comments

    My comments:

    What a long-winded, defensive, cowardly, sea-lawyerly post.

    A. The whole thing can easily be explained as, “I looked at the SI BEFORE, reading the comment by Gavin, but neglected to look at it AFTER reading his comment to see if the info was now there.” This is a possibility that had occurred to almost anyone and should have occurred to SM at the time.

    B. Don’t bother with all the drawn-out discussions of how the SI was shifting. They are true. They may be relevant in and of themselves. They may have some place as ameliorating your mistake. But they have no place in a discussion, where you admit you were wrong. And just say “I was wrong, I made a mistake”. Don’t say “was wrong-footed”.

  • TCO // October 10, 2008 at 3:33 pm

    There is a comment there about SM making deliberate obfuscations. I do think that on occasion he has done so, within discussions. Think the other side as well. It is a similar sort of high school debate style. Not saying of course that either side always does so, or that they never have valid points (I should not have to put this caveat in, but I have long learned that everyone here is digital and thinks in terms of 100% for one side or t’other). But they each do that on occasion. And each should stop.

  • Dave A // October 10, 2008 at 10:01 pm

    Nanny,

    Are you denying that there were excesses in the financial sector and perhaps an excessive use of mathematical algorithms and models which did not accurately represent real world situations?

    Now where have I come across that sort of thing before…hmmmm

  • Hank Roberts // October 11, 2008 at 12:52 am

    Nan, at least ask a librarian to help you check your assumptions. Think, Nan. Use your skepticism to test what’s real.

    http://www.frbsf.org/news/speeches/2008/0331.html

    “…There has been a tendency to conflate the current problems in the subprime market with CRA-motivated lending, or with lending to low-income families in general. I believe it is very important to make a distinction between the two. Most of the loans made by depository institutions examined under the CRA have not been higher-priced loans,16 and studies have shown that the CRA has increased the volume of responsible lending ….”
    ____
    16. According to the 2006 HMDA data, 19 percent of the conventional first lien mortgage loans originated by depository institutions were higher-priced, compared to 23 percent by bank subsidiaries, 38 percent by other bank affiliates, and more than 40 percent by independent mortgage companies. Robert B. Avery, Kenneth P. Brevoort, and Glenn B. Canner, “The 2006 HMDA Data,” Federal Reserve Bulletin, Volume 94 (2007), p. A89.
    ——————-
    Those numbers got worse after 2006, much worse — and the predators were not covered by the CRA. You must know this, you can check it.

  • Paul // October 11, 2008 at 2:04 am

    But there was consensus…

    Although the US didn’t sign onto the Basel Capital Accords, I find it sad that the European Banks (who did) couldn’t statistically calculate the correct amount of capital to cover a 2/100 year event never mind a 1/200 year event that the accords call for.

    Statisticians should be licensed.

  • nanny_govt_sucks // October 11, 2008 at 3:54 am

    Are you denying that there were excesses in the financial sector

    What do you mean? Excesses where? Can you be specific?

  • nanny_govt_sucks // October 11, 2008 at 4:11 am

    Hank, you provided a link to a speech from a Federal Reserve bank CEO, hardly an unbiased source in this financial mess. Check the changes to the CRA that occured in the 1990’s under Clinton and their effects. For more:

    The CRA Scam and its Defenders
    http://mises.org/story/2963

  • Gavin's Pussycat // October 11, 2008 at 6:33 am

    TCO

    > sea-lawyerly

    What’s new. This whole thing is so all-over-again. Just like the centering issue in MBH98: you get precisely the same result if you repeat the calculation in a centered way. And Tamino showed, in a way that even I could understand, why this is so.

    So — big deal. Mann overestimated his audience’s mathematical intuition. I guess he never doubted that the centering is a free parameter to be chosen expediently. Yes, he could have been clearer and more forthcoming. I guess that he felt he didn’t owe anything to folks impugning his honesty, just like Gavin now.

    And David Holland repeating the old canard that makes all of climate science stand or fall by one paper and one methodology, replication and independent lines of evidence be damned.

  • TCO // October 11, 2008 at 2:29 pm

    Gavin:

    1. I agree that SM’s gracelessness, longwindedness and lack of balls to say he was wrong directly is something we have seen before.

    2. I disagree that you get exactly the same answer. It’s not EXACTLY the same answer, regardless. And even in terms of being approximately the same, there is still an issue.

    The intermediate result of PC1 was trumpeted and said to be the “dominant mode of variation”. And the hockey stick goes into PC1 with the off-centering, but not without it. True, if you select PCs to include a hockey stick variation, Mann’s algorithm (which is very amplifying) will still promote a hockey stick. But the intermediate result is different. And that was considered important enough to tout it. Heck, seeing that a PC4 (or PC2 or whatever) is what gets promoted brings the amplifying rest of the algorithm much more into suspicion. Instead of touting the “dominant mode of variation”, you have to say, “look the ‘dominant’ mode of variation DIDN’T get promoted”, but that we selected this piece that did. Could still even be possibly correct, if he has a magic fishing pole. But it is a different discussion and puts the whole fishing pole under much more scrutiny.

  • TCO // October 11, 2008 at 2:32 pm

    Oh…and if off-centering is a free parameter, we need to expose what was done, examine that, see where else it can apply etc. Mike Mann has lacked the grace to even STILL admit that he was deficient in just EXPLAINING the algorithm. And btw, Tamino NOW says that he thinks at least the EXPLANATION was needed. I asked Tammy the question as to that before, but he was silent. But now after the Joliffe event, Tammy says this. And I’ll bet you some SERIOUS money that Jollife will say that the off-centering needed to be documented in the methods section of the paper. And Mike Mann STILL REFUSES to address this.

    That’s why I see Mike and Steve as similar.

  • TCO // October 11, 2008 at 2:37 pm

    Gavin:

    When someone like Mike comes along with a very complicated algorithm, that promotes certain signals to get a very difficult answer (what was temp a thousand years ago), we NEED to have full and accurate description of said algorithm. Not press release style articles in ego journals like Nature/Science.

    Heck, for all I know Mike is right. He’s certainly doing interesting things with some sophistication. But encumbant on that is FULL disclosure of the complicated methods.

  • Ray Ladbury // October 11, 2008 at 3:41 pm

    Nanny… and we know that Mises.org must be objective…right! As to excesses: well, how about bundling liars (e.g. no-paper, no-verification) loans and then using traditional loans (large down payment, verified income and employment, etc.) to estimate risk. Face it, Nanny, your Masters of the Universe screwed the pooch on this one, and since they’ve bought the government, the government is just another subsidiary of the NYSE.

  • Gavin's Pussycat // October 11, 2008 at 5:40 pm

    TCO:

    > disagree that you get exactly the same
    > answer. It’s not EXACTLY the same answer,

    The difference is under the noise, which is good enough for me.

    > Heck, for all I know Mike is right.

    I know he is… the replication without decentering demonstrates it.

    > Jollife will say
    > that the off-centering needed to be documented in
    > the methods section of the paper.

    And so say I… but that does not invalidate the science.

    Actually it was rather amusing to see that Mann can be as big an asshol^Wlawyer as the rest of them when the right button is pushed… “intellectual property”, right :-) Not nice, very human.

    > When someone like Mike comes along with a very
    > complicated algorithm, that promotes certain
    > signals to get a very difficult answer (what was
    > temp a thousand years ago), we NEED to have full
    > and accurate description of said algorithm.

    Very complicated? Very textbook. The decentering was the only thing that was special, and not properly explained in the original article. Scientists are getting very difficult answers all the time. If different scientists get the same answers in different ways (and what that means is its own discussion) they must be doing something right.

  • nanny_govt_sucks // October 11, 2008 at 5:41 pm

    Ray, the Austrian economists have been predicting boom and bust cycles since the 1930’s. What do they have to gain in this compared to the Fed that is in serious CYA mode right now. Bundling liars? Do you mean the way that Freddie Mac bundled securities and sold them? Freddie Mac was a GSE! My masters of the universe!?!?. Where do you get this Ray? Libertarians oppose government intervention in our free society. The Fed, GSEs, The CRA are all to blame and all originate from our “benevolent” government.

  • cce // October 11, 2008 at 6:08 pm

    http://www.traigerlaw.com/publications/traiger_hinckley_llp_cra_foreclosure_study_1-7-08.pdf
    http://www.traigerlaw.com/publications/addendum_to_traiger_hinckley_llp_cra_foreclosure_study_1-14-08.pdf

    Summary:
    Our study concludes that CRA Banks were substantially less likely than other lenders to make the kinds of risky home purchase loans that helped fuel the foreclosure crisis. Specifically, our analysis shows that:
    (1) CRA Banks were significantly less likely than other lenders to make a high cost loan;
    (2) The average APR on high cost loans originated by CRA Banks was appreciably lower than the average APR on high cost loans originated by other lenders;
    (3) CRA Banks were more than twice as likely as other lenders to retain originated loans in their portfolio; and
    (4) Foreclosure rates were lower in MSAs with greater concentrations of bank branches.

  • Gavin's Pussycat // October 11, 2008 at 7:56 pm

    > That’s why I see Mike and Steve as similar.

    And that’s what I’d call a false equivalence, TCO. Heck, has anyone called you a fraudster? Not in your face, I’m sure.

    McI doesn’t just need to apologize, he needs to change his act. Both Mann and Schmidt are scientists whose records speak for them. Their reputation and dedication to science defines them. They have their warts, but fraudsters don’t get far in science.

    What can McI put against that? The one paper that made it to a respectable journal is iffy, as you well know (or invalid, but not actually proven so due to methodological disclosure “issues” :-/ ).

    Anyway, under the circumstances I see why Mann or Schmidt see no basis for any discussion. For what it’s worth, neither do I.

  • Dave A // October 11, 2008 at 8:02 pm

    Nanny,

    excesses that you can’t see? - take the ultra dark glasses off

  • David Holland // October 11, 2008 at 8:55 pm

    Gavin, // October 11, 2008 at 6:33 am

    I think people should finish one thing before they start another. I also think you should attack the weakest part of a fort. If the ‘hockey stick’, and more to the point palaeoclimatology as a whole was invulnerable you wouldn’t bother to defend it. I don’t see many blogs for and against Newtonian physics with threads this long.

    And what replication? What independence? The nearest you got to independence was Wegman and NRC, 2006, which Gerry North said concluded that you could not put any numerical probability on current temperatures being warmer than any time in 1300 years.

    Hey, but let me quote a copy of an email that landed in my computer last week. The redactions are mine but I am not giving prizes for guessing them as it is too easy.

    15/07/2008 11:27:44
    Email from: REDACTED
    Subject: Re: Further FOI requests
    Dear REDACTED I have made enquiries and found that both the REDACTED and REDACTED are resisting the FOI requests made by Holland. The latter are very relevant to us as UK universities should speak with the same voice on this. I gather that they are using academic freedom as their reason. I have been given the name of the person who is dealing with this matter at REDACTED. It is: NAME AND ADDRESS REDACTED. I urge you to contact him so that we can get our act together. Best wishes REDACTED

    Get that! We’ll cite academic freedom to stop someone finding out why we didn’t cite Wegman and NRC properly. Never mind the Aarhus convention – it doesn’t apply to climate science.

    Like TCO says the hockey team may be dead right and I could be totally wrong but I can’t remember a case where people refused to let you see how they did what they claimed to have done that did not turn out either wrong or worse.

  • TCO // October 11, 2008 at 10:25 pm

    Gavin’s Pussy: It’s not “very textbook”. The algorithm has multiple stages where different choices are made on what type of method to use. Burger05 did a very nice full factorial of those choices showing that different decisions on the algorithm give very different reconstructions. Heck, we STILL don’t know how the error limits are calculated. The methods description is insufficient to duplicate the drawn graphs.

    In terms of “scientist get the same answer with different methods, thus supporting blablabla”, I DISAGREE. Look at Burger05, for how much the reconstructions vary. Look at Moburg05. Heck, look at Mann08 versus Mann98.

  • David B. Benson // October 11, 2008 at 10:28 pm

    David Holland // October 11, 2008 at 8:55 pm — You do understand that in many regions it is now warmer than at any time in the last 5000–7000 years, by direct observation?

    90–7000 years ago:

    http://www.npr.org/templates/story/story.php?storyId=914542
    http://www.physorg.com/news112982907.html

    7000 years ago:

    http://news.softpedia.com/news/Fast-Melting-Glaciers-Expose-7-000-Years-Old-Fossil-Forest-69719.shtml

    5200–5500 years ago:

    http://news.bbc.co.uk/2/hi/science/nature/7580294.stm
    http://researchnews.osu.edu/archive/quelcoro.htm
    http://en.wikipedia.org/wiki/%C3%96tzi_the_Iceman

  • TCO // October 11, 2008 at 10:29 pm

    GP:

    I’m not saying that McI and Mann are equal in evilness. I’m saying that they commit a similar sin and it smells the same…and when they do so, I find them similar.

    ;-)

  • Hank Roberts // October 11, 2008 at 11:10 pm

    > I don’t see many blogs for and against
    > Newtonian physics with threads this long.

    Media change. Cranks turn in only one direction.

    Preparing the Battlefield: Fighting For and Against Newton after 1715 165
    The Newton Wars in France 233
    The Invention of French Newtonianism …
    search.barnesandnoble.com/The-Newton-Wars-and-the-Beginning-of-the-French-Enlightenment/J-B…/9780226749457

    #
    The baptist Magazine - Google Books Result
    1837
    … among themselves, have divided for and against Newton, on the points in debate. Brewster says, that M. Biot, Newton’s French biographer, ‘ well observes …
    books.google.com/books?id=71gEAAAAQAAJ…

    Try Darwin for a more recent example.

    False Fear? - The Panda’s Thumb
    … examined” (including, I expect, evidence for and against Newton, evidence for and against Lavoisier, evidence for and against Einstein, etc etc etc?) …
    http://www.pandasthumb.org/archives/2006/02/false-fear.html

  • Hank Roberts // October 11, 2008 at 11:42 pm

    Nan, read:
    http://www.mcclatchydc.com/251/story/53802.html

    CRA loans were not the hot fast profit transactions that made brokers rich and homeowners underwater. They were way too closely watched.

    Seriously, read something besides political PR. McClatchy (link above) is a good start.

  • TCO // October 12, 2008 at 12:56 am

    I think melting glaciers are one of the most powerful peices of evidence for it being warmer now than before.

    But puh-leeze. Glaciers don’t validate the Mann witches brew of PCA, etc.

  • L Miller // October 12, 2008 at 1:34 am

    “Ray, the Austrian economists have been predicting boom and bust cycles since the 1930’s.”

    The power to predict something that has a pattern going as far back as records are kept astounds…

    The fact is there have been far fewer boom/bust events in the last 70 years, and the ones we have had are minor in comparisons.

  • Ray Ladbury // October 12, 2008 at 1:52 am

    Nanny, you know the joke: Economists have predicted 10 our of the past 4 recessions. This one was utterly predictable. My only surprise is that it took so long. The problem wasn’t the bundling per se. Bundling if properly done, should reduce risk. Howerver, the loans themselves were insane. Isn’t it funny how few libertarians are left on Wall Street. You must be feeling kind of lonely these days.

  • steven mosher // October 12, 2008 at 2:46 am

    Here you go TCO.

    “An article about computational science in a scientific publication is not the
    scholarship itself. it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set
    of instructions which generated the figures.”

    Claerbout.

    http://www.reproducibleresearch.org/

  • nanny_govt_sucks // October 12, 2008 at 3:04 am

    cce, Hank,

    I find it funny that I can say “The Fed, GSEs, The CRA are all to blame …” and the only response I get is about the CRA. Your objections are addressed in the link I provided above.

  • Bill // October 12, 2008 at 5:21 am

    I’m not sure I find melting glaciers to be proof that it is warmer now than it was ‘before’. 7000 years ago, it was warm enough for a forest to grow on the spot where that glacier was. Is it warm enough for a forest to grow there now? Doesn’t a melting glacier really just prove that it is warmer now than when it was when that glacier first ‘froze’?

  • nanny_govt_sucks // October 12, 2008 at 5:54 am

    Howerver, the loans themselves were insane.

    See the link above.

    Isn’t it funny how few libertarians are left on Wall Street. You must be feeling kind of lonely these days.

    Jim Rogers and Peter Schiff are doing just fine, as far as I’m aware.

  • Gavin's Pussycat // October 12, 2008 at 7:08 am

    > I’m not saying that McI and Mann are equal in
    > evilness.

    OK, point taken. And with Mann, I suspect carelessness and inexperience more than evil.

    > I DISAGREE. Look at Burger05, for how much the
    > reconstructions vary. Look at Moburg05. Heck, look
    > at Mann08 versus Mann98.

    I did. Is there something wrong with my eyes, or is there really no recent credible reconstruction allowing for a MWP as warm as today?

    David Holland, I too would make a point of stonewalling attempts at digging in old dirt — especially if, based on experience, they will lead to more verbal abuse and misrepresentation — when there is a new, major study replicating and strengthening the old one, and committed to complete availability of data and code to the satisfaction of those looking for fraud… yes, I too am a scientist and not a lawyer.

    As for defending MBH98/99, somebody has to do it when unfairly attacked. I will do the same for Newtonian physics when somebody manages to unfairly attack it without making a fool of himself… Einsteinian physics would be a more credible metaphor. Yes, I would stand up for ‘entartete Physik’.

  • Gavin's Pussycat // October 12, 2008 at 10:10 am

    David Holland says:
    > I also think you should attack the weakest part of a fort.

    Very revealing. Do you realize that this is one step away from a conspiracy theory? You should get out more…

  • TCO // October 12, 2008 at 12:42 pm

    Mosh-pit:

    Interesting place. I will check it out.

  • TCO // October 12, 2008 at 2:36 pm

    The paper itself is very interesting. I like the philosophy:

    http://rr.epfl.ch/17/1/reproducible_research.pdf

  • David Holland // October 12, 2008 at 3:15 pm

    David B. Benson // October 11, 2008 at 10:28 pm

    There you defending again. All the evidence finds you cite are local rather than regional, but might show that we are reaching or just surpassing one historic temperatures at that place. It still leaves you with the issue that with much lower CO2, temperatures have been much higher in the past than in the little ice age.

    If you believe Al Gores graphic is was warmer at the poles in the majority of the recent past interglacial periods with lower CO2 than now.

    Then again there is local evidence for it being much warmer in the past. What is at issue is whether it was globally warmer in the past with lower CO2 and as of today no one can say with absolute certainty. What the NRC, 2006 said was that in 2006 it was not possible to to put any number to the probability for reasons that they spelt out.

    Hank Roberts // October 11, 2008 at 11:10 pm
    THIS LONG!
    The link to Pandas Thumb had 40 comments and 2 uncontroversial mentions of Newton. This continuation thread has 250 comments and far more than 2 on issue!

    Why are none of you objective people, with some exceptions of course, engaging with the serious issue of the opaqueness of climate science? You may right in what you passionately believe but then we thought our bankers and regulators knew what they were doing, not to mention the CIA and MI5 with their WMDs.

  • Ray Ladbury // October 12, 2008 at 3:47 pm

    Dave Holland, Climate science is only opaque to those who don’t understand climate science–or science in general. I do not know every last detail of the analysis that led to the discovery of the top quark, but that doesn’t mean I don’t know enough to see that the evidence is cogent. (My PhD was in particle physics, so I can follow the details). Likewise, I understand the physics and enough of the detailed analyses to see that it is pretty much a lead-pipe cinch that we are altering climate. I don’t need a bunch of amateurs to confirm or refute that. If they have anything to say, they can publish, just like the professionals. Maybe they might even want to learn a little of the science before doing so, and then they’d understand what errors/omissions are important and which ones are trivial!
    Science is a competive enterprise. Analyses are supposed to be independent, and there is little value in simply replicating someone else’s results (reproducing, yes, replication, no).

  • Gavin's Pussycat // October 12, 2008 at 5:14 pm

    David Holland, what Ray said.

    Furthermore:

    If you believe Al Gores graphic is was warmer at the poles in the
    majority of the recent past interglacial periods with lower CO2 than now.

    “Recent past interglacial periods”? Do you mean the interglacials over the last three million years or so? Yes, they were warmer than now and had less CO2. One word: Milankovich. Forced variability (OK, three words).

    Do you know the difference between forced and unforced variability?

    The reconstructions for the past 10-20 centuries are about unforced variability. We want to know empirically if what both theory (modelling) and the instrumental period tell us, that the unforced variability power spectrum of global temperature is of type 1/f with known, small amplitude — and thus cannot explain the changes seen in the late 20th century –, holds also for longer time scales in the pre-instrumental period. Sure there is bound to be forced variability too there — think volcanos, deforestation, … –, but we may at least expect to get an upper bound.

    Just for kicking off your learning process.

    About CIA/MI5 with their weapons of mass deception, you don’t want to go there. Also there, a community of professionals on the ground that knew their stuff and got it right; but contrary to the climate situation, they weren’t allowed to speak out, nature of the job and all that (David Kelly did it anyway and we know how that ended). Not exactly an example of transparency. And then when things went south, they were scapegoated by the politicians. You know all this unless you also have a shelf full of alternative history writing… I’m surprised you voluntarily brought this up :-)

  • TCO // October 12, 2008 at 5:47 pm

    Ray:

    I understand science. Have my union card in one field and have worked in several others. MBH98 is poor explication. You still need to read the Wilson book for good old time religion on reporting results. Mann has all the marks of a young Turk scientist. Boastful papers, big grants, uni-hopping, lack of detail on methods, etc. Have seen it in EE and the physical sciences.

  • cce // October 12, 2008 at 6:58 pm

    Nanny,

    I was addressing “your link” aka “The CRA scam and its defenders.” The CRA banks made proportinally more reponsible loans than “independent” banks, despite the mandate to reach out to the poor, and they were lesss likely to repackage these mortgages. That argues for stronger regulation, not less.

    If you want to rail against laws enacted under Clinton, rail against the repeal of the Glass-Steagall Act (thanks to Gramm and Leach), which allowed the banks to turn into these unregulatable monstrosities that were “too big to fail.”

  • David Holland // October 12, 2008 at 8:48 pm

    Ray, I’m not sure what you are on about.
    How do you disprove cold fusion? How do you catch guys like Schön. If you try to do their studies in an independent way and don’t get the same answer you will just get told “you didn’t implement my method correctly”.

    NRC, 2006 states “Our view is that all research benefits from full and open access to published datasets and that a clear explanation of analytical methods is mandatory. Peers should have access to the information needed to reproduce published results, so that increased confidence in the outcome of the study can be generated inside and outside the scientific community.”

    I agree. Who here is willing to say they do not, or that it is acceptable to say you didn’t calculate R2 when your code says you did? Or that it OK to say it’s standard PCA and its not and then refuse to disclose your code? Or it OK not to publish the raw data?

    I doubt that “team”, referred to in the email that I pasted earlier and who are involved in “resisting” my EIR (Aarhus) enquiries into WGI procedures, will come here and say they disagree with NRC, but privately they will still not willingly disclose any information on the IPCC assessment process even though by international agreement it should be open and transparent.

    Who here thinks the IPCC working papers should be kept secret and why?

  • Hank Roberts // October 12, 2008 at 9:22 pm

    Yep. Nan, people who’ve hated the CRA because it limits quick profits are trying guilt by association.
    Racial redlining clearly happened. Testers with identical financial info, different only skin color, went out applying for loans.
    http://www.google.com/search?q=bank+agency+redlining+“testers”l
    The CRA is a response to the proven bigotry in lending. It worked. Payback results have showed a lot of lenders they were wrong in what they’d assumed about lending to people who weren’t white.

  • Hank Roberts // October 12, 2008 at 9:22 pm

    http://www.google.com/search?q=bank+agency+redlining+“testers”

    without the typo above

  • Dave A // October 12, 2008 at 9:49 pm

    Ray

    >. Analyses are supposed to be independent

    Didn’t Wegman show that there was a considerable amount of interdependence between the climate research groupings and that they all tended to quote/cite/use one another in various ways?

    Isn’t the distinction between replication and reproduction sometimes very fine?

  • Dave A // October 12, 2008 at 10:23 pm

    GP,
    Careful how you go on the WMD subject. Fact was it was accepted by most western intelligence agencies that Saddam was developing WMDs, not least because Saddam himself was also saying this to anyone who cared to listen.

    The David Kelly affair was indeed tragic but I suspect it is far more complex than has been portrayed in the media, who like nothing more than the hint of conspiracy.

    So I’m surprised you voluntarily decided to comment and prolong the discussion!

  • Hank Roberts // October 12, 2008 at 11:46 pm

    > it was accepted by most western intelligence
    > agencies that Saddam was developing WMDs

    Well of course. We have the receipts for the materials.

    And it turned out he’d wasted the material we’d sold him unproductively and had nothing left.

  • Ray Ladbury // October 13, 2008 at 12:56 am

    TCO, Here’s what I don’t understand: You claim to understand science, and yet you are fixated on a single paper by a single group from over a decade ago. To me, that doesn’t speak of a particularly deep understanding of science. It really doesn’t matter how you do the analysis. The conclusion you come to is that it’s freakin’ warm out there and getting warmer. The conclusion doesn’t need to be any stronger than that to support the consensus view, and no matter what you do (honestly, anyway), you can’t come up with a reconstruction that supports the contrarians.
    Dave A. asks how you refute cold fusion: Well, first, I’d suggest studying physics for about 10 years so you understand the science from the description in the journal well enough that you can try to reproduce it. I don’t say MBH98 was perfect. It is deserving of a degree of respect as the first such multi-proxy reconstruction, but it’s now ancient history.

  • nanny_govt_sucks // October 13, 2008 at 4:13 am

    The CRA is a response to the proven bigotry in lending. It worked.

    Wow. Enjoy the CRA “working” (along with the Fed, GSEs, HUD, etc…) as we enter the next Great Depression. Perhaps there was a problem with bigotry in lending as you point out, but as usual the government “solution” turns out to be much worse than the original problem.

  • Gavin's Pussycat // October 13, 2008 at 4:26 am

    Didn’t Wegman show that there was a considerable amount of
    interdependence between the climate research groupings and that
    they all tended to quote/cite/use one another in various ways?

    Yes they did. You may also remember the response with the picture of the Solvay Congress where Einstein, Bohr and the other greats of quantum theory interacted…

    I always found the argument not just wrong, but shameful. It’s an implicit conspiracy theory and only makes sense if every single one of these researchers are fraudulent in a way that I know at least some of them not to be.

  • Gavin's Pussycat // October 13, 2008 at 6:33 am

    Fact was it was accepted by most western intelligence agencies that
    Saddam was developing WMDs, not least because Saddam himself was also saying this to anyone who cared to
    listen.

    I see you’re sharing a bookshelf with David Holland… fact is that they all agreed that Saddam had an ambition to get WMD, and was boasting about it for internal consumption, but nobody really knew how far he had gotten because they lacked good sources inside the country.

    The evidence I was referring to was that produced to justify invading now, rather than wait out the Blix process; and that was all manufactured under political pressure. The 45 minute missiles (the ‘dodgy dossier’), the Niger yellowcake, the bio-weapons vans; even the Prague meeting with Mohammed Atta. On both sides of the Atlantic.

    The intelligence people were told to produce the “right” intelligence, rather than left to their proper jobs of producing valid intel. Cheney even set up his own parallel organization, not trusting the existing ones to produce the “right” result — shadows of climate auditing.

    Even in a compact, secretive little community like an intelligence service, peer review and other time-honoured best practices can and do work. The alternative doesn’t, just as it doesn’t in science. I was responding to DH’s claim of a lack of transparency in this sample of intelligence work. He is very right in his claim — but very wrong in blaming the community of working professionals.

    So I’m surprised you voluntarily decided to comment and prolong the discussion!

    You’re welcome. It’s a useful example.

  • steven mosher // October 13, 2008 at 7:01 am

    TCO,

    I think Ray and others should read it as well.

  • Barton Paul Levenson // October 13, 2008 at 10:25 am

    David Holland writes:

    Why are none of you objective people, with some exceptions of course, engaging with the serious issue of the opaqueness of climate science?

    It’s not opaque if you actually get off your lazy butt and study it.

  • TCO // October 13, 2008 at 1:02 pm

    Ray:

    It is possible to drill down for detail and to resolve a single question, without it being a generalization on some meta-issue. Please disaggregate. Also, please do NOT only assume that fault-finding on a small issue is part of some campign on a large one. Rather, it is a way to test and learn. McI also has the same problem with me, when I peck away at chinks in his armor.

    Real scientists could care less if some bad parts get weeded away. It enhances understanding of the whole, to have that done. And they will UNILATERALLY concede flaws within their own papers once exposed, EVEN IF the other sides makes false (or debated) generalizations as well. The true scientist cares only about truth.

  • TCO // October 13, 2008 at 1:05 pm

    Ray:

    I can respect the attempt, within MBH98, even if flawed, as ancient history, and an attempt to solve a problem. BUT NOT if the authors resist identification of the flaws. Capisce?

  • TCO // October 13, 2008 at 1:06 pm

    And to this point, the same people saying “moving along, now, pay no mind to MBH” have resisted examination, corrigendums, comments, etc. on the flaws within that paper.

  • TCO // October 13, 2008 at 1:11 pm

    Ray,

    I also think part of the interesting aspect of MBH98 is that it is so complicated and there are so many little puzzles within it, of exactly what the method was, how that method works in different circumstances, etc. It is far more sophisticated than taking a poll and averaging results. Hence, the room for digging into it. Heck, remember that Joliffe as reviewer said that the arguments of both Mann and SM were so involved that it required access to the code(s) and data, to really resolve them, along with time for working with them, and probably access to ask the authors questions. Basically, the thing is fascinating!

  • TCO // October 13, 2008 at 1:14 pm

    And this really is an intellectual fascination. There are places like Burger05, where studies show some bad aspects of MBH. Similarly, there are places where more thorough examination of some SM critiques (Huybers, WA) show that some of the effects, while existing, have been over-trumpeted in magnitude by SM making test conditions that over-dramatize the impact.

    I’m all about learning the truth.

  • Hank Roberts // October 13, 2008 at 3:07 pm

    Nan, if you’d read, you’d change your mind.
    CRA loans have proven safer, not more risky.
    Lending to neighbors locally even if they were not white people was, when the local bank branches were forced to do it, a good business decision.
    Surprise, just taking deposits in poor areas but not lending in the same area hadn’t been their best idea.
    Only deposit-taking institutions are covered by CRA; only those places that took deposits from individuals and lent to the same neighborhood made loans under CRA. And they’re better than average loans — lower rates, better facts, better repayment.

    Focus. Not all government is bad. Not all business is good. Not all money is real.
    Paying attention to real people with real money and real jobs and real houses made good loans.

    If you think Hoover would’ve done better than Roosevelt, my parents — who grew up during the Depression would tell you otherwise. They taught me not to rely on the government — and that government gets stolen if not watched.

  • David Holland // October 13, 2008 at 3:19 pm

    Barton Paul Levenson // October 13, 2008 at 10:25 am

    Actually it’s on what you call my “lazy butt” that I have done most of my 10 or more years of study of this area as most people do, either at the computer or reading papers.

    You say it is not opaque. So you have a go. Ask the IPCC if they would kindly give you the working documents discussing the over six month extension to the publication deadlines for WG1 on the SOD and copies of the unpublished expert reviewers comments as a consequence of it and the authors responses to them.

    The email I put up was from a top UK climate scientist suggesting they should all work together to ensure I do not get to see the information. Let me know how you get on.

    I would be delighted to engage in a serious discussion if one shows up but otherwise I will go back to more productive work.

  • Ray Ladbury // October 13, 2008 at 4:00 pm

    TCO, I’m just not sure how learning everything there is to know about a single paper out of, say, a few thousand gets you very far in terms of understanding the science. Scientists tend to focus on what the authors did right, since this is what is most likely to help them with their own research in the future. The mistakes do get corrected–not by being “audited,” but because people don’t want to repeat them. Another thing for you to consider, MBH98 really was pretty groundbreaking. It was fairly audacious to assume that such an undertaking would be possible at all. And realizing it took an astounding amount of very tedious work. Now who but a young Turk is likely to take on such a high-payoff-high-risk endeavor? Do I think Mann could have handled the situation better? Yes. However, there’s something about being subpoenaed and accused of fraud that takes the cooperative wind right out of your sails!
    Also, FWIW, I can sympathize with an author who makes some errors in a complicated statistical analysis. I’ve had to feel my way in the dark at times along that path as well. However, I’m sure if somebody like Wegman had been “auditing” quantum mechanics, he’d have taken Dirac to task for developing his “delta function”. Something along the lines of “You idiot. It’s a distribution, not a function…” Anyway, I’m happy to have the braintrust over at CA et al. obsess on this sideshow while climate science continues to progress around them.

  • Gavin's Pussycat // October 13, 2008 at 5:19 pm

    TCO: http://www.realclimate.org/index.php/archives/2006/04/a-correction-with-repercussions/#comment-12584

  • Northw // October 13, 2008 at 7:47 pm

    As someone in the financial services industry, it is interesting to read the knee-jerk ideological responses of Nanny and Hank. As always, the truth (he says authoritatively as the keeper of all truth :-)) is somewhere in between. There is a lot of blame to be shared.

    Of course, there was a breakdown in the private sector and markets. Subprime lending went well beyond anything required by the CRA or facilitated by the GSEs. Countrywide, Golden West (or should I say Wachovia?) and other parties engaged in the origination, packaging and sale of loans adopted reckless, foolhardy and (potentially corrupt) practices in search of profits. Some of this behavior was encouraged by the compensation systems in financial institutions where employees are rewarded on short-term performance – there is no clawback of million dollar bonuses for losses in future years (except, to the extent compensation is deferred or restricted to company equity but this brings in the fallacy of composition problem).

    Fannie and Feddie are also not innocents and an attempt to whitewash the GSEs is equally blinkered. Without going into all the details of their excessive and reckless leverage among other business strategies and their abuse of what many saw as an implicit government guarantee, the obvious question is: if they were well managed and regulated, why are they in conservatorship? Why have preferred shareholders lost billion of dollars? People defending them tend to focus on their falling percentage (if not absolute value) of overall sub prime mortgages, but ignore their dominant position in Alt-A. In addition, let’s have less of people defending the CRA by throwing out statistics of how profitable CRA mortgages are and how low the default rates are. In reality, we don’t know this and the data is not tracked on an ongoing basis. The last survey performed by the Fed was in 2000 – using eight year old data that well predates the housing crisis is just foolish.

  • nanny_govt_sucks // October 13, 2008 at 9:01 pm

    Of course, there was a breakdown in the private sector and markets. Subprime lending went well beyond anything required by the CRA or facilitated by the GSEs. Countrywide, Golden West (or should I say Wachovia?) and other parties engaged in the origination, packaging and sale of loans adopted reckless, foolhardy and (potentially corrupt) practices in search of profits.

    Sold to whom?

  • Dave A // October 13, 2008 at 10:02 pm

    Hank,

    > we have receipts for the material

    That is cheap and pathetic.

    The major suppliers of arms and nuclear material to Iraq were, wait for it, Russia and France. The US and UK were bit part players. At the time of the Iraq war Russia and France were owed billions of dollars by Saddam for their military goods and both countries were vociferous in their opposition to military action.

    Wonder why? Go figure.

  • HankRoberts // October 13, 2008 at 10:30 pm

    True, far from all the receipts are to US suppliers.
    “We” is the long list of those who should’ve known better and reduced their use of oil long before then.

  • HankRoberts // October 13, 2008 at 10:32 pm

    > sold to whom?
    http://www.google.com/search?q=packaging+and+sale+of+loans

  • Dave A // October 13, 2008 at 10:43 pm

    GP,

    Hindsight can be a wonderful thing. Recognition that bureaucracies can be driven in certain ways is also useful. Both aspects will come back to haunt climate science equally as well as the Bush administration.

    Remember that after several frustrating years UNSCOM had been kicked out of Iraq in 1998 and Saddams obfuscation attempts meant no one trusted anything the Iraqi administration said.

    Five years on there was, as you said, no up to date info about Saddam’s WMD progammes, but at the same time there was no reason to believe that he had given up his previous intentions. He had, of course made extensive use of chemical weapons in the Iran-Iraq war yet after the first Gulf war had denied for 4 years that he had produced chemical weapons.
    His programme was then revealed by Hussein Kamils who defected to Jordan. (Of course he and his family, later returned to Iraq after being promised an amnesty and were then murdered by Saddam despite being married into Saddam’s family).

    Moreover, read Hans Blix very carefully. He is a ccnsummate bureaucrat, extremely adept at sitting on the fence and producing reports that mean all things to all men

  • TCO // October 13, 2008 at 10:43 pm

    Ray:

    1. The more complicated it is, the more it needs to be murder boarded.

    2. I’m a little reluctant to take you as a guide on philosophy and practice of science, when you show zero interst in even looking at joyful things like Wilson’s book. Or when you’re at NASA and haven’t even read Katzoff’s Langley underground masterpiece.

  • Ray Ladbury // October 13, 2008 at 11:24 pm

    TCO, look, don’t take this the wrong way, but do you really think I give a tinker’s damn whether you accept my word on science or not. I’d love to read Wilson’s book. Want to know the last time I read something that wasn’t work related? I am literally too busy DOING science these days to read about how to do it. That isn’t by choice. I hope it will change (soon, PLEASE!). I used to enjoy that stuff, and I have a STACK of books waiting to be read.
    Look, science is about progress–that is increasing our understanding of the world. You’re not going to do that by fixating on a single paper.

  • TCO // October 14, 2008 at 12:51 am

    Ray,

    If you don’t have time to engage, fine. You seem to have a lot of time for this blog, though. It’s almost as if you don’t have time for losing an argument, but do have time for defending scientists who share your politics.

  • Lazar // October 14, 2008 at 1:38 am

    Although I sit philosophically with Ray’s approach, I actually don’t disagree with TCOs either. I admire the pragmatism of Ray’s vision, and the purity and intensity of TCO’s, they are complementary and the scientific world is better that they exist together.

    TCOs approach is like a watchmaker, likes picking things apart, seeing how they work, then changing the cogs. That requires grit and hard work. I know for a fact that both have done serious science. Guys, you’re making me feel all inadequate waving your science around like that.

    Request that you chill out.

  • Joel Shore // October 14, 2008 at 1:49 am

    Dave A says: “Hindsight can be a wonderful thing.” This line always pisses me off. Some of us aren’t just arguing in hindsight…We were arguing at the time in 2002 and early 2003 that the evidence for Saddam’s having significant WMDs was weak and circumstantial and the evidence of this constituting any sort of serious threat to us was non-existent. Besides which, after a couple months of inspections, it was clear that (while it is always hard to prove the absence of something like WMD) the intelligence we had for believing there to be significant WMD…or at least that which we were supplying to the inspectors…were wrong, see here: http://www.cbsnews.com/stories/2003/01/18/iraq/main537096.shtml

    Dave A says: “Remember that after several frustrating years UNSCOM had been kicked out of Iraq in 1998 and Saddams obfuscation attempts meant no one trusted anything the Iraqi administration said.” They were not kicked out of Iraq; they were withdrawn from Iraq so that the U.S. and Britain could launch attacks. Yes, this was in reaction to Iraq not being very cooperative with the inspectors. However, this non-cooperation was because Iraq was claiming UNSCOM inspectors were spying on them…a charge that was subsequently confirmed as fact (see http://www.fair.org/index.php?page=1645 ).

    The media, by systematically whitewashing this history, allowed people like yourself to assume the most nefarious motives for Saddam’s actions (”He must be hiding something big”) when in fact there were much more straightforward motives if you actually knew the facts correctly.

  • Gavin's Pussycat // October 14, 2008 at 4:45 am

    > Guys, you’re making me feel all inadequate waving your science around like that.

    Lazar, don’t. Contrary to both, you’re actually practicing this science under discussion in your own, admittedly little (but still bigger than anything I have done) ways.

    Don’t even discount the possibility of publishing some of the things you are doing in a real journal. I’ve seen papers published that were less worth it and had less substance.

    TCO: bringing up papers that were already pointed out to you some time ago to be flawed, is not very nice. Don’t tell me that Mosher’s Alzheimer is getting to you ;-)

  • Barton Paul Levenson // October 14, 2008 at 9:00 am

    TCO writes:

    I also think part of the interesting aspect of MBH98 is that it is so complicated and there are so many little puzzles within it, of exactly what the method was, how that method works in different circumstances, etc.

    And this is what distinguishes you and the rest of the anti-Mann attack dogs from scientists. Real scientists DON’T CARE what Mann’s exact method was. They care what his conclusion was and if that conclusion can be replicated independently.

  • Hank Roberts // October 14, 2008 at 11:11 am

    Dave A’s repeating a frequently asserted error above. It’s easy to look up — tho’ you’ll find Rumsfeld’s off-the-cuff misinformation in searches far more often than the facts.

    http://www.accuracy.org/newsrelease.php?articleId=608

    “As media outlets correctly reported at the time, it was the U.N.’s Richard Butler who pulled his inspection team out of Iraq in December 1998 to clear the way for a U.S. bombing attack. Unfortunately, many media have adopted as fact the official myth that Secretary Rumsfeld repeated.”

    Plenty of references; a few are included at:
    http://mail.openprivacy.org/pipermail/think/2003-January/000057.html

    Dave, you should try skepticism sometime. It’s a skill any good reference librarian can teach you — though often uncomfortable what you learn.

  • TCO // October 14, 2008 at 1:25 pm

    Actually I think its what distinguishes me and Zorita and Huybers and Burger from both the defenders and the attackers. And we are on the side of Feynman, God and apple pie. ;-)

    Defenders care only about his conclusion and its political import and if some other line of logic (say glaciers) supports the conclusion, they could care less if the MBH result is flawed. Similarly McI (when misbehaving) and his hoi polloi cheerleaders (in general) care only about discrediting the methods, thus when one of their lines of attack is shown to be over-dramatizing a flaw, they resist examination/correction of that.

    but I repeat myself, BPL. And toot my own horn. Unless there is really some serious reason to dig into this more, I propose to agree/disagree on the philisophical issues, since we are not really digging into them more and since it devolves into me patting myself on my back.

  • TCO // October 14, 2008 at 1:38 pm

    GP:

    1. I agree that Lazar does more work than either of us. Heck, usually I don’t even understand the math and stuff.

    2. I don’t remember the line of argument you refer to. Perhaps Alzheimers, perhaps I never agreed to your point of view. Probably both. Let it go…if it’s tedious. Or dig into it again.

    P.s. My impression is that there is lots wrong with Moburg05. (I haven’t studied it in detail, but McI says so and he has.) It’s NOT a situation of deniers (including myself in the class) jumping on the study with the MWP and touting that…but a situation of just very simply and child-like saying, “you said that all the studies duplicate the Mann result, using alternate methods…but when I compare the MBH98 curve to Moburg05, they are pretty different. It’s just like the simple, child-like comment I made when Mann (or RC or Gavin) and Tammy touted Jollife as in favor of short-centering and I just said “hey guys, are you really sure that “decentering is the same as short centering…and besides he seems to have a lot of issues even with decentering…plus it is just a damn powerpoint”. :-)

  • Lazar // October 14, 2008 at 4:29 pm

    As far as MBH98, I agree with TCO about Mann’s responsiveness to criticism, reluctance to admit error, theoretical flaws (I don’t know where TCO stands as to their practical impact, I believe their impact does not alter the conclusions of MBH98) and inadequate documentation.
    He doesn’t need me to row his boat, but he has consistently, and imv fairly, raised flaws in ‘his’ ’sides’ arguments and pounded them for it, stuff which ‘my’ ’side’ haven’t considered, e.g. the inconsistency of claiming seven, eight, or whatever it is today, ‘lack of warming’ years as significant together with ‘long-term persistence’ as an ‘explanation’ of recent warming. He is a scientist. I know he’s a scientist. I’ve read his papers. TCO can be very personal in his criticisms, if anything he is more so toward Steve McIntyre than Mike Mann. The guy is very sharp, very honest.
    TCO can row his own boat, but can we make today national ‘be nice to TCO’ day? (only one per year :-).

  • Gavin's Pussycat // October 14, 2008 at 4:43 pm

    TCO, OK… playing games then. But then you’re not saying much more than that there are a lot more different ways that a method can be applied wrongly than rightly, and a lot funnier results can be obtained that way… which I would certainly agree with ;-)

    Ad Lazar, I do understand what he does, either directly or after doing some homework, and I am impressed if indeed he has no science training background as he claims.

  • Dave A // October 14, 2008 at 9:42 pm

    Joel Shore

    I am sorry to have “pissed you ” off but there is a certain amount of hindsight involved here. There were doubts, but there had been no inspections since Dec. 1998. There was no real reason to expect that Saddam had not resumed his WMD programmes.

    OK, when I said “kicked out” I might have been overstating it but Butler withdrew because he didn’t see much point in continuing given Saddam’s obfuscation and playing of games.

    Hank,

    My response to you was about the role you ascribed to the US in financing Saddam’s nuclear and weapons programme. You are wrong in that and the links you provide are irrelevant to these facts. I repeat that the major suppliers of nuclear technology and arms to Iraq were Russia and France.

    The US , and particularly the UK’s contribution were miniscule in comparison

  • Dave A // October 14, 2008 at 10:00 pm

    Hank,

    > you should try scepticism

    I am naturally sceptical but this applies across the board - sceptical of government, sceptical of so-called experts, and especially sceptical of ’scientific consensus’.

  • Lazar // October 14, 2008 at 10:22 pm

    Hey guys, GP, TCO… thanks for the encouragement :-)

  • Lazar // October 14, 2008 at 11:42 pm

    Those who are out-there, in the field actually doing science… my heroes and heroines, bringing light to a world filled with gloom.

  • elspi // October 15, 2008 at 12:57 am

    Dave
    The problem is your dishonestly.

    “I repeat that the major suppliers of nuclear technology and arms to Iraq were Russia and France.”

    and
    “no up to date info about Saddam’s WMD progammes”

    No mention that the chemical and biological
    weapons technology was supplied by the US.

    Bait and switch you [edit]

  • steven mosher // October 15, 2008 at 12:59 am

    TCO,

    Murder boards. Man that brings me good memories, despite my alzheimers.
    Peer review is for pussies.

  • dhogaza // October 15, 2008 at 4:01 am

    There were doubts, but there had been no inspections since Dec. 1998.

    One of the most blatant dishonest statement ever posted to the internets.

    Peer review is for pussies.

    If science is for pussies, why has science contributed so much more to our well-being than Steven Mosher?

    Bloviating bullshit is for wimp-assed pussies.

    Just to be clear as to where bloviating, vs. peer-reviewed science, stands in regard to any rational hierarchy.

  • Gavin's Pussycat // October 15, 2008 at 7:42 am

    > Dave A says: “Hindsight can be a wonderful thing.”

    Yeah. Meaning “if only the invasion and everything hadn’t gone so disastrously wrong, nobody would have called us on our lies”.

  • Rainman // October 15, 2008 at 4:39 pm

    dhog: Do you even know what Mosh is talking about when he says ‘Murder Boards’? If you haven’t been in the military, you have no clue.

  • Ray Ladbury // October 15, 2008 at 6:21 pm

    Rainman, I presume dhog reads. Also, the term murder board has caught on in some grad schools. What I believe he was objecting to was the dismissive attitude shown by SM. We could equally well say that if you haven’t been through grad school in physics/biology…, you have no clue, or 2 a day football practices in East TX in August or whatever. However, what would be the point other than aggrandizing our own experience at the expense of others?

  • dhogaza // October 15, 2008 at 6:55 pm

    If you haven’t been in the military, you have no clue.

    It’s used outside the military, too. Preparing for orals has no relevance for practicing of science.

  • David B. Benson // October 15, 2008 at 7:28 pm

    Rainman // October 15, 2008 at 4:39 pm —With some trepidation, I’ll ask. What is a ‘Murder Board’?

  • Gavin's Pussycat // October 15, 2008 at 7:55 pm

    TCO, Mosh, Rainman, the Wikipedia article needs you :-)

  • HankRoberts // October 15, 2008 at 9:04 pm

    > when I said “kicked out” I might have been
    > overstating it

    Or you might have been repeating a deliberate lie created to fool people like you that had worked.

    Care to say whether you knew the truth and didn’t say it, or posted what you did believe?

  • Dave A // October 15, 2008 at 9:38 pm

    Dhogaza

    >One of the most blatant dishonest statement ever posted to the internets.

    I’m flabbergasted. Are you disputing that there had been no inspections since Dec 1998?

    GP,

    The point I was making was at the time
    there was a certain logic to the way events unfolded. It was only afterwards that the flaws became apparent, ie with hindsight.

    It is ever thus with human affairs as the future will surely come to judge about so-called AGW

  • dhogaza // October 16, 2008 at 12:35 am

    Are you disputing that there had been no inspections since Dec 1998?

    They had to pull the inspectors out in 2003 just before we invaded. They weren’t finished with their work at the time of invasion, but that was only because Bush wouldn’t wait until they were done. Because the inspection process was making clear that those of us saying no WMDs existed nor any program to create them were being shown right the Army wanted to invade before it got to hot.

  • Rainman // October 16, 2008 at 1:17 am

    Preparing for orals is nothing compared to an actual board. It’s a go/no-go process. They are freaking merciless once you make a mistake.

    My boards had to do with operation/emergency response in a shipboard nuclear power plant. You HAVE to know what you are doing, no matter what is going on.

    Incidentally, I think the reason the wikipedia article is so sparse pertains to the difficulty in describing such an experience. The pressure of a year+ of study and training coming down to a 4+ hour grind with people that will pounce on ANY mistake… ugh. You fail, the next time will be even tougher.

  • Rainman // October 16, 2008 at 1:29 am

    Ray: My issue with dhog is taking Mosh’s flippant comment about peer review (a process used outside of science) and making it out to be an attack on science itself.

    None of us here take science lightly.

  • Philippe Chantreau // October 16, 2008 at 1:42 am

    Dave A, are you suggesting that inspections did not resume in Nov 2002 under Hans Blix?

  • Philippe Chantreau // October 16, 2008 at 1:48 am

    This link is informative about post 1998 inspections.
    http://www.un.org/News/Press/docs/2003/sc7682.doc.htm

  • dhogaza // October 16, 2008 at 4:55 am

    Incidentally, I think the reason the wikipedia article is so sparse pertains to the difficulty in describing such an experience. The pressure of a year+ of study and training coming down to a 4+ hour grind with people that will pounce on ANY mistake… ugh. You fail, the next time will be even tougher.

    As opposed to five years working on a PhD thesis which you have to defend, and if you fail, very possibly you’re given walking papers with a condolence MS?

    The pressure of a year+ …

    Just to re-emphasize.

    Sorry, denigrating scientists this way just ain’t going to cut it.

    The military thinks far too highly of itself.

  • TCO // October 16, 2008 at 6:59 am

    Mosh-pit: continuing in discussion of philosophy of science:

    Have you read THE GOAL by Goldratt? At the front of my copy, there is a two-page “Introduction to the Revised Edition” with interesting comments on science and education…basically making the point that what is important is the scientific attitude…and that it has application in business problems. Very Feynmannian, I thought.

  • Gavin's Pussycat // October 16, 2008 at 11:22 am

    Dave A:

    The point I was making was at the time there was a certain logic to the way events unfolded. It was only afterwards that the flaws became apparent, ie with hindsight.

    Yes, sure, you were fooled just like the rest — by design. At the time I opposed the invasion — I frown on war crimes — but obviously lacked proof that Saddam had no WMD. Which was pretty clear though already back then to those that bothered to do a little homework and keep their eyes and minds open. Have been saying “I told you so” since, for what little it is worth.

    A healthy reaction to being fooled and kept in the dark like that is anger. Angry yet Dave? Did you want to be fooled?

    It is ever thus with human affairs as the future will surely come to judge about so-called AGW

    What the future will come to judge, is how we (our scientists) knew what we know, and with what confidence, yet the political sphere was kept from timely action. Fog can be as effective an instrument of opacity as is darkness. It’s those wielding it now that should stand judgement.

    In hindsight, when the goods are in.

  • Ray Ladbury // October 16, 2008 at 1:10 pm

    Rainman describes boards: “The pressure of a year+ of study and training coming down to a 4+ hour grind with people that will pounce on ANY mistake… ugh. You fail, the next time will be even tougher.”

    During my PhD studies, we were required to pass 4 written 3 hour exams over two successive Saturdays in areas we had been taking classes in for 18 months. In addition, there were short answer questions that could cover anything that could be “general knowledge” for a physicist. Each of us was then were subjected to 2 1.5 hour oral exams in front of a panel of 3 faculty members. I remember one question was “How many piano tuners are there in the city of Denver–and you’d better be right within 10%.” On my first exam, the Department Chair was on the panel. He had a habit of looking for any little thing that might trip you up and asking that on the first question–things like maybe a department seminar you might have missed one week. The guy who came in after me was on his second attempt. The prof looked up with a sneer and said, “Oh, you, we flunked you last year didn’t we?”
    One of my fellow students had spent the previous 6 years on a nuclear sub–he said the two experiences were comparable.
    As Steven Wright said, “Never criticize a man until you walk a mile in his shoes. Then if he gets mad, you’re a mile away and you’ve got his shoes!”

  • Rainman // October 16, 2008 at 3:54 pm

    Ray: As I don’t have a PHD, I’ll have to take your (and your fellow student’s) word it is comparable. Thanks for the insight. (I would have thought the acedemic community would be less rabid in their boards… guess not)

    dhog: Perhaps I should quantify a little. That last year is just the year you’ve spent on your current ship. The completion of Nuclear Field A School (3-6 months depending on rating), Nuclear Power School (a 2 year course written by Penn State compressed into 6 months by the Navy), Nuclear Prototype Training (another 6 months hands on application), then going to a specific ship and learning how it works (they all have their issues).

    At any given time if you are not up to standard, you are ‘de-nuked’ and have to do something less challenging. The drop rate when I was going through (1986-1988) was 30% for A School, 30% for Nuke School, 30% for Prototype. I’d guess about5% once you hit the fleet. This is after scoring high enough on the ASVAB to be considered to take the Nuclear Field Entrance Exam and passing that.

    How does that compare to going through college and getting BS/MS/PHD? I honestly have no idea. I never went to college. After the Navy, I got into software development (self taught). I contracted around for a while and was hired as one of the senior developers at my current job. (Certified MCDBA, MCSE, MCSD back in 2001. I do need to recertify on current tech… more reading)

    As to denigrating scientists, was Mosh’s statement denigrating or tongue in cheek?

Leave a Comment