Open Mind

You Bet!

January 31, 2008 · 140 Comments

In a previous post, I stated that the current trend of global temperature increase is steep enough that by 2015 the data will probably establish that the planet’s temperature has not stabilized or begun to decline. Specifically:


By 2015, the expected temperature from the regression-line fit and that expected from the “no change” hypothesis will be far enough apart that we’ll probably be able to distinguish between them with statistical significance. In other words, by 2015 either we’ll know that global warming has changed (possibly stopping, possibly reversing), or there’ll be no more of this “global warming stopped in 1998” malarkey.

It’s entirely possible that the numbers may give us statistically significant evidence even before 2015. If so, I’ll report the result. If it turns out that global warming is not continuing (which I seriously doubt), then I’ll readily admit that I was wrong. In fact, I’ll be keeping a close eye on the future evolution of global temperature and actively looking for such results, so if we do get valid evidence that global warming has stopped, I just might be the *first* one to say so.

If 2015 rolls around, and temperature have [sic] risen above present-day levels by enough to be demonstrably significant, I’ll announce that too. Will those who have so often chanted the “no more global warming” mantra admit that they were wrong? Somehow, I doubt it. I suspect that instead, they’ll be flooding blogs, newspapers, magazines, and Faux News reports with claims that “global warming stopped in 2013.”

I didn’t intend this as a “challenge,” but the idea was loosely based on various proposals I’ve seen for “bets” about future global average temperature. The “challenge” aspect has taken on a life of its own among the readership, so I’m willing to make it official. I will also reiterate that the divergence between warming and no-more-warming isn’t married to the year 2015! That was a choice of a future year by which the issue is likely to be statistically distinguishable, but a significant result might be available before then, or not until after. Also, the choice was based on intuition, not on any quantitative analysis. Eventually a significant difference will emerge; if not by 2015, then not long after.

I’ll also emphasize that I’m not interested in betting money on it. I have a family to provide for, and I can’t afford to have my money tied up in escrow while waiting years for a bet to be settled. Besides, I don’t gamble. Although if I did …

So I’ll outline more precisely what terms I would suggest for a wager, challenge, or whatever you like to call it, on the question “Is global warming continuing or has it ceased?” I’ll try my best to be fair to both sides: if you firmly believe that global temperature will continue to rise and you’re eager for a wager, this is the one to make; likewise, if you’re firmly convinced that global temperature has peaked and you’re eager for a wager, this is the one to take. The winner will be decided by whether or not warming actually continues, not by any clever design of the terms and conditions of the wager. In my opinion, settling such a challenge should be based on statistical significance, not on choosing a specific year, so this proposal is based on statistical significance rising above the noise level rather than on the temperature at a fixed future time (though as we’ll see, there is a limit to how long the bet can last).

First let’s review the data leading up to the statement. Here are global average temperature estimates, all set to the same zero point (using the reference period 1950.0 to 1980.0), from NASA GISS, NCDC, and HadCRU:

The trend lines are determined from the data covering the time span 1975-2000. The graph is intended to show that the data after 2000 are not inconsistent with the claim that the trend is continuing; in fact they follow the line with “wiggles” (i.e., noise) that make trends impossible to identify over short time spans but clear over longer ones, and indeed that is so. For the terms of this wager it is not necessary to recompute the data using the 1950.0-1980.0 reference period I’ve used in this graph; the graph just gives us the essential idea.

And the idea is this: if global warming is continuing, global temperature will continue to follow a rising trend plus noise. If global warming has ceased, it will stay at its present level (or decline) plus noise. So we should outline what global temperature will be in those two cases.

First let’s look at annual average temperature. I used the data from 1975 to the present to estimate the trend, and used the standard deviation of the residuals (after subtracting the trend from the data) to estimate the noise level. The trend is upward at 0.018173 deg.C/yr, and the standard deviation of the residuals is 0.0959 deg.C. Here are the annual averages (black dots), together with the trend (solid red line), and (dashed) lines two standard deviations above and below that trend line:

bet1.jpg

This gives the expected range of annual averages — between the dashed red lines — and 95% of all years should fall within those lines. If one wishes to be precise, these limits should be modified to account for the red-noise character of the data, but in this case it’s a small correction and I’m going to ignore it. Note that all the annual averages from 1975 to the present fall within the dashed red lines. As an aside, the above graph is about as clear a graph as I’ve seen showing that there’s really no evidence — none whatsoever — that global warming has stopped.
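
For anyone who wants to reproduce these numbers, here’s a minimal sketch of the calculation in Python (my own illustration, not the code actually used for the graphs; years and temps would hold the GISS annual data from 1975 on):

    import numpy as np

    def trend_and_noise(years, temps):
        # Ordinary least-squares fit of a straight line to the annual data.
        years = np.asarray(years, dtype=float)
        temps = np.asarray(temps, dtype=float)
        slope, intercept = np.polyfit(years, temps, 1)
        residuals = temps - (slope * years + intercept)
        # ddof=2 because two parameters (slope and intercept) were estimated.
        sigma = residuals.std(ddof=2)
        return slope, intercept, sigma

    def effective_sample_size(residuals):
        # One common way to gauge the red-noise correction mentioned above
        # (not necessarily the exact adjustment intended here): treat the
        # residuals as AR(1) noise with lag-1 autocorrelation rho, so that
        # n values behave like about n * (1 - rho) / (1 + rho) independent ones.
        r = np.asarray(residuals, dtype=float)
        r = r - r.mean()
        rho = np.sum(r[:-1] * r[1:]) / np.sum(r * r)
        return len(r) * (1 - rho) / (1 + rho)

Applied to the 1975-present GISS annual means, the first function gives the figures quoted above: a slope of about 0.018 deg.C/yr and a residual standard deviation of about 0.096 deg.C.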

We can of course extend those lines into the future. We can also quantify the hypothesis that global temperature hasn’t changed since 2001; the average from 2001 to the present is 0.5432 deg.C, so we can simply draw a line at that value and dashed lines two standard deviations above and below it. Putting the “no-more-warming” range in blue, we get this:

bet2.jpg

If the “continued warming” hypothesis is correct, future values should fall between the dashed red lines. If the “no more warming” hypothesis is correct, future values should fall between the dashed blue lines. If the earth has actually started cooling, future values will eventually dip below the blue lines.
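
In numbers, the two bands look like this (a sketch using the values quoted above; the lower-edge formula is the one that appears in the bet statement below):

    SIGMA = 0.0959    # standard deviation of the residuals (deg.C)
    TREND = 0.018173  # warming rate (deg.C/yr)

    def warming_range(year):
        # Red band: the 1975-present trend line plus and minus two sigma.
        lower = 0.277455 + TREND * (year - 1991)
        return (lower, lower + 4 * SIGMA)

    def no_warming_range():
        # Blue band: the 2001-present mean plus and minus two sigma.
        # Note the upper edge: 0.5432 + 2 * 0.0959 = 0.735, the threshold
        # used in the bet below.
        return (0.5432 - 2 * SIGMA, 0.5432 + 2 * SIGMA)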

So here’s the bet based on annual averages: the still-warming side wins if temperature goes above the top dashed blue line; the not-warming side wins if temperature goes below the bottom dashed red line.

If temperature rises above the upper dashed red line, we have evidence that the planet is warming even faster than the present trend. In that case the still-warming side also wins. Alternatively, if temperature falls below the lower dashed blue line, we have evidence that the planet is actually cooling, and the not-warming side wins.

bet3.jpg

Finally, I’ll add one last condition. It’s unlikely but possible that a value will fall outside either range just because of noise. So, my “bet” is that as soon as there are two years (not necessarily consecutive) which fall in either decisive region, the side with two decisive years is declared the winner. Therefore:


If annual average global temperature anomaly (land+ocean) from GISS exceeds 0.735 deg.C for two (not necessarily consecutive) years before it falls below the value 0.277455 + 0.018173 (t-1991) (where t is the year) for two (not necessarily consecutive) years, then the still-warming side wins; if it falls below the above equation for two years before it rises above 0.735 for two years, then the not-warming side wins.
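
Expressed as code, the decision procedure might look like this (a sketch only; anomalies is a hypothetical variable holding the GISS land+ocean annual means keyed by year):

    def judge_bet(anomalies):
        # anomalies: dict mapping year -> GISS land+ocean annual anomaly
        # (deg.C). Returns 'still-warming', 'not-warming', or None if the
        # bet is still undecided.
        warm_years = cool_years = 0
        for year in sorted(anomalies):
            t = anomalies[year]
            if t > 0.735:
                warm_years += 1
            if t < 0.277455 + 0.018173 * (year - 1991):
                cool_years += 1
            # The first side to collect two decisive years wins.
            if warm_years >= 2 and cool_years < 2:
                return 'still-warming'
            if cool_years >= 2 and warm_years < 2:
                return 'not-warming'
        return None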

By the end of 2015, it is in fact likely but by no means certain that one or the other side will have won. Eventually, the two regions get far enough apart that it’s certain to happen. In fact, by 2028 we’re sure to have two years outside the limits of one or the other side, so the bet can’t take longer than 2028 to be decided. But this test isn’t based on a particular future year; it’s possible (but highly unlikely) that either side could win if 2008 and 2009 both fall into its winning region.

Although it’s unlikely, it’s possible that this bet could remain undecided until the end of 2028. This is because the noise level is very high compared to the signal: the standard deviation of the noise is about 5 times as large as a single year’s trend increase! We can reduce the noise level without affecting the trend rate by using, not annual averages, but 5-year averages. That gives us a graph like this:

bet4.jpg

It’s straightforward to modify the terms of the test in order to base it on 5-year averages rather than annual averages. It’s also straightforward to adapt the test method to HadCRU or NCDC data rather than NASA GISS.
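
The averaging step itself is simple; here’s a minimal sketch (again my own illustration, with the final incomplete block handled the way the last point in the graph above is plotted):

    import numpy as np

    def five_year_means(years, temps):
        # Average consecutive 5-year blocks. The final block may be
        # incomplete, in which case both its time and its temperature are
        # averaged over whatever years are available so far.
        years = np.asarray(years, dtype=float)
        temps = np.asarray(temps, dtype=float)
        mids, means = [], []
        for i in range(0, len(years), 5):
            mids.append(years[i:i+5].mean())
            means.append(temps[i:i+5].mean())
        return np.array(mids), np.array(means)

For roughly white noise, the standard deviation of a 5-year mean is smaller than that of annual values by about the square root of 5; the red-noise character of real temperature data makes the actual reduction somewhat smaller.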

I’ve seen other proposals for wagers, some of which strike me as perhaps unfair, with overly complicated conditions that may be designed to take advantage of statistical naivete as much as to depend on the future progress of global temperature. Others seem fair but poorly chosen, with too much chance of a false result due to random noise. If any part of this proposal favors one side over the other for purely statistical rather than climate reasons, I swear that it’s an oversight, not intentional. This proposed test is designed to be a fair test of competing ideas, and to be settled by a genuinely significant result, not by accidental changes due to “noise” in the climate system.

A final note: in further reader comments it was pointed out, quite correctly, that even if AGW is completely correct it’s still possible for temperature to show no increase long enough for the “no-further-warming” side to win this wager, IF unexpected events happen to alter the behavior of the climate. For example, large volcanic eruptions cover the world with aerosols which lead to significant cooling (such as that seen after the Mt. Pinatubo and El Chichón eruptions) even if AGW is completely correct and uninterrupted; a series of large eruptions in succession might cause enough cooling to put future temperatures into the “no-warming” region. Likewise if sulfate aerosols from the booming economies of India and China grow so great as to overwhelm the warming influence of greenhouse gases. I leave it to those who have money to bet to estimate the probability of such things happening, and what additional conditions to impose to account for such a possibility. As for me, I suspect (even though I haven’t estimated the probabilities!) that it’s unlikely enough that I’d still take this bet (for continued warming) without additional caveats. Of course that’s easy for me to say, since I’m not a gambling man.

If, however many years from now, the no-more-warming side wins the bet, and no unequivocal caveats are identified, then I’ll admit that our understanding of global climate is insufficient and that we can’t rely on the prognostications of the climate science community. I doubt it’ll happen. If, on the other hand, the still-warming side wins the bet … what will be the response from the skeptic side?

Categories: Global Warming · climate change

140 responses so far ↓

  • chriscolose // January 31, 2008 at 3:28 am

    skeptics have it made…no warming = AGW has stopped (or never was), warming = “something else is causing it.” You’ll never convince them

    CO2 should be up around 400 ppmv by then. I haven’t looked into this much, but I would guess that with thermal lag (and taking into account just CO2), we’d be at the temperature response of roughly a 360-370 ppmv atmosphere, which would be a ΔT of ~1 K. Barring any unpredictable cooling events like volcanoes, the trend will clearly continue as it has.

    I think that you’ll start being able to take into account a decline in aerosol effects rather soon, and if so that would be the equivalent of letting the other GHGs (e.g. methane) “show up,” since the net RF and CO2 RF are about the same. Without a substantial cooling mechanism I do not see a plausible way to avoid an increased trend, unless negative feedbacks are much bigger than we think.

    – C

  • Greg Simpson // January 31, 2008 at 3:51 am

    To reduce the chance of a false positive, perhaps due to volcanic aerosols or a temporary increase in solar output, the requirement could be changed to two non-consecutive years. Note that a period of three consecutive years always includes two non-consecutive years, so three years outside the bounds would always win the bet.

  • Aaron Lewis // January 31, 2008 at 4:32 am

    Global warming is about heat. Most of the heat ends up in the water. Why are so many of these discussions focused on air temperature?

    (PS very good work)

  • Hank Roberts // January 31, 2008 at 4:53 am

    Now, about ocean pH, that should be done about the same way, right?

  • Raven // January 31, 2008 at 6:03 am

    Interesting.

    It has been my position that temps would have to be at least 0.2 degC higher in 2015 to validate the AGW position. This was a crude calculation based on the CO2 sensitivity ranges presented in the IPCC report.

    Your graphs come up with that same spread for different reasons.

    That said, I don’t trust the GISS and HadCRUT datasets for the same reason I would not trust unaudited financial statements produced by Enron. I realize the satellite measurements have their own issues, but there are two competing groups using the same dataset, which helps ensure that self-serving data manipulation is kept to a minimum.

    Would you take the bet on the average of UAH and RSS or is it limited to the GISS/HadCRUT?

    Instead of betting money would you be willing to publicly acknowledge that AGW alarmists got the science wrong if you lost? Would you be willing to publicly apologize to skeptics who you have denigrated?

    If not what would it take for you to do that?

    The way I see it bets for money are a red herring since most people are prudent and would never bet more than they could afford to lose even if they were 95% sure about the outcome. Instead of fooling around with bets for money you should state what it will take to change your mind.

    [Response: Your implication that GISS or HadCRU is guilty of “self-serving data manipulation” is mean-spirited, offensive, and unsupported by any evidence. Unless you can offer EVIDENCE with documentation to back it up, don’t repeat it here.

    Your questions make me wonder whether you actually read the post. What part of “I’ll also emphasize that I’m not interested in betting money on it” is unclear? What part of “If, however many years from now, the no-more-warming side wins the bet, and no unequivocal caveats are identified, then I’ll admit that our understanding of global climate is insufficient and that we can’t rely on the prognostications of the climate science community” is unclear?

    Shame on you.]

  • cce // January 31, 2008 at 6:53 am

    GISS, Hadley/CRU, and NCDC are three competing groups using largely the same dataset.

    And to repeat tamino’s last paragraph, “If, however many years from now, the no-more-warming side wins the bet, and no unequivocal caveats are identified, then I’ll admit that our understanding of global climate is insufficient and that we can’t rely on the prognostications of the climate science community. I doubt it’ll happen. If, on the other hand, the still-warming side wins the bet … what will be the response from the skeptic side?”

  • Raven // January 31, 2008 at 7:04 am

    Yes I did misread your post about not wanting to make bets for money. I apologize for that. I had mixed you up with James Annan who has frequently talked about betting money.

    For my part I would also concede that the AGW point of view is most likely correct if the warming trend continued into the ranges you identify. I feel your targets are a fair representation of the two possibilities.

    My suspicion of the GISS and HadCRUT datasets comes from a general suspicion of any situation where there is a conflict of interest. One could argue that most executives would not deliberately manipulate their books even if they did not get audited. However, I would never invest money in a company that did not allow its books to be audited by third parties.

    A lot of money is riding on the temperature data, so I feel there is no excuse for allowing the ‘perception of possible bias’ to go on. The fact that many resist acknowledging the potential for bias simply reinforces my view that the data should not be trusted unless it is audited by third parties.

    [Response: All the data used by GISS can be downloaded from the web; the procedures they use are documented in the peer-reviewed literature; even the code for their computer programs is freely available. Their books are “open” and have been subjected to intense scrutiny.]

  • Raven // January 31, 2008 at 7:28 am

    US taxpayers pay a fair amount of money for the GISS data to be produced. Expecting volunteers to replicate this work for free is not enough. Also, volunteers have attempted to use the computer programs that were made available but were forced to give up because of poor documentation and OS/compiler problems.

    You cannot say it has been audited unless proper funding has been provided to people whose sole objective is to identify problems and ensure they get corrected.

    More importantly, there have been a number of people producing analyses that suggest the data is quite biased, yet these criticisms are ignored (Anthony Watts, Ross McKitrick, Roger Pielke Sr.). You cannot claim that the GISS data has been subject to intense scrutiny if legitimate criticisms are regularly dismissed by the gatekeepers.

    [Response: What a load of crap. Watts, McKitrick, and Pielke have generated ZERO real evidence — just as you have zero evidence of any misconduct — but they’ve slung a lot of unfounded insults — just like you have.

    It seems you’re one of those who, no matter how closely the data and results are examined, will just invent yet another reason to claim it’s not enough. Keep moving the goalposts.]

  • Raven // January 31, 2008 at 7:52 am

    It does not make a difference if you think they have zero evidence. What matters is whether their criticisms have been dealt with reasonably. Your response is typical and demonstrates that they are not being dealt with reasonably.

    Right now the keepers of the data are free to arbitrarily dismiss any criticism. This state of affairs is unacceptable. Governments should take control of the data aways from the agencies developing the models. The conflict of interest is huge and would not be tolerated in any other field.

    [Response: It doesn’t matter that they have zero evidence of any wrongdoing or mistake? It doesn’t matter that after years of trying to discredit the surface record, they’ve managed zip? What dream-world do you inhabit?

    It would appear that from your point of view, it’s the *truth* that doesn’t matter.]

  • cce // January 31, 2008 at 8:01 am

    It’s also worth pointing out that the Surfacestations.org project documenting weather stations in the US has, to date, produced data and trends from the “highest quality” stations that almost exactly track the GISS analysis for the lower 48 states.

    This is likely (>66% probability) the reason why discussion of the USHCN and the UHI effect has virtually disappeared in recent months. They were not “forced to give up.” They didn’t like the answer, so they moved on to the next trumped-up controversy.

    You can download JohnV’s “opentemp” program and run it yourself.

    http://www.opentemp.org/main

  • dhogaza // January 31, 2008 at 9:21 am

    US taxpayers pay a fair amount of money for the GISS data to be produced. Expecting volunteers to replicate this work for free is not enough.

    Uhhh Uhhh grunt grunt oh my aching back ugghhh ugghhh …

    That’s the sound of Raven moving goalposts.

    First it’s “if the data’s not available for auditing, it can’t be trusted, as the three groups working with it all have a conflict of interest”.

    Then it’s “oh, shit, the data and code’s all publicly available, so, ummm, now what, oh yeah, expecting volunteers …”

    There’s plenty of money available on the denialist side from the fossil fuel industry to finance a legitimate alternative to the mainstream surface temp computations.

    Why don’t they spend money doing that, rather than, say, have the heartland institute finance a shill conference (with paid speakers and free transportation and hotels for politicians), as they’ve recently announced they’re doing?

    Could it be that they know that the surface temp record is sound? Could it be that they know that Christy and Spencer took their best shot with the first UAH satellite temp reconstructions and rather than knock down the surface temp record, only exposed themselves as being shoddy scientists who couldn’t even get their algebra right?

  • dhogaza // January 31, 2008 at 9:23 am

    And before Raven suggests that the fossil fuel industry doesn’t finance legit debunking of the surface temp record because “AGW believers” have a lock on the literature …

    They could easily get such a paper published in their own shill journal, Energy & Environment.

  • Bruno De Wolf // January 31, 2008 at 10:12 am

    As an (amateur) climate skeptic: I applaud the idea. I think both sides fail to produce claims which are falsifiable in a decent timeframe. So, yes, I’m almost in.

    One thing I don’t agree with in your method: GISS as the only measurement. As you know, GISS, and measurements by surface stations in general, is highly contested (correctly or not, I leave this out of this debate) among skeptics. Ideally, you should combine one metric based on surface stations and one based on satellite measurements (UAH or RSS).

    For example: if a year falls outside the band you’ve drawn for both metrics, it’s a point. In this case: I think 2 years out-of-band should be enough to name a winner. There are other possibilities too, but I think you get the idea of involving satellite measurements as well.

    [Response: All the brouhaha to contest GISS, and surface measurements in general, has generated no reason to have less confidence in its correctness. Quite the opposite, it has made confidence in the surface record far greater. But it has made effective propaganda points for those who simply refuse to accept the truth.]

  • JCH // January 31, 2008 at 1:20 pm

    “would not trust unaudited financial statements produced by Enron …”

    The financial statements produced by Enron were audited. Every one of them. The auditing firm was one of the most respected in the industry.

    Every single financial statement produced by the savings and loan industry was audited. Just from memory, but I do not think a single going-concern opinion was issued in an industry that sustained pervasive collapse.

    Auditing has a purpose, and it also has severe limits. I’ve been around a lot of auditors. My wife controls auditors. They cannot do what you seem to think they can do, and I can recite tons of proof. Auditors f up about as often as Barney Fife; as in, every show.

  • Don Fontaine // January 31, 2008 at 2:22 pm

    Great post.
    A minor observation. In the 5-year average graph the last data point appears to be plotted at 2007, but all prior points appear to be plotted at the midpoint of the five-year period, e.g. 2002.5, 1997.5, etc., so the spacing for the last point isn’t the same as that for the earlier ones. Is this as you intended?

    [Response: The final 5-year period (2005-2010) is incomplete, so the average temperature has been plotted at the average *time* of the data so far.]

  • Phil. // January 31, 2008 at 2:32 pm

    Re GISS reliability, I’d like to hear from Raven how GISS is able to ‘fix’ their data while maintaining an approximately constant offset from the satellite data?

    http://bp0.blogger.com/_0HiXKAFhRJ4/R5gl9hMkqcI/AAAAAAAAAgU/8Z6sJBs_XnU/s1600-h/Variation.JPG

  • Bruno De Wolf // January 31, 2008 at 3:36 pm

    @tamino
    Is it about ‘trying to do your best to be fair to both sides’ or about your opinion? I’m willing to make it as official as you do (I’ll do a posting and a follow-up on my blog as well, no money involved, and I’ll change my opinion if I lose), but I do insist on using more than one metric to draw the conclusion. NONE of the popular metrics are absolutely 100% correct: RSS made some corrections to their historical data this month, GISS found their Y2K error in the summer of 2007 …

    My comment was not about bashing GISS, it’s about having faith in one metric, so I reiterate my request to base your results on the combination of one surface station metric and one satellite metric. Is that so unreasonable?

    [Response: None of the metrics — popular or not — is 100% correct. And correcting the GISS Y2K error led to a net change in global average temperature anomaly of 0.003 deg.C.

    As I said, I’m not betting money; I’m trying to establish conditions under which we can confirm or deny various hypotheses. It was framed as a bet because that seems to be popular for discussion, and it does force one to be explicit about exactly what conditions will lead to a declaration for one or another hypothesis. For a bet, I think it’s better to keep it simple and agree on a single source of data for the decision.

    But for determining the outcome with highest reliability it’s better to use multiple data sets. I intend to keep track of GISS, HadCRU, and NCDC, and I’ll probably keep my eye on satellite data from RSS, UAH, UMd, and UW as well. I’ll report any significant results, regardless of the nature of the result or the source of the data. I expect they’ll end up telling the same story.]

  • Hank Roberts // January 31, 2008 at 3:51 pm

    Tamino, have you looked at the Hadley Centre’s ten year paper? It’s based, as I understand it, on the new flood of data coming from the Argo system and indicates a ten-year stretch with a lot more rearrangement than trend, followed by a return to the upward trend.

    I hope you’re not setting up a betting range that would give a false negative if Hadley is correct about the coming decade — hoping you took that into account. The big decadal fluctuations do need to be modeled, and Hadley as far as I know is the first group to predict a ’step’ in the trend line.

    http://inel.wordpress.com/2007/08/09/hadley-centre-decadal-climate-prediction-system/

    [Response: Yes I’ve seen the paper, in fact I posted about it.]

  • Raven // January 31, 2008 at 4:09 pm

    If the GISS data had been fixed, then Michaels and McKitrick would not have been able to find correlations between temps and socioeconomic data. Nor would Pielke be able to demonstrate biases in the measurement techniques. BTW - IPCC AR4 acknowledges the correlations found by McKitrick but dismisses them as mere “coincidence”.

    Auditing is not perfect, but it is a lot better than doing nothing and expecting people to blindly trust the data. Especially when gatekeepers like Hansen have long since dispensed with any notion of scientific objectivity and become political activists.

    The limited disclosure of GISS methods was only done after the government forced NASA to do so. More critically: the information that was disclosed did not allow others to replicate the work, which means the disclosure was meaningless.

    I consider the fact that it was necessary to fight to get any disclosure from NASA to be more evidence of bad faith on the part of the gatekeepers, and yet one more reason why the data should be treated as suspect until proven otherwise.

    The idea that the fossil fuel industry should finance the effort is absurd - you know damn well that you would reject any work funded in that manner. The money should come from the governments that fund the people making the alarmist claims.

    [Response: You’re entitled to your own opinion. But you’re not entitled to your own facts.

    GISS procedures have been part of the peer-reviewed literature for nearly a decade, and have always been an open book.

    As for correlations between temps and social economic data, quite a bit of fudging and cherry-picking was included to make the correlations appear stronger than they really are; essentially, they simply removed the data they didn’t like. And if you don’t believe in the existence of coincidence, you don’t know much about statistics.

    Your statement that “gatekeepers like Hansen have long since dispensed with any notion of scientific objectivity” is nothing short of libelous. It’s the last time you’ll make such a statement here; reiterations will go straight to the trashcan.]

  • Bruno De Wolf // January 31, 2008 at 4:32 pm

    Tamino,
    I agree that in the long run both satellite and surface measurements will tell the same story; however, there can be quite big differences between metrics in a single year. E.g., compare 1998 (yes, I know, but let me speak) with 2007:
    GISS: 2007 was as hot as 1998
    UAH: 2007 was 0.2 °C colder than 1998
    Or: year-to-year variability between measurement systems can go up to 0.2 °C.
    Such an exceptional year as 1998 (or, vice versa, a year with an exceptional ‘cold’ event) can easily create a false positive or negative if your confidence interval is 0.4 °C. If you truly want to stick with one source (yes, I agree, it’s simpler), you will have to score 3 points before you convince me (or vice versa: I give you the right to doubt up to 3 times outside the confidence interval).

  • Heretic // January 31, 2008 at 4:32 pm

    Gee, of McKitrick and Hansen, which could be the scientist and which the political activist? Mmmm, dunno, perhaps a look at the respective numbers of science papers in peer-reviewed publications could give us a clue…

  • Lee // January 31, 2008 at 4:46 pm

    Oh, good christ, raven.
    The raw data is all available to anyone.
    The methods are published; you can read exactly what they have done.
    The computer code is released and available; you can read the code that does the work. Whining that it doesn’t compile on a different computer does not change that - it’s sitting RIGHT THERE to be read.
    HadCRU does an independent analysis of mostly the same data - different criteria for which data to include, but that is part of the independence of their analysis - with very similar results.
    The satellite analyses, with completely different data and analytic methods, analyzed by independent groups, track both GISS and Hadley very closely.
    The Surface Stations project, which set out to discredit the surface record, instead confirmed that the “high-quality” station data nearly perfectly tracks the GISS analysis.

    How much more fricking utter transparency and confirmation do you need?

  • george // January 31, 2008 at 4:47 pm

    raven said: “Governments should take control of the data aways [sic] from the agencies developing the models. ”

    I always thought that NASA was part of the US “government” (i.e., a “US government agency”)

    Perhaps the “N” stands for “NON-governmental”? (or perhaps “NON-objective”?)

    You learn something new every day on the internet.

  • Hank Roberts // January 31, 2008 at 5:03 pm

    > … Hadley … posted about it

    Blush. Thanks for the reminder. I thought I’d looked before asking.

    [Response: No problem. I appreciate pointers to interesting stuff. I just can’t keep up with it all, but occasionally a reader points me to an extremely valuable work … and there’s no harm in a pointer to something I’ve already seen.]

  • JCH // January 31, 2008 at 5:05 pm

    Just to satisfy your doubts, how much are these audits going to cost? Wouldn’t a little Dutch boy cost-benefit analysis demonstrate that the money would be much better spent on treating AIDS victims in Africa, or spraying DDT in the tropics?

  • george // January 31, 2008 at 5:52 pm

    Tamino said:

    “If the “continued warming” hypothesis is correct, future values should fall between the dashed red lines. ”

    and

    “the still-warming side wins if temperature goes above the top dashed blue line; the not-warming side wins if temperature goes below the bottom dashed red line.”

    Isn’t that based on the assumption that warming continues at the same rate given by the trend over the past 30 years?

    What if the warming trend continues, but at a lower rate (lower slope), from now through 2015?

    Then we have an upward trend line with lower slope bracketed by 2-sigma error lines also with lower slope (defining yet a third region)

    Wouldn’t that mean that it would be possible for the temp to fall below your bottom dashed red line for two (not necessarily consecutive) years even though global warming (i.e., an upward trend in temperature) continued? (assuming the same standard deviation for the residuals)

    [Response: You’re quite right. I considered whether or not to address this issue, which complicates things of course, and decided that for the purpose of this blog it was best to omit it.

    Both basic physics and computer models suggest that the warming rate won’t be decreasing, rather it’s likely to increase. So IF I were betting money, I’d still go with the bet suggested in the post.]

  • Bob // January 31, 2008 at 5:54 pm

    So the bottom of the GISS rising temp range is currently at about .39 C and the bottom of the 2001 to 2007 average range is .35 C. FYI, the December 2007 monthly anomalies in C are as follows:

    GISS 0.39
    UAH 0.11
    RSS 0.08
    HadCRUT3 0.21
    NCDC 0.40

    [Response: FYI, the random variation in *monthly* averages is bigger than in *annual* averages (just as the variation in annual averages is bigger than in 5-year averages), so the limiting ranges will be even wider than for annual averages.

    And FYI, the autocorrelation of monthly averages is considerably bigger than that of annual averages, so the red-noise character of the data cannot safely be ignored for monthly data.

    FYI, all the data sets to which you refer are on *different scales* because they use a different reference (comparison) period for computing anomalies. Hence each requires different numerical values for its range definitions, just as the “safe operating temperature” of a device will have different numerical values if the temperature is expressed as degrees Kelvin rather than Celsius.]

  • Kevin // January 31, 2008 at 6:30 pm

    This ‘challenge’ is very interesting, and as usual the graphs Tamino presents contribute a lot of clarity to the data. In a way, though, it seems like this challenge accepts a prior move of the goalposts. That is, if this same question had been posed in this same manner in, say, 1989–when eyeballing the data could have given the same misguided impression that some skeptics are promoting today, i.e. that the warming may have “taken a break” in the preceding years–why, then the challenge would already have been resolved. The yearly averages have continued to vary around the increasing trend line. Unless there’s some physical basis to suppose the case is different now, why must we wait several more years to consider the issue resolved?
    Am I missing something? Is there a reason that the skeptics who post here can give why they expect this trend to end or reverse? I mean, a reason in terms of forcings? I think it’s been made quite clear that eyeballing the past 10 years’ averages doesn’t give a supportable reason to think anything is different, and neither does a regression line on enough data to be statistically significant. So why, exactly, should we expect the trend to go away?

    [Response: How true! But “global warming stopped in 1998″ is the very public mantra du jour. When it’s shown to be false, we can link to this post in response to the “global warming stopped in 2013″ mantra.]

  • Julian Williams // January 31, 2008 at 7:16 pm

    There has been a lot of work done with nonparametric statistics for reliability analysis on testing hypotheses until they can be either rejected or accepted, without any preset endpoint. The amount of work published on it (and its everyday use in detection equipment software) suggests that it is not anything like as simple as presented here, and there is a real danger of accepting the “wrong” result.

  • Brian Schmidt // January 31, 2008 at 7:20 pm

    I will take Tamino’s side of this bet. For money. Skeptics, what do you say?

    I have one modification - to deal with the Hadley Center issue, which is unrelated to AGW, years 2008-2010 don’t count.

    Nice job, Tamino.

  • Aaron Lewis // January 31, 2008 at 9:01 pm

    The bottom line is that all the temperature data that Tamino choreographs so beautifully are only proxies for climate effects on agriculture and our ecosystems.

    It is easy to check agricultural and ecological data. For example it is easy to track harvest dates. Harvest dates are a very precise indicator of climatic warmth and have huge economic importance. NH fruit harvests were 3 to 5 days earlier last year than they were in 2000. This is very good evidence that Global warming has not stopped, and it has significant current economic impact.

    As I write this, my nectarines are breaking bud. In the period of 1997 to 2000, the same trees reached the same stage of bud break on Valentine’s day, and the fruit ripened the last week in June. Last summer, the fruit ripened the second week in June. Due to the early flowering, this year I expect the fruit to ripen the first week in June. (If we have any bees to pollinate the blossoms so early in the year!) The early bloom has totally disrupted my spray schedules. And, never before have I seen red spider mites so active in January. I expect that is because, recently, we have had fewer frosts.

    Citrus is a less precise marker, but my tangerine trees, which used to ripen fruit the last two weeks of February, this year ripened the last two weeks of January, and the fruit was the sweetest ever. I guess the tangerine trees did not really like those frosts in the old days!

    Thus, my records suggest that the commercial data actually understates the recent effects of global warming.

    Here in California, climate warming has not stopped. I see its effects in the garden and orchard every day.

  • Zeke // January 31, 2008 at 9:41 pm

    Raven,

    If you don’t trust GISS, go out and survey all of the temperature stations in the U.S.

    Choose the best rural stations.

    Create a temperature record based on those.

    I’ll spoil the ending, since it’s already been done. GISS is remarkably close to an independent temperature reconstruction using only the best rural stations for the United States: http://www.yaleclimatemediaforum.org/pics/gistemp.jpg

    If you want to work through the data yourself, go to http://www.opentemp.org/main and try it out.

    Granted, you can still wax poetic about the quality of temperature measurement stations in China. But since the same arguments were made for the U.S., and it turned out to be reasonably good after all, I’m giving GISS the benefit of the doubt for the moment.

  • J // January 31, 2008 at 10:14 pm

    Okay. How about this?

    Pretend that it’s January 1997, and you’re looking at the GISTEMP data. It looks like temperatures rose from 1975-1990, and then leveled off. Maybe global warming stopped back in 1990? Temperatures have been flat from 1990-1996!

    So I repeated Tamino’s analysis more or less exactly, from the perspective of someone back in early 1997.

    From 1975-1990, temperature rose at a rate of 0.0214 deg.C/yr, and the residuals have a standard deviation of 0.09753 deg.C.

    I projected that trend out into the distant future of the 21st century (all the way to 2007!), along with its +- 2 SD envelope.

    I also looked at the 1990-1996 average, extended that as a “no-trend” line, and gave it the same +- 2 SD envelope.

    Just like Tamino.

    Now, let’s take our time machine forward to 2008, and look at what really happened to the climate. Since 1996, there have been *no* points that fell outside the “warming-trend” envelope.

    On the other hand, there have now been *8* years that fell more than 2 SD above the “no-trend” line. (1998, 2001, and every year since).

    In other words, using Tamino’s methodology, if we had made this bet back in January 1997, it would have been resolved definitively by 2001 (well, actually in early 2002, when the 2001 annual mean became available…) and nothing since then would have contradicted this.

    The graph looks really nifty. I think I’ll make a copy to show to the next person who tells me that global warming stopped in 2001.

  • Bob North // February 1, 2008 at 12:19 am

    I think it is a given that each organization (GISS, Hadley, UAH) uses slightly different methods and assumptions in processing the raw data, but they all give roughly the same result when looking at long-term trends. Therefore, it seems that using an unweighted average of the GISS, Hadley, UAH, and RSS estimates of the global temperature anomaly would be the most reasonable approach for future evaluations of whether or not warming is continuing. I don’t think that either outcome will necessarily prove or disprove the AGW theory. What it will tell us is that our estimates of the effects of increasing CO2 (i.e., the forcing) are either too high, too low, or just right.

    Rather than just looking at whether the current trend is continuing or not, perhaps the better test is how closely the average trend from, let’s say, 2000-2015 matches the average temperature increase predicted based on the increase in CO2.

    Bob North

  • steven mosher // February 1, 2008 at 12:26 am

    Nicely put bet, Tamino. I think it’s fair. I have some quibbles (mainly about the AGW hypothesis implying INCREASED warming), but I think you did a fair job. And it’s well explained.

    On HadCRU: I see a bunch of people arguing that GISS and HadCRU use the same data.

    That’s an open question. Until we forced Dr. Jones via FOI to release his list of stations, we didn’t have any confirmation that HadCRU used the same stations as GISS. In fact, we had some evidence that they used different stations.

    Now that comparison can be done. I think those who claim the stations are the same should have a look.

    The other issue with HadCRU is the lack of transparency WRT the actual data.

    GISS, after much lobbying (psst, I invented the drive to free the code), has seen fit to release everything: stations, data, code. GOOD FOR THEM. The day NASA did this I posted a thank you on RC. It was filtered; maybe it was OT.
    So I repeat the thank you here.

    In the future I think the IPCC should use GISS rather than HadCRU. Transparency reduces doubt. That’s a good thing.

    [Response: I prefer GISS (I think it’s better to estimate unobserved-but-near-to-observed regions rather than omit), but I doubt IPCC will change, mainly because they’ve used HadCRU so far, and there’s an argument to be made for consistency.]

  • Bob // February 1, 2008 at 1:25 am

    GISS and HadCRU do not use the same data sets. Check the links conveniently provided by Tamino. GISS uses satellite data for sea surface temps and HadCRU uses ship measurements. This would appear to be a major difference between the two data sets given the size of the oceans.

  • EliRabett // February 1, 2008 at 4:09 am

    Given that the records all track each other it pretty much doesn’t matter which ones you use. Assuming that the AMSU stays on line, all the satellite reconstructions should have essentially zero variance from each other. The rest is noise.

  • steven mosher // February 1, 2008 at 1:58 pm

    Tamino, I’m not so sure about the GISS inclusion of the “unobserved”; I’m assuming you’re referring to how they treat the Arctic. I’d take the HadCRU approach and live with the greater uncertainty.
    As I understand it, Hansen et al use stations within 1200 km of each other. The issue is that in Hansen’s original study, the correlation study used to determine this distance showed that at that distance the correlation was around .6 for the northern hemisphere and .5 for the SH. The problem is: what is the correlation across the polar region? In any case I think reasonable people can disagree about this. And it has no bearing on AGW as far as I can see.

  • Jim Arndt // February 1, 2008 at 11:50 pm

    Hi,

    I think I’ll put my bet on Anthony Watts. Here he shows that AMO, PDO and TSI correlate better with temperature than CO2 does:
    http://wattsupwiththat.wordpress.com/2008/01/25/warming-trend-pdo-and-solar-correlate-better-than-co2/#more-597

  • Deech56 // February 2, 2008 at 1:15 am

    Jim Arndt, isn’t Watts showing only US temperatures (USHCN)?

    [Response: Ordinarily I wouldn’t have allowed what’s really nothing more than a link to denialist propaganda. But Jim’s comment gave me an idea for a post — a critical review of the aforelinked work. Stay tuned.]

  • Chris Colose // February 2, 2008 at 5:07 am

    If you scroll down in the comments, I was already critical. It is hard to take their arguments seriously over there.

    [Response: The post is a summary of a paper by D’Aleo (which appears *not* to be part of the peer-reviewed literature — surprise!). There are so many things wrong with the work … yet for some reason the folks who are so intent on placing everything from legitimate climate science under a microscope seem to accept everything that supports their pet view uncritically. Odd …

    Note to readers: if you choose to post a link to some denialist work, one of two things will happen. 1) I delete it; this is not a holding area for denialist propaganda. 2) I put it under a *real* microscope, and we all get to find out just how well denialist arguments stand the light of day.

    As for the details of this particular example, all will be revealed in an imminent post.]

  • wattsupwiththat // February 2, 2008 at 5:14 am

    “Tamino”, without hurling additional insults yourself such as “What a load of crap.” or “nothing more than a link to denialist propaganda”, I hope you’ll see fit to publish my comment.

    Zeke, thank you for the interest and discussion. I really do wish you would have contacted me before writing the article at the Yale forum. Solid journalism requires getting both sides of the story.

    But you are a student, and thus should learn from mistakes. So, I’ll give you a pass for not getting my side before writing the story.

    If you had inquired with me at that time, you would have learned that the surfacestations project was not nearly complete, and that JohnV ran a series on data released (for transparency) when only about 34% of the USHCN network had been surveyed, and the geographic distribution was significantly biased toward the east and west coasts.

    The result was that only a handful of CRN1 stations (17 if I recall correctly) were used in JohnV’s “proof” that the “best” stations matched GISS trends.

    For my part, I’m continuing to gather data on stations to have a larger sample to run such a comparison with. Until then I’ll continue to publish census updates.

    Conversely, if I myself had run and published the analysis at that point when JohnV did, prematurely on just a handful of stations, and it came out the other way, I’d surely catch a rash of criticism for having “jumped the shark” and for using such a small and geographically biased data set. I’m sure “Tamino” would have been right up there in criticising me for using such a small and unequally distributed data set.

    So far we are at 482 stations out of 1221, with the majority still being CRN 3, 4, 5. This summer, I hope we’ll be able to get more from the midwest, where I believe we’ll find better station sitings to improve on the CRN 1, 2 population. It may turn out that there will be good agreement with GISS, or it may diverge. I honestly don’t know, but I’m going to find out.

    And a note on Jim Arndt’s post. That AMO, PDO, and TSI correlation was done by Joe D’Aleo of ICECAP; I just cross-posted it at his request.

  • Hank Roberts // February 2, 2008 at 5:40 am

    Aaron, thanks for bringing a reminder that all the numbers and tools are doing is giving us simple information about the world - but the world goes on around us. I don’t know any gardener or farmer who doubts the pattern; I wonder if anyone who does farm or garden, plants annuals or maintains perennials and harvests food, is among those on the doubters’ lists. And how their crop’s looking.

  • Timothy Chase // February 2, 2008 at 5:50 am

    In the inline to an above comment, Tamino wrote:

    Note to readers: if you choose to post a link to some denialist work, one of two things will happen. 1) I delete it; this is not a holding area for denialist propaganda. 2) I put it under a *real* microscope, and we all get to find out just how well denialist arguments stand the light of day.

    I hope you don’t mind if I make a suggestion.

    When you choose to show it, you might consider breaking the link, showing what the commenter intended to link to, but not actually linking to it, showing the text perhaps, but using an “a href” to link to Disneyland or something. Otherwise your site’s relevance in the various search engines will attach greater relevance and give a more prominent position in search results to that which is being linked to. The more links and the more relevant the websites which are doing the linking, the greater the “relevance” of what is being linked to even when in reality it is just canned meat.

  • Heretic // February 2, 2008 at 6:12 am

    Wattsupwiththat, your post is of limited interest if you don’t tell us more. You now have a bigger sample; what do the data say? That’s the only thing that really matters, after all.
    Furthermore, regardless of what Zeke did or did not ask you before writing his article, the data collected and shown at his link stand. For one who is skeptical of the Watts “effort,” it adds to the reasons for not paying much attention to it.

  • Zeke // February 2, 2008 at 6:22 am

    Anthony,

    I’ll agree that JohnV’s analysis is far from complete. He can speak to how significant (or insignificant) his findings are, but obviously the results would be considerably more robust with the eventual inclusion of all rural and well sited stations. I noted in my October article that only a third of stations had been surveyed to date.

    However, your argument cuts both ways. I seem to run across statements on a daily basis in the comments of various blogs by people attempting to use surfacestations.org to cast doubt on the GISS record. If JohnV’s analysis of the interim results is not enough to independently validate GISS, then surely pictures of badly sited stations should not be enough to invalidate it.

    The fact that the Ravens of the world keep harping on the validity of GISS is a direct result of your work. It seems that it might incur something of an obligation on your part to correct them.

  • Hank Roberts // February 2, 2008 at 6:26 am

    Chris, WordPress has a tool for it
    http://wordpress.org/extend/plugins/nofollow-case-by-case/

    (I haven’t bothered with view-source to see if Tamino’s using ‘nofollow’ here; it’s used most places now).

    Good point generally for bloggers, ‘nofollow’ if you don’t want your site’s good reputation to be a factor used by Google in rating the linked site.

    Keeps the ‘search rank optimizers’ and the one-idea ranters from riding your credibility to high rankings in search results when you point to them.

  • Zeke // February 2, 2008 at 6:29 am

    An addendum:

    Note that the link I posted shows the trend from rural CRN1, CRN2, and CRN3, and I was under the impression that these contain considerably more than 17 stations.

    Given that JohnV posts here somewhat frequently, it would be interesting to get his perspective.

    Finally, for those not privy to the debate, you can see my original article (which includes links to some of the larger CA threads where the data analysis was originally posted) here: http://www.yaleclimatemediaforum.org/features/1007_surfacetemps.htm

  • wattsupwiththat // February 2, 2008 at 6:52 am

    As an aside I’d also point out this comment from “Tamino” regarding Joe D’Aleo’s essay.

    The post is a summary of a paper by D’Aleo (which appears *not* to be part of the peer-reviewed literature — surprise!).

    I’d point out that none of the essays published here on this blog (some of which are very good) are in the “peer reviewed literature” either.

    It’s one thing to say you disagree with the essay, which is fair game. It’s quite another to diminish it in that way when your own essays don’t meet the same “peer reviewed” standard you impose on others.

    Therefore I think your characterization of Joe’s essay is unfair and biased.

  • dhogaza // February 2, 2008 at 9:26 am

    I don’t know any gardener or farmer who doubts the pattern;

    Nor do conservation biologists, who are also seeing the effect.

    Organizations like the Nature Conservancy are already starting to spend significant sums of money trying to figure out how to adapt their habitat protection strategy to a warming North America.

    Global warming is like a hidden tax on conservation efforts. Much of what’s been done in the last century may turn out to be irrelevant, and it’s costing real money to determine just how extensively global warming may undermine past conservation efforts.

    Remember, though, according to denialists, we’re part of that environmental community that “wants AGW to be true!”

  • dhogaza // February 2, 2008 at 9:29 am

    Solid journalism requires getting both sides of the story.

    Solid journalism is about getting at the truth.

    Anyway, good luck with your endeavors. Everyone needs a hobby, and photographing white boxes in odd places seems as harmless a hobby as any.

  • fred // February 2, 2008 at 10:38 am

    One does worry a bit about exactly what this is going to prove one way or the other, for two reasons: one is causation, the other the shortish 30 year history.

    Suppose GW has not stopped. We still have no signature of CO2-based warming as opposed to any other kind. Do you have any thoughts about this? There must be one.

    Second, thirty years is not very long. If we had done this during the previous rises and falls in the 20th century, we’d have concluded that it was falling, then that it had stopped falling, then that it was rising, then that it had stopped rising… and the 20th century went through without anything terribly dramatic happening overall. So yes, if you pass or fail the test, warming either has or has not stopped. But what is it reasonable to conclude other than that?

  • P. Lewis // February 2, 2008 at 11:17 am

    Re Tamino’s comment here

    The first two of the following words and phrases will almost certainly appear in Tamino’s analysis of the D’Aleo paper (by virtue of the subject itself). How many of the others do you reckon will also appear? I reckon it could be all of them.

    AMO; PDO; lag; smoothed time-series; autocorrelation; nonstationarity; white noise; red noise; stationary oscillation; traditional regression technique; independence assumption; testing significance of trend; violation of assumption; invalidation of regression technique.

    Why? Because they allude to various issues germane to time-series analysis that seem absent from my (admittedly cursory) analysis of Jim Arndt’s link.

  • Deech56 // February 2, 2008 at 1:02 pm

    RE: wattsupwiththat // February 2, 2008 at 6:52 am

    “It’s one thing to say you disagree with the essay, which is fair game. It’s quite another to diminish it in that way when your own essays don’t meet the same “peer reviewed” standard you impose on others.

    “Therefore I think your characterization of Joe’s essay is unfair and biased.”

    But it’s actually true. Tamino also wrote, “There are so many things wrong with the work …” and that he will tell us the reasons. I look forward to this.

    Besides, I thought that surface temperature records were bogus.

  • Barton Paul Levenson // February 2, 2008 at 1:24 pm

    Jim Arndt writes:

    [[I think I’ll put my bet on Anthony Watts. Here he shows that AMO, PDO and TSI correlate better with temperature than CO2 does]]

    Where is he getting his TSI figures? They don’t match Lean’s at all:

    http://members.aol.com/bpl1960/LeanTSI.html

  • Barton Paul Levenson // February 2, 2008 at 1:28 pm

    fred posts:

    [[We have still not a signature of CO2 based warming as opposed to any other kind. Do you have any thoughts about this? There must be one.]]

    Yes, my thought is that you’re completely wrong. If you try to reproduce 20th- and 21st-century warming, you get a lousy match until you factor in CO2. Then you get a nice match. Here’s an example:

    http://people.aapt.net.au/~johunter/greenhou/home.html

  • Deech56 // February 2, 2008 at 1:49 pm

    Oh wait, these are the “good” records.

  • steven mosher // February 2, 2008 at 2:17 pm

    A couple of clarifications on JohnV’s work. You can all get his code and run it. I have. It’s a nice piece of work, and he supplied the source. JohnV’s CRN12R (that’s rural sites rated 1 or 2) had 17 sites. There are these concerns:
    1. The small number of sites.
    2. The geographical bias.
    3. The decision about what is rural.
    4. Lack of error bars.

    I won’t go into all the details unless folks want to ask. JohnV took an important first step. There’s more work to do.

    I took a different tack on the problem. Since there were 58 class 5 sites, I compared CRN1234 against the class 5s. You see about a 0.15C difference. Now that we have more sites I should probably redo the work. Or you can; it’s easy. Even here, however, there is the issue of significance, as the program doesn’t output error terms. So my study, like JohnV’s, was just a first step. Lots more work to do.
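
    (A minimal sketch of that kind of group comparison, with the sort of error term the program doesn’t output; the two series below are synthetic stand-ins, not actual Opentemp results:)

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2006)

        # Synthetic stand-ins for the two group-average anomaly series (deg C);
        # real inputs would come from Opentemp runs on the surveyed stations.
        common = 0.006 * (years - years[0])
        crn1234 = common + rng.normal(0.0, 0.1, years.size)
        class5 = common + 0.15 + rng.normal(0.0, 0.1, years.size)

        diff = class5 - crn1234
        mean = diff.mean()
        # Rough 2-sigma error on the mean offset, treating years as independent
        # (real series are autocorrelated, so this understates the uncertainty):
        err = 2 * diff.std(ddof=1) / np.sqrt(diff.size)
        print(f"offset: {mean:+.2f} +/- {err:.2f} C")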

    Ideally, the work to get the GISS code compiled will get done, and someone can do the same studies with the GISS code.

    Finally, we are still only talking about the US.

  • Joe D'Aleo // February 2, 2008 at 2:53 pm

    The link to the actual paper mentioned above is
    http://icecap.us/images/uploads/US_Temperatures_and_Climate_Factors_since_1895.pdf

    As stated in the original comments on WattsUpWithThat, it was a work in progress. But I wanted to put on the table some evidence for other factors we should be looking at more seriously besides CO2.

    The alarmist side has waved its hand and declared the solar connection debunked when there is plenty of peer-reviewed work to the contrary. Very little has been said about the PDO and AMO’s effects except by Bill Gray, George Taylor and maybe Patzert at JPL, but they (at least Bill and George) have been demonized for even suggesting a link. Ironically, the IPCC AR4 background chapter 3 talks at great length about both AMO and PDO cycles and how they are naturally caused and real, and admits they have effects on the climate, though it states those effects are only regional in nature. (Regional everywhere = global.)

    No one discounts man having influence on the climate but we are never going to get our arms around the problem if we turn out the lights because we won’t know where to grab.

    Prove to me that the sun and oceans are not affecting the climate.

    You have used as support for the greenhouse effect the coincidental rise from the 1970s to the 1990s. I went further back and looked at the correlations with the sun, oceans and CO2, using the only station data set that I think is even close to being accurate (and Anthony is showing even it has warts). I only wish we had accurate satellite data before 1979.

    I agree correlation doesn’t mean causation, but you can’t have it both ways: have it be significant for CO2 from the 1970s to the 1990s but not meaningful before or after, and not meaningful for the TSI and PDO/AMO in any timeframe.

    My influences as a young climatology professor, and later as Director of Meteorology at the Weather Channel and then WSI, were Doc Willett, Helmut Landsberg and Jerome Namias. I had the good fortune to meet and spend time with each. From them I learned the importance of the sun, the oceans and urban and local factors on climate, and I have put what I learned to good use in building three successful forecasting businesses that looked beyond the next two weeks.

    Although I published a book on ENSO and other factors, and have addressed the NWA and AMS numerous times on these exact topics, I have not had a peer-reviewed paper on this issue, yet. Publishing what you discover while in the private sector is usually not allowed, as they regard it as proprietary and an advantage they don’t wish to give away. I am only now able to write more openly on it.

    By the way, after the post on Anthony’s site, I did make some changes based on suggestions by commenters. I did a multiple regression analysis of the PDO and AMO against the temperatures instead of adding the standardized AMO to the PDO. It even improved the r-squared to 0.85! I standardized the AMO because Mantua’s PDO is already standardized (the PDO index is derived as the leading PC of monthly SST anomalies in the North Pacific Ocean poleward of 20N), while the AMO is the area-averaged SST over the Atlantic (Kaplan data set) from 0 to 70N (and thus not standardized).
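
    (For the curious, a regression of this form is only a few lines; the sketch below uses synthetic placeholder series, not the actual Mantua PDO, Kaplan AMO or USHCN data, and note the autocorrelation caveat in the comments:)

        import numpy as np

        rng = np.random.default_rng(1)
        n = 110  # years, e.g. 1895-2004

        # Placeholder index and temperature series; the real inputs would be
        # the Mantua PDO index, the standardized AMO, and USHCN anomalies.
        pdo = rng.normal(0.0, 1.0, n)
        amo = rng.normal(0.0, 1.0, n)
        temp = 0.3 * pdo + 0.4 * amo + rng.normal(0.0, 0.2, n)

        # Ordinary least squares: temp ~ const + PDO + AMO
        X = np.column_stack([np.ones(n), pdo, amo])
        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
        resid = temp - X @ beta
        r2 = 1.0 - resid @ resid / np.sum((temp - temp.mean()) ** 2)
        print("coefficients:", np.round(beta, 3), " r-squared:", round(r2, 3))
        # Caveat: with autocorrelated series, an OLS r-squared like this
        # overstates significance (cf. P. Lewis's list of issues above).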

  • tamino // February 2, 2008 at 3:08 pm

    Mr. D’Aleo,

    I hope you’ll comment on my review of your work, when I post it here. In the meantime, you owe us an apology for this statement:

    Publishing what you discover while in the private sector is usually not allowed as they regard that as proprietary and an advantage they don’t wish to give away.

    This is just an outright lie.

  • dhogaza // February 2, 2008 at 3:08 pm

    Prove to me that the sun and oceans are not affecting the climate.

    Obviously the sun affects climate. So do the oceans. Who says they don’t?

  • dhogaza // February 2, 2008 at 3:12 pm

    I agree correlation doesn’t mean causation but you can’t have it both ways.

    And, of course, surely you know the case doesn’t rest on correlation, but that correlation merely serves to strengthen our belief that the basic physics of CO2 warming, feedbacks, etc. is reasonably well understood.

    Surely you must …

  • Joe D'Aleo // February 2, 2008 at 3:33 pm

    That is not a lie for the businesses where we try to use these factors to forecast upcoming monthly and seasonal weather. That was not allowed in the two private companies I worked for, and I know companies like Accuweather would never allow their meteorologists to publish their techniques in journals for all to copy. It is too competitive a business. In my last company, WSI, we developed statistical models correlating the various global teleconnections with the temperatures and climate by month and season. Even when we gave semi-annual briefings to our own clients we could not reveal any of the details of the approaches. We certainly were forbidden from publishing a paper on these models even though they had skill, were statistically sound and would pass peer review.

  • Deech56 // February 2, 2008 at 4:07 pm

    RE: Joe D’Aleo // February 2, 2008 at 2:53 pm

    “Ironically, the IPCC AR4 background chapter 3 talks at great length about both AMO and PDO cycles and how they are naturally caused and real, and admits they have effects on the climate, though it states those effects are only regional in nature. (Regional everywhere = global.)”

    Mr D’Aleo, your paper examines correlations between AMO and PDO and US temperatures. Wouldn’t the “US” climate be, by definition, “regional” in nature?

  • luminous beauty // February 2, 2008 at 4:13 pm

    Joe has proven that the weather is affected by the weather. It’s an amazing discovery, isn’t it?

  • Joe D'Aleo // February 2, 2008 at 4:23 pm

    In private industry, Tamino, any findings that might be productized and turned into profit are not allowed to be written up for peer-reviewed publication unless or until you patent-protect them. Even then it often is not allowed.

    In my last two companies we developed statistical models based on the various teleconnections and had success with them forecasting monthly and seasonal temperatures and precipitation. We were not allowed to discuss the methods or even the component factors with clients, even when we did quarterly client briefings, because clients could share them with our competitors or client meteorologists could copy them and we would lose our advantage.

  • tamino // February 2, 2008 at 4:28 pm

    Mr. D’Aleo,

    I misinterpreted your statement. I thought you were claiming that peer-reviewed publications obstructed works by those in the private sector, because peer-reviewed publication was a “proprietary advantage” which the scientific community wished to withhold from those not in the “inner circle.” Instead, you were referring to the private sector restricting publication because it considers its knowledge (and any financial advantages which follow) proprietary.

    I apologize for my false accusation based on misinterpretation.

  • Joe D'Aleo // February 2, 2008 at 4:48 pm

    Apology gratefully accepted.

    Please use the latest paper posted at http://icecap.us/images/uploads/US_Temperatures_and_Climate_Factors_since_1895.pdf
    and not the old one for any analysis, as I have used multiple regression for the AMO and PDO as suggested by a commenter.

    Oh, how I wish I had a global database I could trust. Numerous widely ignored peer-reviewed studies have shown problems with the global databases: station dropout (6000 to 2000), increases in missing data, and inadequate adjustment for urban and local factors that may account for up to 50% of the warming in the last century. The US HCN data, with all its warts, was at least more stable and had an urban adjustment (at least in V1).

  • Joe D'Aleo // February 2, 2008 at 4:55 pm

    Finally, to see how the PDO, AMO and solar activity all correlate globally with temperatures and many other parameters, go to the CDC reanalysis correlation site http://www.cdc.noaa.gov/Correlation/

    There are many factors at play in the world’s weather and climate, some highly variable week to week and others long term (multidecadal) in nature. The long term factors affect the climate.

  • Zeke // February 2, 2008 at 5:06 pm

    Steven,

    The second version of JohnV’s analysis (posted here: http://www.climateaudit.org/?p=2124#comment-147568 ) uses CRN 1-3.

    The list of sites included can be found at http://www.opentemp.org/_results/20071011_CRN123R/stations_crn123r.txt
    There are 62 stations included.

    But let’s stop fighting old battles. The work of the surfacestations project is incomplete, and it shouldn’t really be claimed to vindicate or tarnish the validity of the GISS record at this point in time.

  • dhogaza // February 2, 2008 at 5:21 pm

    inadequate adjustment for urban and local factors that may account for up to 50% of the warming in the last century.

    How many satellites are located in urban areas, again?

  • inputted // February 2, 2008 at 6:11 pm

    > How many satellites are located
    > in urban areas, again?

    Only Triana, Dhogaza.
    And it’s a hostage, blind and powerless.
    Not dead, so there’s hope.

  • luminous beauty // February 2, 2008 at 6:12 pm

    The urban heat island argument would be much stronger if dense urban areas were actually where most of the 20th century anomaly was showing up in the surface station record.

    http://www.nasa.gov/mov/141677main_a10_1891_1996_sor.mov

    Apparently not.

  • Aaron Lewis // February 2, 2008 at 6:30 pm

    Hank,
    Gardeners may not believe that it is the “Hand of Man”, but they do not doubt the effect.

    Farmers - weather and climate are key to their business - they understand.

  • chriscolose // February 2, 2008 at 6:30 pm

    Joe D’Aleo,

    I am not sure how you propose that internal variability is responsible for warming on timescales of decades. The planetary energy imbalance is now such that absorbed solar radiation exceeds the infrared radiation emitted to space by ~1 W/m2, with the OLR diminished by rising GHGs. In fact, heat is now going into the oceans, not out of them, and this imbalance is the opposite of what would happen with internal heating.

    As for solar, I may be wrong, but is the TSI data in your paper anywhere to be found in the peer-reviewed literature? It deviates substantially from reconstructions such as Lean’s, especially after 1950, where the literature suggests there has been no trend (Lean 2000, Benestad 2005, Foukal et al. 2006, Lockwood and Frohlich 2007 are all good starts). You did not discuss trends in winter-summer variation, DTR, stratospheric cooling, etc., all of which suggest that GHGs dominate solar variability (see my post: http://chriscolose.wordpress.com/2007/12/18/the-scientific-basis-for-anthropogenic-climate-change/ ). You have not quantified your connections; you have only shown lines going up and down. Perhaps you could label your axes; I find the numbers “51.5 to 54.5″ hard to follow when speaking of “TSI.” A conversion into radiative forcing (in W/m2) would be useful, so that we can understand how this graph should change: http://www.greenfacts.org/en/climate-change-ar4/images/figure-spm-2-p4.jpg
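
    (The conversion asked for here is standard; a sketch, assuming a planetary albedo of about 0.3:)

        # Standard conversion of a TSI change to a global-mean radiative forcing:
        # divide by 4 (sphere vs. disc geometry) and multiply by (1 - albedo).
        delta_tsi = 1.0  # W/m^2 at the top of the atmosphere (illustrative)
        albedo = 0.3     # assumed planetary albedo
        forcing = delta_tsi * (1.0 - albedo) / 4.0
        print(f"{forcing:.3f} W/m^2")  # ~0.175 W/m^2 per 1 W/m^2 of TSI change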

    If you want to cherry-pick years (e.g., what is that black line doing in your Fig. 1?), then note that 2007 was also the second warmest on record despite a solar irradiance minimum and the equatorial Pacific being in the cool phase of its natural El Niño-La Niña cycle.

    I think you are very aware that you chose to handle only U.S. and not global temperatures; the former makes up about 2% of the latter. In fact, from 1900-50 there was a good deal of solar and anthropogenic forcing, a lack of volcanic activity, and other factors, but saying there is a large natural trend since 1950 requires rejecting papers like Ammann et al. 2007 or Meehl et al. 2004 and others. Most warming is concentrated in the last few decades, with the earlier times more or less marked by large fluctuations.

    R^2 values won’t cut it; please quantify your claims and get rid of some of the sloppiness.

  • steven mosher // February 2, 2008 at 6:47 pm

    Zeke, I know that John did CRN1-3R

    He and I conversed a long time about it. The rural determination is still a big problem for me, as it is based on old population data and nightlights. The problem with error bars remains.

    When you add CRN3 you get a nice boost in stations. In fact, JohnV and I talked about that approach. He took one path: showing “similarity” between GISS and CRN123R. I took a different path: testing whether class 5 sites showed a larger warming trend than CRN1234.

    Interestingly, we both had similar guesses for the spurious warming trends we could expect from class 5.

    Let me explain my position on this.

    For the purpose of climate studies, the present US stations are substandard. That’s not my opinion; that’s the conclusion of climate scientists. We have more stations than we need, and they do not meet standards. That is why NOAA has been working on building the CRN for the past few years. Instead of 1221 sketchy stations, we will have 100 or so properly instrumented, calibrated, electronically monitored stations that meet standards.

    For reference, Shen has estimated that the global temperature can be accurately estimated with 60 stations.

    http://www.math.ualberta.ca/~shen/Sam_Papers_pdf/shen_jclim_1998.pdf

    I have not seen Shen’s claim refuted. I have seen Gavin Schmidt support it.

    60 good stations is what you need to estimate the world. I take them at their word.

    So, estimating the USA with 100 good stations is a fair amount of oversampling.

    Some things to think about. The USHCN has 1200 or so stations. As Watts has shown, a fair percentage of these do not meet standards.
    We can argue about the effect of this. It’s a fun debate. What we cannot argue about or debate is the science. The climate scientists at NOAA have determined that around 100 GOOD sites are needed to monitor the US temperature. I agree with them. Shen has argued, and Gavin concurs, that ONLY 60 sites are needed for the whole world. Ignoring that difference, it’s still clear that climate scientists believe we can sample the US and the globe with fewer, better sites.

    They are right. I agree with the consensus. Dump the bad sites.

    Now, it bothers me a bit that Shen thinks that 60 sites are needed globally, while NOAA thinks that 100 sites are needed for the US (less than 2% of the globe). It also bothers me that Brazil is sampled by 6 stations, while the US is sampled by 1221. That’s odd. If 6 is enough for Brazil, shouldn’t we pick the best 6 sites in the US? On the other hand, if 1221 sites are NEEDED to estimate the US, what does this say about Brazilian data? Hmmm.

    Simple question: is the United States OVERSAMPLED, UNDERSAMPLED, or sampled just right?

    Think about that before you answer, since it’s the most highly sampled land mass.

    If it’s OVERSAMPLED, then Brazil is undersampled. Moreover, we are oversampling it with non-standard sites.

    If it’s Undersampled, then the rest of the WORLD is undersampled by orders of magnitude.

    If the US is goldilocks and sampled just right, then

    1. Shen and Gavin are wrong.
    2. NOAA is wrong.
    3. The ROW is undersampled.

    Bottom line: if JohnV is right and CRN123R matches GISS, then just delete CRN4 and CRN5.

    Delete the stations that don’t meet standards.
    Let the temperature bar FALL WHERE IT MAY.

  • Deech56 // February 2, 2008 at 6:59 pm

    Tamino, your post about publication by private organizations actually wasn’t that far off. Patents become part of the public record, so once an organization’s intellectual property is protected (patent application or equivalent) a researcher is usually free to publish. Of course, my experience is in the biotech sector. YMMV.

    What Mr. D’Aleo is describing sounds to me like trade secrets, which are not patented, and which would not be published for obvious reasons.

    Mr D’Aleo is correct that one may not have as extensive a record in the private sector; however, publication by corporate employees is not unknown. Anyway, this argument should not be about credentials, but about data.

  • inputted // February 2, 2008 at 7:43 pm

    Okay, let’s look at this calmly:

    > They are right.
    True.
    > I agree with the consensus.
    So you say.
    > Dump the bad sites.
    Wrong on fact, wrong on conclusion.

    You know better. You’ve been round this claim repeatedly in many places.

    Time series aren’t improved by throwing out data, and accuracy of a large number of approximate observations is much greater than the accuracy of any individual observation.

    And you keep fudging this. Why?

  • cce // February 2, 2008 at 8:15 pm

    So, if I understand things correctly:

    People believe that the global mean temperature anomaly of the earth is being increased by a multi-decade net release of heat from the ocean, even though the land is warming faster than SST?

    People believe that the sun is warming the earth, even though the stratosphere is cooling and nighttime temperatures are increasing faster than daytime temperatures?

    For TSI, they use the Hoyt & Schatten analysis, because all skeptics use the Hoyt & Schatten analysis, even though there are newer analyses out there, and they show far less variability. Ask Svalgaard (no fan of AGW) what he thinks of Hoyt & Schatten.

    They complain that 17 of the highest quality sites aren’t enough to establish the temperature trends of the US, while simultaneously using the temperature of the US to represent the globe.

    I suspect that 60 stations around the globe is what is necessary to estimate the global mean temperature anomaly. If you are trying to measure the temperature variation of specific regions, obviously, you are going to require more stations than that.

  • henry // February 2, 2008 at 8:37 pm

    dhogaza said: (February 2, 2008 at 9:29 am)

    “Solid journalism is about getting at the truth.

    “Anyway, good luck with your endeavors. Everyone needs a hobby, and photographing white boxes in odd places seems as harmless a hobby as any.”

    And just think how much time and money Anthony and his volunteers would have wasted if the people in charge had kept decent photographic evidence…

  • Timothy Chase // February 2, 2008 at 9:17 pm

    steven mosher wrote:

    When you add CRN3 you get a nice boost in stations. In fact, JohnV and I talked about that approach. He took one path: showing “similarity” between GISS and CRN123R. I took a different path: testing whether class 5 sites showed a larger warming trend than CRN1234.

    Interestingly, we both had similar guesses for the spurious warming trends we could expect from class 5.

    I take it that Class 5 showed spurious warming trends compared to GISS?

    Apparently not.

    Please see:

    Sunday, September 16, 2007
    And so it goes . . .
    http://rabett.blogspot.com/2007/09/and-so-it-goes.html

    The trends in temperature look pretty much the same. In fact, CRN5 runs slightly cooler than the entire GISS.

    And it appears that “bad stations” can perform just as well as “good stations” if one knows how to correct their biases.

    Please see:

    Thursday, September 20, 2007
    Ethon checks out the air conditioning. . .
    http://rabett.blogspot.com/2007/09/ethon-checks-out-air-conditioning.html

    steven mosher wrote:

    For the purpose of climate studies, the present US stations are substandard. That’s not my opinion; that’s the conclusion of climate scientists. We have more stations than we need, and they do not meet standards. That is why NOAA has been working on building the CRN for the past few years. Instead of 1221 sketchy stations, we will have 100 or so properly instrumented, calibrated, electronically monitored stations that meet standards.

    Once the CRN network is in place, what do you propose we do with the “1221 sketchy stations”? You do realize that they serve different functions. They belong to different networks used for different purposes.

    What do you propose we do with their temperature records for the past century? Scrap them so that we can pretend a century’s worth of warming didn’t take place? Even though the Class 1s and the class 5s show essentially the same trend? Apparently so, since you go on to say, “Dump the bad sites.”

    I am glad you aren’t my ophthalmologist.

    steven mosher wrote:

    For reference, Shen has estimated that the global temperature can be accurately estimated with 60 stations.

    http://www.math.ualberta.ca/~shen/Sam_Papers_pdf/shen_jclim_1998.pdf

    I have not seen Shen’s claim refuted. I have seen Gavin Schmidt support it.

    Actually, I believe the closest Gavin Schmidt comes to supporting this is the following statement:

    It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations - many times more than would be theoretically necessary.

    2 July 2007
    No man is an (Urban Heat) Island
    http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/

    We aren’t just concerned with the trend in the global average annual temperature. We are interested in the monthlies, the regionals and so on. They give us more datapoints against which to test climate models. And this becomes particularly important when we start trying to base mitigation efforts on model projections. The models have to be tested and demonstrate a fair degree of reliability at the regional level if we are to rely upon their projections in determining what investments we should make in response to the changes we will be seeing on decadal scales.

  • Gareth // February 2, 2008 at 10:47 pm

    Aaron Lewis said:

    Farmers - weather and climate are key to their business - they understand.

    Unless, of course, emissions reductions might inconvenience them - New Zealand’s dairy farmers (or at least their representative organisations) are reluctant to accept they should be accountable for their emissions (methane, nitrous oxide).

    The wine business, on the other hand, is very aware of the problem. You can see the effects of warming in the wine - and it’s more than just extra ripeness. Faster ripening means fewer flavour compounds…

    I’m doing some background work on a possible book on the subject with a local winemaker (who consults with Stags Leap, Chateau Palmer and Antinori). The wine business is very concerned indeed, and the prospect of rapid change is deeply worrying. If you have a lot of money tied up in a prime terroir, you can’t just move it…

  • JCH // February 3, 2008 at 1:08 am

    The vast majority of the methane produced by cattle is produced by cow-calf operations. Dairy herds are much smaller.

  • Gareth // February 3, 2008 at 2:54 am

    Not in NZ (figures here). And in NZ, about half of our emissions come from agriculture, which is unusual in comparison with most developed countries.

  • hankroberts // February 3, 2008 at 3:04 am

    Surrendering to WordPress for the name.

    Tamino, a bit tangential, but here’s a parallel example where a variety of stations are used, an argument was made using a different set that gave different results, and an argument was made that some stations should be thrown out.

    Nobody’s going to be photographing these. But I think the authors’ rebuttal of the suggestions is useful as an illustration of how to think here.

    http://www.sciencemag.org/cgi/content/full/319/5863/570c

    This is just the ‘response to comments’ page, with small illustrations; see your library if you don’t subscribe to Science for the full text. But there’s enough here to support the authors’ point:

    “Quality control of the observations is essential, but it must be based on rational criteria and applied to all stations equally. The exclusion of stations based on their unique signal risks removing real information. Early CO2 observations are too sparse and precious to reject based on subjective grounds. Thus, the aim of inversion is to extract signals from the early data and to quantify their associated errors. Although our analysis contains uncertainties partly underestimated in (1), both our inversion and process model results suggest a persistence of the 1981 to 2004 trends when applied to data for 2005 and 2006 (20).”

    As always — see and check the footnotes; see the various helpful links for “Related Content” and “Searches” and “Related Resources.”

    This is a developing area; the whole issue of monitoring carbon sinks by species and by ocean is fascinating, but off topic. I just figured the point worth making: keeping station data is good.

  • fred // February 3, 2008 at 7:34 am

    BPL, thank you for the references to John Daly and Still Waiting, which I had not previously seen, though Still Waiting spends a lot of time on funding controversies which seem tangential to the subject at issue.

    However when you say

    “If you try to reproduce 20th- and 21st-century warming, you get a lousy match until you factor in CO2. Then you get a nice match.”

    That’s not quite what one is looking for. It smacks too much of saying that there is nothing else we can think of, in an area of study where understanding is still not perfect.

    It is not a matter of denying that the present warming is GG-caused. It’s a matter of finding some unequivocal signal that could only result from such warming. We know that previous warmings in historical time happened, and didn’t result from GG effects. So it is quite a reasonable question to ask. Were this medicine, for instance, we might be asking: is there some signature of polluted water-borne as opposed to miasma-borne infections? And we’d soon find one, as Snow did in Soho for cholera.

    At one point I thought there was an agreed one, to do with warming in the tropical troposphere, but that was shot down. There must however be one.

  • dhogaza // February 3, 2008 at 9:06 am

    And just think how much time and money Anthony and his volunteers would have wasted if the people in charge had kept decent photographic evidence…

    They don’t need photographic evidence, and they know they don’t need photographic evidence, and thus far the effort to undermine the surface temp record with photographic evidence has fallen flat.

    Though Stephen Mosher is holding out hope that finishing the project will still turn everything on its head.

    Meanwhile, when does the project to photograph the satellites in orbit begin? I’m still waiting for that one. Photographic evidence being so all-fired important and all that.

  • dhogaza // February 3, 2008 at 1:39 pm

    That’s not quite what one is looking for. It smacks too much of saying that there is nothing else we can think of, in an area of study where understanding is still not perfect.

    It’s not for lack of trying, though. People propose different explanations over and over and over again. Problem is, they fail the “fits the data” test.

    At some point, when the other possibilities have been exhausted, you have to accept that observations that are consistent with known science might just be the basis for stating that we know what’s going on.

    It is not a matter of denying that the present warming is GG-caused. It’s a matter of finding some unequivocal signal that could only result from such warming.

    Predictions - stratospheric cooling, for one - based on the physics underlying CO2-forced warming, and incompatible with solar forcing being the cause, etc, have been confirmed.

    But something tells me no amount of confirmed predictions will convince you.

    We know that previous warmings in historical time happened, and didn’t result from GG effects. So it is quite a reasonable question to ask.

    Doesn’t seem reasonable to me at all. No scientist approaches the problem assuming “there must be,” or even “there most likely is,” a single cause for all warming observed, past or present.

    Were this medicine, for instance, we might be asking: is there some signature of polluted water-borne as opposed to miasma-borne infections? And we’d soon find one, as Snow did in Soho for cholera.

    Scientists have asked similar questions, and have come up with similar answers (i.e. stratospheric cooling, etc). The problem is that you have decided what’s good enough for medicine isn’t good enough for climate science.

  • steven mosher // February 3, 2008 at 2:41 pm

    Timothy, actually I did show a difference between CRN1234 and CRN5, and so did JohnV: a difference in trend. CRN5 was higher. Don’t believe me? Run the code yourself. Use the latest stations. It’s all open.

    Second, I’m not the one who first criticized the existing stations: Hansen and Peterson did. Hence the CRN.

    Third, no one suggests junking 1221 sites. The CRN will rely on a subset of the 1221, a subset of good stations, so eventually the 4s and 5s will be phased out. NOAA and GISS routinely exclude sites from analysis on quality issues. They can because the USA is oversampled for climate studies. 100 years from now, you’ll probably only have CRN sites.

    Fourth: climate scientists (Geiger, Oke, Jones, Peterson) all recognize the potential for microsite and microclimate bias. Some have argued (Jones, I believe, and Eli amongst others) that the bias is both negative (shading) and positive (heat storage). That’s a testable hypothesis.

    Fifth: I don’t think eliminating 200 or so class 5 stations will change the USA trend substantially, maybe not even outside the existing error band. And since the USA is such a small part of the global temperature, it won’t affect the global record substantially. I think it’s good practice to follow the quality guidelines you set up.

  • JCH // February 3, 2008 at 4:15 pm

    “Not in NZ …”

    Wow, you essentially have one dairy cow per New Zealander. In the United States we have 11 million dairy cows.

  • dhogaza // February 3, 2008 at 7:09 pm

    Wow, you essentially have one dairy cow per New Zealander. In the United States we have 11 million, er, three dairy cows per New Zealander.

    There, that oughtta give the denialist camp something to work with!

  • Timothy Chase // February 3, 2008 at 7:48 pm

    Steve Mosher wrote:

    Timothy, actually I did show a difference between CRN1234 and CRN5, and so did JohnV: a difference in trend. CRN5 was higher.

    I stand corrected: you are right about the long-term trend. But by how much? And what about recent years?

    According to John V’s analysis, the trends were close, and crn5 has been showing a consistently cooler trend since roughly 1990 and has on the whole run cooler since the late 1960s.

    Please see:

    I think these plots speak for themselves, but here are my conclusions:
    - There is good agreement between GISS and CRN12 (the good stations)
    - There is good agreement between GISS and CRN5 (the bad stations)
    - On the 20yr trend, CRN12 shows a larger warming trend than CRN5 in recent years

    To be honest, this is starting to look like a great validation of GISTEMP.

    The next step is probably to look at the subset of rural CRN12 stations. Can anybody get me a list of these?

    A First Look at the USHCN Quality Classification, Comment 88
    http://www.climateaudit.org/?p=2061#comment-138432

    *

    Steve Mosher wrote:

    Don’t believe me? Run the code yourself. Use the latest stations. It’s all open.

    I believe that if the results were any different from what John V. initially found, you would be publicizing them.

    John V. has laid bare the problem with Watts’ project: in attempting to create the appearance that the warming trends that we’ve seen over the past century may all be a figment of our instrumentation, he invited people like John to investigate just how well that instrumentation has performed.

    Quick question: did John simply use the raw data from the stations? Or did he employ the same methodologies which GISS uses to eliminate various biases, station jumps, etc?

    I am assuming he hadn’t. What he did principally was create his own area-averaging of temperature. That is part of what is neat about his project. He didn’t seek to duplicate what GISS or any other agency performs, but to arrive at an altogether new analysis — which nevertheless shows much the same results. But in so doing, he has undoubtedly left out some of the quality controls which would have resulted in an even closer fit between the three curves.

    *

    Steve Mosher wrote:

    Second, I’m not the one who first criticized the existing stations: Hansen and Peterson did. Hence the CRN.

    Third, no one suggests junking 1221 sites. The CRN will rely on a subset of the 1221, a subset of good stations, so eventually the 4s and 5s will be phased out. NOAA and GISS routinely exclude sites from analysis on quality issues. They can because the USA is oversampled for climate studies. 100 years

    So it would seem that they are already doing what you so strenuously advocate. If so, what purpose does your earlier rhetoric serve? In the meantime, the experts are convinced that they can correct for various biases and have performed studies which support this conclusion. If they know what the biases are for various stations, they can correct for those biases. Doing so, they are able to get more information on which to base their estimates.

    Given the law of large numbers, the uncertainty of an average is smaller than the individual uncertainties of the elements being averaged, and the greater the number of elements, the more the uncertainty of the average is reduced. Take a die: the value on any roll will be anywhere from 1 to 6. There is a great deal of uncertainty. But roll the die a thousand times and average all those rolls, and the average is very likely to be quite close to 3.5.
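
    (The die example is easy to check numerically; a quick sketch:)

        import numpy as np

        rng = np.random.default_rng(2)
        rolls = rng.integers(1, 7, size=1000)  # a thousand rolls of a fair die
        print(rolls.mean())  # very likely close to 3.5
        # The uncertainty of the average shrinks as 1/sqrt(n):
        print(rolls.std(ddof=1) / np.sqrt(rolls.size))  # ~0.05, vs ~1.7 for one roll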

    Reduce the number of stations, and you reduce the number of points the average is based on. The average may end up being less accurate than it would have been before; and if we know how to account for the biases, it certainly will be. For example, if a given “bad” station consistently measures half a degree higher, we can correct for this bias simply by subtracting half a degree. However, in terms of the actual trends in temperature, since it is consistently half a degree higher, it will have no effect upon the trend.

    Who should be the judge of the tradeoffs involved? Someone who walks off the street, perhaps after having read an angry editorial to the effect that global warming is all bunk — or someone who is intimately familiar with the system?

    *

    Steve Mosher wrote:

    Fourth: climate scientists (Geiger, Oke, Jones, Peterson) all recognize the potential for microsite and microclimate bias. Some have argued (Jones, I believe, and Eli amongst others) that the bias is both negative (shading) and positive (heat storage). That’s a testable hypothesis.

    As a matter of principle, the general microclimate bias for all potential sites must be neutral. In any given regional climate, the biases of all microclimates must cancel since their individual biases are simply deviations from their average — which is the regional average temperature. But the actual microclimates of a network of stations may have a bias, either positive or negative in its sign.

    What about microsite issues? They may on the whole have some net effect, positive or negative. But any given issue at any given site will produce a jump only once — just the sort of thing which the methodologies employed by the climatologists look for. It will not produce a trend.

    *

    Steve Mosher wrote:

    Fifth: I don’t think eliminating 200 or so class 5 stations will change the USA trend substantially, maybe not even outside the existing error band. And since the USA is such a small part of the global temperature, it won’t affect the global record substantially. I think it’s good practice to follow the quality guidelines you set up.

    Those are guidelines on the establishment of new stations. They are not guidelines on which existing stations must be discarded — and it would be unreasonable to apply them as such, particularly where we know how to correct for their biases, or where discarding them would actually make our estimates of trends less accurate than they would be otherwise.

    This may not be particularly significant in terms of estimating either the annual global average temperature or even the US annual average, but it may be quite significant on a regional or monthly scale. And we need that information for estimating current trends and testing models.

    *

    Now you have stressed in this most recent post how much what you are advocating presumably reduces to nothing more than what the experts themselves are advocating and in the process of implementing. Fair enough. I support them.

    However, I cannot support your efforts. You have a history of employing rhetoric to enlist an army of the disgruntled by creating the appearance that global warming trends are little more than a product of instrument error when everything indicates otherwise.

    And even if what you ultimately advocate in actual scientific practice is nothing more nor less than what they themselves advocate, you seek to politicize the process, having largely misinformed and disgruntled amateurs impose “the proper approach” upon the experts — who are clearly in a better position to determine what the proper approach (in terms of methodology, network utilization or resource allocation) is. This would set a bad precedent.

  • Timothy Chase // February 3, 2008 at 9:33 pm

    fred wrote:

    It is not a matter of denying that the present warming is GG-caused. It’s a matter of finding some unequivocal signal that could only result from such warming.

    Well, you could try looking at the reduction in the radiation which is making it to the top of the atmosphere. Check the spectra: you have the signature of carbon dioxide and increasing levels of water vapor. A reduction in the radiation reaching the top of the atmosphere necessarily implies that more thermal energy is being kept in play within the climate system — and that necessarily heats things up.

    It’s been done.

    Please see:

    Here we analyse the difference between the spectra of the outgoing longwave radiation of the Earth as measured by orbiting spacecraft in 1970 and 1997. We find differences in the spectra that point to long-term changes in atmospheric CH4, CO2 and O3 as well as CFC-11 and CFC-12. Our results provide direct experimental evidence for a significant increase in the Earth’s greenhouse effect that is consistent with concerns over radiative forcing of climate.

    Harries et al, Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997, Nature. 2001 Mar 15;410(6826):355-7

    Based on atmospheric modeling, the spectra were exactly what would be predicted given trends in atmospheric constituents.

    Please see:

    The simulations shown in Fig. 1b and c were calculated as follows. Profiles of atmospheric temperature and water vapour were extracted covering the same region and three-month time periods from the NCEP (National Centers for Environmental Prediction, Washington) reanalysis project. Stratospheric ozone changes were estimated using measured trends extrapolated back to 1970, whereas tropospheric ozone changes were calculated using a three-dimensional chemical transport model, forced by realistic emission scenarios. Remaining gaseous concentrations were taken from the relevant IPCC values for CO2, CH4, N2O, CFC-11 and CFC-12, in 1970 and 1997. The MODTRAN3 code was used to calculate the expected radiance spectra in 1970 and 1997. All the principal features due to changes in CO2, CH4, O3, temperature and humidity are well modelled, as are the small changes due to the chlorofluorocarbons (for example, at 850 and 920 cm-1) and weak CO2 bands (for example, at 795 cm-1).We note that the main features of the observed difference spectrum can only be reproduced by including the long-term changes in CH4, CO2, O3 and the chlorofluorocarbons: inter-annual and short-term variability is not sufficient.

    ibid.

    Of course you might still ask whether this is an unequivocal signal. Got any alternative explanation? That’s how science operates.

    Spectra tend to be fairly specific. I suppose a mysterious force might be grabbing the energy at just the right wavelengths and in the right amounts to produce what we are seeing — but if that and nothing more were the hypothesis you were proposing as an alternative to our current scientific understanding, it would belong more to a freshman course in philosophy than any discussion of actual science. And you would also have to explain why the various greenhouse gases aren’t operating in the way that physics says they must.

  • wattsupwiththat // February 4, 2008 at 12:25 am

    Zeke wrote:

    “But let’s stop fighting old battles. The work of the surfacestations project is incomplete, and it shouldn’t really be claimed to vindicate or tarnish the validity of the GISS record at this point in time.”

    I agree. It does need more stations to be statistically representative. Hopefully we’ll be able to get the majority of the stations by the end of this summer and improve the geographic distribution. In the meantime, if you feel like helping in the effort so that we can get to a point where we can answer it, I’d welcome having you or others (skeptical or not) help in adding stations near where you live.

    All you have to do is sign up at the website http://www.surfacestations.org. Help is especially needed in the Midwest. Thanks for your consideration.

  • Timothy Chase // February 4, 2008 at 3:23 am

    wattsupwiththat wrote:

    In the meantime, if you feel like helping in the effort so that we can get to a point where we can answer it, I’d welcome having you or others (skeptical or not) help in adding stations near where you live.

    As I said earlier:

    And it appears that “bad stations” can perform just as well as “good stations” if one knows how to correct their biases.

    Please see:

    Ethon checks out the air conditioning. . .
    Thursday, September 20, 2007
    http://rabett.blogspot.com/2007/09/ethon-checks-out-air-conditioning.html

    When “bad” stations produce the same trends as “good” stations, pictures may be used for deception and serve as political propaganda.

  • Hank Roberts // February 4, 2008 at 11:46 am

    > where we can answer it

    First, you plan your data collection and decide on the statistic you will use and the hypothesis for which you’ll use it — before starting to collect data.

    Got a rating sheet to use with each photograph? Got a comparison of raters to see how consistent they are?
    Got a hypothesis about how the ratings from the pictures, and whatever else, will look? How many data points do you need to test your hypothesis? What test do you plan?

    All this was decided before you started sending people out with cameras. Where is it recorded, so you have a possibility of answering a specific question with some confidence?
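
    (One standard way to answer the rater-consistency question is an agreement statistic such as Cohen’s kappa; a sketch with made-up ratings from two hypothetical surveyors, not real survey data:)

        import numpy as np

        # Made-up CRN quality ratings (classes 1-5) of twelve stations by two
        # hypothetical raters; real data would come from the survey rating sheets.
        rater_a = np.array([1, 2, 2, 3, 5, 4, 3, 2, 5, 1, 4, 3])
        rater_b = np.array([1, 2, 3, 3, 5, 4, 2, 2, 5, 1, 4, 4])

        classes = np.unique(np.concatenate([rater_a, rater_b]))
        p_obs = np.mean(rater_a == rater_b)  # raw agreement
        # Chance agreement from each rater's marginal class frequencies:
        p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in classes)
        kappa = (p_obs - p_chance) / (1.0 - p_chance)
        print(round(kappa, 2))  # 1 = perfect agreement, 0 = no better than chance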

  • Barton Paul Levenson // February 4, 2008 at 12:58 pm

    fred posts:

    [[“If you try to reproduce 20th- and 21st-century warming, you get a lousy match until you factor in CO2. Then you get a nice match.”

    That’s not quite what one is looking for. It smacks too much of saying that there is nothing else we can think of, in an area of study where understanding is still not perfect.]]

    No, you’re still not getting it. Our understanding of how the greenhouse effect works is very mature. The findings with the climate record are only correlation.

    Again, with emphasis: The theory of global warming does not depend on climate correlations. It depends on radiation physics.

  • fred // February 5, 2008 at 10:37 am

    Timothy C, thank you for the informative references.

    dhogaza says “The problem is that you have decided what’s good enough for medicine isn’t good enough for climate science.”

    No I haven’t. Nor is it true that I am unconvinceable about AGW. You will persuade no one of the merits of your case by insulting them and their motivation.

    BPL, I am not persuaded by your last point. The degree of warming due to a certain increase in the GG level absent feedback effects is certainly physics. But I am not clear what laws of radiation physics would have to be abandoned did we discover negative feedbacks rather than positive ones.

    If there is a given level of energy in an electric car battery, how many foot-pounds of work it will deliver is physics. How far that will propel a given car is engineering, and if it goes 20 rather than 40 miles, we do not have to abandon any laws of physics. Just look harder at the design.

    What is wrong with this point of view?

  • steven mosher // February 5, 2008 at 3:22 pm

    Timothy you wrote

    “I stand corrected: you are right about the long-term trend. But by how much? And what about recent years?

    According to John V’s analysis, the trends were close, and crn5 has been showing a consistently cooler trend since roughly 1990 and has on the whole run cooler since the late 1960s.”

    I think you misunderstand the difference between my work and JohnV’s work. Here was the hypothesis I thought should be tested: CRN5 shows a warming trend that is larger than CRN1234. Simply: subtract CRN1234 from CRN5, then test that time series for a positive trend. The point: if CRN5 shows a warming bias greater than CRN1234, then the trend will be positive, and the other studies by climate scientists (for example the CRN study) that show siting concerns are real will be confirmed.
    So that’s what I did. If you like, you can RTFM, redo the study and confirm what I confirmed. NOAA was right to institute the CRN, because siting issues bias the results. So their quality concerns, the quality concerns of Peterson, Vose and Hansen, are confirmed.
    If you want to deny that climate science, feel free. The code is there, the files are available.
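
    (A sketch of that difference-and-trend test; the series below is a synthetic placeholder for the Opentemp output, and the naive OLS significance shown here ignores autocorrelation, which a real test would need to handle:)

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1900.0, 2006.0)

        # Synthetic placeholder for the CRN5-minus-CRN1234 difference series;
        # the real series would come from two Opentemp runs.
        diff = 0.001 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

        # OLS slope of the difference series and its naive standard error
        # (naive because it ignores autocorrelation in the residuals):
        x = years - years.mean()
        X = np.column_stack([np.ones(years.size), x])
        beta, *_ = np.linalg.lstsq(X, diff, rcond=None)
        resid = diff - X @ beta
        se = np.sqrt(resid @ resid / (years.size - 2) / np.sum(x ** 2))
        print(f"trend: {100 * beta[1]:+.2f} C/century, t = {beta[1] / se:.1f}")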

    I’ll discuss your other points later, after work.

  • chriscolose // February 5, 2008 at 4:10 pm

    Fred,

    I am not sure how one can make a case for negative feedbacks given the evidence out there today. There is nothing in the paleoclimate record, or in line with observations, to suggest that as the OLR is decreased, the temperature drops or stays approximately the same because of some reaction in the climate system that pushes back toward initial conditions. From a purely physical standpoint, and from our understanding of the radiative lines of CO2, you must warm 1.2 C per 2x CO2 to satisfy Stefan-Boltzmann when the planet reaches a new radiative equilibrium. From Clausius-Clapeyron and our understanding of H2O (gas) as a greenhouse gas, we know that water vapor approximately doubles the sensitivity of climate, and the lapse rate partially offsets this. Albedo (ice feedback) is not hard either.

    I suppose there is still a case to be made regarding clouds, and in fact one needs to make a case for a positive cloud feedback to get a climate sensitivity of 3 C or more. I see no evidence to suggest clouds are so negative as to swamp out all the other feedbacks.

    If we define the Earth’s energy budget as a simple S/4 (1 - a) + G = σT^4, then increasing G and decreasing a means T must rise, and I don’t see any work on feedbacks in the up-to-date literature to suggest the rise is trivial. In fact you go from about 150 W/m2 to about 170 W/m2 absorption by the atmosphere with feedbacks under 2x-CO2-like conditions.
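
    (Plugging round numbers into that budget shows the size of the effect; a toy sketch, not a climate model, with S and a taken as assumed round values:)

        # Toy version of the budget S/4*(1-a) + G = sigma*T^4, solved for T
        # before and after the ~150 -> ~170 W/m^2 change in atmospheric
        # absorption mentioned above.
        sigma = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
        S, a = 1370.0, 0.3   # solar constant and planetary albedo (assumed)

        def temperature(G):
            return ((S / 4.0 * (1.0 - a) + G) / sigma) ** 0.25

        T0, T1 = temperature(150.0), temperature(170.0)
        print(f"{T0:.1f} K -> {T1:.1f} K (+{T1 - T0:.1f} K)")  # ~288 K -> ~292 K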

  • luminous beauty // February 5, 2008 at 4:59 pm

    fred,

    The performance of the climate engine is running on spec, according to theory and empirical measurement. What evidence do you have to suspect there is some unknown negative feedback necessary to explain some unobserved loss of performance? Especially in the light of more recent ocean and ice data that indicate the highly conservative IPCC report seems to be underestimating the positive feedbacks.

    Speculating on imaginary forces to explain imaginary phenomena is a useful exercise in writing fantasy fiction. For understanding the real world, not so much.

    Steven Mosher,

    It is refreshing to the human spirit how humbly you admit that NASA’s corrections to station biases are justified.

  • steven mosher // February 5, 2008 at 5:08 pm

    “I stand corrected: you are right about the long-term trend. But by how much? And what about recent years?

    “According to John V’s analysis, the trends were close, and crn5 has been showing a consistently cooler trend since roughly 1990 and has on the whole run cooler since the late 1960s.

    “Please see:

    “I think these plots speak for themselves, but here are my conclusions:
    - There is good agreement between GISS and CRN12 (the good stations)”

    JohnV sought to test the hypothesis that GISS and CRN12 would show no difference, namely GISS - CRN12 = 0. Given that the 2sd error on GISS/HadCRU is something on the order of +/- 0.1C, and given that the CRN12 station count was low, I thought this was not the best test of the concern at hand, namely that violations of siting guidelines may bias our understanding of the climate. People who feel that GISS is under attack will of course want to test this. But the test has no power. I wanted to test something different. I wanted to prove that the climate scientists at NOAA were right: siting matters. I wanted to prove that Oke was right: siting matters. Ideally, I would test CRN5 - CRN1, but the CRN1 were sparse. The CRN5 were also sparse. So I focused on a test that looked at the worst stations (CRN5) versus the rest. Tim, I’m just confirming what Peterson and Hansen already claimed: the present network is quality-challenged. I don’t get your resistance to tests that aim to confirm their position.

    “- There is good agreement between GISS and CRN5 (the bad stations)”

    My approach was not to test for good agreement. My test was mathematical: use Opentemp, generate a series for CRN1234, generate a series for CRN5, and test whether CRN5 - CRN1234 showed a positive trend. This is the WEAKEST test I could construct. Ideally I would look at CRN5 - CRN1; that test would tell me if Hansen and Peterson were correct when they criticized the quality of the existing network and established the CRN. When Anthony finishes the survey, we should have enough class 1 stations to perform this test with some real power. Hopefully, I will vindicate Hansen and Peterson and their criticism of the existing network.

    “- On the 20yr trend, CRN12 shows a larger warming trend than CRN5 in recent years”

    I played around with some cherry-picking. I have lots of charts that show that the introduction of the MMTS and the change in paint formula on the Stevenson screen introduce a bias. Many people asked me to look at specific time periods, throw out early data, look at 1975 and after; all ad hoc. Sites become class 5 over time. We don’t have the historical data to tell when. When did that tree grow? When was that AC unit added? I wanted to avoid all cherry-picking and do the toughest test possible: does CRN5 warm more over the entire record than CRN1234 does? Tough test.

    “To be honest, this is starting to look like a great validation of GISTEMP.”

    I’m interested in government agencies following the quality guidelines they set up. They should be open with their data and open with their code (free the code, remember that! One of my finer moments; glad Dr. Hansen saw fit), and they should follow the data quality law. Others have seen this as an opportunity to skewer AGW. I encourage them, of course, because I share a value with them on the issue of being open, and I share their concern about quality.

    “The next step is probably to look at the subset of rural CRN12 stations. Can anybody get me a list of these?”

    The rural classification has been a concern of mine. When I reviewed the Peterson study on UHI I found that he classified Mineral, California as a peri-urban site. It’s not. When I discovered that Orland, California was classified as an URBAN site I was also confused. Having been to both, I was somewhat suspect of a methodology that relies on outdated population data and outdated “nightlights” photos. JohnV wanted to go down the route of comparing CRN12 rural to GISS. His goal: validate GISS, preserve AGW. My goal was different: validate Hansen’s and Peterson’s concerns about the network; confirm the CRN studies that showed that siting by asphalt causes a bias. He wanted to “validate” GISS. I am more interested in validation of the CRN, Hansen, Peterson, Oke.

    “I believe that if the results were any different from what John V. initially found, you would be publicizing them.”

    I am. here. now. I have. on CA. there. then.

    “John V. has laid bare the problem with Watts’ project: in attempting to create the appearance that the warming trends that we’ve seen over the past century may all be a figment of our instrumentation, he invited people like John to investigate just how well that instrumentation has performed.”

    Anthony has his beliefs. I don’t share them all. I tested what I thought was interesting: do class 5 sites show a higher warming trend? JohnV defended GISS. That’s a different question. One could argue that HadCRU defends GISS quite well, or RSS or UAH. That’s distinct from the question I looked at.

    “Quick question: did John simply use the raw data from the stations? Or did he employ the same methodologies which GISS uses to eliminate various biases, station jumps, etc?”

    Timothy: RTFM. GISS does not eliminate bias from “station jumps”; that’s done at NOAA. Sheesh. Station moves are accounted for in USHCN SHAP processing. First comes the TOBS adjustment, then SHAP, then FILNET. JohnV worked from two different sets of data, GHCN and USHCN. I’m not sure if he has hooked into the latest USHCN v2, which has adjustments that rely on change-point analysis. NOAA promised these changes in July 2007, but I haven’t seen links to new files. When the USHCN v2 files have been used for a while, I think it would be advisable to redo my tests with that data. Why? Because they will have performed additional quality checks for station discontinuity.

    “I am assuming he hadn’t. What he did principally was create his own area-averaging of temperature. That is part of what is neat about his project. He didn’t seek to duplicate what GISS or any other agency performs, but to arrive at an altogether new analysis — which nevertheless shows much the same results. But in so doing, he has undoubtedly left out some of the quality controls which would have resulted in an even closer fit between the three curves.”

    “So it would seem that they are already doing what you so strenuously advocate. If so, what purpose does your earlier rhetoric serve? In the meantime, the experts are convinced that they can correct for various biases and have performed studies which support this conclusion. If they know what the biases are for various stations, they can correct for those biases. Doing so they are able to get more information from which to base their estimates.”

    You misunderstand. Until the stations have been surveyed, you cannot study the bias. You
    can HYPOTHESIZE that the bias is neutral. To test this, you have to examine the stations,
    categorize them for site compliance, and then test the hypothesis.

    “Given the law of large numbers, the uncertainty of an average is smaller than the individual uncertainties of the elements that are being averaged, and the more the number of elements, the more the uncertainty of the average is reduced. ”

    This is beside the point. The point is to confirm
    the original CRN study, which showed a warming bias based on siting factors.

    More later. The statistical issues surrounding bias adjustments, and the assumptions of i.i.d. errors and the law of large numbers,
    are quite interesting. Google them while I am gone,
    so you can be prepared.

    Question: If I take 10 measurements and then use 9 of those measurements to adjust a bias in the 10th measurement, how many independent measurements do I have? Just a question.
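
    A small simulation makes the point. Assumptions stated up front: 10 i.i.d. normal measurements sharing a common bias, with the mean of the first 9 used to adjust the 10th. The adjusted value ends up correlated with its nine donors, so the 10 numbers are no longer independent:

        # Simulation of the question above. Assumptions: 10 i.i.d. normal
        # measurements sharing a common bias of 0.5; the mean of the first
        # 9 is used to "bias-adjust" the 10th.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(loc=0.5, scale=1.0, size=(100_000, 10))

        est_bias = x[:, :9].mean(axis=1)   # bias estimated from 9 measurements
        adjusted = x[:, 9] - est_bias      # "adjusted" 10th measurement

        # The adjusted value is correlated with each donor measurement;
        # theory gives r = -1/(3*sqrt(10)), about -0.105.
        r = np.corrcoef(adjusted, x[:, 0])[0, 1]
        print(f"correlation with a donor: {r:.3f}")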

  • Barton Paul Levenson // February 5, 2008 at 5:39 pm

    fred posts:

    [[BPL, I am not persuaded by your last point. The degree of warming due to a certain increase in the GG level absent feedback effects is certainly physics. But I am not clear what laws of radiation physics would have to be abandoned did we discover negative feedbacks rather than positive ones.
    If there is a given level of energy in an electric car battery, how many footpounds of thrust that will deliver is physics. How far that will propel a given car is engineering, and if it goes 20 rather than 40 miles, we do not have to abandon any laws of physics. Just look harder at the design.
    What is wrong with this point of view?
    ]]

    The fact that people have been looking for the negative feedbacks that would prevent global warming for several decades now without finding any. Lindzen tried hard with his tropical cloud “infrared iris,” but satellite observations shot it down. The negative feedbacks you require don’t seem to exist.

  • Brian Schmidt // February 6, 2008 at 6:22 am

    Tamino (if you’re still reading this far into comment threads), I’m curious why you’re not interested in betting money over this proposal. All investments are gambles, after all, and you can design a climate change bet to make it safer than a stock investment. You don’t even need escrow - bet someone you trust, have an escalating series of bets, or come up with some other imaginative approach. Or post it at longbets.org, and double whatever amount of money you would otherwise give to charity.

    Many of the denialists I challenge say that they “don’t gamble.” Any particular person saying that may be honest, but as a group reaction I consider it disingenuous. You might help me understand this reaction, which I don’t consider to be entirely rational.

    (For what it’s worth, I’ve gambled a total of $100 or so in my entire life prior to challenging denialists, so it’s not a hobby of mine.)

    [Response: Betting doesn’t prove anything. I’ve been accused of being mistaken, deluded, idiotic, downright pigheaded in my beliefs about AGW, but I’ve never been accused of not being sincere about it. And by the time the bet pays off, there’ll be no more doubt to dispel. In the meantime, my money would be tied up just “waiting” — which is necessary, because the people I trust enough to bet without anyone holding the money aren’t interested in betting. Last but by no means least, I’m not a young man; I may not live long enough to collect — that would be a bummer!

    But it’s not up to me to decide how other people use their money.]

  • Meltwater // February 6, 2008 at 11:35 am

    Forgive me if I don’t make sense, and correct me if and wherever I’m wrong, but, if I’m not, climate change commitment results from ocean thermal inertia and other factors that delay the response of global temperatures to a climate forcing such as carbon dioxide emissions. Different models have given various estimates of the delay and of the future warming and sea level rise that commitment makes inevitable, even if atmospheric CO2 were to stop rising. Estimates of the delay, or lag time between CO2 reaching a given concentration and the temperature increase reaching its equilibrium response, have stayed within a range of 15 to 35 years. That raises an interesting question. As Kevin and J have explained in this thread, we already had enough data by 2002 for Tamino to have won his “bet” back then. Quoting J:

    Pretend that it’s January 1997, and you’re looking at the GISTEMP data. It looks like temperatures rose from 1975-1990, and then leveled off. Maybe global warming stopped back in 1990? Temperatures have been flat from 1990-1996! So I repeated Tamino’s analysis more or less exactly, from the perspective of someone back in early 1997. From 1975-1990, temperature rose at a rate of 0.0214 deg.C/yr, and the residuals have a standard deviation of 0.09753 deg.C. I projected that trend out into the distant future of the 21st century (all the way to 2007!), along with its ±2 SD envelope. I also looked at the 1990-1996 average, extended that as a “no-trend” line, and gave it the same ±2 SD envelope. Just like Tamino.

    Now, let’s take our time machine forward to 2008, and look at what really happened to the climate. Since 1996, there have been *no* points that fell outside the “warming-trend” envelope. On the other hand, there have now been *8* years that fell more than 2 SD above the “no-trend” line (1998, 2001, and every year since). In other words, using Tamino’s methodology, if we had made this bet back in January 1997, it would have been resolved definitively by 2001 (well, actually in early 2002, when the 2001 annual mean became available…) and nothing since then would have contradicted this.

    Here is my question. If we compare the changing rate of increase in atmospheric CO2 concentration since the IGY to the changing slant of the temperature trend envelope (of plus or minus two standard deviations) since 1975, do we have enough real-world measurement data to narrow the range of our estimates of the length, in years, of climate change commitment delay?
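
    As an aside, for anyone who wants to repeat the test J describes above, here is a minimal sketch. The series below is synthetic (a 0.02 deg.C/yr trend plus noise), a labeled stand-in for the real annual GISTEMP means, so the sketch runs on its own:

        # Envelope test along the lines J describes. Substitute real
        # annual GISTEMP means for the synthetic series to get J's numbers.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1975, 2008)
        anom = 0.02 * (years - 1975) + rng.normal(0.0, 0.1, years.size)  # synthetic

        fit = years <= 1990
        slope, intercept = np.polyfit(years[fit], anom[fit], 1)
        sd = (anom[fit] - (slope * years[fit] + intercept)).std(ddof=2)

        trend = slope * years + intercept                      # warming-trend line
        flat = anom[(years >= 1990) & (years <= 1996)].mean()  # "no-trend" line

        later = years > 1996
        print("outside the trend envelope:",
              np.sum(np.abs(anom[later] - trend[later]) > 2 * sd))
        print("more than 2 SD above the flat line:",
              np.sum(anom[later] > flat + 2 * sd))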

  • Meltwater // February 6, 2008 at 12:05 pm

    On climate change commitment delay, Gavin Schmidt at RealClimate summarizes various GCM findings by estimating several decades are needed to reach 80% of the equilibrium response and a lot longer to reach 100%. He is, as I am not, involved in original research; my reading gave shorter estimates of 15 to 35 years, as I stated above. I guess I am feeling more confused, and I hope someone can clarify matters.

    [Response: Well, 35 years is arguably “several decades.” I’d guess that one of the reasons Gavin’s statement is not more precise is that we don’t know from observations. The problem is that there’s more than one “time scale” in the climate system, and the data since, say, the IGY is too noisy and too *brief* to pin down the longer time scales (which is what we’re really after). Disentangling multiple time scales from noisy data shorter than some of the time scales we’re looking for is risky business.

    And he knows quite a bit more about the issue than I do, too!]

  • Brian Schmidt // February 7, 2008 at 4:27 am

    Tamino wrote:

    “…by the time the bet pays off, there’ll be no more doubt to dispel. ”

    There’s no rational doubt to dispel even now, and I’ll bet that 10 and even 15 years down the line there will still be irrational doubt, just like the dead-enders who still deny the CFC-ozone link. Hopefully the denialists will just be irrelevant by then.

    Your sincerity point is a good one though - there’s no reason to question the sincerity of someone who supports the mainstream consensus, while there’s every reason to question the commitment of those who adopt a wild alternative to the scientific consensus that conveniently fits their political mindset.

  • Meltwater // February 11, 2008 at 11:33 am

    Let me ask a different question about climate change commitment delay, this time without my earlier focus on its full equilibrium duration. Do we have enough data on atmospheric CO2 since the IGY and on global temperature since 1975 to say how much time elapses between an increase in the rate of CO2 accumulation and the spot on the temperature curve where the warming response begins to be statistically significant?

  • Hank Roberts // February 11, 2008 at 7:28 pm

    Yes, it took about 20 years after 1975 for the warming signal to emerge from the yearly variability.

    http://scholar.google.com/scholar?sourceid=Mozilla-search&q=warming+signal+emerge+background

    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F1520-0442(1996)009%3C2281:DGGICC%3E2.0.CO%3B2

  • Hank Roberts // February 12, 2008 at 1:01 am

    The second link, the one I gave as an example of the 20 years it took, appears to have expired; it’s down the first page in that Scholar search result. Look for this one:

    Detecting Greenhouse-Gas-Induced Climate Change with an Optimal Fingerprint Method

    G.C. Hegerl, H. von Storch, K. Hasselmann, B.D. Santer
    Journal of Climate, Volume 9, Issue 10 (October 1996)

    “… a given but unknown signal embedded in a noise background … In the case of the greenhouse warming signal, the mean warming is expected …”

    “The null hypothesis that the latest observed 20-yr and 30-yr trend of near-surface temperature (ending in 1994) is part of natural variability is rejected with a risk of less than 2.5% to 5% (the 5% level is derived from the variability of one model control simulation dominated by a questionable extreme event). In other words, the probability that the warming is due to our estimated natural variability is less than 2.5% to 5%. The increase in the signal-to-noise ratio by optimization of the fingerprint is of the order of 10%–30% in most cases….”

    1996, there you are

  • John Tofflemire // February 12, 2008 at 1:46 pm

    Tamino:

    You said, “So, my “bet” is that as soon as there are two years (not necessarily consecutive) which are in either decisive region, the side with two decisive years is declared the winner.”

    Curious that in the initial month of your “bet” the temperature anomaly has fallen well into the “not-warming wins” category. In fact, the year-on-year decline in the NOAA series of .65 degrees is the greatest recorded over its 128-year length. Given that February may be even cooler than January, you are starting out in a bit of a hole. While the average for 2008 will probably end up above that lower blue line, my bet is that you will lose, perhaps well before the end of the “bet”.

    My own view is that anthropogenic warming is significant but that natural factors still dominate. My “bet” is that there will be significant enough natural cooling over the next five to seven years to force a rethink of the current consensus. Things should get interesting going forward.

  • dhogaza // February 12, 2008 at 5:13 pm

    My own view is that anthropogenic warming is significant but that natural factors still dominate. My “bet” is that there will be significant enough natural cooling over the next five to seven years to force a rethink of the current consensus.

    Why would a known factor like La Niña cause a rethinking of the current consensus? (we’re in a La Niña situation now).

    What UNKNOWN factor do you imagine will play out in the next five to seven years to force a rethinking of the consensus?

  • P. Lewis // February 12, 2008 at 5:17 pm

    I think you’re overextrapolating, personally.

    (1) Last Jan’s anomaly was exceptional for a start (about 1.5 to 2 times recent years’ Jan values). And the 1989 and 2000 Jan anomalies were 8 and 13 (in hundredths of a degree) respectively.

    (2) If the rest of this year’s months just have similar anomalies to recent years, then 2008 will come in at around the anomaly figures for 2001 to 2007 (say, around the mid 60s). Even a few months’ low anomalies are only going to take the year down to, say, the low 50s, which is fairly similar to values through the 1990s.

    (3) This year is expected to be lower on account of La Niña effects.

    We shall see.

  • B Buckner // February 12, 2008 at 8:51 pm

    “Why would a known factor like La Niña cause a rethinking of the current consensus? (we’re in a La Niña situation now).”

    I am not saying the current temperatures are cause to rethink the current consensus, but regarding La Niña:

    Well, we are just now officially starting the La Niña event, as of the end of January (five consecutive three-month running averages of SST anomalies at least 0.5°C below normal).

    The air temperatures are still going down.

    The current temperature (GISS) is below any month in the previous two La Niñas going back to 1995. The 1998-2000 La Niña was much longer and more severe than the current La Niña to date.

    The recent NOAA data indicates that land surface temperature anomalies are well below the SST anomalies.

    Some things to think about.

  • cce // February 12, 2008 at 11:13 pm

    The current La Nina event began about 6 months ago.

  • P. Lewis // February 13, 2008 at 1:57 am

    Since the La Niña index is a 3-month rolling average, an individual month’s influence on the index peaks when that month sits in the middle of the 3-month season (I’m sure someone will correct me if that supposition isn’t true — but it seems logical); hence DJF, with January in the middle, is the preferred season for comparing Jan GISS temp anomalies. Thus:

    DJF La Nina years: 1996, 1999, 2000, 2001, 2008

    Jan GISS anomaly: 37, 55, 13, 51, 31

    1999 was still largely playing out the 1998 El Nino. That still leaves 2001, of course. Off the top of my head I can think of no reason why the Jan 2001 anomaly was so high compared with the others (it was only 4 seasons into a small La Nina, but it was following on closely from the extended La Nina).

    So Jan 2008’s 0.31°C anomaly doesn’t look too out of kilter with previous years to me.

  • Heretic // February 13, 2008 at 5:23 am

    P. Lewis, you’re talking about weather.

  • Heretic // February 13, 2008 at 5:40 am

    Mosher, you’re splitting hairs, and trying to do it in the best possible way. Good for you. I’m still unconvinced of the value of all the “effort.”

    My opinion (skeptics always seem to think it’s OK to give theirs, whether or not it’s relevant) is that even throwing out the entire surface record would still leave plenty of data available to foster climate understanding, and would not change the direction where the weight of the evidence is pointing. After all, the trends still show in other data sets.

    And really, in the future, you should stay clear of the verbal assaults, whether that’s half accusing Hansen of fraud on CA (which you did) or the childish attack on dhogaza.

  • P. Lewis // February 13, 2008 at 8:53 am

    Re Heretic’s point:

    P. Lewis, you’re talking about weather.

    Yes! I know!!

    The point is a reply to B Buckner’s point:

    The current temperature (GISS) is below any month in the previous two La Niñas going back to 1995. The 1998-2000 La Niña was much longer and more severe than the current La Niña to date.

    And just to clarify further: my point here was in reply to John Tofflemire’s point

    In fact, the year-on-year decline in the NOAA series of .65 degrees is the greatest recorded over its 128-year length. Given that February may be even cooler than January, you are starting out in a bit of a hole. While the average for 2008 will probably end up above that lower blue line, my bet is that you will lose, perhaps well before the end of the “bet”.


  • P. Lewis // February 13, 2008 at 8:56 am

    Oops! The “here” in my previous message should have been linked thus.

  • P. Lewis // February 13, 2008 at 8:58 am

    Sod it! Got the links wrong now.

    Oh well, I’m sure you can work it out now. Sorry for taking up the extra bandwidth. (Grr! Preview would be a real help.)

  • John Tofflemire // February 13, 2008 at 11:42 am

    dhogaza,

    Things that are frequently “known” may in fact not be understood at all. There are what seem to be recurring patterns, labeled “La Niña” or “El Niño,” which could be the same thing, or they could be different things that only seem to be the same. This is true for phenomena as diverse as disease and cosmology. Labeling what seems to be a pattern as a pattern, and therefore putting it into a box as “explained,” may be missing something much deeper.

  • John Tofflemire // February 13, 2008 at 12:09 pm

    P Lewis,

    If what occurred last month were simply a year-on-year event, I would agree with your assessment. However, consider that (using the NOAA anomaly calculation) the total global temperature anomaly decline over the four months ending in January 2008 was -.3496 degrees. You would agree that such a decline had nothing to do with the unusual anomaly in January of 2007. This four-month temperature decline was among the largest in the 128-year NOAA time series. If you compare the 18 other instances since 1880 where the total four-month anomaly decline was greater than .3 degrees, the year-on-year change at the end point of that four-month period averaged -.2412 degrees, with a standard deviation of .1989 degrees. In other words, the January event was more than two standard deviations below this average. It looks very, very unusual by multiple measures.

    The point being: this four-month temperature decline is more than just a year-on-year comparison.
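
    A rough sketch of that comparison, for anyone who wants to check the numbers. The monthly series below is a synthetic stand-in (loading the actual NOAA anomalies is left out):

        # Four-month-decline comparison along the lines described above.
        # `anom` is a synthetic stand-in for the monthly NOAA anomaly
        # series, in date order; substitute the real data to check.

        import numpy as np

        rng = np.random.default_rng(2)
        anom = np.cumsum(rng.normal(0.0005, 0.1, 12 * 128))  # ~128 years, synthetic

        # Net change over each four-month window, indexed by its end month:
        drop = anom[4:] - anom[:-4]
        ends = np.where(drop < -0.3)[0] + 4

        # Year-on-year change at the end of each such window:
        yoy = np.array([anom[i] - anom[i - 12] for i in ends if i >= 12])
        print(f"instances: {yoy.size}, mean: {yoy.mean():.4f}, SD: {yoy.std(ddof=1):.4f}")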

  • B Buckner // February 13, 2008 at 2:49 pm

    P. Lewis,

    Except the current GISS January 08 temp is 0.12°C and not 0.31°C. Are you using the right data set?

    It should be this one:
    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt

    I get 26, 40, 17, 38, and 12 for ‘96, ‘99, ‘00, ‘01 and ‘08 respectively.
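
    For anyone pulling those January numbers themselves, a sketch, assuming the file keeps the layout it had at the time (data rows beginning with a 4-digit year followed by twelve monthly values in hundredths of a degree):

        # Extract the January column from the GISS table linked above.
        # Assumes data rows start with a 4-digit year followed by monthly
        # values in 0.01 C units; headers and missing entries are skipped.

        import urllib.request

        url = "http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt"
        jan = {}
        with urllib.request.urlopen(url) as f:
            for line in f.read().decode("ascii", "replace").splitlines():
                parts = line.split()
                if len(parts) >= 13 and parts[0].isdigit() and len(parts[0]) == 4:
                    try:
                        jan[int(parts[0])] = int(parts[1])   # January, 0.01 C
                    except ValueError:
                        pass
        for yr in (1996, 1999, 2000, 2001, 2008):
            print(yr, jan.get(yr))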

  • dhogaza // February 13, 2008 at 6:16 pm

    Things that are frequently “known” may in fact not be understood at all.

    This trivially true statement is of little interest given that “not understood at all” is not an accurate description of our knowledge of La Niña/El Niño. There’s a lot we don’t understand about the phenomena, in particular how to predict when they’ll occur, but that doesn’t mean we know nothing at all. You’re making the common mistake of assuming that since we don’t know everything, we don’t know anything.

  • steven mosher // February 13, 2008 at 6:18 pm

    Tamino, just to be clear and to get it in writing:

    For 2008, you lose the “first leg” if GISS falls below what anomaly?

    What is the threshold value for 2008?

  • steven mosher // February 13, 2008 at 6:25 pm

    Tamino, on your bet you say the coolists
    win if the anomaly < 0.277455 + 0.018173(t − 1991)
    for two years, where t = year.

    Is this right? Have another look and provide the threshold anomaly for 2008.

    On a related note: we are having a friendly betting pool on the 2008 anomaly (global, GISS).
    What is your best guess? (P.S., we know it’s the weather, but sometimes having some fun can defuse tension and animosity.)
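
    Taking the formula exactly as quoted (whether it marks the midline of the trend or the decision threshold is taken up further down the thread), its value for 2008 is easy to compute:

        # Evaluating the quoted line at t = 2008:
        t = 2008
        line = 0.277455 + 0.018173 * (t - 1991)
        print(f"{line:.4f}")   # 0.5864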

  • Hank Roberts // February 13, 2008 at 6:44 pm

    If you’re talking about ENSO / El Niño / La Niña, you’re talking about ocean temps:
    http://data.giss.nasa.gov/gistemp/graphs/Fig.A4.lrg.gif

  • steven mosher // February 13, 2008 at 6:50 pm

    Heretic, I did not half accuse Hansen of fraud. I half accused Mann. In the end, after looking at the history, I think I said he was willfully ignorant. He made a mistake, it was pointed out, and he persists. The situation was strikingly similar to the Piltdown Mann scenario, which was actual fraud. Here is what I aim at with this comparison: with evolution, a valid theory was challenged by sceptics because people refused for years to admit a mistake. I find a similar pattern with AIT (where mistakes were made) and with the hockey stick (where mistakes were made). I cannot observe the intent. I can observe the similarity. There is a contingent of
    disbelievers who will always disbelieve. There is a contingent of people in the middle who want mistakes fixed, quality maintained, knowledge advanced. Everyone seems to forget that I asked Hansen to free his code. He freed his code,
    and I thanked him for that. Hansen releases his data, releases his code, acknowledges his errors,
    and fixes them. Good for him. I may disagree with his politics on occasion, but he is now an open scientist. Jones needs to be open, Mann needs to be open, Thompson needs to be open.
    Recently Gavin noted that the IPCC simulation database will become more open. Good. Transparency removes doubt. Adherence to quality standards removes doubt. The ONLY good weapon sceptics have is doubt.

    Let me put this another way. The sceptic side is roughly split between kooks with unproven theories and nitpickers with doubts.

    Divide and conquer. Just adopt open science, publish data and methods, and half of the problem goes away.

  • Hank Roberts // February 13, 2008 at 6:58 pm

    Aside — remember y’all are pointing to _surface_ and atmospheric temperatures.

    The ocean is expected to be pushing up cold water as warm water sinks.

    I wish I had access to this article:
    The once and future battles between Thor and the Midgard Serpent: The Southern Hemisphere Westerlies and the Antarctic Circumpolar Current
    Geochimica et Cosmochimica Acta, Volume 70, Issue 18, Supplement 1, August-September 2006, Page A547
    J.L. Russell et al.

    Here’s a nice picture of reflected and emitted heat, scaled in watts/sq. meter:
    http://oceanmotion.org/images/background/113816main_solar_radiation.jpg

    Here’s a reminder why you need to know what’s going on _below_ the surface:

    http://earthobservatory.nasa.gov/Study/LovelyDarkDeep/
    —excerpt—
    TOPEX/Poseidon measures changes in sea level, which responds to heat at any depth. By combining these data with modern general circulation models, scientists are seeing a difference between actual measurements and a long-held theory that the ocean warms primarily at the surface. Now, scientists say that waters midway between the surface and the floor are heating up the fastest. They are concerned that disproportionately heating the middle depths of the ocean will dramatically alter the current patterns that allow nutrient-rich waters from the bottom to mix and intermingle with surface waters.

    “Ocean mixing is a far more complex ‘engine’ than was appreciated before,” said Wunsch, who has studied the ocean for 30 years. “It carries grave consequences not just for climate, but also for the biology and chemistry of the ocean.”

    Wunsch and colleague Walter Munk, of the Scripps Institution of Oceanography at the University of California - San Diego, used TOPEX/Poseidon data to estimate the rate at which the abyssal ocean mixes and the power required to sustain this mixing.

    —end excerpt—

  • Gareth // February 13, 2008 at 8:47 pm

    T, you might be interested in this WordPress plug-in: http://wordpress.org/extend/plugins/wp-ajax-edit-comments/
    It allows commenters to edit their own comments for a short period after posting (the default is 15 minutes, I think), plus it allows admin users to edit comments on the page, without having to go into WP admin.

    I’ve installed it at Hot Topic, and it “just works”. I’ve also installed it at On The Farm, which uses another Chris Pearson theme, Neoclassical, with no problems.

    Feel free to delete this without posting.

  • P. Lewis // February 13, 2008 at 11:49 pm

    Re B Buckner

    You are right. I inadvertently used the land data rather than the land & sea data (hit the wrong bookmark!). Although it’s preferable to use the latter data, it makes little or no practical difference to the initial points I was making in reply to you or to John Tofflemire: the magnitudes obviously change, but one or two months’ data isn’t going to change the annual anomaly much, etc.

    DJF La Nina years: 1996, 1999, 2000, 2001, 2008
    Jan GISS land anomaly: 37, 55, 13, 51, 31
    Jan GISS L&S anomaly: 26, 40, 17, 38, 12

    Actually, in light of this additional analysis, 2000 might be sort of interesting. And 2000 and 2008 are not that much different (and 1982 and 1989 were worse). But as Heretic so kindly pointed out to me, this is all about weather. Enough of weather, I think.

  • Hank Roberts // February 14, 2008 at 2:38 pm

    > the Piltdown Mann scenario,

    Um, Tamino, spitballs in the back row?

  • dhogaza // February 14, 2008 at 4:56 pm

    He made a mistake, it was pointed out, and he persists. The situation was strikingly similar to the Piltdown Mann scenario, which was actual fraud.

    That would be actionable as libel in more than one English-speaking country, though you apparently live in the US, which allows you a great amount of freedom to smear the reputation of other people through outright lies.

    There’s nothing in the least similar between the Piltdown Man fraud and Mann’s work, the latter having been supported by no less than a blue-ribbon commission of the National Academy of Sciences. Even your “willful ignorance” charge is a slimy, inaccurate characterization of the work.

    The charge you make is shameful, and all too typical of the crap we see from CA regulars.

  • dhogaza // February 14, 2008 at 5:03 pm

    The ONLY good weapon sceptics have is doubt.

    Then why continue on with the ignoble, crappy weapons, which include open accusations of fraud or “half-fraud” (whatever the hell that means), attempts to get Oreskes fired, and the harassment of Thompson?

    As far as openness goes, the natural path towards free access might well move faster if denialists weren’t eagerly looking for any excuse to engage in the behavior outlined above (which is by no means complete). “I want your data and reserve the right to try to get you fired, to accuse you of scientific fraud, or to otherwise attempt to smear your reputation” isn’t helping.

    Everyone seems to forget that I asked Hansen to free his code. He freed his code,
    and I thanked him for that.

    I’d say the evidence suggests that you greatly exaggerate your relevance and importance.

  • tamino // February 14, 2008 at 6:22 pm

    The discussion here has left the topic — so it should be taken to the open thread.

  • Heretic // February 15, 2008 at 3:29 am

    OK, Steven, if Mann was a half accusation, then Hansen was a quarter accusation. There is a variety of degrees of that on CA, one reason I don’t care much for the site.

  • John Tofflemire // February 16, 2008 at 11:30 pm

    Tamino,

    Shouldn’t your equation for the lower dashed red line be:

    0.277455 + 0.018173(t − 1991) − 0.1918

    rather than:

    0.277455 + 0.018173(t − 1991)

    since the latter represents the mid-point of your trend line? Otherwise, I think you are severely shortchanging yourself here, since you would effectively lose the “bet” should the temperature drop below the mid-point of your trend line for two years.
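
    Taking those numbers at face value (and assuming the 0.1918 offset is the two-standard-deviation half-width of the envelope, as the comment above suggests), a worked example for 2008:

        # Midline vs. proposed lower bound of the trend envelope at t = 2008.
        # The 0.1918 half-width is taken from the comment above.
        t = 2008
        midline = 0.277455 + 0.018173 * (t - 1991)   # about 0.5864
        lower = midline - 0.1918                     # about 0.3946
        print(f"midline {midline:.4f}, lower bound {lower:.4f}")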

  • lucia // February 19, 2008 at 1:34 pm

    Though this won’t stun Steve Mosher, you, Tamino, will be stunned to discover that I predict warming. I don’t fit lines, though; I fit functions.

    My early projection uses “Lumpy I”. (Lumpy II uses additional information about forcing from volcanoes, which Gavin kindly helped me find. The consequence is that Lumpy II catches the ‘dips’ due to volcano eruptions better.)

  • Terry Ward // February 22, 2008 at 5:24 pm

    I’ll take that bet. 2008, 2009 and 2011 and beyond = colder than anything in the last 10 years. 2010 will be warmer than 2008/09 but not too much. The cycle is beginning to bite.
