Open Mind

Red White & Blue (noise, that is)

January 12, 2007 · 36 Comments

Warning: this post is about mathematics. I’ve tried to make it as clear and comprehensible as possible for the lay reader — but time will tell whether or not I’ve succeeded. My wife looked it over and basically said, “What?!?”

In a recent post, I listed trend rates for temperature from stations in the U.K. that are available from the European Climate Assessment and Dataset Project (ECA). One reader objected that the trend listed for Armagh is not statistically significant, which means that it could have been simply due to random fluctuations, not any genuine trend.

This is a recurring issue in data analysis. Global temperature in 2006 is definitely hotter than it was in 1975, but is that a real change in the climate system (global warming), or is it just part of the natural variability in the climate system (essentially a random process)? You flip a coin five times, and all five times it comes up heads; is that because the coin is “fixed” rather than fair, or is it just one of those coincidences that sometimes happen?


We mathematicians have developed statistical tests to tell us whether or not what we observe might reasonably be from “random fluctuations.” Usually (but not always!) this involves computing the probability of getting that result, assuming that there’s no trend. We then choose a critical value for that probability, and if the actual probability is below the critical value — if the observed result is just too unlikely to be believed — then we conclude that the result is probably not due to random fluctuations; we have evidence of a genuine trend. The usual critical value is a probability of 5% (one chance out of twenty); if the likelihood of getting the observed result is less than one in twenty, the result is considered to be too unlikely to be due to chance, and the result is “statistically significant.”

Tests done with this (the most common) critical value (5% probability) are often referred to as “95% confidence,” or as “5% false-alarm probability.” That’s because even when there’s no genuine trend, there’s still a 5% chance that the test will signal one: a false alarm.

Consider the coin flip experiment; five flips in a row come up heads. So we observe the result: five consecutive flips come up the same — maybe the coin is fixed. We begin by assuming that the coin is fair, so on each flip there’s a 50/50 chance of heads or tails. Then the chance of five heads in a row is (1/2)x(1/2)x(1/2)x(1/2)x(1/2) = 1/32. This is less than one chance in 20, so the result is significant, right? Not so fast! It’s also possible for a fair coin to come up five tails in a row; that probability is also 1/32. So, the chance of five flips in a row being the same — the result that made us suspicious in the first place — is (1/32)+(1/32) = 1/16. That probability is greater than 1/20, so the result could be due to random fluctuation. The chance, for a fair coin, is 1/16 = 6.25%. We say that such a test does not pass muster at 95% confidence (5% false-alarm probability).
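
For readers who like to check the arithmetic, here is a tiny Python sketch of that calculation:

    # Probability that 5 fair coin flips all come up the same (heads OR tails)
    p_all_heads = 0.5 ** 5          # 1/32
    p_all_same = 2 * p_all_heads    # 1/16 = 0.0625
    print(p_all_same > 0.05)        # True: not below the 5% critical value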

It’s much the same with trend analysis. We compute, from the data, what the indicated trend is (usually by fitting a straight line, but there are many, many ways). We also compute the chance of seeing that much trend or more, if the data are just random. The random part of data is called noise. If the probability that we could get the observed trend just by chance (when the data are all noise) is less than 5%, we say the trend is statistically significant at the 95% confidence level.
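
Here is a minimal Python sketch of such a test, using invented data and assuming the noise is white; scipy’s linregress reports a p-value for the fitted slope under exactly that assumption:

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    years = np.arange(1975, 2007)                     # hypothetical annual series
    temps = 0.02 * (years - 1975) + rng.normal(0, 0.1, years.size)  # trend plus noise

    fit = linregress(years, temps)
    print(f"trend = {fit.slope:.4f} deg/yr, p-value = {fit.pvalue:.4f}")
    # p-value < 0.05 means "significant at 95% confidence,"
    # but only if the noise really is white (see below)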

Now here comes the tricky part: noise can come in many colors.

White Noise

The most common assumption about the noise itself, is that the random part of each individual observation is independent of the random part of each other individual observation. The “noise” in the global temperature for 2006 doesn’t depend on the noise in the global temperature for 2005, or 1975, or any other year for that matter; they’re all independent of each other. Noise like this is called white noise.

The reason it’s called “white” is what we get when we do a Fourier analysis of the noise. Fourier analysis is one of the chief ways to look for periodic behavior in data, to see whether there’s a pattern that repeats — every year, every 5 years, 12 times every year, or whatever. A Fourier analysis is a “cycle analysis,” a test to see whether or not there is repetitive behavior.

Essentially, it’s like a trend analysis but it looks for repetitive behavior rather than a trend. The data might repeat with any given frequency (the number of cycles per year/day/whatever), so we simply test lots and lots of frequencies to see whether any of them give a “statistically significant” result. We then usually plot the numerical value of our statistical test as a function of the frequency we’re testing. The statistic we use to test any given frequency is often called the power, and the plot of power as a function of frequency is called a power spectrum.

I generated some random data to illustrate this. The data are random and each point is independent of the others, so this is “white noise.” Here’s what the data look like:

[Figure: white.JPG, the simulated white-noise data]

Looking at the data, it doesn’t seem to show any pattern; there’s no visible trend, and no visible cyclic behavior either. I also computed the power spectrum:

[Figure: white4.JPG, the power spectrum of the white noise]

There are plenty of ups and downs in the power spectrum. But none of the “ups” is big enough to pass the test for statistical significance. While there are lots of differences, the overall pattern is pretty evenly distributed over the full spectrum of frequencies — it’s not concentrated at low frequencies or high frequencies, it’s all over the place. Making an analogy with the optical spectrum, we can call low frequencies “red” and high frequencies “blue” (it’s just an analogy, there’s no physical significance to it). Using this analogy, a mixture of all colors — all frequencies — corresponds to white light. Hence we refer to this kind of noise as “white noise.”
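
The figures themselves aren’t reproduced here, but a Python sketch along these lines (not the code actually used for the plots) generates white noise and estimates its power spectrum:

    import numpy as np
    from scipy.signal import periodogram
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    white = rng.standard_normal(512)      # independent values: white noise

    freq, power = periodogram(white)      # power as a function of frequency
    plt.plot(freq, power)
    plt.xlabel("frequency")
    plt.ylabel("power")
    plt.title("White noise: power spread over all frequencies")
    plt.show()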

Red Noise

Now suppose that the data are truly random, but nearby values are not independent of each other. Suppose that consecutive values have a strong tendency to be close to each other, so that if the last random value was high, the next random value is likely to be high also (but not certain — it’s random!). In fact, the bigger a given value is, the bigger the next value is likely to be. Nearby values are correlated with each other.

We can generate such “correlated” random variables using what’s called an “autoregressive process.” I’ve done just that to create this data:

[Figure: red.JPG, the simulated red (autoregressive) noise data]

Just looking at the data, it looks like there are patterns — significant ups and downs. But in fact that’s just our intuition misguiding us, perceiving a pattern where none really exists; these data are truly random noise. But the correlation of nearby values makes it look like there’s a pattern. If we do a Fourier analysis of this data, we get this:

[Figure: red4.JPG, the power spectrum of the red noise]

You can see that the tallest spikes in this Fourier spectrum are mostly concentrated at low frequencies. Using the “optical spectrum” analogy, low frequencies correspond to red light, so such a spectrum is called a red noise spectrum. The noise which yields it, is called red noise.

It turns out that there are a lot of random processes in nature, and quite a few of them behave like red noise. In fact the noise in the climate system (especially in temperature) behaves like red noise.
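
A minimal sketch of the kind of autoregressive (AR(1)) process described above; the coefficient 0.8 is just an illustrative choice, and a negative coefficient gives the “blue” noise discussed further down:

    import numpy as np

    def ar1_noise(n, phi, rng):
        """AR(1) noise: each value is phi times the previous value
        plus a fresh independent random shock."""
        x = np.zeros(n)
        eps = rng.standard_normal(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + eps[t]
        return x

    rng = np.random.default_rng(1)
    red = ar1_noise(512, phi=0.8, rng=rng)   # positive phi: nearby values correlated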

When we do a trend analysis, we have to compute the probability of getting a given size trend based on the assumption that the data are noise. The tricky part is that if the data are white noise, there’s very little chance of getting a large trend, but if the data are red noise — truly just random — there’s a much higher chance of getting a larger trend.

So, in order to compute the probability that the trend we observe is due to random chance, we have to know whether the noise is white or red. If we assume it’s white noise but it’s actually red, we can get a big trend that we think is “too unlikely to be believed,” when in fact it’s not that unlikely at all.
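
A quick Monte Carlo sketch of that point (the numbers are purely illustrative): fit a straight line to many series of pure noise, and compare how large the apparent “trends” typically are under white versus red noise.

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials, phi = 360, 1000, 0.8          # e.g. 30 years of monthly data
    t = np.arange(n)

    def ar1(n, phi):
        x = np.zeros(n)
        e = rng.standard_normal(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + e[i]
        return x

    white_slopes = [np.polyfit(t, rng.standard_normal(n), 1)[0] for _ in range(trials)]
    red_slopes = [np.polyfit(t, ar1(n, phi), 1)[0] for _ in range(trials)]

    # Red noise produces much larger apparent trends purely by chance.
    print("95th percentile |trend|, white:", np.percentile(np.abs(white_slopes), 95))
    print("95th percentile |trend|, red:  ", np.percentile(np.abs(red_slopes), 95))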

That was the essence of the reader’s objection to my listed trend for temperature data for Armagh. He suspected that I had not compensated for the fact that the random part of temperature is not a white noise process, it’s a red noise process. He had even run the numbers himself, and confirmed his suspicions.

It turns out that I do regularly compensate for the red-noise character of temperature time series. The reason we got different results is that the reader had not removed the seasonal pattern from the data before trend analysis — a necessary step. So he severely underestimated the significance of the trend, and concluded that I had severely overestimated it. But in fact I had not.

Blue Noise

It is possible for nearby random values to show negative correlation, i.e., if a given value is high, the next value is likely to be low, and vice versa. In fact, the higher a given value is, the lower the next value is likely to be. I’ve generated noise like this using an autoregressive process:

[Figure: blue.JPG, the simulated blue noise data]

And of course, I compute the Fourier spectrum:

[Figure: blue4.JPG, the power spectrum of the blue noise]

We see that in this case, the tallest peaks are concentrated at high frequencies. Since high-frequency light is blue, noise like this is called blue noise. White noise is the most common kind, red noise is not at all uncommon, but blue noise is extremely uncommon.

I’ve also plotted all three Fourier spectra together; the red noise spectrum in red, the blue noise spectrum in blue, and the white noise spectrum in black (white doesn’t show up too well on these graphs):

[Figure: rwb.JPG, all three power spectra overlaid]

The values are rather crowded together and hard to see, so I’ll plot the same thing on a logarithmic scale to make it easier:

[Figure: rwblog.JPG, the same spectra on a logarithmic scale]
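
A sketch of how such a comparison could be drawn (again, not the code used for the figures), with all three spectra on a logarithmic axis:

    import numpy as np
    from scipy.signal import periodogram
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)

    def ar1(n, phi):
        x = np.zeros(n)
        e = rng.standard_normal(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + e[i]
        return x

    n = 1024
    series = {"white": rng.standard_normal(n), "red": ar1(n, 0.8), "blue": ar1(n, -0.8)}
    colors = {"white": "black", "red": "red", "blue": "blue"}

    for name, x in series.items():
        f, p = periodogram(x)
        plt.semilogy(f[1:], p[1:], color=colors[name], label=name)  # skip zero frequency
    plt.xlabel("frequency")
    plt.ylabel("power")
    plt.legend()
    plt.show()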

Plainly, the response of a Fourier analysis to noise depends strongly on what color the noise is. Likewise, the response of a trend analysis to noise depends strongly on the noise color. This is yet another example of how complicated it can be to get reliable information from data analysis!

Scientists in general are not statisticians. Sometimes they go awry when doing statistical tests; one of the most common mistakes is to assume white noise when in fact the noise is red. Certain scientific disciplines tend to be not so good at statistical analysis! But over the last couple of years, I’ve investigated closely much of the statistical analyses in climate science. I’ve found that climate scientists in general do an outstanding job of statistical analysis, and often collaborate with statisticians to apply cutting-edge techniques. From this mathematician’s point of view, the application of statistical analysis in climate science is 1st-rate.

Categories: Global Warming

36 responses so far ↓

  • Steve Bloom // January 12, 2007 at 8:58 am | Reply

    Uh oh, that’s a line in the sand if I ever saw one. Your recent correspondent, to say nothing of the rest of the crowd over at you-know-where, isn’t going to take a challenge like this lying down.

    Lacking as I am in a stats background, I very much appreciated the clear explanation. Thanks!

  • Dano // January 12, 2007 at 12:47 pm | Reply

    I’ll make sure Dano doesn’t bring too much red noise to your comments, sir, but I see, already in this short time, much promise in what you are doing.

    Perhaps a good metric to judge your impact on the FUD purveyors is to see what name they call you and how often you are ridiculed.

    Best,

    D

  • Willis Eschenbach // January 14, 2007 at 4:47 am | Reply

    Moderator, thank you for the very clear exposition of the different kinds of noise, and their effect on trends. As I am the “reader” referred to in the first paragraph, I am responding to your post. What is your name, by the way? I can’t call you “Moderator” indefinitely, and I don’t find your name on the blog (which may only be my myopia).

    You say (above) that:

    It turns out that I do regularly compensate for the red-noise character of temperature time series. The reason we got different results is that the reader had not removed the seasonal pattern from the data before trend analysis — a necessary step. So he severely underestimated the significance of the trend, and concluded that I had severely overestimated it. But in fact I had not.

    I was simply trying to follow your procedure, where you said that you had used the “raw data” to determine the trend. Yes, you can first remove the seasonal trend, but this introduces other problems.

    As you point out, autocorrelation affects the likelihood of trends. There are a variety of ways to deal with this. One of them I discussed in the post that inspired this thread, which is to use the lag-1 autocorrelation to adjust the effective N (the number of observations) in the series. However, this method is not appropriate after we remove the seasonal data. This is because the removal of the seasonal data reduces the short-range (lag-1) autocorrelation without reducing the long-range correlation. Thus, methods that depend on the short-range autocorrelation are no longer valid.
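
    (A minimal Python sketch of the kind of lag-1 correction described here; this is not the actual code used, and the input array is hypothetical.)

        import numpy as np
        from scipy.stats import linregress, t as t_dist

        def trend_with_lag1_correction(y):
            """OLS trend whose significance uses an effective sample size
            n_eff = n * (1 - r1) / (1 + r1), a common lag-1 autocorrelation adjustment."""
            n = len(y)
            fit = linregress(np.arange(n), y)
            r1 = np.corrcoef(y[:-1], y[1:])[0, 1]        # lag-1 autocorrelation
            n_eff = n * (1 - r1) / (1 + r1)
            se_adj = fit.stderr * np.sqrt(n / n_eff)     # inflate the standard error
            t_stat = fit.slope / se_adj
            p = 2 * t_dist.sf(abs(t_stat), df=max(n_eff - 2, 1))
            return fit.slope, p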

    Like most climate series, the long-range correlation of the Armagh record is very high. The Hurst coefficient for the Armagh data is 0.99 (using the Whittle method, see Whittle, P. (1963); On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral matrix. Biometrika 40, 129–134.) Unlike the lag-1 autocorrelation, this is not changed by removing the seasonal trend:

                               lag-1 autocorr    Hurst
    Raw Armagh Data                 0.91          0.99
    Armagh w/o seasonal             0.73          0.99

    Because of this, we need to use other methods for determining if a seasonally-reduced trend is significant. The real question is not whether the trend since 1975 is different from zero. It is whether the trend is unusual for this dataset. The most obvious method, of course, is to see if there are other places in the dataset where the trend is exceeded. The trend from the first of January 1975 to the end of the record (end of April 2001) is exceeded in 5.6% of the record, indicating that the trend is not unusual for the dataset.

    A better way to do this is to look at the distribution of the Mann-Kendall “tau”. Tau is a non-parametric measure of the existence of a trend. Tau for the 316-month period of interest (1975-) is 0.137, which is significant (p=0.0002). However, it is affected by autocorrelation as well. To see if this rise is unusual for the dataset, we can calculate the true variance of tau for this dataset.

    Using just the 1865-1975 portion of the dataset (to avoid influence from the recent warming) we find that the average tau is 0.005, with a standard deviation of 0.09. This means that the 95% confidence interval for tau is from -0.19 to +0.20 (mean ±1.86 standard deviations).

    Since the recent tau (0.137) is well within the range of trends in the earlier part of the dataset, we can be quite sure that the recent result is not anomalous for this dataset. It also shows that the trend is not significantly different from zero.
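
    (A rough Python sketch of the kind of calculation described in the last few paragraphs; the stand-in data below are synthetic, purely so the sketch runs, and are not the Armagh record.)

        import numpy as np
        from scipy.stats import kendalltau

        def window_taus(series, window):
            """Kendall's tau versus time for every full-length window in `series`."""
            t = np.arange(window)
            return np.array([kendalltau(t, series[i:i + window])[0]
                             for i in range(len(series) - window + 1)])

        rng = np.random.default_rng(4)
        series = rng.standard_normal(1632)     # synthetic stand-in for monthly anomalies

        recent_tau = kendalltau(np.arange(316), series[-316:])[0]
        early_taus = window_taus(series[:-316], 316)
        print(recent_tau, early_taus.mean(), early_taus.std())  # compare to the empirical spread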

    This misrepresentation of temperature trends as “unusual” is a much more general problem than just with Armagh. Before we go running out to say “CO2 is the cause of the recent unusual warming”, we need to look at the warming very carefully to see if it is in fact unusual. The 25-year post-1975 temperature rise in Armagh is not unusual.

    Nor is the post-1975 rise in global temperatures (HadCRUT3) unusual. Tau post-1975 is 0.63, which normally would be very significant. But the average pre-1975 tau for periods of the same length is 0.15, 95% CI -0.36 to +0.66, so the recent rise is not unusual. It does, however, show that the recent rise is statistically different from zero. So we can conclude (if the dataset accurately represents temperatures) that the world is warming, but that the recent warming is not statistically significant.

    There is a good discussion of a closely related question at On the Role of Global Warming on the Statistics of Record-Breaking Temperatures. The abstract says:

    We theoretically study long-term trends in the statistics of record-breaking daily temperatures and validate these predictions using Monte Carlo simulations and data from the city of Philadelphia, for which 126 years of daily temperature data is available. Using extreme statistics, we derive the number and the magnitude of record temperature events, based on the observed Gaussian daily temperatures distribution in Philadelphia, as a function of the number of elapsed years from the start of the data. We further consider the case of global warming, where the mean temperature systematically increases with time. We argue that the current warming rate is insufficient to measurably influence the frequency of record temperature events over the time range of the observations, a conclusion that is supported by numerical simulations and the Philadelphia temperature data.

    This question of the short length of climate records is pervasive in the field, and is not sufficiently appreciated. As a result, we have people claiming unusual temperatures and unusual extreme events,

    My best to everyone,

    w.

  • tamino // January 14, 2007 at 3:36 pm | Reply

    Willis,

    I’m the moderator. I prefer to remain anonymous; you can call me “tamino.”

    I’m glad you consider the explanation clear. I’m not so sure! At least one respondent who claims to be non-statistical (the target audience) agrees, but I wonder how many who have not replied, are scratching their heads saying “Huh???” It so often happens that an explanation which seems crystal-clear to me (and to bright students) merely serves further to puzzle those who are having difficulty. It’s all too easy to see utter clarity in an explanation of something you already understand. And explaining subtle mathematics to the lay public, is very risky business!

    My motive for this post, contrary to another comment, is not to “draw a line in the sand,” but to alert people that time series analysis is subtle and complex, and we need to be circumspect before drawing too firm conclusions. In this regard, we’re working toward the same goal. Also, in my opinion, it’s just plain interesting.

    As is so often the case in disputes, perhaps we’re not so much in disagreement as it seems at first sight. For example, we seem to agree that the world actually is getting warmer. Armagh too; however, although I haven’t run the numbers yet (I will), I have a lot of experience with time series analysis, and on the basis of visual inspection alone I’d bet that you’re correct, the recent warming in Armagh is not unprecedented. And I certainly agree that the brevity of instrumental records is a big problem that is given insufficient attention in most discussion.

    But — even before running the numbers — I’ll venture a guess that other data sets do show “unprecedented” results. This may be the case for Central England Temperature (although there are open questions about the reliability of the earlier part of the record compared to more recent data). I’ll bet money that it’s the case for temperature over the last 1000 years according to paleoclimate reconstructions (although that depends on believing the reconstructions, you may be a dyed-in-the-wool hockey-stick skeptic). I intend to investigate whether these (and other) records show “unprecedented” behavior that can be established rigorously; I’ll keep ya posted.

    We must bear in mind that global average surface temperature is a dynamical variable subject to known, deterministic physical laws. And although the weather system is chaotic, preventing long-term prediction of its details, the system is constrained by conservation laws which limit its variation; heat is energy, and it has to come from somewhere. Also, we can be quite confident that variations in global average surface temperature are not merely following a random process, however complex we formulate that process. There’s far too much correlation between observed temperature and known physical processes (like volcanic eruptions and el Nino) for that to be believed. The more we study the climate system, the more we are able to explain the ups and downs of temperature — which takes it outside the realm of an arbitrary mathematical process and into the realm of comprehensible physics.

    And there’s the rub. Yes, global average temperature is getting higher, it’s certainly not due to volcanic eruptions (or the lack thereof) or el Nino, and from the evidence I’ve seen solar variation is not a viable explanation, nor are some of the more unusual theories like galactic cosmic rays. A lot of the past excursions in global temperature can be linked to known physical processes. The only known physical process to which we can link the last 30 years of global warming, is increased greenhouse gas concentrations.

    Clearly you’re savvy about the mathematics. But some of your statements indicate that you’re not so savvy about the rest of the science. For example, your earlier statement that CO2 started rising in 1930; a glance at the Law Dome ice core data will dispel this idea. I suspect you’re just echoing the claim on the co2science website, without checking the facts; I hope you will apply the same rigor to claims such as this, as you do to the numerical analysis. Or your most recent comment (in the other thread) which says, “However, as you point out, there are less volcanoes and likely more solar forcing since 1930, both of which should bring temperatures up.” Actually, I pointed out that volcanic activity was down, and solar probably up, during the period of warming in the early 20th century (about 1915-1945); since the 1950s volcanic activity has returned to “normal,” and solar forcing seems to have levelled off, even begun to decline, in the last several decades (although this is notoriously hard to pin down, even in the satellite era; the data come from multiple satellites, and there’s dispute over how the data from different instruments should be “stitched together”).

    So I’ll repeat what I’ve said before, that for me the “clincher” for global warming is basic physics. Greenhouse gases really do absorb infrared radiation. They really do trap heat near earth’s surface. For greenhouse gases not to cause global warming is not consistent with our present understanding of the laws of physics. One might as well claim that earth will not get hotter in spite of an increase in solar irradiance!

    If the instrumental record of global average temperature were the only evidence, then it’s strong enough that I’d be concerned, and possibly even advocate action (better safe than sorry), but I’d be firmly in the camp saying “quite likely — but don’t be too sure.” However, the instrumental temperature record is only one of a vast number of evidences. And, they all accord with what basic physics tells us must happen. Hence when considering the recent warming and the likely future progress of global temperature, the “burden of proof” is on those who claim that greenhouse gases are not the cause.

    I hope you’ll stick around — every global warming advocacy site (I think this qualifies) needs a good skeptic! And your level of civility in discussion is admirable (which is one of the things I most want for my blog — thank you). Based on your comments so far, I’d place you firmly in the “skeptic” category rather than “denialist.” Be advised, I have a lot of stuff to do (I’ve neglected my work the past several days), and I’m likely to be far from completely rigorous — maybe even get sloppy — in future posts!

    All the best,
    t

  • Willis Eschenbach // January 15, 2007 at 11:24 am | Reply

    Tamino, I didn’t want to disturb your anonymity, just to have a name to call you. I sincerely hope you will find time in your work to continue this conversation.

    A couple of comments on your excellent and most interesting post.

    First, I agree that increasing CO2 will warm the world. The question is, how much?

    It is generally accepted that without the greenhouse effect, the world would be about 33°C cooler. It is also generally accepted that the total downwelling “greenhouse” radiation is about 325 W/m2.

    This indicates that the warming from greenhouse radiation is on the order of a tenth of a degree C per W/m2. This, as you know, is called the “climate sensitivity”. However, since parasitic losses increase with the greenhouse induced temperature rise (∆T), it is likely that the current sensitivity is less than that. But let’s use that value.

    It is also generally accepted that for a doubling of CO2, the forcing will increase by about 3.7 W/m2. Given a sensitivity of 0.1°C per W/m2, this indicates that a doubling of CO2 will increase the average temperature of the planet by about 0.4°C.

    This projected doubling of CO2 may or may not happen in the next century. If so, we may get as much as a 0.4°C warming. This, I must admit, doesn’t worry me. It warmed more than that last century, with no adverse effects.

    Second, you seem to think that because the solar forcing correlates well with temperature until 1980 but diverges after that, this means that CO2 must be the cause of the post 1980 temperature rise. But to believe that, we’d have to assume that CO2 had no effect until 1980 (or it would have disturbed the solar correlation with the temperature), and then suddenly kicked into gear. Seems doubtful.

    In fact, our understanding of climate forcing is very poor. Of 12 known forcings, the IPCC TAR rated our understanding of 9 of them as “Low” or “Very Low” … and that doesn’t include forcings that are not included in the TAR (biogenic aerosols, biogenic methane, plankton created aerosols, solar magnetism, cosmic rays, etc.) or unknown forcings. The fact that we cannot explain the post 1980 divergence proves nothing about CO2. We do not understand what causes the shifts in the PDO, or the timing of ENSO events, or why there were a lot of hurricanes in 2005 and relatively few in 2006, or why the AMO cycle is about forty years long … why do you think we should understand the recent warming?

    Finally, you seem to think the burden of proof is on the skeptics, which is science set on its ear. We don’t understand the climate. You have a theory about it. Your theory is not yet advanced enough to make falsifiable predictions, so it is not yet possible for scientists to say anything definitive about the theory. In the absence of an ability to make falsifiable prediction, to support your theory you need evidence. Not absence of competing theories. Not computer models whose results vary by 800%. Evidence, that is to say, real world data. You say that there are a “vast number of evidences” for the theory that CO2 will cause large, significant warming. What is that evidence? For a start, how does your theory explain that 325 w/m2 of forcing only resulted in a 33° temperature rise?

    My best wishes to you,

    w.

  • guthrie // January 15, 2007 at 1:53 pm | Reply

    Off the top of my head I understood the climate sensitivity with regards to a doubling of CO2 was about 3 degrees C.
    There’s a realclimate post which picks apart what gases (and vapour) contribute what to the “greenhouse effect” and its numbers are quite interesting. I’ll dig it out later.

  • Jean S // January 15, 2007 at 2:22 pm | Reply

    “But over the last couple of years, I’ve investigated closely much of the statistical analyses in climate science. I’ve found that climate scientists in general do an outstanding job of statistical analysis, and often collaborate with statisticians to apply cutting-edge techniques. From this mathematician’s point of view, the application of statistical analysis in climate science is 1st-rate.”

    Really? Care to cite a few papers that fall under that category? Over the last few years I have also investigated much of the statistical analyses in climate science. I’ve found that climate scientists in general do a poor job of statistical analysis, and almost never collaborate with statisticians. Moreover, they tend to apply ancient, inappropriate techniques. From this statistician’s point of view, the application of statistical analysis in climate science is primitive.

  • Gil Pearson // January 15, 2007 at 3:26 pm | Reply

    Willis,

    With regard to the sensitivity for a doubling of CO2 (ignoring all feedbacks): you have it calculated at 0.4 degrees C. I have also seen it stated that it is about 1 degree C. Do you know what the difference is in the method of calculating this, and how would you critique the two approaches?

  • Hans Erren // January 15, 2007 at 3:42 pm | Reply

    The IPCC range is 1 to 3 K/2xCO2
    The no-feedback value is 1.2.

    see also
    http://www.sciencebits.com/OnClimateSensitivity

  • sonicfrog // January 15, 2007 at 4:50 pm | Reply

    Thank you very much for your explaination of statistical noise.

    Found your site via Willis and Climate Audit. I am one of those lay-persons people talk about, having little… er, no formal training in stats beyond one or two college coarses 14 years ago. I’m trying to play catch-up, and the climate science debate seems the perfect vehicle to accomplish this.

    If you don’t mind, I’d like to include you in my blogoll.

    [Response: feel free]

  • sonicfrog // January 15, 2007 at 4:52 pm | Reply

    Oooop, forgot to spell check :-)

  • tamino // January 15, 2007 at 5:15 pm | Reply

    Willis,

    You have confirmed my suspicion. Honestly, no offense meant — but you’re not too savvy about the rest of the science.

    It is generally accepted that without the greenhouse effect, the world would be about 33°C cooler. It is also generally accepted that the total downwelling “greenhouse” radiation is about 325 W/m2.

    No, it’s generally accepted that that’s the total downwelling longwave (LW) radiation. The atmosphere, whether it has greenhouse gases (GHG) or not, is warm, and hence will emit longwave radiation.

    The question is, what’s the difference between having greenhouse gases and not having them? The difference — the part due to GHG — is around 122 W/m^2.

    This indicates that the warming from greenhouse radiation is on the order of a tenth of a degree C per W/m2. This, as you know, is called the “climate sensitivity”.

    No, it indicates (using the correct figures) an increase of about 0.27 deg.C per W/m^2. And this calculation is only 1st-order; the response to incoming radiation is nonlinear, and accounting for nonlinearity it turns out, at present conditions, to be about 0.33 deg.C per W/m^2. By the way, the term “climate sensitivity” has two uses: one is as you say, the temperature change due to an increase of 1 W/m^2 in forcing; the other is the temperature change due to a doubling of CO2. Confusing? Yes. Whatcha gonna do?

    It is also generally accepted that for a doubling of CO2, the forcing will increase by about 3.7 W/m2. Given a sensitivity of 0.1°C per W/m2, this indicates that a doubling of CO2 will increase the average temperature of the planet by about 0.4°C.

    No, given a sensitivity of 0.33 deg.C per W/m^2 it indicates an increase of about 1.2 deg.C (which can also be computed by direct application of the Stefan-Boltzmann equation). This was established way back in 1896 by Nobel-prize-winner Svante Arrhenius.
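
    (A back-of-the-envelope Python sketch of the no-feedback Stefan-Boltzmann calculation, using the standard effective emission temperature of about 255 K; it reproduces roughly the figures quoted here, though the exact value depends on how the nonlinearity and surface response are handled.)

        SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
        T_EFF = 255.0        # Earth's effective emission temperature, K

        # F = sigma * T^4  implies  dT/dF = 1 / (4 * sigma * T^3)
        sensitivity = 1.0 / (4 * SIGMA * T_EFF ** 3)   # about 0.27 K per W/m^2
        print(sensitivity, sensitivity * 3.7)          # roughly 1 K per doubling, no feedbacks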

    And this is the increase you get if there are no feedbacks in the system. But there are feedbacks, including at the very least water-vapor feedback (warmer means more water vapor in the air, and that’s a potent GHG too) and ice-albedo feedback (warmer means less ice/snow, and that means less reflectivity of incoming solar).

    Our best method of computing the total impact of the feedbacks is with computer models. These indicate that the overall (including feedbacks) sensitivity to doubling CO2 is between 1.5 and 4.5 deg.C, with the “best estimate” being 3 deg.C (note: there is some evidence that it could be much higher, as much as 10 deg.C, but those numbers are at the “fringe”). These are the figures given by numerous researchers, and quoted by the IPCC.

    Second, you seem to think that because the solar forcing correlates well with temperature until 1980 but diverges after that, this means that CO2 must be the cause of the post 1980 temperature rise. But to believe that, we’d have to assume that CO2 had no effect until 1980 (or it would have disturbed the solar correlation with the temperature), and then suddenly kicked into gear. Seems doubtful.

    Again, no. There are many forcings at work, and they are all active all the time. But since about 1975, GHG forcing has been far bigger than the others. This comment represents the kind of oversimplified, even naive, view of climate that is used more often by denialists than by skeptics.

    In fact, our understanding of climate forcing is very poor. Of 12 known forcings, the IPCC TAR rated our understanding of 9 of them as “Low” or “Very Low” … and that doesn’t include forcings that are not included in the TAR (biogenic aerosols, biogenic methane, plankton created aerosols, solar magnetism, cosmic rays, etc.) or unknown forcings. The fact that we cannot explain the post 1980 divergence proves nothing about CO2.

    The forcings which are poorly understood, are nonetheless known to be small compared to other forcings. And some of the things you list are either redundant or ridiculous. Aren’t “plankton created aerosols” also “biogenic aerosols”? Do they even exist? What’s your source for that? As for cosmic rays controlling climate — that’s a crackpot theory (and a rather desperate one, if you ask me) if ever I saw one.

    We do not understand what causes the shifts in the PDO, or the timing of ENSO events, or why there were a lot of hurricanes in 2005 and relatively few in 2006, or why the AMO cycle is about forty years long … why do you think we should understand the recent warming?

    This is just a repetition of the we-don’t-know-everything-so-we-don’t-know-anything argument.

    And at least some of your facts are wrong. There were plenty of hurricanes (by other names) in 2006, but not in the Atlantic; if you lived in Japan, China, or Australia, you’d never say such a thing. Considerable research indicates that while the number of tropical cyclones (a more world-wide term) varies a lot in any given ocean basin, the worldwide total is surprisingly consistent. 2006 was no exception.

    Finally, you seem to think the burden of proof is on the skeptics, which is science set on its ear.

    The idea that greenhouse gases must warm the planet is basic physics. The amount of warming (ignoring feedbacks) is also basic physics. That it can be otherwise, is contradictory to basic physics. So the “burden of proof” of the contrary belief is indeed on the skeptics.

  • Gil Pearson // January 15, 2007 at 5:27 pm | Reply

    Tamino,
    Sorry for hi-jacking your post for this. I can take it off line if you wish.

    [No problem. In fact, I'm rather hoping to create a place where people can discuss issues back-and-forth, and where eventually I contribute very little to the discussion, leaving that to readers. My only "rules" for discussion are: 1. No spam; 2. Stay on topic (at least tangentially, i.e., let's not discuss evolution); 3. Be polite.]

    Hans,
    Thanks for the link, very helpful.

    A further question. I am intrigued by the recent paper on ocean heat content (Lyman 2006). As I understand it, measurements of ocean heat from the Argo network will give us an excellent empirical way of determining the most probable climate sensitivity. Ocean heat content seems to be a direct indicator of the radiative imbalance without confounding surface effects such as weather. My problem is that I do not know how to translate the latest apparent trend of radiative imbalance as expressed in watts into an inferred CO2 sensitivity. Perhaps you can help me here.

    Hansen, in his 2005 “Smoking Gun” paper, noted an accelerating trend in ocean heat that indicated a radiative imbalance of 0.85 W at the end of the 1993 to 2003 period. He said that this implied a climate sensitivity of 2.7 degrees, close to the 3.0 degrees predicted in the models. Also, of course, the terrestrial temperature record gave similar results. It was also noted that half of the heat was delayed due to the thermal inertia of the ocean, and so the 2.7 number includes a doubling due to this un-expressed imbalance.

    We now have two further years of data from Lyman and if it is correct the globe cooled from 2004 to 2005 rather drastically. There is still a net warming from 1993 to 2005, however the only trend you can ascribe to AGW is 0.33 W.

    My question. If we assume that the IPCC TAR forcing definitions are correct (I realize you differ on solar), what does all this imply about climate sensitivity? I think that there is a logarithmic translation between energy imbalance in watts and equalization temperature. It seems to me that it implies a climate sensitivity of about 1 degree.

    Thanks and Regards, Gil Pearson

  • Hans Erren // January 15, 2007 at 9:32 pm | Reply

    “No, given a sensitivity of 0.33 deg.C per W/m^2 it indicates an increase of about 1.2 deg.C (which can also be computed by direct application of the Stefan-Boltzmann equation). This was established way back in 1896 by Nobel-prize-winner Svante Arrhenius.”

    Wrong! Arrhenius made some serious calculation errors and ended up with a dry climate sensitivity for CO2 of 4 K/2xCO2 (5 K inclusive H2O feedback).

    [Response: I'm talking about the sensitivity to an increase in forcing (at the surface) of 1 W/m^2. You're talking about the sensitivity to doubling CO2.

    But I dug up the original paper by Arrhenius, and indeed he doesn't directly state the sensitivity to increased forcing, although it's implicit in his formalism as he does apply the Stefan-Boltzmann equation. However, he appears not to have been the first. My mistake; my apologies.]

    BTW His Nobel prize was for chemistry and not for climate science. It’s like claiming Linus Pauling has authority in vitamin C research, because of his Nobel prize in chemistry.

    http://home.casema.nl/errenwijlens/co2/howmuch.htm

  • Hans Erren // January 15, 2007 at 9:40 pm | Reply

    “Our best method of computing the total impact of the feedbacks is with computer models. These indicate that the overall (including feedbacks) sensitivity to doubling CO2 is between 1.5 and 4.5 deg.C, with the “best estimate” being 3 deg.C (note: there is some evidence that it could be much higher, as much as 10 deg.C, but those numbers are at the “fringe”). ”

    Any value higher than 1.6 K/2xCO2 needs a cooling factor in the 20th century. Anthropogenic aerosols have been claimed to give this cooling; however, there still isn’t a best-guess number that scientists can agree upon for how much aerosols cool. A bit of armwaving really. And 10 K/2xCO2 is really science fiction.

    [Response: What exactly are you claiming? That there's no cooling effect from anthropogenic aerosols in the period 1945-1975? Ever? That the least uncertainty in, or disagreement about, quantification means an effect isn't real? That computer models which give stunning agreement to observation in post-diction, and good agreement in pre-diction, have no quantified estimate of aerosol influence?

    As for the 10 deg.C sensitivity to CO2, I said it was on the "fringe." But it's not science fiction; it's from Stainforth et al. 2005, Nature, vol. 433, pg. 403.]

    http://www.grida.no/climate/ipcc_tar/wg1/figts-9.htm

  • Eli Rabett // January 16, 2007 at 1:02 am | Reply

    I think this is going to get very definition like.

    First, if there were no greenhouse gases, how much would convection warm it? (Remember, clouds only form because there are greenhouse gases, principally water and SO2, and sensible heat depends on a greenhouse gas, water vapor.)

    Second, how would the heat of the atmosphere be rejected to space if there were no greenhouse gases? (see above).

  • Willis Eschenbach // January 16, 2007 at 1:03 am | Reply

    Dear Tamino:

    Thanks for your reply. You say:

    Willis

    You have confirmed my suspicion. Honestly, no offense meant — but you’re not too savvy about the rest of the science.

    Ooooh, dueling insults … can I play?

    Tamino, you have confirmed my worst fears. Truly, no offense meant — but you have the social skills of a six year old. Adults know that “no offense meant” does nothing but cloak an insult in a veneer of responsibility.

    Whether either of us is “savvy about the rest of the science” is not determined yet, and will not be determined by insults, or by unsupported claims regarding the other’s skills and abilities. I’d much prefer to keep this civil, and I have only insulted you in this fashion to show you the effect of your words and condescending tone. Can we get back to the science?

    I had said:

    It is generally accepted that without the greenhouse effect, the world would be about 33°C cooler. It is also generally accepted that the total downwelling “greenhouse” radiation is about 325 W/m2.

    You replied:

    No, it’s generally accepted that that’s the total downwelling longwave (LW) radiation. The atmosphere, whether it has greenhouse gases (GHG) or not, is warm, and hence will emit longwave radiation.

    The question is, what’s the difference between having greenhouse gases and not having them? The difference — the part due to GHG — is around 122 W/m^2.

    OK, let me make a change to my statement so that we can agree. Let me say:

    It is also generally accepted that the total downwelling radiation is about 325 W/m2.

    Now, a question for you, so we can see where our differences are. Let’s take it one question at a time, so we can build upon agreement.

    1) If the atmosphere were composed solely of oxygen and nitrogen, and the earth received 235 W/m2 from the sun, what would be the temperature of the earth?

    I say, based on the fact that nitrogen and oxygen are very weak absorbers of longwave radiation, that it would be only slightly above blackbody temperature, at about 260°K, or about -13°C. I base this on the Modtran line-by-line calculator. Set the CO2, CH4, and ozone levels to zero to eliminate the GHGs, the water vapor scale to 0 to eliminate the water vapor, and what remains is the weak absorption by nitrogen and oxygen. Set the ground temperature offset to -28.5°C, and you will see that outgoing radiation is 235W/m2.
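
    (For reference, a small Python sketch of the pure blackbody baseline implicit in this question; the 260 K figure quoted above then adds the weak absorption by nitrogen and oxygen.)

        SIGMA = 5.67e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
        absorbed = 235.0                 # W/m^2 absorbed from the sun

        # Blackbody equilibrium: absorbed = sigma * T^4
        T_blackbody = (absorbed / SIGMA) ** 0.25
        print(T_blackbody, T_blackbody - 273.15)   # about 254 K, roughly -19 C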

    Please provide a source supporting your answer.

    I appreciate this discussion, Tamino, as it allows us to investigate the science.

    My best to you,

    w.

    PS – a word to the wise. I would not debate statistics with Jean S. … those who have tried in the past have gotten burned.

  • tamino // January 16, 2007 at 5:00 am | Reply

    Willis,

    When I said you’re “not too savvy” about the science, I was being kind.

    Here are some quotes from you:

    “The CO2Science website makes no bones about the fact that their graph starts in the 1930s. This is around the time when CO2 started rising.”

    You can’t possibly ever have looked at the actual data, or you’d know this is false.

    “It is also generally accepted that the total downwelling “greenhouse” radiation is about 325 W/m2.”

    Utterly false, but you got called on this and have now changed your tune. That this is not just a “slip of the tongue” is proved by your later question,

    “For a start, how does your theory explain that 325 w/m2 of forcing only resulted in a 33° temperature rise?”

    The argument leads on to:

    “This indicates that the warming from greenhouse radiation is on the order of a tenth of a degree C per W/m2.”

    Not just false, this is downright amateurish; not even your denialist friends will buy this. “Not too savvy” is the most forgiving explanation for such a ridiculous claim.

    “…you seem to think that because the solar forcing correlates well with temperature until 1980 but diverges after that, this means that CO2 must be the cause of the post 1980 temperature rise. But to believe that, we’d have to assume that CO2 had no effect until 1980…”

    Not only did I never say any such thing, the whole idea is naive — at best.

    “… there were a lot of hurricanes in 2005 and relatively few in 2006…”

    False.

    “… none of these trends are statistically significant because of the autocorrelation of the dataset.”

    Based on a “rookie mistake”: testing for trends without in any way compensating for the seasonal pattern. You changed your tune on this one, too.

    “… there was a large pre-1930 rise in the data you show above … which was obviously not the result of CO2. Since then, temperatures have been dropping.”

    As for “Since then, temperatures have been dropping,” the only explanation is that you just made it up.

    “From 1930 to the end of the record, CO2 forcing rose by five times! that much, about one and a quarter W/m2. If the temperature variations were CO2 related, we would expect to see five times the rise from 1930 to the present in Forestburg”

    This argument is based on the “CO2 is the only forcing at work” model, together with the “isolated locations will faithfully follow the global forcing” assumption. This goes beyond a lack of “savvy.”

    “However, as you point out, there are less volcanoes and likely more solar forcing since 1930, both of which should bring temperatures up.”

    Not only did you seriously mischaracterize my statements, you obviously are “not too savvy” about the history of volcanic activity.

    “… but the lack of a competing theory only points to our lack of knowledge of the climate…”

    No it doesn’t, any more than the lack of a competing theory to Maxwell’s equations points to a “lack of knowledge” of electromagnetism. This isn’t just false — it’s utter nonsense.

    The number, extremity, and ineptitude of your misstatements is very revealing. Your contentiousness is counterproductive. You are no longer welcome here.

  • Dano // January 16, 2007 at 5:44 am | Reply

    If I may, willis also claimed over at CA this past summer that all he needed to do to forecast global agricultural yields in the future was plot a trendline; the method was ‘bozo simple’.

    I offered him and any reader there a bet that if he/anyone (no limit to how many papers) got published with such a method I’d pay him, and if these methods held up to peer scrutiny I’d pay more. I got a bunch of words, but no acceptance from anyone.

    This comment thread is just another line in the growing litany. Don’t feel bad.

    Best,

    D

  • Hans Erren // January 16, 2007 at 7:56 am | Reply

    What exactly are you claiming? That there’s no cooling effect from anthropogenic aerosols in the period 1945-1975? Ever? That the least uncertainty in, or disagreement about, quantification means an effect isn’t real? That computer models which give stunning agreement to observation in post-diction, and good agreement in pre-diction, have no quantified estimate of aerosol influence?

    If we can get good results by diminishing the cooling effect of aerosols, then Occam’s razor dictates that you don’t need this fiddle factor.

    Let me pile up some evidence:
    multivariable analysis of the satellite record reveals a CO2 climate sensitivity of 1 K/2xCO2 (Douglass & Clader)
    Reduction of aerosols in eastern Europe doesn’t lead to significant warming (Engelbeen)
    An aerosol low forcing model gives a response of 1 K/2xCO2, and a good correlation in the 20th century (CKO Dutch Challenge project)
    Phanerozoic climate sensitivity for CO2 is 1.2 K/2xCO2 (Naviv).

    There is no comprehensive global aerosol database like the one that exists for CO2. In the global aerosol optical depth index only volcanoes show up.

    [Response: Unfortunately your references are incomplete, so they're hard to check. But I'm already familiar with two of them -- probably! It's impossible to know whether we're talking about the same paper, with incomplete references.

    Douglass & Clader: First, they don't get 1 K per doubling CO2 (they don't consider CO2 at all), they get 0.63 K per W/m^2 -- which is about 2.3 K per doubling CO2. That's why they say, "The sensitivity is about twice that expected from a no-feedback Stefan-Boltzmann radiation balance model, which implies positive feedback." Second, they're considering the response to the solar cycle; the response to a cyclic forcing is expected to be different from the equilibrium response to a constant change in forcing.

    Shaviv (not Naviv) & Veizer: They only get that result after removing 66% of the variance of the original data, based on their theory of galactic cosmic rays.

    Phanerozoic? It's ironic that denialists dismiss the "hockey stick," claiming we don't know enough to reconstruct temperature accurately a mere thousand years ago, but will accept uncritically studies depending on temperature estimates hundreds of millions of years ago.]

  • Dano // January 16, 2007 at 1:59 pm | Reply

    Some read/improperly pile up cite only a certain subset of authors so that their worldview isn’t upset.

    Others remember the lessons from critical thinking class when they were 14:

    How well has Shaviv & Veizer’s paper withstood peer review?

    Where can we find an empirical paper by a non-climate listserv denizen?

    Did the D&C conclusion use corrected or uncorrected sat datasets?

    Best,

    D

  • Dano // January 16, 2007 at 2:00 pm | Reply

    hmmm…I had a strike tag for ‘pile up’. I’ll check acceptable tags.

    D

    [Response: I've noticed that some of the tags which work in posts, won't work in comments. I haven't figured out the details.]

  • Hans Erren // January 16, 2007 at 3:32 pm | Reply

    Douglass & Clader: First, they don’t get 1 K per doubling CO2 (they don’t consider CO2 at all), they get 0.63 K per W/m^2 — which is about 2.3 K per doubling CO2. That’s why they say, “The sensitivity is about twice that expected from a no-feedback Stefan-Boltzmann radiation balance model, which implies positive feedback.” Second, they’re considering the response to the solar cycle; the response to a cyclic forcing is expected to be different from the equilibrium response to a constant change in forcing.

    Indeed solar forcing is larger than CO2 forcing. After a personal communication with David Douglass where I showed this graph to him:

    http://home.casema.nl/errenwijlens/co2/co2-sb.gif

    David replied, that he couldn’t add this obvious explanation in for the trend L, because that would mean a rejected paper, so he had to conceal the trend in mK/decade. The resulting transient sensitivity for CO2 forcing for a residual trend of 65 mK/decade is 1K/2xCO2.

    Duh, I have a minor in signal processing, you don’t have to teach me about frequency response of feedback systems.

    Naviv: yeah sure.

  • Glen Raphael // January 16, 2007 at 11:35 pm | Reply

    Dano: Make sure you put the URL in your attempted links in double quotes. Though html doesn’t require those quotes, for some reason many comment scripts will strip the link if you don’t put them in. (Took me a long time to realize that was the issue.)

    t: I was impressed with how cordial and informative your discussion with Willis was (on both sides) right until you blew up at the end. Your earlier comment was right – the blog needs a resident skeptic or two to make sure your arguments aren’t preaching to the choir. Keeps everyone honest.

    Regarding this:
    >>“… there were a lot of hurricanes in 2005 and relatively few in 2006…”
    >False.

    Certainly true for the atlantic, and quite surprisingly so relative to predictions:

    http://en.wikipedia.org/wiki/2006_Atlantic_hurricane_season

    (though the pacific was just about as expected.)

  • Judith Curry // January 16, 2007 at 11:39 pm | Reply

    I just spotted this site, this particular thread is very useful. Tamino, I would like to add that based upon my own “sparring” over at climateaudit, Willis and Jean S are among the best of the skeptics, and have unfailingly tried to be polite and dig into the science in a meaningful way.

    [Response: Thanks for stopping in. And thanks for your aforementioned paper, which is very useful and a good read to boot.

    Permit me to doubt about Willis. Example: he actually stated that CO2 started rising around 1930. This means that he has never bothered to look at the data, but he's still willing to make "factual" claims about it. That's hardly what I call "digging into the science in a meaningful way." And it's increasingly clear that he will beat a dead horse until the end of time.

    If he wants to do that at climateaudit, that's their business and his. But this is my house. I'm not letting my blog turn into a "debate" site. There are plenty of those; anybody who craves a debate can find many -- elsewhere.]

  • nanny_govt_sucks // January 17, 2007 at 1:04 am | Reply

    I’m not letting my blog turn into a “debate” site.

    So, what is the purpose of your blog?

    [Response: to provide information about the science of global warming.

    This is not a forum for denialists. This is not an arena for debate. If that's what you want, you have plenty of sites to choose from. But in my opinion, creating yet another debate site is about as useful as inviting an "opposing viewpoint" every time the news runs a story on global warming.]

  • nanny_govt_sucks // January 17, 2007 at 3:06 am | Reply

    This is not a forum for denialists.

    Is it a forum for credulists only?

  • elspi // January 17, 2007 at 3:18 pm | Reply

    What kind of noise is nanny_govt_sucks?

    [Response: I let this comment through in order to make a point.

    In my latest post, I have issued a "declaration of independence" from denialists, and a clear commitment not to turn this blog into an "argument site." I know that you're just following the lead of other comments on this post, and I acknowledge that it's my fault, for letting denialists "hijack" my blog in the first place. So I certainly don't blame you (and I'll give you points for humor).

    But I'm determined to avoid going down the path of so many other blogs. If nanny_govt_sucks wants to stick around and engage in open-minded discussion, I think that's a good thing. But posts which refer to the cooling effect of aerosols as "hodge-podge magic-wand" will no longer appear.]

  • William L. Hyde // January 17, 2007 at 5:41 pm | Reply

    Hi…I’m new to all this and I was going to ask a few questions but I’m worried that I’ll be contravening your guidelines. Do you have them written down anywhere? If I check before I post then maybe it won’t be a waste of time. Thanks.

    [Response: In general I will attempt to be very liberal, only blocking posts which are clearly designed to muddy the waters rather than inform, and those which "beat a dead horse" by repeating ad infinitum what's already been addressed. In fact, those who have heard denialist arguments and genuinely want more information are the people I'm here for; if you have an open mind, post.

    Of course this is a matter of motivation. The same question, in one context is an inquiry, in another is denial. As to judging which is which ... that's up to me.]

  • Steve Bloom // January 17, 2007 at 9:39 pm | Reply

    I just wanted to add in response to Judy’s comment that Willis and Jean are in fact denialists, not skeptics. I say that because both have asserted that there are fundamental errors in every significant aspect of climate science; i.e., the models are wrong, the physics is wrong, the paleo work is wrong, the surface record is wrong, climate scientists who support the consensus must be engaged in a conspiracy, etc., etc. But even ignoring that, there is a sharp limit to how far an expert(?) knowledge of statistics coupled with an amateur knowledge of the science can carry one.

    I personally have fun debating with denialists, and would continue doing so on Climate Audit if co-moderator John A.’s animus toward me would allow it (he’s taken to removing my comments when Steve M. is off-line), but it’s a little less clear to me how a climate scientist benefits. Certainly getting the statistical soft spots in one’s work identified has some value, but then the discussion on CA inevitably dissolves into demands that the scientist explain the basics of her/his field from first principles. Funny, when I tried suggesting that the same be done with statistics I didn’t meet with a very positive reception.

    (For those who don’t know me or John A., he dislikes me in particular because I am a very small cog in the machinery of climate policy, and to my knowledge the only one readily available for him to dump on.)

  • Hans Erren // January 18, 2007 at 1:06 am | Reply

    So you are starting a blog with global warming statements and you expect everybody to agree with you? How long have you been in the blogosphere, newbie?

    As long as there are catastrophists and denialists there always will be a debate.

    [Response: No, I certainly don't expect everybody to agree with me. I don't even want that; I certainly don't expect to be right all the time.

    But I do insist that people who simply won't listen to reason, go someplace else.]

  • Andrew Dodds // January 18, 2007 at 8:20 am | Reply

    Hans -

    You are creating a false dichotomy here..

    A denialist is defined as one who denies all of the science behind global warming however many times it is explained to them, and would, for instance, happily repeat the ‘Global cooling was predicted by everyone in 1975′ myth every time a new discussion started.

    In the same vein, a catastrophist would repeatedly insist that all the models err on the low side and we are going to see another PETM in the next 30 or so years. Lynn V. is as close to this as anyone I’ve seen (Denialists seem far more common than Catastrophists, or at least a lot noisier).

    I would expect a person who blindly insisted that the science was wrong because it underestimated AGW to be banned as well.

    Now, if you are saying that the scientific consensus on AGW is ‘Catastrophist’, then you are essentially putting the scientists in a similar box to the denialists, as if their positions were equivalent. That is wrong – questioning the science is OK, but pretending that all viewpoints on AGW are equal is extremely sophistic.

    [Response: I'm hoping not to ban people. But I sure will block posts. Both catastrophists and denialists are free to state their opinions. But if, after the topic is dealt with, they won't let it go, the post will not appear. Also, if the original opinion is just plain ludicrous, that won't appear either.

    This means that a lot of people will find this site boring. That's OK with me.]

  • Hans Erren // January 19, 2007 at 3:27 pm | Reply

    The given fact that the SRES A1B scenario is considered as “Business as usual” and not the A1T scenario – a scenario without active CO2-reducing measures – shows that interpretation of the IPCC report has a hot bias. Sometimes even a 1% cumulative increase is called business as usual. The constant emphasis on the even more unlikely scenarios A2 and A1FI is telling.

    The omission of negative-feedback models (like Lindzen’s) from the IPCC TAR model output also shows political bias in a publication that claims to give a comprehensive overview of scientific research.

    Business as usual is technical innovation and increasing efficiency to reduce the cost of production. Look at the miles-per-gallon numbers. I consider myself halfway between the warmers and the denialists: I argue with both.

    btw nanny talk bollocks about significant other CO2 sources, but OTOH analysis of the CO2 record since 1959 shows that presently the CO2 half-life is decades, not centuries.

    [Response: I'm not familiar with the term "talk bollocks." If you think her mention of other CO2 sources is mistaken, please give some references.]

  • Hans Erren // January 19, 2007 at 4:14 pm | Reply

    I notice in the “This Blog is Different”
    thread that nanny is changing the subject from “significant volcanic output since 1958″ to “supervolcanos”

    typical.

    [Response: Please, no more on this. If nanny_govt_sucks posts are scientific, whether right or wrong, they'll probably get through. Likewise for everybody.]

  • nanny_govt_sucks // January 19, 2007 at 8:01 pm | Reply

    I didn’t change the subject, Hans. I think you misquoted me here: “significant volcanic output since 1958″. I never said or referred to this.

    [Response: this is the final comment on this. Everyone interested can read exactly what was said simply by scrolling up; I never have, and never will, change people's words. People who want to "pounce" on misstatements may do so at most once, maybe not at all, and there'll be no further "back-and-forth." Subject closed.]

  • Hank Roberts // October 21, 2007 at 3:02 pm | Reply

    A Google link brought me back here while looking up something else I recalled Dr. Curry posting.

    Looking back over this thread and thinking about the subsequent year, one suggestion for Tamino –

    You like the back and forth and want people to discuss.

    Would it be possible to pull out your science/math/chart work and post that separately somewhere with a pointer to discussion thereof?

    Something akin to globalwarmingart, if that site had a forum.

    Right now, I find the science you post is good, and the comments do help you improve it and update it.

    But then stuff accumulates and over time the science and graphs and math work all gets lost in the chatroom stuff, regrettably.

    [Response: I actually did something like that with this post. But there are more graphs since then, I should update it.]
