Open Mind

Spencer’s Folly 3

August 1, 2008 · 14 Comments

Part 3: Fast and Slow

We’ve had a look at a simple model of the influence of climate forcing on global temperature (the zero-dimensional one-component model). We’ve noted that when it comes to feedbacks (in the usual sense) in the climate system, some are reasonably fast (water vapor takes a few weeks or so to equilibrate) while others are slower (ice takes years to decades or longer to melt). We’ve also seen a method that uses observed data for surface temperature and net radiation imbalance to try to estimate climate sensitivity.


In the real world, global temperature can change due to radiative forcing, and it can also change due to internal variation. The key difference is that radiative forcing doesn’t just cause temperature change; it’s directly observable as part of the top-of-the-atmosphere (TOA) net radiation imbalance. The cause of internal variation does not show up in the net radiation imbalance. However, the effect of internal variation, i.e., temperature change, is observable in the TOA radiation imbalance. That’s because one of the “feedbacks” (in the more unusual sense) from temperature change is that warmer objects radiate more energy; this is the origin of the factor \lambda_0 in the equation for the total feedback (in the more unusual sense) \lambda from the last post:

\lambda = \lambda_0 + \lambda_w + \lambda_\alpha + ....

The factor \lambda_0 (more radiation from warmer objects) can be calculated from the Stefan-Boltzmann radiation law; it has the value 3.3 W/m^2/K (watts per square meter per kelvin). \lambda_w is the feedback due to water vapor, \lambda_\alpha the feedback due to albedo change, and “…” indicates that there are other feedbacks too; I’m just too lazy to enumerate them all. The much-sought-after climate sensitivity (the temperature change due to an increase of 1 W/m^2 in climate forcing) is 1/\lambda.
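As a purely illustrative sketch (the individual feedback values below are hypothetical, chosen only so the total matches the \lambda = 1.5 W/m^2/K used later in this post), note the sign convention: amplifying feedbacks make \lambda smaller, and hence the sensitivity 1/\lambda larger.

```python
# Hypothetical feedback terms (W/m^2/K); only lambda_0 = 3.3 comes from
# the post. In this convention, amplifying feedbacks carry a minus sign.
lam0 = 3.3        # Stefan-Boltzmann ("no-feedback") response
lam_w = -1.4      # water vapor feedback (hypothetical value)
lam_alpha = -0.4  # albedo feedback (hypothetical value)

lam = lam0 + lam_w + lam_alpha   # total feedback parameter: 1.5 W/m^2/K
sensitivity = 1.0 / lam          # climate sensitivity: 2/3 K per W/m^2
```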

Temperature change from climate forcing, like a CO2 increase, is a slow and steady process. Suppose, for instance, we start from a stable climate with stable temperature at climate forcing anomaly zero, temperature anomaly zero, then suddenly increase CO2 so that the climate forcing due to CO2 is raised to some constant value F. We already solved the simple model exactly for the temperature change due to any forcing function. We defined a few convenient variables, namely \omega = \lambda / C_p which is the “inverse time scale,” and the “scaled forcing” \theta(t) = F(t) / \lambda. The scaled forcing function is of course also constant,

\theta(t) = F / \lambda.

The temperature evolves according to

T(t) = T(0) + \omega e^{-\omega t} \int_0^t \theta(s) e^{\omega s} ~ds.

We assumed T(0) = 0 (temperature anomaly is zero when we start), and for our simple constant forcing function the integral can be evaluated exactly to give

T(t) = (F/\lambda) [1 - e^{-\omega t}].
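As a sanity check (my own sketch, using the parameter values that appear later in the post), the closed-form step response can be compared against direct numerical integration of the model equation C_p \, dT/dt = F - \lambda T with T(0) = 0:

```python
import numpy as np

# Parameter values assumed from later in the post
lam, Cp, F = 1.5, 45.0, 3.0   # W/m^2/K, W-yr/m^2/K, W/m^2
omega = lam / Cp              # inverse time scale, 1/yr

t = np.linspace(0.0, 100.0, 10001)
dt = t[1] - t[0]

# Closed-form solution for a constant (step) forcing
T_formula = (F / lam) * (1.0 - np.exp(-omega * t))

# Forward-Euler integration of Cp dT/dt = F - lam*T
T_euler = np.zeros_like(t)
for i in range(1, t.size):
    T_euler[i] = T_euler[i - 1] + dt * (F - lam * T_euler[i - 1]) / Cp

# The two agree to within the Euler discretization error
assert np.max(np.abs(T_formula - T_euler)) < 1e-2
```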

Now let’s compute the net radiation imbalance N (used for the method of the last post to estimate climate sensitivity). This will have two terms. One is +F(t), just the change in incoming radiation caused by the climate forcing. The other is - \lambda T, the radiation “feedback” (in the more unusual sense) caused by a temperature change. This is just the equation of the last post

N = F - \lambda T.

Plugging in the equations for F and T from the simple model, we get

N = F e^{-\omega t}.

Now suppose we measured the TOA radiation imbalance and the surface temperature, and they followed these equations. For this example, take \lambda = 1.5 W/m^2/K and C_p = 45 W-yr/m^2/K, so that \omega = 0.0333... yr^{-1} (characteristic time \tau = 1/\omega = 30 yr). Take the forcing to be F = 3 W/m^2. Then we can plot the observed data (for 100 years of temperature change) as:
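Here’s a minimal sketch (my own code, not the post’s) that generates those “observations” from the stated parameters:

```python
import numpy as np

lam = 1.5        # feedback parameter, W/m^2/K
Cp = 45.0        # heat capacity, W-yr/m^2/K
F = 3.0          # constant (step) forcing, W/m^2
omega = lam / Cp # inverse time scale = 1/30 yr^-1

t = np.linspace(0.0, 100.0, 1201)            # 100 years, ~monthly steps
T = (F / lam) * (1.0 - np.exp(-omega * t))   # temperature anomaly, K
N = F - lam * T                              # TOA imbalance, W/m^2

# N should equal F e^{-omega t} exactly
assert np.allclose(N, F * np.exp(-omega * t))
```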

We can also plot radiation imbalance against temperature change, and use a linear regression to determine the equation of the best-fit straight line:

For this line to have the form of the radiation imbalance equation for constant forcing, the forcing must be 3 W/m^2 and the “feedback” parameter (in the more unusual sense) must be \lambda = 1.5 W/m^2/K, so the climate sensitivity is 1/\lambda = 2/3 K/(W/m^2). This was, in fact, the first application of the radiation imbalance equation to estimate climate sensitivity, from net radiation imbalance and temperature anomaly for the output of a climate model with constant forcing (a rather more elaborate model than the simple one we’ve used here, Gregory et al. 2004, Geophysical Research Letters, 31, L03205).
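Continuing the sketch: an ordinary least-squares line through the noise-free (T, N) points recovers the forcing as the intercept and -\lambda as the slope, hence the sensitivity as -1/slope:

```python
import numpy as np

lam, Cp, F = 1.5, 45.0, 3.0      # parameters from the text
omega = lam / Cp
t = np.linspace(0.0, 100.0, 1201)

T = (F / lam) * (1.0 - np.exp(-omega * t))   # temperature anomaly, K
N = F - lam * T                              # TOA imbalance, W/m^2

slope, intercept = np.polyfit(T, N, 1)       # best-fit straight line
sensitivity = -1.0 / slope                   # = 1/lambda = 2/3 K/(W/m^2)
```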

Now let’s add some natural variation to our simple model. Let’s assume that the natural variation is due to internal processes which don’t contribute to the radiative forcing, so there’s no change in the term F(t) of the radiation imbalance equation; that’s still just constant at 3 W/m^2. But the random fluctuations in temperature will cause outgoing radiation to fluctuate, because you just can’t get around the fact that warmer objects radiate more energy. However, we can’t compute the radiative effect of these fluctuations as - \lambda T, because they are so rapid (both up and down) that the feedbacks (in the usual sense) don’t have time to operate. Hence to compute the impact of the rapid fluctuations on outgoing radiation, we should include only the “default” feedback (in the more unusual sense) parameter \lambda_0. The impact of the rapid fluctuations on the radiation imbalance will therefore be - \lambda_0 T rather than - \lambda T.

I’ll make the temperature fluctuations random, and give them a standard deviation of 0.1 K. Now the plot of radiation imbalance and temperature looks like this:

and the regression of radiation imbalance against temperature looks like this:

The regression line indicates that the radiative forcing is 3.071 W/m^2, which is very close to the true value 3 W/m^2. It also indicates that the feedback (in the more unusual sense) parameter is 1.553 W/m^2/K, very close to the true value 1.5 W/m^2/K, so the estimated climate sensitivity is 1/1.553 = 0.644 K/(W/m^2), very close to the true value 0.667 K/(W/m^2). Basically, the method has worked; we estimated both the forcing and the sensitivity with good accuracy from our “observations.”
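A sketch of the same experiment with internal variability added (my own random draws, so the fitted numbers differ slightly from the 3.071 and 1.553 quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

lam, lam0, Cp, F = 1.5, 3.3, 45.0, 3.0   # lam0 = Stefan-Boltzmann response
omega = lam / Cp
t = np.linspace(0.0, 100.0, 1201)

T_forced = (F / lam) * (1.0 - np.exp(-omega * t))
dT = rng.normal(0.0, 0.1, t.size)   # internal fluctuations, sd 0.1 K

T = T_forced + dT
# The slow forced change radiates via the full lambda; the fast
# fluctuations have no time for feedbacks, so they radiate via lambda_0.
N = F - lam * T_forced - lam0 * dT

slope, intercept = np.polyfit(T, N, 1)
# intercept lands close to F = 3, and -slope close to lambda = 1.5
```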

You may also notice that the data in the linear regression plot tend, on very short time scales, to follow straight lines with a steeper slope than the overall regression. That’s because the rapid fluctuations, the ones that are too fast to allow feedback (in the usual sense) to appear, affect radiation imbalance according to the default feedback (in the unusual sense) parameter. The slope of these short line segments is -3.3 W/m^2/K, but that’s not the feedback parameter \lambda; it’s the default, or “no-feedback” (in the usual sense), feedback (in the unusual sense) parameter \lambda_0. If we looked only at very brief time spans, we’d be fooled into thinking that one of those straight line segments was the actual relationship between radiation imbalance and temperature change. If we did that, then we’d conclude, quite mistakenly, that the actual feedback (in the unusual sense) parameter was simply equal to the default (no-feedback in the usual sense) value. Then we’d conclude (quite mistakenly) that climate sensitivity was 0.3 K/(W/m^2) rather than its true (for this model) value of 0.667 K/(W/m^2).
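And a sketch of the folly itself: fit straight lines to short (one-year) windows of the same noisy data, and the slopes cluster near -\lambda_0 = -3.3 rather than -\lambda = -1.5, so the inferred “sensitivity” would be about 0.3 instead of 0.667 K/(W/m^2):

```python
import numpy as np

rng = np.random.default_rng(0)

lam, lam0, Cp, F = 1.5, 3.3, 45.0, 3.0
omega = lam / Cp
t = np.linspace(0.0, 100.0, 1201)            # ~monthly points, 100 yr

T_forced = (F / lam) * (1.0 - np.exp(-omega * t))
dT = rng.normal(0.0, 0.1, t.size)            # fast internal fluctuations
T = T_forced + dT
N = F - lam * T_forced - lam0 * dT           # fast wiggles radiate via lambda_0

window = 12                                  # one year per regression
short_slopes = [
    np.polyfit(T[i:i + window], N[i:i + window], 1)[0]
    for i in range(0, t.size - window + 1, window)
]
median_slope = float(np.median(short_slopes))
# median_slope sits near -lambda_0 = -3.3, nowhere near -lambda = -1.5
```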

That would be folly!

But that’s exactly what Spencer has done in his presentation. That’s Spencer’s folly. (If you read his presentation, be aware that he’s reversed the sign of the radiation imbalance, so the lines slope upward to the right rather than downward to the right.) He’s based his estimation of climate sensitivity on time spans which are so brief that feedback (in the usual sense) in the climate system doesn’t have time to operate! If you eliminate feedback (in the usual sense) from consideration, you’re not going to get a realistic estimate of climate sensitivity.

The method can be used to estimate climate sensitivity, but it actually requires reasonably long time spans to give the correct result. It was used by Forster and Gregory (2006, J. Climate, 19, 39) to estimate climate sensitivity based on actual measurements from 1985 to 1996, when the radiation imbalance N was measured by the Earth Radiation Budget Satellite. Their analysis was complicated by the fact that climate forcing was not constant, so you can’t apply a simple linear regression because the “intercept” of the regression line isn’t constant; to get climate sensitivity they had to estimate the slope of the curve giving the relationship between temperature and radiation imbalance. This makes the method less precise, but it does have the virtue of giving information about the time evolution of radiative forcing; they use the modified method both to estimate climate sensitivity and to reconstruct climate forcing. They also caution that the time span under study casts doubt on the accuracy of their results, because it’s too short a time span for the method to be very precise. They further caution that some feedbacks (in the usual sense) can take decades or longer to appear, so their analysis is more like an estimate of “prompt” climate sensitivity than of the true, equilibrium climate sensitivity. Forster and Taylor (2006, J. Climate, 19, 6181) got around the too-brief-time-span problem by analyzing the output of climate models for hundreds of years, but of course that’s the result for a climate model, not observations for the actual planet Earth.

But Spencer, he not only disdains to allow sufficient time for feedback (in the usual sense) to be observed, he actually tries to persuade us that the longer-term behavior is wrong but the shorter-term behavior is right. He refers to the very rapid, brief, steeper-sloped lines as “feedback stripes” (feedback in the more unusual sense) and the longer-term behavior as “radiative forcing spirals.” They’re nice names, but they only cloud the issue, giving the false impression that one can estimate climate sensitivity by computing slopes of lines with such short duration that feedback (in the usual sense) simply doesn’t have time to make its impact observable.

To me, this seems rather ridiculous. But I get the distinct impression that Spencer actually believes it. Which illustrates starkly that when one wants to believe (or disbelieve) something for ideological reasons, the ability to fool oneself can amplify to impressive force.

It also illustrates why it’s so easy to “cloud” the issue of global warming, and so difficult for the layman to be confident when well-crafted misinformation is so prevalent. Spencer makes a very slick presentation. It’s taken me three installments to address his folly without making any single blog post prohibitively long. Now imagine that you’re a reasonably well-educated and informed lay reader who sees his presentation. It’ll seem to make perfect sense, unless you have lots of time and motivation to consider it in detail, and the necessary skill to analyze what’s really going on. Spencer’s presentation isn’t the kind of “obvious” mistake, like “there’s more CO2 emitted by volcanoes than by human activity,” which is easily debunked in a few lines simply by looking up some real data. It’s a sophisticated mistake, one the lay reader generally can’t see through with a quick and easy Google search. Which is not folly; it’s a pity.

Categories: Global Warming

14 responses so far ↓

  • georghoffmann // August 1, 2008 at 2:17 pm

    Tamino
    this is what I suspected you were heading toward, and you are certainly right for many feedbacks. But water vapour is a different issue. It is fast on the one hand, but on the other hand (on these short time scales) it is strongly controlled by direct circulation changes and not by what I would call feedbacks. The moisture fields during and after an El Nino are not changing because there are classical water vapour feedbacks due to a warming middle troposphere, but because convection and circulation directly move moisture around.
    So my critique of Spencer’s analysis, regarding the very important WV feedback, would rather be that he is mistaking circulation for a feedback. What are your thoughts?

    [Response: I'm not sufficiently knowledgeable to comment on circulation changes vs temperature change as causes of water vapor changes. But it certainly underscores the need to analyze long time spans to get a proper estimate of water-vapor feedback, so that the short-term fluctuations (whatever their root cause) can average out and the actual feedback signal rise above the noise.]

  • thingsbreak // August 1, 2008 at 3:01 pm

    Thanks for this series of posts, and my apologies for contributing to the derailment of the comments on the first.

    Very clearly written- I liked especially your characterization of “prompt” versus equilibrium sensitivity.

  • Ray Ladbury // August 1, 2008 at 4:00 pm

    Tamino, excellent analysis, and once again an admonition that we have to have the definition of climate in mind when we do analysis of it.

    Georg, It sounds as if you might be saying something similar to what Chris Colose is saying:
    http://chriscolose.wordpress.com/2008/06/23/is-the-atmosphere-drying-up/

  • Joel Shore // August 1, 2008 at 5:34 pm

    Tamino: Congratulations on a most excellent exposition!

    Your last paragraph really rings true for me. After the folks at RealClimate recently posted a response to Monckton’s tirade in the APS Forum on Physics & Society newsletter, I was encouraging them to talk more about Spencer’s work. After all, I could pretty much debunk Monckton on my own…but Spencer’s stuff was more of a challenge. I am glad that you have taken on that challenge, and done so quite admirably from what I can tell.

  • David B. Benson // August 1, 2008 at 8:53 pm

    Tamino — Well done! Admirably done!

    I need to add that not only are there ‘prompt’ feedbacks and ’slower’ feedbacks, there is the feedback from warming the oceans. From a ModelE study, this seems to require in excess of 1300 years to reach equilibrium.

    One of Reto Knutti’s studies (he was lead author), pointing out why Schwartz was wrong, shows that the climate model he was using produced about 60% of the equilibrium climate sensitivity for 2xCO2 (ECS) within 5–7 years with the rest, due to ocean heating, requiring many, many centuries.

    This is just to elaborate on the point that determining ECS requires looking at very long time intervals.

  • sod // August 2, 2008 at 3:35 am

    good work Tamino.

    your work is of incredible value in the fight against denialist-spread misinformation…

  • Gavin's Pussycat // August 2, 2008 at 12:56 pm

    Food for thought, as always… I notice that both in the derivations here and in Spencer’s slide show, the algebraic signs of the feedbacks are opposite to what I am used to… staring pretty long at that, I was. The lambda_0 is a positive number, representing in my intuition a negative feedback (when temperature rises due to a positive forcing, it restores equilibrium N = 0).

    Same with water vapour feedback: lambda_w is a negative number (though it’s commonly thought of as a positive feedback), reducing the total lambda! This is the only way the feedback can strengthen the temperature response of the original CO2 forcing… perhaps trivial, but this may help others in their understanding.

    As to tamino’s “slow feedback” hypothesis, one way to get a handle on the time scale of the water vapour feedback is to observe that global mean precipitation (and thus evaporation) is 2.61 +/- 0.03 mm per diem or 95 cm per annum. The global mean water vapour column of the atmosphere is 25 mm water equivalent, meaning that the atmosphere’s water vapour is “cycled” completely in ten days.

    This is clearly a lower limit as the atmosphere is far from homogeneous (and one third of it not even over water).

    An upper bound may be derived by looking at the spread of CO2 as a “mixing tracer”. The Keeling curve has this annual ripple, which is strong in Hawaii, weakens toward the south, and is no longer present in Antarctica. This suggests that the time scale for mixing air from the NH to Antarctica is over one year.

    The reality for water vapour will be somewhere in-between. Looking at Spencer’s slide five I see that the “regime transition” happens between the time scales 31 days and 91 days.

    I would say tamino’s explanation makes physical sense.

    BTW the argument presented by georghoffmann makes sense too… it would be similar to the critique of Chris Colose on Spencer et al 2007 on an earlier thread here. Taking a weather phenomenon like the Madden-Julian Oscillation and observing a correlation between temperature and some source of feedback is nice, but you cannot just go and call the correlation a feedback even if the SI units are the same :-)

    About Spencer’s honesty, tamino is a bit, eh, gracious. I have observed Roy knowingly telling untruths before an audience where he could expect to get away with it. Ah well, innocent until proven guilty.

  • Gavin's Pussycat // August 2, 2008 at 3:11 pm

    Referring to the previous post, actually I overlooked something important.

    The time period of 10 days for water vapour equilibration (which also Gavin Schmidt reported on RC as the result from model simulations deleting atmospheric moisture and seeing how long it took to come back) refers to exponential decay, and should be multiplied by 2pi before comparing to sinusoidal periods!

    That gives us 60 days, nicely in Spencer’s 31-91 day range!

    I did further test computations. You can express the effect of the feedback slowness as an attenuation factor. My derivation gives

    1/sqrt(4 pi^2 rho^2 + 1),

    where

    rho = tau / sigma,

    tau being the 10 day exponential decay time scale, and sigma the periodic time scale studied: 7, 31, 91 and 365 days in Spencer’s graphs on his page 5.

    I computed these attenuation factors by the above formula, and also, independently, from Spencer’s feedback coefficients on the graphs, by

    att-factor = (3.3 - Spencer) / 3.3.

    Below the results.

    Period (d)   att. factor (me)   att. factor (Spencer)
    -----------------------------------------------------
          7           0.11                -0.36
         31           0.44                 0.26
         91           0.81                 0.82
        365           0.99                 1.07
    Spencer’s regression values are a bit noisy but I think I have just empirically confirmed tamino’s theory from Spencer’s data ;-)
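For what it’s worth, the commenter’s attenuation formula is easy to check numerically (a quick sketch; it reproduces the “me” column above to within rounding):

```python
import math

tau = 10.0                       # exponential decay time scale, days
att = {}
for sigma in (7, 31, 91, 365):   # Spencer's averaging periods, days
    rho = tau / sigma
    # attenuation factor 1/sqrt(4 pi^2 rho^2 + 1) from the comment above
    att[sigma] = 1.0 / math.sqrt(4.0 * math.pi**2 * rho**2 + 1.0)
    print(sigma, round(att[sigma], 2))
```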

  • Phil. // August 2, 2008 at 5:20 pm

    Spencer bearing false witness, surely not!
    He was rather disingenuous with the old MSU website; he left statements on the front page about the lack of warming long after the corrections to his algorithm had been made, and the results on his data page actually contradicted them.

  • Gavin's Pussycat // August 3, 2008 at 7:47 pm

    On further consideration, looking at the regression coefficients on Spencer’s slide, and the point clouds they are supposed to describe, they’re nowhere near good enough for any such precise analysis… not his and not mine ;-)

    At most the trend among them with averaging length is real… and water vapour slowness in the ball park for explaining it.

  • Duae Quartunciae // August 4, 2008 at 11:50 am

    Is there anywhere that tabulates time constants associated with different feedbacks, according to the various models used to evaluate such things? I’ve read Bony et al. (2006) and Soden and Held (2006) on feedbacks, and I have seen the tabulation of inferred feedback parameters for different models. But I have not seen time constants listed. Can they be inferred from the models?

  • Gavin's Pussycat // August 6, 2008 at 1:01 pm

    Duae, good question. No, I haven’t found any reference in one place. I am sure they could be inferred from model runs, as Gavin Schmidt did here for water vapour:

    http://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/

    You can try to ballpark the numbers using general physical considerations like I did above. For clouds, I would expect the response to be at first immediate and positive (as the troposphere heats up with no WV added, clouds find it immediately harder to form), but then it exponentially converges to the secular value, on a ten-day time scale.

    For albedo, we can also try to use the number of 2.5 mm/day evaporation global average, i.e., one metre per year. This suggests a time scale for land snow and sea ice on the order of a year. On the one hand, the number for evaporation in high latitudes is probably smaller; on the other, snow and ice disappear by melting and runoff, not sublimation, taking much less energy.

    For continental ice sheets this guesstimate gives 1000 years per km thickness. This obviously does not consider collapse.

  • David B. Benson // August 7, 2008 at 12:25 am

    The following paper seems relevant to this thread:

    http://www3.interscience.wiley.com/cgi-bin/fulltext/117865547/PDFSTART

    wherein the simplified climate response model is treated in terms of two parameters in an interesting manner.

  • David B. Benson // August 7, 2008 at 2:11 am

    But here is what James Annan writes about that paper:

    Now I have read the paper, and I can only reiterate
    that Allen’s conception of probability as expressed in that paper is fundamentally and irretrievably broken. He talks about the models not sampling the range of uncertainty indicated by the measurements, but this is a wholly fallacious concept.

    An analogy should make it clear. I have 3 clocks readily to hand (PC, PDA, watch) - let us consider these as an ensemble of “models” that estimate the time. They say 10:03, 10:05 and 10:06 respectively. Now I can also look out of the window to measure the true climate system time by observing the position of the sun in the sky. Unfortunately I don’t have any precise instrumentation, so I can only estimate the time as 10:00±30mins.

    According to Allen’s interpretation, the clocks can now be said to be “undersampling the range of uncertainty that is consistent with the observation” and thus are underestimating the probability of the time being much later or earlier than 10:00ish. This is wholly bogus.

    I hope it is immediately obvious to you that although a *precise* observation could potentially tell us that our clocks are wrong, an observation with high uncertainty can *never* tell us *anything* about the accuracy of the clocks. It is true that if I had another watch that said 10:35, this specific observation would not be able to tell me that it was wrong. But this does *not* mean that I *should* have such a watch, or be worried by its (potential) existence. Similarly, I do not worry that the sun has gone out every time I shut my eyes, even though my “observation” of its brightness does not exclude this possibility.

    This fallacy all dates back to his bogus interpretation of confidence intervals as if they are valid credible intervals for an unknown parameter (ie asserting that a uniform prior is somehow the “objective” choice). I’ve pointed out to him several times that his methods are inconsistent with standard probability theory, but he seems to regard this as a feature rather than a bug…and as far as I can tell, few of the reviewers and colleagues have any idea what he is on about,
    apparently assuming that since he is Myles Allen, he must be right.

    James

Leave a Comment