Open Mind


October 21, 2008 · 80 Comments

The two most widely used estimates of temperature in the lower troposphere (the lower layer of the atmosphere) based on satellite data are from RSS (Remote Sensing Systems) and UAH (University of Alabama in Huntsville). Neither is a direct measurement of lower-troposphere temperature, because satellites don’t measure that directly. They do measure the temperature in the mid-troposphere and the lower stratosphere, and these two groups (among others) have used the available information to attempt to reconstruct lower-troposphere temperature from the satellite data.

Of course there’s disagreement between the two data sets. Still they’re pretty close, as can be seen by plotting them both on the same graph:

We can get a much better look at the differences if we plot those differences directly; here’s the RSS estimate minus the UAH estimate over the available time span:

There are two differences which are apparent to the eye. First, there’s a “step change” at 1992, with RSS being higher than UAH after that but lower before that. Second, in the most recent time period (from about 2003 on) there’s an annual cycle, with RSS being relatively higher during northern hemisphere summer and UAH relatively higher in northern hemisphere winter.
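For concreteness, the difference series and the step change can be sketched with synthetic data (a minimal sketch; the 0.05 deg.C offset and the series themselves are assumptions for illustration, not the actual RSS or UAH values):

```python
import numpy as np

# Two synthetic monthly anomaly series sharing a common signal; an assumed
# 0.05 deg.C offset is added to one of them from 1992 onward, mimicking a
# merge-related step. These are stand-ins, not the real RSS/UAH data.
rng = np.random.default_rng(0)
t = 1979 + np.arange(12 * 30) / 12.0             # monthly time axis, 1979-2008
common = 0.015 * (t - 1979) + 0.1 * rng.standard_normal(t.size)
rss_like = common + np.where(t >= 1992, 0.05, 0.0)
uah_like = common.copy()

# Plotting rss_like - uah_like is what reveals the step: the shared signal
# cancels exactly here, leaving only the difference in the reductions.
diff = rss_like - uah_like
before = diff[t < 1992].mean()
after = diff[t >= 1992].mean()
print(f"mean difference before 1992: {before:+.3f} deg.C")
print(f"mean difference after  1992: {after:+.3f} deg.C")
```

In the real comparison the two groups’ noise does not cancel, but the same before/after averaging quantifies the size of the step.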

The step change at 1992 is due to differences in the way the two groups join the data from different satellites. The “satellite record” is not from one satellite in continuous operation, it’s the combined data from over a dozen different satellites and their instruments. There are considerable differences between the approaches used to join the different satellites’ data, as pointed out in Mears et al. 2003:

A more important difference between our methodologies is the way in which we determine the intersatellite merging parameters. We use a unified approach where each overlapping pentad average is treated with equal weight to determine both the target factors and the intersatellite offsets. The equal weighting of each 5-day overlap serves to deemphasize periods of short overlap without ignoring them altogether. Christy et al. (2000, 2003) impose a minimum time period over which an overlap must occur before it can be taken into account to help determine the merging parameters. This leads CS to discard the TIROS-N–NOAA-6, NOAA-7–NOAA-9, NOAA-8–NOAA-9, NOAA-9–NOAA-10, and NOAA-10–NOAA-12 overlaps when determining their target factors. Their intersatellite offsets are then determined by evaluating the mean difference between coorbiting satellites utilizing a single path that connects all the satellites in question.

The 1992 step coincides with the end of the NOAA-10 mission and the beginning of NOAA-12. The NOAA-11 mission covers this transition, but that particular instrument was subject to instrumental drift; as Mears et al. point out, “This is especially important for NOAA-11, where there is a long-term drift in the target temperatures and a long-term diurnal correction of approximately the same shape.”

As for the apparent annual cycle in the difference between RSS and UAH, we can quantify it by applying a wavelet transform. This also gives (as a side benefit) a smoothed estimate of the mean difference between the two:

Again, we can see the step change in 1992. We don’t see the 1-yr periodic fluctuation because the wavelet transform smooths that out. But we do see a possible trend in the data from 1979 to 1992, with RSS getting warmer relative to UAH.

But what we’re really interested in from the wavelet transform is the size of the annual cycle in their difference. Here it is:

There’s a notable annual cycle from about 2003 onward. In fact for the last 5 years or so it’s shown a semi-amplitude of about 0.08 deg.C, or a full amplitude of 0.16 deg.C. We can further quantify this by a Fourier analysis of the difference between RSS and UAH from 2003 to the present:

Either one is showing an annual cycle that the other doesn’t, or they both show an annual cycle but of very different characteristics. They can’t both be right.
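One standard way to quantify such an annual cycle is a least-squares fit of sine and cosine terms at 1 cycle/yr; the semi-amplitude is the length of the fitted (cos, sin) coefficient vector. A minimal sketch, using a synthetic series with an assumed 0.08 deg.C cycle standing in for the RSS-minus-UAH difference:

```python
import numpy as np

# Synthetic stand-in for the 2003-present difference series: an assumed
# 0.08 deg.C annual cycle plus white noise (the real residual is redder).
rng = np.random.default_rng(1)
t = 2003 + np.arange(60) / 12.0                  # five years of monthly data
diff = 0.08 * np.cos(2 * np.pi * t) + 0.02 * rng.standard_normal(t.size)

# Design matrix: constant term plus cosine and sine at 1 cycle/yr
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t),
                     np.sin(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
semi_amplitude = np.hypot(coef[1], coef[2])      # recovers roughly 0.08
print(f"fitted semi-amplitude: {semi_amplitude:.3f} deg.C")
```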

How can an annual cycle exist when these data are anomalies, which should eliminate the annual cycle? The answer is that these data are anomalies relative to the baseline period 1979-1999. Hence the average annual cycle 1979-1999 has been eliminated. But if the annual cycle from 2003 to the present is different from the annual cycle 1979-1999, then that difference will remain in the anomalies. In fact we expect there will be differences in the average annual cycle 2003-present compared to 1979-1999, because of greater warming in winter than in summer. But the change of the annual cycle from the reference period to the last five years is quite small; it’s simply not plausible that the difference has an amplitude as great as 0.16 deg.C.
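A tiny sketch, with assumed numbers, of why a changed annual cycle survives anomaly processing: subtracting a baseline-period monthly climatology removes the baseline cycle exactly, but any later change in the cycle remains in the anomalies.

```python
import numpy as np

# Hypothetical seasonal cycle: 2.0 deg.C amplitude during the 1979-1999
# baseline, growing to 2.1 deg.C afterward (assumed values for illustration).
months = np.arange(12 * 29)                      # monthly index, 1979-2007
year = 1979 + months / 12.0
amp = np.where(year < 2000, 2.0, 2.1)
temp = amp * np.cos(2 * np.pi * months / 12)

# Anomalies: subtract the monthly climatology of the baseline period only
baseline = year < 2000
clim = np.array([temp[baseline & (months % 12 == m)].mean() for m in range(12)])
anom = temp - clim[months % 12]

# Baseline anomalies are flat; post-baseline anomalies retain a residual
# cycle of 0.1 deg.C semi-amplitude (0.2 deg.C peak to peak).
print(np.ptp(anom[baseline]), np.ptp(anom[~baseline]))
```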

So, which data set is showing the faulty annual cycle over the last five years? We can look for this annual cycle in the last five years of the individual data sets using Fourier analysis:

The RSS data show about what we’d expect, given the red-noise character of the data. The UAH data show the same, plus a strong response at a period of 1 year. So it’s the UAH data that show a false annual cycle recently; in fact they show a semi-amplitude of 0.13 deg.C, or a full amplitude of 0.26 deg.C. It’s just not believable that the annual cycle in global lower-troposphere temperature has changed its amplitude by 0.26 deg.C between the 1979-1999 reference period and the 2003-present period. Something is wrong with the UAH reduction.

It’s my considered opinion that the step change in 1992 is due to UAH using an inferior method to join the different satellite data sets, so on that basis the RSS data are to be preferred. It’s also my opinion that there’s little if any doubt that the annual cycle shown recently by the UAH data is false, indicating further faultiness in their reduction procedure. Again, the RSS data are to be preferred. So on the whole, I believe that the RSS satellite data are considerably more accurate and reliable than those from UAH.


It seems some aren’t convinced that there’s an annual cycle in the difference between RSS and UAH lower-troposphere temperature after 2003. Perhaps they are led astray by the fact that the Fourier spectra plotted in this post are amplitude spectra, designed to show the physical amplitude of fluctuations which are present. But statistical significance is better indicated by the power spectrum. So, here’s the power spectrum of the difference between RSS and UAH data from 2003 to the present:

Furthermore, the annual cycle in the difference RSS-UAH is not restricted to the time span 2003-present, but that’s when the signal is strongest. A power spectrum of the data from 1995-present also shows the annual cycle with undeniable significance:

The annual cycle is real, there’s no doubt.
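The power-spectrum check can be sketched as a periodogram of a synthetic series (an assumed 0.08 deg.C annual signal in white noise; the real residual is redder): the strongest peak falls at 1 cycle/yr.

```python
import numpy as np

# Synthetic stand-in for the RSS-UAH difference: annual signal plus noise.
rng = np.random.default_rng(2)
n = 60                                           # five years of monthly data
t = np.arange(n) / 12.0                          # time in years
x = 0.08 * np.sin(2 * np.pi * t) + 0.03 * rng.standard_normal(n)

# Periodogram (power spectrum): |FFT|^2 / n at the independent frequencies
x = x - x.mean()
freq = np.fft.rfftfreq(n, d=1 / 12.0)            # cycles per year
power = np.abs(np.fft.rfft(x)) ** 2 / n

peak = freq[np.argmax(power[1:]) + 1]            # skip the zero frequency
print(f"strongest peak at {peak:.2f} cycles/yr")
```

A proper significance test would compare the peak against the spectrum’s noise background (red noise for these data), but even this toy periodogram shows how a genuine annual signal dominates the power spectrum.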

Note: Having compared RSS and UAH to the HadAT2 data set, I find that there’s more divergence between RSS and HadAT2 at the 1992 step than between UAH and HadAT2. So I withdraw my opinion that the step change represents a reason to prefer RSS over UAH.

Categories: Global Warming

80 responses so far ↓

  • Mark Hadfield // October 21, 2008 at 9:19 pm

    Are you planning to publish this result?

  • TCO // October 22, 2008 at 12:18 am

Are the step change and the annual cycle different issues?

    [Response: Yes. The step change is related to the transition from one satellite's data to another's. The annual cycle ... ?]

  • Richard Steckis // October 22, 2008 at 3:58 am


    I ran your analysis by one of our statisticians (internationally published and a former lecturer in Statistics at a Western Australian University). His assessment was that if your spectral analysis is based on only the monthly data, then a sixty point dataset is not large enough to come to the conclusion you have come to. He asserts that “you are drawing an extremely long bow”.

The noisiness of the Fourier spectral plot is indicative of a limited dataset. Sixty points is not enough to come to the conclusion that the UAH dataset is faulty.

    Have you run this by UAH and RSS to see if they have addressed this issue?

    [Response: "Internationally published" doesn't impress me. I am too, including work on the specific topic of the statistical behavior of Fourier analysis.

Statistical significance depends on more than just the number of data points; it also depends on the length of the time span and the signal-to-noise ratio. The significance is real, especially in the difference between UAH and RSS. Look again at the spectrum for RSS-UAH.]

  • Raven // October 22, 2008 at 4:02 am

The measurements are reported as anomalies, which means differences in the respective baselines would introduce artifacts but have no effect on the accuracy of the trends over the last 10 years.

    The step change is something I have noticed before but these satellite measurements have been gone over with a fine tooth comb in the peer reviewed literature (see Randall, R. M., and B. M. Herman (2007)) so I suspect the step change is justified.

In fact, you see similar step changes in the stratospheric records each time a volcano blows, so you can’t reasonably argue that a step change centered on the Pinatubo eruption is non-physical.

    [Response: Nonsense. The step change is not in the data from RSS or UAH, it's in the difference between them. That has nothing to do with the physical behavior, it's all about the difference between their methods for estimating lower-troposphere temperature from other satellite data.]

  • Richard Steckis // October 22, 2008 at 4:39 am


    I am not going to get into a statistical argument (I will lose). However, you have not answered my main question to you:

    “Have you run this by UAH and RSS to see if they have addressed this issue?”

    Finally, Are you sure that what you are expressing as a statistically important difference is not just a sampling artifact given the small sample size?

    [Response: See the update to this post. The annual cycle in the difference between RSS and UAH since 2003 (and even before that) is quite real; it's not an artifact.

    I have not run this by RSS and UAH.]

  • Richard Steckis // October 22, 2008 at 4:43 am

Oh, and my comment on my advisor’s international publication status was not to try and impress you or fill you with awe. It was merely to establish that he has extensive knowledge of statistics.

    [Response: Understood; you weren't appealing to authority, you were establishing credibility.]

  • Gavin's Pussycat // October 22, 2008 at 5:53 am

    Richard Steckis, Tamino chided me once for seeing a solar signature in data where neither he nor his Fourier software could see any… he was right. There are proper ways for judging the significance of this. Don’t believe your (or anyone’s) lying eyes ;-)

  • Hank Roberts // October 22, 2008 at 6:05 am

    Curious from eyeballing the last chart, Tamino — the mismatch at 1 year is obvious; the only other similar mismatch, much smaller, is near the left side (some small fraction of a year). In both mismatches the two curves suddenly go in opposite directions then return. I don’t see that elsewhere. Anything interesting going on at that shorter time that resembles the one-year effect? Maybe it’s quarter-years, or some such artificial interval?

  • deepclimate // October 22, 2008 at 6:39 am

    This looks very convincing to me.

My understanding of the satellite data sets is a little different than described above. Although the raw data sets are identical, even the published T2 sets (mid-troposphere, before correction for stratospheric cooling) are quite different. As noted, these differences arise from different adjustments for inter-satellite calibration, as well as for diurnal drift and orbital decay. In the IPCC AR4, the respective T2 decadal trends were 0.04 deg C/decade for UAH and 0.12 for RSS (I haven’t looked them up since, but my hunch is that they have converged a bit).

    It would be interesting to know if the RSS-UAH differences showed a similar annual pattern in the T2 set; this might help pinpoint where in the processing problems are being introduced into the UAH data set.

    Zonal comparisons (NH, SH, tropical) might also be instructive.

    Again, great work. I have a feeling this will finally lead to a resolution of the UAH-RSS puzzle.

    [Response: Indeed, all the channels could be compared usefully. Although the directly measured channels don't require a data-combination procedure, they still require *calibration* and there are differences in the way that's done.]

  • Georg Hoffmann // October 22, 2008 at 11:21 am

    Extremely clear, Tamino, and convincing.
A remark on the apparent seasonality. Thresholds, at least in some climatic subsystems (I have sea ice in mind), might however produce dramatic increases and appearances of strong seasonality in anomaly series.
Sea ice anomalies are calculated relative to a climatology from the 70s and 80s, during which summer and winter sea ice still covered basically the entire Arctic. The anomaly series do not show any seasonality for a very long time, as expected. Now, with summer sea ice progressively disappearing, summer anomalies (relative to the original climatology) become hugely negative, whereas winter anomalies are still weather driven. This increase in sea ice seasonality (first really impressive in 2007) is due to the thresholds (melting conditions or not) in sea ice formation and in actually measuring the sea ice (ocean is sea-ice covered or not). In contrast to the entire atmosphere, you are not just warming a little bit more in winter (you mentioned that example); the effect for sea ice is much larger.
In summary, I agree entirely with you when looking at the global lower troposphere, but there are climate subsystems which are expected to show exactly such a sudden increase in seasonality.

    [Response: I checked the GISS surface record and found no similar-sized change in seasonality between 1979-1999 and 2003-present. But yes, there's no reason other variables (sea ice extent is a good candidate) might not show larger changes in seasonality.]

  • mauri pelto // October 22, 2008 at 1:55 pm

Exceptional analysis. I agree there is no other explanation than the one you have arrived at. The data have been gone over with a fine-tooth comb and have been recalculated several times as a result of these ongoing reanalyses. The UAH record in particular has suffered in the past from problems with adjusting temperatures across changing satellites.

  • B Buckner // October 22, 2008 at 1:57 pm

Great post. Creative, original thinking that adds knowledge and insight. You are at your best in posts such as these.

  • Duane Johnson // October 22, 2008 at 3:49 pm

It would be interesting to see a similar comparison between GISS and HadCRUT anomalies. Since there are differences in the processing approaches between the two sources, do analogous periodic differences appear?

  • Richard Steckis // October 22, 2008 at 4:01 pm


    I ran this post by John Christy of UAH. His reply to me was:

    The evidence is pretty clear that RSS has a spurious warming shift in the 1990s. Some of that information is in the three papers attached.
    The key point is that RSS shows a jump relative to all other datasets (UAH, SSTs, HadAT, RATPAC, surface temps, US radiosondes, Australia radiosondes etc.).

    The three papers he mentions are:

Christy and Norris (2006). “Satellite and VIZ-Radiosondes for Diagnosis of Nonclimatic Influences.” Journal of Atmospheric and Oceanic Technology 23: 1181-1194.

Christy et al. (2007). “Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements.” J. Geophys. Res. 112: D06102, doi:10.1029/2005JD006881.

Randall and Herman (2008). “Using limited time period trends as a means to determine attribution of discrepancies in microwave sounding unit-derived tropospheric temperature time series.” J. Geophys. Res. 113: D05105, doi:10.1029/2007JD008864.

    Perhaps Tamino, you have backed the wrong horse?

    But then again, maybe you have uncovered something they were not aware of and should be made aware of it?

    [Response: Did he have nothing at all to say about the spurious annual cycle in the UAH anomalies?

    Considering the track record of the UAH team, and the fact that John Christy is a member, I suspect he's just being defensive.]

    [Response 2: Well well ... I compared the RSS and UAH data to GISS, and to HADAT. It looks like Christy is leading you down the garden path. I feel another post coming on...]

  • Wolfgang Flamme // October 22, 2008 at 4:52 pm


    since GISS and HadCrut do show these elevated seasonal anomalies too, how do we conclude how much of it is an artefact and who is right or wrong by how much?

    [Response: As I said before, I checked the GISS data to see whether that much difference in seasonal cycle between the 1979-1999 and 2003-present periods was plausible. According to GISS data, it isn't. And there's no plausible explanation for GISS (or any other surface record) to fail to detect that big a change in the seasonal cycle.]

  • Ray Ladbury // October 22, 2008 at 5:55 pm

    Richard Steckis, Tamino has identified a completely separate issue in the data that has nothing to do with a warming shift. I find it difficult to believe that Christy simply ignored the issue Tamino raised. Are you sure he’s perused Tamino’s analysis sufficiently? I wouldn’t expect him to simply dodge the question.

  • PaulM // October 22, 2008 at 6:38 pm

    How are you plotting the Fourier spectra? If you are looking at the last 5 years then the lowest frequency is 1/5, and all your frequencies are multiples of this. So the peak at 1 cycle per year should be point number 5 in your spectrum. But in your pictures you seem to have many more than 5 points before the spike?

    In fact the 1-year cycle is so clear in your second plot (even clearer if you plot uah - rss over the period 2003-present) that Fourier analysis is not really necessary.

[Response: The lowest frequency is 1/5 ONLY if the sampling rate is 1 data point/yr. But the sampling rate is 12 points/yr, so the lowest applicable frequency is 1/60.]

    [Response 2: My mistake -- brain fart. The fundamental frequency spacing is indeed 1/5. The reason there are more "points" than that is that the spectrum is oversampled, to produce a smooth rather than "choppy" plot.]
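The frequency bookkeeping in that exchange can be sketched numerically (a minimal illustration; the toy series and the 8x oversampling factor are assumptions):

```python
import numpy as np

# 60 monthly samples over 5 years: the independent Fourier frequencies are
# spaced 12/60 = 1/5 cycle/yr apart, but zero-padding the FFT oversamples
# the spectrum onto a finer grid, which is why the plotted curve is smooth.
n, fs = 60, 12.0                                 # samples, samples per year
spacing = fs / n                                 # fundamental spacing, 0.2
x = np.sin(2 * np.pi * np.arange(n) / 12.0)      # toy 1 cycle/yr series

coarse = np.fft.rfftfreq(n, d=1 / fs)            # 31 independent frequencies
fine = np.fft.rfftfreq(8 * n, d=1 / fs)          # 241 oversampled points
spec = np.abs(np.fft.rfft(x, 8 * n))             # zero-padded, smooth spectrum
print(spacing, len(coarse), len(fine))
```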

  • deepclimate // October 22, 2008 at 7:02 pm

    No way that UAH matches surface temp record better than RSS.

  • dhogaza // October 22, 2008 at 7:33 pm

    I ran this post by John Christy of UAH…

    John Christy and his sidekick Spencer, the Detroit Lions of the climate science community.

  • Wolfgang Flamme // October 22, 2008 at 10:11 pm


    yes but the signal’s in the rest of both surface records as well. No problem, we know climate’s changing so something like that can be expected.

    Without destroying evidence we could make adjustments for the reference period in question - and shift the problem around in time. Nah..

    Deliberately introducing discontinuities is no solution of course. We need to *know* exactly what went wrong. I’m all with you about the need to scrutinize UAH. But we must not forget about RSS:

1) It wasn’t much data the UAH team discarded after all. Possibly it shouldn’t have made much of a difference - especially if the discarded data mainly contained ‘confirmatory’ information.

But since there might be considerable differences, I’m afraid the data could have contained much ‘new’ and ‘different’ information. By design this new information was incorporated into RSS but not into UAH. It might have been exceptional - yes, Pinatubo might be a candidate. Dunno.

2) Just because RSS looks more like red noise, we cannot be sure that the 2003+ data actually were red noise.
Being lazy, I did a short MCA (Monte Carlo analysis), creating artificial RSS-ish red noise of the proper length, and looked at how often seasonal amplitudes this large stand out … there is roughly a 1% chance for the seasonal amplitude to stand out. Some evidence suggests that climate change can cause changes in seasonality, and that climate change need not be a continuous process. With all that in mind I cannot rule out a faint possibility of UAH being right.

3) Finally, there’s too much ado about consensus and agreement nowadays. This strong argument of yours isn’t a product of arguing that RSS and UAH are so very much alike that the difference doesn’t matter at all - it’s a product of disagreement. When there’s disagreement, there’s need for strong evidence. When there’s strong evidence instead of superficial plausibility, consensus will emerge deliberately.

I say that because dissing UAH became sort of a green insider’s sport here in Germany - as long as UAH showed less warming trend, at least.
I admit I usually preferred them, but that was because AFAIK they have a more direct (however noisier) way to derive their temperature estimates. And I don’t care how many corrections they undergo as long as I can remain confident that every single one of them is for the better. I would hate Christy for being aware of that problem but not dealing with it, because I’d lose a different point of view.

4) Must rework the negative trend analysis I recently promised to deliver to Sean. I used equal weighting for GISS, CRU, UAH and RSS then. Will throw out UAH for the time being.

  • Raven // October 22, 2008 at 11:25 pm

GISS constantly alters its data after publication because of the algorithms used to fill in missing data. If someone makes a claim based on GISS, you have to find out exactly which version of GISS was used, and you will likely have to get the data from them, because GISS does not archive old data.

  • David B. Benson // October 23, 2008 at 12:26 am

Raven // October 22, 2008 at 11:25 pm — It is not data; GISS obtains the raw data from NOAA, AFAIK. It is old temperature products that you are claiming GISS does not archive.

  • Richard Steckis // October 23, 2008 at 3:01 am


    From the abstract of the Randall and Herman paper (University of Arizona not UAH):

    “Comparison of MSU data with the reduced Radiosonde Atmospheric Temperature Products for Assessing Climate radiosonde data set indicates that RSS’s method (use of climate model) of determining diurnal effects is likely overestimating the correction in the LT channel. Diurnal correction signatures still exist in the RSS LT time series and are likely affecting the long-term trend with a warm bias.”

    I think we can dismiss your attribution of UAH being the outlier with regard to the 1992 step function.

    With regard to the annual periodicity in the spectral analysis, your dataset is too small. I think you would be on safer ground if you looked at the daily data instead.

    To really solve this issue, I think it is important that you publish your findings so that your peers can determine the veracity of your research. This is too important for just a blog post. It affects one of the two major satellite data products.

    I found John Christy to be very obliging and responsive (even though he did not know me from a bar of soap). Therefore, I am sure he would be very keen to find out if his dataset has problems with error and bias. I do not believe he was being defensive at all and gave me the impression of being open to scrutiny.

    [Response: Your dismissal of annual periodicity is a prime example of your lack of objectivity: the numbers say it's there with undeniable statistical significance. You refuse to believe it because you don't want to. Take a look at the update to this post. Look hard at the final graph.]

  • Richard Steckis // October 23, 2008 at 5:19 am


I did not dismiss your annual periodicity claim (just your claim re: the 1992 step function). I merely pointed out that the sample size was not large for Fourier analysis. You have improved that marginally and I do accept that there is something in the data. However, I still believe you should analyze the daily data that are available at the UAH (and I presume the RSS) website.

    I do not refuse to believe your analysis I am merely encouraging you to pursue a more robust approach to your analysis. To put some meat on the bones, so to speak.

    This is important and should be pursued beyond just a blog post.

  • Ray Ladbury // October 23, 2008 at 2:37 pm

    Richard Steckis says: “This is important and should be pursued beyond just a blog post.”

    On this at least, we agree. I know from the experiences of colleagues that bridging across satellites is very difficult. No two satellites ever made were identical–it’s one reason the damned things are so expensive.

    Tamino, do you have time to pursue this through to publication? What might be very interesting would be a review article comparing RSS and UAH with this as a subsection? I have to admit that to date I’ve taken both datasets with a grain of salt, because the continuous record just isn’t long enough to have much confidence in either.

  • R. Randall // October 23, 2008 at 2:49 pm


Getting to the bottom of these discrepancies is extremely important if we are to truly understand the temperature trend profiles in the atmosphere, and thus climate change in the atmosphere. I think this is extremely important work; however, the process of creating the MSU time series is extremely complicated, and getting to the bottom of the signatures you found (also discussed by ATMOZ and Lucia on their respective blogs back in May) will not be as simple as one first thinks. But I hope it is pursued to peer-reviewed publication.

The step exists, as you stated, and is due to different group choices, as you quote from Mears. However, which one is “inferior” or closer to reality is, and has been, the big question since Mears’ group created their data series. Determining such a thing requires that the MSU databases be compared to independent datasets, and the only ones out there are radiosondes, which have their own problems. The discussion always goes around in circles whenever it is brought up: XXXX MSU is comparable to XXX radiosonde,…. Oh but wait, XXX radiosonde has this problem… so XXX MSU dataset is closer to reality… and on and on. So at this point, speaking of peer review, as far as the step is concerned, Christy’s conclusion that the RSS database is the one with the step in it has not yet been contested.

The annual cycle signature can be explained. The diurnal and hot target corrections that are applied to the raw MSU data have an annual cycle in them. Therefore, as you stated, the differences between the databases will have an annual cycle in them. As the process for determining the diurnal cycle correction is different for the two groups, there is a temporal signature caused by this difference. It might be premature, however, to claim that it is not possible for the amplitude to increase to what you are showing, thus concluding that UAH has the problem. It is possible that the actual diurnal/hot target correction required is at that magnitude. Resolution will only come from a detailed look at the magnitudes of the corrections and how they “interact” with each other.

I’ll provide a couple of comments that may help home in on the cause; procedures we used in our work. First, separating ocean from land will show a significant difference in the magnitude of the yearly cycle, land having the greater magnitude (due to the diurnal cycle), and comparisons may provide additional information to use. Additionally, keep in mind that the hot target corrections are determined from MSU Ch2 data only, then applied to LT channel data, while the diurnal corrections applied to MSU Ch2 and LT are determined separately. This causes a problem in isolating causes of discrepancies, as the hot target correction is determined after the diurnal corrections have been accomplished. In other words, the LT channel hot target corrections are influenced by the MSU Ch2 diurnal correction. Note that these comments may not represent the newest version of the RSS data, which was released in July.

    R Randall

    [Response: Thanks for your comments.

    To do this right, a great many things should be studied in detail, including separate analyses for land and ocean, for the two hemispheres, for latitude zones, etc. And it would probably be better to analyze daily data rather than monthly averages. That's one of the reasons I hesitate to take this on as a project for a peer-reviewed publication; it's a lot of work! But it does seem to be a worthwhile effort, so ... ?

    The annual cycle change is present in the UAH data but not in RSS, nor is it indicated by GISS or HadAT2. However, GISS and HadAT2 show a lot more high-frequency noise, making such a change more difficult to nail down, and the result is based only on global average land+ocean. It seems to me that the change in the annual cycle in UAH from the 1979-1999 reference period to the 2003 (or earlier)-present time period is implausibly large, but again, more data sets should be studied; does HadCRUT3v show a change of comparable magnitude, or NCDC? And of course, the surface temperature records also provide a breakdown by latitude zones and separate series for land and ocean. I'm a bit surprised that nobody's noticed the annual cycle in the difference RSS-UAH before now (or has it been noticed?), but it seems to be a phenomenon that could shed some real light on what's happening with the MSU TLT estimates.

    I've been comparing RSS and UAH to HadAT2, and those comparisons indicate that UAH is less divergent from HadAT2 at the 1992 step change. But the result doesn't seem to be conclusive (too much high-frequency noise in the differences RSS-HadAT2 and UAH-HadAT2), and I do find Mears' argument more persuasive. Nonetheless, I'm prepared to retract the opinion that RSS is more correct at the 1992 step change. It would probably be a very good idea to study limited geographic regions for which radiosondes have good coverage. But as you say, the radiosonde record is far from unimpeachable.

    So of course there's even more work to do! And I'm not getting paid for this. But then ... we're not doing it for the money, are we?]

  • mauri pelto // October 23, 2008 at 6:23 pm

    Tamino, keep it up. The key is the annual signal. Just focus on teasing out the cause of this and the relative magnitude observed. I agree that the magnitude of the UAH is too large. For peer reviewed work it is hard to focus on more than one issue at a time. As far as pay. I am paid to teach five classes a semester, nothing for research. I do manage to average 3 peer rev pubs per year, but have trouble presenting often at conferences. The point being don’t let money slow you down too much.

  • david douglass // October 24, 2008 at 3:38 am

    There is a new paper “Limits on CO2 Climate Forcing from Recent Temperature Data of Earth” by myself and John Christy where we compare tropical UAH and RSS to an El Nino index. The correlation is highest for UAH.
    Go to [Read the appendix also.]

    [Response: Published in Energy and Environment. You couldn't get it into a reputable journal?]

  • david douglass // October 24, 2008 at 6:33 am

    I was expecting a response to the paper — not to where it is published.

    Your readers may have a scientific response when they read it. It can be found at the following URL.

    [Response: And you'll get a response to the paper.]

  • Gavin's Pussycat // October 24, 2008 at 6:58 am

Tamino, I seem to remember that someone here posted on doing a PCA analysis on the various global temperature time series together. That could also be a way to tease out the “skip” around 1992. It is obviously not possible, by looking at the UAH and RSS data alone, to decide which one has the skip; probably fairest to say that there is a weak point there, and that there are different legitimate ways to handle it, producing somewhat different results. Your argument that RSS does it better sounds plausible, but is at this point a belief, and reviewers don’t care much for statements of belief :-)

    So you have to bring in external data. As Dr Randall states, radiosondes have plenty of problems of their own, and that holds for GISS, Hadcrut etc. as well. But perhaps by combining them in a PCA, something could be done. The first PC would undoubtedly represent global mean temperature; one of the PCs might be the arctic amplification signature; your annual signal in UAH would surely show up in a PC, and if you’re lucky, the step function. And then you can look at the corresponding eigenvector to see which time series it comes from.

    Would this be an idea?
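The PCA idea above can be sketched in a few lines. Everything here is illustrative: synthetic monthly series stand in for the real UAH/RSS/surface records, which would have to be downloaded and aligned onto a common time grid first.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 360  # 30 years of monthly data
t = np.arange(n)

# Shared "global mean temperature" signal plus series-specific artifacts.
common = 0.0015 * t                       # slow warming trend
step = np.where(t >= 156, 0.1, 0.0)       # 0.1 K step partway through one record
annual = 0.05 * np.sin(2 * np.pi * t / 12)

series = np.column_stack([
    common + 0.05 * rng.standard_normal(n),           # e.g. a surface record
    common + step + 0.05 * rng.standard_normal(n),    # record with a splice step
    common + annual + 0.05 * rng.standard_normal(n),  # record with an annual artifact
])

# PCA via SVD of the centered data matrix.
X = series - series.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U * s          # principal component time series (columns)
loadings = Vt        # row i: how strongly PC i projects onto each record

explained = s**2 / np.sum(s**2)
print("variance explained:", np.round(explained, 3))
print("PC1 loadings:", np.round(loadings[0], 2))
```

With real data, the hope is that one PC picks up the common warming signal while the step and the annual artifact land in later PCs, whose eigenvectors point at the offending record.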

  • michel // October 24, 2008 at 9:09 am

    Never mind where its published. Never mind their religion or political affiliation, what country they live in….etc etc.

    Is it right, that’s the only thing we want to know?

    [Response: I'll do a post on the paper after I've studied it in detail, and the publication location will not affect my opinion. None of which alters the fact that Energy and Environment is not a reputable journal.]

  • dhogaza // October 24, 2008 at 4:20 pm

    I was expecting a response to the paper — not to where it is published.

    It’s a reasonable question, though. Did you try submitting it to a reputable journal, or not?

    If not, why not?

  • Hank Roberts // October 24, 2008 at 5:22 pm

    Ask the question without the spin
    – was E’n'E your first choice for publication?
    – did you submit the paper elsewhere?

  • TCO // October 24, 2008 at 9:45 pm

    When I see stuff published in EnE, it makes me think that our side has weaker arguments. Also that they are lazy. I definitely notice people put less work into publications (in general) than what they deserve, considering that other people are going to try to build on them. Often even simple things, like clear writing, checking math derivations and notation, proper referencing and figure/axis labeling is done poorly. I think doing these things well makes it EASIER to engage on the real issues in debate. And even helps the proponent of a particular view/insight to think through his own side more thoroughly.

    An example of this poor practice is the Burger submission to CPD, where it had Godawful English and confused logical flow. In that case, he hadn’t even bothered expanding his discussion when he resubmitted a rejected GRL paper to a journal that had no space requirements. That’s laziness.

    Please, note, I’m not trying to make a point of pedantry or to be a grammar nit. I’m making the more classical Katzoff/Wilson point that having the best clarity makes it easier for everyone to engage and science to move forward.

  • TCO // October 24, 2008 at 9:47 pm

    And when people submit to EnE, they put less work even on very simple things than when they go to the real literature.

    I think they don’t do enough obvious simple work with the real literature also, of course. It makes me very angry when people don’t read the notice to authors and proof their paper to see if they followed those instructions.

  • Lee // October 25, 2008 at 1:06 am

    TCO: “When I see stuff published in EnE, it makes me think that our side has weaker arguments. Also that they are lazy.”

    Uhhh… TCO, I hate to break this to you, but…

  • michel // October 25, 2008 at 6:38 am

    Problem is with the whole style of argument, the substitution of irrelevant ancillary considerations for scientific argument on the issues. It’s a pattern here.

    So we hear that Lucia is ‘full of shit’ (a comment that, extraordinarily, is condoned and allowed to remain posted), that Douglass has published in a journal which is ‘not reputable’, and the argument then moves on to wishing to know the totally irrelevant matter of whether he offered it to other journals first - as if that could have any bearing on any argument in it. Spencer’s religious beliefs are cited as evidence against his climatology views, though they are totally irrelevant. People are falsely accused of being funded by oil interests, or correctly of having worked in industries the posters do not approve of, or (mostly on no evidence) of supporting the wrong political party… and so on.

    This blog (and CA too for that matter, though CA is considerably better policed) would be a far nicer place, and one to be taken more seriously, if everyone, and one means everyone, would just get the personalities out of their comments.

    You do not realize it, but your target as you indulge in this stuff is your own foot. This stuff is what takes you down to the level of such blogs as those you most disapprove of: Icecap and Watts. Yes, this will horrify you, but it’s true. Those of your readers who are from off the island react to this stuff as you do to that, and for the same reason. Think about it.

    [Response: Problem with you is, you don't know what's relevant and what isn't.

    Example: the disreputable nature of the "journal" Energy and Environment isn't irrelevant. It's a fact. It's not just grossly incompetent; they have a political agenda too -- you can get any global-warming crap published there, but only if it casts doubt on the reality or severity of the problem. The editor, Sonja Boehmer-Christensen, admitted to science journalist Richard Monastersky in the Chronicle of Higher Education that "I'm following my political agenda — a bit, anyway. But isn't that the right of the editor?" Not if you're editor of a scientific publication.

    So why are people asking where else the Douglass & Christy paper was submitted? Because they suspect that it's such an extreme combination of obfuscation and bungling that it would be laughed out of a reputable journal. Well I've read that paper now, and they're right: that's exactly what it is. I can't begin to comprehend what rationalization a scientist would have for attaching his name to such garbage; it's an embarrassment.

    It's also a wild success: as propaganda. I started writing a post about it, but then I realized what a total waste of time it is to give this work the time of day. Either way the forces of denialism carry forward their agenda -- either it goes unrefuted or I waste my time on shit. Even if I do refute it, there's at least one U.S. senator who'll take it as gospel. It's a pattern.

    But you want a polite discussion. When enough shit is shoved in your face, the time for polite discussion is over. I didn't go looking for this paper, Douglass himself came here to refer to it. So I call a spade a spade. You're perfectly welcome to get your climate science from the Jerry Springer show -- you'd probably get better information than you would from E'n'E. As for "CA is considerably better policed," that's a joke, but it's not funny.]

  • TCO // October 25, 2008 at 2:06 pm

    1. There are a LOT of real academic journals out there. By which I mean archived, abstracted by the major services. If you have an issue with a particular journal or group of journals -- a rivalry with an editor, etc. -- you can invariably try another society, another field (most articles are interdisciplinary), non-US journals, etc. I have seen a very top-notch group that had issues with Bell Labs take this approach on hot physics discoveries. You CAN get published.

    2. Most papers are poorly written. If you are brutally careful in terms of your writing, your argumentation, following notices to authors, etc., reporting facts and clearly labeling interpretations separate from facts, you will STAND OUT above most papers. Even papers by NAME researchers. You can get stuff that makes rivals very unhappy published by this approach. I (in a very minor way) took on Bell Labs via this method.

    3. When I see papers go to EnE, it is a sign of sloppiness and of in-group chat, rather than science. When I see stuff posted ONLY as blog posts (and meandering ones) and not EVEN white papers, then I know that the authors (McIntyre, Lucia) don’t really rate the skull sweat to consider their work. Nothing nasty about it…but it’s just a rational approach. It’s like a reviewer getting a paper that was written in very bad English (by a foreign speaker) saying “rewrite the paper in clear English, use a translator if needed, BEFORE I even try to come to grips with the science.”

  • TrueSceptic // October 26, 2008 at 1:34 am

    Nexus6 summed up E&E perfectly here.

    But more seriously, the founder/editor of E&E is on record as saying that the journal was set up specifically to provide a vehicle for papers that can’t get published in mainstream journals.

    Need I mention how E&E is funded?

  • Ray Ladbury // October 26, 2008 at 1:51 am

    Michel, if you are shocked or offended by someone claiming another researcher is full of beer and beans or other organic matter, then you haven’t been around scientists much. Do you really think dialog is that much more elevated at conferences in science/engineering departments…? Scientists are passionate people who take their subject matter seriously. If you are the sort who is easily offended, you really might be happier in another line of work. And the thing that will make scientists angrier than a hornet dipped in tabasco is seeing science perverted. Energy and Environment is just such a perversion. It’s like a creationist rag pretending to be a peer-reviewed journal.

  • Gavin's Pussycat // October 26, 2008 at 8:57 am

    TCO: hear, hear.

    Another relevant trick is: make your paper good looking (and remember to spell correctly the names of authors that might end up as reviewers ;-) ).

    Executive summary: use LaTeX / BibTeX. If the journal accepts it, of course. But then, it’s for you to pick the journal. Those eqs. in the Douglass paper look like something the cat dragged in. But then I remember myself how impossible it is to make eqs. look good in Word.

    About reviewers BTW, they appreciate it if the data is made available easily from a single source. I had no trouble getting Rahmstorf’s (2007) data, a single zip file, and could reproduce his graphs and start playing with it right away. The Douglass et al. data is undoubtedly widely available from the Intertubes, I just don’t want to go hunt for it. Weblinks are your friend (if you don’t want to build a supp data archive). So are TinyURLs.

    Scientifically valid, even seminal, papers may ignore these points; but by reverse inference, papers that ignore them are not very likely to be so ;-)

  • Manfred // October 26, 2008 at 10:04 am

    One question about the step of almost 0.1°C in 1992:

    The consequence of the step change in RSS−UAH should be that, before 1992, temperatures were measured about 0.1° too high or too low by one of the satellite systems (assuming the side of the step function after 1992 is “correct”). So was

    RSS too high before ?
    RSS too low before ?
    UAH too high before ?
    UAH too low before ?
    or some mixture ?

  • Barton Paul Levenson // October 26, 2008 at 11:15 am

    michel writes:

    the argument then moves on to wishing to know the totally irrelevant matter of did he offer it to other journals first - as if that could have any bearing on any argument in it.

    You miss the point. If he didn’t offer it to real journals first, it’s a sign that he knew damn well that they would reject it — that it wouldn’t pass peer review — that it wasn’t good enough to be published in a real journal.

  • P. Lewis // October 26, 2008 at 1:24 pm

    Gavin’s Pussycat says:

    But then I remember myself how impossible it is to make eqs. look good in Word.

    Where’ve you been?

    Check out MathType (Design Science) for equations in Word etc. and for the Web (MathML). You can convert MathType equations to LaTeX, and you can now even input in TeX if you want (I think).

    Design Science licensed the original and still extant Equation Editor to MS in 1991, but Design Science continued to develop it as MathType.

    Try the 30-day trial.

    PS. I have no connection with Design Science, but have been happily using MathType for about the last 10 years in my day job.

  • TCO // October 26, 2008 at 3:45 pm

    Manfred, nobody knows. The variability of temperature, even over relatively long periods like a full year, is high. Therefore, it is tough to decide based on just comparing the two and seeing one have a step and the other not. I guess you could see which is more consistent with ground sources or balloons. But even here, it’s not trivial, as I bet the correlation to those sources is not tight enough to resolve a one-time 0.1 step.

  • Robert Grumbine // October 26, 2008 at 8:37 pm

    Tamino: The annual cycle is something to pursue seriously. It screams to me that Richard Swanson already published the answer in 2003 (Geophysical Research Letters). Namely, UAH sees the surface in polar regions, and there are trends in the polar surface (esp. sea ice). That includes annual cycle magnitude shifts. A quick glance at Cryosphere Today shows 2003 as the last time northern hemisphere sea ice had greater than climatological extent. Hmm. RSS doesn’t go as far to the poles, so is unaffected by what is going on with the sea ice.

    Pursuit to a journal paper is, of course, a haul. But I could help you divide the work by attacking the sea ice side, and maybe we can get Swanson to join in. email me, or post contact info over at my blog, or …

    The splice problem … not one I can do much for/with. It is one, however, that I do see as a serious problem for any claims of the satellite record being ‘by far the best’.

  • michel // October 26, 2008 at 9:30 pm

    BPL, you can speculate till you’re blue in the face over whether he offered it elsewhere first, and if not, what his motives were.

    None of it has any bearing on the merits of any argument in the paper.

    It might, could we find it out, be a clue to the general merit of the paper, or maybe to his views of the merits of EnE, or maybe to his sophistication as a person. However, considered as a clue to the paper’s merits, it suffers from the difficulty that it is harder to establish the truth about whom he offered it to, and when, than to establish the merits of the arguments in it.

    But we see you are not only determined not to consider its arguments, but also determined to refuse to admit that considering its arguments is a reasonable thing to do.

    And you go around calling other people ‘denialists’!

  • Ian Forrester // October 27, 2008 at 12:45 am

    David Douglass, where do you get the data for your HadCrut3 graph from? When I plot the data from

    it looks nothing like your graph.

  • Richard // October 27, 2008 at 3:21 am

    Does publishing in a highly reputable journal mean that your science is better? Maybe not.


    Energy and Environment may be just one of the new breed snapping at the heels of the established dinosaurs.

  • dhogaza // October 27, 2008 at 4:08 am

    None of it has any bearing on the merits of any argument in the paper.

    It might, could we find it out, be a clue to the general merit of the paper

    Contradict, much?

    Energy and Environment may be just one of the new breed snapping at the heels of the established dinosaurs.

    God, I hope not. Well, if you think that the equivalent of “Astrology today” or “Homeopath and Chemistry” will snap at the heels of the established dinosaurs that, among other things, have led to your being able to type a message on the internet … Palin help you.

  • david douglass // October 27, 2008 at 4:23 am

    At last, proper scientific questions on the content of the paper.

    To: Ian Forrester
    In our paper, go to section 2.2, Methods and definitions. There we state: “we have applied a 12-point trailing average “box” digital filter, F, to all time-series.” We used the same data that you quote. Apply this filter and you will get our plot. [Hadley changes the values a little every few months, so there may be very small differences].

    To: Manfred
    There are 8 possibilities. The 4 you enumerate and 4 more with the word “after” replacing the word “before”. In our well written, easy to read, paper with the MathType prepared equations you will find in the appendix that our conclusion is: “RSS too high after” and the jump is 0.136 K.

    I am willing to answer additional scientific questions from either of you via email. I give Tamino permission to give it to each.

    [Response: Dr. Douglass' email address can be found by searching for him on the website of the Univ. of Rochester.]
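For readers wondering what the 12-point trailing “box” filter described above amounts to: it is just an unweighted mean of the current month and the 11 preceding months, time-stamped at the final point. A minimal sketch (variable names illustrative, not from the paper):

```python
import numpy as np

def trailing_box_filter(x, width=12):
    """Unweighted trailing moving average: each output point is the mean
    of the current value and the (width - 1) preceding values; points
    without a full window are dropped."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="valid")

monthly = np.arange(24, dtype=float)     # toy monthly series: 0, 1, ..., 23
smoothed = trailing_box_filter(monthly)  # 24 - 12 + 1 = 13 points
print(smoothed[0])                       # mean of months 0..11 = 5.5
```

Because each average is stamped at the end of its window rather than its center, features in the smoothed series appear roughly five and a half months later than in the raw data, which is the source of the “six months” dispute in this thread.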

  • deepclimate // October 27, 2008 at 5:10 am

    Re: Robert Grumbine // October 26, 2008 at 8:37 pm

    “Namely, UAH sees the surface in polar regions, and there are trends in the polar surface (esp. sea ice). That includes annual cycle magnitude shifts.”

    With all due respect, I don’t see that this could account for such a huge difference in winter vs summer trends in UAH. I’ve confirmed large differences between winter and summer month linear trends (over the full 1979-2008 period) in UAH TLT and TMT global monthly anomaly data. These differences can even be seen in the UAH TLT tropical data. I hope to be posting on this topic within a week.

  • cce // October 27, 2008 at 6:01 am

    Or perhaps Energy and Environment is a “forum for laundering pseudo science” which has published work far more incompetent than anything the so-called “skeptics” have ever pointed their guns at. Thus we have stuff like Ernst-Georg Beck’s CO2 graph making the rounds.

    On the other hand, Roy Spencer doesn’t believe the CO2 rise is anthropogenic either, so maybe E&E is credible after all and it’s just the people with functioning brains who are the stupid ones.

  • Barton Paul Levenson // October 27, 2008 at 10:55 am

    michel writes:

    BPL, you can speculate till blue in face over whether he offered it elsewhere first, and if not, what his motives were.

    I wasn’t about to speculate. He was asked the question directly. He failed to reply. You’re the only one who doesn’t find that evasive and suspicious.

  • Ray Ladbury // October 27, 2008 at 1:03 pm

    Richard, The Economist piece is actually quite disappointing–they are usually more astute. The problem is that Ioannidis et al. understand neither science nor scientific publication. First, journals like Science and Nature publish on a very broad range of scientific topics and for a fairly general scientific audience. The result is that 1) they tend to publish more research that is of general interest–particularly research that seems to be “breakthrough”–and 2) some “sciences” on which they publish, such as medicine, barely merit the name science.
    I rather doubt that you would reach similar conclusions looking at Physical Review Letters or similar publications in other fields. So basically, Ioannidis’s entire premise is based on a fundamental misunderstanding of his subject.
    It would have fit right into the pages of Energy and Environment–a hotbed of alternative science (meaning alternatives TO science).

  • Ray Ladbury // October 27, 2008 at 1:13 pm

    Dave A., Presuming you are actually not joking, your very ability to ask the question is proof of the greenhouse effect, as otherwise life would not exist on Earth.
    As to your assertions about models–they simply are not true. The models do a remarkable job of capturing the physics–from seasonal effects to volcanic eruptions and on and on. This is particularly true given the granularity of the models. I would be quite curious to know what physics you think they are missing?

  • cce // October 27, 2008 at 2:05 pm

    Isn’t the difference in coverage between UAH and RSS primarily in the southern hemisphere? They both go to ~82.5 degrees in the northern hemisphere.

  • Richard Steckis // October 27, 2008 at 2:39 pm

    Ray Ladbury:

    “The problem is that Ioannidis et al. understand neither science nor scientific publication.”

    You obviously haven’t researched this guy much. He is widely published in the literature. He is a medical scientist and therefore surely does know a bit about science. Also, his co-author of the PLOS Medicine article is also well published in a wide array of highly regarded scientific literature. Just google scholar their names.

    Therefore your presumptions are false as usual. I find it remarkable that you regard medical science as “barely meriting the name science”. I hope you are not a doctor of medicine.

  • Richard Steckis // October 27, 2008 at 2:51 pm

    Oh. By the way Ray. Nature and Science are the two top-rated scientific journals. They are not general science journals for a general science audience. They are for ostensibly ground-breaking research that is original and contributes to the advancement of the individual area of study.

    I don’t think a paper titled “Crystal structure of a stable dimer reveals the molecular basis of serpin polymerization” (Nature, doi:10.1038/nature07386) rates as a general interest science article.

    If you want a general science mag then try New Scientist.

  • Ian Forrester // October 27, 2008 at 3:06 pm

    David Douglass said: “we have applied a 12-point trailing average “box” digital filter, F, to all time-series.” We used the same data that you quote. Apply this filter and you will get our plot”.

    What you have done is distort the data, not move it by “six months” as you claim. Your “manipulated” data show temperatures dropping for the past 12 months. This is patently false, which is why I asked you about your data.

    Every time this graph is reproduced in the denier blogosphere it shows that temperatures have dropped (quite rapidly, according to your manipulated data) over the past 12 months. Do you not feel guilty that your data is leading to this false assumption?

    I would have thought that, as a scientist, you would be concerned about how people are interpreting your data.

    [Response: Taking a 12-pt moving average is a valid procedure, although using the time of the final point as the "time" for the average is unusual (though not in economics). However, what Douglass & Christy either choose to ignore, or are ignorant of, is that filtering all the data sets in this way introduces artificial autocorrelation in the time series. That's OK if it's compensated for, but in this paper it isn't. And the filter is unnecessary; the reason given for using it is bull.

    In fact the series already show strong autocorrelation, so that would need to be taken into account even if the moving-average filter were not applied. Compounding that with a filter which introduces still more autocorrelation makes the problem worse: the uncertainties stated for regression coefficients are far, far too low, and the extremely high correlations (which the authors seem to be very impressed with) are too high. But the issue isn't mentioned. This is either mendacity, or astounding ignorance.

    But frankly, the statistical issues with the paper are much less important than the ludicrous physical assumptions made.]
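The autocorrelation point in the response above is easy to demonstrate numerically: a 12-point moving average applied to white noise (lag-1 autocorrelation essentially zero) yields a series whose lag-1 autocorrelation is near 11/12 ≈ 0.92. A minimal sketch, not the paper's actual analysis:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(1)
noise = rng.standard_normal(5000)          # white noise: lag-1 autocorr ~ 0
smoothed = np.convolve(noise, np.ones(12) / 12, mode="valid")

print(round(lag1_autocorr(noise), 2))      # near 0
print(round(lag1_autocorr(smoothed), 2))   # near 11/12 ~ 0.92
```

Standard regression error formulas assume independent residuals; applied to such a filtered series they badly understate the true uncertainty unless the effective sample size is reduced to compensate.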

  • Ray Ladbury // October 27, 2008 at 4:52 pm

    Richard Steckis, Believe it or not, I’ve heard of Science and Nature. I was using the term “general science” in the sense of not specialized to a specific field–i.e. physics, chemistry, etc. Moreover, I draw a very important distinction between “medical science,” which is largely empirical and epidemiological, and sciences like physics, climate science, etc., where there is a good theoretical framework to guide research.
    I also stand by my contention that Ioannidis’s contentions are based on a misapprehension of science and science publishing. In a journal like Nature, where a broad range of subjects are published, you have varying norms of how certain the research must be to get over the threshold. Research that contradicts “conventional wisdom” tends to be interesting to the community, so if it is not obviously incorrect, it tends to get through peer review. That it is subsequently found to be incorrect is also not surprising. However, I contend that you see this mainly in fields like medical science. If the authors confined themselves to fields like physics, I rather doubt they’d reach the same conclusions even if they looked only at Science and Nature.

  • Phil. // October 27, 2008 at 5:51 pm

    cce // October 27, 2008 at 2:05 pm

    Isn’t the difference in coverage between UAH and RSS primarily in the southern hemisphere? They both go to ~82.5 degrees in the northern hemisphere.

    RSS also excludes high land above 3000m.

  • Horatio Algeranon // October 27, 2008 at 7:29 pm

    Tamino says:

    “the extremely high correlations (which the authors seem to be very impressed with) are too high. But the issue isn’t mentioned. This is either mendacity, or astounding ignorance.”

    …or perhaps simply Lookin’ for correlation in all the wrong places

    (The gift that keeps on giving. Only the acronyms have been changed to protect the innocent.)

  • Eli Rabett // October 27, 2008 at 8:57 pm

    P Lewis and Gavin’s PCat, fwiw Eli was sitting at a conference today with someone who also strongly recommends MathType

  • HankRoberts // October 27, 2008 at 10:31 pm

    > In fact the series already show strong
    > autocorrelation, so that would need to be
    > taken into account even if the moving-average
    > filter were not applied.

    One is led to wonder about the judgment of the anonymous peer, or peers, whoever it was that reviewed this publication.

    I realize the peer reviewers are anonymous, but I wonder if their actual comments can be made available to see what they did and did not find?

  • Telling // October 27, 2008 at 11:43 pm

    Ray Ladbury,

    You clearly don’t have a clue about how medical research works if you differentiate other sciences by there being “a good theoretical framework to guide research.”

    Do you really think that clinical trials are performed in a vacuum without a theoretical framework? That we create a random compound (or thousands of random compounds with high-throughput screening) and start testing without a basis in theory? If so, you are demonstrating significant ignorance. No compound leaves the discovery phase without a full theory of mechanism of action based on fundamental biological principles. In climate speak, we create a model of how a drug will interact in the body, partially based on extensive computer modeling (we have access to computing power and 3d holographic rooms that you physics jockeys can only dream of). Then we design clinical trials to test our theory. To lump this all as “largely empirical and epidemiological” is, well, silly.

    The biologics boys are equally, though somewhat differently, grounded in basic science. Though you probably would dispute that the sequencing of the human genome creates a “good theoretical framework” and you would claim that it is “barely science”. I guess they discovered Gleevec, Epogen, Avastin by pure luck.

    Next time, see if you can make your point without gratuitously insulting other scientists.

  • Ray Ladbury // October 28, 2008 at 12:31 am

    Telling, I draw a distinction between biologically motivated science–where indeed there is a framework–and epidemiological studies like those that tell us we’ll get cancer if we use our cell phones. It is the latter that gives rise to the “headline-grabbing studies that turn out to be wrong”. It is Ioannidis who fails to make the distinction, not me. However, I acknowledge I could have been clearer in my communication. Sorry for the confusion.

  • TCO // October 28, 2008 at 9:00 am


    What’s important is to have a questioning attitude and bring the tools of different methodologies to various fields. There is good science in market research and bad work in nanotechnology (and the converse…blabla).

    It’s like music. Or food. One needs to learn to pick the good things from various cultures. To be eclectic. To discern patterns where some fields are weak or strong. So, what I like about your commentary is the recognition that there are differences in the strengths of practices/practitioners in different fields. We should not assume that all tribes are equal.

    What I don’t like is the failure to analyze the tribes on various parameters. Certainly epidemiology and medical science have some great practices in data provenance, in statistics, in blind trials, in consultation with statisticians, etc. that other fields could do well to emulate. While not emulating the bad parts of epidemiology (and there are more flaws, which you didn’t touch on).

    One of the things you should learn from your Ph.D. (or your B.S. or even high school chem class) is to solve problems in various fields. Is to take tools from one area and move them to another. And there is a huge amount of that that goes on nowadays. Look at the social network guys at Crooked Timber and their more facile understanding of Eurovision voting patterns as opposed to neophyte physicists.

    Physicists are bright guys. And they are good at tough math. But they should not think that they are the only good thinkers.

  • Ray Ladbury // October 28, 2008 at 1:45 pm

    I think that both you and Telling missed the point I was trying to make–namely that we have to draw a distinction between science guided by a mature theory and science in its exploratory phase. It is the latter where you tend to get “revolutionary” results that attract a lot of attention but are ultimately found to be incorrect. The studies attributing cancer to cell phone use and power lines are classic examples of how not to do exploratory studies. They don’t control for complicating and correlated factors. They ignore the laws of physics. I have also seen studies done in ecology that were models of how to conduct a study when precise replication and exacting control are not possible. My objection is that Ioannidis and his collaborators completely ignore these distinctions.
    Since you are much more likely to see exploratory studies in a magazine like Science or Nature, which are not specialized, than in a journal like Physical Review Letters, which is, Ioannidis strongly overstates his conclusions. I referred to physics because it is what I know, not because I feel it is in any way special or immune from error.

  • Gavin's Pussycat // October 29, 2008 at 1:52 pm

    P. Lewis, Eli: I’m sure MathType is a good solution if you’re stuck on Windows/Word… vanilla LaTeX does that on any OS, and with LyX, even graphically. It’s fun to use and looks great without tinkering. And yes, you can copy the equations over the clipboard.
    …and yes, it is a while since I seriously used Equation Editor ;-)

  • steven mosher // October 30, 2008 at 2:40 am


    here is a simple question: As you note, “The models do a remarkable job of capturing the physics–from seasonal effects to volcanic eruptions and on and on. This is particularly true given the granularity of the models. I would be quite curious to know what physics you think they are missing?”

    So here is the question. If, as you note, “the models” do a “remarkable” job of capturing the physics of volcanoes, and if you were trying to see how well “the models” matched the observational record for, say, GMST, or the temperature of the troposphere, would you use models that did not have the physics of volcanic eruptions, models with physics known to be incomplete? Would you?

  • Ray Ladbury // October 30, 2008 at 12:17 pm

    Steven Mosher, Are we modeling temperatures at a time of high volcanic activity or low activity? Volcanoes are Poisson processes. They have a very large effect in the months just after an eruption, but a much smaller effect averaged over time. The same model that models the initial effect well may not necessarily get the average effect right, and vice versa, since one looks at the effect of a high aerosol concentration and the other depends much more on how that concentration evolves over time. Different physics.
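Ray's point can be illustrated with a toy simulation: eruptions arriving at random (Poisson-like) with an aerosol cooling that decays over a few years produce occasional large monthly excursions but a much smaller long-run average. All numbers below are illustrative, not tuned to any real eruption record:

```python
import numpy as np

rng = np.random.default_rng(3)
months = 1200                            # 100 years, monthly steps
rate = 1.0 / 120.0                       # ~one large eruption per decade, on average
decay = np.exp(-np.arange(48) / 12.0)    # cooling e-folds over ~1 year, tracked 4 years

eruptions = rng.random(months) < rate    # Poisson-like arrivals (Bernoulli per month)
cooling = np.convolve(eruptions * 0.5, decay)[:months]  # peak ~0.5 K per event

print("peak monthly cooling:", round(cooling.max(), 2))
print("long-run mean cooling:", round(cooling.mean(), 3))
```

A model can fit the spike just after an eruption well yet still misjudge the small time-averaged effect, and vice versa, since the two depend on different aspects of how the aerosol concentration evolves.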

  • TCO // November 15, 2008 at 8:06 pm

    atmoz sez he already noticed this.

  • dko // November 19, 2008 at 6:59 pm

    Yeah, I know, this thread is almost dead.

    But I wanted to draw attention to what may be a new step function. New to me, anyhow.

    Check out the differences between Nov 08’s posted RSS data:

    and Oct 08’s:

    Subtract the two and plot. Run a linear fit from Jan79 to Dec86…and another from Jan87 to Sep08.

    The older data have been “warmed” a little and the more recent “cooled” a little. The difference at the step is about 0.05C. Is this an attempt to bring RSS a little closer to UAH?
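    [The procedure dko describes — subtract two posted versions of the series, then fit separate linear trends before and after a suspected break — can be sketched as follows. The data here are fabricated stand-ins (a shared anomaly signal plus an assumed 0.05 C step); the real exercise would read in the actual Oct 08 and Nov 08 RSS files.]

    ```python
    import numpy as np

    months = 1979 + np.arange(357) / 12      # Jan 1979 .. Sep 2008, monthly

    # Fabricated stand-ins for the two posted versions: a shared
    # anomaly signal (cancels in the difference) plus an assumed step.
    rng = np.random.default_rng(1)
    base = rng.normal(0.0, 0.2, months.size)
    old = base                                        # "Oct 08" version
    new = base + np.where(months < 1987, 0.025, -0.025)  # "Nov 08" version

    diff = new - old                 # version-to-version difference
    early = months < 1987            # Jan 1979 - Dec 1986
    late = ~early                    # Jan 1987 - Sep 2008

    # Linear fit to each segment of the difference series
    slope_e, icept_e = np.polyfit(months[early], diff[early], 1)
    slope_l, icept_l = np.polyfit(months[late], diff[late], 1)

    # Size of the step where the two fitted lines meet
    step = (slope_e * 1987 + icept_e) - (slope_l * 1987 + icept_l)
    print(f"step at Jan 1987: {step:+.3f} C")   # ~ +0.050 C here
    ```

    With real data the difference series would be noisy rather than an exact step, but the same two-segment fit would still estimate its size.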

  • David B. Benson // November 20, 2008 at 12:20 am

    dko // November 19, 2008 at 6:59 pm wrote “Is this an attempt to bring RSS a little closer to UAH?” I rather seriously doubt that is the reason for the modification.

  • cce // November 20, 2008 at 12:58 am

  • dko // November 20, 2008 at 1:11 am

    GISS is forever changing data, especially from the last five years. For RSS to go back that far and modify tells me something is afoot.

    Maybe the intent isn’t to align more closely with UAH, but this does take them in that direction.

    [Response: GISS is forever *adding* data, since more data becomes available even long after estimates are released.]

  • cce // January 16, 2009 at 11:29 pm

    It might be worth redoing this analysis using RSS 3.2. The changes might bring it more into line with HadAT2 around the “step change.” Also interesting would be to compare them by latitude bands, using radiosondes and the surface data (land, ships/buoys, and the Reynolds satellite SST) to help detect artifacts amongst all of the methods.

  • apolytongp // February 2, 2009 at 1:59 am

    Whatever happened with the annual frequency? Deep Climate was going to report in a week in October.

    Wouldn’t a paper even just noting the difference be helpful? Even if you don’t know the cause, even if you can’t prove the annual issue is a fault? I would leave the step function out, as it’s well known.

  • apolytongp // February 2, 2009 at 3:24 am

    Just looked at the Mears paper. Very clear and well written. Slight nit: figs 8b, 9 and 10c give the same info.

Leave a Comment