A look at temperature anomalies for all 4 global metrics: Part 2

6 03 2008

Before I left on my trip to New York, I published part 1 of this series looking at the temperature anomalies among the 4 global temperature metrics from 1979 through January 2008. The first post I made on the subject used the unadjusted global temperature anomaly data to do the comparisons. I also wanted to do the same comparisons using anomaly data adjusted to a common reference baseline, but unfortunately I ran out of time to complete all of the histograms for the next data set before I left on the trip.

In the meantime, while I was traveling, the first post, missing the all-important part 2, generated some controversy, and some accusations that I was misrepresenting the data by not showing it adjusted to a common baseline.

It was a mistake on my part not to have them both available at the same time, and for that I apologize to anyone who was misled by the lack of part 2. Atmoz did a quick study of the issue also and illustrated what I wanted to do for part 2 with a simple graph, and while it would have been easy to simply use his, I wanted to complete what I started using the same presentation style. Recognizing that having part 1 only was misleading to some, I put part 1 back on the shelf until I could return from my trip and finish part 2, so that I could show what happens when all four metrics are adjusted to the same base period.

That is now complete: the part 1 article has been restored, and below is the new adjusted information as it compares to part 1.

Here is the first graph, the unadjusted raw anomaly data as it was published in February by the four metrics from UAH, RSS, GISS and HadCRUT. Note that while there is pattern agreement among the 4 metrics, there is an amplitude difference.

giss-had-uah-rss_global_anomaly_1979-2008-520.png

Here is the source data file for this plot and subsequent unadjusted plots.
4metrics_temp_anomalies.txt

Here is the same data, but adjusted to a reference period of 1979-1990:
giss-had-uah-rss_global_anomaly_refto_1979-1990
Click for a larger image

Here is the data used: 4metrics_temp_anomalies_refto1979-1990.txt

Now we can see that the agreement of the 4 metrics is better using the data adjusted to a common baseline period.
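The exact procedure used for the adjustment will be covered in part 3. In the meantime, for readers who want to experiment on their own, here is a minimal sketch of the most common approach: shift each series so that its own average over the chosen reference window is zero. The tab-delimited layout and column names below are illustrative assumptions, not necessarily the exact format of the data files linked above.

```python
# Sketch: shifting anomaly series to a common 1979-1990 reference period.
# Assumed file layout (illustrative): tab-delimited columns
#   year  month  uah  rss  giss  hadcrut
import csv

REF_START, REF_END = 1979, 1990

def load_series(path):
    """Read the combined anomaly file into {metric: [(year, month, anomaly), ...]}."""
    series = {"uah": [], "rss": [], "giss": [], "hadcrut": []}
    with open(path) as f:
        for row in csv.DictReader(f, delimiter="\t"):
            year, month = int(row["year"]), int(row["month"])
            for name in series:
                series[name].append((year, month, float(row[name])))
    return series

def rebaseline(values, start=REF_START, end=REF_END):
    """Shift a series so its mean anomaly over the reference period is zero."""
    ref = [a for (y, m, a) in values if start <= y <= end]
    offset = sum(ref) / len(ref)
    return [(y, m, a - offset) for (y, m, a) in values]

# Usage (hypothetical file name, per the link above):
# adjusted = {k: rebaseline(v) for k, v in load_series("4metrics_temp_anomalies.txt").items()}
```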

The differences between these metrics stem of course from the source data, but more importantly, two are satellite measurements (UAH, RSS) and two are land-ocean surface temperature measurements (GISS, HadCRUT).

One of the first comments from my post on the 4 global temperature metrics came from Jeff in Seattle who said:

Seems like GISS is the odd man out and should be discarded as an “adjustment”.

That is no longer the case once the adjusted data is presented. The trend and amplitude agreement is very good across all four metrics.

In part 1 of this series I mentioned that I had never seen a histogram comparison done on all four data-sets simultaneously. The first set of histograms showed a wide disagreement, particularly in the land-ocean metrics from HadCRUT and GISS.

Below I have plotted the original histograms from part 1 alongside the new adjusted ones:

First we have the satellite data-set from UAH.
UAH UNADJUSTED DATA:

uah_histogram-520.png
University of Alabama, Huntsville (UAH) Microwave Sounder Data 1979-2008 - click for larger image

The UAH data above looks well distributed between cool and warm anomalies, with a modest warm bias at 63%.
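(For anyone checking the warm/cool percentages quoted with these histograms against the data files, here is a minimal sketch, assuming the figure is simply the share of monthly anomalies that fall above zero.)

```python
# Warm-bias percentage: share of monthly anomalies above zero (my assumption
# about how the histogram percentages are tallied).
def warm_bias_percent(anomalies):
    warm = sum(1 for a in anomalies if a > 0)
    return 100.0 * warm / len(anomalies)
```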

UAH ADJUSTED DATA - Baseline 1979-1990:

uah_histogram_refto_1979-1990-520.png

University of Alabama, Huntsville (UAH) Microwave Sounder Data 1979-2008 ADJUSTED - click for larger image

Next we have the satellite data-set from RSS.
RSS UNADJUSTED DATA:

rss_histogram-520.png
Remote Sensing Systems (RSS) Microwave Sounder Data 1979-2008 - click for larger image

Again we have a modest warm bias at 63%. And now the adjusted data.

RSS ADJUSTED DATA - Baseline 1979-1990:


rss_histogram_refto_1979-1990-520.png

Remote Sensing Systems (RSS) Microwave Sounder Data 1979-2008 ADJUSTED - click for larger image

Note that we have now shifted to a slight cool bias in the histogram, at 51.6%.

Here we have the land-ocean surface data-set from HadCRUT.

HadCRUT UNADJUSTED DATA:

hadcrut_histogram-520.png
Hadley Climate Research Unit Temperature data 1979-2008 - click for larger image

Here, we see a much more lopsided distribution in the histogram, with what appears to be a strong warm bias of 89%. But when the 1979-1990 baseline-adjusted data is plotted, that apparent warm bias reverses and becomes a slight cool bias, as seen below.

HadCRUT ADJUSTED DATA - Baseline 1979-1990:

hadcrut_histogram_refto_1979-1990-520.png

It is interesting how the simple application of a common baseline to the data modifies the distribution on the histogram.

Finally we have the GISS land-ocean surface data-set.

GISS UNADJUSTED DATA:

giss_histogram-520.png
NASA Goddard Institute for Space Studies data 1979-2008 - click for larger image

GISS ADJUSTED DATA - Baseline 1979-1990:

giss_histogram_refto79-90-520.png

NASA Goddard Institute for Space Studies data 1979-2008 ADJUSTED - click for larger image

In part 1 I stated: “I was surprised to learn that only 5% of the GISS data-set was on the cool side of zero, while a whopping 95% was on the warm side.” But just as seen above with the RSS and HadCRUT data, when the 1979-1990 baseline-adjusted data is plotted, that apparently large warm bias reverses and becomes a slight cool bias, as seen above.

So from the presentation of the time series and these new histograms using data adjusted to a common baseline I conclude three things:

1. It is important to present data with a common baseline of reference when doing comparisons between data sets of different origins.

2. Data that has been adjusted in this way may vary significantly from the raw data.

3. Graphical presentations of data cannot always be taken at face value. One must look deeper into the data's provenance to fully understand its basis, for what appears as agreement or disagreement in comparatively plotted data may have an explanation that lies in how it was prepared for that presentation.

Some folks who viewed part 1 without the benefit of part 2 were quick to criticize it and say that it didn't represent the whole story of the data accurately. I would agree with that, which is why I removed part 1 temporarily until part 2 was complete. Some of the same people had sharp words for me personally, suggesting the part 1 presentation was “stupid”, or worse.

I'll be the first to admit that I'm not a skilled statistician on par with people like Steve McIntyre of Climate Audit. Neither is 99% of my readership. But I'm doing an honest investigation into things I want to learn about. Making mistakes along the way (like not having both parts 1 and 2 completed for comparison) is part of the process. I doubt there is a scientist in existence who never made a mistake while learning new things. In the university environment such mistakes are often quietly pointed out by colleagues, and you never see or hear about them. In the rarefied atmosphere known as the “Blogosphere”, such mistakes are often fodder for vicious attacks rather than congenial learning experiences. Still, they are learning experiences nonetheless.

My specialty is in meteorological instrumentation and the presentation of live weather data from stations, radar, and satellite sources. But that doesn't prevent me from learning and trying new things in meteorology and climate science. For me, and I think also for my readers, this is a learning exercise. So much of the way climate data is presented is a mystery, because the folks who publish it are often so far ahead and so focused on their own tasks that they become unaware of how narrow and specialized the required skill set has become. As a result they may not see the need to publish instruction manuals for the data so that it can be interpreted by others who aren't at the same level of understanding.

Given that climate change is such an interesting and provocative subject to a wide segment of the population now, I'd say it is incumbent upon researchers to devote a little effort to providing better documentation, so that a better understanding of the data they publish is fostered.

For example, GISS does a good job and makes note of the base period in their data set, seen here: temperature index data. HadCRUT, however, says nothing about it in their published data, seen here, and I had to create a file for my own blog to help myself and my readers interpret the data columns, seen here.

It would also be nice if the global temperature data were presented in some sort of unified format so that, when it is used for public consumption, the interpretive issues can be minimized. Given the importance of the four global metrics, this seems a reasonable approach. I think it would be in the public interest if researchers got together on this and created a common format with a common set of descriptions to accompany it.

When I was on television, I often had to prepare graphics to present to the public in the space of a few minutes, and then do an interpretive discussion to help them understand what they were seeing. Given the readership, I see this blog as being much like that, though I often have much more “viewing time” than the usual 2-3 minutes on TV. My goal here is still the same: to make things understandable for a wider audience.

In part 3 of this series, we’ll look at the differences in reporting in more detail.



50 responses to “A look at temperature anomalies for all 4 global metrics: Part 2”

6 03 2008
steven mosher (12:05:27) :

Thanks Anthony,

I remember the first time I stumbled across the 'base period' difference between
GISS, NOAA, and HadCRU. For a while I would only look at HadCRU since they actually publish absolute values (in a hard-to-find place, of course). I never wear
golf shoes when doing the anomaly dance. For grins you can go back to some of the old OpenTemp threads and find JohnV and me tripping over each other over similar issues. Lucia has a nice little post on this as well, over at her place.

Also, ATMOZ did a nice little piece on Crater Lake for me. It would be cool to
have him publish his method.

6 03 2008
Jeremy (12:13:59) :

I don't understand how you adjusted this data. Could you explain what “adjusted to a reference period of 1979-1990” means? Do you mean that you changed the offset so that the average temperature anomaly from 1979-1990 was 0? Did you also change the scale?

REPLY: That will be covered in part 3

6 03 2008
MattN (12:22:50) :

1979-1990 to the eye looks to be the coolest period in that series. I would not have guessed a 50/50 split on warmer/cooler given how warm the 90s are accused of being.

Curious as to why GISS has 1998 so much cooler than the other series. Is that because it’s more land-based measurements and the super El-Nino was (obviously) ocean-based?

6 03 2008
Bob_L (12:30:57) :

Anthony,

I regret that some have chosen to misinterpret part 1 as a finished work, but I feel the problem is with them, not you and your effort here. Unlike others in the blogosphere, like those climate scientists, you are not paid for your research but appear to be investing considerable time and resources for the “joy of discovery”.

Thirty years ago, you might have read an article in a journal, questioned the placement of the thermometers, driven around in your Pinto to the stations you could get to in a day and made some observations.

Today, in a period of a few months, you have managed to organize a cadre of like minded “hobbyists” to survey over 500 stations throughout the country and share information and photographs with millions. It is truly a wondrous time to be alive.

I feel like an office lurker. I pop into your cube several times a day where you explain what you are doing and then we all stand around the water cooler and discuss it.

I am not a scientist by training but I am a thinker and like to join others in the discovery process and in my estimation, I can interpret the science here better than any other blog.

Thanks for letting me look over your shoulder!

6 03 2008
Frank Ch. Eigler (12:34:06) :

Can you specify the adjustment algorithm? Is the effect the
cancellation of a constant offset between the various series?

REPLY: That will come in part 3

6 03 2008
Dan Evens (12:37:52) :

Well done.

The second graph does not load correctly when I “click for bigger.” I get a 404 error. All the other graphs work correctly.

REPLY: Fixed, oversight on my part

6 03 2008
Alan S. Blue (12:41:42) :

There isn’t a larger image backing up the “Click here for larger image” line under the second image.

REPLY: That one I forgot to upload from my laptop, which is at home. Will fix tonight. I’m sorry for the inconvenience. A note is now under the image.

6 03 2008
Richard Wright (12:42:33) :

Could you explain exactly how you did the adjustments? If you simply normalized (i.e., shifted up or down) the 4 data sets so that they overlap as closely as possible in the period of 1979-1990, then the shapes of the histograms should not change at all. However, they do change, so something else must have been done.

Normalizing the 4 data sets is useful to show their overall response to temperatures, but that still leaves the question as to what the real temperature values are. It appears that your adjustments center the 4 data sets around a temperature anomaly of zero degrees, so that temperatures before 1995 are generally cooler than “normal” and temperatures later are generally warmer than “normal”. But who's to say this is correct? So, could you provide details of your adjustments and the rationale behind them?

REPLY: That is coming in part 3

6 03 2008
Raven (13:00:37) :

Anthony,

I found it ironic that your critics engaged in exactly the kind of nonconstructive criticism that they accuse you of. I am sure they can justify their behavior in their own minds.

However, the agreement between the datasets over the last 30 years does not mean much because:

1) The datasets are not independent. They all rely on various statistical factors which give the dataset owners a lot of flexibility when it comes to publishing data. Claiming agreement with the other dataset owners is the easiest way to justify whatever algorithms they choose to use. This does not mean that the owners of the datasets have done anything intentionally deceptive - it just means that no one can claim that the agreement in itself validates the datasets.

2) The case for AGW is built entirely on the presumption that the warming in the last 30 years is unusual. However, we have seen that adjustments added into the surface records seem to rely heavily on adding cooling to pre-satellite data (this is particularly true for the sea surface data). This has the effect of exaggerating the trends over the 20th century. Now these cooling adjustments to past data may be justified scientifically; however, these adjustments mean that there could still be significant issues with the surface record even if it manages to agree with the recent satellite record.

The bottom line is all of the datasets are the output of a very complex statistical process and should never be treated as a simple measurement of temperature. We know from the hockey stick debate that the choice of statistical algorithm can produce radically different results.

Final comment - your histograms that show GISS with a slight ‘cooling’ bias are misleading, because that cooling bias probably exists primarily in the older data, which exaggerates GISS trends compared to the other datasets.

6 03 2008
MattN (13:03:26) :

Something’s not right Anthony. If the zero point is the average of 1979-1990, then I would expect all the data from 1979 to 1990 to be centered on zero. It looks like most of it is below zero.

Or am I not reading this right?

REPLY: I don't know at the moment. Can't look further right now; still lots of catch-up to do at my office.

6 03 2008
Atmoz (13:07:54) :

Are you sure you used a reference period of 1979-1990? It looks like you did exactly what I did in my quick analysis; 1979-Jan/2008. The means of all the “adjusted” distributions are zero. Also, the anomaly time-series don’t seem centered around the zero anomaly for 1979-1990.

The link to the enlarged version of the “adjusted” time series plot doesn’t work.

REPLY: Yes, missed uploading that image, back at home on my laptop, will fix tonight.

6 03 2008
Evil Carbon (13:10:06) :

Man you are smart. I only understood about 8 words. Keep up the good work!!

Global Warming alarmists beware… EvilCarbon.com

6 03 2008
Drew Latta (13:11:10) :

Your observation about the explanation of methods is absolutely right on with what I've encountered going through some of the paleoclimate literature recently, and it is the same in the small body of modern climate literature I've read. Most of the discussion in these climate papers in broadly disseminated scientific journals, esp. Science and Nature, takes place at a level that exceeds the level of the broad audience.

If you read some of the classic papers by 19th and early 20th century scientists (I tend to read chemistry papers) you notice that things are much more descriptive since they didn’t have the body of jargon that has attached itself to whole fields of science more recently.

One might level this complaint upon the high impact scientific journals like Science and Nature, especially. Root causes of this may include page-limits, which cause descriptive methods to be attached in electronic annexes/supporting info sections, and the fact that peer review is done within a more narrow field of scientific inquiry compared with the audience. Although more broadly, might it be considered a result of the specialization of scientific inquiry, and thus perhaps unavoidable?

6 03 2008
dscott (13:12:51) :

Anthony, you mentioned you are using the base period 1979 to 2008 for the adjustment; can you give us the base period used for each of the 4 series and their average value before making the adjustment? It would help us understand why this is such an important issue. If I understand what you are saying here, the average value used for the zero temperature anomaly mark on the graph is dependent upon the time period. So each time period has its own respective average, which determines the zero mark used for displaying the anomaly, or the deviation from zero. One can manipulate the time period in order to drop out outlier data to create an impression of either warmer or cooler temperatures.

Btw- Is this a “simple average” or a “median” value used for the zero mark?

REPLY: Sure, part 3. In the meantime I'd like to remind everyone that I'm one guy with a business to run, a family to feed, and a nationwide survey project to manage on zero budget. Patience please.

6 03 2008
Earle Williams (14:06:57) :

All,

I sent Anthony the spreadsheet with the collated data of the four metrics. That data is shown in the text file Anthony makes available. In that spreadsheet I also calculated all four metrics centered around their respective 1979-2008 means. In an initial calculation I had used 1979-1990 as an arbitrary reference. My centering was based on 1979-2008, but I inadvertently left the label above these columns referring to a 1979-1990 reference. If my spreadsheet is the source of these curves, that would explain the difference.

I suggest we use some standard terminology in the ongoing discussions to minimize the chance of perpetuating confusion.

CENTERING: Centering a time series involves adding or subtracting an arbitrary constant value. Visually this has the effect of sliding the curve up or down on the graph. When I centered these 4 metrics I did the same process Atmoz describes, calculate the mean value for the series over the time period in question and then subtract that constant from the series. If the time period is the entire length of the series this has the effect of making half the data positive and half the data negative. Resulting histograms for series centered on the mean will always be centered on zero, by definition. Note that there will be some nominal change in histogram shape as some data values will fall into different buckets.

NORMALIZING: Normalizing involves scaling a time series so that the range of values is comparable to a desired standard. The math is straightforward but a little harder to describe than for centering. The visual effect of normalizing is to squeeze or stretch a curve in the vertical direction.

These data reflect temperature anomalies in degrees Celsius. They are generated by four different methods but aspire to be measuring nearly the same thing. It is not only appropriate but desirable to center the four curves about the 1979-2008 mean so that some comparisons can be drawn regarding the similarities and differences among the four. It would be wholly inappropriate in my mind to normalize these data without some a priori rationale for doing so. The satellite data measure different physical properties than the land data so some difference among the metrics is to be expected.

It should be noted though that both GISTEMP and HadCRUT incorporate satellite data into their algorithms. Global SST cannot be estimated otherwise. What I don't know at this point is to what extent the similarity is due to the dependence of the datasets versus due to true physical correlation between the land temperatures and the satellite sea surface temperatures.

ANYWAY, everyone can download the data and center the curves to their heart’s content. If you don’t have Excel then install OpenOffice. It’s free. Don’t take someone else’s word for it, run the numbers yourself. If you’ve never played with computer spreadsheets before then a whole new world of learning and exploration lies before you.

REPLY: Thanks Earle, I was going to write to you about this tonight, but you beat me to it. Yes I agree, run the numbers, “a whole new world of learning and exploration lies before you.”

I and many others appreciate you taking the initiative.
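For readers who want to try the two operations Earle describes without a spreadsheet, here is a minimal sketch in plain Python. Unit-variance scaling is just one common choice for the "desired standard" Earle mentions under normalizing; his description doesn't prescribe a particular one.

```python
# Sketch of Earle's two operations on a list of monthly anomalies.

def center(series, ref_values):
    """CENTERING: subtract the mean of the values in the chosen reference
    sub-period; this slides the curve up or down without changing its shape."""
    offset = sum(ref_values) / len(ref_values)
    return [v - offset for v in series]

def normalize(series):
    """NORMALIZING: rescale the series to a standard range (here, zero mean and
    unit standard deviation), which squeezes or stretches the curve vertically."""
    mean = sum(series) / len(series)
    sd = (sum((v - mean) ** 2 for v in series) / len(series)) ** 0.5
    return [(v - mean) / sd for v in series]
```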

6 03 2008
Evan Jones (14:30:57) :

I am not a scientist by training but I am a thinker and like to join others in the discovery process and in my estimation, I can interpret the science here better than any other blog.

This is a general subject I hope to comment on more extensively in the near future.

6 03 2008
Evan Jones (14:37:48) :

It looks like most of it is below zero.

Looks centered to me.

(I have a number of questions to follow.)

6 03 2008
Evan Jones (14:42:04) :

Re. two posts ago: Oops Matt is right. I was looking at 1979-2008.

6 03 2008
Bob B (14:54:27) :

Looks like Feb2008 is pretty cold as well.

http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

REPLY: Thanks I just posted a plot of it.

6 03 2008
Johan i Kanada (15:16:52) :

Earle,

Re: Centering: The statement “this has the effect of making half the data positive and half the data negative” is not exactly correct. It is the sum of all anomalies that, after centering, will be zero.

All,

I am wondering how to interpret differences in anomalies between these four sources. Are these differences significant or not? Do they mean that there is a flaw in the methodologies?

E.g. if one source states that the drop in the global average over the last year (or some other time period) is 0.8 and another says 0.6, does it mean that there is a discrepancy of 33% ((0.8 - 0.6)/0.6)? If so, this would seem very significant.
On the other hand, expressed relative to the absolute temperature (about 290 K), the difference is instead (approximately) (0.8 - 0.6)/290 = 0.07%, which seems completely insignificant.
So, how consistent (or inconsistent) are the numbers from these four different sources?

(Btw, by using anomalies, as opposed to absolute values, the small temperature trends/changes can be made to appear very large and significant, e.g. in order to score political points (like Gore likes to do).

6 03 2008
Earle Williams (17:24:56) :

Johan i Kanada,

You are correct, I didn’t mean to imply that an equal number of points would be above or below zero.

6 03 2008
steven mosher (17:28:08) :

Let me see if I can explain the anomaly method in a simple way and then
you can see what all the fuss is about, and you can see how to adjust periods.

Consider the data series

2
2
4
4
6
6
8
8

Now, let's create ANOMALIES according to the AVERAGE of the first two years.
The average of 2 and 2 is ... 2.

So, subtract 2 from each measurement. You get this:

0
0
2
2
4
4
6
6

These are ANOMALIES (differences) from the BASE period, years 1 and 2.

Now, let's pick years 7 and 8 as our base period. The average of years 7 and 8 in the original data was 8.

So now we get these anomalies:

-6
-6
-4
-4
-2
-2
0
0

The base period is ARBITRARY. It doesn't matter. It doesn't change the shape
or slope of the curve, just the placement on the y axis.

So, if one curve has a base period of years 1 & 2 and anomalies like so:

0
0
2
2
4
4
6
6

And another curve has a different base period (years 7 and 8) and anomalies
like so:

-6
-6
-4
-4
-2
-2
0
0

How do you put them on the same base Period?

It should be clear. And now you can all do the anomaly dance.

Questions?
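REPLY: For anyone who wants to run the numbers above in code rather than by hand, here is a minimal sketch of the same re-basing steps (plain Python, no libraries):

```python
# Re-creating the worked example: anomalies relative to a chosen base period.
data = [2, 2, 4, 4, 6, 6, 8, 8]

def anomalies(series, base_indices):
    """Subtract the average of the values at base_indices (0-based) from every value."""
    base = sum(series[i] for i in base_indices) / len(base_indices)
    return [v - base for v in series]

print(anomalies(data, [0, 1]))  # years 1 & 2 as base: matches the first anomaly list above
print(anomalies(data, [6, 7]))  # years 7 & 8 as base: matches the second anomaly list above

# To put two curves on the same base period, re-base each one to the period
# you choose: subtract, from each curve, its own average over that period.
```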

6 03 2008
Philip_B (17:42:07) :

The mode of all 3 adjusted bar charts is on the cool side, which says to me they are skewed to the warming side and that means warming comes disproportionately from a smaller number of sites than in a random distribution.

Would a real statistician like to comment?

6 03 2008
henry (17:52:23) :

Johan i Kanada (15:16:52) :

“(Btw, by using anomalies, as opposed to absolute values, the small temperature trends/changes can be made to appear very large and significant, e.g. in order to score political points (like Gore likes to do).”

Also, use of the older, colder averaging period does nothing to the trends, but makes the recent anomalies appear warmer (i.e., higher above zero). Choice of zero = choice of a “normal” temp.

6 03 2008
J (17:55:18) :

Earle Williams wrote:

It should be noted though that both GISTEMP and HadCRUT incorporate satellite data into their algorithms. Global SST cannot be estimated otherwise. What I don’t know at this point is to what extent the similarity is due the dependence of the datasets versus due to true physical correlation between the land temperatures and the satellite sea surface temperatures.

Earle, I’m not sure whether you’re talking about the similarity between the two surface temperature records (GISTEMP vs HadCRUT), or among the four surface + lower-troposphere temperature records.

There IS no “dependence” between the satellite data used to estimate SSTs in the GISS and Hadley Center analysis, vs the satellite data used to estimate lower troposphere temperatures. They’re from different instruments operating on completely different physical principles. SST is derived from AVHRR data, measurements of thermal IR radiation in two (or three, at night) wavelength bands to remove the effects of atmospheric emission and absorption and retrieve the ocean skin surface temperature.

Lower tropospheric temperatures are estimated using passive microwave radiometry at various frequencies in the 50-58 GHz range. The MSU and AMSU are insensitive to surface temperature and only measure emission from atmospheric molecules.

The MSU/AMSU lower troposphere record is, for all practical purposes, independent of GISTEMP and HadCRUT, whether you include SSTs or not. The close correlation among them is thus evidence that the surface temperature record is not wildly erroneous.

6 03 2008
Earle Williams (18:30:51) :

J,

Thanks for the correction. I was referring to dependence among all four, assuming that the same satellite data were used. Bad assumption on my part.

I've learned something new, and now I'm going to have to go read up on the AVHRR system. :)

6 03 2008
Raven (18:40:45) :

The MSU/AMSU lower troposphere record is, for all practical purposes, independent of GISTEMP and HadCRUT, whether you include SSTs or not. The close correlation among them is thus evidence that the surface temperature record is not wildly erroneous.

The MSU sensors measure brightness - not temperature. They must be calibrated against real temperature records like any other temperature proxy. Here is a link with references that explains technically why MSU data is not independent from the surface data:
http://www.climateaudit.org/?p=2746#comment-215094

6 03 2008
Evan Jones (19:01:19) :

It seems to me that if you have a proxy measurement that does not correlate on a 1-1 basis, and not even at a constant (different in temperate zones than tropics) it is inevitable that there has to be some sort of calibration with surface.

BTW, what exactly IS being measured in the above graphs? Are all four Land or SST? Or are the satellite measurements for troposphere? (And if so, lower trop only?)

If it's all a surface measurement, how is the sat. measure converted from atm. to surface?

6 03 2008
MattN (19:03:00) :

Anthony, it looks to me like you normalized the entire series to 1979-2007(8?) instead of 1979-1990. That's why every metric has ~50% cooler and ~50% warmer than baseline. If you had done it to 1979-1990, then the 1979-1990 data would be centered on “0”, but it looks to me like it's centered on about -0.15C. I think you need to look at that again.

Going strictly by an eyeball measurement, I’d say the 4 metrics will be ~60% warmer, 40% cooler….

REPLY: Seems reasonable. When I get time, I’ll run it again, or maybe Earle would like to redo his Excel Spreadsheet per this suggestion?

6 03 2008
J (19:35:18) :

Raven writes:

The MSU sensors measure brightness - not temperature. They must be calibrated against real temperature records like any other temperature proxy. Here is a link with references that explains technically why MSU data is not independent from the surface data:
http://www.climateaudit.org/?p=2746#comment-215094

“Brightness” is an imprecise word. As I said in the previous comment, MSU/AMSU measure microwave radiation at a wavelength that is about 3 orders of magnitude longer than the thermal IR radiation measured by AVHRR and used for SST estimates. The sensors and physical principles involved are completely different.

Yes, the algorithms used for deriving temperature from microwave radiometers were originally derived and/or validated in part via reference to in situ data (mostly radiosondes, IIRC). That in no way, shape, or form can be taken to mean that the MSU/AMSU temperature trends are somehow doctored or forced to match the GISS/HadCRUT surface temperature trends.

The comment you link to at CA is fundamentally misleading. If you want my advice (which you may not…), pursuing that line of argument will be non-productive.

6 03 2008
J (19:42:24) :

I've learned something new, and now I'm going to have to go read up on the AVHRR system.

Glad to hear it. Be aware that despite its name, the Advanced Very High Resolution Radiometer is neither Advanced nor Very High Resolution, although it is in fact a radiometer. It has also been a spectacularly productive instrument over the past three decades.

6 03 2008
Evan Jones (20:06:42) :

Check Raven’s “D. Patterson” link. He makes much the same point.

If it’s a proxy, it has to be correlated. If it’s not a 1-1 (and not even constant) correlation, it more or less has to be an A to A (circular) comparison.

If satellite data were available and had worked for the entire century, that would tell us that the data corresponded constantly from a time of “good stations” to a time of “bad stations”, and therefore good vs. bad did not make much of a difference.

But when DOES that data start?

IT STARTS IN 1979 [3 exclamation points on loan from Joe D’Aleo] ! ! !

And guess when THAT was.
–Just when the latest rise started.
–Just before the stations became (steadily) corrupted by the MMTS switchover.
–Just when the Air Conditioning revolution took off.
–Just when exurban creep began seriously to overtake the ground stations.

THAT’S when.

Therefore, the (uneven) correlations being made between heat measure and microwaves (depending on latitude) were occurring at precisely the time when the stations were undergoing a steady corruption.

Taking D. Patterson’s observations a step further, one can conclude that MSU may indeed NOT be independent of GISTEMP and HadCRUT and in fact MSU correlation was based on the very same trends that were corrupting the surface station measurement.

We already know that microsite violations are not adjusted for. After all, one can’t adjust for what one doesn’t even know exists.

Therefore, in order to conclude that sat and surface measures are totally independent, one must conclude that microsite violations are irrelevant. And it seems obvious (at least to NOAA/CRN) that microsite violations are NOT irrelevant. (Can I hang onto those exclamation points? I think I’ll still be needing them for a bit.)

6 03 2008
Evan Jones (20:11:00) :

BTW, it does not mean that the sat. data was doctored to fit the ground data.

Just that the 1.2 and 1.4 conversions were taken at a time when outside influences were affecting the correlation.

6 03 2008
Raven (20:54:02) :

J says:
“Yes, the algorithms used for deriving temperature from microwave radiometers were originally derived and/or validated in part via reference to in situ data (mostly radiosondes, IIRC).”

Thank you. That is my entire point. Converting MSU values to a temperature requires some reference to the land record. When the MSU trends reported cooling a few years ago, it was assumed that the MSU data was wrong and the land record was right. The algorithms used to correct for diurnal drift include parameterizations that require tuning. In the case of RSS, the CCM3 GCM was used to provide the calibration data - a GCM which requires some sort of parameterization to represent reality even if it is based on physics.

“That in no way, shape, or form can be taken to mean that the MSU/AMSU temperature trends are somehow doctored or forced to match the GISS/HadCRUT surface temperature trends.”

Even if we accept that the correspondence between the satellite and surface records since 1979 means that the surface record during this period is accurate, that correspondence does not *in any way* demonstrate that the entire surface record trends from 1880 are not biased by UHI or other measurement issues.

For example, we would not likely be having this discussion if the surface record said that the 30s-40s were as hot as today (i.e. just like it appears in many individual station trends). The warming since the 40s could easily be an artifact introduced by measurement issues.

6 03 2008
Harold Vance (21:17:51) :

The re-plotting of the adjusted data does not change the fact that GISTEMP shows higher highs whereas the other three do not.

Scientists continue to claim that the differences between the four sets are not significant from a statistical perspective, and I have no reason to doubt these claims based upon what I can see (simple eyeballing).

However, the manufacturer of GISTEMP can make certain claims that the others cannot, claims that the press and independent movie producers can take to the bank and claims that policy makers can use to justify all kinds of new taxes and regulations. GISTEMP becomes highly significant from a policy and public relations perspective though it is highly insignificant from a statistical perspective.

Can you imagine Hansen or Schmidt claiming that the higher highs are basically meaningless? Judging by their website, GISS is already committed full bore to the AGW narrative. They don’t even try to pretend that they have no bias.

6 03 2008
Evan Jones (21:54:14) :

Raven:

So the satellite measurements were validated from surface records during a time when stations were steadily becoming corrupted. Then, when a divergence occurred, an adjustment was applied (albeit for a legitimate reason), using a GCM as a basis for comparison (yikes!). Therefore can it be any surprise that there is “agreement”?

I do not see any reason why anyone would use satellite measurements as a basis to pooh-pooh the seemingly obvious effects in the Rev's hard, documented, empirical findings. Yet it is done all the time. Q.E.D. (And I bet that when the surface stations are finally done right there will be another “divergence”.)

7 03 2008
J (06:37:24) :

I’ll repeat what I said when Raven first linked to Patterson’s comment over at CA: You may or may not want my advice, but that’s not a productive argument for you guys.

The MSU/AMSU radiometers are not calibrated using weather stations, and their decadal trends are not adjusted to match the weather station trends. The GCM in the RSS reanalysis was used solely to model the diurnal temperature cycle for the process of understanding the impact of orbital drift, and AFAIK wasn't used at all by Christy et al. (I realize that the acronym “GCM” probably translates to “satan!” in some people's minds, but frankly, that's not my problem.)

To be perfectly honest, If I were a lawyer rather than a scientist, I’d have kept my mouth shut and let you guys go on your merry way. There’s a reason why, for example, Steve McI has begun censoring discussion of things like the pre-1960 chemical CO2 measurements cited by EG Beck on CA. He realizes that when people make prima facie crackpot claims, it reduces the credibility of his blog.

Well, this is basically the same thing. If you guys want to use Anthony’s blog to promote these kinds of claims, go right ahead.

Argh, what the heck, I'll give you one more piece of advice, then I'm out of here. Why not call up Christy and inform him that his data set is invalid due to UHI and microsite contamination of the surface temp record? If he doesn't agree with you, he's either (a) too dimwitted to get your point, or (b) part of the conspiracy himself.

REPLY: I agree with what you are saying about discussing such things as the Beck measurements, which are terribly fraught with variance. Actually I'm in touch with Christy and Spencer regularly, so I'll pose the question and post it to settle the issue.

7 03 2008
randomengineer (06:48:46) :

J says — “Yes, the algorithms used for deriving temperature from microwave radiometers were originally derived and/or validated in part via reference to in situ data (mostly radiosondes, IIRC).”

And I read on RC last week that there’s now ongoing tweaking to radiosonde data because the radiosonde data was found to be wrong. The note was from Gavin Schmidt IIRC.

What this tells me is that radiosonde data will soon be corrected to show all sorts of stuff… and what you’re claiming is that the MSU’s are calibrated to the radiosonde data and this too will be “corrected.”

How circular does this get? How is radiosonde data “corrected?” This seems just nuts to me.

7 03 2008
Raven (07:53:31) :

I would really like to hear Christy's and Spencer's opinion on this. A question that would be worth asking: would the satellite record have to change if the surface record was found to have grossly overstated the warming trend from 1979 to 2008?

One thing that makes this issue complicated is the splicing of multiple satellite records. The calibration for individual satellite records is probably robust, but if different satellites are tied to temperature baselines then the differences between the satellites would result in a larger trend over time.

That said, I would like to make one thing very clear: any dependency between the satellite and surface records is likely indirect and not intentional. I never meant to imply that the records were deliberately modified to match. However, any dependency - even if unintentional - means that one record cannot be used to “prove” the correctness of the other.

7 03 2008
Earle Williams (08:26:27) :

I would do as J suggests, sort of, and go to the sources and verify whether or not specific temperature measurements are tuned, tweaked, or otherwise adjusted. Do not, however, take the word of Gavin Schmidt, Steve Bloom, etc., that the satellite and radiosonde data are being adjusted to match the surface data. As far as I’m concerned it’s pure BS meant to impugn the validity of data sources that disagree with their conclusions. If you are concerned then go to the data originators such as UAH and RSS and read up on the process. If after that there is still some doubt then you may wish to ask the good folks about how temperature is derived from the microwave signal.

7 03 2008
Evan Jones (09:44:56) :

because the radiosonde data was found to be wrong

How does he know it’s wrong? By what comparison is it corrected?

J: If MW computation were a 1-1 comparison with surface temps, then a “straight correlation” would be established. But not only is this not so, it is different at different latitudes. How was this determined if not by some sort of comparison with surface data? Or by comparison with something that was compared with surface data?

7 03 2008
Evan Jones (09:48:56) :

the acronym “GCM” probably translates to “satan!” in some people’s minds, but frankly, that’s not my problem.

The acronym GCM translates roughly to HSM (Historical Simulation Model), which I have actually done. Same strengths. Same weaknesses. Great hindcasters. Lousy forecasters.

7 03 2008
dscott (11:52:26) :

J wrote:
SST is derived from AVHRR data, measurements of thermal IR radiation in two (or three, at night) wavelength bands to remove the effects of atmospheric emission and absorption and retrieve the ocean skin surface temperature.

Interesting, are you saying it is actually measuring the physical surface? Because if you are, then there is a problem with such a measurement as a proxy for the air above it. The temperature retention quality of any surface is dependent on a number of factors such as color, density, roughness, etc., but there is one factor which can change the temperature of the object or surface that is independent of such physical characteristics, and that factor is wind speed. Place two identical rocks on the ground, one in still air and the other with a fan on it. The surface temperature is going to be different because of convection. Remember, what causes the temperature in a greenhouse to rise above ambient is the lack of convection. So calibrating the reading against just an air temperature is going to be useless, since on any day you pick to calibrate the microwave reading you also have to factor in wind speed.

Isn't the underlying assumption of microwave and IR measurements that everything else stays the same, i.e. wind speed? This works great on the moon, where there is no air. How do we know the temperature difference between two days, or two months for that matter, reflects the same convection or wind speed? In effect what you may be measuring is both air temperature and wind speed. Are certain months in a locale windier than others? If so, a microwave or IR surface measurement is going to pick up that variable. Anyone want to comment on that?

7 03 2008
Obsessive Ponderer (20:31:44) :

Why is it that if you graph any of the temperature records (including the average) as a bar graph you get a completely different impression of what is going on? (I don't know how to post this from Excel.)

A bar graph gives you the impression that during the 1979 to 1997 period the temperatures were cooler than during the 1998 to 2008 period. Does the bar graph really indicate something different? (Notice in the write-up from the NY conference they used a bar graph.)

REPLY: Anomalies don't translate well to bar graphs. Remember, this isn't absolute temperature, but the variance of temperature compared to a zero baseline.

7 03 2008
Evan Jones (20:58:47) :

Well, heck, it was cooler from 1979-1997. Measurements were increasing, in my opinion, for two reasons: PDO, and the increasing microsite violations the Rev has been documenting. Then there was the '98 El Niño (big spike and recovery). After that, it's been kind of flat, but higher than 1979-'97.

8 03 2008
An Inquirer (05:07:36) :

I wonder if I understand the issue being discussed between Raven and J and others. The underlying issue is our desire for a reliable temperature trend. Microwave radiometers need to be converted into degrees Celsius. If the relationship is proportional, we can take one observation (such as in 1979), establish the relationship, and then use microwave radiometers to give us the trend. For example, if one observation of an instrument measuring in centimeters and a simultaneous observation of another instrument measuring in inches established a proportional relationship of 2.54, then in the future we can use the centimeter-measuring instrument to give reliable trends in inches – and we may wish to do so if we suspect that the inch-measuring instrument has turned faulty. Even if the inch-measuring instrument was off at the time of simultaneous observation – say, giving us a proportion of 2.60, still future trends identified by the centimeter-reading instrument could lead us to a reliable indication of trends in inches. (I do recognize that in absolute terms the increases and decreases in inches might be slightly exaggerated if 2.6 were used rather than 2.54.) I believe that is the argument of J.

On the other hand, perhaps Raven is saying that the relationship is not proportional, but rather linear or perhaps even more complex. For example, if simultaneous observations of a Celsius-measuring instrument and a Fahrenheit-measuring instrument were 10 and 50 respectively, it would be a mistake to say that the relationship is proportional in a 1:5 relationship. If it were proportional, then a 100 degree Celsius observation would be translated into 500 degrees Fahrenheit. Any trend would be greatly exaggerated. So therefore two simultaneous observations are needed (such as in 1979 and 1981). And if both instruments were measuring correctly, the linear relationship F = 32 + 9/5 C could be the relationship. But what if the Fahrenheit-measuring instrument had a built-in measuring error that increased with time, so that the 1981 observation gave you a relationship of F = 32 + 9/4 C? Then relying on Celsius measurements to give you Fahrenheit would overstate percentage trends in Fahrenheit. And the situation would get even more complicated if the relationship were curvilinear rather than linear. I understand this to be the argument of Raven. My understanding is not necessarily correct, so I also would look forward to an explanation from Dr. Christy.

8 03 2008
Gary Gulrud (05:41:09) :

Following dscott’s thinking on J, the exalted.
This practice of adjusting data, post fact, and conflating proxies with data, by algorithms designed and implemented over some decades, using inadequate tools, e.g. statistics instead of vector calculus to study radiative fluences, is an obfuscatory rather than explanatory endeavor by nature.
It reminds me of Goedel’s ‘non-decidable’ problems, those for which his theorems do not apply.
These are problems, the solution of which, creates work faster than a satisfactory solution is approached.
And for my trouble, I'm a jester because I've equated his work with 'satan'?

8 03 2008
Colonel Sun (08:11:28) :

I don’t understand the merit of this normalized comparison.

You've clearly done something other than just change the overall offset, as the shape of the distributions (the number of entries in each bin) has changed.

If you want to compare the 4 data sets, should you not plot the distributions of the absolute values of the temperatures and then compare their means and spreads?

As a minimum, any differences in the mean absolute temperature of the four data sets would be informative.

8 03 2008
John Willit (17:16:20) :

I am tired of the biased adjustments to the raw temperature data from Hansen, Jones and the GHCN. Of the total increase in global temperatures of 0.7C between 1900 and 2007, fully 0.6C can be accounted for as “adjustments” to the data. (The pro-AGW crowd does not understand this issue at all.)

While the rationale for all these adjustments sounds reasonable when first proposed in textual scientific form, when you look at what they “actually” did to the data (not that they explain what they actually did; it has to be audited post-adjustment), it makes no sense whatsoever unless you assume they are biased toward adjusting the temperature records upwards to support their pro-AGW position.

At least it appears that the MSU data from RSS and UAH is not biased, so we should just rely on it and throw the GISS, Hadley Centre and GHCN data OUT (or just go back to the raw temperature data instead).

8 03 2008
DeWitt Payne (17:30:25) :

I think there is a basic misunderstanding of MSUs here by some. What is being measured is a dip in the surface microwave black body emission caused by absorption/emission by oxygen. There is no low frequency cut-off for black body emission.

The emission curve looks very much like the emission of IR at 667 cm-1 (15 micrometers). There is a flat bottom and steeply rising emission on either side. The T4 sensor measures on the bottom of the valley, corresponding approximately to just above the tropopause. T1 sees only emission from the surface. T2 also sees the surface at high altitudes like the Tibetan Plateau, Antarctica and the peaks of the Andes. That is why there is no lower troposphere satellite temperature from those regions.

A good indication that the satellite orbital drift/decay corrections are valid is the agreement between data from the old satellite #15 and the new Aqua AMSU, which has station-keeping jets and doesn't require the drift correction. See the Jan 03 2008 readme at the UAH site.
