Before I left on my trip to New York, I published part 1 of this series, looking at the temperature anomalies among the four global temperature metrics from 1979 through January 2008. The first post I made on the subject used the unadjusted global temperature anomaly data to do the comparisons. I also wanted to do the same comparisons using anomaly data adjusted to a common reference baseline, but unfortunately I ran out of time to complete all of the histograms for the next data set before I left on the trip.
In the meantime, while I was traveling, the first post, missing the all-important part 2, generated some controversy and some accusations that I was misrepresenting the data by not showing it adjusted to a common baseline.
It was a mistake on my part not to have them both available at the same time, and for that I apologize to anyone who was misled by the lack of part 2. Atmoz did a quick study of the issue as well and illustrated what I wanted to do for part 2 with a simple graph, and while it would have been easy to simply use his, I wanted to complete what I started using the same presentation style. Recognizing that having part 1 alone was misleading to some, I put part 1 on the shelf until I could return from my trip and finish part 2, so that I could show what happens when all four metrics are adjusted to the same base period.
That is now complete: the part 1 article has been restored, and below is the new adjusted information as it compares to part 1.
Here is the first graph, the unadjusted raw anomaly data as it was published in February by the four metrics from UAH, RSS, GISS and HadCRUT. Note that while there is pattern agreement among the four metrics, there is an amplitude difference.
Here is the source data file for this plot and subsequent unadjusted plots.
And here is the data file used for the adjusted plots: 4metrics_temp_anomalies_refto1979-1990.txt
Now we can see that the agreement of the 4 metrics is better using the data adjusted to a common baseline period.
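For readers who want to try the adjustment themselves, here is a minimal sketch in Python of the re-baselining idea: compute each metric's mean anomaly over the common 1979-1990 reference period, then subtract that mean from the metric's entire series. The function and array layout are my own illustration, not any provider's actual file format.

```python
import numpy as np

def rebaseline(years, anomalies, base_start=1979, base_end=1990):
    """Shift an anomaly series so its mean over the base period is zero.

    years     : array giving the year of each monthly value
    anomalies : array of monthly anomalies (deg C), any original baseline
    """
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    in_base = (years >= base_start) & (years <= base_end)
    # Subtract the mean over the common reference period; this shifts
    # every value by the same constant and leaves the shape untouched.
    return anomalies - anomalies[in_base].mean()
```

Running the same function over all four series expresses each one as a departure from its own 1979-1990 mean, which is what puts them on a common footing in the adjusted plot above.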
The difference between these metrics is, of course, the source data; more importantly, two are satellite measurements (UAH, RSS) and two are land-ocean surface temperature measurements (GISS, HadCRUT).
One of the first comments on my post on the 4 global temperature metrics came from Jeff in Seattle, who said:
Seems like GISS is the odd man out and should be discarded as an “adjustment”.
That is no longer the case once the adjusted data is presented. The trend and amplitude agreement is very good across all four metrics.
In my previous post, part 1, I mentioned that I had never seen a histogram comparison done on all four data-sets simultaneously. The first set of histograms showed a wide disagreement, particularly in the land-ocean metrics from HadCRUT and GISS.
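A note on the "warm bias" percentages quoted with the histograms below: they are simply the share of monthly anomalies falling on the warm side of zero. A quick sketch of that count, again in Python with hypothetical inputs:

```python
import numpy as np

def warm_cool_split(anomalies):
    """Return the percentage of monthly anomalies above and below zero."""
    a = np.asarray(anomalies, dtype=float)
    warm = 100.0 * np.count_nonzero(a > 0) / a.size
    return warm, 100.0 - warm

# e.g. warm, cool = warm_cool_split(uah_anomalies)
# ("uah_anomalies" here is a hypothetical input array, not a real file.)
```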
Below I have plotted the original histograms from part 1 and the new adjusted ones:
First we have the satellite data-set from UAH.
UAH UNADJUSTED DATA:
The UAH data above looks well distributed between cool and warm anomalies, with a modest warm bias at 63%.
UAH ADJUSTED DATA- Baseline 1979-1990:
University of Alabama, Huntsville (UAH) Microwave Sounder Data 1979-2008 ADJUSTED
Next we have the satellite data-set from RSS.
RSS UNADJUSTED DATA:
Again we have a modest warm bias at 63%. And now the adjusted data.
RSS ADJUSTED DATA - Baseline 1979-1990:
Remote Sensing Systems (RSS) Microwave Sounder Data 1979-2008 ADJUSTED
Note that we have now shifted to a slight cool bias in the histogram, at 51.6%.
Here we have the land-ocean surface data-set from HadCRUT.
HadCRUT UNADJUSTED DATA:
Here, we see a much more lopsided distribution in the histogram, with what appears to be a strong warm bias of 89%. But when the 1979-1990 baseline adjusted data is plotted, that apparent warm bias reverses and becomes a slight cool bias, as seen below.
HadCRUT ADJUSTED DATA - Baseline 1979-1990:
It is interesting how the simple application of a common baseline to the data modifies the distribution on the histogram. Subtracting each series’ 1979-1990 mean shifts every value by the same constant, so the histogram keeps its shape but slides along the axis, and the balance of warm versus cool months around zero changes with it.
Finally we have the GISS land-ocean surface data-set.
GISS UNADJUSTED DATA:
In part 1 I stated: “I was surprised to learn that only 5% of the GISS data-set was on the cool side of zero, while a whopping 95% was on the warm side.”
GISS ADJUSTED DATA - Baseline 1979-1990:
NASA Goddard Institute for Space Studies data 1979-2008 ADJUSTED
But just as seen above with the other metrics, when the 1979-1990 baseline adjusted data is plotted, that apparently large warm bias reverses and becomes a slight cool bias.
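To make the flip concrete without relying on any one provider's file, here is a synthetic demonstration: a series carrying a constant offset (as you would get from anomalies expressed against an earlier, cooler base period) plus noise shows a large warm bias, and that warm share drops sharply once the series is re-expressed against its own 1979-1990 mean. The numbers are invented; only the mechanism is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.repeat(np.arange(1979, 2008), 12)  # monthly values, 1979-2007
# Synthetic anomalies: a constant offset from an old, cooler base period,
# a small trend, and month-to-month noise. Not real data.
anoms = 0.2 + 0.005 * (years - 1979) + rng.normal(0.0, 0.15, years.size)

def warm_pct(a):
    """Share of months with anomaly above zero, in percent."""
    return 100.0 * np.count_nonzero(a > 0) / a.size

in_base = (years >= 1979) & (years <= 1990)
adjusted = anoms - anoms[in_base].mean()  # re-baseline to 1979-1990

print(f"before adjustment: {warm_pct(anoms):.0f}% of months warm")     # large warm share
print(f"after adjustment:  {warm_pct(adjusted):.0f}% of months warm")  # much closer to even
```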
So, from the presentation of the time series and these new histograms using data adjusted to a common baseline, I conclude three things:
1. It is important to present data with a common baseline of reference when doing comparisons between data sets of different origins.
2. Data that has been adjusted in this way may vary significantly from the raw data.
3. Graphical presentations of data cannot always be taken at face value. One must look deeper into the data’s provenance to fully understand its basis, for what may appear as agreement or disagreement in comparatively plotted data may simply reflect how the data was prepared for that presentation.
Some folks who viewed part 1 without the benefit of part 2 were quick to criticize it and say that it didn’t accurately represent the whole story of the data. I would agree with that, which is why I removed part 1 temporarily until part 2 was complete. Some of the same people had sharp words for me personally, suggesting the part 1 presentation was “stupid”, or worse.
I’ll be the first to admit that I’m not a skilled statistician on par with people like Steve McIntyre of Climate Audit. Neither is 99% of my readership. But I’m doing an honest investigation into things I want to learn about. Making mistakes along the way (like not having both parts 1 and 2 completed for comparison) is part of the process. I doubt there is a scientist in existence who never made a mistake along the path of learning new things. In the university environment, mistakes are often quietly pointed out by colleagues, and you never see or hear about them. In the rarefied atmosphere known as the “Blogosphere”, such mistakes are often fodder for vicious attacks rather than congenial learning experiences. Still, they are learning experiences nonetheless.
My specialty is in meteorological instrumentation and the presentation of live weather data from stations, radar, and satellite sources. But that doesn’t prevent me from learning and trying new things in meteorology and climate science. For me, and I think also for my readers, this is a learning exercise. So much of the way climate data is presented is often a mystery, because the folks who publish it are often so far ahead, and so focused on their own tasks, that they become unaware of how specialized the required skill set has become. As a result, they may not see the need to publish instruction manuals for the data so that it can be interpreted by others who aren’t at the same level of understanding.
Given that climate change is such an interesting and provocative subject to a wide segment of the population now, I’d say it is incumbent upon researchers to devote a little effort to better documentation, so that a better understanding of the data they publish is fostered.
For example, GISS does a good job and makes note of the base period in their data set, seen here: temperature index data. HadCRUT, on the other hand, says nothing about it in their published data, seen here, and I had to create a file for my own blog to help myself and my readers interpret the data columns, seen here.
It would also be nice if the global temperature data were presented in some sort of unified format, so that when it is used for public consumption the interpretive issues can be minimized. Given the importance of the four global metrics, this seems a reasonable approach. I think it would be in the public interest if researchers got together on this and created a common format with a common set of descriptions to accompany it.
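Purely as a hypothetical illustration of what I mean (every field name below is my own invention; none of the four providers actually publishes this layout), a self-describing file could carry its own base period and column descriptions, so a few lines of code could read any metric without guesswork:

```python
# Hypothetical self-describing anomaly file; invented for illustration only.
SAMPLE = """\
# source: EXAMPLE-METRIC
# variable: global mean temperature anomaly (deg C)
# base_period: 1979-1990
# columns: year month anomaly
1979 1 -0.12
1979 2 0.05
"""

def parse(text):
    """Split a self-describing file into metadata and (year, month, anomaly) rows."""
    meta, rows = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, value = line[1:].partition(":")
            meta[key.strip()] = value.strip()
        elif line.strip():
            y, m, a = line.split()
            rows.append((int(y), int(m), float(a)))
    return meta, rows

meta, rows = parse(SAMPLE)
print(meta["base_period"])  # no guessing at the baseline
```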
When I was on television, I often had to prepare graphics to present to the public in the space of a few minutes, and then do an interpretive discussion to help them understand it. Given the readership, I see this blog as being much like that, though I often have much more “viewing time” than the usual 2-3 minutes on TV. My goal here is still the same: to make things understandable for a wider audience.
In part 3 of this series, we’ll look at the differences in reporting in more detail.