Open Mind

Prime Meridian

February 13, 2010 · 29 Comments

I’ve decided to average the GHCN station data in gridboxes which are 10 deg. of latitude tall and approximately the same width. That makes them 600 nautical miles tall, which is a bit over 1100 km. Within that range, we can expect all stations inhabiting the same grid box to show correlation with each other. The exception to the “10 deg. tall” rule is the region north of 70N latitude: rather than defining separate grid boxes for stations north of 80N, I’ll lump them together with the stations from 70N to 80N in a single 70-90N box.

If I wanted to be as precise as possible, I’d use smaller grid boxes and I’d probably weight each station’s contribution according to its distance from the gridbox center (with nearer stations counting more). But I’m not aiming for maximum precision; I just want a good solid answer that’s based on a straightforward analysis of the raw data.


From the equator to 40 deg. latitude (north or south), I’ll use gridboxes 10 deg. longitude wide. From 40 to 60 deg. latitude the gridboxes will be 15 deg. longitude wide. From 60 to 70 deg. latitude they’ll be 20 deg. longitude wide. From 70 deg. to the poles, the grid boxes will be 30 deg. longitude wide.
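
A minimal sketch of that banding rule, in Python (an illustration of the scheme, not code from the original analysis):

    def lon_width(lat):
        """Longitude width (degrees) of a grid box at the given latitude,
        following the banding scheme described above."""
        a = abs(lat)
        if a < 40:
            return 10
        if a < 60:
            return 15
        if a < 70:
            return 20
        return 30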

I’ll start just east of the prime meridian, from longitude 0 to 10E for latitudes below 40. Most of the action in this region is in the northern hemisphere; at this longitude range the southern hemisphere is almost entirely ocean until you get to Antarctica — and I’m not looking at Antarctica because it’s not in the GHCN. We end up with 8 grid boxes in this longitude band:

Box   Latitudes   Longitudes   Num. Stations
 1    00-10N      00-10E          26
 2    10-20N      00-10E          18
 3    20-30N      00-10E           8
 4    30-40N      00-10E          35
 5    40-50N      00-15E         118
 6    50-60N      00-15E          73
 7    60-70N      00-20E          21
 8    70-90N      00-30E           5
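
For concreteness, here is one way the binning into these 8 boxes might be coded (the example station is hypothetical; real GHCN records carry latitude and longitude in their metadata):

    # (lat_lo, lat_hi, lon_width) for each box in this longitude strip,
    # matching the table above.
    BANDS = [(0, 10, 10), (10, 20, 10), (20, 30, 10), (30, 40, 10),
             (40, 50, 15), (50, 60, 15), (60, 70, 20), (70, 90, 30)]

    def box_number(lat, lon):
        """Return the 1-based box number for a station in this strip,
        or None if it falls outside (longitude in degrees east)."""
        for i, (lat_lo, lat_hi, width) in enumerate(BANDS, start=1):
            if lat_lo <= lat < lat_hi and 0 <= lon < width:
                return i
        return None

    # A hypothetical station at 61.2N, 5.0E lands in box 7:
    assert box_number(61.2, 5.0) == 7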

I’ve already outlined the procedure for computing a gridwide average, so without further ado: in the grid nearest the equator (00-10N) we see recent warming, by about 0.7 deg.C since 1975.
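
(For reference, a rough sketch of such a gridbox-averaging procedure, with the details as my own assumptions rather than necessarily the exact method used: each station is converted to anomalies relative to its own baseline mean, and the anomalies are then averaged across stations, year by year.)

    def gridbox_average(stations, base=(1951, 1980)):
        """stations: list of dicts mapping year -> mean temperature (deg C).
        Returns a dict mapping year -> mean anomaly across stations."""
        anoms = {}
        for rec in stations:
            base_vals = [t for yr, t in rec.items() if base[0] <= yr <= base[1]]
            if not base_vals:
                continue  # station has no data in the baseline period
            m = sum(base_vals) / len(base_vals)
            for yr, t in rec.items():
                anoms.setdefault(yr, []).append(t - m)
        return {yr: sum(v) / len(v) for yr, v in sorted(anoms.items())}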

For the next grid (heading north) there’s also recent warming, again about 0.7 deg.C since 1975. We also see notable warmth during the 1930s-1940s, but not quite as strong as at present (in terms of long-term averages).

For the 20-30N latitude grid, the warming since 1975 is quite a bit stronger, well over 1 deg.C.

From 30-40N, again we have more than 1 deg.C warming since 1975, and we also see minor warmth (only a few tenths of a deg.C) during the 1940s/1950s.

From 40-50N the pattern is very similar to that from 30-40N; warming of over a deg.C since 1975, and minor warmth during the 1940s/1950s.

From 50-60N we again find a full deg.C warming since 1975, and some minor warmth in the 1940s/1950s. We also see signs of greater variation from year to year than was present in more equatorial gridboxes.

We find more of the same from 60N-70N.

Finally, north of latitude 70N we see much greater recent warming (a full 2 deg.C) and much greater variability throughout the last century.

We can combine all the gridbox data by plotting 5-year averages (rather than annual averages) for each box on a single plot, from which it’s apparent that the northernmost grid box has shown both the greatest warming and the greatest volatility.
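
(A minimal way to compute such 5-year averages, assuming non-overlapping bins; whether the plot uses overlapping or non-overlapping windows isn’t stated, so treat this as illustrative.)

    def five_year_means(series):
        """series: dict mapping year -> annual anomaly.
        Returns (center_year, mean) pairs over non-overlapping 5-year bins."""
        out = []
        years = sorted(series)
        start = years[0] - years[0] % 5
        while start <= years[-1]:
            vals = [series[y] for y in range(start, start + 5) if y in series]
            if vals:
                out.append((start + 2, sum(vals) / len(vals)))
            start += 5
        return out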

Some of these gridboxes (but not all) show signs of minor warmth during the 1930s/1940s, some (but not all) show signs of minor warmth during the 1940s/1950s, but all of them show pronounced warming from about 1975 to the present. The northernmost shows extreme warming during this “modern global warming era.”

In the next installment, we’ll examine some of the false claims which have been made about factors affecting the analysis of GHCN data.

Categories: Global Warming

29 responses so far

  • Slioch // February 13, 2010 at 5:45 pm

    For those like me who aren’t too sure what GHCN stands for or consists of, this is what Wikipedia says:

    “The Global Historical Climatology Network (GHCN) is a database of temperature, precipitation and pressure records managed by the National Climatic Data Center, Arizona State University and the Carbon Dioxide Information Analysis Center.

    The aggregate data are collected from many continuously reporting fixed stations at the Earth’s surface and represent the input of approximately:

    * 6000 temperature stations
    * 7500 precipitation stations
    * 2000 pressure stations

    This work is often used as a foundation for reconstructing past global temperatures, such as NASA’s GISTEMP. The average temperature record is 60 years long with ~1650 records greater than 100 years and ~220 greater than 150 years (based on GHCN v2 in 2006). The earliest data included in the database were collected in 1697.”

  • Chad // February 13, 2010 at 7:03 pm

    Tamino,

    Did you calculate the anomalies at the grid-point level, or did you calculate them after finding the global average? It might matter, because if you have irregular spatial coverage, I think the global average temperature will correlate to some extent with the spatial coverage.

    Also, I have a post up on combining station data. The reference station method appears to allow spurious warming to contaminate the overall average while other methods don’t. Still a work in progress.

    [Response: Anomalies are calculated at the grid-point level.]

  • suricat // February 14, 2010 at 12:45 am

    Tamino.

    “Some of these gridboxes (but not all) show signs of minor warmth during the 1930s/1940s, some (but not all) show signs of minor warmth during the 1940s/1950s, but all of them show pronounced warming from about 1975 to the present. The northernmost shows extreme warming during this ‘modern global warming era.’”

    I don’t see a great enough resolution in any of your ‘grid boxes’ to come to any of your conclusions, only random selection of data for a general conjecture.

    Best regards, suricat.

    [Response: Warming rates since 1975 in deg.C/yr:
    00-10N, 0.023 +/- 0.008
    10-20N, 0.029 +/- 0.012
    20-30N, 0.041 +/- 0.020
    30-40N, 0.046 +/- 0.020
    40-50N, 0.045 +/- 0.017
    50-60N, 0.039 +/- 0.027
    60-70N, 0.052 +/- 0.027
    70-90N, 0.093 +/- 0.044
    ]
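
    (A sketch of how trend rates with uncertainties like those above can be computed: an ordinary least-squares slope with a 2-sigma range. Whether the quoted ranges are 2-sigma, or widened for autocorrelation, isn’t stated here, so treat the details as assumptions.)

        import numpy as np

        def trend_and_2sigma(years, anoms):
            """OLS slope (deg C/yr) and a 2-sigma error bar. Assumes
            uncorrelated residuals; autocorrelation would widen the range."""
            x = np.asarray(years, float)
            y = np.asarray(anoms, float)
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            var = (resid @ resid) / (len(x) - 2) / ((x - x.mean()) ** 2).sum()
            return slope, 2.0 * var ** 0.5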

    • suricat // February 14, 2010 at 1:59 am

      Tamino.

      “Response: Warming rates since 1975 in deg.C/yr:
      00-10N, 0.023 +/- 0.008
      10-20N, 0.029 +/- 0.012
      20-30N, 0.041 +/- 0.020
      30-40N, 0.046 +/- 0.020
      40-50N, 0.045 +/- 0.017
      50-60N, 0.039 +/- 0.027
      60-70N, 0.052 +/- 0.027
      70-90N, 0.093 +/- 0.044”

      I concur on your point of conjecture. However, the resolution for GHCN that you’ve shown is insufficient to show a true signal for temperature of the regions you’ve defined.

      If you can show otherwise, then please do so. Personally, I don’t see enough resolution in the original data to provide this definition. In ‘other terminology’, the baud rate is too low to recognise the signal.

      Best regards, suricat.

      [Response: I have already shown it. The error ranges for the trends since 1975 overwhelmingly reject your "no true signal" hypothesis.

      Merely asserting "the resolution is insufficient" doesn't make it so, and "Personally, I don't see... " is no evidence at all.]

      • suricat // February 15, 2010 at 4:31 am

        Tamino.

        “[Response: I have already shown it. The error ranges for the trends since 1975 overwhelmingly reject your "no true signal" hypothesis.
        Merely asserting "the resolution is insufficient" doesn't make it so, and "Personally, I don't see... " is no evidence at all.]”

        As an engineer, I don’t “hypothesise”! I only consider data and extrapolate only whenever this is a viable objective for the scenario. Your scenario doesn’t have enough data points to prove your ‘conjecture’. That’s OK! Everyone is entitled to their POV!

        Best regards, suricat.

        [Response: There's more than enough data to prove the existence of a real signal. It has absolutely nothing to do with "point of view" -- you're just plain mistaken.

        The only "evidence" you've offered is your opinion -- with nothing to back it up other than "Personally, I don't see." You need to accept the fact that your inability to get it, doesn't make it false.]

      • Ray Ladbury // February 15, 2010 at 4:14 pm

        Suricat,
        Although I am a physicist, I work in a very applied field. I see no flaw in what Tamino is doing.
        So far, all I get from your posts is that you are uneasy about the method. That’s not science. Try to make explicit what in the method makes you feel uneasy.

        You do understand that contemporaneous data in nearby stations will be correlated, correct?

        And you understand that the result of insufficient data would be large error bars that would render the result insignificant, correct?

        What is it you don’t get?

  • dhogaza // February 14, 2010 at 5:42 am

    It’s like suricat doesn’t realize why statistics was invented in the first place …

  • Nathan // February 14, 2010 at 9:20 am

    Will you repeat for SH?

    [Response: There's no SH data for this longitude band (it's all ocean up to Antarctica). For GHCN land stations, I'll do the entire world.]

  • Didactylos // February 14, 2010 at 4:55 pm

    suricat, what do you mean by “great enough resolution”?

    • suricat // February 15, 2010 at 5:05 am

      Didactylos.

      “suricat, what do you mean by ‘great enough resolution’?”

      To be honest, I’m neither a scientist nor a climatologist. My discipline is engineering. Suffice to say that I can’t accept a viewpoint that makes an extrapolation from a scenario that uses insufficient data to arrive at its conclusion.

      IOW, the basic data is inconclusive!

      Best regards, suricat.

    • Didactylos // February 15, 2010 at 3:43 pm

      Oh, I see.

      You are saying that you don’t know what you are talking about, and Tamino is correct when he says you are just plain mistaken. No shame in that.

      The irony here is that if Tamino had gone with higher resolution gridboxes, then he would have had to address the problem of empty gridboxes and boxes without enough data to combine properly (the coverage problem that you seem to be concerned about). But he didn’t. He chose in such a way that he could keep things simple enough so that even I could understand it.

      Don’t throw around technical terms unless you know what they mean.

      • suricat // February 16, 2010 at 2:48 am

        Didactylos.

        “You are saying that you don’t know what you are talking about, and Tamino is correct when he says you are just plain mistaken. No shame in that.”

        This is obviously not what I’m saying. Extrapolated data from a low data region is OK for statisticians, politicians and scientists. However, an engineer needs proven data before it can be included in any project that may put human life at risk.

        This begs the question of ‘what do statisticians and scientists centre their work around’? Is it humanity, or money and prestige (before you answer, I know that there is ‘good’ and ‘bad’ in all camps)?

        It seems to me that you’re telling me that I don’t have any say in this debate. If that’s the case then I don’t see how findings can ever result in any ‘engineered’ projects!

        “The irony here is that if Tamino had gone with higher resolution gridboxes, then he would have had to address the problem of empty gridboxes and boxes without enough data to combine properly (the coverage problem that you seem to be concerned about). But he didn’t. He chose in such a way that he could keep things simple enough so that even I could understand it.”

        This is the reason that the data is unreal to me. Tamino has extrapolated blank data regions and that’s perfectly acceptable for a statistician, but not for an engineer. It’s a ‘discipline thing’.

        The ‘low resolution problem’ is also one reason (besides temporal changes, etc.) why clouds pose such a challenge for observation.

        Best regards, suricat.

        [Response: You don't make any sense at all.]

      • Didactylos // February 16, 2010 at 4:09 am

        You have it completely backwards. When there is uncertainty, engineers must take the safest approach, which means taking the top end of the warming estimates, the top end of the sea level rise estimates, and so on.

        Tamino is right – you really don’t make any sense. You are one of those people who shout “we don’t know everything, so we know nothing”.

        Not knowing when you are totally wrong, after it has been explained to you? Twice? Personally, I believe there *is* shame in that.

      • carrot eater // February 16, 2010 at 5:35 am

        Suricat,
        So far as I can tell, you’re worried about insufficient spatial sampling. The formal way to study this is to use a climate model to create a very high-resolution field of anomalies. From this, calculate the global or hemispheric mean, or the mean of some large area of interest.

        Then, using only the subset of the model grid points that correspond to the locations of actual weather stations, again calculate the global mean, or the mean of the large area of interest. If the number closely matches the previously calculated mean, then your sampling is good enough.

        This has been done in several papers, with Hansen/Lebedeff (1987) being one example. GISS has used this method ever since to put error bars on its global means.

        So these sorts of calculations are justified, and one can find good ideas for the confidence intervals involved. Whether you are an engineer or statistician, that sort of analysis has to be compelling. For the sort of illustration Tamino is doing here, I don’t think it is at all necessary for him to repeat the model-field exercise with his setup; there are more interesting things to do.
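
        (A schematic of that subsampling test; everything here is a stand-in, since a real check would use an actual climate-model anomaly field and the true station coordinates.)

            import numpy as np

            rng = np.random.default_rng(0)

            # Stand-in for a model's high-resolution anomaly field:
            # 10,000 grid cells x 50 years, sharing a common warming trend.
            field = rng.normal(size=(10000, 50)) + np.linspace(0.0, 1.0, 50)

            truth = field.mean(axis=0)            # mean over the full grid

            # Keep only the cells where "stations" happen to sit (2% here).
            cells = rng.choice(field.shape[0], size=200, replace=False)
            sampled = field[cells].mean(axis=0)   # mean over the sampled cells

            # If sampling is adequate, the sampled mean tracks the full mean.
            print(np.corrcoef(truth, sampled)[0, 1])   # close to 1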

      • Ray Ladbury // February 16, 2010 at 10:12 am

        Suricat,
        It’s clear that you aren’t understanding the method. Tamino is not adding any “information” or using any technique that gives you any result other than what is based on the data.

        You are also wrong about engineering–every time you fit a failure distribution to a Weibull or Normal or Lognormal, you are projecting. The only techniques that don’t have such a projection are nonparametric techniques like jackknifing, and even here you are making the assumption that the data are representative.

        The key is in understanding the assumptions underlying the technique, how those assumptions might break down and what the consequences would be if they did. I suggest you try that with this analysis before pronouncing judgment.

  • Denialist :) // February 14, 2010 at 9:42 pm

    Dear Tamino,
    Thanks for efforts and nice contribution.
    My nick is just a joke :)

    However, could you tell us what running-average period you used for smoothing the graphs? Somehow I could not find the algorithm for this.
    Inspecting the smoothed temperature for the 70N-90N box visually (with the Mark 1 Eyeball device), I feel uneasy about the trend around year 2000+: a kind of temperature blip around 2000 pulls up the whole smoothed curve; I might presume that your running-average algorithm used just a few points there, much fewer than in the central part of the plot.

    My next note is about the selection of your grid boxes’ (longitudinal) size.
    For a given time point you compute the average of surface temperatures. Temperature is a scalar field, so for a single box one expects something like:
    T_box_average = (1/Box_area) * surface_integral(T, dArea)
    Since you do not know the spatial distribution of T in your box, I assume that:
    1. you average your box temperature,
    2. multiply the average by the box area,
    3. divide the result of (2) by the box area.

    There is a small problem, however, if you want to be consistent.
    The longitudinal extent of your boxes should change like 1/cos(latitude); so for 60N you should have a box width of about 20 deg., and for 80N MUCH more, more like 50 deg. This would make you include more stations for the polar region, and the sub-polar as well.
    This would also avoid my next problem: how can we compare results from 100+ stations to results from just 5 stations, in terms of temperature variability? Your confidence intervals, at least as they now stand, should be quite different from box to box.
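
    (As a quick illustrative check of that scaling, relative to a 10-deg.-wide box at the equator:)

        import math

        for lat in (45, 55, 65, 80):
            width = 10 / math.cos(math.radians(lat))
            print(f"{lat}N: {width:.1f} deg. of longitude")
        # 45N: 14.1, 55N: 17.4, 65N: 23.7, 80N: 57.6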

    Reading Marcel Leroux’s book about climate, one would expect quite large variability in polar regions.
    The book “Collapse”, which has chapters about the medieval Vikings, also points out large climate variability in polar regions.
    Similarly, records of Arctic ice extent in the 20th century, and the temperature records you have just shown, point to large variability.

    Kind regards
    Adalbert

  • Rattus Norvegicus // February 14, 2010 at 11:36 pm

    Not so fast there my friend! This paper clearly shows that there is no relationship between greenhouse forcings and temp. Not only that, it shows that the greenhouse effect is only temporary! Clearly this breakthrough paper should be published in both Nature and Science! We’re saved!

  • Hank Roberts // February 15, 2010 at 6:12 am

    > This paper
    Wow. Off on the wrong foot and downhill from there.

    I refute it thus:
    http://scholar.google.com/scholar?q=%22plankton+cooled+a+greenhouse%22

  • Bernard J. // February 15, 2010 at 6:47 am

    Bugger, Rattus!

    I was just about to post the very same thing, and almost word-for-word!

  • John Mason // February 15, 2010 at 12:02 pm

    Nice one, Rattus!

    That paper rather terminally undermines Monckton’s stance on the Greenhouse Effect!

    Makes me wonder how many thousand years it would take them to reach some kind of consensus if we all went away and left them to it! Could make for a good Hitchhiker’s Guide to the Galaxy storyline!

    Cheers – John

  • J // February 15, 2010 at 1:51 pm

    suricat writes: To be honest, I’m neither a scientist nor a climatologist. My discipline is engineering. Suffice to say that I can’t accept a viewpoint that makes an extrapolation from a scenario that uses insufficient data to arrive at its conclusion.

    IOW, the basic data is inconclusive!

    You keep asserting that the data are insufficient, or that there’s some unspecified problem with the “resolution”. But you don’t show that.

    Look at the width of the uncertainty intervals in Tamino’s response to your very first comment. Assuming they’re calculated correctly, the trends for all latitude bands are statistically significant, so the data are clearly not insufficient.

    If you think you’ve found an error in Tamino’s calculations, or a logical flaw in the conceptual framework for those calculations, you need to explain that. “I just feel that the data are insufficient” isn’t an explanation. Or at least it’s not an explanation that anyone else will find convincing.

    • suricat // March 2, 2010 at 1:08 am

      J.

      “If you think you’ve found an error in Tamino’s calculations, or a logical flaw in the conceptual framework for those calculations, you need to explain that. “I just feel that the data are insufficient” isn’t an explanation. Or at least it’s not an explanation that anyone else will find convincing.”

      OK! One last try! I don’t see that it should be the responsibility of a mediocre engineer like myself to explain a definitive signal resolution to science types here. Especially when Tamino keeps telling me that “I don’t have a clue”.

      [edit extremely long comment]

      When observing the convolutions of a point phenomenon of nature, it’s important to observe the convolutions at adjacent points elsewhere as well. A station’s data (whatever the baud rate) is only a point in a network, and it can only report on conditions within ~metres of its location. We need many more land stations to resolve the definition of land temperature for a large area IMHO. Don’t take my word for it though because ‘I don’t have a clue’.

      Best regards, suricat.

      [Response: It's been established many times by multiple researchers that the sampling rate for global temperature is plenty high enough. In fact, the sampling we have is WAY more than necessary. But again, you simply claim that it isn't, then go on a long ramble about it, but provide no evidence. Rather than accept your word for it because it's "in your humble opinion," I'll choose to agree with those who actually ran the numbers.]

      • Ray Ladbury // March 2, 2010 at 1:37 pm

        Suricat,
        You need to think about this in terms of the physics. First, what we’re interested in is the anomaly–the departure from normal behavior for a point or a region or the globe. That means that the scale of interest is the scale over which things that change that behavior occur.

        OK, so in this case we are looking at forcing of climate–things like insolation, the greenhouse effect, aerosols, albedo, clouds, etc.

        On what scales do these things vary? Well, insolation varies according to angle, and a degree is on the order of a hundred km on Earth. Certainly a few km or a few 10s of km will not matter much. The greenhouse effect? Well, greenhouse gasses are well mixed, so really this isn’t a concern. Clouds? Weather systems are either very large or of short duration. Albedo? Mostly determined by geology and/or plants, which generally change consistently over a region with the seasons, or on very long timescales.

        I think if you consider the problem you will find that your initial take is incorrect. Indeed, the best estimates are that the globe is oversampled by about a factor of 4.

  • Kevin McKinney // February 15, 2010 at 1:53 pm

    While we’re posting links, how about a couple of (IMO) pretty good climate-change pieces (though I still have a couple of quibbles)? From NPR’s Morning Edition today:

    http://www.npr.org/templates/story/story.php?storyId=123671588

    (Featuring comments from Kevin Trenberth)

    http://www.npr.org/templates/story/story.php?storyId=1025

    (On climate change refugees in the Bay of Bengal)

  • Deech56 // February 15, 2010 at 2:19 pm

    OMG, Rattus, we might as well give it up. A “Nature_Paper” at that! And it’s undergoing blog review at WUWT as we speak. We all know that’s better than peer review from some stinky old journal like Science or Nature.
