Climate Science: Roger Pielke Sr. Research Group News


December 30, 2005

What is the Uncertainty in the Climate Observations from Heterogeneous Data Sources?

Filed under: Climate Change Metrics — Dev and Dita Niyogi @ 7:00 am

This question is important in order to assess the robustness of the trends and variability in the surface temperature records. A new paper that addresses this issue was recently accepted by the AMS Journal of Atmospheric and Oceanic Technology. It is titled “Comparison of Co-Located Automated (NCECONet) and Manual (COOP) Climate Observations in North Carolina” by Christopher Holder, Ryan Boyles, Ameenulla Syed, Dev Niyogi, and Sethu Raman. A draft copy is available at the link above.

Even though the study is focused on North Carolina, we believe the findings are generic enough to provide a sense of the uncertainty in such datasets. The abstract states,

“The National Weather Service’s cooperative observer network (COOP) is a valuable climate data resource that provides manually observed information on temperature and precipitation across the nation. These data are part of the climate dataset and continue to be used in evaluating weather and climate models. Increasingly, weather and climate information is also available from automated weather stations. A comparison between these two observing methods is performed in North Carolina, where thirteen of these stations are collocated. Results indicate that, without correcting the data for differing observation times, daily temperature observations are generally in good agreement (0.96 Pearson product-moment correlation for minimum temperature, 0.89 for maximum temperature). Daily rainfall values recorded by the two different systems correlate poorly (0.44), but the correlations are improved (to 0.91) when corrections are made for the differences in observation times between the COOP and automated stations. Daily rainfall correlations especially improve with rainfall amounts less than 50 mm per day. Temperature and rainfall have high correlation (nearly 1.00 for maximum and minimum temperatures, 0.97 for rainfall) when monthly averages are used. Differences of the data between the two platforms consistently indicate that COOP instruments may be recording warmer maximum temperatures, cooler minimum temperatures, and larger amounts of rainfall, especially with higher rainfall rates. Root mean square errors are reduced by up to 71% with the day-shift and hourly corrections. This study shows that COOP and automated data (such as from NCECONet) can, with simple corrections, be used in conjunction for various climate analysis applications such as climate change and site-to-site comparisons. This allows a higher spatial density of data and a larger density of environmental parameters, thus potentially improving the accuracy of the data that are relayed to the public and used in climate studies.”
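To make the comparison concrete, here is a minimal Python sketch of the kind of statistics the abstract reports: a Pearson correlation and a root mean square error between two co-located daily series, before and after a simple day-shift correction. The data and the one-day shift are invented for illustration; the paper's actual correction procedure is more involved.

```python
import numpy as np

def compare_colocated(coop, auto, shift_days=0):
    """Pearson correlation and RMSE between two co-located daily series.

    shift_days realigns the COOP series to mimic a day-shift correction for
    differing observation times (illustrative only; the paper's actual
    correction procedure is more involved).
    """
    if shift_days:
        coop = coop[shift_days:]
        auto = auto[:-shift_days]
    r = np.corrcoef(coop, auto)[0, 1]                   # Pearson correlation
    rmse = float(np.sqrt(np.mean((coop - auto) ** 2)))  # root mean square error
    return r, rmse

# Synthetic example: the manual observer effectively logs the previous day's
# automated rainfall total, plus small reading noise.
rng = np.random.default_rng(0)
auto = rng.gamma(0.3, 10.0, size=365)   # synthetic daily rainfall, mm
coop = np.maximum(np.roll(auto, 1) + rng.normal(0.0, 0.5, 365), 0.0)

print(compare_colocated(coop, auto))                # poor daily agreement
print(compare_colocated(coop, auto, shift_days=1))  # improves after the day shift
```

On data like this, the daily rainfall correlation jumps once observation times are reconciled, in the same way the abstract describes (0.44 improving to 0.91 in the paper).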

Some interesting findings from our study include the following:

  • There can be significant variability in the data quality from one decade to another.
  • Caution is needed when combining datasets from different instrumentation platforms and observation methods, since not all data are homogeneous. The estimates in this study suggest that data from heterogeneous measurement networks could have significant inconsistencies and should not be combined into a unified dataset without quality control and adjustments that account for inherent system biases. Such adjustments include, at a minimum, consideration of differences in observation time, location, and sensor characteristics.
  • For our study, the manual stations recorded warmer maximum temperatures, cooler minimum temperatures, and more rainfall than the automated sensors.
  • It is not possible to state which platform is correct, but the comparison does provide insight into the uncertainty and differences one could expect just from measurement differences at collocated stations. The error or uncertainty caused by station location is another issue we could not address, but it is important for data representativeness.

This brings back the old adage, “Everyone but the observer believes in the observations; nobody except the modeler believes in his/her model results!”, which for the climate scenarios seems to be flipped: people often start looking at model results as “reality”.

    December 24, 2005

    More on Sudden Climate Transitions - A Book by John D. Cox

    Filed under: Climate Change Metrics, Vulnerability Paradigm — Roger Pielke Sr. @ 5:46 pm

A book published in 2005 by the National Academy Press, written by John D. Cox and entitled “Climate Crash: Abrupt Climate Change and What It Means for Our Future”, raises a critically important issue regarding the ability of global climate models to skillfully predict regional, and even globally averaged, climate in the coming decades. (See also “Climate Prediction as an Initial Value Problem”.)

The book is an excellent source on this subject. This weblog entry highlights this important contribution, but also identifies a very important conclusion that the author missed.

    First, the positive points.

    A few quotes from his book are very insightful,

“On top of this new view of a more changeable climate is the unnerving discovery that it is basically unpredictable.” (page 3)

    “Abrupt change means that, like the weather itself, climate sometimes behaves in ways that defy prediction. Processes in the atmosphere, in the ocean, and on the land are known to interact with one another, and even though scientists think they know all of the parts and all of the important processes, still they cannot be sure of the outcome of these interactions from one time to the next.” (page 146)

“A problem that Alley and other paleoclimate scientists refer to as the “insensitivity of models” or the “model-data gap” sounds like a technical issue but really is more fundamental. It means that the models are unable to reproduce accurately the numerous episodes of abrupt change that show up clearly in many environmental archives around the world. The reasons for this failure are not yet known, but the implications are plain enough. Until these highly sophisticated numerical representations of Earth’s climate system—running on the world’s most powerful computers—are able to get the past right, what reason is there to believe they can get the future right?” (page 183)

“In 1997, Wally Broecker wrote in GSA Today that he had been “humbled” by his lifetime study of Earth’s climate, a circumstance that may have surprised a few graduate students at Columbia University. “I’m convinced that we have greatly underestimated the complexity of this system,” he wrote. “The importance of obscure phenomena, ranging from those that control the size of raindrops to those that control the amount of water pouring into the deep sea from the shelves of the Antarctic continent, makes reliable modeling very difficult, if not impossible. If we’re going to predict the future, we have to achieve a much greater understanding of these small-scale processes that together generate large-scale effects.” (page 189)

“Not only are policy makers presented with what are likely to be overly optimistic expectations of the future, they are presented a profile with contours that look nothing like the record of climates past. The changes of the past are single lines or narrow bands that represent real data, whereas the projections of the future prepared for policy makers are large smoothed curves that represent the average results of many computer model simulations and a wide range of possibilities. “This tendency of the policy maker to see a smooth curve has to be really disturbing,” said Alley. “Because whatever it’s going to do, if it’s smooth, we’re going to be really surprised. It’s going to stagger, it’s going to jump. What happens regionally is not going to happen globally. And we really, I think, need to look at how variable it will be. What’s possible in the system, and where does it go?” (page 190)

    And now the very important conclusion that the author missed.

It is the regional climate change that matters. The more homogeneous climate forcing of the anthropogenic well-mixed greenhouse gases, while clearly a concern, is not likely to pose as much of a threat of abrupt climate change as the heterogeneous climate forcings. The book does make this point in the following text (but does not follow up on its implications),

“… on the scale of human history. On this scale, of course, it is not the global but the regional climate changes that push humanity around. Hidden inside the variable of global temperatures is the more powerful circumstance of their differences between one place and another. Climate scientists know that changes in these temperature differences alter the circulation of the atmosphere, and this is where climate hits the pavement of human experience. This is what is most important to societies: not the temperature changes themselves, but how these changes affect precipitation patterns over time—where in the world it rains or snows and how little or how much.” (pages 164-165)

The book, however, does not adequately recognize the very important consequences of this paragraph. Since regional climate variability and change dominate society’s vulnerability to climate, the heterogeneous climate forcings of land use/land cover change, the diverse climate forcings of aerosols, and the biogeochemical effects of carbon dioxide are clearly of more concern as risks of abrupt climate change than the author argues at the end of the book. As written, the author contradicts this perspective in the remainder of the book, and uses his view of the complexity of the climate system to make the very narrow point that “global warming” could result in a greater alteration of the Earth’s temperature than portrayed by the models, i.e.,

“In 2002, in a lecture to the American Geophysical Union meeting in San Francisco, Alley reviewed the performance of the major climate models on which the Intergovernmental Panel on Climate Change has based its forecasts for a globally warming Earth. When it comes to simulating the past, across the board the models underestimate the changes that are known to have taken place. “On average, the models got two-thirds of what happened,” he said. “It’s not a cold bias, it’s not a warm bias. It’s an insensitivity to changed boundary conditions … how sensitive the model is when you change things.”

“The least astonishing hypothesis that I get from this is that either the future warming projections are accurate or they have underestimated what we face in the future,” he said. “There’s a lot of paleoclimate work to be done here to test this hypothesis,” but if climate models are systematically underestimating reality, then the more radical-looking “high side” of temperature projections may be more accurate than the conservative-looking “low side.” (pages 189-190)

Climate, however, is more than temperature. A globally averaged temperature is an inadequate metric to represent other aspects of the climate system. The narrow conclusion presented in this otherwise excellent book contradicts its own thesis by limiting the discussion to global warming, and then to “temperature projections”, as if this were the most serious of the human disturbances of the climate system in the coming decades.

The real message to take from this book is that climate is a complex, nonlinear system in which skillful prediction of these transitions is a daunting, perhaps unachievable, challenge. Since climate involves much more than “temperature projections”, we need to identify the important climate variables that significantly affect society.
This is why we have promoted a vulnerability perspective as the more valuable paradigm to reduce society’s risk from climate change and variability of all types. To continue to pour many millions of research dollars into the very limited (and perhaps unachievable) goal of multi-decadal, globally averaged temperature forecasts is a poor use of these funds.

The last paragraph of the book moves in the direction of a vulnerability paradigm,
“The news from Greenland, unfortunately, is that much more is possible in the climate system than anyone would have supposed. Changes can be big and fast and potentially dangerous to societies that are heavily invested in stability and resistant to adaptation. In the event of an abrupt change—a climate surprise—political arguments probably will no longer be about industrial emission controls, their fairness or economic viability. More urgent political and economic problems will command the attention of nations around the globe. In the event, it is in nature’s power to so change our world that even such basic questions as cause—whether the crash came naturally or not—could seem sadly beside the point.” (page 190)

However, the book does not take the obvious next step, which is to specifically recommend that we move beyond the oversimplistic approach of continuing to focus on a global average temperature, and on the radiative effect of increased carbon dioxide, as the dominant human climate forcing. As repeatedly reported on the Climate Science weblog, other human and natural climate forcings are much more significant in altering the climate. We need to develop a society that is resilient to whatever climate we face in the future. A focus on reducing the emissions of carbon dioxide is, by itself, not going to reduce our risks.

    December 23, 2005

    Added Significance of the Climate Forcing of Aerosols and Ignored Consequences of Their Study with Respect to Human-caused Climate Change

    Filed under: Climate Change Forcings and Feedbacks — Roger Pielke Sr. @ 9:50 am

A new article on the direct radiative forcing effect of aerosols has appeared in Nature on December 22. Its title is “Global estimate of aerosol direct radiative forcing from satellite measurements”, authored by Nicolas Bellouin, Olivier Boucher, Jim Haywood and M. Shekar Reddy.

The article, unfortunately, perpetuates the inappropriately narrow focus on the globally averaged radiative forcing of CO2. It avoids highlighting the major importance of the spatially heterogeneous radiative forcing of the aerosols. This forcing necessarily results in regional diabatic heating of the troposphere, which would not occur without the human input of these aerosols. A change over time in heterogeneous atmospheric forcing is central to the issue of climate change, and is not captured by a limited focus on global warming. We will post our submitted peer-reviewed paper, which quantitatively documents the importance of this heterogeneous diabatic forcing from aerosols, on the Climate Science weblog as soon as we can.

    The abstract of the Bellouin et al Nature article states,

“Atmospheric aerosols cause scattering and absorption of incoming solar radiation. Additional anthropogenic aerosols released into the atmosphere thus exert a direct radiative forcing on the climate system. The degree of present-day aerosol forcing is estimated from global models that incorporate a representation of the aerosol cycles. Although the models are compared and validated against observations, these estimates remain uncertain. Previous satellite measurements of the direct effect of aerosols contained limited information about aerosol type, and were confined to oceans only. Here we use state-of-the-art satellite-based measurements of aerosols and surface wind speed to estimate the clear-sky direct radiative forcing for 2002, incorporating measurements over land and ocean. We use a Monte Carlo approach to account for uncertainties in aerosol measurements and in the algorithm used. Probability density functions obtained for the direct radiative forcing at the top of the atmosphere give a clear-sky, global, annual average of -1.9 W m-2 with standard deviation, 0.3 W m-2. These results suggest that present-day direct radiative forcing is stronger than present model estimates, implying future atmospheric warming greater than is presently predicted, as aerosol emissions continue to decline.”
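The abstract's Monte Carlo language can be illustrated with a short sketch: sample the uncertain inputs, push them through the forcing calculation, and summarize the resulting probability density function. The toy forcing model and every distribution below are invented for illustration; they are not Bellouin et al.'s retrieval algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical uncertain inputs (distributions invented for illustration):
tau = rng.normal(0.135, 0.02, N)          # total aerosol optical depth
frac_anthro = rng.normal(0.35, 0.05, N)   # anthropogenic fraction of the AOD
efficiency = rng.normal(-40.0, 5.0, N)    # forcing efficiency, W m-2 per unit AOD

# Push each sampled input set through the (toy) forcing calculation.
forcing = efficiency * tau * frac_anthro  # clear-sky TOA forcing, W m-2

print(f"mean forcing:       {forcing.mean():+.2f} W m-2")
print(f"standard deviation:  {forcing.std():.2f} W m-2")
```

The point of the method is that the spread of the output distribution, not a single best estimate, is what gets reported.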

The news media has quickly reported on this study, e.g., this article from the Guardian.

An excerpt from that article, headlined “Pollutants ward off global warming, study finds”, alludes to the vulnerability paradigm that has been highlighted in our research and reported on the Climate Science weblog. The excerpt states,

“Earlier this year, Peter Cox at the Centre for Ecology and Hydrology in Winfrith, Dorset, warned that if the cooling effect of aerosols turned out to be greater, it could trigger faster global warming.

“It’s quite a bizarre thing, because the last thing you want to suggest to people is that it would be a good idea to have dirty air, but as far as climate change is concerned, that’s right. Everyone would be getting asthma, but the environment would be cooler.

“That said, the direct effects of air quality, particularly in urban areas, are so important to human health, that it would be crazy to think of anything other than health damage,” he said.”

That quote by Peter Cox, for whom I have a great deal of respect, captures the concept of vulnerability and of prioritizing threats to important societal resources (in this case, human health) that we presented in the paper entitled “A new paradigm for assessing the role of agriculture in the climate system and in climate change”. Rather than focusing almost exclusively on the globally averaged radiative effect of CO2, we need to consider not only the heterogeneous climate forcing of aerosols, but also the broader range of environmental threats we face.

    December 22, 2005

    Science Questions on the Global Surface Temperature Trends

    Filed under: Climate Change Metrics — Roger Pielke Sr. @ 8:17 pm

Our research has raised several issues concerning the robustness of the global surface temperature trend analyses. Here I summarize the questions raised over the last several months on this weblog, along with brief peer-reviewed background material where available. Other weblogs, such as RealClimate.org, are invited to respond to these questions.

The first overarching question, of course, is what is meant by the “global average surface temperature”? The 2005 National Research Council Report (see page 19 and page 21) provides a definition:

“According to the radiative-convective equilibrium concept, the equation for determining global average surface temperature of the planet is

dH/dt = f − T′/λ     (1-1)

where H … is the heat content of the land-ocean-atmosphere system … Equation 1-1 describes the change in the heat content where f is the radiative forcing at the tropopause, T′ is the change in surface temperature in response to a change in heat content, and λ is the climate feedback parameter (Schneider and Dickinson, 1974), also known as the climate sensitivity parameter, which denotes the rate at which the climate system returns the added forcing to space as infrared radiation or as reflected solar radiation (by changes in clouds, ice and snow, etc.).”

Thus T is the “global average surface temperature”. However, where is this temperature, and its change with time, T′, diagnosed?
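One step the NRC text leaves implicit is worth spelling out: at equilibrium the heat content stops changing, and Eq. (1-1) collapses to a direct relation between forcing and temperature change. A short derivation (the numerical values in the comment are illustrative assumptions, not from the report):

```latex
% At equilibrium the heat content is steady, dH/dt = 0, so Eq. (1-1) gives
0 = f - \frac{T'}{\lambda}
\qquad\Longrightarrow\qquad
T' = \lambda f .
% Illustration (assumed values): \lambda = 0.5\,\mathrm{K\,(W\,m^{-2})^{-1}}
% and a sustained forcing f = 3.7\,\mathrm{W\,m^{-2}} (roughly that of
% doubled CO2) give T' = 1.85\,\mathrm{K}.
```

But note that the budget equation defines T′ only through the globally integrated heat content; it does not say at what height, or with what humidity dependence, T′ should be measured, which is exactly the question posed next.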

Question: What is the level at which this temperature is monitored? Is T′ height-invariant near the surface, if the lowest levels of the atmosphere are used to compute T′?

Using near-surface air temperature changes as the climate metric to assess T′ raises the research questions listed below:

1. We have shown that the Parker Nature study entitled “Large-scale warming is not urban” has serious issues with its conclusions, and we have demonstrated that a previously unrecognized warm bias occurs in nighttime minimum temperatures (see Pielke Sr., R.A., and T. Matsui, 2005: “Should light wind and windy nights have the same temperature trends at individual levels even if the boundary layer averaged heat content change is the same?”).

    Question: What is the magnitude of this bias in the analyses of the global surface temperature trends?

2. There are photographically documented major problems with the microclimate exposure of a subset of surface observation sites (see Davey, C.A., and R.A. Pielke Sr., 2005: “Microclimate exposures of surface-based weather stations - implications for the assessment of long-term temperature trends”).

    Question: What photographic documentation is available for the global network of surface temperature sites used to construct the long term global surface temperature analyses?

3. We have shown that surface air water vapor changes over time must be accounted for in the assessment of long-term surface air temperature trends (see Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: “Assessing ‘global warming’ with surface heat content”, and Davey, C.A., R.A. Pielke Sr., and K.P. Gallo, 2005: “Differences between near-surface equivalent temperature and temperature trends for the eastern United States - Equivalent temperature as an alternative measure of heat content”).

    Question: What are the quantitative trends in surface absolute humidity for the sites used to construct the global surface temperature trends, and what is the uncertainty that is introduced if this information is not available?

4. Our research has raised issues with the robustness of the adjustments that are used to “homogenize” surface temperature data. These include adjustments made for the time of observation, a change of instrument, a change in location, and urbanization (see Pielke Sr., R.A., T. Stohlgren, L. Schell, W. Parton, N. Doesken, K. Redmond, J. Moeny, T. McKee, and T.G.F. Kittel, 2002: “Problems in evaluating regional and local trends in temperature: An example from eastern Colorado”).

    Question: What are the quantitative uncertainties introduced from each step of the homogenization adjustment? Do they vary geographically?

5. As discussed in the weblog of December 16, 2005, the raw surface temperature data from which the global surface temperature trend analyses are derived are essentially the same. The best estimate we have seen is that 90-95% of the raw data is the same. That the four analyses produce similar trends should therefore come as no surprise.

    Question: What is the degree of overlap in the data sets that are used to construct the global average surface temperature trend analyses? To frame this question another way, what raw surface temperature data is used in each analysis that is not used in the other analyses?

These are important scientific questions which have been examined either poorly or not at all in climate assessments such as the IPCC and CCSP reports. Clearly, we need to move beyond assessments that are written by individuals who are mostly evaluating their own research. Policymakers are poorly served by this inbred assessment framework within the scientific community.

    December 20, 2005

    The Role of Human Intervention in the Mediterranean Region on the Earth System including Climate

    Filed under: Climate Change Metrics — Roger Pielke Sr. @ 1:40 pm

An important new paper provides valuable insights into the role of land surface and air pollution processes in the Mediterranean region:

Millán, M. M., Mª. J. Estrela, M. J. Sanz, E. Mantilla, M. Martín, F. Pastor, R. Salvador, R. Vallejo, L. Alonso, G. Gangoiti, J.L. Ilardia, M. Navazo, A. Albizuri, B. Artiñano, P. Ciccioli, G. Kallos, R.A. Carvalho, D. Andrés, A. Hoff, J. Werhahn, G. Seufert, and B. Versino, 2005: Climatic Feedbacks and Desertification: The Mediterranean model. J. Climate, 18(5), 684-701.

    “Mesometeorological information obtained in several research projects in southern Europe has been used to analyze perceived changes in the western Mediterranean summer storm regime. A procedure was developed to disaggregate daily precipitation data into three main components: frontal precipitation, summer storms, and Mediterranean cyclogenesis. Working hypotheses were derived on the likely processes involved. The results indicate that the precipitation regime in this Mediterranean region is very sensitive to variations in surface airmass temperature and moisture. Land-use perturbations that accumulated over historical time and greatly accelerated in the last 30 yr may have induced changes from an open, monsoon-type regime with frequent summer storms over the mountains inland to one dominated by closed vertical recirculations where feedback mechanisms favor the loss of storms over the coastal mountains and additional heating of the sea surface temperature during summer. This, in turn, favors Mediterranean cyclogenesis and torrential rains in autumn–winter. Because these intense rains and floods can occur anywhere in the basin, perturbations to the hydrological cycle in any part of the basin can propagate to the whole basin and adjacent regions. Furthermore, present levels of air pollutants can produce greenhouse heating, amplifying the perturbations and pushing the system over critical threshold levels. The questions raised are relevant for the new European Union (EU) water policies in southern Europe and for other regions dominated by monsoon-type weather systems.”

Professor Millán and colleagues have proposed a very informative schematic entitled “Mediterranean Meso-Meteorology: From Air Pollution Dynamics to Global Climate Feedbacks”, which illustrates the complexity of the human intervention in the Earth system, including climate, in this region of the world.

    A focus on a global average surface temperature anomaly obviously misses this complexity!

    December 19, 2005

    Is Soil an Important Component of the Climate System?

    Filed under: Climate Change Metrics — Roger Pielke Sr. @ 9:39 am

    The answer is a definitive YES.

A letter to Science (which was unfortunately not accepted) by Professor Dan Yaalon of the Institute of Earth Sciences of the Hebrew University (Givat Ram Campus) is reproduced below with his permission. The letter was written in response to the Foley et al. paper of earlier this year, which I commented on in the weblog.

In the letter, Professor Yaalon makes the very effective case that land management processes result in soil changes. This will necessarily alter land-atmosphere interactions, and thus climate. His 2000 Nature article (subscription required), entitled “Down to earth – Why soil – and soil science – matters”, provides more information on his valuable perspective.

    To the Editor of Science:

    LAND USE IS ALWAYS ACCOMPANIED BY SOIL CHANGE

The global review by Foley, DeFries and others on consequences of land use (22 July, p. 570, with ample online supporting material) is valuable, summarizes well the major current features and includes hints on developing future strategies. However, for some reason it neglects to discuss the impact on soils due to changes in land use. Soils are a major factor in land use and the important link between climate and biogeochemical earth systems (1). Hence land use practices and land cover change are always accompanied by soil change. Not only the carbon and hydrologic cycles but equally the soil and sediment cycles have been and are being changed by human land use practices over time. Is this not significant enough to review? Why this slant on biodiversity decrease and no mention of the possibly equally significant pedodiversity reduction and change in soil quality attributes (2)?

With nearly half of the earth's land surface now drastically changed to arable land and pastures (currently 12% and 25% respectively, with additional areas of managed forests), the respective surficial soils have changed their original nature and pedological properties, and some must now be differently classified (3). While largely turning more productive, some were degraded, and certain soil varieties have become endangered or even extinct, like any other biota. This was surely worthwhile to draw attention to as a consequence of changing land cover surfaces.

Pedology (soil science) is a relatively young branch of the earth sciences (1, 4), and because it combines both the bio-geo-chemical and physical aspects, soils have developed into an exceptionally complicated system of ecosystem functions, including applied services for mankind, as the several recent articles in Science (11 June 2004) so well demonstrated (5). Statistical evaluation of pedodiversity, partly analogous to biodiversity, is a growing topic in soils (3, 6). We must not neglect to consider soils appropriately in any global, regional or local context.

    References

    1. D.H. Yaalon, Nature 407, 391 (2000).

2. R. Amundson, “Are soils endangered?”, in J. Schneiderman (ed.), The Earth Around Us: Maintaining a Livable Planet (Freeman, New York, 2000), pp. 144-153.

    3. R. Amundson et al., Ecosystems 6, 470 (2003).

4. D.H. Yaalon and S. Berkowicz (eds.), History of Soil Science - International Perspectives (Catena Verlag, Reiskirchen, Germany, 1997).

5. Soils - The Final Frontier, Science 304, 1613 (2004).

    6. J.J. Ibanez et al., Geoderma 83, 171 (1998).

In his e-mail to me of December 19, 2005, he stated,

“This is in support of your brief review in SCIENCE (December 5) on Land Use and Climate Change. No doubt an important interaction which needs to be considered, even though the definition of ‘climate change’ is only touched on briefly in your discussion. As a soil scientist/pedologist I consider the interaction ‘land use and soil change’ equally relevant, and consider that humankind has transformed closer to half of the land surface rather than one-third, and has thus also affected the soils.

    Strangely enough my response to a previous Science review (July 22) by Jon Foley et al. on Consequences of Land Use was not considered significant enough by the Editors of Science for publication (see [above]), even though Jon Foley agreed with me fully and regretted that soils were not mentioned. There is no doubt that the interacting trio of ‘Land Use - Soil Change - Climate Change’ have affected civilization in the past, present and future, and need to be appropriately evaluated.

Climate change, most frequently just connected with seasonal or intensity change of some of the climatic parameters, has not been defined so far in detail and certainly needs more attention in this respect. When does the increase or decrease in temperature or precipitation count as ‘climate change’? Soils are greatly affected by climate and all the factors are strongly periodic or seasonal, but I consider or recognize climate change only when the direction of the soil-forming processes has been changed. Slightly increased temperature or precipitation may only change the intensity of the already acting processes, not change their direction. Hence understanding changing soils, including the eolian processes of addition or removal, is important in all human-affected ecosystems.”

Professor Yaalon’s letter provides a very valuable perspective on how we define climate. It also provides further substance to the need to treat climate as an integrated Earth system issue, as articulated in the 2005 National Research Council report, which expands the concept of climate, as we have discussed numerous times on this weblog.

    December 16, 2005

Comments on the Media Report that 2005 is (or is nearly so) a Record Hot Year

    Filed under: Climate Science Reporting — Roger Pielke Sr. @ 9:38 am

The media reports today that 2005 is among the hottest years on record. This claim is based on the global average surface temperature record which, as discussed several times on this weblog, is fraught with serious data quality issues. Our recent paper has even shown that a warm bias exists in these data.

    The media supports this claim of the hottest year (or nearly so) by stating

“Four separate temperature analyses released Thursday varied by a few hundredths of a degree but agreed it was either the hottest or second-hottest year since the start of record-keeping in the late 1880s.”

This is a misleading statement. The “four separate temperature analyses” mostly derive from all, or a subset, of the same raw data!

    While the statement is clarified later in the article,

“The groups use the same temperature data but differ in how they analyze them, particularly in remote areas such as the Arctic, where there are few thermometers”,

this important caveat is missing from the earlier statement in the article (moreover, as we will show in a soon-to-be-submitted paper, other areas also have a sparsity of data; from 20°N to 20°S, for example, 70% of the grid areas over land have one or fewer observation sites).

    The raw surface temperature data from which the four analyses are derived are, therefore, essentially the same. That the four analyses produce similar trends should come as no surprise!

A question which has been posed to several of the groups, but which they have not answered, is: what is the degree of overlap in the data sets? While some of the analyses use subsets of the raw data, the raw data are almost identical. To frame this question another way, what raw surface temperature data is used in each analysis that is not used in the other analyses? The best estimate we have seen is that 90-95% of the raw data is the same.
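A hedged sketch of how one could answer this question quantitatively: treat each analysis's input as a set of station identifiers and compute the shared fraction. The station IDs below are placeholders, not the actual holdings of any of the four groups.

```python
# Hypothetical sketch: quantify the raw-data overlap between two trend
# analyses by comparing the sets of station identifiers each one ingests.
def overlap_stats(a, b):
    shared = a & b
    return {
        "shared_fraction": len(shared) / len(a | b),  # Jaccard overlap
        "unique_to_a": sorted(a - b),                 # data only analysis A uses
        "unique_to_b": sorted(b - a),                 # data only analysis B uses
    }

analysis_a = {"STN001", "STN002", "STN003", "STN004", "STN005"}
analysis_b = {"STN001", "STN002", "STN003", "STN004", "STN006"}

print(overlap_stats(analysis_a, analysis_b))
# With ~90-95% of the raw data shared, similar trends across the four
# analyses are the expected outcome, not independent confirmation.
```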

Not highlighting this important issue is an example of cherry-picking; this time by the analysis groups that are releasing the surface temperature data.

    December 14, 2005

    New Paper On the Importance of Diagnosing Moist Enthalpy In Addition to Temperature Trends As a Metric of Surface Atmospheric Global Warming

    Filed under: Climate Change Metrics — Roger Pielke Sr. @ 3:49 pm

A new paper entitled “Differences between near-surface equivalent temperature and temperature trends for the eastern United States - Equivalent temperature as an alternative measure of heat content” by Christopher A. Davey, Roger A. Pielke Sr., and Kevin P. Gallo has been accepted for publication in Global and Planetary Change. The abstract of the paper states,

“There is currently much attention being given to the observed increase in near-surface air temperatures during the last century. The proper investigation of heating trends, however, requires that we include surface heat content to monitor this aspect of the climate system. Changes in heat content of the Earth’s climate are not fully described by temperature alone. Moist enthalpy or, alternatively, equivalent temperature, is more sensitive to surface vegetation properties than is air temperature and therefore more accurately depicts surface heating trends. The microclimates evident at many surface observation sites highlight the influence of land surface characteristics on local surface heating trends. Temperature and equivalent temperature trend differences from 1982-1997 are examined for surface sites in the Eastern U.S. Overall trend differences at the surface indicate equivalent temperature trends are relatively warmer than temperature trends in the Eastern U.S. Seasonally, equivalent temperature trends are relatively warmer than temperature trends in winter and are relatively cooler in the fall. These patterns, however, vary widely from site to site, so local microclimate is very important.”

This paper provides additional confidence in the results of Young-Kwon Lim, Ming Cai, Eugenia Kalnay, and Liming Zhou, which we reported on in our weblog of December 1, 2005. Their paper is entitled “Observational evidence of sensitivity of surface climate changes to land types and urbanization”, Geophys. Res. Lett., 32, L22712, doi:10.1029/2005GL024267, 30 November 2005.

    As we state in our paper,

“… these findings give support to the idea that land cover exerts a major influence on heating trends (Kalnay and Cai, 2003; Cai and Kalnay, 2004).”

Our new paper shows that trends in absolute humidity must be included in any accurate quantitative assessment of surface air heat trends. Surface air temperature alone is an inadequate metric of “global warming”, even setting aside the other problems with using these data that have been discussed several times on this weblog. This effect of humidity is represented in the heat metric called “moist enthalpy” (see the July 18, 2005 weblog entitled “What Does Moist Enthalpy Tell Us?”).
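For readers who want the arithmetic: moist enthalpy per unit mass is h = c_p T + L_v q, and dividing by c_p expresses it as an equivalent temperature, T_E = T + (L_v/c_p) q. A minimal sketch (standard textbook constants; the humidity values are invented for illustration) shows how two sites with identical temperature trends can have very different heat-content trends:

```python
CP = 1005.0   # specific heat of dry air at constant pressure, J kg-1 K-1
LV = 2.5e6    # latent heat of vaporization of water, J kg-1

def equivalent_temperature(t_kelvin, q):
    """Equivalent temperature (K): air temperature plus the latent heat
    carried by water vapor, expressed in temperature units."""
    return t_kelvin + (LV / CP) * q

# Two sites with the same 2 K warming but different humidity trends
# (specific humidity q in kg/kg, values invented for illustration):
sites = {
    "dry site":   ((293.0, 0.008), (295.0, 0.008)),  # q unchanged
    "moist site": ((293.0, 0.008), (295.0, 0.010)),  # q increased
}

for name, ((t0, q0), (t1, q1)) in sites.items():
    change = equivalent_temperature(t1, q1) - equivalent_temperature(t0, q0)
    print(f"{name}: temperature trend = {t1 - t0:.1f} K, "
          f"equivalent-temperature trend = {change:.2f} K")
```

Because L_v/c_p is roughly 2500 K per kg/kg, even a small humidity trend adds several degrees of equivalent-temperature change that a thermometer-only trend analysis never sees.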

    The paper also documents that the trends are different depending on the landscape type.

    Clearly, as we repeatedly report on the Climate Science weblog, the use of the surface air temperature as the icon for assessing global warming is fraught with major unaddressed uncertainties.

    December 13, 2005

    Is Carbon Sequestration More Complicated than Presented in the Kyoto Protocol?

    Filed under: Climate Science Misconceptions — Roger Pielke Sr. @ 12:20 pm

    The answer is YES.

Several years ago, I published a short note on this in the Bulletin of the American Meteorological Society:

    Pielke Sr., R.A., 2001: Carbon sequestration — The need for an integrated climate system approach. Bull. Amer. Meteor. Soc., 82, 2021.

As I stated in that publication,

“There has, unfortunately, been no attempt to evaluate the benefit of carbon sequestration as a means of reducing the concentrations of the radiatively active gas CO2 in the atmosphere, while at the same time, assessing the influence of this sequestration on the radiatively active gas H2O, and on the surface energy budget.”

A new paper by S. Gibbard and colleagues entitled “Climate effects of global land cover change” has appeared, which provides further substance to this issue.

The abstract reads,

“When changing from grass and croplands to forest, there are two competing effects of land cover change on climate: an albedo effect which leads to warming and an evapotranspiration effect which tends to produce cooling. It is not clear which effect would dominate. We have performed simulations of global land cover change using the NCAR CAM3 atmospheric general circulation model coupled to a slab ocean model. We find that global replacement of current vegetation by trees would lead to a global mean warming of 1.3°C, nearly 60% of the warming produced under a doubled CO2 concentration, while replacement by grasslands would result in a cooling of 0.4°C. It has been previously shown that boreal forestation can lead to warming; our simulations indicate that mid-latitude forestation also could lead to warming. These results suggest that more research is necessary before forest carbon storage should be deployed as a mitigation strategy for global warming.”

This study effectively raises the issue that, if the goal is to reduce the radiative imbalance resulting from the addition of carbon dioxide, then storing carbon in vegetation may not achieve the intended goal if the replacement vegetation has a darker (lower) albedo. The issue is even more complicated, of course, when we consider that the pattern of vegetation itself can affect the resulting cloudiness through mesoscale circulations, as shown, for example, in Avissar, R., and Y. Liu, 1996: Three-dimensional numerical study of shallow convective clouds and precipitation induced by land surface forcing. J. Geophys. Res., 101(D3), 7499-7518, doi:10.1029/95JD03031.
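To see why the albedo side of this trade-off matters, consider a toy zero-dimensional energy balance (a sketch only: the parameter values are textbook-style assumptions, and the competing evapotranspiration effect is deliberately left out):

```python
S = 342.0        # global mean incoming solar radiation, W m-2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4
EPS = 0.61       # effective emissivity standing in for the greenhouse effect

def equilibrium_temp(albedo):
    """Surface temperature (K) that balances absorbed solar against emitted IR."""
    return ((1.0 - albedo) * S / (EPS * SIGMA)) ** 0.25

t_grass = equilibrium_temp(0.31)   # brighter grassland/cropland surface
t_forest = equilibrium_temp(0.29)  # darker forested surface

print(f"grass:  {t_grass:.1f} K")
print(f"forest: {t_forest:.1f} K")
print(f"albedo-only warming: {t_forest - t_grass:+.2f} K")
# Evapotranspiration (not modeled here) pushes the other way, which is why
# the net climate effect of afforestation is not obvious in advance.
```

Even a two-percentage-point darkening of the surface shifts the equilibrium temperature by a couple of degrees in this toy calculation; it is this sign of effect that Gibbard et al. quantify with a full general circulation model.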

    Indeed, this is another example which shows the complexity of the climate system. We need to broaden the assessment of first order climate effects beyond the radiative effects of increased atmospheric concentrations of CO2.

    December 11, 2005

    Comments on the UCAR Press Release on the Feddema et al. (2005) Science article

    Filed under: Climate Science Reporting — Roger Pielke Sr. @ 7:17 pm

NCAR released a press statement on the Feddema et al. (2005) paper that included the following text:

    “Taken together, the impacts of greenhouse gases around the globe should far outweigh the regional effects of land-cover change, according to Feddema. However, the regions with extensive agriculture and deforestation also tend to be highly populated, so the effects of land-cover change are often focused where people live.

    ‘Compared to global warming, land use is a relatively small influence. However, there are regions where it’s really important,’ he says.”

This statement raises the question of what climate metric is used to define “influence.” The metric used to reach this conclusion is the global average surface temperature. As Feddema et al. and others (including ourselves) have found, land-use/land-cover change has only a small effect on the global average temperature, since areas of warming and cooling average out. However, as shown in the Feddema et al. paper and elsewhere, regional circulations are very significantly altered.
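A small sketch makes the metric point concrete: impose regional warming and cooling of realistic magnitude on a latitude grid, and the area-weighted global mean nearly vanishes even though no affected region experiences a small change. The regional pattern below is invented for illustration, not taken from Feddema et al.

```python
import numpy as np

lats = np.linspace(-89.0, 89.0, 90)   # latitude band centers, degrees
weights = np.cos(np.deg2rad(lats))    # area weighting by latitude

change = np.zeros_like(lats)
change[(lats > 20) & (lats < 40)] = +1.5    # e.g., a deforested belt warms
change[(lats > -10) & (lats < 10)] = -1.2   # e.g., an irrigated belt cools

global_mean = np.average(change, weights=weights)
print(f"largest regional change:   {np.abs(change).max():.1f} K")
print(f"area-weighted global mean: {global_mean:+.2f} K")  # near zero
```

Judged by the global mean, the land-use forcing looks negligible; judged region by region, it is anything but.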

The NCAR press release did a disservice to a balanced presentation of the study by not describing the climate metric used to make the statement about the significance of the result.

The global surface temperature is a particularly poor metric for monitoring climate change (and even the subset of climate change that is meant by the term global warming) (e.g., see the November 1, 2005 weblog entitled “What is Meant by the ‘Global Surface-Averaged Temperature’?” and the July 28, 2005 weblog entitled “What is the Importance to Climate of Heterogeneous Spatial Trends in Tropospheric Temperatures?”).

We need new climate metrics. The 2005 NRC report specifically recognized this need as a priority recommendation, as excerpted below:

    “Encourage policy analysts and integrated assessment modelers to move beyond simple climate models based entirely on global mean TOA radiative forcing and incorporate new global and regional radiative and nonradiative forcing metrics as they become available.”
    (from http://www.nap.edu/books/0309095069/html/7.html)

The UCAR news release presented a biased perspective by not defining the metric used to make the global warming versus land-use change comparison. Most readers would (incorrectly) assume it applies to all climate effects.

The Feddema et al. paper is an important contribution toward explaining why we need new metrics, and its importance in climate science should not be understated. The IPCC community needs to address this needed broadening of the climate metrics used to assess climate change and variability.

    Weblog editor: Dallas Staley (dallas AT cires DOT colorado DOT edu)