
More errors identified in contrarian climate scientists' temperature estimates

Posted on 11 May 2017 by John Abraham

Human emissions of heat-trapping gases are causing the Earth to warm. We’ve known that for many decades. In fact, there are no reputable scientists who dispute this fact. There are, however, a few scientists who don’t think the warming will be very much, or that we should worry about it. These contrarians have been shown to be wrong over and over again, like in the movie Groundhog Day. And a new study just out shows they may have made yet another error. But, despite being wrong, they continue to claim Earth’s warming isn’t something to be concerned about.

Perhaps the darlings of the denialist community are two researchers out of Alabama (John Christy and Roy Spencer). They rose to public attention in the mid-1990s when they reportedly showed that the atmosphere was not warming but actually cooling. It turns out they had made some pretty significant errors, and when other researchers identified those errors, the corrected results showed warming.

To provide perspective, we know the Earth is warming because we can measure it. Most of the heat (93%) goes into the oceans, and we have sensors measuring ocean temperatures that show this. We also know about warming because we have thermometers and other sensors all over the planet measuring the temperature at the surface or in the first few meters of air above it. Those temperatures are rising too. We are also seeing ice melting and sea level rising around the planet.

So, the evidence is clear. What Christy and Spencer focus on are the temperatures measured far above the Earth’s surface, in the troposphere and the stratosphere. Over the past few decades, these two scientists have generally claimed that troposphere temperatures are not rising very rapidly. This argument has been picked up to deny the reality of human-caused climate change, but it has been found to be wrong.

What kinds of errors have been made? Well, first, let’s understand how these two researchers measure atmospheric temperatures. They are not using thermometers; rather, they use microwave signals from the atmosphere to deduce temperatures. The microwave sensors are on satellites that rapidly circle the planet.

Some of the problems they have struggled with relate to satellite altitudes (the satellites slowly fall over their lifetimes, and this orbital decay biases the readings); satellite drift (the orbits shift east-west a small amount, causing an error); the errant inclusion of stratosphere temperatures in the lower-atmosphere readings; and incorrect temperature calibration on the satellites. It’s pretty deep stuff, but I have written about the errors multiple times here, and here, for people who want a deeper dive into the details.

It’s important to recognize that there are four other groups that make similar measurement estimates, so it’s possible to compare the temperatures of one group against another. The new paper, completed by Eric Swanson and published by the American Meteorological Society, compares the results from three different groups. He focused on measurements made over the Arctic region. His comparison found two main differences among the three groups that point to the errors.

To better appreciate the issues, it helps to know that the satellites carry instruments called Microwave Sounding Units (MSUs) or, more recently, Advanced Microwave Sounding Units (AMSUs). These instruments allow reconstruction of the lower-troposphere temperature (TLT), the mid-troposphere temperature (TMT), and the lower-stratosphere temperature (TLS). But the measurements are not made at a specific location (like a thermometer’s); they are smeared out over large volumes of the atmosphere. As a consequence, one layer of the atmosphere can contaminate the results for another layer. You wouldn’t, for instance, want your measurement of the troposphere (lower atmosphere) to include part of the stratosphere (above the troposphere).
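To make the contamination point concrete, here is a minimal sketch with invented numbers; the weights and trends below are illustrative only, not the real MSU/AMSU weighting functions or observed trends:

```python
# Suppose a nominal lower-troposphere (TLT) channel draws 90% of its
# signal from the troposphere and 10% from the (cooling) stratosphere.
trop_trend = 0.18     # assumed tropospheric warming, C per decade
strat_trend = -0.30   # assumed stratospheric cooling, C per decade
w_trop, w_strat = 0.90, 0.10  # hypothetical channel weights (sum to 1)

blended = w_trop * trop_trend + w_strat * strat_trend
print(f"true tropospheric trend:      {trop_trend:+.3f} C/decade")
print(f"channel-weighted 'TLT' trend: {blended:+.3f} C/decade")
# The blend (+0.132 C/decade) understates the real tropospheric warming
# because stratospheric cooling leaks into the channel.
```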

Key among the differences between the research teams are the methods they use to minimize this contamination. According to the recent paper, which was published in January 2017:

Click here to read the rest



Comments

Comments 1 to 12:

  1. The most important point about the satellite data is that it is an attempt to model atmospheric temperatures. Once fully mature, this would be an invaluable tool to model temperatures throughout all strata of the atmosphere. Like all complex modeling, there is always room for incremental improvements on the basis of advances in the data and theoretical insights.

    The opposition often articulated between climate models and "the satellite data" completely skips the problem that all "raw" data must be integrated and interpreted within a framework (model) to tell us anything useful.

  2. Using satellite-measured microwave radiation to determine atmospheric temperatures runs into what is called "the inversion problem". I can't quickly find a definitive discussion of it, but a quick search produced this paper that mentions it in the abstract:

    http://www.sciencedirect.com/science/article/pii/002240737890136X

    The inversion problem can be summarized as this:

    • Radiation transfer theory is quite capable of taking a known set of atmospheric conditions (temperature, pressure, chemistry, etc.) and giving quite accurate estimates of the resulting radiative fluxes.
    • Going the other way - taking radiation measurements and trying to use a model to determine atmospheric conditions (in this case, temperature) - is much more difficult. The problem is usually "ill-conditioned": there are a lot of unknowns, and the model can be made to fit the measurements fairly well with a wide variety of closely-related input parameters that may or may not be known accurately.
    • For example, if my model says A+B=C, and I know A and B, it is easy to estimate C. On the other hand, if I measure C and don't know much about A and B, then it's really hard to say I know B with certainty. If B is what I am interested in, and I can find an independent way to know or estimate A, then I can learn about B by measuring C, but my estimate of B is highly dependent on how well I do with A. (A quick numerical sketch of this follows below.)
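    To make that concrete, here is a toy numerical sketch of the ill-conditioning; all numbers are invented, and it illustrates only the A+B=C point above, not any real retrieval:

    ```python
    import numpy as np

    # Toy inversion problem: we observe C = A + B (plus measurement noise)
    # and want B. One equation, two unknowns: the system is underdetermined,
    # so many (A, B) pairs fit the observation equally well.
    rng = np.random.default_rng(0)
    C_obs = 5.0 + rng.normal(0, 0.1)  # noisy measurement of C

    # A whole family of candidate solutions reproduces the measurement:
    for A in [1.0, 2.0, 3.0, 4.0]:
        print(f"A = {A:.1f}  ->  B = {C_obs - A:.2f}  (fits C exactly)")

    # Only an independent constraint on A pins B down, and any error in
    # that constraint propagates one-for-one into B:
    A_est, A_err = 2.0, 0.5  # independent estimate of A and its uncertainty
    print(f"B = {C_obs - A_est:.2f} +/- {A_err:.2f} (inherited from A)")
    ```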

      All the "corrections" to Spencer and Christy's results over time can be described as modified attempts to constrain the results based on improved understanding of either the models or the approximations needed to overcome the inversion problem. Spencer and Christy's track record - of having others find problems that need fixing - does not do them a lot of credit. Follow the link to the Grauniad's story to see the graph of Spencer and Christy's sequence of corrections to their work.

  3. There are errors associated with land and SST measurements too, of course. In fact, a thermometer enclosed within a solar shielding box near ground level only yields an approximation of air temperature, because the solar shield converts some shortwave radiation into LWR, and the ground and surrounding area also radiate LWR. The consequence is that LWR variables can contaminate climate data if they change over time.

  4. It's true that there are errors everywhere in measured data. But not all errors are equal. Is it just scatter? Or bias? Or something else? Kevin Cowtan's page at the University of York is interesting in that it shows at least the scatter part of each of the main data sets.

    University of York, Temperature Plotter

    Note that the nominal uncertainty of the warming rate calculated from the HadCRUT4 and GISTEMP data for the past 30 years is ±0.06 °C/decade, whereas the uncertainty for the RSS 3.3 TLT set is ±0.09 °C/decade and for UAH 6.0 TLT it is ±0.11 °C/decade. While the satellites are showing lower warming rates, it also appears they are struggling to achieve a consistent measurement.

  5. Knaugle: I'm afraid that's not a very good indication of the uncertainty in the temperature estimates. The uncertainty given by the temperature plotter is a measure of the 'wiggliness', or more precisely the deviation from linearity. So what it is actually telling you is that the satellite data are more wiggly.

    While uncorrelated errors in the data would contribute to wiggliness, so can other things; worse, long-term biases of the kind that might be present in the MSU data may not contribute at all. For example, a simple drift over time won't show up in the uncertainty, but will lead to a substantially wrong trend.

    What you can infer from the larger uncertainty in the trend tool is that the satellite data are more strongly influenced by El Niño, which also reduces their usefulness in detecting long-term trends (unless you do an analysis to remove the impact of El Niño). However, this has nothing to do with errors in the dataset.
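    A minimal sketch of that distinction, with invented numbers: adding a slow linear drift to a synthetic temperature series shifts the fitted trend by the drift rate, yet the residual-based uncertainty that a trend tool reports is unchanged.

    ```python
    import numpy as np

    # Synthetic 30-year annual series: trend plus random noise (all invented).
    rng = np.random.default_rng(1)
    years = np.arange(30.0)
    series = 0.017 * years + rng.normal(0, 0.1, years.size)

    def trend_and_stderr(t, y):
        # Ordinary least-squares slope and its residual-based standard error.
        X = np.vstack([t, np.ones_like(t)]).T
        coef, ss_res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = ss_res[0] / (len(y) - 2)
        return coef[0], np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))

    drifted = series - 0.005 * years  # add a spurious linear cooling drift
    for label, s in [("no drift  ", series), ("with drift", drifted)]:
        slope, err = trend_and_stderr(years, s)
        print(f"{label}: {slope:+.4f} +/- {err:.4f} C/yr")
    # Both report the same +/-, but the drifted trend is wrong by 0.005 C/yr:
    # this residual-based uncertainty cannot see that kind of bias.
    ```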

  6. If anything, the warming is greater than we measure. Sound strange? We should be estimating how much warmer the Earth is compared to where it would be if we were slowly sliding into a glacial period, as we should have been. Apparently the interglacial most similar to our present one in terms of the Milankovitch cycles is the one that occurred 400,000 years ago.

  7. Art Vandelay mentions that the sun screens around thermometers can still radiate LW radiation, and I think he is also referring to the urban heat island effect?

    I thought these things were well-known uncertainties that have been dealt with, and also relatively small biases. I thought the raw data were adjusted to ensure these sorts of things didn't bias temperatures upwards? Am I right? They are easy enough things to quantify, more so than problems with satellites.

    In comparison the satellite data seems to be full of controversies about possible biases and uncertainties, from what I read over at RC, so are more of an unknown quantity.

    It's also important to realise that problems with sun shading for thermometers would either be constant over the decades, and so unlikely to distort an actual changing temperature trend, or, if the sun shades have been improved, this would obviously improve the accuracy of the trend. Neither would cause a warming bias in the trend.

    The "law of large numbers" would probably cancel out some of the biases in the surface record, because of so many thermometers.

  8. Kevin@5, knaugle@4,

    Indeed. The noise in the linear regression residual (be it random noise or the red noise of the El Niño signal) - in fact, the linear regression itself does not represent the data variability across the timespan shown - is not the subject of this OP. The subject is the biases from imprecise modeling of TLT temperature. When TLT is partially contaminated with TLS (and stratospheric temperature is expected to fall), then we have trouble obtaining accurate and unbiased TLT data. That bias (along with the bias from satellite drift and the trouble of combining data from different satellites) is a bigger problem than random noise or El Niño red noise. Thermometers placed in Stevenson screens don't suffer from orbital decay if the screens stand steady.

  9. Nigelj #7:

    "The "law of large numbers" would probably cancel out some of the biases in the surface record, because of so many thermometers."

    Indeed! Tamino demonstrated that point very clearly in his blog post Warts and All six years ago. There is an impressive correlation between the temperature trends in five large gridboxes in Europe when using raw data only!

  10. Art Vandelay's description of the effects of a radiation shield is not correct. In addition to eliminating the heating effect of the sun, you also want to eliminate the normal surface IR imbalance.

    Under most conditions, IR is a net loss by the surface - the surface emits more than it receives from the sky. This might be close to zero with low overcast, but with clear skies the net loss will be well over 100 W/m2. Overall, net IR cools the surface. We do not want this effect on our air temperature measurement.

    The thermometer has an energy balance. If we look at the gain or loss of energy by the thermometer, there are three terms. The sum of the three tells you whether the thermometer is losing or gaining energy.

    1. Net radiation (solar + IR)
    2. Thermal heat gain/loss with the air
    3. Evaporative loss to the air (water evaporates from the surface of the thermometer, consuming latent energy which cools the thermometer).

    You need to end up as close as possible to net radiation = zero, so you want solar gain = 0 and net IR = 0. You need to keep the thermometer dry to eliminate evaporative cooling. Then the thermal gain/loss depends on whether the thermometer is warmer, cooler, or the same temperature as the air. If warmer, the thermometer will lose energy to the air and cool. If cooler, it will gain energy from the air and warm. At balance, the thermometer is equal to air temperature and neither cools nor warms, which is what we want. Now we have a measurement of air temperature.
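    As a toy numerical version of that balance (the exchange coefficient and radiation values below are invented, and a real thermometer is more complicated):

    ```python
    # Dry thermometer at steady state: convective exchange with the air
    # balances net radiation, i.e. h*(T_air - T) + R_net = 0.
    def equilibrium_temp(T_air, R_net, h=20.0):
        """Steady thermometer temperature in degrees C.
        h: assumed convective exchange coefficient, W/m^2 per K."""
        return T_air + R_net / h

    T_air = 20.0
    print(equilibrium_temp(T_air, R_net=0.0))     # shielded and dry: 20.0
    print(equilibrium_temp(T_air, R_net=100.0))   # unshielded in sun: 25.0
    print(equilibrium_temp(T_air, R_net=-100.0))  # clear-sky net IR loss: 15.0
    ```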

    The temperature is then referred to as the "dry bulb temperature". Why? Because if we add water to the mix, we add evaporative cooling. With evaporative cooling, the thermometer cools below air temperature until the heat gain from the now-warmer air exactly balances the rate of heat loss by evaporation, and a stable temperature is reached. That temperature is called the "wet bulb temperature", and it is a fundamental way of measuring the humidity of the air (in combination with the dry bulb temperature).

    Even if net radiation is not exactly zero, ventilation using a fan reduces its effects. Big thermometers are worse than small thermometers. In fact, if you use a fine-wire thermocouple (diameter typically 0.001"), then you don't even need a radiation shield or ventilation. Such thermocouples are not particularly robust, though.

    Art's speculation about LWR variables contaminating climate data is off base, as far as radiation shields are concerned, because the radiation shield isolates the thermometer from the surrounding net IR fluxes, just as it isolates it from the solar fluxes.

  11. Bob Loblaw @10, I think Art Vandelay was trying to allude to Watts' surface stations project. The idea is that the introduction of a cement slab or other artificial structure in the immediate vicinity of a meteorological station will contaminate the trend information. While direct IR absorption by the thermometer does not have any impact on that, it is certainly possible that such degradation of the site might have an effect. Indeed, the effect was quantified in Fall et al (2011). They state in the abstract:

    "This initial study examines temperature differences among different levels of siting quality without controlling for other factors such as instrument type. Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite‐signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications. Homogeneity adjustments tend to reduce trend differences, but statistically significant differences remain for all but average temperature trends. Comparison of observed temperatures with NARR [NorthAmerican Regional Reanalysis] shows that the most poorly sited stations are warmer compared to NARR than are other stations, and a major portion of this bias is associated with the siting classification rather than the geographical distribution of stations.  According to the best‐sited has no century‐scale trend."

    It should be noted that for the primary point of comparison with satellite data, i.e., daily mean temperatures, the "... overall mean temperature trends are nearly identical across site classifications".

    You have shown that Art Vandelay was wrong in assuming site degradation would affect thermometers via IR radiation to any significant degree, but it does affect the local air temperature that is measured at the site.

    Finally, I will note that homogeneity adjustments in the surface temperature record are conceptually equivalent to the adjustments made to the satellite record to ensure consistency between the records from different satellites.  The major difference is that in the surface record, the adjustment is not checked against the records of one or two other satellites, but against multiple nearby thermometer records, making the adjustment far more reliable.

  12. Tom:

    Art may be thinking of Watts' surface stations project, but his comment focused almost entirely on an erroneous description/understanding of radiation shields.

    The purpose of the radiation shield is to get the best measurement possible (within budget or practicality) of the air temperature at a specific height above a specific point.

    • That air temperature may vary horizontally, depending on the variability of the surface conditions.
    • That air temperature will always vary vertically, because that is the temperature gradient along which thermal energy is transported. I have measured several degrees of difference between heights of 0.5 m and 2 m - for example at night, with light winds and a strong inversion due to surface radiative cooling (IR losses). Roughly, the temperature gradient is proportional to the logarithm of height z, i.e. T = A*ln(z) + a constant. (Note: this is a very simplified version of the full math; a toy calculation follows below.)
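    A toy calculation with that simplified profile (the coefficient A and reference temperature below are invented):

    ```python
    import math

    A = 1.5      # hypothetical profile coefficient, degrees C per ln(metre)
    T_1m = 10.0  # hypothetical temperature at z = 1 m

    def T(z):
        # Simplified log profile quoted above: T = A*ln(z) + constant
        return A * math.log(z) + T_1m

    print(f"T(0.5 m) = {T(0.5):.2f} C, T(2 m) = {T(2.0):.2f} C, "
          f"difference = {T(2.0) - T(0.5):.2f} C")
    # With A = 1.5, the 0.5 m and 2 m levels differ by about 2 C -
    # the sort of near-surface gradient described in the comment.
    ```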

    The question of the effects of local surface variation is related to the question of whether or not the temperature we measure is representative of the temperature of the region. That is an entirely different question from "can we measure air temperature accurately?" I know from experience that it is possible to measure air temperature accurately enough to easily detect vertical differences in temperature of a few hundredths of a degree per metre.

    One reason routine meteorological air temperatures are measured at a height of 1.5-2 m is so that they average over some area. Rule of thumb: a sensor at 2 m height will respond to surface conditions 100-200 m upwind. That's why upwind surface conditions are a factor. We could avoid that by measuring close to the surface, but then we risk very local effects: think of the surface temperature difference between a concrete sidewalk and wet grass on a sunny summer afternoon. That's what Watts' group tried to look at. Fall et al is an OK paper, but Watts hasn't a clue about the physics.




