Making Sense of Sensitivity … and Keeping It in Perspective

Posted on 28 March 2013 by dana1981

Yesterday The Economist published an article about climate sensitivity – how much the planet's surface will warm in response to the increased greenhouse effect from a doubling of atmospheric CO2, including amplifying and dampening feedbacks.  For the most part the article was well-researched, with the exception of a few errors, like calling financier Nic Lewis "an independent climate scientist."  The main shortcomings in the article lie in its interpretation of the research that it presented.

For example, the article focused heavily on the slowed global surface warming over the past decade, and a few studies which, based on that slowed surface warming, have concluded that climate sensitivity is relatively low.  However, as we have discussed on Skeptical Science, those estimates do not include the accelerated warming of the deeper oceans over the past decade, and they appear to be overly sensitive to short-term natural variability.  The Economist article touched only briefly on the accelerated deep ocean warming, and oddly seemed to dismiss this data as "obscure."

The Economist article also referenced the circular Tung and Zhou (2013) paper we addressed here, and suggested that if equilibrium climate sensitivity is 2°C to a doubling of CO2, we might be better off adapting to rather than trying to mitigate climate change.  Unfortunately, as we discussed here, even a 2°C sensitivity would set us on a path for very dangerous climate change unless we take serious steps to reduce our greenhouse gas emissions.

Ultimately it was rather strange to see such a complex technical subject as climate sensitivity tackled in a business-related publication.  While The Economist made a good effort at the topic, their lack of expertise showed. 

For a more expert take on climate sensitivity, we re-post here an article published by Zeke Hausfather at the Yale Forum on Climate Change & the Media.

Climate sensitivity is suddenly a hot topic.

Some commenters skeptical of the severity of projected climate change have recently seized on two sources to argue that the climate may be less sensitive than many scientists say and the impacts of climate change therefore less serious: A yet-to-be-published study from Norwegian researchers, and remarks by James Annan, a climate scientist with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC).

While the points skeptics are making significantly overstate their case, a look at recent developments in estimates of climate sensitivity may help provide a better estimate of future warming. These estimates are critical, as climate sensitivity will be one of the main factors determining how much warming the world experiences during the 21st century.

Climate sensitivity is an important and often poorly understood concept. Put simply, it is usually defined as the amount of global surface warming that will occur when atmospheric CO2 concentrations double. These estimates have proven remarkably stable over time, generally falling in the range of 1.5 to 4.5 degrees C per doubling of CO2. Using its established terminology, IPCC in its Fourth Assessment Report slightly narrowed this range, arguing that climate sensitivity was "likely" between 2 C and 4.5 C, and that it was "very likely" more than 1.5 C.

The wide range of estimates of climate sensitivity is attributable to uncertainties about the magnitude of climate feedbacks (e.g., water vapor, clouds, and albedo). Those estimates also reflect uncertainties involving changes in temperature and forcing in the distant past. But based on the radiative properties of CO2, there is broad agreement that, all things being equal, a doubling of CO2 will yield a temperature increase of a bit more than 1 C if feedbacks are ignored. However, it is known from estimates of past climate changes and from atmospheric physics-based models that Earth’s climate is more sensitive than that. A prime example: Small perturbations in orbital forcings resulting in vast ice ages could not have occurred without strong feedbacks.

Water Vapor: Major GHG and Major Feedback

Water vapor is responsible for the major feedback, increasing sensitivity from 1 C to somewhere between 2 and 4.5 C. Water vapor is itself a powerful greenhouse gas, and the amount of water vapor in the atmosphere is in part determined by the temperature of the air. As the world warms, the absolute amount of water vapor in the atmosphere will increase and therefore so too will the greenhouse effect.

That increased atmospheric water vapor will also affect cloud cover, though impacts of changes in cloud cover on climate sensitivity are much more uncertain. What is clear is that a warming world will also be a world with less ice and snow cover. With less ice and snow reflecting the Sun’s rays, melting will decrease Earth’s albedo, with a predictable impact: more warming.

There are several different ways to estimate climate sensitivity:

  • Examining Earth’s temperature response during the last millennium, glacial periods in the past, or periods even further back in geological time, such as the Paleocene Eocene Thermal Maximum;
  • Looking at recent temperature measurements and data from satellites;
  • Examining the response of Earth’s climate to major volcanic eruptions; and
  • Using global climate models to test the response of a doubling of CO2 concentrations.

These methods produce generally comparable results, as shown in the figure below.

Figure from Knutti and Hegerl 2008.

The grey area shows IPCC’s estimated sensitivity range of 2 C to 4.5 C. Different approaches tend to obtain slightly different mean estimates. Those based on instrumental temperature records (e.g., thermometer measurements over the past 150 years or so) have a mean sensitivity of around 2.5 C, while climate models average closer to 3.5 C.

The ‘Sting’ of the Long Tail of Sensitivity

Much of the recent discussion of climate sensitivity in online forums and in peer-reviewed literature focuses on two areas: cutting off the so-called “long tail” of low-probability/high climate sensitivities (e.g., above 6 C or so), and reconciling the recent slowdown in observed surface warming with predictions from global climate models.

Being able to rule out low-probability/high-sensitivity outcomes is important for a number of reasons. For one, the non-linear relationship between warming and economic harm means that the most extreme damages would occur in very high-sensitivity cases (as Harvard economist Marty Weitzman puts it, “the sting is in the long tail” of climate sensitivity). Being able to better rule out low probability/high climate sensitivities can change assessments of the potential economic damages resulting from climate change. Much of the recent work arguing against very high-sensitivity estimates has been done by James Annan and Jules Hargreaves.

The relatively slow rate of warming over the past decade has lowered some estimates of climate sensitivity based on surface temperature records. While temperatures have remained within the envelope of estimates from climate models, they have at times approached the 5 percent to 95 percent confidence intervals, as shown in the figure below.

Figure from Ed Hawkins at the University of Reading (UK).

However, reasonably comprehensive global temperature records exist only since around 1850, and sensitivity estimates derived from surface temperature records can be overly sensitive to decadal variability. To illustrate that latter point, in the Norwegian study referred to earlier, an estimate of sensitivity using temperature data up to the year 2000 resulted in a relatively high sensitivity of 3.9 C per doubling. Adding in just a single decade of data, from 2000 to 2010, significantly reduces the estimate of sensitivity to 1.9 C.

There’s an important lesson there: The fact that the results are so sensitive to relatively short periods of time should provide a cautionary tale against taking single numbers at face value. If the current decade turns out to be hotter than the first decade of this century, some sensitivity estimates based on surface temperature records may end up being much higher.

So what about climate sensitivity? We are left going back to the IPCC synthesis: that it is “likely” between 2 C and 4.5 C per doubling of CO2 concentrations, and “very likely” more than 1.5 C. While different researchers have different best estimates (James Annan, for example, says his best estimate is 2.5 C), remaining uncertainties mean that estimates cannot yet be narrowed to a much more precise range.

Ultimately, from the perspective of policy makers and the general public, the impacts of climate change and the required mitigation and adaptation efforts are largely the same whether sensitivity is 2 C or 4 C per doubling of CO2, so long as carbon dioxide emissions keep rising quickly.

Just how warm the world will be in 2100 depends more on how much carbon is emitted into the atmosphere, and what might be done about it, than on what the precise climate sensitivity ends up being. A world with a relatively low climate sensitivity — say in the range of 2 C — but with high emissions and with atmospheric concentrations three to four times those of pre-industrial levels is still probably a far different planet than the one we humans have become accustomed to. And it’s likely not one we would find nearly so hospitable.

Comments 101 to 117 out of 117:

  1. I looked at the link. I was referring to the formula dT = climate sensitivity * dF.

  2. engineer,

    You might also want to look at this page, courtesy of Barton Paul Levenson.  I don't think it's been updated since 2007, so it lacks a good 5 years worth of further research, but it gives you some idea of the breadth of the work that's been done in the area, and how much the end results give basically the same answer.

    [Be wary of any study that gives too high or too low a climate sensitivity.  Like anything else, the outcome depends on underlying assumptions, and not all papers that are published withstand scrutiny forever.  In fact many are quickly refuted.  Peer-review is only the first hurdle.  A good example is Schmittner et al. (2011), which found a lower climate sensitivity than many, but also assumed a lower temperature change from the last glacial to the current interglacial -- a lower temperature change obviously will yield a lower sensitivity, so the question shifts more towards establishing the actual temperature change in order to arrive at the correct sensitivity... as well as recognizing that this was the sensitivity of the planet exiting a glacial phase.]

  3. I read through the link. The formula I was referring to was dT = climate sensitivity * dF.

    Hopefully this isn't a double post. I'm not sure what happened to my other one.

  4. ∆T = k log2(CO2_final / CO2_initial)

    Where k is the climate sensitivity in degrees C per doubling of CO2.

    I myself have never found the derivation for that, either. We at SkS should probably make a concerted effort to find it, as it would be well worth looking at and referencing.

    It may have arisen primarily from experimental observations, or else through "experimentation" using the MODTRAN line-by-line radiative transfer computations (developed by the US Air Force, one of the pioneers in this stuff, due to their interest in making infrared missiles work properly in the atmosphere).  If it was determined through physical principles, it would need to take into account the varying density of the atmosphere (with altitude), as well as the resulting variations in IR absorption and emission as balanced against the number of collisions per second with non-GHG molecules like O2 and N2 (and of course the number of collisions is affected by both density and temperature, i.e. the average velocity of each molecule).  Then there are other complications such as bandwidth overlaps with other greenhouse gases (like H2O), and broadening of the absorption spectrum (pressure broadening and doppler broadening).

    All in all, it's pretty complicated.

    I'll ask and see what people can turn up.

  5. [engineer -- Your other post just went onto the next page.]

  6. engineer --

    Spencer Weart has this reference to the first such calculation in 1967 by Manabe and Wetherald.

    You might want to look over this timeline.

    I'd also very strongly suggest reading Spencer Weart's The Discovery of Global Warming.  It's interesting reading, and it adds a lot of depth to both an understanding of the science and how old and broadly based climate science is.

  7. Just remember that formula is post-hoc. You get sensitivity out of a climate model run by solving for k = ΔT/ΔF, which gives you a useful way to estimate temperature for a given forcing. However, the GCMs do not derive temperature from that formula internally.

  8. thanks for the replies and links.


    I'm not sure if I'm understanding you: the climate models estimate the temperature increase from doubled CO2, and dividing that estimated increase by 3.7 W/m2 gives the climate sensitivity. So that equation is just the equation for a slope, i.e. rise over run, and it isn't directly used to calculate climate sensitivity from historical data. The reason I'm confused is because I think the wikipedia article on climate sensitivity says that the equation can be used directly, which would imply that there is a physical foundation for it.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia
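    As a sanity check, the arithmetic in that quoted passage can be reproduced in a few lines of Python (all values are taken directly from the quote, so this is only an illustration of the method, not an independent estimate):

```python
# Empirical climate sensitivity from the glacial-interglacial values quoted above.
delta_t = 5.0    # C, temperature change from ice cores
delta_f = 7.1    # W/m^2, change in forcing

k = delta_t / delta_f            # ~0.70 C per (W/m^2)
warming_2xco2 = k * 4.0          # for the ~4 W/m^2 of a CO2 doubling

print(round(k, 2))               # 0.7
print(round(warming_2xco2, 1))   # 2.8, i.e. roughly the 3 C the quote predicts
```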

  9. engineer

    There are two parts to this. Calculating the change in Radiative Forcing at the Top of Atmosphere (TOA) due to a change in GH gases etc - essentially the change in the Earth's energy balance. Then calculating  the temperature change expected to result as a consequence of that.

    The standard formula used for the radiative imbalance change is

    ΔF = 5.35 ln(C/C0), where C0 is your reference CO2 concentration and C is the concentration you are comparing it to. The usual C0 chosen is the pre-industrial value of around 275 ppm. This formula is from Myhre et al. (1998) and was included in the IPCC's Third Assessment Report (TAR).

    So a doubling is 5.35 ln(2), or 3.7 W/m2.
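    That formula is easy to check numerically; here is a minimal Python sketch (the particular concentrations are just illustrative choices):

```python
import math

def co2_forcing(c, c0=275.0):
    """Radiative forcing in W/m^2 from the Myhre et al. (1998) logarithmic fit."""
    return 5.35 * math.log(c / c0)

# Any doubling gives the same forcing, since only the ratio C/C0 matters:
print(round(co2_forcing(550.0), 2))          # 3.71 W/m^2
print(round(co2_forcing(800.0, 400.0), 2))   # 3.71 W/m^2
```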

    This formula is in turn a regression curve fit to the results from a number of radiative transfer codes. These are programs that perform numerical solutions to the equation of radiative transfer. Essentially they divide the atmosphere up into lots of layers and calculate the radiation fluxes between each layer, taking into account the properties of each layer - temperature, pressure, gas composition etc - and the transmission, absorption, emission and scattering of EM radiation in each layer, based on detailed spectroscopic data for each of the gases present from databases such as HITRAN. They perform this calculation, summing what is happening to each layer, either for each single spectral line - a large computational task - or by dividing the spectra up into small bands. The accuracy of these programs has been well demonstrated since the early 1970s.

    It is important to understand that these are not climate models. They perform a single, although large, calculation of the radiative state of a column of air at one instant, based on the physical properties of that air column at that instant.

    The second stage of the problem is to work out how temperatures change based on the radiative change. Back-of-an-envelope calculations can get you into the ballpark, which is what people did up until the 1960s. The very first, extremely simple climate models assumed a CS value. Current climate models, which are far from simple, now actually derive the CS as a result of the model. The radiative changes are fed into the model, along with lots of known physics - conservation of energy, mass & momentum; thermodynamics; cloud physics; meteorology; ice behaviour; atmospheric chemistry; carbon cycle chemistry; ocean models etc. These are then left to run, to see how the system evolves under the calculations. The result then, among other things, indicates the CS value.

    Climate models, however, are not the only other way to estimate CS. The Wiki entry you cite gives another example, of a class of examples that are probably better than the climate models - the behaviour of past climates. In order to determine CS you don't have to have just a CO2 change. Anything that will produce a forcing change - volcanic activity, dust, changes in solar output - will provide data points to amass a broad estimate of what CS actually is.

    One trap to watch out for is that CS isn't always expressed the same way. Usually it is expressed as 'deg C per doubling of CO2', but sometimes in the literature it is expressed as 'deg C per W/m2 of forcing'.

    So what we are looking for is multiple evidence streams indicating similar values for CS. And broadly they do. Although these estimates often have longer tails of possible outlier values, the central point of the probability distribution of the results from most sources - the majority of them derived from observations of present and past climate - sits fairly strongly at around the 3-3.5 C range.

    Hope this helps.

  10. Glenn has answered a lot of your questions, but the confusion is about how to use it. Once you know (or have estimated) a climate sensitivity, then you can use it to calculate deltaT directly. However, you need the full-blown GCM to derive the climate sensitivity in the first place. This is the reason behind the debate on CS. Estimates can be made empirically from paleoclimate or, more commonly, from the models, but you have a range of values coming from those, with most clustering between 2.5 and 3. The key to CS is the feedbacks. By itself a 3.7 W/m2 TOA forcing gives you 1.2 C of temperature rise. However, with a temperature rise you immediately have feedback from increased water vapour. In the slightly longer term you get feedback from albedo (particularly change in ice), and on longer timescales you have temperature-induced increases in CO2 and CH4 from a variety of sources. Add into the equation change in cloudiness with temperature (and whether this is low-level cloud or high-level cloud) and you start to get a feel for the complexity of GCMs.
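    A minimal Python sketch of using the post-hoc formula dT = k * dF once a sensitivity is in hand; the 3.7 W/m2 doubling forcing converts between the 'per doubling' and 'per W/m2' conventions (the sensitivity values are illustrative):

```python
F_2XCO2 = 3.7   # W/m^2, forcing from a doubling of CO2

def warming(delta_f, sensitivity_per_doubling):
    """Linear post-hoc estimate dT = k * dF, with k in C per (W/m^2)."""
    k = sensitivity_per_doubling / F_2XCO2
    return k * delta_f

print(round(warming(3.7, 1.2), 1))  # 1.2 C: no-feedback (Planck) response
print(round(warming(3.7, 3.0), 1))  # 3.0 C: with feedbacks included
```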

  11. engineer,

    I'd just like to add that Neal King has (offline) pointed out that this was previously discussed on this same thread, at comments 73, 76 and 78.

    Offline, he also pointed out that:

    ...the explanation from Pierrehumbert is that the radiative forcing is due to the change in flux when the critical point (at which the optical depth, as measured from outer space downward, reaches the value 1: Photons emitted upward from this point will escape, so this defines an effective photosphere for the given frequency.) changes its altitude.

    This would greatly simplify the calculation problem.

    I may pursue this further myself, if I can find the time... it's a very interesting question.  In particular, it's about time I plunked down the cash on Ray Pierrehumbert's text book Principles of Planetary Climate, and perhaps John Houghton's The Physics of Atmospheres.

  12. engineer,

    After a brief exchange with Dr. Ray Pierrehumbert at the University of Chicago, I was directed to his 2007 post at Real Climate titled What Ångström didn’t know, wherein he basically presents the derivation in plain English (no math).  To supplement that, I'd also suggest doing some research on optical thickness and the Beer-Lambert law.  If you have the chops for it, the Science of Doom website has some very good explanations (warning: math!) of a lot of things.

  13. @ Glenn and Scaddenp

    Thanks. The equation I'm really curious about though is the one that relates forcing to temp:

    ∆T = k * ∆F, where k is climate sensitivity. Scaddenp said that this is a post-hoc formula. However, at least according to wikipedia, the formula can be/is used to calculate k directly from empirical data, which would suggest (to me at least) that the formula is based on physical principles.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia

    The reason I'm curious where ∆T = k * ∆F came from is because it's a linear relationship. I might be reaching here, but just looking at the Stefan-Boltzmann equation I would have guessed the relationship between ∆T and ∆F would be nonlinear. If ∆T = k * ∆F is just a post-hoc formula as Scaddenp stated, that would explain a lot; but as wiki states, the formula is used to empirically derive sensitivity, which implies to me a physical foundation for the equation. If it is just a post-hoc formula, why is it valid to use it to directly derive sensitivity empirically? Sorry for the long post.

    @ Sphaerica I'll try to dig through the links you provided thanks.

  14. engineer,

    wikipedia.... bleh.  It's good for some things, as an introduction to concepts, but I wouldn't for a minute use it to learn real climate science.

    Stefan-Boltzmann... not for small values of ∆T.  For example, the energy received by the Earth from the sun (approx 239 W/m2) translates to a temperature of 255 K.  Here's a graph of the relationship (temperature at the bottom) for temperatures near those at the surface of the earth.  Notice that it is, for all intents and purposes, linear in that small range.

    "Empirically" means from data, from observations.  Again, follow the links I already gave you and look at how they do it by measuring the response of global temperatures to a major volcanic eruption (effectively reducing solar input by a measurable amount), or by studying the transition from the last glacial as in the example given by wikipedia.

    The fact that it is linear (or near linear) is almost required.  Without a linear relationship you'd too easily get a runaway effect, or a climate so stable that it would not demonstrate the volatility that we see in the history of Earth's climate.  Another way to look at it: the Earth's climate (normally, naturally) never varies by all that much over short periods of time (where short equals thousands or tens of thousands of years).  There's just not much room for anything but a relationship that is, for all intents and purposes, linear.

    To repeat, while the climate sensitivity is from physical mechanisms, none of these are so simple as to be modeled with very simple mathematics.  The melting of the ice sheets, the browning of the Amazon, natural changes in CO2, etc., etc., are all complex natural processes.  There's just no way to mathematically derive climate sensitivity short of the (clever) variety of methods used, including observations, paleoclimate data, and models.  Again... follow the links, and read up on feedbacks.
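    That near-linearity over small ∆T can be checked directly: invert F = σT^4 exactly for a 3.7 W/m2 forcing step and compare with the linear estimate dT = T/(4F) * dF. A short no-feedback Python sketch, assuming the 255 K effective emission temperature discussed above:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

F0 = 239.0                      # W/m^2, absorbed solar flux
T0 = (F0 / SIGMA) ** 0.25       # ~255 K effective emission temperature

# Exact temperature change for a 3.7 W/m^2 increase in emitted flux:
exact = ((F0 + 3.7) / SIGMA) ** 0.25 - T0

# Linear (derivative-based) estimate, dT = T/(4F) * dF:
linear = T0 / (4 * F0) * 3.7

print(round(exact, 3), round(linear, 3))  # both ~0.98-0.99 C; nearly identical
```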

  15. Engineer - models have to be verified. The physics and the numerical methods are both complex. You can run the models for past conditions of the Earth and get a climate sensitivity from the model. You can also determine the number empirically, just as you illustrated, given ΔF and ΔT. You have confidence that your estimates of climate sensitivity are good if both the model-determined sensitivity and the empirically-determined sensitivity match reasonably well. For empirical determinations you are assuming a linear relation, but models don't. It just turns out from the models that the relationship is close enough to linear for a relatively small change in deltaT. As said in an earlier post, it would not remain that way for very large changes.

    It appears that there is no possibility of a "runaway" greenhouse on Earth (the oceans boil) without a hotter sun, which will happen some time in the deep future. However, in that situation, the change in sensitivity with increasing deltaF becomes seriously non-linear.

  16. Engineer @113.

    Restating the message of Sphaerica @114 - I would mention that the Stefan-Boltzmann equation yields a cubic relationship, as it is the derivative being used:

    ΔT = ΔF / (4σT^3)

    As T is in kelvin, even what would be a big change for Earth's climate results in a small theoretical change in the T^3 term - e.g. 255 K +/- 5 K would result in only around a 6% change in sensitivity.

    The big changes in sensitivity, as described @114, come not from the basic physics but from the climate system. When the temperature change is large - when our planet is pushed towards becoming a 'snowball' or a 'steamed doughnut' - that is when sensitivity really starts to change in value. Hansen & Sato 2012 (discussed by SkSci here) show sensitivity more than doubling for such extremes.

  17. "Notice that it is for all intents and purposes, in that small range, linear." You're right, but the global climate is so complex. It's still not sitting well with me so I decided to try and derive it myself. Take the following with a grain of salt. I might have done something wrong or interpreted something wrong.

    Anyway, using the energy balance...

    Power in = (1-α) πR^2 F_0

    Power out = 4πR^2 ϵσT^4, where ϵ is the emissivity

    Thus F = ϵσT^4, where F = (1-α)F_0/4

    Thus T = [F/(ϵσ)]^(1/4)

    The total derivative is: dT/dF = ∂T/∂F + (∂T/∂ϵ)(dϵ/dF)

    substituting and simplifying:

    dT/dF = T/(4F) - (T/(4ϵ))(dϵ/dF), where the term (T/(4ϵ))(dϵ/dF) is from feedbacks.

    Thus, the linear approximation for climate sensitivity per doubling of CO2 (taking ΔF for a doubling as roughly 4 W/m2) is k ≈ 4*dT/dF ≈ T/F.

    With this linear approximation we're assuming that for small changes in temp the term (T/(4ϵ))(dϵ/dF) is almost constant or negligible. Feedbacks aren't negligible, so we're arguing it's almost constant. Thus, we're assuming

    dϵ/dF ≈ C*ϵ/T, where C is a constant. I still don't like this approximation. I guess I have to do more reading. Again, I might have done something wrong or interpreted something wrong, so take the stuff above with a grain of salt. Thanks
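    One way to sanity-check the partial derivative ∂T/∂F = T/(4F) used in that derivation is a numerical central difference (pure Python; the specific F and ϵ values are illustrative, with ϵ held fixed):

```python
def temp(f, eps, sigma=5.67e-8):
    """T = [F / (eps * sigma)]^(1/4) from the energy balance above."""
    return (f / (eps * sigma)) ** 0.25

f, eps, h = 239.0, 1.0, 1e-4

# Central-difference estimate of the partial dT/dF at fixed emissivity:
numeric = (temp(f + h, eps) - temp(f - h, eps)) / (2 * h)

# Analytic partial from the derivation: T/(4F)
analytic = temp(f, eps) / (4 * f)

print(abs(numeric - analytic) < 1e-6)  # True
```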


© Copyright 2019 John Cook