
Making Sense of Sensitivity … and Keeping It in Perspective

Posted on 28 March 2013 by dana1981

Yesterday The Economist published an article about climate sensitivity – how much the planet's surface will warm in response to the increased greenhouse effect from a doubling of atmospheric CO2, including amplifying and dampening feedbacks.  For the most part the article was well-researched, with the exception of a few errors, like calling financier Nic Lewis "an independent climate scientist."  The main shortcomings in the article lie in its interpretation of the research that it presented.

For example, the article focused heavily on the slowed global surface warming over the past decade, and a few studies which, based on that slowed surface warming, have concluded that climate sensitivity is relatively low.  However, as we have discussed on Skeptical Science, those estimates do not include the accelerated warming of the deeper oceans over the past decade, and they appear to be overly sensitive to short-term natural variability.  The Economist article touched only briefly on the accelerated deep ocean warming, and oddly seemed to dismiss this data as "obscure."

The Economist article also referenced the circular Tung and Zhou (2013) paper we addressed here, and suggested that if equilibrium climate sensitivity is 2°C to a doubling of CO2, we might be better off adapting to rather than trying to mitigate climate change.  Unfortunately, as we discussed here, even a 2°C sensitivity would set us on a path for very dangerous climate change unless we take serious steps to reduce our greenhouse gas emissions.

Ultimately it was rather strange to see such a complex technical subject as climate sensitivity tackled in a business-related publication.  While The Economist made a good effort at the topic, their lack of expertise showed. 

For a more expert take on climate sensitivity, we re-post here an article published by Zeke Hausfather at the Yale Forum on Climate Change & the Media.


Climate sensitivity is suddenly a hot topic.

Some commenters skeptical of the severity of projected climate change have recently seized on two sources to argue that the climate may be less sensitive than many scientists say and the impacts of climate change therefore less serious: A yet-to-be-published study from Norwegian researchers, and remarks by James Annan, a climate scientist with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC).

While the points skeptics are making significantly overstate their case, a look at recent developments in estimates of climate sensitivity may help provide a better estimate of future warming. These estimates are critical, as climate sensitivity will be one of the main factors determining how much warming the world experiences during the 21st century.

Climate sensitivity is an important and often poorly understood concept. Put simply, it is usually defined as the amount of global surface warming that will occur when atmospheric CO2 concentrations double. These estimates have proven remarkably stable over time, generally falling in the range of 1.5 to 4.5 degrees C per doubling of CO2. Using its established terminology, the IPCC in its Fourth Assessment Report slightly narrowed this range, arguing that climate sensitivity was “likely” between 2 C and 4.5 C, and that it was “very likely” more than 1.5 C.

The wide range of estimates of climate sensitivity is attributable to uncertainties about the magnitude of climate feedbacks (e.g., water vapor, clouds, and albedo). Those estimates also reflect uncertainties involving changes in temperature and forcing in the distant past. But based on the radiative properties of CO2, there is broad agreement that, all things being equal, a doubling of CO2 will yield a temperature increase of a bit more than 1 C if feedbacks are ignored. However, it is known from estimates of past climate changes and from atmospheric physics-based models that Earth’s climate is more sensitive than that. A prime example: the small perturbations in orbital forcings that pace the ice ages could not have produced such vast changes without strong feedbacks.
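
To make the "bit more than 1 C" figure concrete, here is a minimal back-of-envelope sketch in Python. It combines the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² with the Planck-only response λ0 = 1/(4σT³) evaluated at Earth's effective emission temperature; the 255 K value and both formulas are common textbook approximations, not anything specific to this article.

    # Back-of-envelope, no-feedback warming from a doubling of CO2.
    # Assumptions: simplified forcing formula and a Planck-only response
    # evaluated at an effective emission temperature of about 255 K.
    import math

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFF = 255.0     # effective emission temperature of Earth, K

    def forcing(c_ratio):
        """Radiative forcing (W/m^2) for a CO2 concentration ratio C/C0."""
        return 5.35 * math.log(c_ratio)

    def planck_response(delta_f, t=T_EFF):
        """No-feedback temperature change: dT = dF / (4 * sigma * T^3)."""
        return delta_f / (4.0 * SIGMA * t**3)

    dF = forcing(2.0)             # doubling of CO2
    dT0 = planck_response(dF)
    print(f"Forcing for 2xCO2:   {dF:.2f} W/m^2")   # ~3.7 W/m^2
    print(f"No-feedback warming: {dT0:.2f} C")      # ~1.0 C

With amplifying feedbacks included, this roughly 1 C Planck-only response is scaled up into the 1.5 to 4.5 C range discussed in the rest of the article.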

Water Vapor: Major GHG and Major Feedback

Water vapor is responsible for the major feedback, increasing sensitivity from 1 C to somewhere between 2 and 4.5 C. Water vapor is itself a powerful greenhouse gas, and the amount of water vapor in the atmosphere is in part determined by the temperature of the air. As the world warms, the absolute amount of water vapor in the atmosphere will increase and therefore so too will the greenhouse effect.

That increased atmospheric water vapor will also affect cloud cover, though impacts of changes in cloud cover on climate sensitivity are much more uncertain. What is clear is that a warming world will also be a world with less ice and snow cover. With less ice and snow reflecting the Sun’s rays, melting will decrease Earth’s albedo, with a predictable impact: more warming.

There are several different ways to estimate climate sensitivity:

  • Examining Earth’s temperature response during the last millennium, glacial periods in the past, or periods even further back in geological time, such as the Paleocene Eocene Thermal Maximum;
  • Looking at recent temperature measurements and data from satellites;
  • Examining the response of Earth’s climate to major volcanic eruptions; and
  • Using global climate models to test the response of a doubling of CO2 concentrations.

These methods produce generally comparable results, as shown in the figure below.


Figure from Knutti and Hegerl 2008.

The grey area shows IPCC’s estimated sensitivity range of 2 C to 4.5 C. Different approaches tend to obtain slightly different mean estimates. Those based on instrumental temperature records (e.g., thermometer measurements over the past 150 years or so) have a mean sensitivity of around 2.5 C, while climate models average closer to 3.5 C.

The ‘Sting’ of the Long Tail of Sensitivity

Much of the recent discussion of climate sensitivity in online forums and in peer-reviewed literature focuses on two areas: cutting off the so-called “long tail” of low-probability/high climate sensitivities (e.g., above 6 C or so), and reconciling the recent slowdown in observed surface warming with predictions from global climate models.

Being able to rule out low-probability/high-sensitivity outcomes is important for a number of reasons. For one, the non-linear relationship between warming and economic harm means that the most extreme damages would occur in very high-sensitivity cases (as Harvard economist Marty Weitzman puts it, “the sting is in the long tail” of climate sensitivity). Being able to better rule out low probability/high climate sensitivities can change assessments of the potential economic damages resulting from climate change. Much of the recent work arguing against very high-sensitivity estimates has been done by James Annan and Jules Hargreaves.

The relatively slow rate of warming over the past decade has lowered some estimates of climate sensitivity based on surface temperature records. While temperatures have remained within the envelope of estimates from climate models, they have at times approached the 5 percent to 95 percent confidence intervals, as shown in the figure below.


Figure from Ed Hawkins at the University of Reading (UK).

However, reasonably comprehensive global temperature records exist only since around 1850, and sensitivity estimates derived from surface temperature records can be overly sensitive to decadal variability. To illustrate that latter point, in the Norwegian study referred to earlier, an estimate of sensitivity using temperature data up to the year 2000 resulted in a relatively high sensitivity of 3.9 C per doubling. Adding in just a single decade of data, from 2000 to 2010, significantly reduces the estimate of sensitivity to 1.9 C.

There’s an important lesson there: The fact that the results are so sensitive to relatively short periods of time should provide a cautionary tale against taking single numbers at face value. If the current decade turns out to be hotter than the first decade of this century, some sensitivity estimates based on surface temperature records may end up being much higher.

So what about climate sensitivity? We are left going back to the IPCC synthesis, that it is “likely” between 2 C and 4.5 C per doubling of CO2 concentrations, and “very likely” more than 1.5 C. While different researchers have different best estimates (James Annan, for example, says his best estimate is 2.5 C), uncertainties still mean that the estimate cannot be narrowed to a much more precise range.

Ultimately, from the perspective of policy makers and the general public, the impacts of climate change and the required mitigation and adaptation efforts are largely the same in a world of 2 or 4 C per doubling of CO2 concentrations where carbon dioxide emissions are rising quickly.

Just how warm the world will be in 2100 depends more on how much carbon is emitted into the atmosphere, and what might be done about it, than on what the precise climate sensitivity ends up being. A world with a relatively low climate sensitivity — say in the range of 2 C — but with high emissions and with atmospheric concentrations three to four times those of pre-industrial levels is still probably a far different planet than the one we humans have become accustomed to. And it’s likely not one we would find nearly so hospitable.

Comments

Comments 51 to 100 out of 117:

  1. Just a heads-up for the commenters in general.

    Elsa has a habit of capturing the first post on more than a few threads on various climate-related online fora.  I have a strong suspicion that it's not just simple chance...

    Beyond that I will not speculate as to motive.

    0 0
  2. I'm a little surprised that my interpretation of this Economist article was considered too harsh by others up thread. For the record, rather than to convince folk, here is my counter-comment.

    I think all agree the article is mainly factual although it does kick-off in error (the 15-year flat temperature myth) which is never a good sign. In my view its analysis is straightforwardly denialist all the way down to near the end, with but the occasional grudgingly-provided caveat (eg 'if deep oceans are heating, it may help explain things') suggesting otherwise.

    The end could perhaps have redeemed it, the end being its conclusion, its take-away message, except it doesn't recap its coverage and pronounce judgement. Rather it goes unsignalled into 'warning mode' in the final two paragraphs, the last being:

    Since CO₂ accumulates in the atmosphere, this (ie, 1,500GtC total emissions) could increase temperatures compared with pre-industrial levels by around 2°C even with a lower sensitivity and perhaps nearer to 4°C at the top end of the estimates. Despite all the work on sensitivity, no one really knows how the climate would react if temperatures rose by as much as 4°C. Hardly reassuring.

    The numbers used in the first sentence are low (ECS=1.6°C and “at the top end of the estimates” 3.2°C) although for the lower one that was the point. The second sentence comes from nowhere. What is the relevance of work on sensitivity to predicting the climate after 4°C of warming? And if “no one really knows,” how come they ran an article on it two years back?

    Then the final comment - is it meant to be ambiguous? Depending on its missing noun, it can easily be read to mean anything from 'all climatology is inept' to 'mankind is in deep do-do'.

    In the paper version of The Economist, this article is a tight fit onto the page. It may have suffered some severe editing to get it to fit. Still no excuse though.

    0 0
  3. I think it would be easier for elsa to come out from under the bridge and explain why she/he believes that an increase in the insulative properties of the atmosphere will not increase global temperatures. The analogy of wearing a coat on a cold day should be something he/she has experience of, so why would anybody think the atmosphere wouldn't be governed by similar physics?

    0 0
  4. Rethinking my post (#39) and gpwayne's (#47), I think we now have a pretty unassailable case against the lukewarmers - the only part of the scepticosphere that still has a leg to stand on.

    If +2°C is regarded as dodgy and +4°C is definitely dangerous, then even if Lindzen, Spencer and Christy's  (even Monckton's!!!) admitted ~1°C figure is correct, the second doubling to 1120 ppm puts us in trouble. As I pointed out before, if the world thinks that sensitivity is as low as the minority suggest, it makes it far more likely that little or nothing will be done to mitigate, or even stop the growth of, emissions.

    The only shred of Lindzen's ideas left to deal with is his contention that, although the basic no-feedback CO2 warming is ~1°C, there is a negative feedback from clouds, based on his work with tropical clouds. Show that, even if that holds for the narrow band in the tropics, the probability is that it does not hold for the majority of the planet, particularly the polar regions, and the remaining scepticosphere case collapses entirely.

    That may not be enough to fully convince the voting public who have been relentlessly propagandised by experts. The standard infuriatingly deceptive but dumb denialist memes, that the general public are pretty receptive to, should be handled by large full-page press ads in high-circulation newspapers with the memes (it's the sun, it's not warming etc) down one side and SkS-type rebuttals down the other.

    0 0
  5. Sphaerica

    You say "The relationship between CO2 and temperature is:

    ∆T = K log2(CO2final/CO2initial)"

    Let us examine this.  We are looking at what would happen if the concentration of CO2 doubled so I guess CO2 final/CO2 initial = 2

    The log to base 2 of 2 = 1

    So what you say is that the change in temperature is K


    This is unhelpful to say the least.  What we need (-snip-) is a value for delta T that we can compare with actual temperatures and concentrations of CO2.  So please, what is that equation?

    0 0
    Moderator Response: [DB] Off-topic snipped.
  6. elsa,

    K = climate sensitivity, the degree to which the climate responds to a CO2 forcing or any other forcing.  The forcing from a doubling of CO2 is 3.7 W/m2, and the value of K in this equation is between 2.0 and 4, with the most likely value being 3 or 3.5.

    A lot of people are currently engaged in estimating climate sensitivity using a very wide variety of methods.

    See here and here.

    Current estimates of climate sensitivity are in the range of 2˚C to 4.5˚C, although based on the way the earth is currently responding to a mere 0.8˚C of warming to date, it seems that even if we luck into a climate sensitivity of 1.6˚C (which is highly unlikely) we will still regret it dearly.

    Note: All of your posts are off topic. If you wish to discuss the basics of climate sensitivity (as opposed to the specific issues of climate sensitivity raised by the Economist article), post your comment on one of the two threads linked above. If you wish to ask questions about the physics of CO2, post your question here. Many people will be more than happy to answer your questions and help you to learn more about the science, to correct your misconceptions, however it is important not to clutter the wrong threads with random diversions.
    0 0
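
To connect the K values in the comment above with elsa's request for something that can be compared with "actual temperatures and concentrations of CO2", here is a minimal sketch. The inputs are illustrative assumptions: a pre-industrial baseline of 280 ppm, a concentration of roughly 395 ppm (about the value when this thread was written), and K = 3 C per doubling; the result is an equilibrium figure, not the warming realised so far.

    # Equilibrium warming implied by Delta-T = K * log2(C / C0).
    # Illustrative inputs: C0 = 280 ppm (pre-industrial), C ~ 395 ppm (~2013),
    # K = 3 C per doubling (a mid-range sensitivity estimate).
    import math

    def equilibrium_warming(c_now, c0=280.0, k=3.0):
        return k * math.log2(c_now / c0)

    print(f"{equilibrium_warming(395.0):.2f} C")   # ~1.5 C at equilibrium

The ~0.8 C observed so far is smaller than this partly because the deep oceans are still taking up heat, so the equilibrium response has not yet been realised.
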
  7. elsa @55

    I do not see why you would consider it unhelpful to use a base 2 log. Would you rather increase complication by using a different base? If so, try:-

    ΔF = 5.35 ln(C/Co) W/sq m

    There is no argument about this, even from denialists like Lindzen, Spencer & Christy mentioned above (although to include the Mad March Monckton may be a step too far). Your inability to accept that CO2 can be a cause of significant climatic change would be easier to address if you explained your position on this, rather than us trying to give you a crash course in basic science. The present process here is being so poorly directed because it is being directed by you.

    So what is your problem with CO2 being the most significant LLGHG?

    0 0
  8. @52 MA Rodger

    I have to agree: For the huge majority of general readers, the middle-of-the-roaders, the take-home message of both the article and the video @45 (entitled "Global Warming Slows Down", misleadingly, as shown by the ocean warming) will be that there is less need to worry and therefore less need to act.  This is exactly the opposite of the truth but it is the message of the article.

    To discuss ECS so deeply in the context of The Economist provides effective distraction from the core problem of policy inaction, something one can be sure many of its readers do not wish to see.  The second video interviewee does say that the ECS discussion might be meaningful if policy had been delivering action but that is not the 'take home', especially when the first interviewee jumps in to say that policy-makers prefer to look at the next decade or decades – as if the future of 2100 is of little concern.

    One trouble is that knowledgeable readers can look at the article as relatively balanced  compared to so many poor ones generally.  Indeed such readers may be gratified that the detail of the science is finally being discussed for a general readership.  This can please the knowledgeable and serve to blunt critique from them. 

    However, by focusing on scientific uncertainty in ECS values, and avoiding discussion of the complete lack of action by policy makers and developed nations, the article's impact on most readers will be to confirm their economic preference to delay action.  The policy makers and developed nations, along with major corporations and rich individuals, all comprising the target market of The Economist, will be well pleased with this approach.

    In the context of economics and The Economist, the ECS discussion is a denialist distraction that works to delay support for action.  Discussing scientific 'uncertainty' translates in many readers' minds to 'policy uncertainty', which is code for doing nothing for now until things are more certain.  This is how 'lukewarmist' denial works in the media to support continued inaction.

    Scientists and climate policy experts need to start talking to media about certainty in the sense that media and the public understand.  If the minimum ECS is 2ºC and the current path is to at least 3×CO2 by 2100 then the experts need to say, "On our current path it is certain that we will reach 6ºC and that will be utterly catastrophic for human civilisation" or "If all emissions stopped then warming would stop".   Science experts need to state that the uncertainty they are referring to has nothing to do with policy failures or the urgent need for large-scale mitigation action.

    Scientists have to start learning how to communicate definitively in public rather than being led into discussing uncertainty – even though that is in fact their area of expertise!

    Paul  - www.climie.blogspot.ie

    0 0
    Nick @54 - see A Glimpse at Our Possible Future Climate, Best to Worst Case Scenarios, which is relevant to the point you're making.

    0 0
    The video that dana1981 linked @45 gives us the name of the author of the article - John Parker, Globalisation Editor of The Economist (although in their list of journalists he is listed as Energy & Environment Editor; this would be a recent appointment, as James Astill held that brief from 2011). Parker doesn't seem too fluent with his message in the video & has colleague Oliver Morton riding shotgun for him.

    And in the video Parker kicks off again telling us temperatures have plateaued for at least the last ten years and possibly the last fifteen. This is bonkers. Temperatures, say the GISS 5-year average, have 'plateaued' between 2003 and 2010. Yet the creation of that plateau only begins in 2008. Until the 2008 temperature arrives to influence the 5-year average of 2006, the data is wholly consistent with a continuation of the accelerating temperature rise. The plateau or flat or pause or whatever is only 5 years old, going on 6. It is never "possibly fifteen"!!

    There is also a further video featuring John Parker from December 2012, this time putting the questions to fellow Economist journos Oliver Morton & Ryan Avant in the aftermath of Doha.

     

    http://www.economist.com/blogs/babbage/2013/03/global-warming-slows-down

    0 0
  11. 'Location, location, location'  Many commenters on this thread have mentioned that the author of the Economist hit-piece somewhat redeems himself by mentioning in the article's last paragraph the potential of a 4 C rise, which is 'hardly reassuring'.  But there's a reason for that placement: how many readers make it to the END of such an article?  

     

    To recount: "OVER the past 15 years air temperatures at the Earth’s surface have been flat"  This is the very first sentence, the one most likely to be read and retained by readers.  No proviso, like "of course, in the past 20 years, the surface temperature anomaly has almost doubled".  This could literally be the second sentence of his article: why isn't it?

    The second section is titled 'The insensitive planet'.  If you were just skimming this piece, what message would you take away from that title?

    The third section is titled 'New Model Army'.  This section makes it very clear that there is a 'war going on' between the [possibly communist] IPCC and a 'New Army' of rambunctious young Scientists who are all converging on a sensitivity of 1-2 C.  Who will win?  Unsure, but we know who is losing: "the chances of climate sensitivity above 4.5°C become vanishingly small"

    There's 'obviously' a war going on in the climate science community, so why take action before 'they' resolve it?  As the fourth section is titled, there are 'Clouds of Uncertainty' on this whole subject.  Could the lack of recent atmospheric heating be due to greater ocean heating?  The author: "Perhaps it lies in the oceans. But here, too, facts get in the way"  and there is a graph of the surface ocean ALSO not heating.  Again, if you're skimming this material, THAT'S the graph you're going to see, with its ready-made conclusion: Nope, it's not the oceans.  Now the author produces the caveat that maybe warming is going into the deep ocean.  Or maybe not, since according to the author, that is "obscure".

    In the section titled 'Double A-minus', the author directs the reader to sources of natural atmospheric heating, like the recent paper by Tung and Zhou.  It should be obvious that the ONLY reason this flawed paper deserves to be singled out is its use of the word 'natural'.  That's a very important word to the 'doubt is our product' crowd.

    All of this fresh doubt piled on IPCC's 3 C sensitivity leads to this statement, which Rupert Murdoch's 'The Australian' newsmagazine chose to lead off with: "If climate scientists were credit-rating agencies, climate sensitivity would be on negative watch".  Because, as we all know from the housing bubble, credit-rating agencies are the gold standard when it comes to understanding value...

    0 0
    Sphaerica - Not only are most of Elsa's posts off-topic, but Elsa is persistently repeating false statements from who knows where, without any supporting evidence. We are replying with papers and links that as far as I can see are being simply ignored. Ignorance is excusable - we are all ignorant of many parts of science - but wilful ignorance isn't. This appears to be sloganeering and I would hope moderators would take a stronger line in demanding substantiation of claims (in the appropriate thread).

    0 0
    Moderator Response: [DB] Indeed, much latitude has been given in the now-vain hope that elsa would have something substantive to offer to support her assertions. Until that new material is forthcoming, no further diversions to the OP of this thread will be permitted.
  13. Elsa, @55

    ∆T = K ln(CO2final/CO2initial)/ln(2), where ln(.) is the natural log function.

    If you can use Excel, it should be easy for you to download one of the temperature records, and the CO2 Mauna Loa series since 1958.

    Plot ∆T since 1959 against the right hand function and come up with a rough estimate for K, or at least the transient K for a non-equilibrium situation (a rough Python version of this fit is sketched after this comment).

    John Nielsen-Gammon has a good post here about this equation.

    http://blog.chron.com/climateabyss/2012/10/carbon-dioxide-and-temperature/

    0 0
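
For readers who prefer Python to Excel, here is a rough sketch of the fit shoyemore describes in the comment above. The CSV file names and column layout are hypothetical placeholders for annual-mean GISTEMP and Mauna Loa CO2 series; as noted above, the fitted slope is a transient value of K, not an equilibrium sensitivity.

    # Rough transient-K estimate from annual temperature and CO2 series.
    # The CSV names and columns are hypothetical placeholders; substitute
    # whatever annual-mean GISTEMP and Mauna Loa files you actually have.
    import numpy as np

    years, temp_anom = np.loadtxt("gistemp_annual.csv", delimiter=",",
                                  skiprows=1, usecols=(0, 1), unpack=True)
    co2_years, co2 = np.loadtxt("maunaloa_annual.csv", delimiter=",",
                                skiprows=1, usecols=(0, 1), unpack=True)

    # Align the two series on their common years (1959 onward).
    common = np.intersect1d(years, co2_years)
    t = temp_anom[np.isin(years, common)]
    c = co2[np.isin(co2_years, common)]

    x = np.log2(c / c[0])    # doublings of CO2 relative to the first year
    y = t - t[0]             # temperature change relative to the first year

    k_transient, intercept = np.polyfit(x, y, 1)
    print(f"Transient K ~ {k_transient:.1f} C per doubling of CO2")
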
  14. ubrew12:

    Articles in The Economist are frequently not designed along the well-known journalistic "upside-down pyramid" approach: It is often the case that you have to read to the end to get the point.

    0 0
  15. Shoyemore @63

    Your equation is of course mathematically identical to the equation @43.

    The question asked (or was it a demand) by elsa about an equation linking ΔCO2 with ΔT first appeared @38 - “Perhaps, since you say that the link between CO2 and temperature is derived from physics, you can tell us what the equation is that links the two and how it is derived?” - itself prompted by the statement @24 - “Both the models and how CO2 affects temperatures are entirely derived from physics.” So the question was a bit presumptuous.

    The insistence on a single 'equation' kind of rules out addressing the model aspect but the CO2-temperature relationship when shorn of feedbacks can be reduced to that anticipated single equation. Half of it I rather cheekily presented @57, the half which yields ΔF. From it, with the derivative of the Stefan-Boltzmann eqn (pretty much a constant for small changes in T), ΔT can be derived.

                                                          ΔF/ΔT = 4σT³

    This is all such straightforward stuff that usually just the result from it is quoted. A link to point folk at might be good but I cannot say that I know of one.

    0 0
  16. ubrew12:

    Articles in The Economist are frequently not designed along the well-known journalistic "upside-down pyramid" approach: It is often the case that you have to read to the end to get the point.

    0 0
  17. MA Rodger:

    I don't believe the equation

    ΔF/ΔT = 4σT³

    has much relevance in this context. The blackbody spectral radiation formula is not applicable in the context of the greenhouse effect.

    0 0
  18. nealjking@66 - I think Economist readers are like any other.  The author made his point in his first sentence.

    0 0
  19. ubrew12:

    #61: You start off complaining that the author left his point to the end: "But there's a reason for that placement: how many readers make it to the END of such an article?"

    #68: Later, you point out that he made his point in his first sentence: "The author made his point in his first sentence."

    Which is it?

    All I'm pointing out, as a subscriber who reads The Economist every week, is that their articles are frequently written in such a fashion that you need to read to the very end to get the picture.

    Does that make it a bit harder to digest their articles? Yes, it does. But they've been in publication since 1843, so I guess they think they know what they're doing.

    0 0
  20. MA Rodger:

    I mean the Stefan-Boltzmann formula for total blackbody radiation power is not applicable in the context of the enhanced greenhouse effect.

    0 0
  21. I posted a comment about two hours ago and although it was visible initially it has now been disappeared.  Is this due to snippage or should I post again?

    0 0
    Moderator Response: [DB] The moderator on duty at that time deemed it too far off-topic.
  22. MA Rodger,

    The equation cited (in some form or other) goes all the way back to Svante Arrhenius (1896), who wrote (see his bio on Wikipedia): If the quantity of carbonic acid ... increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.

    I underlined "nearly" so I presume Arrhenius derived the relationship from his observations, and knew it was only approximately true.

    The place I encountered the equation was on page 36 of David Archer's Global Warming: Understanding the Forecast, where he stated it applied to long-term equilibrium changes only. Nielsen-Gammon used it in a short-term context with a "transient" value for K.

    Most websites dealing with this topic, like Science of Doom, go straight to the radiation physics.

    Is there a way to derive this approximate relationship from the radiative transfer laws?

    0 0
  23. shoyemore:

    The most explicit derivation of the EGHE that I have seen personally is at the end of John Houghton's The Physics of Atmospheres; however, the derivation is effectively scattered through the book, partially in exercises.

    The explanation I gave above is from Pierrehumbert's book, but since I have only an early draft of the book, I don't know if he derives the entire formula explicitly; but he probably does.

    It's unfortunate that Weart's website, The Discovery of Global Warming, side-steps the derivation: I think he decided it was too hard to present.

    0 0
  24. nealjking @70
    You say "...the Stefan-Boltzmann formula for total blackbody radiation power is not applicable in the context of the enhanced greenhouse effect."

    Yet:-

    (1) Temperature increases in a zero-feedback-world are said to be easier to quantify.

    (2) That zero-feedback quantification appears to me to utilise the S-B formula, but for some reason that use is not very apparent. (I'm sure it used to be more apparent but now a search of the web turns up pages of denialists saying S-B is no good with zero reference to who it is they are having a go at for thinking otherwise.)

    (1) Non-feedback sensitivity is easier. According to AR4 - "While the direct temperature change that results from greenhouse gas forcing can be calculated in a relatively straightforward manner, uncertain atmospheric feedbacks (Section 8.6) lead to uncertainties in estimates of future climate change."

    And TAR - "In other words, the radiative forcing corresponding to a doubling of the CO2 concentration would be 4 Wm-2. To counteract this imbalance, the temperature of the surface-troposphere system would have to increase by 1.2°C (with an accuracy of ±10%), in the absence of other changes."

    A value of ECS-with-feedback ±10% is something we can today only dream of and this being so, I would suggest that, whatever the method which yields this non-feedback sensitivity, it should be more widely known, if for no other reason than to bash denialists with.

     

    (2) Does that method use the S-B formula? The S-B formula does give a ~1°C which is encouraging. And dodging all the denialists, I did find Roe & Baker 2007 who have a Supplement where they use S-B to enumerate λo for use in a model which includes feedbacks.

    And the RF emissions from Earth are of the form of blackbody radiation, although highly motheaten. If a GHG increases to take another bite, I see no fundamental problem why the RF reaction wouldn't be as a blackbody all the way down to surface temperature.

    Of course this is nowhere definitive but I feel there is enough weight to counter your assertion of 'S-B non-applicability'  by saying  - The planet may not be a blackbody but it apparently behaves enough like one for ΔTs with zero-feedback to be derived from S-B and so it is likely that is how it is derived.

    0 0
    nealjking@69:  I would still like SkS readers to be open to the possibility that the Economist article is a subtle kind of 'hit piece' designed to leverage the magazine's well-known reputation for balance on behalf of climate deniers.  As I detailed, the author makes the point he wants to leave readers with in his first sentence.  His LAST sentence is just there for 'plausible deniability': so he can claim 'balance' when confronted.  Maybe I'm inaccurate in this, but I've seen it done before. The overall tone of the piece is to project doubt about climate science.  But as I've said elsewhere the REAL remaining doubt about global warming is in the costs of mitigation, damage, and adaptation.  This is not trivial: if the cost of mitigation is as low as I've seen it estimated, then it frankly DOESN'T MATTER whether the climate sensitivity is 2C or 4C: we should mitigate.  Can 'The Economist' tackle that ECONOMIC issue?  As it is, it seems to have a serious case of pointing out the grain in someone else's eye while missing the log in its own eye.

    0 0
  26. MA Rodgers:

    The Stefan-Boltzmann formula is the integral of the Planck spectral energy distribution over frequency (with no weighting). But the issues that give rise to the greenhouse effect (different absorption coefficients in different frequency bands) are frequency-specific. So the treatment in terms of S-B does not have nearly the specificity to explain the GHE.

    Here's another way to see the point: The constants in the S-B equation, and therefore in your ΔF/ΔT equation, do NOT depend in any way on the composition of the atmosphere. So if the equation were valid, it would be valid for any value of the concentrations; including 0 for the GHGs. But then there would be forcing with no GHGs, which is a contradiction.

    What determines the loss of radiated flux are the temperature differences at specific frequency absorption bands (due to the change in altitude of the critical point where optical depth = 1); using the Planck distribution, this gives the change in spectral density, which is integrated over the region of the spectrum that is absorbed by the GHGs. But the temperature difference for a particular band will depend on the GHG's absorption frequencies and concentration, and will differ from band to band. The total change in flux determines the radiative forcing.

    0 0
  27. nealjking @76.

    I have a feeling another look at what was said @65 and particularly its reference to comment@57 could be useful to this interchange we are having.

    0 0
  28. MA Rodger:

    If you consider the simplest case, in which there is just one spectral band of interest for greenhouse functionality, the ΔT corresponds to the change in temperature due to the change in altitude of the OD=1 point; and the ΔF corresponds to the same temperature change, but restricted to the GH spectral band.

    Thus, if the altitudinal change is Δz and the adiabatic lapse rate is A:

    ΔT = A*Δz

    ΔF = [B(f,To) - B(f, To-ΔT)]*Δf

    where

    To = temperature at original OD=1 point

    f = center frequency of band

    Δf = width of band

    B(f, T) = Blackbody spectral density at frequency f and temperature T

    Now if Δf = infinity, the ΔF becomes the integral of [B(f, To) - B(f, To-ΔT)] and this would give the result you're proposing. But since Δf is just one IR subband, ΔF is much smaller than that.

    If you add more subbands, the situation becomes a bit more complicated, because the Δz for each subband will be in general different. So that will not bring the situation closer to what you're proposing.

    0 0
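
Purely to show how the two expressions in the comment above combine numerically, here is a sketch that evaluates them for made-up inputs: a 15 µm band roughly 200 cm⁻¹ wide, an emission-level temperature of 220 K, a 6.5 K/km lapse rate, and an illustrative 150 m rise of the OD=1 altitude. The factor of π converting the Planck spectral radiance B to a flux density is an assumption about how B is meant here; none of the numbers should be read as an actual forcing calculation.

    # Illustrative evaluation of dT = A*dz and dF = [B(f,T0) - B(f,T0-dT)]*df
    # for a single idealised greenhouse band. All inputs are made up; this is
    # not a real forcing calculation.
    import math

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

    def planck_flux(f, t):
        """Blackbody spectral flux density (W m^-2 Hz^-1): pi * B(f, T)."""
        return math.pi * (2.0 * h * f**3 / c**2) / math.expm1(h * f / (k * t))

    A = 6.5e-3             # lapse rate, K per metre
    dz = 150.0             # assumed rise of the OD=1 altitude, metres
    T0 = 220.0             # temperature at the original OD=1 level, K
    f0 = c / 15e-6         # band centre: the 15 micron CO2 band, in Hz
    df = 200.0 * 100 * c   # band width: ~200 cm^-1, converted to Hz

    dT = A * dz
    dF = (planck_flux(f0, T0) - planck_flux(f0, T0 - dT)) * df
    print(f"dT = {dT:.2f} K,  dF = {dF:.2f} W/m^2 over this one band")
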
  29. Are the effects of different air pressures changing absorptive properties included in the equations above describing the greenhouse effect? My physics is beyond weak: I guess I'm asking if this is accounted for, and if not, what difference it might make for calculating a change in temperature from a significant increase of CO2 (ignoring feedbacks)?

    0 0
  30. barry - Changing air pressures do not (within human-tolerable limits, at least) directly change gaseous absorption. What directly affects absorption is, rather, the number of IR absorbing/emitting greenhouse gas molecules in each volume of air, a product of both concentration (which may be altitude dependent, as in the case of H2O) and pressure. 

    And yes, amounts due to pressure and concentration for the various GHG's are accounted for in the line-by-line multiple layer radiative transfer calculations. See Myhre et al 1998 for the application of such calculations to CO2 sensitivity. 

    0 0
  31. KR,

    I'm sure that 'pressure broadening' is taken into account in detailed studies, but my question was prompted by the simplified equations above (not so simple to me, though) in relation to what I've read about the importance of the effects of pressure on absorption, a breakthrough in understanding, as far as I can make out from having read the realclimate articles (among others) on saturation;

    The breakthroughs that finally set the field back on the right track came from research during the 1940s... Among other things, the new studies showed that in the frigid and rarified upper atmosphere where the crucial infrared absorption takes place, the nature of the absorption is different from what scientists had assumed from the old sea-level measurements. Take a single molecule of CO2 or H2O. It will absorb light only in a set of specific wavelengths, which show up as thin dark lines in a spectrum. In a gas at sea-level temperature and pressure, the countless molecules colliding with one another at different velocities each absorb at slightly different wavelengths, so the lines are broadened and overlap to a considerable extent. Even at sea level pressure, the absorption is concentrated into discrete spikes, but the gaps between the spikes are fairly narrow and the "valleys" between the spikes are not terribly deep. None of this was known a century ago... Measurements done for the US Air Force drew scientists’ attention to the details of the absorption, and especially at high altitudes. At low pressure the spikes become much more sharply defined, like a picket fence.

    www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

    www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii/

    It seems to me, rightly or wrongly, that pressure effects on absorption are a significant factor, and I just wanted to know if this effect is included in the equations above, and if not, does it make much difference to the consequent surface temperature result. It's not a crucial question for the discussion, just a point of interest for me.

    0 0
  32. barry:

    The basic equations are radiative-transfer equations, which describe how the intensity of a radiation beam changes along its path. These are equations relating integrals of the absorption coefficient, which depends on frequency and also on the local pressure because of the pressure broadening. So in principle the general picture is unchanged, but the real calculations have to be done for one frequency at a time (line by line). Also, the description in terms of the equation of local radiative spectral density with local gas dynamics is approximate, because there is a "delay".

    0 0
    Thanks nealjking, that speaks directly to my question. If you wouldn't mind pointing it out, where above are the equations that account for pressure broadening? I looked up the Beer-Lambert law and wondered if it is included in some form upthread (assuming I'm on the right track - my curiosity far overreaches my ken).

    0 0
  34. barry:

    The pressure broadening affects the absorption coefficient:

    - In a thin gas, the absorption coefficient is derived by doing the quantum mechanical calculation for the absorption probability of a photon by a single atom; and then multiplying it by the number density of that type of atom.

    - The result is a function of frequency, with lots of regions of nothing and occasional blips where the frequency matches a quantum transition. The height of a peak is related to the likelihood of absorbing a photon in that region; the width is inversely related to the rapidity with which this transition will occur. Thus, the longer the lifetime of the state (before transition), the narrower the width.

    - The resulting absorption coefficient is what is integrated along the optical path of the radiation beam to calculate the optical depth. The significance of the optical depth is: If a photon travels along the beam for an optical depth of magnitude 1, that means it has a probability of (1 - 1/e) of having been absorbed along the way, i.e. only about a 1/e chance of surviving the trip. So a photon emitted towards space at an optical depth less than 1 (as measured from outer space inward) has a decent chance of actually escaping the atmosphere without being absorbed; whereas a lower-altitude photon headed up will most likely be stopped along the way; its energy will eventually be emitted as a new photon.

    - In a thicker gas, the picture is modified a little: The atoms of interest will be suffering collisions with the other atoms (of the same type or not, I don't believe it matters). The result is that the lifetime of the pre-transition state is shortened, because the atomic state can be changed without absorbing the photon. I believe this has the following effects on the absorption peak:
    a) broadens it, so the frequencies of interest are a wider subband; b) lowers the peak; c) reduces to some extent the total probability of absorption (but I don't know if this is at all significant; and there might be a countervailing factor).

    - So the effect of the pressure broadening is to flatten and spread out the absorption peaks in the absorption coefficient curves. Otherwise, these curves are used just as before to calculate the altitudes of the OD=1 points, as a function of frequency.

    [Now that we discuss this in detail, I wonder if there could be a reduction in the frequency integral of the absorption curve due to pressure broadening. It should be noted that there are other contributors to spectral-line broadening, like Doppler shifting due to the random kinetic motion of the molecules. Chris Colose originally mentioned the pressure broadening to me, and he claimed it would slightly increase the overall probability of absorption.  It would take a little reading to sort this question out.]

    0 0
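
As a small numerical illustration of points (a) and (b) in the comment above, here is a sketch comparing two idealised Lorentzian line shapes whose half-widths scale with pressure. For this idealised shape the integrated line strength is the same at both pressures, which bears on the open question in the bracketed note; real lines (Voigt profiles, overlapping bands) are more complicated, so treat this as a cartoon only.

    # Pressure-broadening cartoon: two Lorentzian line shapes with the same
    # integrated strength but different half-widths (half-width ~ pressure).
    import numpy as np

    def lorentzian(nu, nu0, gamma):
        """Area-normalised Lorentzian profile with half-width gamma."""
        return (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)

    nu = np.linspace(-500.0, 500.0, 400001)   # frequency offset from line centre
    narrow = lorentzian(nu, 0.0, 0.5)         # low pressure: tall, sharp peak
    broad = lorentzian(nu, 0.0, 2.0)          # higher pressure: lower, wider

    print("peaks (narrow, broad):", narrow.max(), broad.max())
    print("areas (narrow, broad):", np.trapz(narrow, nu), np.trapz(broad, nu))
    # The broadened line is lower and wider, but the integrated areas match.
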
  35. Neal, maybe your explanation of broadening could be added as a section in the Advanced tab of "Is the CO2 Effect Saturated?"

    0 0
  36. Tom Dayton:

    Thanks for pointing out the article on CO2 saturation: I have wondered where that article was hiding.

    Actually, my understanding of how the various aspects of line broadening affect the absorption coefficient is a current weak point in how I think about the GHE, so it would require some more study to pin it down better than what Riccardo has already written.

    Maybe later.

    0 0
  37. nealjking @78.

    I shall be less cryptic than @77.

    The forcing ΔF resulting from ΔCO2 is not at issue. As TAR describes, this relationship can be presented usefully in its simplest form thus:-

    ΔF = 5.35 ln(C/Co) W/sq m

    The issue at hand concerns only the global temperature change ΔT resulting from ΔF and specifically that change when feedbacks are zero. I have no definitive reference to state what method is used within climatology to calculate that value. Yet they do calculate it and do so with far more precision than ΔT with all feedbacks (as the relative imprecision of ECS estimates shows).

    So what is that method?

    (a) It is a "relatively straightforward" calculation according to AR4 so we seek nothing fancy.

    (b) It yields a number "1.2°C (with an accuracy of ±10%)" (although that is for 4 Wm^-2) according to TAR.

    (c) I recall from years ago many references saying it was calculated using the Stefan-Boltzmann equation but today I see nothing definitive to point you at.

    (d) I do not recall seeing any alternative method for this calculation.

    (e) And (bar internet searches topping out with a denialist force →100% Watts/sq page) I do find climatologists saying Stefan-Boltzmann provides a zero-feedback ΔT, eg Roe & Baker using it to calculate zero-feedback-sensitivity λo. Particularly note Chris Colose posting at RealClimate who describes such a use of Stefan-Boltzmann as 'back-of-envelope calculating' yielding λo=0.27°C/Wm^-2 while models yield λo=0.30-0.31°C/Wm^-2 or "about ~1.2°C" for double CO2.

    Thus I can but conclude that Stefan-Boltzmann is applicable in the context of the enhanced greenhouse effect. And that it does yield a reasonable answer (ie. of 'about 1°C' which is what is commonly quoted). And while "models" are used to improve accuracy, Stefan-Boltzmann does a pretty good job without recourse to those "models" which is so often the point of departure for folk of a denialist disposition, this last point being why the methods used should perhaps be better known.

    0 0
  38. MA Rodger:

    The fact that a back-of-the-envelope calculation happens to roughly match another roughly defined quantity does not give a good basis for concluding that that calculation is a valid explanation for the quantity. The linkage has to be shown by the relevant physics, some portion of which I have explained.

    Based on what has already been discussed, the formula would match if:

    - The entire infrared spectrum were covered by GHG bands (which it is NOT); AND

    - The absorption coefficient were a constant function over that range (which it is also NOT).

    Since in actual fact, neither are true, there is no applicability of this "derivation" to the EGHE. Don't fall into the freshman's fallacy of finding just some random formula that happens to give the right answer for some lucky combination and assuming that this explains the physics. It doesn't.

    If you really want to understand what is going on in a fairly complicated problem like this, you're better off looking to see what people have done, rather than cooking your own explanation. Riccardo's article referenced in #85 would be a good starting point. Beyond that, I have mentioned books by Houghton and Pierrehumbert; and there is probably a paper in Archer's collection of "warming papers". Try looking for these sources before rolling your own. My impression is that the formula didn't get general agreement until the 1960s or so; about 100 years after the GHE was conceived. Why do you think the explanation is going to be as simple as that?

    0 0
  39. nealjking @88
    It is entirely evident from your comments @76, @78 & @88 that the position you took @67 & @70 refers solely to the evaluation of forcing ΔF. I have not been at any time suggesting that Stefan-Boltzmann is so used, proposing rather that it is applicable (and indeed I have shown it has been applied) to calculate zero-feedback-sensitivity λo.
    So I shall end this interchange here, although I shall now be enquiring elsewhere as to the nature of those "models" reportedly used to calculate λo, and will report back if I glean any findings.

    0 0
  40. Thanks again, nealjking.

    "Chris Colose originally mentioned the pressure broadening to me, and he claimed it would slightly increase the overall probability of absorption. It would take a little reading to sort this question out."

    I went for a casual trawl and found another realclimate article;

    ...Molecules composed of three or more atoms tend to act as greenhouse gases because they can possess energy in terms of rotation and vibrations which can be associated with the energy of photons at the infra-red range. This can be explained by theory and be demonstrated in lab experiments. Other effects are present too, such as pressure and Doppler broadening, however, these are secondary effects in this story.

    http://www.realclimate.org/index.php/archives/2010/07/a-simple-recipe-for-ghe/

    There they link to the following paper, which, if you have access, might interest you.

    A precise measurement of the vertical profiles of carbon dioxide is required for reducing the uncertainty in the carbon budget. In order to achieve measurements of the vertical CO2 distribution with an uncertainty better than approximately 4 ppm, a precise knowledge of the pressure-dependent broadening and shift coefficients of CO2 absorption lines is indispensable.

    jjap.jsap.jp/link?JJAP/47/325/

    0 0
  41. re: Pressure broadening.

    A vibrational absorption has breadth because the rotational energy quanta differ in the ground and 1st levels of the vibration. Because of this difference, transitions between the different rotational levels "fan out". This "fine structure" thus has tiny windows within the band through which radiation can pass (if it's of the correct frequency of course!). Pressure broadens the individual rotational fine structure of the vibration, thus reducing the size of these windows. Since the rotation of molecules depends on the mass of the nuclei, isotopes cause two slightly different spectra to be overlaid.

    Other things can affect the width of these bands; nuclear spin of the nuclei in the molecule is significant, especially for molecules containing hydrogen. The other effect that may be significant is complexation - spectral absorption due to entities such as (CO2...H2O). These have vibrational frequencies similar to the lone molecule (and some extra) but significantly different rotational characteristics, which would make their spectra less window-like. These entities were the subject of my Ph.D., and we occasionally speculated whether they played a significant role in the greenhouse effect. To my (somewhat outdated) knowledge this question has not been answered.

    0 0
  42. Since I was e-mailed about this, I'll chime in...

    It doesn't matter what you define as the "no feedback climate sensitivity."  Obviously, dT = dF/(4σT^3) (or dF = 4σT^3 dT) is a natural reference system derived directly from the Planck law, and because there's very little uncertainty in how to calculate it, it has traditionally been used as a reference system against which other feedbacks are evaluated.  In a climate model, this would be derived by perturbing the atmospheric temperature by, say, 1 K, holding other variables constant (water vapor, etc.), and then asking how much the infrared emission to space has increased (see e.g., Table 1 in Soden et al., 2006, "An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models").  The results are very close to the back-of-envelope calculation shown above.

    In the real world of course, water vapor makes the infrared emission increase more slowly than the Stefan-Boltzmann law, so a greater temperature rise is required to accommodate the need to balance energy at the top of the atmosphere.  If the system were dominated by negative feedbacks, then the flux adjustment would be more efficient than T^4, and you wouldn't need much temperature rise to balance the incoming sunlight.

    If you wanted, you could just as well call your reference system one in which relative humidity stayed the same.  This would make the "no feedback climate sensitivity" appear much larger, but then reduce the magnitude of feedbacks, since the water vapor feedback would now only consist of a small residual that resulted from any departures in the relative humidity field.  In the same way, the Stefan-Boltzmann law is not an adequate description of radiative transfer in a real atmosphere, yet it still provides a convenient baseline which allows us to talk about 'feedbacks' in a meaningful way.  It doesn't hurt the calculation too much that you have absorption to worry about, since "T" is evaluated at the top of the atmosphere and you need to eventually balance the fluxes regardless of what wavelength region they occur in.

    0 0
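
Following on from the reference-system framing in the comment above, here is a minimal sketch of the standard feedback-factor bookkeeping (the form used in the Roe & Baker 2007 paper mentioned earlier in the thread): ECS ≈ λ0·ΔF(2xCO2)/(1 − f). The λ0 of ~0.30 C per W/m² and the feedback factors f are illustrative values taken from numbers quoted in this thread, not a new calculation.

    # Feedback-factor bookkeeping: how the ~1.2 C no-feedback response is
    # amplified into the canonical 2-4.5 C range. lambda0 and the feedback
    # factors f are illustrative values quoted in this thread.
    LAMBDA0 = 0.30     # no-feedback sensitivity, C per (W/m^2)
    DF_2XCO2 = 3.7     # forcing from doubled CO2, W/m^2

    def ecs(f, lambda0=LAMBDA0, df=DF_2XCO2):
        """Equilibrium sensitivity with a net feedback factor f (0 <= f < 1)."""
        return lambda0 * df / (1.0 - f)

    for f in (0.0, 0.45, 0.63, 0.75):
        print(f"f = {f:.2f}  ->  ECS = {ecs(f):.1f} C per doubling")

A modest spread in the net feedback factor maps onto a wide spread in sensitivity, which is also why the distribution of estimates has a long upper tail.
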
  43. As mentioned before: If there is a flat absorption coefficient across the spectrum, the discussion provided before gives the same result for zero-feedback temperature change as differentiating the S-B.

    Not otherwise.

    0 0
  44. Joe Romm extensively cites Dana's OP in his article, Making Sense of Climate Sensitivity: How The Economist And MSM Keep Getting It Wrong posted today (Apr 2) on Climate Progress.

    0 0
  45. I have some questions regarding climate sensitivity.

    Basically temp data and changes in forcing are used to calculate climate sensitivity, which is then used to calculate the rise in temp that would result from 3.7W/m^2 of forcing from double CO2.

    My problem with this calculation is that we're assuming that climate sensitivity behaves nicely (ie. is almost constant). My question is how do we know that this a reasonably valid assumption? Thanks.

    0 0
    Engineer - that was done more or less by Broecker for his remarkably accurate 1975 prediction, but that is not how any modern climate model works. Instead, climate is emergent from the interaction of forcings with the various equations in the model. If you want to know what the climate sensitivity of a model is, then you work backwards from the temperature at the end point as calculated by the model compared to the CO2 forcing. You can also run the model with various forcings to see what the sensitivity to, say, a solar forcing of the same magnitude is (see for instance the ModelE results). Over a very big temperature range there would be good reason to suppose sensitivity would change. Eg when all ice is melted from both poles, then the only albedo feedbacks would be weak ones from land cover change. Preserve us from having to worry about that for the next 100 years.

    0 0
  47. engineer,

    That's a good question, with a complex answer.

    It is absolutely true that climate sensitivity is not and would not be exactly constant. Climate sensitivity is a result of a wide variety of feedbacks which individually have different impacts.

    There are fast feedbacks, physical mechanisms which are somewhat predictable through physical modeling (for instance, the fact that warmer air will hold more moisture, thus adding the greenhouse gas effect of H2O to the air).

    There are also slow feedbacks that depend on the physical initial conditions.  The ice sheets, for example, during a glacial period contribute a lot to keeping the planet cool by reflecting large amounts of incoming sunlight.  When temperatures warm and the ice sheets retreat, that results in a positive feedback.  If you imagine the ice sheets spread over the globe, it is easy to see that those ice sheets are larger when they extend further south.  As the ice sheets retreat, they get smaller and smaller, and each further latitude of melt reduces them by less, so that the feedback is not continuously proportional.

    CO2 as a feedback instead of a forcing is also a diminishing feedback.  As you add more and more CO2 to the atmosphere, the additional CO2 has less and less of an effect, so you need even more CO2 to accomplish the same amount of warming.

    So for any particular feedback, the initial climate state is important.

    But there are a lot of different, intermixed feedbacks.  CO2 and CH4 can be added to the atmosphere due to major ecosystem changes (forest die-offs or permafrost melt).  There is the melting of ice sheets.  Ocean circulation and stratification patterns can change.  The list goes on.

    As a result, given all of the varying feedbacks with varying effects under different conditions... it all averages out.

    There are many methods of estimating climate sensitivity.  Some look at past climates, to see what has happened before.  Some work with models that try to emulate the physical mechanisms.  Some directly observe how the climate changes in the very short term due to known and measured forcing changes.

    The thing is that all of these methods produce varying results in a broad range, but most converge on the more narrow range of 2 to 4.5C, and most converge on the same value of about 3C.  Taken individually, nothing is exactly the same as the current climate, but since most studies, past and present, seem to fall into the same range, it suggests that there is validity to the broad assumption (Occam's Razor) that the climate generally behaves in about the same way.

    0 0
  48. engineer,

    One last thing.  You said:

    ...we're assuming that climate sensitivity behaves nicely...

    No, we're not.  Scientists aren't stupid, and they don't work from arbitrary assumptions.  There are reasons for believing the climate sensitivity behaves a certain way, based on physics, past climate and present observations.  It's not just some assumption that has been wantonly adopted because it makes life easier.  Nobody in any field or profession gets to do things that way.  Why would climate scientists?

    0 0
    Thanks for the replies. @Sphaerica, I wasn't trying to insult climate scientists. I was trying to figure out the basis for the assumption and I wasn't implying that it was arbitrary.

    Also, do you guys know of any good links that go into the details of the derivation of climate sensitivity? Not how the value is estimated, but the derivation of the formula. I couldn't find any good sites on Google. Thanks again.

    0 0
  50. It's not derived through a formula -- that would be like having a single derived formula that computes the expected age of a species of animal, based on the animal's biology.  It's just too complex for that. 

    The link I already gave you ("many methods of estimating climate sensitivity") gives some (not all -- in particular, that link skips over modeling, which is a very important and valuable technique) of the methods of computing sensitivity.  To really understand it you'd need to find copies of and read many of the actual papers.

    Another approach is to use the search box up top, and search for "climate sensitivity".

    The best thing you can do with climate sensitivity is to learn a lot about it.  Make it a multidimensional thing that you understand from many angles.

    0 0
