
Is the CO2 effect saturated?

What the science says...


The notion that the CO2 effect is 'saturated' is based on a misunderstanding of how the greenhouse effect works.

Climate Myth...

CO2 effect is saturated
"Each unit of CO2 you put into the atmosphere has less and less of a warming impact. Once the atmosphere reaches a saturation point, additional input of CO2 will not really have any major impact. It's like putting insulation in your attic. They give a recommended amount and after that you can stack the insulation up to the roof and it's going to have no impact." (Marc Morano, as quoted by Steve Eliot)

The mistaken idea that the Greenhouse Effect is 'saturated', that adding more CO2 will have virtually no effect, is based on a simple misunderstanding of how the Greenhouse Effect works.

The myth goes something like this:

  • CO2 absorbs nearly all the Infrared (heat) radiation leaving the Earth's surface that it can absorb. True!
  • Therefore adding more CO2 won't absorb much more IR radiation at the surface. True!
  • Therefore adding more CO2 can't cause more warming. FALSE!!!

Here's why: the myth ignores some very simple arithmetic.

If the air is only absorbing heat from the surface, then it should just keep getting hotter and hotter. By now the Earth should be a cinder from all that absorbed heat. But, not too surprisingly, it isn't! What are we missing?

The air doesn't just absorb heat; it loses it as well! The atmosphere isn't just absorbing IR radiation (heat) from the surface. It is also radiating IR radiation (heat) to space. If these two heat flows are in balance, the atmosphere doesn't warm or cool - it stays at the same temperature.

Let's think about a simple analogy:

We have a water tank. A pump is adding water to the tank at, perhaps, 100 litres per minute. And an outlet pipe is letting water drain out of the tank at 100 litres per minute. What is happening to the water level in the tank? It is remaining steady because the flows into and out of the tank are the same. In our analogy the pump adding water is the absorption of heat by the atmosphere; the water flowing from the outlet pipe is the heat being radiated out to space. And the volume of water inside the tank is the amount of heat in the atmosphere.

What might we do to increase the water level in the tank?

We might increase the speed of the pump that is adding water to the tank. That would raise the water level. But if the pump is already running at nearly its top speed, we can't add water much faster. That would fit the 'it's saturated' claim: the pump can't run much faster, just as the atmosphere can't absorb much more of the heat radiation leaving the surface.

But what if we restricted the outlet, so that it was harder for water to get out of the tank? The same amount of water is flowing in but less is flowing out. So the water level in the tank will rise. We can change the water level in our tank without changing how much water is flowing in, by changing how much water is flowing out.

[Figure: the water tank analogy]

Similarly, we can change how much heat there is in the atmosphere by restricting how much heat leaves the atmosphere, rather than by increasing how much is being absorbed by it.
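To make the analogy concrete, here is a minimal Python sketch of the tank balance; the flow numbers and the assumption that outflow is proportional to the water level are purely illustrative, not part of any climate model.

    # Minimal sketch of the water-tank analogy (all numbers illustrative).
    # Inflow is fixed (heat absorbed by the atmosphere); outflow is assumed
    # proportional to the water level (heat radiated to space). Restricting
    # the outlet raises the steady-state level even though inflow never changes.

    def steady_level(inflow, outlet_coefficient):
        # At steady state, inflow == outlet_coefficient * level
        return inflow / outlet_coefficient

    inflow = 100.0                        # litres per minute, held constant
    print(steady_level(inflow, 1.0))      # open outlet       -> level 100
    print(steady_level(inflow, 0.8))      # restricted outlet -> level 125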

This is how the Greenhouse Effect works. Greenhouse gases such as carbon dioxide and water vapour absorb most of the heat radiation leaving the Earth's surface. Their concentration then determines how much heat escapes from the top of the atmosphere to space. It is the change in what happens at the top of the atmosphere that matters, not what happens down here near the surface.

So how does changing the concentration of a Greenhouse gas change how much heat escapes from the upper atmosphere? As we climb higher in the atmosphere the air gets thinner. There is less of all gases, including the greenhouse gases. Eventually the air becomes thin enough that any heat radiated by the air can escape all the way to Space. How much heat escapes to space from this altitude then depends on how cold the air is at that height. The colder the air, the less heat it radiates.

[Figure: layers of the atmosphere]
(OK, I'm Australian so this image appeals to me)

So if we add more greenhouse gases, the air needs to be even thinner before heat radiation is able to escape to space. That can only happen higher up in the atmosphere, where it is colder. So the amount of heat escaping is reduced.

By adding greenhouse gases, we force the radiation to space to come from higher, colder air, reducing the flow of radiation to space. And there is still a lot of scope for more greenhouse gases to push 'the action' higher and higher, into colder and colder air, restricting the rate of radiation to space even further.
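For readers who want to see the "higher and colder" point as numbers, here is a rough Python sketch using the Stefan-Boltzmann law; the 255 K effective emission temperature and 6.5 K/km lapse rate are standard textbook round numbers, and the 1 km shift in emission height is an arbitrary example rather than a calculated response to any particular CO2 increase.

    # Rough illustration only: pushing the effective emission level into
    # higher, colder air reduces the heat radiated to space (Stefan-Boltzmann law).

    SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    LAPSE_RATE = 6.5    # K of cooling per km of altitude (typical tropospheric value)

    def outgoing_flux(emission_temperature):
        return SIGMA * emission_temperature ** 4

    t_now = 255.0                         # effective emission temperature, K
    t_higher = t_now - LAPSE_RATE * 1.0   # emission level pushed ~1 km higher

    print(outgoing_flux(t_now))       # ~240 W/m^2
    print(outgoing_flux(t_higher))    # ~216 W/m^2: less heat escapes until the planet warms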

The Greenhouse Effect isn't even remotely Saturated. Myth Busted!

Basic rebuttal written by dana1981


Update July 2015:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

 

Last updated on 7 July 2015 by pattimer.



Further reading

V. Ramanathan has written a comprehensive article, Trace-Gas Greenhouse Effect and Global Warming.

Comments


Comments 201 to 250 out of 454:

  1. Elsewhere, SASM asked some questions that were snipped for being off topic.  They are on topic here, so I will address them.  He says:

    "Is it true that CO2 is nearly fully saturated in the IR bands? I have read (http://www.skepticalscience.com/saturated-co2-effect.htm) and it clearly shows that CO2 is dimming some IR to space in the 700 to 760 wavenumber bands, which is mid-wave IR around 13-14 um. Brightness is reduced by about 1.5K, but what is this in watts/m^2 or how much warming does this blockage produce? My speculation is that this is not very much. WUWT has a post (http://wattsupwiththat.com/2013/06/08/by-the-numbers-having-the-courage-to-do-nothing/#more-87809) that states that 94.9% of the IR is absorbed by 400 ppm CO2. The increase from 300 ppm to 400 ppm (the amount from early 1900 until now) only blocks 2.3% more than what is naturally occurring (i.e., 92.6% absorption for 300 ppm CO2). Do you all agree with these numbers, and if not what numbers do you have? How much additional warming is directly caused by the increase from 280 ppm to 400 ppm today? Don’t include forcing, feedback, or anything like that – just effects of CO2"

    Turning to the WUWT post, it is complete nonsense.  It does not indicate how the values in any of its tables were determined, and it makes absurd false statements, such as that at least 200 ppmv is required in the atmosphere for plant life to grow (CO2 concentrations dropped to 182.2 ppmv at the Last Glacial Maximum, giving the lie to that common claim).

    More importantly, the claim that the "...proportional values shown above present are universally accepted by skeptics and Global Warming alarmists alike..." (PDF) is complete bunk.  They are not accepted universally by AGW "skeptics", and are accepted by no defenders of climate science.  Specifically, the "universally accepted" formula for radiative forcing is RF_t = 5.35 * ln(C_t/C_0).  That is, the radiative forcing due to CO2 at time t, relative to time 0, equals 5.35 times the natural log of the CO2 concentration at time t divided by the CO2 concentration at time 0.  The equilibrium temperature response to that radiative forcing is a linear function of the radiative forcing, so it follows the same logarithmic relationship.

    An immediate consequence of that logarithmic relationship is that the temperature response for a doubling of CO2 concentration is the same for any doubling of CO2 concentration across the range over which that formula is valid (it clearly does not apply for very low values of CO2).  That is, if the temperature response for increasing CO2 from 100 to 200 ppmv is X, then the temperature response for increasing it from 200 to 400, or 300 to 600, or 400 to 800, or 500 to 1000 ppmv will also be X.

    Contrary to that relationship, however, Hoskins shows the increase from 100 to 200 as being 10.1% of some unknown value; that from 200 to 400 as being 7.3% of the same value; that from 300 to 600 as being 5.2%; that from 400 to 800 as being 4.6%; and that from 500 to 1000 as being 2.1% (PDF).  As such his tables contradict the best known, and most widely accepted, formula in climate science.  Even worse, he then goes on to say that "beyond 1000+ ppmv the effect of increasing levels of CO2 can only ever be absolutely minimal even if CO2 concentrations were to increase indefinitely." (PDF)  That claim is the basis of setting the temperature response to 1000 ppmv as 100%, but is complete bunk.  As per the standard formula, the temperature response of an increase in CO2 level from 1000 to 2000 ppmv is the same as the temperature response for an increase from 100 to 200 ppmv.

    Hoskins does not even apply his formula consistently.  Based on his first table, the increase in temperature for a given increase in CO2 concentration expressed as a ratio of earlier increases should be constant regardless of whether you use IPCC values, or "skeptical" values.  Yet in his second table (WUWT post) that condition is not met.  In other words, his calculated temperature responses in his second table are inconsistent with those in his first table.

    Anyway, with the standard IPCC climate sensitivity of 3 C per doubling of CO2, increasing CO2 from 400 to 1000 ppmv will increase temperature by 3.9 C.  That will represent a 5.4 C increase over pre-industrial levels - an increase equivalent to the difference in temperature between the coldest period of the last glacial and the pre-industrial.
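    As a quick check on that arithmetic, here is a minimal Python sketch of the standard formula quoted above; the 3 C per doubling sensitivity is the assumed IPCC best estimate, and the printed values are approximate.

      # Sketch of RF = 5.35 * ln(C/C0) (Myhre et al. 1998) plus an assumed
      # equilibrium sensitivity of 3 C per doubling of CO2. Illustrative only.

      import math

      def forcing(c, c0):
          return 5.35 * math.log(c / c0)        # W/m^2

      def delta_t(c, c0, sensitivity_per_doubling=3.0):
          # The response is linear in forcing, so it scales with log2(C/C0).
          return sensitivity_per_doubling * math.log2(c / c0)

      print(forcing(560, 280))      # ~3.7 W/m^2 - the same for any doubling
      print(forcing(800, 400))      # ~3.7 W/m^2 again
      print(delta_t(1000, 400))     # ~4.0 C, roughly the 3.9 C quoted above
      print(delta_t(1000, 280))     # ~5.5 C, roughly the 5.4 C over pre-industrial quoted above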

  2. Stealth...  If you genuinely want to take a "skeptical" approach to the issue of climate change, WUWT is clearly not the place to go.  If you want to confirm your predetermined position that nearly all the published research and nearly all the actively publishing climate scientists are wrong... then WUWT is your one stop shop.

  3. To add to what Tom said, the flip side of the absurdity put forth by that WUWT post is that it fails to acknowledge that atmospheric CO2 in very high concentrations is clearly responsible for getting the Earth out of past deep glaciation events.  That well documented relationship could never occur if the CO2 effect was fully saturated at lower concentrations.

  4. Tom Curtis @ 201

    Thanks for the long reply. I have dug into what you have said and have some additional questions:

    You stated: “WUWT makes absurd false statements such as that at least 200 ppmv is required in the atmosphere for plant life to grow (CO2 concentrations dropped to 182.2 ppmv at the Last Glacial Maximum, giving the lie to that common claim).”

    I have done a Google search on CO2 and plant growth and have found many sources (some unrelated to climate and focused on plant research) that indicate plant growth is stunted at 200 ppmv CO2. At 150 ppmv a lot of plants are not doing very well. Based on this, WUWT doesn’t seem absurd to me; why do you think so?

    Sources:

    http://www.es.ucsc.edu/~pkoch/EART_229/10-0120%20Appl.%20C%20in%20plants/Ehleringer%20et%2097%20Oeco%20112-285.pdf

    CO2 Science

    As for the rest of your post, I went to the very nice calculator (http://forecast.uchicago.edu/Projects/modtran2.html) pointed out to me by scaddenp @ 46 from http://www.skepticalscience.com/imbers-et-al-2013-AGW-detection.html. It models the IR flux of various gases and looks pretty cool. I ran the calculator to produce the table below, which shows the upward IR flux in W/m^2 for various levels of CO2. With no CO2 and using the 1976 standard US atmosphere (I left the tool’s default settings in place and only changed to the 1976 US atmosphere and the amount of CO2), the upward IR flux is 286.24 W/m^2. The first 100 ppmv reduces the upward IR flux to 264.17 W/m^2. If CO2 doubles from the current 400 ppmv to the hypothesized 800 ppmv, then the upward IR flux drops to 255.75 W/m^2. From a “zero CO2” atmosphere, the total reduction in IR flux at 800 ppmv CO2 is 30.49 W/m^2. Of this total amount, 72.4% is captured by the first 100 ppmv of CO2. If CO2 increases from 400 ppmv to 800 ppmv, based on my math it appears that 91% of the heat trapping effect of CO2 is already “baked in” at 400 ppmv of CO2. This seems to line up very closely with what WUWT is stating, unless I made a mistake.

     

    CO2 (ppmv)   Upward IR flux (W/m^2)   Cumulative % of total reduction   Incremental % of total reduction
    0            286.24
    100          264.17                   72.4%                             72.4%
    200          261.41                   81.4%                             9.1%
    300          259.74                   86.9%                             5.5%
    400          258.58                   90.7%                             3.8%
    500          257.67                   93.7%                             3.0%
    600          256.91                   96.2%                             2.5%
    700          256.29                   98.2%                             2.0%
    800          255.75                   100.0%                            1.8%
    (Percentages are relative to the total 30.49 W/m^2 reduction at 800 ppmv.)

    Rob Honeycutt @ 203 and @ 204

    Like Tom Curtis, you also assert that WUWT “is absurd”, yet using the very sources provided by other posters on this web site, I seem to have confirmed what WUWT is saying about CO2, namely, that the majority of the effects of CO2 are already captured due to the logarithmic absorption of increasing CO2. Based on the MODTRAN calculator, doubling CO2 to 800 ppmv is only going to trap an additional 2.83 W/m^2, which is 0.21% of the solar energy hitting the top of the atmosphere. I fail to see how this can possibly be so bad – either things are already that bad now, or the additional 2.83 W/m^2 isn’t going to matter at all. And natural variability has to be greater than 0.2%, especially since the change in total solar output varies by 0.1% over a solar cycle (http://en.wikipedia.org/wiki/Solar_constant). Given that global temperatures really haven’t increased much over the last 17 years, I suspect that things may not be “that bad.” If I am missing something, please help me out. Thanks! Stealth

     

    Response:

    [RH] Fixed link that was breaking page formatting.

  5. Stealth...  

    I think you're getting some figures wrong here.  The variation in the 11 year solar cycle is about 0.25W/m^2.  The change in radiative forcing for doubling CO2 over preindustrial, including feedbacks, is in the neighborhood of 4W/m^2.  (And we're potentially talking about TWO doublings if we do nothing to mitigate emissions.) Natural variability doesn't add any energy to the climate system.  Global temperatures over the past 17 years represent only a fraction of the energy in the climate system, and that trend is still well within the expected model range.

    We are likely to see an increase in surface temps for doubling CO2 of around 3C.  Two doublings would put us at 6C over preindustrial.  Even 3C is a change that takes us well outside of what this planet has experienced in many millions of years, and we will have accomplished this in a matter of less than 200 years.  Do you really think that species and ecosystems can near-instantly (genetically and geologically speaking) adjust to such changes?

    When you read at WUWT about the logarithmic effect of CO2, you're reading a straw man argument.  Scientists understand the logarithmic effect; it's built into every aspect of the science, and has been ever since Svante Arrhenius at the turn of the 20th century.  In fact, that position is directly contradicted by their own contrarian researchers like Roy Spencer and Richard Lindzen.

  6. And Stealth...  Think of this as a simple reality check.  We are very close to seeing seasonally ice free conditions in the Arctic.  This is a condition that has not been seen on Earth in well over a million years.  The global glacial ice mass balance is also rapidly declining.  The Greenland ice sheet and the Antarctic ice mass balance are both in decline.  These are all well outside the range of natural variation.

    If an "additional 2.83 W/m^2 isn’t going to matter at all" then why do we see such a dramatic rapid loss of global ice?

  7. Stealth, Tony Watts et al. have ridden Phil Jones for years over his honest statement about the significance of a surface temp trend, knowing full well that the short-term surface trend is meaningless without extremely careful analysis.  I'm not giving Watts one angstrom of wiggle room.  Watts is in the game for rhetoric, not for science.  He's paid to cast doubt, not to advance science.  He wants to be able to say "all plants die at 200ppm CO2" rather than get it right.  If he could squeak 250ppm and get away with it, he'd do it.  How many errors does a guy get before we find him not worth the trouble?  Pointing out Watts', Eschenbach's, and Goddard's absurdities is a cottage industry.

  8. Stealth...  You also need to understand who the CO2 Science folks are.  These are the Idsos, who are, literally, paid by the FF industry to produce material to cast doubt on climate science.

    The experimental test you link to is patently absurd.  You just can't compare CO2 concentrations in an aquarium to planetary level systems.  The very notion that this experiment has any larger implications should be a clue as to the motivations of the Idsos (and their conclusions are contradicted by published research).

    There is a large body of actual research published on this topic (which is going off topic for this thread) that you can read.  You just have to get out there and find it.  I would link to it for you but you should probably locate it yourself so that you know that I'm not trying to mislead you in any way.

    My favorite quote of all time related to the climate change issue comes from the late Dr Stephen Schneider, where he says, "'Good for us' and 'end of the world' are the two lowest probability outcomes."  So, when you see people like the Idsos claiming this is all good for us, that speaks volumes about their reliability.

  9. Craig Idso is paid over $11k/month.  (Just wanted to source the claim above.)

  10. Hmm. Check those figures. From pre-industrial to double CO2, the increase is 3.7 W/m2. That is 1.1C above preindustrial without feedbacks. However, you can't raise temperature without increasing water vapour, so at the very least you need this feedback. Ice loss gives you an albedo feedback (and remember this is largely the driver for the glacial-interglacial cycle), and on a longer scale you have the carbon cycle feedback. I don't see how you can say "not too bad" without actually running the numbers. That's what models are for.

    By the way, if you are stepping into a sewer like CO2 "Science", make sure you actually read any reference it gives. This site specialises in misrepresenting papers, safe in the knowledge that most readers want good news and won't check. Don't fall for it.

  11. Stealth @204, you appear to have ignored the substance of my critique, while simply repeating distortions from the WUWT article.  Specifically, two key errors in the WUWT article that I focussed on were that their table of "universally accepted" values was anything but, and that it did not show the most important feature of the "universally accepted" values of CO2 forcing, i.e. near constant forcing for each doubling of CO2.  You present your own table of values derived from the University of Chicago version of Modtran, which is superficially similar to that at WUWT, without noticing that it supports my criticism, rather than rebuts it.  To illustrate that, I have expanded your table of values using Modtran, and shifted the baseline percentage to the forcing for 1000 ppmv so that the percentages match those at WUWT.  The result is as follows:

    Doubling    Forcing (W/m^2)   Percentage   WUWT Perc.
    50-100      2.86              9.10%
    100-200     2.76              8.78%        10.10%
    200-400     2.83              9.00%        7.30%
    300-600     2.83              9.00%        5.20%
    400-800     2.83              9.00%        4.60%
    500-1000    2.86              9.10%        2.10%
    800-1600    2.85              9.07%
    Average     2.83              9.01%

    Please note that while the WUWT values descend rapidly with increased CO2 concentration, the Modtran values are near constant for each doubling of CO2, regardless of the initial CO2 concentration.  In that, the Modtran values reflect the standard physics of the radiative forcing of CO2.  In contrast, the WUWT figures are simply bullshit, as is the claim that they are universally accepted values.

    A third key error was the claim that increases of CO2 concentration above 1000 ppmv "... can only ever be absolutely minimal even if CO2 concentrations were to increase indefinitely."  That claim is superficially supported by the declining values per doubling shown by WUWT, but is definitively refuted by the near constant forcing per doubling of CO2 shown by the general equation for the radiative forcing of CO2 (see Table 1), and also by the Modtran model, which reflects the same radiative physics.  Again this can be shown in the Modtran model by simply redoubling from 140 ppmv, which gives successive increments in radiative forcing for each doubling of 2.8, 2.82, 2.83, 2.86, and 2.95, with an average of 2.85 W/m^2.  You will note that while all values are approximately the same, the final value, for a doubling from 2,240 to 4,480 ppmv, is the largest.  I ran the values out to 4,480 ppmv because it is near, but below, the upper limit of the increase in atmospheric CO2 that could be caused by humans through the combustion of fossil fuels.
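    For anyone who wants to repeat that check, a minimal Python sketch that simply differences the upward-flux values from the table @204 gives the same near-constant ~2.8 W/m^2 per doubling:

      # Upward IR flux values (W/m^2) copied from the Modtran run posted @204
      # (1976 US Standard Atmosphere, tool defaults otherwise).
      flux = {100: 264.17, 200: 261.41, 300: 259.74, 400: 258.58,
              500: 257.67, 600: 256.91, 800: 255.75}

      for low, high in [(100, 200), (200, 400), (300, 600), (400, 800)]:
          print(low, "->", high, ":", round(flux[low] - flux[high], 2), "W/m^2")
      # 100 -> 200 : 2.76
      # 200 -> 400 : 2.83
      # 300 -> 600 : 2.83
      # 400 -> 800 : 2.83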

    The argument in the WUWT article is based on the accuracy of their radiative forcing data, which we have seen to be bullshit; the absurd claim that radiative forcing reaches an asymptote at (or slightly above) 1000 ppmv; and an absurdly low value for radiative forcing which I did not address.  You have shown nothing to the contrary, and indeed if you look carefully at your data, it contradicts the WUWT article as clearly as I did.

    Finally, with regard to the minimum CO2 concentration for the growth of plants, it is known that plants using C3 photosynthesis are dependent on the ambient CO2 concentration for their growth rate.  In contrast, C4 plants' rates of photosynthesis are "... independent of the intercellular CO2 concentration" (Ehleringer and Bjorkman, 1977).  On a hunch, I looked up the pathway of Golden Pothos, the plant used by the Idsos in their experiment.  Unsurprisingly it was a C3 plant.  Odd that they should not mention this important fact, and its importance in relation to their experiment.  Nevertheless, their experiment does show that for C3 plants, atmospheric CO2 concentrations << 150 ppmv are too low for growth when they are adapted to a high CO2 environment.  Given a multigeneration adaptation process to lowering CO2, however, it is quite possible that C3 plants would survive and even flourish at lower CO2 levels.  They would, however, be at a competitive disadvantage to C4 plants.  Any further discussion of this topic should be taken to the relevant thread, whose advanced article I highly recommend to you.  As related to my original criticism, it is IMO absurd to benchmark a minimum CO2 level at levels 33-100% higher than laboratory estimates of the minimum required, and 10% higher than CO2 concentrations plant life is known to have survived for periods longer than the lifespan of most trees.

  12. Don't want to get too far into the "CO2 is plant food" junk (see the articles here if you swallow this stuff), but also note that photosynthesis is temperature dependent and declines above 25C.

  13. stealth @204 replied to Rob Honeycutt, saying:

    "Based, on the MODTRAN calculator, doubling CO2 to 800 ppmv is only going to trap and additional 2.83 W/m^2, which is 0.21% of the solar energy hitting the top of the atmosphere. I fail to see how this is can possibly be so bad ..."

    To begin with, let's notice that Modtran is a simple Line-by-line Radiative Transfer Model (LBLRTM), and the version online at the University of Chicago is a 1987 version of that Line-by-line model.  By its nature a LBLRTM only determines the radiative flux up and down at different levels of the atmosphere.  It does not show changes of surface temperature or any other response to differing conditions.  Further, no single model of atmospheric conditions can be the equivalent of "average" conditions.  This is especially so of the 1976 US Standard atmosphere, which was designed for the aerospace industry rather than for modelling radiative transfer.  This is evident in the approx 2.83 W/m^2 per doubling of CO2 on that model with the 1976 US standard atmosphere.  To determine the true forcing for a doubling of CO2, you need to run a LBLRT model for a variety of conditions to match the variety of conditions met on Earth, then weight the results according to the proportion of the Earth's surface on which those conditions are met.  Alternatively you can use a Global Circulation Model.  Myhre et al, 1998 did both, determining that the radiative forcing of CO2 equals 5.35 times the natural log of the new CO2 concentration divided by the initial CO2 concentration.  For doubling CO2, that is 3.7 W/m^2.

    Further, it is misleading to take the doubling from the current CO2 concentration.  The Earth's CO2 concentration has increased by 43% from the preindustrial, and temperatures have not yet reached the equilibrium temperature for that increase.  Estimating the further increase by taking a doubling of CO2 from current concentrations ignores the temperature increase still in the pipeline.  You can partially compensate for that by adding the current radiative imbalance (0.6 W/m^2) to the radiative forcing of doubling CO2, but only partially because the slower feedbacks such as sea ice and snow cover have not yet reached equilibrium for the current temperature, indicating future warming from constant CO2 at 400 ppmv is greater than would be estimated from the current radiative imbalance alone.

    Ignoring those additional factors, just how big, in relative terms is the 3.7 W/m^2 from a doubling of CO2?

    Stealth measures the relative scale by taking the Total Solar Irradiance (TSI), or approximately 1366 W/m^2.  That value, however, is the power of sunlight falling on a disc perpendicular to the sunlight at the Earth's orbit.  The Earth is not a disc.  It is a sphere, and hence has 4 times the surface area of a disc of the same radius.  Therefore the TSI needs to be divided by 4 to determine the TOA insolation.  Further, 30% of the sunlight is simply reflected back to space.  As a result, the actual "solar forcing" is 239 W/m^2.  One doubling of CO2 concentration has a forcing equal to 1.5% of that value.
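    In code, the geometry argument is one line of arithmetic; the TSI and albedo values below are the usual round numbers, so the results are approximate.

      TSI = 1366.0      # W/m^2 arriving at the Earth's orbit
      ALBEDO = 0.30     # fraction of sunlight reflected straight back to space

      absorbed_solar = TSI / 4.0 * (1.0 - ALBEDO)   # spread over a sphere, minus reflection
      print(absorbed_solar)                         # ~239 W/m^2
      print(3.7 / absorbed_solar * 100.0)           # ~1.5% - one doubling of CO2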

    The sun is a mildly variable star.  The range of its variability is about 1.2 W/m^2, or 0.21 W/m^2 for decadal average insolation between the Maunder minimum and the recent grand solar maximum.  The forcing of a doubling of CO2 is approximately 18 times (1,760%) that difference.

    The difference in radiative forcing between the Last Glacial Maximum (LGM) and the present is approximately 8 W/m^2.  The CO2 forcing for doubling CO2 is 46% of that amount.  More importantly, increasing CO2 from preindustrial levels to 850 ppmv (the likely value in 2100 with no mitigation) increases radiative forcing by 5.9 W/m^2, or 74% of the difference between the LGM and now.

    Set against these values, we see that Stealth's calculation of a 0.21% difference is both wrong and misdirected.  Wrong because it uses the wrong value for both denominator and numerator.  Correcting that, the value rises to 1.5%.  But wrong also because it does not use a human scale.  Humans could not survive on an Earth with zero solar radiation.  They could not survive on an Earth with even a 10% reduction or increase in solar radiation either.  The radiative forcing of the CO2 introduced by industrialization, however, is very large compared to changes which humans could survive only with great discomfort.  It is potentially larger than those which permit humans to maintain their civilization.  Trying to gloss over that fact by irrelevant comparisons does nobody any favours.

  14. Tom Curtis' comment is so incisive, devastating, and, in his usual clear style, makes plain the point of doing something about emissions so well, that I felt compelled to share it specifically on Facebook.

    You may now return to your regularly-scheduled discussion.

  15. Very nice Tom. Stealth, hopefully that analysis will also show you how easy it is for "simple facts" like the ones you found to be arranged in a way that results in a misleading conclusion when you lack expert domain knowledge. Skepticism is good, but even better is your current practice of running your skeptical conclusions past other folks. Keep it up. Honest stuff like this is very educational.

  16. If the CO2 effect is saturated, then you'd better go and tell the people over at Licor. They seem to think that their infrared gas analyzers are capable of measuring CO2 from 0-3000ppm. If IR absorption is saturated at the current 400ppm, then Licor is going to have to give a lot of money back to people who bought their sensors expecting to be able to get good measurements at higher CO2 values.

  17. And not just the size of the increase in atmospheric CO2 but also the rate, which may be unprecedented.

  18. I have been looking more carefully at the PDF which is the detailed explanation of the WUWT story that is the basis of Stealth's comments.  The inconsistency and, frankly, the dishonesty of the author, Ed Hoskins, are shown in the fifth chart of the PDF (page 3).  It purports to show the expected temperature response to increases in CO2 according to a group of "skeptics" (Plimer, Carter, Ball, and Archibald), and three "IPCC assessments" by three authors.  It also shows an "IPCC average", but that is not the average value from any IPCC assessment, but rather the average of the three "IPCC assessments" by the three authors.

    The first thing to note about this chart is that it gets the values wrong.  Below are selected values from the chart, with the values as calculated using the standard formula for CO2 forcing, and using their 100-200 value as a benchmark for the temperature response:

    Concentration   Skeptic   Lindzen   Kondratjew   Charnock   “IPCC” Mean   IPCC
    100-200         0.29      0.56      0.89         1.48       0.98          3
    200-300         0.14      0.42      0.44         1.34       0.73
    Calc 200-300    0.17      0.33      0.52         0.87       0.57          1.76
    400-1000        0.15      0.7       1.19         1.78       1.22
    Calc 400-1000   0.38      0.74      1.18         1.96       1.29          3.97

    The "Calc" values are those calculated using the standard formula for radiative forcing, with a climate sensitivity factor determined by the claimed temperure response for a doubling of CO2 from 100-200 ppmv.  The '"IPCC" Mean' column is the mean of the three prior columns.

    Clearly the values in the table are not consistent with the standard formula, typically overestimating the response from 200-300 ppmv, and underestimating the response from 400-1000 ppmv.  That pattern, however, is not entirely consistent, being reversed in the case of Kondratjew.  Other than that odd inconsistency, this is just the same misrepresentation of temperature responses shown in my 211 above.

    More bizarre is the representation of the IPCC by Lindzen, Kondratjew and Charnock.  As can be seen, their values, and the mean of their values significantly underrepresent the best estimate of the IPCC AR4 of 3 C per doubling of CO2.  That is a well known result, and the misrepresentation can have no justification.  It especially cannot have any justification given that neither Kondratjew nor Charnock are authors (let alone lead authors) of any relevant chapter in the IPCC AR4.  Nor are they cited in any relevant chapter of the IPCC AR4.  Presenting their work as "IPCC assessments" is, therefore, grossly dishonest.

    Moving on, Hoskins shows another chart on page 2, which helps explain at least one cause of his error.  It is a reproduction of a chart produced by David Archibald, purportedly showing the temperature response for successive 20 ppmv increases in CO2 concentration.  Looking at Archibald's article, he claims it is a presentation, in bar graph form, of a chart posted by Willis Eschenbach on Climate Audit:

     

    As a side note, the forcing shown is 2.94 log(CO2)+233.6, and hence the Modtran settings used do not correspond to the global mean forcing.  The method used by Eschenbach, therefore, cannot produce a correct value for the global mean forcing of CO2.  As it happens, his values produce a forcing of 2 W/m^2 per doubling of CO2, and hence underestimate the true forcing by 46%.  Note, however, that it does rise linearly for each doubling of CO2, so Hoskins has not even mimicked Eschenbach accurately.

    Far more important is that it is a plot of the downward IR flux at ground level with all non-CO2 green house gases (including water vapour) present.  The IPCC, however, defines 'radiative forcing' as "... the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values".  (My emphasis.)

    It does so for two reasons.  First, the theory of radiative forcing is essentially a theory about the energy balance of the planet.  Therefore it is not the downward radiation at the surface that is at issue, but the balance between incoming and outgoing radiation at the top of the atmosphere.  

    Second, the temperature at the tropopause and at the surface are bound together by the lapse rate.  Therefore any temperature increase at the tropopause will be matched by a temperature increase at the surface.  Given reduced outward radiation at the tropopause, the energy imbalance between incoming solar radiation and outgoing IR radiation will result in warming at the surface and intermediate levels of the atmosphere.  Adjustments in the rate of convection driven by temperature differences will reestablish the lapse rate, maintaining the same linear relationship between tropopause and surface temperature (ignoring the lapse rate feedback).  The net effect is that the same effective temperature increase will occur at all levels, resulting in a larger increase in downward radiation at the surface than the initial change at either the tropopause or the surface.

    So, Eschenbach (and Hoskins) derive their values incorrectly because they simply do not understand the theory they are criticizing, a theory which is accepted without dispute by knowledgeable "skeptics" such as Lindzen and Spencer.  They are in the same boat of denying simple physics as are the "skydragon slayers" whom Watts excoriates.  Watts, however, publishes pseudo-scientific claptrap on the same level as the "skydragon slayers" on a daily basis, because he also is completely ignorant of the theory he so vehemently rejects.

  19. Thanks, Tom.  This material needs to be worked into some sort of category level collection point -- e.g. WUWT Debunkings or WWWT (Watts Wrong With That).  I am most anxious to read Stealth's response, as s/he is a prime candidate for developing an authentic case of DK.  

  20. DSL @ 219: what does “DK” mean? I’d guess “denialist knowledge”.

    By the way, I am male, 52, with a BS in computer science and physics. And for the record, I do not blindly accept what you guys say, nor do I blindly accept what WUWT or Dr. Spencer’s website has to say. My natural inclination is to think that natural variability is a significant cause of recent warming, but I think CO2 may have a sizable role, hence the reason to discuss this with you guys. I figure you all are pro AGW, have access to the data, and can back up your position. I can be convinced with the right data and good arguments.

    Since Tom Curtis appropriately moved my question to this thread, I wanted to read the whole thing to avoid rehashing the same stuff all over. Overall, good stuff and information in this thread. As for my conversation with you guys, I think Tom Curtis made an excellent point @213, namely that energy at the TOA is not 1366 W/m^2 over the entire globe. I knew this since the curvature of earth reduces the W/m^2 as a function of the incidence angle. I was going to compute this with integration, but I like the clever way to just divide by 4 to arrive at an average “effective” energy input over the whole globe. Therefore, I agree that the average effective energy at the TOA is, over a 24 hour period, 341.5 W/m^2. I also think that the solar variance over a solar cycle probably isn’t meaningful to this value. It may be 1.3 W/m^2, but when divided by 4 we get a relatively small 0.325 W/m^2. I’m good with that.

    As for CO2 being fully saturated, I agree that it isn’t. The doubling of CO2 is probably on the order of 3 W/m^2 of reduced IR flux based on my computations with MODTRAN. What is the consensus value for the amount of reduced IR flux from the increase in CO2 from 1950 until now (CO2 increasing from 310 ppmv to 400 ppmv)? Running MODTRAN, I figure it is about 1 W/m^2.

    Next, I have some questions about energy balance, which relate to TC’s @218 comments, but I don’t think those belong in the CO2 saturation thread. John Cook commented @132 of this thread to the moderators about starting a thread on the energy budget. Is there such a thread? If so, please post a link here.

  21. stealth @220, the version of Modtran available at the University of Chicago website is an early version (1987), which has been superseded by 4 other versions since then.  More importantly, no single atmospheric condition will effectively model the mean effect over the entire Earth.  You need to take representative samples from a large number of conditions (tropical over forest, tropical over sand, tropical over ocean, various cloud conditions etc) and determine an average effect to get accurate values.  Unfortunately the University of Chicago interface does not allow that level of flexibility in conditions.  Nevertheless, Gunnar Myhre and associates did exactly that in 1998.  Their result was that over a broad range of values, the radiative forcing of CO2 was 5.35 * ln(CO2c/CO2i), where CO2c is the new value and CO2i is the initial value.  The error given is +/-1%.  This yields a forcing for the doubling of CO2 of 3.7 W/m^2, and a forcing of 1.36 W/m^2 for the CO2 increase from 310-400 ppmv.

    NOAA maintains a useful webpage showing the relevant formulas for the most significant GHGs that do not condense at normal atmospheric pressures and temperatures, along with their estimated radiative forcing.  For what it is worth, this is the aspect of climate science that even Spencer and Lindzen agree with, and which Anthony Watts feels insulted if you suggest he does not, even though he frequently publishes and publicly endorses articles which disagree with it.

    Regarding your questions, it is hard to suggest an appropriate thread without knowing what they are.  You could either use the search function on this site to find an appropriate topic, or ask the questions and we can switch topic for the answers if appropriate.

  22. Stealth - The 'go-to' reference for direct CO2 forcing is Myhre 1998, who estimates it with a simplified expression (a curve fit to the more complex radiative computations):

    ΔF = 5.35*ln(C/C0) W/m2

    This means that the radiative forcing increase from 310 to 400 ppm of CO2 would be 5.35*ln(400/310), or about 1.36 W/m2. A doubling of CO2 will produce a non-feedback ΔF of 3.7 W/m2.

     

  23. Stealth,

    DK = Dunning Kruger, an unwarranted belief in one's own expertise (and an inability to recognise true expertise in others) due to lacking the expertise necessary to recognise that one's own expertise is limited. Note that it's not the same as "stupid", and it can apply to anyone, no matter how much of an expert they are in their own domain, when they venture outside of that domain — xkcd's "Physicists" comic is a good example of this.

    Regarding energy balance, a quick Google search came up with this, although it's a few years old now. The basic point is that the difference between energy entering the system and energy leaving the system has not only been modelled, it's been empirically observed. Basic physics dictates that if there is a difference, then due to conservation of energy, that energy must be going somewhere. You can work out the accumulation of energy in the earth by trying to physically measure it everywhere you can think of, or you can just integrate the energy imbalance measured by the satellites at the top of the atmosphere over time. This last point really puts all the arguments over thermometer placement and adjustments into context.

  24. TC @221 and KR @222: I think my back-of-the-envelope hacks with MODTRAN are close enough to your 5.35 * ln(c1/c0) equation. They both produce relatively close numbers, but I’ll use the accepted 1.36 W/m^2 for CO2 from 310 ppmv to 400 ppmv. But this 1.36 W/m^2 is only 0.4% of the total back radiation from the sky based on the IPCC AR4 energy balance (http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-1-1.html). That is not very much relative to the whole earth system.

    JasonB @223: LOL. I’ve been so wrong on so many things I think I would have a hard time being DK; I doubt I can have an unwarranted belief in my climate expertise, because I don’t have any (just a BS in physics).

  25. Stealth - It doesn't matter what the baseline is. Really. Because the baseline represents the current situation, the Holocene, the environment we have dealt with for the last 8-10 kY. 

    What matters is the change. The forcing deltas, the temperature deltas, the shifts in growth zone, in sea level, in heat wave frequencies, etc. Only 0.4%? Irrelevant! How much will the change affect us, what do we have to do to adapt - that is the real question. 

    See CO2 is just a trace gas - the baseline is composed of multiple elements, of many components, and it simply doesn't matter what the magnitude of the various components is. What matters is the change in components, in forcings, and how those changes affect us. Focusing on the scale of a change versus a baseline, without looking at how that change affects us in real terms, is a false minimization of the issue. 

  26. stealth @224, given that it is the Top Of Atmosphere radiative forcing that we are discussing, the proper comparison is not with the back radiation (which is of secondary importance) but with the Outgoing Longwave Radiation (OLR).  Granted that 1.3 W/m^2 is just 0.5% of the OLR, but then, just 0.5% of the Global Mean Surface Temperature (GMST) is 7 C.  Percentages without perspective are not very informative here.

    To calculate the temperature impact of a given radiative forcing prior to any feedbacks, you must recognize that a positive radiative forcing represents a reduction in the OLR.  In order to restore the TOA energy balance, and assuming no feedbacks, the OLR must be restored to its original value.  That requires an increase in the effective temperature of radiation to space.  Assume that 240 W/m^2 OLR is required for the energy balance; then the effective temperature of radiation to space must be (240/(5.67x10^-8))^0.25, or 255 K.  A radiative forcing of 1.3 W/m^2, then, reduces the effective temperature to 254.7 K.  Consequently a 0.3 K increase, ignoring feedbacks, is required to restore radiative balance.

    For the full 3.7 W/m^2 from a doubling of CO2, the reduction in effective temperature is 1 K, and hence a 1 K increase is required to restore radiative balance, ignoring feedbacks.  Finally, because atmospheric temperatures within the troposphere are locked together by convection so as to follow the lapse rate, any change in temperature at the top or middle of the troposphere that results from the need to restore radiative balance will result in a change in surface temperature of the same size.  After that occurs, the increase in back radiation will be larger than the radiative forcing, but the energy balance at equilibrium will still be neutral because heat transfer from the surface by convection and latent heat will increase to make up the difference.
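    The no-feedback arithmetic above is easy to verify with a short Python sketch; the 240 W/m^2 OLR figure is the round number used in the comment, and the calculation simply inverts the Stefan-Boltzmann law, T = (OLR/sigma)^0.25.

      SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def effective_temperature(olr):
          return (olr / SIGMA) ** 0.25

      t0 = effective_temperature(240.0)
      print(t0)                                          # ~255 K
      print(t0 - effective_temperature(240.0 - 1.3))     # ~0.3 K for a 1.3 W/m^2 forcing
      print(t0 - effective_temperature(240.0 - 3.7))     # ~1.0 K for doubled CO2, no feedbacks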

    Finally, the most recent surface and TOA energy balance diagram is from Stevens et al  2012:

  27. Also, just looking at the change in temperature from increased CO2 isn't that meaningful. You cannot change temperature without also invoking the water vapour feedback. Calculating the other feedbacks is complex (hence the range in estimates for climate sensitivity), but the Planck feedback plus the water vapour feedback should be the baseline for considering the effects of increased CO2.

  28. scaddenp @227, it is incorrect to think of the water vapour feedback as a singular factor.  To illustrate this, consider the procedure for estimating the Planck response plus water vapour feedback using Modtran.  I will do so just using the 1976 US Standard Atmosphere with no clouds for illustrative purposes.  To do it properly, you should do it for a representative sample of environmental and cloud conditions, and take a weighted average, something it is not strictly possible to do with the University of Chicago Modtran model due to the limited number of environmental conditions specified.  Bearing that caveat in mind, however, we proceed as follows:

    1. We determine the upward IR flux at 280 ppmv with all other values unadjusted (260.02 W/m^2).
    2. We increase the CO2 concentration to 560 ppmv, thereby reducing the upward IR flux.
    3. We increase the temperature offset until the upward IR flux again matches the initial value (Offset of 0.86 C required.)  That represents the Planck response.
    4. We increase the water vapor scale to equal ((288 plus offset)/288)^4 to allow for the increased water vapour pressure at the higher temperature (1.012 scale factor).
    5. We again increase the temperature offset to restore the upward IR flux to the original value (Offset of 0.96 C required).  This represents the increased water vapour pressure due to the initial water vapour response.
    6. You repeat step five until the value stabilizes.  You have now calculated the Planck response plus the water vapour feedback to the Planck response.

    Now, at this stage we may want to calculate the snow albedo feedback to the Planck response plus WV feedback to the Planck response.  That will again increase the offset temperature required, which will in turn result in another round of WV responses, and a further reduction in snow cover, and so on.  

    It is because feedbacks iterate like this that it is not correct to talk about the WV feedback as a singular factor.  Suppose, for example, that the total cloud feedback were slightly negative rather than (as is more likely) positive.  Then the total WV feedback will be less.  On the other hand, if the snow albedo feedback is stronger than expected (as is known from observation), that will result in a stronger WV feedback.

    Because feedbacks interact in this way, I think it is conceptually better to determine the Planck response, and then determine the feedback factors as a group to the extent that is possible.
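    The convergence of that kind of iteration can be illustrated with a generic sketch (this is not the Modtran procedure itself): if each round of warming triggers a further response equal to a fixed fraction f of the previous increment, the series sums to deltaT0 / (1 - f). The 1.0 C Planck response and f = 0.33 below are illustrative round numbers only.

      def iterate_feedback(delta_t_planck, f, steps=50):
          total, increment = 0.0, delta_t_planck
          for _ in range(steps):
              total += increment
              increment *= f        # each increment triggers a further, smaller response
          return total

      print(iterate_feedback(1.0, 0.33))   # ~1.49 C after iterating
      print(1.0 / (1.0 - 0.33))            # closed form of the same geometric series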

  29. Tom, I do realise that H2O isn't that straightforward a feedback - especially if you take into account clouds (I've worked through the excellent series at SoD) - but my understanding from the IPCC reports is that uncertainties with the GHE of water vapour are in the "well understood" category, with good agreement between theory and experimental/observational data (unlike, say, clouds, ice sheet loss, clathrates etc). Ignoring clouds, I understand the effect to be effectively double the Planck response. For that reason, I think claims of "only" 1K for double CO2 are particularly spurious. You can argue about the feedbacks from clouds and melting ice, and especially ocean saturation and methane release, but you can't argue too much about the water vapour.

  30. scaddenp @229, it is not that the water vapour feedback is not well understood.  Rather, because feedbacks are responses to warming or cooling, other feedbacks which also warm (or cool) will also result in an additional WV feedback response.  Therefore you cannot quantify the WV feedback without quantifying all feedbacks.  The IPCC recognize this.  They quantify the WV feedback to the Planck response alone, but note that "... because of the inherently nonlinear nature of the response to feedbacks, the final impact on sensitivity is not simply the sum of these responses. The effect of multiple positive feedbacks is that they mutually amplify each other’s impact on climate sensitivity."  Consequently, while the WV+Lapse Rate feedback increases the temperature response by 50% of the Planck response ignoring other feedbacks, their total contribution to climate sensitivity will be greater than 0.5 C.

    Ignoring cloud feedbacks, the IPCC indicates the other feedbacks will result in an increase of temperature of 1.9 C for a doubling of CO2.  If the cloud feedback would increase the temperature response by 50% by itself, then the final climate sensitivity will be > 2.85 C, with a combined WV, Lapse rate and Surface Albedo feedback greater than 0.9 C.  If, however, it is -10%, the resulting climate sensitivity will be less than 1.71 C, and the contribution of the non-cloud feedbacks will be less than 0.9 C.

  31. Hello,

    I have a question that I was hoping might be answered here.  I've read through the comments and admit that most of what is being discussed are not things I understand well.  It seems that the results of difference spectra reported by Harries et al. are a smoking gun.  IR measurements from space over time provide concrete, easy to interpret proof that the composition of the atmosphere has changed with time in such a way that more IR is captured.

    In trying to understand the methodology better, I came across this more recent publication by the same author using the same approach.  It included data from another satellite in 2003.  Here is the result:

    The paper states that "The CO2 band at 720 cm-1 ... shows some interesting behavior, with strong negative brightness temperature difference features for 1997-1970 ... whereas, the 2003-1997 ... shows a zero signature." (Edited for clarity relative to my question--the essence of it is captured)

    The "expanation" offered in the paper is essentially that there most be some compensating effect since it is known that CO2 concentrations increased between 1997 and 2003.

    I'm willing, in my ignorance, to grant that that's true.  However, I wonder: if the presence of a difference between 1970 and 1997 is seen as proof that CO2 isn't saturated, why is zero difference between 1997 and 2003 not powerful evidence that it is?  It just seems to me that if the former evidence is enough to make one feel sure CO2 is absorbing more, then the latter evidence should convince the same person that CO2 is not absorbing more (between those dates).

    Any insight would be appreciated!

    Response:

    [RH] Fixed image width.

  32. Sorry for my various typos.  I should have read it over before submitting!

  33. basnapple @232, I have difficulty reconciling your description of the "explanation" by Griggs and Harries with that which they offer in the preprint version of their paper.

    Specifically, in that version they show (Fig 3 a&b) that two popular reanalysis products do not predict the observed changes in OLR.  They also show, however, that there is a profile constrained by observations that does predict the observed changes in OLR (Fig 3 c).  That means the observed changes in OLR are consistent with the expectations of radiative physics plus observed changes in gas and temperature profiles, even though the observations of those profiles are of insufficient resolution to permit accurate prediction of these small changes in OLR.  As they put it,

    "Simulations created using profiles merged from a number of datasets show that we can explain the differences seen in the CO2 and ozone bands by the known changes in the those gases over the last 34 years." 

    This contrasts sharply with your claimed "explanation", which is of course no explanation at all.  Converting a claim that changes in OLR lie within those expected given known limits of observation, and hence that there is no discrepancy, into a claim that a discrepancy exists for which there is no explanation is a very substantial change.  I doubt that editorial review would have forced so large a change on the paper.  Nevertheless I ask that you quote the original sections of the paper as published to show that you have indeed fairly represented Griggs and Harries. 

     

  34. Hello Tom,

    Thanks for the reply.  I read through the preprint as best I could.  I am unfamiliar with the models used and some of the terminology.  Is it true that, in their analysis, the temperature profile of the atmosphere is the fitting parameter?

    As far as how well I represented their claims, I will quote with some (perhaps unnecessary) context:

    First quote, commenting on the difference spectra figure I included in my previous post:

    "An initial inspection indicates that the processing of the data has not caused any major artifacts. In all cases the difference spectra are seen to have consistent and reproducible
    features. The only sign of asymmetry (which could indicate a mismatch of wavenumber scales between the spectra) is in the CO2 (0110 → 1000) band at 720 cm-1, which may be due to its position on the very steep high frequency wing of the CO2 fundamental centered
    at 667 cm-1."

    Second quote, in reference to the same:

    "A negative brightness temperature difference is observed in the CO2 band at 720 cm-1 in the IMG–IRIS (1997–70) and the AIRS–IRIS (2003–1970) difference spectra, indicating increasing CO2 concentrations, consistent with the Mauna Loa record (Keeling et al. 1995). However, this channel in the difference is also sensitive to temperature, and we note that in the 2003–1997 difference, despite a growth in CO2 between these years, there is no signal at 720 cm-1."

    Third quote is where the portion in my previous post comes from, now with context:

    "The CO2 band at 720 cm-1, though asymmetric for the reasons stated earlier, nevertheless shows some interesting behavior, with strong negative brightness temperature difference features for 1997–1970 and 2003–1970: whereas, the 2003–1997 (a much shorter period, of course) shows a zero signature. Since we know independently that the CO2 concentration globally continued to rise between 1997 and 2003, we must conclude that the 2003–1997 result must be due to changes in temperature that compensate for the increase in CO2. This would mean a warming of the atmosphere at those heights that are the source of the emission in the center of this band. This is somewhat contrary to the general (small) cooling of the stratosphere at tropical latitudes." (emphasis mine, I'm only trying to show where my "explanation" came from)

    Fourth quote, regarding differences between model results and observations:

    "Finally, there exists a marked gradient in the simulated spectrum between 800 and 700 cm-1, which is absent in the observations. This coincides with the far wings of the strong CO2 band centered at 667 cm-1. In sensitivity tests, this gradient showed sensitivity to the amount of CO2, and is therefore related to the strong CO2 band, and may reflect reanalysis uncertainties in temperature."

    The paper includes appendices detailing the temperature profiles used to coerce the models to the data.

    I would appreciate some help digesting this.

  35. basnappl @234, first a correction.  I thought the copy of Griggs and Harries 2004 was a preprint of the paper you were looking at.  In fact, you were looking at Griggs and Harries 2007, which is a separate (although related) paper.  Based on that, my comment @233 is correct so far as it goes.  That is, observations of changes in OLR match those predicted by models, within observational limits of GHG, temperature and H2O profiles.  That is, the slight discrepancy you have pointed out results from our limited knowledge of conditions within the atmosphere rather than any failing of Line By Line (LBL) or band models of radiative transfer.

    One thing we do know is that those radiative transfer models are extraordinarily accurate.  This is shown by the match between one particular model and observations shown in the graph below:

    [Figure: spectrum calculated by a radiative transfer model overlaid on the observed spectrum]

    (For more examples, see my discussion here and my article here.)

    Because of these tests of accuracy when the conditions are well known, and because radiative transfer models are based on very well known physics, the discrepancy you point to is almost certainly the result of atmospheric conditions rather than deficiencies in the radiative transfer models.  That being the case, there is no question of the greenhouse effect being saturated, for all radiative transfer models predict the logarithmic relationship between CO2 concentration and radiative forcing, i.e., that you get approximately the same increase in forcing for each doubling of CO2.

    It is interesting, however, to look at the reason for the "zero signature" between 1997 and 2003.  Griggs and Harries are more explicit in 2007 than in 2004, attributing the lack of signature to the temperature profile.  As it happens, using the lapse rate it is possible to use the brightness temperature as a rough indicator of altitude.  If we do so, we see that in order to have zero influence in the band in question, the increase in CO2 between 1997 and 2003 would need to be matched by an increase in temperature at 7-10 km altitude greater than that at the surface, and greater than that expected by the models.  In other words, it appears that Griggs and Harries have found a tropospheric hotspot.
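
    As a rough illustration of that lapse-rate conversion, here is a minimal sketch (the 288 K surface temperature and 6.5 K/km mean lapse rate are illustrative round numbers, not values from the paper):

    # Crude altitude of emission from a brightness temperature, using a mean
    # tropospheric lapse rate.  Illustrative numbers only.
    def brightness_temp_to_altitude_km(t_bright_k, t_surface_k=288.0, lapse_k_per_km=6.5):
        return max(0.0, (t_surface_k - t_bright_k) / lapse_k_per_km)

    # A brightness temperature of ~235 K maps to roughly 8 km, i.e. the
    # mid-to-upper troposphere.
    print(round(brightness_temp_to_altitude_km(235.0), 1))  # 8.2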

  36. KR @222. I went and read Myhre 1998, which produced the ΔF = 5.35 ln(C/C0) W/m2 estimate for CO2 forcing. The issue I have with this is that the alpha coefficient of 5.35 is derived from three different models, which assumes that those models are accurate representations of the global atmosphere.
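
    To make that expression concrete, here is what it predicts for a couple of round-number concentrations (a minimal sketch; the specific values are illustrative):

    import math

    def co2_forcing(c_ppm, c0_ppm, alpha=5.35):
        # Myhre et al. (1998) simplified expression: dF = alpha * ln(C/C0), in W/m^2
        return alpha * math.log(c_ppm / c0_ppm)

    print(round(co2_forcing(560, 280), 2))  # one doubling: ~3.71 W/m^2
    print(round(co2_forcing(400, 280), 2))  # 280 -> 400 ppm: ~1.91 W/m^2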

    Being a software modeler myself (for 30+ years) dealing with RF energy through the atmosphere, I understand that what is measured in the lab, what is modeled in software, and what the real world does are almost always very different. The real world is so noisy and chaotic that I have found models are almost useless in predicting what will really happen in the real world. I would be stunned if this is not also true for this forcing equation, and for GCMs in general.

    Trying to measure this value for the real world, on average, is probably impossible given that the atmosphere is so different moment to moment and place to place, and changes in long term trends may be hard to determine since we have so little empirical measurement data. So while this is the “best go-to reference” we have, I doubt it is realistic or correct relative to what is really happening in the real world. I admit it *might be* correct, but I cannot prove or disprove it, nor can anyone else. This isn’t meant as a criticism of experts in this field, only a realization of my experience that the atmosphere is impossible to model accurately.

    Tom Curtis @226: I think you made a minor math error. If the average global temperature is 15C, or 288K, then 0.5% of this is 1.44K. I love the energy balance diagram by Stephens et al 2012. I went and read the paper and I found it quite interesting. Since Kevin Trenberth has generated a topic on the energy budget (http://www.skepticalscience.com/news.php?n=865), I am going to take my questions and comments about the energy budget over there.

    Scaddenp @227. Just because warmer air can hold more water vapor doesn't mean it will. Charts I have seen (http://www.climate4you.com/GreenhouseGasses.htm) show humidity at low levels staying about the same over time, while relative humidity at higher altitudes has been decreasing. If these charts are correct, this seems to suggest that water vapor has not increased as the atmosphere has warmed over this time period (65 years).

    Response:

    [TD] For your comment on water vapor, please read the counterargument to "Humidity is Falling," and if you disagree with the peer-reviewed evidence presented there, please comment there.  Regarding water vapor in the stratosphere, see "What is the role of stratospheric water vapor in global warming?"  Anyone who responds to Stealth's comment here about water vapor, please, please do so on those other threads, not here.  Everybody must help to keep the conversations on the appropriate threads.  Thank you.

    [TD] Thank you for recognizing that another post is the right place to talk about energy budget!

  37. Stealth - Regarding Myhre 1998:

    "I doubt it is realistic or correct relative to what is really happening in the real world."

    You would be wrong. Those model results have been borne out empirically by satellite observations, such as those discussed in the opening post (Harries et al 2001 in particular). Have you read the opening post of this thread?

    Yes, the Myhre results are based on numeric models of radiative absorption/emission - using atmospheric column estimates from multiple latitudes, and three different models, to minimize bias and capture atmospheric variation. And they have been confirmed - the satellite spectra show the same outgoing radiation as predicted by those models. Therefore the modelling of slightly different atmospheric compositions is trustworthy. There is really no doubt about them, no significant uncertainties in the direct forcing calculations.

    If your model reproduces observations from basic physics, it's a good model. Your concerns about uncertainties are unwarranted.

  38. KR @237 To be honest, I’m fairly stunned by your response and I am not sure how to address it. Just out of curiosity, what is your background?

    You seem to be asserting that the alpha coefficient of 5.35 of the CO2 forcing function is both accurate and precise because it has been empirically measured. That cannot be true, otherwise people would not be building models as a way to attempt to arrive at these values. Correct? Why build a model when you can just measure it? Your assertion that this is basic physics and models match the real world simply cannot be true. I understand physics (I have a physics degree) and I build software models for a living (I also have a computer science degree), so I think I am qualified to speak to models and physics.

    The climate is not that simple – far from it – so my assertion of uncertainty is, I believe, completely accurate and true. As further evidence to “prove” that there is enormous uncertainty in the climate, just read the TOA energy balance paper by Stephens et al that was referenced by Tom Curtis @226. This is a great paper! It is peer reviewed. At the end it states: “The net energy balance is the sum of individual fluxes. The current uncertainty in this net surface energy balance is large, and amounts to approximately 17 Wm–2. This uncertainty is an order of magnitude larger than the changes to the net surface fluxes associated with increasing greenhouse gases in the atmosphere.”

    Think about that – the uncertainty in the energy budget is ten times larger than the fluxes associated with GHGs. This is clearly proof that my assertions of uncertainty are completely warranted.

  39. @StealthAircraftSoftwareModeler:

    Out of curiosity, which climate models have you analyzed in depth?

  40. Stealth - Perhaps you should re-read just what you have quoted:

    ...net energy balance... This uncertainty is an order of magnitude larger than the changes to the net surface fluxes associated with increasing greenhouse gases in the atmosphere.

    Since what we are discussing WRT Myhre 1998 are radiative transfer codes, and the change in forcings due to changes in atmospheric composition, we are indeed speaking of the 'changes to the net surface fluxes', which have far lower uncertainties. You seem to be conflating uncertainties in accounting for multiple energy flows into a total budget with uncertainties in computing atmospheric spectral response - applying an entire collection of uncertainties to a tiny portion of the puzzle.

    If you wish to discuss the total energy budget, the sum of individual components (and their uncertainties) of the energy budget, there is an appropriate thread. However, the radiative transfer codes are well proven, giving results within 1% of observations (Chen et al 2007), including dealing with compositional changes - arguing any significant uncertainties in that regard (as you have) is quite frankly unsupportable.

  41. Stealth:

    Appeals to personal qualifications and arguments from incredulity such as on display in #236 and #238 are not terribly convincing.

    All this:

    Being a software modeler myself (for 30+ years) dealing with RF energy through the atmosphere, I understand that what is measured in the lab, what is modeled in software, and what the real world does are almost always very different. The real world is so noisy and chaotic that I have found models are almost useless in predicting what will really happen in the real world. I would be stunned if this is not also true for this forcing equation, and for GCMs in general.

    Trying to measure this value for the real world, on average, is probably impossible given that the atmosphere is so different moment to moment and place to place, and changes in long term trends may be hard to determine since we have so little empirical measurement data.

    I admit it *might be* correct, but I cannot prove or disprove it, nor can anyone else. This isn’t meant as a criticism of experts in this field, only a realization of my experience that the atmosphere is impossible to model accurately. [Emphasis mine.]

    Your assertion that this is basic physics and models match the real world simply cannot be true. I understand physics (I have a physics degree) and I build software models for a living (I also have a computer science degree), so I think I am qualified to speak to models and physics.

    The climate is not that simple – far from it – so my assertion of uncertainty is, I believe, completely accurate and true.

    strikes me as practically equivalent to:

    I know what I'm talking about, trust me & not the data.

    I don't believe this is true, therefore it is not true.

    although I am sure it was not your intent to communicate such sentiments.

    (I have highlighted in the quotes from your comments the three words that are often the butt of jokes on medical blogs: "in my experience" or variants thereof are sometimes called "the most dangerous words in medicine". I see no reason why this maxim should not generally be applicable to other scientific domains, particularly when the person asserting it is arguing against the weight of evidence, as you are.)

  42. Stealth - you say: "That cannot be true, otherwise people would not be building models as a way to attempt to arrive at these values."


    I think you are confusing different models here. I don't think anyone is doing much work on refining the 5.35 value from Myhre. As pointed out, the uncertainties are low and the value matches observation.

    By comparison, global GCMs are trying to model what the climate response to this ΔF will be. These models do have significant uncertainties, resulting in varying estimates for climate sensitivity of 2-4.5°C per doubling. Don't confuse the difficulties of modelling the climate response with the modelling required for calculating the forcings. Different models.
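
    To make that distinction concrete, a minimal sketch (the 3.7 W/m2 per doubling follows from 5.35*ln 2; the example forcing and the use of the 2-4.5°C spread are purely illustrative):

    # Equilibrium warming for a given forcing, for several assumed sensitivities.
    # Sensitivity is quoted per CO2 doubling, i.e. per ~3.7 W/m^2 of forcing.
    F2X = 3.7   # W/m^2 per doubling, from 5.35 * ln(2)
    dF = 1.9    # illustrative forcing, roughly the 280 -> 400 ppm change

    for sensitivity_c in (2.0, 3.0, 4.5):
        warming = sensitivity_c * dF / F2X
        print(sensitivity_c, round(warming, 1))   # ~1.0, 1.5, 2.3 C at equilibrium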

  43. ps. After getting the full picture on humidity from the link indicated, I'd be interested in your assessment of tactics used by Climate4you to mislead.

  44. Stealth


    A very important distinction needs to be made and clarified here. The models being referred to, which calculate the 5.35 ln(C/C0) result, are not climate models! They are Radiative Transfer Codes; solutions to the equation of Radiative Transfer. Given a known state for a column of gas - temperature, pressure, humidity and composition profiles - they calculate the instantaneous radiative state at any point in that column. As such, the underlying maths is actually relatively simple. And they work from databases of very well established spectroscopic data. There are no assumptions or time-based modelling or projections; they calculate a single snapshot.


    A bit like engineering stress analysis programs, they do the same simple calculations many times over for small cells to build up the composite picture. And their results are used in a wide range of applications; climatology is only one of them. They are used in astronomy, military and satellite communications modelling, a whole host of different domains. And their results have been extensively tested in the field and in the lab.
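
    As a cartoon of that layer-by-layer bookkeeping, here is a minimal grey-gas sketch (real codes work line by line over spectroscopic databases; every number below is illustrative, not taken from any of the codes mentioned):

    import math

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def outgoing_flux(t_surface, layer_temps, layer_optical_depths):
        # Toy 'grey' column: surface emission attenuated by all layers above it,
        # plus each layer's own emission attenuated by the layers above that layer.
        def transmission_above(i):
            return math.exp(-sum(layer_optical_depths[i:]))

        flux = SIGMA * t_surface**4 * transmission_above(0)
        for i, (t, tau) in enumerate(zip(layer_temps, layer_optical_depths)):
            emissivity = 1.0 - math.exp(-tau)
            flux += emissivity * SIGMA * t**4 * transmission_above(i + 1)
        return flux

    # Three-layer column, cooler aloft; thickening the layers (more absorber)
    # lowers the outgoing flux because more of the emission comes from colder levels.
    print(round(outgoing_flux(288.0, [270.0, 250.0, 225.0], [0.4, 0.3, 0.2]), 1))
    print(round(outgoing_flux(288.0, [270.0, 250.0, 225.0], [0.6, 0.45, 0.3]), 1))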

  45. John Hartz @239: I would love to examine the source code of some GCMs to see if Dr. Freeman Dyson’s claim that GCMs are full of fudge factors is true. I suspect that it is true because software modelers always have to make assumptions and design trades in order to get software to run in a reasonable amount of time. Can I get the source to any of the GCM models? I doubt that I can, but it is worth a try. I know there are a dozen or so models, and if source code is available, which would you recommend I look at? I don’t have time to look at them all, and they give widely different projections based on the spaghetti graphs I’ve seen, so I’d only like to see the one that is considered the best.

  46. Source code is available for many of them. See here for GISS ModelE. Weather is chaotic, so the same climate model will produce different wiggles for different initialisations. They don't pretend to be able to predict weather; 20-30 year periods are what climate is about. There is a very useful article on interpretation here.

    Understanding the real differences between different modelling approaches is what CMIP5 (and its predecessors) is about.

  47. Stealth, the fancy computer models merely fine tune the basic projections that have turned out to be pretty accurate, starting in the 1800s, and the early ones certainly did not involve computer code because computers had not been invented yet. You can try some of those simple models yourself by getting an introductory textbook such as David Archer's Global Warming: Understanding the Forecast, or by taking notes while watching his free online lectures from his class at the University of Chicago.

    Tamino has illustrated a simple climate model you can run without a computer if you have a lot of time, or with a spreadsheet if you don't mind using a computer. He also has a followup that's only a bit more complicated.  

    There are a bunch of other climate models that are simple enough for learning and teaching. One list has been compiled by Steve Easterbrook.

    Code for slightly more complex or narrow models also is freely available.  RealClimate's "Data Sources" page has a handy but short list.  Even the full-blown General Circulation Models (GCMs) have freely available code. RealClimate's Data Sources page also has a handy list of links to those codes.  Steve Easterbrook has a three-year-old list of GCMs with links to whatever info he could find about getting their codes.  See also Tamino's Climate Data Links.
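
    For a flavour of what a "simple enough for a spreadsheet" model looks like, here is a minimal one-box energy-balance sketch (the heat capacity, feedback parameter and forcing ramp are all illustrative choices, not values from any of the sources above):

    # One-box energy balance model: C * dT/dt = F(t) - lambda * T
    # All parameter values are illustrative.
    HEAT_CAPACITY = 8.0e8          # J m^-2 K^-1, roughly a 200 m ocean mixed layer
    FEEDBACK = 1.3                 # W m^-2 K^-1
    SECONDS_PER_YEAR = 3.156e7

    def forcing(year):
        # Linear ramp up to ~3.7 W/m^2 (about one CO2 doubling) over 140 years.
        return 3.7 * min(year, 140) / 140

    temp_anomaly = 0.0  # K
    for year in range(1, 141):
        step = SECONDS_PER_YEAR * (forcing(year) - FEEDBACK * temp_anomaly) / HEAT_CAPACITY
        temp_anomaly += step

    # Lags the instantaneous equilibrium of 3.7 / 1.3 ~ 2.8 K because the ocean
    # takes time to warm.
    print(round(temp_anomaly, 2))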

  48. Stealth - RealClimate has links to both climate data and a number of model codes here, including GCMs and others. 

    "...GCMs are full of fudge factors..."

    Um, no. They are full of physics. There are parameterized approximations of small-scale phenomena and of processes below the model's sampling and scale limits - but those are anchored in physical measurements; they are not "fudge factors" or tuning knobs for giving a specific result. Temperature projections and climate sensitivity are outputs of the models, not inputs, and a great deal of the variation between individual model runs comes from differing initial conditions. That variation is in fact part of the results, indicating to some extent the range of potential weather we might see around climate trends. See scaddenp's link above for more discussion.

    You do, I hope, realize that "fudge factor" claims are essentially accusations of fraud aimed at the model authors? And unsupportable ones, to boot?

  49. Stealth,

    I don’t have time to look at them all, and they give widely different projections based on the spaghetti graphs I’ve seen, so I’d only like to see the one that is considered the best.

    This demonstrates a fundamental misunderstanding of what climate models are doing.

    Weather is chaotic. The timing of even significant events like El Nino/La Nina cannot be predicted years into the future. In order to distinguish between long term climate change and the effects of internal variability, it is essential that repeated runs of the same climate model exhibit different realisations of that internal variability. This allows them to be averaged together so that the random variations cancel out, leaving behind the systematic changes that will dominate in the longer term.

    Even then, there is a risk that the forecasts of individual climate models inadvertently contain systematic biases that won't be cancelled out by this procedure due to the choices that were made in designing them and potentially even software bugs. So different models from completely different groups are also combined, to see what is common in their forecasts and what varies between them. Given their varied nature, if they all predict the same thing to within some tolerance, then we can have a certain amount of confidence that the true answer lies within that range; if they disagree about something, then our confidence is reduced.

    Of course, since they are all embodying known physics to varying degrees, they will all contain systematic biases towards "reality, as we understand it" in that regard.

    Anyway, far from being an indication of failure, those spaghetti graphs are an essential element of determining the reliability of the forecasts and trying to predict the range of possible outcomes.
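
    A minimal sketch of why that averaging works, with a made-up trend and made-up noise level standing in for forced change and internal variability:

    import random

    random.seed(0)
    YEARS, RUNS, TREND = 30, 20, 0.02   # 0.02 K/yr trend; all numbers illustrative

    def one_run():
        # Common underlying trend plus run-specific 'weather' noise.
        return [TREND * yr + random.gauss(0.0, 0.15) for yr in range(YEARS)]

    runs = [one_run() for _ in range(RUNS)]
    ensemble_mean = [sum(year_vals) / RUNS for year_vals in zip(*runs)]

    # Individual runs wiggle by ~0.15 K; the ensemble mean tracks the underlying
    # trend much more closely (the noise shrinks roughly as 1/sqrt(RUNS)).
    print(round(ensemble_mean[-1], 3), round(TREND * (YEARS - 1), 3))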

    Can I get the source to any of the GCM models? I doubt that I can, but it is worth a try.

    Perhaps you should check a little bit harder before beginning to "doubt"?

  50. Stealth - "Can I get the source to any of the GCM models? I doubt that I can..."

    From the first two pages of a Google search on "code for climate model":

    Not too difficult to find...
