







Comments 46601 to 46650:

  1. Food Security - What Security?

    Glenn Tamblyn @ 7 … The most obvious effects of ocean warming are on fish habitat (e.g. coral reefs) and fish physiology – forcing fish to move further north or south of the equator, which is why I gave it a mention. It also leads to accelerated melting of ice, reduced albedo, rising sea levels, loss of permafrost, carbon emissions and a whole host of nasties not considered here.

    Ainsworth et al (2011) point to the effects on biodiversity of ocean warming in their regional study of the NW Pacific. Pratchett et al (2011) also have some interesting stuff on the effects of ocean warming on seaweed and fish habitat and heaps of references to other material.

  2. Food Security - What Security?

    Ivoryorange @ 17 … Good point. Desertification could well result from global warming induced climate change, such as persistent drought. I have not read much on this topic and I am surprised that very little on the subject is available on SkS. Both need rectifying.

    Moderator Response:

    [DB] Desertification is an emergent outcome of a warming world, as is an intensification of the hydrological cycle.  For the former, see this post.  IIRC, Rob Painting has a forthcoming post on the latter.

  3. Food Security - What Security?

    Jonas @ 3 … Thanks for the reference on population which shows a mid-range estimate of 9.3 billion by 2050, not inconsistent with an estimate of 10 billion by 2065. Even if we assume global population does not exceed 9 billion by 2100 (unlikely), can we assume that they will all be adequately fed and housed? Can human ingenuity in the sphere of genetics produce food plants able to cope with a rapidly changing, less predictable and more extreme climate?

    Villabolo @ 5 … That’s a novel idea, canned wheat – very expensive and not very practical, since it would involve transporting grain to a “cannery”. The problem is how to store millions of tonnes of various grains to cover shortages arising from crop losses caused by severe climate events, grain losses due to insect and rodent predation, and transport delays due to infrastructure damage. These already cause significant grain losses, but nowhere near as large as those likely to occur as the effects of global warming increase.

  4. The two epochs of Marcott and the Wheelchair

    scaddenp @58, the temperature increase in the 20th century has been matched by an approx 2.2 C increase in temperatures at the GISP2 site, only 1.5 C of which is captured in the ice core record (Kobashi et al, 2011).  For comparison, the 8.2 Kya trough was approximately 3 C in the nearby Agassiz-Renland core.

    And contrary to KR, it does not show up in the Marcott reconstruction.  Indeed, direct comparison with the data from the SI spreadsheet shows that at the time of the dip, the reconstructed temperature is rising from a low point at 8.31 Kya to a high point at 8.05 Kya.  The 8.2 Kya (technically 8.17 in the Agassiz-Renland data) trough is at best matched by a 0.01 C inflection in that rise.

    If the -1.5 C excursion at 8.25 Kya of the Dome C record was in reality the same event, this is possibly a global temperature excursion greater than 0.9 C.  The 80 year difference in time matches 1 SD of temporal error in the A-R record and 0.5 SD in the Dome C record, so alignment in reality is certainly within the bounds of possibility.  Indeed, it is probable that they, and similar troughs in the 50-odd proxies with data points in that interval, align in at least one realization of the reconstruction; but with that many proxies, alignment across all would be rare, so that such alignments would be washed out by the majority of realizations in which there is no alignment.

  5. Food Security - What Security?

    Steve Easterbrook summarized a recent lecture by Damon Matthews on the consequences of reductions of emissions.  (Hat tip to somebody in some comment here on SkS a while ago who brought this up, but I dunno who.)

  6. The History of Climate Science

    Oh - the graphic!  Snort.  Thanks, jg.

  7. The History of Climate Science

    Thanks, John (and jg?).  While I appreciate very much Spencer Weart's work, it's nice to have a shorter version and a nice timeline graphic.  I'll be using this out in the trenches.

  8. Food Security - What Security?

    Ray, can you be more specific about your reference for "half life of CO2 at around 5-50 years" please? I can't find it. It looks rather like a statement of the "CO2 has a short residence time" myth.

    For actual studies of what will happen to climate under constant emissions, or zero emissions (i.e. what we are committed to already), see Hare and Meinshausen (2006) and Matthews and Weaver (2010).

  9. Global warming stopped in 1998, 1995, 2002, 2007, 2010, ????

    Thanks, Rob! Based on your clues I found it. Somebody should add it to this SkS post, at least in the Further Reading section....

    I need it to respond to a comment on the recent Economist article.

  10. Rob Painting at 05:03 AM on 8 April 2013
    Global warming stopped in 1998, 1995, 2002, 2007, 2010, ????

    Tom. IIRC, a commenter, CTG at Hot Topic NZ, uploaded something similar to what you describe. It had a slider bar so you could make adjustments too. Ask Gareth at Hot Topic.

  11. Global warming stopped in 1998, 1995, 2002, 2007, 2010, ????

    Does anybody know where I can find a graph of atmospheric global temperature with all possible 10-year trend lines drawn on top of it?  I thought I'd seen it on SkS or Tamino's site, but I've Googled my brains out to no avail. 

  12. Food Security - What Security?

    Even if CO2 levels even out or drop, what about the delayed warming "in the pipeline"? I understand that is supposed to be 1 °F in the next 30 years.

    Also, if industries stop consuming energy in large amounts then the sulfur emissions will drop. Those emissions reflect light, so there will be even more warming. I understand that would be over 1 °F.

    We're already at 1.4 globally and more in the Arctic.

  13. The History of Climate Science

    John and jg,

    An excellent summary and superb effort. This, IMHO, is a must read by anyone new to this field, or educators looking for a resource for classes.  The excellent graphics will no doubt be a popular and effective way of distilling the text/history. 

  14. Food Security - What Security?

    Ray,


    I think you are veering off topic, so I'll be brief.

    The greenhouse effect is approximately proportional to CO2 concentration. Stabilise the level of CO2 in the atmosphere and the GHE stabilises too, at the new global temperature. This stabilisation doesn't happen immediately due to thermal inertia (so whenever we do eventually stabilise CO2 levels, there will be a bit more "warming in the pipeline").

    The lower the level we stabilise on, the less disruption to civilisation and the natural world.

    As Glenn states, reverting to pre-industrial CO2 levels (and hence a climate similar to the last few thousand years) will take on the order of 1000 years. At least, we are not aware of any natural process that would rapidly draw CO2 out of the atmosphere.

    I cannot read Moguitar7's mind; my view is broadly in line with Glenn's assertion that a decreasing population would, at best, stabilise CO2 levels. The proviso I mentioned @15 does suggest a mechanism by which population decreases whilst anthropogenic emissions continue.

    Note that for CO2 levels to rise, you only need net CO2 emissions to continue: if the amount of CO2 emitted year-on-year decreases, CO2 levels are still rising. You only stabilise CO2 levels when the anthropogenic contribution is zero (in other words, the CO2 emitted last year doesn't disappear).
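    That last point - that year-on-year emission cuts still mean rising concentrations until net emissions reach zero - can be sketched with a toy budget. The numbers below (starting level, emission rate, 10% annual decline) are illustrative assumptions, not carbon-cycle values:

```python
# Toy budget: even if annual CO2 emissions fall every year, the
# concentration keeps rising as long as net emissions stay positive.
# All numbers are illustrative assumptions, not carbon-cycle values.

ppm = 400.0        # starting concentration (ppm)
emission = 2.0     # net annual addition to the atmosphere (ppm/yr)
decline = 0.9      # emissions shrink by 10% each year

levels = []
for year in range(30):
    ppm += emission        # any positive net emission raises the level
    emission *= decline
    levels.append(ppm)

# Levels rise monotonically even though emissions fell every single year
assert all(later > earlier for earlier, later in zip(levels, levels[1:]))
print(f"after 30 years: {levels[-1]:.1f} ppm")
```

    Concentrations only level off in this sketch when `emission` reaches zero; shrinking emissions merely slow the climb.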

  15. Food Security - What Security?

    Apologies Glenn and Phil for not stating this earlier: Moguitar7 specifies AGW, which as far as I know specifically means anthropogenic global warming. So if the anthropogenic component is diminishing geometrically as Moguitar7 asserts (and which part of the population is diminishing is not specified), what component is causing an exponential rise in AGW? Surely in the face of a rapid decline in population CO2 levels would stabilise, even if not fall. So, as Moguitar7 asserts, what would cause an exponential rise, not just a rise, in AGW? Surely it must be something other than the anthropogenic component.

  16. Food Security - What Security?

    Hi, what about increased desertification for your list of impacts resulting in lower yields?

    Did I read somewhere (a while back) that a warmer world will lead to increased evaporation from soils? If so, then our industrial agriculture, which results in dead soils, is going to lead to significant erosion and desertification, even in developed countries.

    I'm under the impression that organic and biodynamic methods could help buffer this, because these methods produce living soils that can hold more moisture and can better adapt to changing conditions. I've seen studies that show equivalent yields from both methods with fruits and vegetables, but I'm not sure that these methods are sufficient for larger grain and maize crops.

    Matt

  17. Food Security - What Security?

    Glenn and Phil

    If what you say is correct, then stabilising CO2 levels at, say, 400 ppm, which it is suggested could help ameliorate if not avert AGW, makes no sense. According to your arguments it seems that AGW will increase in the presence of a stable CO2 level. The IPCC have given the half-life of CO2 at around 5-50 years, so presumably levels would gradually fall. And Glenn, why would the population of the developed world crash rather than that of the developing world? For a start, the population of the developed world is smaller than that of the developing world, and the developed world has more resources on which to call. Look at the current situation, where the developed world is often asked to provide food aid to the developing world; it certainly isn't the other way round. So your argument seems to fly in the face of reality.

  18. The History of Climate Science

    @ Chriskoz #2 - These are important points, although the piece above is primarily an historical resource. I think this aspect of the carbon cycle deserves a post all of its own. As my SkS colleagues know, I'm a geologist specialising in mineralogy, so I've probably just volunteered myself to do this!

    You are right about the timescales involved with these major weathering cycles. Hereabouts (Mid Wales) there are remnants of deep tropical terrestrial weathering (most was removed by glacial erosion in the Quaternary). Around the old lead-mines of the area, highly evolved secondary lead mineral assemblages occur in considerable amounts in association with remnants of this weathering (it leaves normally slate-grey metasediments bulk-leached to pinkish and buff shades). Post-Quaternary secondary lead mineral assemblages occur in a slate-grey metasediment matrix in areas where the bulk-weathered material is now absent: the mineralisation is typically found in small amounts of non-equilibrium assemblages - perhaps just what might be expected for weathering lasting just ten thousand, as opposed to tens of millions, of years. I'll stop rambling now!!

  19. Food Security - What Security?

    Glenn Tamblyn:

    but what would crash with any crashing of population levels would be the rate of CO2 emissions

    Doesn't this assume that the crash in population levels would happen in the developed world (which produces most of the CO2 emissions)? Surely food security issues are faced by the subsection of the world population that produces practically zero net emissions. Which, of course, makes Ray's point @13 even less correct.

  20. The History of Climate Science

    The article does not mention an important detail: the rock weathering climate "thermostat" is very slow to react. It works on a timescale of at least 100 ky. Up to 500 ky is required for the Urey reaction to fully respond to an impulse forcing such as anthropogenic CO2.

    CO2 dissolves in the oceans much more quickly (on a 100-1000 y timescale), but that carbon still stays within the atmosphere-ocean system and may degas back into the atmosphere. The amount of CO2 that can be dissolved is limited (as the oceans themselves are limited), and Henry's law dictates that it won't be absorbed entirely. The AOCM models quantify that in the case of a proposed carbon cycle disturbance of ~1000 GT, some 10-15% of that carbon must stay as CO2 and wait for rock weathering (100-500 ky) to remove it from the system. That's a virtual eternity, even on the timescale of Homo sapiens as a species. That point should be stressed to pre-bunk ignorant interpretations of this article - "no worries - the Earth's thermostat will take care of the climate".

    It's also worth mentioning that rock weathering is too slow to react significantly to the Milankovitch forcings of 100 ky timescale; therefore we do not hear about "rock weathering influences on glacial cycles" in the literature.

    The setting of the rock weathering thermostat is influenced, on the still longer timescale of My, by plate tectonics. In fact, rock weathering has been used as an explanation of temperature changes in the Cenozoic (64 My). The early part of the Cenozoic (Paleocene-Eocene) was hot because India was running like crazy across the ocean from Antarctica to Asia at 25 mm/y - triggering more volcanic activity with no change in weathering, so more CO2 was produced. When India finally smashed against Tibet, giving rise to the Himalayas and triggering increased rock weathering, more CO2 was absorbed and the climate cooled down to today's glacial cycles. This is very interesting science, although admittedly it is based on far less certain prerequisites (i.e. plate tectonics) than the science of e.g. radiative forcing or Charney sensitivity. However, it is rarely challenged by today's "sceptics" - simply because it's not inconvenient; quite the opposite - it is actually convenient for claiming ignorant nonsense à la "a little warmer ain't bad because it was far warmer a few My ago".

  21. Glenn Tamblyn at 19:02 PM on 7 April 2013
    Food Security - What Security?

    Ray

    I'm not sure I agree with all of Moguitar7's argument, but what would crash with any crashing of population levels would be the rate of CO2 emissions, not CO2 levels. Any decline in CO2 levels requires first a major drop in the rate of CO2 emissions, and then the time needed for the chemistry of the carbon cycle to draw down CO2 levels. A part of that will happen within years to decades due to equilibration with the ocean. After that we are looking at centuries to millennia for other geochemical processes to sequester the remaining CO2.

    If natural feedbacks in the Carbon cycle occur due to higher temperatures - permafrost melt which has already started, destruction of some of the major carbon sinks in the form of rainforests, etc - then the starting point before any slow drawdown begins may be much higher than current CO2 levels.

  22. Food Security - What Security?

    Moguitar7, I can't follow your reasoning. If, as you say, the population "will be crashing geometrically while AGW increases exponentially", what will be driving this exponential increase in AGW? Presumably not CO2, as levels surely will be falling geometrically in line with the geometric crashing of the population.

  23. The History of Climate Science

    Good work! I like the graphics; and the rising CO2 trace is a nice touch.

  24. Making Sense of Sensitivity … and Keeping It in Perspective

    Engineer @113.

    Restating the message of Sphaerica @114 - I would mention that the Stefan-Boltzmann equation yields a cubic relationship, as it is the derivative that is being used:

    ΔT = ΔF / (4σT^3)

    As T is in kelvin, even what would be a big change for Earth's climate results in a small change in the T^3 term - e.g. 255 K ± 5 K would result in a theoretical change of only a few percent in sensitivity.

    The big changes in sensitivity, as described @114, come not from the physics but from the climate system. When temperature change is large, when our planet is pushed towards becoming a 'snowball' or a 'steamed doughnut', that is when sensitivity really starts to change in value. Hansen & Sato 2012 (discussed by SkSci here) show sensitivity more than doubling for such extremes.
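    As a numerical check of the cubic-derivative point above (a sketch using the 255 K effective temperature and the 3.7 W/m2 doubling forcing that appear elsewhere in this thread; this is only the no-feedback Planck response):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_sensitivity(T):
    """No-feedback sensitivity dT/dF = 1 / (4*sigma*T^3), in K per W/m^2."""
    return 1.0 / (4.0 * SIGMA * T**3)

T_eff = 255.0                       # effective emission temperature (K)
k = planck_sensitivity(T_eff)
print(f"k at 255 K: {k:.3f} K per (W/m^2)")
print(f"dT for a 3.7 W/m^2 forcing: {3.7 * k:.2f} K")   # the ~1 K no-feedback response

# How much does the T^3 term move the sensitivity across a 5 K change?
change = planck_sensitivity(255.0) / planck_sensitivity(260.0) - 1.0
print(f"sensitivity changes by ~{100 * change:.0f}% from 255 K to 260 K")
```

    The 5 K shift moves the no-feedback sensitivity by only around 6%, which is the sense in which the physics alone can't produce large swings in sensitivity.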

  25. Food Security - What Security?

    There are 3 billion people getting 60% of their protein from the oceans, which will be depleted between 2035 and 2050.  Aquifer depletion circa 2040 will vastly reduce yields, -83% in affected areas.  Soil salination from river water irrigation will also take out a good chunk, as will soil micronutrient depletion from lack of organics.  Citification will take more land, and so will desertification.  Then high-priced oil and petrochemicals will affect prices, yields, and distribution, while AGW will increase losses to crop failures for a number of reasons as climate fluctuation moves beyond historic ranges.  Adding them all up, we get a realistic figure of being able to feed only between 3 and 4.5 billion just before 2050.  An increasing death rate from a poor world economy will slow down population gain, and by mid-century it will be crashing geometrically while AGW increases exponentially.  Then in 3-500 years it will really get worse: AETM and the finish of the Sixth Great Extinction.  Preventable in the 20th century to very early this century.  With nothing really sufficient being implemented, humanity is probably out of time to stop the juggernaut of ecocide.

  26. The two epochs of Marcott and the Wheelchair

    I think I will have to stop being lazy and dig out the data. 0.9 C is a global average. For individual proxies (e.g. Greenland), the change in temperature at the same location is much higher than 0.9. So when looking at individual proxies, a spike should be based on comparing the 20th-century change in temperature at that location with the proxy. E.g. for Greenland, how common are spikes of 1.6 C on a centennial scale?

  27. Making Sense of Sensitivity … and Keeping It in Perspective

    Engineer - models have to be verified. The physics and the numerical methods are both complex. You can run the models for past conditions of the Earth and get a climate sensitivity from the model. You can also determine the number empirically, just as you illustrated, given ΔF and ΔT. You have confidence that your estimates of climate sensitivity are good if the model-determined sensitivity and the empirically determined sensitivity match reasonably well. For empirical determinations you are assuming a linear relation, but models don't. It just turns out from models that the relationship is close enough to linear for a relatively small change in ΔT. As said in an earlier post, it would not remain that way for very large changes.

    It appears that there is no possibility of a "runaway" greenhouse on Earth (the oceans boil) without a hotter sun, which will happen some time in the deep future. However, in that situation, the change in sensitivity with increasing ΔF becomes seriously non-linear.

  28. Bob Lacatena at 07:55 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    wikipedia.... bleh.  It's good for some things, as an introduction to concepts, but I wouldn't for a minute use it to learn real climate science.

    Stefan-Boltzmann... not an issue for small values of ∆T.  For example, the energy received by the Earth from the sun (approx 239 W/m2) translates to a temperature of 255 K.  Here's a graph of the relationship (temperature at the bottom) for temperatures near those at the surface of the earth.  Notice that it is, for all intents and purposes, linear in that small range.

    "Empirically" means from data, from observations.  Again, follow the links I already gave you and look at how they do it by measuring the response of global temperatures to a major volcanic eruption (effectively reducing solar input by a measurable amount), or by studying the transition from the last glacial as in the example given by wikipedia.

    The fact that it is linear (or near linear) is almost required.  Without a linear relationship you'd too easily get a runaway effect, or a climate so stable that it would not demonstrate the volatility that we see in the history of the Earth's climate.  Another way to look at it: the Earth's climate (normally, naturally) never varies by all that much over short periods of time (where short equals thousands or tens of thousands of years).  There's just not much room for anything but a relationship that is, for all intents and purposes, linear.

    To repeat, while the climate sensitivity is from physical mechanisms, none of these are so simple as to be modeled with very simple mathematics.  The melting of the ice sheets, the browning of the Amazon, natural changes in CO2, etc., etc., are all complex natural processes.  There's just no way to mathematically derive climate sensitivity short of the (clever) variety of methods used, including observations, paleoclimate data, and models.  Again... follow the links, and read up on feedbacks.
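    The near-linearity claim is easy to check numerically: invert F = σT^4 exactly and compare with the tangent-line (linearized) estimate. The 239 W/m2 figure is the one quoted in the comment above; the forcing steps are arbitrary:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temp_for_flux(F):
    """Invert F = sigma * T^4 exactly."""
    return (F / SIGMA) ** 0.25

F0 = 239.0                  # absorbed flux from the comment above, W/m^2
T0 = temp_for_flux(F0)      # ~255 K

for dF in (1.0, 2.0, 4.0):  # arbitrary small forcing steps
    exact = temp_for_flux(F0 + dF) - T0       # full quartic inversion
    linear = dF / (4.0 * SIGMA * T0**3)       # tangent-line estimate
    print(f"dF = {dF:.0f} W/m^2: exact dT = {exact:.4f} K, linear dT = {linear:.4f} K")
```

    Even at 4 W/m2 (roughly a CO2 doubling) the two differ by well under 1%, which is the sense in which the curve is "for all intents and purposes linear" over this range.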

  29. Matt Fitzpatrick at 07:50 AM on 7 April 2013
    Food Security - What Security?

    @villabolo#4

    Sorry, looks like I haven't kept up to date on that story. The bill, in amended form, was passed into law on August 1, 2012. As amended, it no longer forces the state coastal agency to predict sea level rise based only on past trends. Instead, it prevents the state from predicting sea level rise altogether, until July 1, 2016, and requires the state to study the costs and benefits of the sea level rise regulations which, until 2016, it's not allowed to make. Until 2016, local officials can approve coastal developments using any predictions they like.

    Gannon, Patrick (01-Aug-2012). "Sea-level rise bill becomes law." Star-News (Wilmington, NC).

  30. engineer8516 at 07:23 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    @ Glenn and Scaddenp

    Thanks. The equation I'm really curious about though is the one that relates forcing to temp:

    ∆T = k * ∆F, where k is climate sensitivity. Scaddenp said that this is a post-hoc formula. However, at least according to wikipedia, the formula can be/is used to calculate k directly from empirical data, which would suggest (to me at least) that the formula is based on physical principles.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia

    The reason I'm curious where ∆T = k * ∆F came from is because it's a linear relationship. I might be reaching here, but just looking at the Stefan-Boltzmann equation I would have guessed the relationship between ∆T and ∆F would be nonlinear. If ∆T = k * ∆F is just a post-hoc formula, as Scaddenp stated, that would explain a lot; but as wiki states, the formula is used to empirically derive sensitivity, which implies to me a physical foundation for the equation. If it is just a post-hoc formula, why is it valid to use it to directly derive sensitivity empirically? Sorry for the long post.

    @ Sphaerica I'll try to dig through the links you provided thanks.
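    For what it's worth, the arithmetic in the wikipedia passage quoted above is just this (numbers taken from the quote; reproducing it says nothing about whether the linear model is physically justified):

```python
# Reproduce the empirical-sensitivity arithmetic from the quoted passage.
dT_glacial = 5.0     # °C temperature change from ice cores (quoted value)
dF_glacial = 7.1     # W/m^2 change in solar forcing (quoted value)

k = dT_glacial / dF_glacial      # empirical sensitivity parameter, °C per W/m^2

dF_2xCO2 = 4.0                   # W/m^2 for doubled CO2 (quoted value)
dT_2xCO2 = k * dF_2xCO2

print(f"k = {k:.2f} °C per (W/m^2)")
print(f"predicted warming for doubled CO2: {dT_2xCO2:.1f} °C")   # ~3 °C as quoted
```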

  31. Trillions of Dollars are Pumped into our Fossil Fuel Addiction Every Year

    gaillardia - how can you claim "my income and everyone else's won't change (assuming a 100% rebate)"? The 100% rebate idea means you get back the tax money. You can get more than your fair share if you use less carbon than average. That gives companies a serious incentive to build low-carbon infrastructure.

    The idea that tax money is used by government to improve infrastructure is unfortunately anathema to the right, who do not trust government (with some justification) to do this efficiently.

  32. Trillions of Dollars are Pumped into our Fossil Fuel Addiction Every Year

    That depends on the type of solution. E.g. Jim Hansen's "charge at the source and dividend" scheme bypasses any carbon trading and distributes money back to your (citizen taxpayer) pocket with minimal administrative overhead. And you could use that extra money for e.g. buying solar panels and investing in other renewable energy sources. Would you not like it?


    No, I wouldn't, because I won't have any extra money.  The energy companies will jack up their prices to compensate for the tax they pay, my income and everyone else's won't change (assuming a 100% rebate), and the net result is merely to cycle more money through energy companies' bank accounts.


    A better plan is to use all or most of that tax money to build low energy communities, mass transit, and renewable energy infrastructure, as a society, not just as individuals.  Only individuals who are already rich would be able to afford a personal transition to renewables.


    Energy commodity speculation should be outlawed.  That will bring down prices some.

  33. Food Security - What Security?

    Decades ago, I got past my denial about global warming.   Just a quick review of the science is all it took.

    But now it's avoidance.   I really don't want to examine this kind of problem, I see that it is inevitable and UN-avoidable - yet, like so many others who get the science - we really don't want to face consequences.  Perhaps that's why we are drawn in to arguing about scientific methodology. 

    Thanks for this article.  

  34. The two epochs of Marcott and the Wheelchair

    Tom Curtis - "That is the reason Marcott et al compare modern temperatures to the PDF of temperatures in the realizations rather than the mean."

    Comparing PDFs is indeed the appropriate method - and comparing the means of those PDFs is part of that analysis.

    It may be, looking at his results, that Tamino failed to apply the averaging of sampling resolutions when inserting his spike into the proxy data - but given the median 120 year sampling, that would at most reduce such 200-year spikes by a factor of ~2; still large enough to be visible in the full realization.

    WRT Monte Carlo analysis - the PDF of the full MC perturbed realization space in the presence of noise must include the raw data, and beyond, as at least some of the realizations will shift uncorrelated variations under such a spike. The majority will blur a short spike by shifting proxies so they do not coincide, but will still pull up the mean in that area. That's true even given dating errors, as some of the perturbations will undo such errors. In the 1000-realization set (which should be a good exploration of the MC space) as shown by the 'spaghetti graph', the outer bounds of those realizations do not include any such spikes.

    Now, it may be that 1000 realizations is not a sufficient exploration of the MC set (unlikely, though), or that the combination of proxy smearing and MC low-pass filtering might make a small spike difficult to distinguish. I would disagree, but I haven't gone through the full exercise myself.

    However - Isn't this all just a red herring? One initially raised by 'skeptics' in an attempt to dismiss the Marcott paper?

    • Events like the 8.2 Kya cooling show up quite strongly in multiple proxies (which is how we know of it); it even appears to be visible in the Marcott reconstruction as perhaps a 0.1 C global cooling.
    • If a 0.9 C, 25×10^22 joule warming spike occurred in the Holocene we should have proxy evidence for it - and we don't.
    • There is no plausible physical mechanism for such an up/down spike.
    • There is a known physical mechanism for current warming (which won't be a short spike, for that matter).
    • There is therefore no support for the many 'skeptic' claims that "current warming might be natural" and out of our control.

    The Marcott et al paper is very interesting, it reinforces what we are already fairly certain of (that there is a lack of evidence for large spikes in temperature over the Holocene, that there is no physical basis for such spikes), and I expect their methods will be extended/improved by further work. But arguing about the potential existence of mythic and physically impossible spikes is an irrelevant distraction from the issues of current warming. 
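    The smearing argument above - dating perturbations shift a spike out of alignment across realizations, pulling the mean down without erasing it - can be illustrated with a toy Monte Carlo. The spike height (0.9 C), duration (200 yr) and dating error (sd 150 yr) are illustrative stand-ins, not Marcott et al's actual proxy parameters:

```python
import random

random.seed(1)

# Toy Monte Carlo: a 0.9 C spike lasting 200 years; each realization
# perturbs the spike's date by a Gaussian error. Illustrative numbers only.
SPIKE, WIDTH, DATE_SD, N = 0.9, 200.0, 150.0, 2000
years = range(-600, 601, 20)            # time axis, years relative to the event

def realization():
    shift = random.gauss(0.0, DATE_SD)  # dating error displaces the spike
    return [SPIKE if abs(t - shift) < WIDTH / 2 else 0.0 for t in years]

# Average over realizations: misaligned spikes blur into a broad, lower bump.
mean = [sum(vals) / N for vals in zip(*(realization() for _ in range(N)))]
peak = max(mean)
print(f"peak of the mean: {peak:.2f} C (original spike: {SPIKE} C)")
```

    In this sketch the peak of the mean comes out at roughly half the original spike height: blurred and attenuated, but not invisible - which is the crux of the argument above.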

  35. Food Security - What Security?

    Don't worry, WWIII is around the corner......it will solve your problems.

  36. Bob Lacatena at 00:21 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    After a brief exchange with Dr. Ray Pierrehumbert at the University of Chicago, he directed me to his 2007 post at Real Climate titled What Ångström didn’t know, wherein he basically presents the derivation in plain English (no math).  To supplement that, I'd also suggest doing some research on optical thickness and the Beer Lambert Law.  If you have the chops for it, the Science of Doom website has some very good explanations (warning: math!) of a lot of things.

  37. Bob Lacatena at 23:04 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    I'd just like to add that Neal King has (offline) pointed out that this was previously discussed on this same thread, at comments 73, 76 and 78.

    Offline, he also pointed out that:

    ...the explanation from Pierrehumbert is that the radiative forcing is due to the change in flux when the critical point (at which the optical depth, as measured from outer space downward, reaches the value 1: Photons emitted upward from this point will escape, so this defines an effective photosphere for the given frequency.) changes its altitude.

    This would greatly simplify the calculation problem.

    I may pursue this further myself, if I can find the time... it's a very interesting question.  In particular, it's about time I plunked down the cash on Ray Pierrehumbert's text book Principles of Planetary Climate, and perhaps John Houghton's The Physics of Atmospheres.

  38. Making Sense of Sensitivity … and Keeping It in Perspective

    Glenn has answered a lot of your questions, but the confusion is about how to use it. Once you know (or have estimated) a climate sensitivity, you can use it to calculate ΔT directly. However, you need a full-blown GCM to derive the climate sensitivity in the first place. This is the reason behind the debate on CS. Estimates can be made empirically from paleoclimate, or more commonly from the models, but you have a range of values coming from those, with most clustering between 2.5 and 3. The key to CS is the feedbacks. By itself, 3.7 W/m2 of TOA forcing gives you 1.2 C of temperature rise. However, with a temperature rise you immediately have feedback from increased water vapour. In the slightly longer term you get feedback from albedo (particularly change in ice), and on longer timescales you have temperature-induced increases in CO2 and CH4 from a variety of sources. Add into the equation change in cloudiness with temperature (and whether this is low-level cloud or high-level cloud) and you start to get a feel for the complexity of GCMs.
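    The feedback arithmetic in that paragraph is often bookkept with the linear amplification formula ΔT = ΔT0 / (1 - f), where f is the combined feedback fraction. The f values below are round illustrative numbers (the comment gives only the 1.2 C no-feedback figure):

```python
dT0 = 1.2   # no-feedback response to 3.7 W/m^2, as stated above (°C)

def amplified(dT0, f):
    """Linear feedback amplification: dT = dT0 / (1 - f), valid for f < 1."""
    assert f < 1.0, "f >= 1 would mean a runaway"
    return dT0 / (1.0 - f)

# Illustrative combined feedback fractions (assumed for this sketch):
for f in (0.0, 0.3, 0.5, 0.6):
    print(f"f = {f:.1f}: dT = {amplified(dT0, f):.1f} °C")
```

    A combined f of roughly 0.5-0.6 is what turns the 1.2 °C no-feedback response into the 2.5-3 °C range where the estimates cluster.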

  39. Food Security - What Security?

    Jonas@2, thanks for the links. It awakens memories. I think Forrester's world dynamics model had an unusual first "public" appearance. So far as I know, the results of his "World 1" simulation model first appeared in Playboy magazine. Dennis Meadows presented a preliminary version of the "limits to growth" model at our institute. In 1971, I was invited to speak to the Ann Arbor chapter of the Sierra Club on these modeling efforts. I focused mostly on the Forrester model, with which I was intimately familiar, because the Meadows work had not yet been completed.

    Donella H. Meadows article "System dynamics meets the press" might have some useful suggestions for those interested in improving the communication of climate change and global warming issues to the public.

  40. Glenn Tamblyn at 18:00 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer

    There are two parts to this. Calculating the change in Radiative Forcing at the Top of Atmosphere (TOA) due to a change in GH gases etc - essentially the change in the Earth's energy balance. Then calculating  the temperature change expected to result as a consequence of that.

    The standard formula used for the radiative imbalance change is

    Delta F = 5.35 ln(C/C0), where C0 is your reference CO2 concentration and C is the concentration you are comparing it to. The usual C0 chosen is the pre-industrial level of around 275 ppm. This formula is from Myhre et al 1998 and was included in the IPCC's Third Assessment Report (TAR).

    So a doubling is 5.35 ln(2), or 3.7 W/m2.
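    That formula is simple enough to evaluate directly; a minimal sketch (the 400 ppm example is my own addition):

    ```python
    import math

    def co2_forcing(c_ppm, c0_ppm=275.0):
        """Radiative forcing (W/m^2) from the Myhre et al 1998 fit: dF = 5.35 ln(C/C0)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    co2_forcing(550, 275)  # a doubling: 5.35 * ln(2), about 3.7 W/m^2
    co2_forcing(400)       # about 2.0 W/m^2 at 400 ppm vs pre-industrial 275 ppm
    ```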

    This formula is in turn a regression curve fit to the results from a number of Radiative Transfer Codes. These are programs that perform numerical solutions of the Equation of Radiative Transfer. Essentially they divide the atmosphere up into many layers and calculate the radiation fluxes between each layer, taking into account the properties of each layer - temperature, pressure, gas composition etc - and the transmission, absorption, emission and scattering of EM radiation in each layer, based on detailed spectroscopic data for each of the gases present from databases such as HITRAN. They perform this calculation, summing what is happening to each layer, either for each single spectral line - a large computational task - or by dividing the spectrum up into small bands. The accuracy of these programs has been well demonstrated since the early 1970s.

    It is important to understand that these are not climate models. They perform a single, although large, calculation of the radiative state of a column of air at one instant, based on the physical properties of that air column at that instant.
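    To make the layer-summing idea concrete, here is a deliberately crude toy gray-atmosphere sketch. It is my own illustration, not one of the real radiative transfer codes described above: real codes work line by line (or band by band) from spectroscopic data, not with a single gray emissivity:

    ```python
    # Toy gray-atmosphere sketch of the layer-summing idea (NOT a real radiative
    # transfer code; real codes use per-line spectroscopy from e.g. HITRAN).
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def toy_olr(t_surface, layer_temps, layer_emissivity):
        """Outgoing longwave flux: surface emission attenuated by every layer,
        plus each layer's own emission attenuated by the layers above it."""
        eps = layer_emissivity
        trans = 1.0 - eps                     # per-layer transmittance
        olr = SIGMA * t_surface**4 * trans**len(layer_temps)
        for i, t in enumerate(layer_temps):   # layer 0 = lowest layer
            layers_above = len(layer_temps) - 1 - i
            olr += eps * SIGMA * t**4 * trans**layers_above
        return olr

    # Colder emitting layers aloft reduce OLR relative to the bare surface:
    toy_olr(288.0, [260.0, 230.0], 0.8)   # well below SIGMA * 288**4 (~390 W/m^2)
    ```

    Even this toy captures the qualitative point: making layers more absorbing (raising the emissivity) shifts emission to colder, higher layers and reduces the outgoing flux for a fixed surface temperature.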

    The second stage of the problem is to work out how temperatures change based on the radiative change. Back-of-an-envelope calculations can get you into the ballpark, which is what people did up until the 1960s. The very first, extremely simple climate models assumed a CS value. Current climate models, which are far from simple, actually derive the CS as a result of the model. The radiative changes are fed into the model, along with lots of known physics - conservation of energy, mass and momentum; thermodynamics; cloud physics; meteorology; ice behaviour; atmospheric chemistry; carbon cycle chemistry; ocean models etc. These are then left to run, to see how the system evolves under the calculations. The result then, among other things, indicates the CS value.

    Climate models are not the only way to estimate CS, however. The Wiki entry you cite gives another example, from a class of estimates that are arguably better than the climate models - the behaviour of past climates. To determine CS you don't need just a CO2 change. Anything that produces a forcing change - volcanic activity, dust, changes in solar output - will provide data points to amass a broad estimate of what CS actually is.

    One trap to watch out for is that CS isn't always expressed the same way. Usually it is expressed as 'deg C per doubling of CO2', but sometimes in the literature it is expressed as 'deg C per W/m2 of forcing'.

    So what we are looking for is multiple evidence streams indicating similar values for CS. And broadly they do. Although these estimates often have long tails of possible outlier values, the central point of the probability distribution of the results from most sources - the majority of them derived from observations of present and past climate - sits fairly strongly at around the 3-3.5 range.

    Hope this helps.

  41. engineer8516 at 16:38 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    thanks for the replies and links.

    @scaddenp

    I'm not sure if I'm understanding you correctly...so the climate models estimate the temp increase from double CO2. Then taking the est. temp increase and dividing it by 3.7 gives climate sensitivity. So that equation is just the equation for slope i.e. rise over run, and it isn't directly used to calculate climate sensitivity from historical data. The reason I'm confused is because I think the wikipedia article on climate sensitivity says that the equation can be used directly, which would imply that there is a physical foundation for it.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia
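    For what it's worth, the arithmetic in that quoted passage can be checked directly (values copied from the quote; the rounding from ~2.8 to 3 °C is in the original):

    ```python
    # The empirical sensitivity estimate quoted above, as plain arithmetic.
    delta_t_lgm = 5.0   # C, glacial-interglacial temperature change (from the quote)
    delta_f_lgm = 7.1   # W/m^2, corresponding change in forcing (from the quote)

    sensitivity = delta_t_lgm / delta_f_lgm  # ~0.7 C per W/m^2
    predicted = sensitivity * 4.0            # ~2.8 C for a 4 W/m^2 CO2 doubling
    ```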

  42. The two epochs of Marcott and the Wheelchair

    scaddenp @55, are there large spikes in individual proxies?  Yes, and there are also large spikes in multiple individual proxies simultaneously, to within temporal error (see my 53).

    KR @54:

    1)  A 0.9 C spike is approximately a 2 sigma spike, not a fifty sigma spike.  That is, the 300 year average of an interval containing that spike will be 0.3 C (approx 2 SD) above the mean.  If you want to argue it is more than that, you actually have to report the frequency of such spikes in the unpruned Monte Carlo realizations.  Marcott et al did not report it (although I wish they had), and nobody else has reproduced and reported it, so we just don't know.

    2)  We don't see any density function excursions in the Monte Carlo realizations because:

    a)  Marcott et al did not plot the PDF of centennial trends in the realizations (or their absolute values); and

    b) In the spaghetti graph you cannot see enough to track individual realizations over their length to determine their properties.

    Perhaps you are forgetting that the realizations are reconstructions with all their flaws, including the low resolution in time.  That means a Tamino style spike in the realization will be equivalent in magnitude to his unperturbed reconstruction, not the full 0.9 C spike.   As such, such a spike starting from the mean temperature for an interval would not even rise sufficiently above other realizations to be visible in the spaghetti graph.

    3)  Pruning the realizations is a statistical blunder if you are plotting the PDF for any property.  It is not a blunder, or wrong in any way if you want to see if a statistically defined subset of realizations have a particular property.

    4)  If I throw two fair dice, the maximum likelihood result for the sum of the dice is seven.  That does not mean I will get a seven every time over one hundred throws.  In fact, over one hundred throws I will on average get a 7 only about 17 times (16.66%).  Also, with significant probability, I will get a 12 about 3 times.  As it happens, the probability of a realization lying on the mean of the realizations at any time is about 5%. Ergo, about 95% of the time any particular realization will not lie on the mean, but will be above it or below it.  Most realizations will lie on the mean more frequently than on any other temperature, but not on any temperature relative to the mean very often at all.
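    The dice arithmetic in point 4 can be checked exactly with a small illustrative script (my own, obviously not from Marcott et al):

    ```python
    from fractions import Fraction

    # Exact probabilities for the sum of two fair dice, matching the
    # dice illustration above.
    counts = {}
    for a in range(1, 7):
        for b in range(1, 7):
            counts[a + b] = counts.get(a + b, 0) + 1

    p7 = Fraction(counts[7], 36)    # 1/6, ~16.7%: expect ~17 sevens in 100 throws
    p12 = Fraction(counts[12], 36)  # 1/36: expect ~3 twelves in 100 throws
    ```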

    That is the reason Marcott et al compare modern temperatures to the PDF of temperatures in the realizations rather than to the mean.  Directly comparing with the mean is, unfortunately, tempting, but wrong.  So also is treating the mean as the course of temperature over the Holocene.  Rather, it is a path that statistically constrains what Holocene temperatures could have been, given what we know.
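    As a rough check of point 1 above, here is a sketch of how a short 0.9 C spike averages down over a 300-year window. The spike shape (a symmetric triangle with a 200-year base) is my assumption for illustration, not anything taken from Marcott et al or Tamino:

    ```python
    # Hedged sketch: a brief 0.9 C spike contributes only ~0.3 C to a
    # 300-year average. The triangular spike shape is an assumption.
    def windowed_mean(spike_peak=0.9, spike_base_years=200, window_years=300):
        """Mean elevation above background of a window containing the whole spike."""
        area = 0.5 * spike_base_years * spike_peak  # triangle area, C * years
        return area / window_years

    windowed_mean()  # 0.3 C above the background mean
    ```

    Different assumed spike shapes change the number somewhat, but any spike much shorter than the window is diluted in the same way.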

  43. The Fool's Gold of Current Climate

    The Serendipity graphs above are also on the hopeful side, because they consider the A-O CO2 exchange only. They do not consider the earth system response. So far, what is known about it suggests we can expect only positive feedbacks: methane release from clathrates and permafrost, decreased albedo from melting arctic ice, and a warmer ocean degassing CO2 because warm water can hold less of it.

    The only problem is that the magnitudes of those feedbacks are unknown (maybe with the exception of ice albedo). I expect those figures (approximate, and already outdated; we need to update the starting level to 400 ppm) to become more pessimistic (more warming in the pipeline) once those positive feedbacks are quantifiable.

  44. Making Sense of Sensitivity … and Keeping It in Perspective

    Just remember that formula is post hoc. You get sensitivity out of a climate model run by solving for k from deltaT/deltaF, which then gives you a useful way to estimate temperature for a given forcing. However, the GCMs do not derive temperature from that formula internally.
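    A sketch of that post-hoc usage (numbers illustrative, not from any particular model run):

    ```python
    # Post-hoc sensitivity: solve k = deltaT / deltaF from a model run's
    # output, then reuse k for quick estimates. Illustrative numbers only.
    delta_t_model = 3.0  # C, equilibrium warming in a hypothetical doubling run
    delta_f = 3.7        # W/m^2, forcing applied in that run

    k = delta_t_model / delta_f  # ~0.81 C per W/m^2
    estimate = k * 1.8           # quick estimate for some other, smaller forcing
    ```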

  45. Bob Lacatena at 14:28 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer --

    Spencer Weart has this reference to the first such calculation in 1967 by Manabe and Wetherald.

    You might want to look over this timeline.

    I'd also very strongly suggest reading Spencer Weart's The Discovery of Global Warming.  It's interesting reading, and it adds a lot of depth to both an understanding of the science and how old and broadly based climate science is.

  46. Bob Lacatena at 14:22 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    [engineer -- Your other post just went onto the next page.]

  47. Bob Lacatena at 14:21 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    ∆T = k log2(CO2final/CO2initial)

    Where k is the climate sensitivity in degrees C per doubling of CO2.
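    For concreteness, a minimal sketch of that formula, assuming an illustrative k of 3 C per doubling (the value of k is the whole debate, as discussed in this thread):

    ```python
    import math

    def delta_t(co2_final, co2_initial, k=3.0):
        """dT = k * log2(C_final / C_initial); k in C per doubling (assumed ~3 here)."""
        return k * math.log2(co2_final / co2_initial)

    delta_t(560, 280)  # exactly one doubling, so dT = k = 3.0 C
    delta_t(400, 280)  # ~1.5 C at 400 ppm vs pre-industrial, for k = 3
    ```

    Note the logarithm means each successive doubling adds the same increment, which is why sensitivity is conventionally quoted per doubling rather than per ppm.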

    I myself have never found the derivation for that, either. We at SkS should probably make a concerted effort to find it, as it would be well worth looking at and referencing.

    It may have arisen primarily from experimental observations, or else through "experimentation" using radiative transfer computations such as MODTRAN (a band model developed by the US Air Force, one of the pioneers in this stuff, due to their interest in making infrared missiles work properly in the atmosphere).  If it was determined through physical principles, it would need to take into account the varying density of the atmosphere (with altitude), as well as the resulting variations in IR absorption and emission as balanced against the number of collisions per second with non-GHG molecules like O2 and N2 (the number of collisions is affected by both density and temperature, i.e. the average velocity of each molecule).  Then there are other complications, such as bandwidth overlaps with other greenhouse gases (like H2O) and broadening of the absorption spectrum (pressure broadening and Doppler broadening).

    All in all, it's pretty complicated.

    I'll ask and see what people can turn up.

  48. engineer8516 at 14:17 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    I read through the link. The formula I was referring to was dT = climate sensitivity * dF.

    Hopefully this isn't a double post. I'm not sure what happened to my other one.

  49. Bob Lacatena at 14:12 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    You might also want to look at this page, courtesy of Barton Paul Levenson.  I don't think it's been updated since 2007, so it lacks a good 5 years worth of further research, but it gives you some idea of the breadth of the work that's been done in the area, and how much the end results give basically the same answer.

    [Be wary of any study that gives too high or too low a climate sensitivity.  Like anything else, the outcome depends on underlying assumptions, and not all papers that are published withstand scrutiny forever.  In fact many are quickly refuted.  Peer review is only the first hurdle.  A good example is Schmittner et al. (2011), which found a lower climate sensitivity than many other studies, but also assumed a lower temperature change from the last glacial to the current interglacial -- a lower temperature change obviously will yield a lower sensitivity, so the question shifts to establishing the actual temperature change in order to arrive at the correct sensitivity... as well as recognizing that this was the sensitivity of the planet exiting a glacial phase.]

  50. engineer8516 at 14:10 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    I looked at the link. I was referring to the formula dT = climate sensitivity * dF.

© Copyright 2024 John Cook