



Comments 46501 to 46550:

  1. Making Sense of Sensitivity … and Keeping It in Perspective

    Engineer @113.

    Restating the message of Sphaerica @114 - I would mention that the Stefan-Boltzmann equation yields a cubic relationship here because it is the derivative that is being used:

    ΔT = ΔF / (4σT^3)


    As T is in Kelvin, even what would be a big change for Earth's climate results in a small theoretical change in the T^3 term - e.g. 255 K +/- 5 K would result in a theoretical change in sensitivity of only about 5-6%.

    The big changes in sensitivity, as described @114, come not from the physics but from the climate system. When temperature change is large, when our planet is pushed towards becoming a 'snowball' or a 'steamed doughnut,' that is when sensitivity really starts to change in value. Hansen & Sato 2012 (discussed by SkS here) show sensitivity more than doubling for such extremes.

  2. Food Security - What Security?

    There are 3 billion people getting 60% of their protein from the oceans, which will be depleted between 2035 and 2050.  Aquifer depletion circa 2040 will vastly reduce yields, -83% in affected areas.  Soil salination from river water irrigation will also take out a good chunk, as will soil micronutrient depletion from lack of organics.  Citification will take more land, and so will desertification.  Then high-priced oil and petrochemicals will affect prices, yields, and distribution, while AGW will increase losses to crop failures for a number of reasons as climate fluctuations move beyond the historic range.  Adding them all up, we get a realistic figure of being able to feed only between 3 and 4.5 billion just before 2050.  An increasing death rate from a poor world economy will slow down population gain, and by mid-century it will be crashing geometrically while AGW increases exponentially.  Then in 300-500 years it will really get worse.  AETM and the finish of the Sixth Great Extinction.   Preventable in the 20th century to very early this century.  With nothing really sufficient being implemented, humanity is probably out of time to stop the Juggernaut of Ecocide.

  3. The two epochs of Marcott and the Wheelchair

    I think I will have to stop being lazy and dig out the data. 0.9 C is a global average. For individual proxies (e.g. Greenland), the change in temperature at the same location is much higher than 0.9 C. So when looking at individual proxies, a spike should be based on comparing the 20th-century change in temperature at that location with the proxy. E.g. for Greenland, how common are spikes of 1.6 C on a centennial scale?

  4. Making Sense of Sensitivity … and Keeping It in Perspective

    Engineer - models have to be verified. The physics and the numerical methods are both complex. You can run the models for past conditions of the Earth and get a climate sensitivity from the model. You can also determine the number empirically, just as you illustrated, given deltaF and deltaT. You have confidence that your estimates of climate sensitivity are good if both the model-determined sensitivity and the empirically-determined sensitivity match reasonably well. For empirical determinations, you are assuming a linear relation, but models don't. It just turns out from the models that the relationship is close enough to linear for a relatively small change in deltaT. As said in an earlier post, it would not remain that way for very large changes.

    It appears that there is no possibility of a "runaway" greenhouse on Earth (the oceans boil) without a hotter sun, which will happen some time in the deep future. However, in that situation, the change in sensitivity with increasing deltaF becomes seriously non-linear.

  5. Bob Lacatena at 07:55 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    wikipedia.... bleh.  It's good for some things, as an introduction to concepts, but I wouldn't for a minute use it to learn real climate science.

    Stefan-Boltzmann... not for small values of ∆T.  For example, the energy received by the Earth from the sun (approx 239 W/m2) translates to a temperature of 255 K.  Here's a graph of the relationship (temperature at the bottom) for temperatures near those at the surface of the Earth.  Notice that it is, for all intents and purposes, linear in that small range.
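    A minimal Python sketch of that near-linearity, using the approximate 239 W/m2 and 255 K figures above and the derivative form from comment 1 (the Stefan-Boltzmann constant is the standard value; everything else is illustrative):

```python
# Sketch: how linear is the Stefan-Boltzmann relation near Earth's effective temperature?
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature(flux):
    # Invert F = sigma * T^4
    return (flux / SIGMA) ** 0.25

F0 = 239.0                               # approx. absorbed solar flux, W/m^2
T0 = temperature(F0)                     # ~255 K
slope = 1.0 / (4.0 * SIGMA * T0 ** 3)    # dT/dF, the derivative form from comment 1

for dF in (-4.0, -2.0, 2.0, 4.0):
    exact = temperature(F0 + dF) - T0
    print(f"dF = {dF:+.0f} W/m^2: exact dT = {exact:+.4f} K, linearised dT = {slope * dF:+.4f} K")
```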

    "Empirically" means from data, from observations.  Again, follow the links I already gave you and look at how they do it by measuring the response of global temperatures to a major volcanic eruption (effectively reducing solar input by a measurable amount), or by studying the transition from the last glacial as in the example given by wikipedia.

    The fact that it is linear (or near linear) is almost required.  Without a linear relationship you'd too easily get a runaway effect, or a climate so stable that it would not demonstrate the volatility that we see in climate in the history of the earth.  Another way to look at it is due to the fact that the Earth's climate (normally, naturally) never varies by all that much over short periods of time (where short equals thousands or tens of thousands of years).  There's just not much room for anything but something that is for all intents and purposes linear.

    To repeat, while the climate sensitivity is from physical mechanisms, none of these are so simple as to be modeled with very simple mathematics.  The melting of the ice sheets, the browning of the Amazon, natural changes in CO2, etc., etc., are all complex natural processes.  There's just no way to mathematically derive climate sensitivity short of the (clever) variety of methods used, including observations, paleoclimate data, and models.  Again... follow the links, and read up on feedbacks.

  6. Matt Fitzpatrick at 07:50 AM on 7 April 2013
    Food Security - What Security?

    @villabolo#4

    Sorry, looks like I haven't kept up to date on that story. The bill, in amended form, was passed into law on August 1, 2012. As amended, it no longer forces the state coastal agency to predict sea level rise based only on past trends. Instead, it prevents the state from predicting sea level rise altogether, until July 1, 2016, and requires the state to study the costs and benefits of the sea level rise regulations which, until 2016, it's not allowed to make. Until 2016, local officials can approve coastal developments using any predictions they like.

    Gannon, Patrick (01-Aug-2012). "Sea-level rise bill becomes law." Star-News (Wilmington, NC).

  7. engineer8516 at 07:23 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    @ Glenn and Scaddenp

    Thanks. The equation I'm really curious about though is the one that relates forcing to temp:

    ∆T = k * ∆F, where k is climate sensitivity. Scaddenp said that this is a post-hoc formula. However, at least according to Wikipedia, the formula can be/is used to calculate k directly from empirical data, which would suggest (to me at least) that the formula is based on physical principles.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia

    The reason I'm curious where ∆T = k * ∆F came from is because it's a linear relationship. I might be reaching here, but just looking at the Stefan-Boltzmann equation I would have guessed the relationship between ∆T and ∆F would be nonlinear. If ∆T = k * ∆F is just a post-hoc formula as Scaddenp stated, that would explain a lot, but as the wiki states, the formula is used to empirically derive sensitivity, which implies to me a physical foundation for the equation. If it is just a post-hoc formula, why is it valid to use it to directly derive sensitivity empirically? Sorry for the long post.
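    For reference, the arithmetic in that quoted passage is just the linear relation applied twice; a minimal sketch using the quoted (illustrative) numbers:

```python
# Worked version of the quoted arithmetic: derive k from one (dT, dF) pair, then reuse it.
dT_glacial = 5.0          # deg C change inferred from ice cores (quoted value)
dF_glacial = 7.1          # W/m^2 change in forcing (quoted value)
k = dT_glacial / dF_glacial            # empirical sensitivity, ~0.7 C per (W/m^2)

dF_2xCO2 = 4.0            # W/m^2 forcing for doubled CO2 (value used in the quote)
print(f"k = {k:.2f} C per W/m^2")
print(f"predicted warming for 2xCO2 = {k * dF_2xCO2:.1f} C  (the quote rounds this to 3 C)")
```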

    @ Sphaerica I'll try to dig through the links you provided thanks.

  8. Trillions of Dollars are Pumped into our Fossil Fuel Addiction Every Year

    gaillardia - how can you claim "my income and everyone else's won't change (assuming a 100% rebate)"?  The 100% rebate idea means you get back the tax money. You can get more than your fair share if you use less carbon than average. That gives companies a serious incentive to build low-carbon infrastructure.

    The idea that tax money is used by government to improve infrastructure is unfortunately anathema to the right, who do not trust government (with some justification) to do this efficiently.

  9. Trillions of Dollars are Pumped into our Fossil Fuel Addiction Every Year

    That depends on the type of solution. E.g. Jim Hansen's "charge at the source and dividend" scheme bypasses any carbon trading and distributes money back to your (citizen taxpayer) pocket with minimal administrative overhead. And you could use that extra money for e.g. buying solar panels and investing in other renewable energy sources. Would you not like it?


    No, I wouldn't, because I won't have any extra money.  The energy companies will jack their prices to compensate for the tax they pay, my income and everyone else's won't change (assuming a 100% rebate), and the net result is merely to cycle more money through energy companies' bank accounts.


    A better plan is to use all or most of that tax money to build low energy communities, mass transit, and renewable energy infrastructure, as a society, not just as individuals.  Only individuals who are already rich would be able to afford a personal transition to renewables.


    Energy commodity speculation should be outlawed.  That will bring down prices some.

  10. Food Security - What Security?

    Decades ago, I got past my denial about global warming.   Just a quick review of the science is all it took.

    But now it's avoidance.   I really don't want to examine this kind of problem; I see that it is inevitable and UN-avoidable - yet, like so many others who get the science, we really don't want to face the consequences.  Perhaps that's why we are drawn into arguing about scientific methodology. 

    Thanks for this article.  

  11. The two epochs of Marcott and the Wheelchair

    Tom Curtis - "That is the reason Marcott et al compare modern temperatures to the PDF of temperatures in the realizations rather than the mean."

    Comparing PDF's is indeed the appropriate method - and comparing the means of those PDF's is part of that analysis.  

    It may be, looking at his results, that Tamino failed to apply the averaging of sampling resolutions when inserting his spike into the proxy data - but given the median 120 year sampling, that would at most reduce such 200-year spikes by a factor of ~2; still large enough to be visible in the full realization.
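    A toy numerical illustration of that attenuation, assuming a triangular 0.9 C spike 200 years wide and a 120-year averaging window (illustrative numbers only, not Marcott et al.'s actual processing):

```python
import numpy as np

# Toy check: how much does 120-year averaging damp a 200-year-wide, 0.9 C spike?
years = np.arange(1200)
spike = 0.9 * np.clip(1.0 - np.abs(years - 600) / 100.0, 0.0, None)  # triangular spike

window = 120
running = np.convolve(spike, np.ones(window) / window, mode="valid")  # best-case alignment
blocks = spike.reshape(-1, window).mean(axis=1)                       # fixed sampling bins

print("peak at annual resolution    :", spike.max())                       # 0.9 C
print("peak of 120-yr running mean  :", round(float(running.max()), 2))    # ~0.6 C, damped but visible
print("peak with fixed 120-yr blocks:", round(float(blocks.max()), 2))     # ~0.4 C, roughly the factor ~2 above
```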

    WRT Monte Carlo analysis - the PDF of the full MC perturbed realization space in the presence of noise must include the raw data, and beyond, as at least some of the realizations will shift uncorrelated variations under such a spike. The majority will blur a short spike by shifting proxies to not coincide, but will still pull up the mean in that area. That's true even given dating errors, as some of the perturbations will undo such errors. In the 1000-realization set (which should be a good exploration of the MC space) as shown by the 'spaghetti graph' - the outer bounds of those realizations do not include any such spikes.

    Now, it may be that 1000 realizations is not a sufficient exploration of the MC set (unlikely, though), or that the combination of proxy smearing and MC low-pass filtering might make a small spike difficult to distinguish. I would disagree, but I haven't gone through the full exercise myself.

    However - Isn't this all just a red herring? One initially raised by 'skeptics' in an attempt to dismiss the Marcott paper?

    • Events like the 8.2 Kya cooling show up quite strongly in multiple proxies (which is how we know of it); it even appears to be visible in the Marcott reconstruction as perhaps a 0.1 C global cooling.
    • If a 0.9 C, 25x10^22 J warming spike occurred in the Holocene, we should have proxy evidence for it - and we don't.
    • There is no plausible physical mechanism for such an up/down spike.
    • There is a known physical mechanism for current warming (which won't be a short spike, for that matter).
    • There is therefore no support for the many 'skeptic' claims that "current warming might be natural" and out of our control.

    The Marcott et al paper is very interesting; it reinforces what we are already fairly certain of (that there is a lack of evidence for large spikes in temperature over the Holocene, and that there is no physical basis for such spikes), and I expect their methods will be extended/improved by further work. But arguing about the potential existence of mythic and physically impossible spikes is an irrelevant distraction from the issues of current warming. 

  12. Food Security - What Security?

    Don't worry, WWIII is around the corner......it will solve your problems.

  13. Bob Lacatena at 00:21 AM on 7 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    After a brief exchange with Dr. Ray Pierrehumbert at the University of Chicago, he directed me to his 2007 post at Real Climate titled What Ångström didn’t know, wherein he basically presents the derivation in plain English (no math).  To supplement that, I'd also suggest doing some research on optical thickness and the Beer Lambert Law.  If you have the chops for it, the Science of Doom website has some very good explanations (warning: math!) of a lot of things.

  14. Bob Lacatena at 23:04 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    I'd just like to add that Neal King has (offline) pointed out that this was previously discussed on this same thread, at comments 73, 76 and 78.

    Offline, he also pointed out that:

    ...the explanation from Pierrehumbert is that the radiative forcing is due to the change in flux when the critical point (at which the optical depth, as measured from outer space downward, reaches the value 1: Photons emitted upward from this point will escape, so this defines an effective photosphere for the given frequency.) changes its altitude.

    This would greatly simplify the calculation problem.

    I may pursue this further myself, if I can find the time... it's a very interesting question.  In particular, it's about time I plunked down the cash on Ray Pierrehumbert's text book Principles of Planetary Climate, and perhaps John Houghton's The Physics of Atmospheres.

  15. Making Sense of Sensitivity … and Keeping It in Perspective

    Glenn has answered a lot of your questions, but the confusion is about how to use it. Once you know (or have estimated) a climate sensitivity, then you can use it to calculate deltaT directly. However, you need the full-blown GCM to derive the climate sensitivity in the first place. This is the reason behind the debate on CS. Estimates can be made empirically from paleoclimate or, more commonly, from the models, but you have a range of values coming from those, with most clustering between 2.5 and 3. The key to CS is the feedbacks. By itself, 3.7 W/m2 of TOA forcing gives you 1.2 C of temperature rise. However, with a temperature rise you immediately have feedback from increased water vapour. In the slightly longer term you get feedback from albedo (particularly change in ice), and on longer timescales you have temperature-induced increases in CO2 and CH4 from a variety of sources. Add into the equation the change in cloudiness with temperature (and whether this is low-level cloud or high-level cloud) and you start to get a feel for the complexity of GCMs.
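    A minimal sketch of that arithmetic, combining the 3.7 W/m2 and 1.2 C figures above with the standard textbook gain form 1/(1-f); the feedback fractions are purely illustrative:

```python
# Sketch of feedback amplification of the no-feedback (Planck) response.
# The gain form lambda = lambda0 / (1 - f) is the standard textbook expression;
# the feedback fractions below are illustrative, not values from the comment.
dF_2xCO2 = 3.7              # W/m^2 forcing for doubled CO2 (from the comment)
dT_planck = 1.2             # C, no-feedback response (from the comment)
lambda0 = dT_planck / dF_2xCO2         # ~0.32 C per (W/m^2)

for f in (0.0, 0.4, 0.6):              # illustrative net feedback fractions
    dT = (lambda0 / (1.0 - f)) * dF_2xCO2
    print(f"net feedback fraction {f:.1f} -> roughly {dT:.1f} C per doubling of CO2")
```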

  16. Food Security - What Security?

    Jonas@2, thanks for the links. It awakens memories. I think Forrester's world dynamics model had an unusual first "public" appearance. So far as I know, the results of his "World 1" simulation model first appeared in Playboy magazine. Dennis Meadows presented a preliminary version of the "limits to growth" model at our institute. In 1971, I was invited to speak to the Ann Arbor chapter of the Sierra Club on these modeling efforts. I focused mostly on the Forrester model, with which I was intimately familiar, because the Meadows work had not yet been completed.

    Donella H. Meadows article "System dynamics meets the press" might have some useful suggestions for those interested in improving the communication of climate change and global warming issues to the public.

  17. Glenn Tamblyn at 18:00 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer

    There are two parts to this. Calculating the change in Radiative Forcing at the Top of Atmosphere (TOA) due to a change in GH gases etc - essentially the change in the Earth's energy balance. Then calculating  the temperature change expected to result as a consequence of that.

    The standard formula used for the radiative imbalance change is

    Delta F = 5.35 ln(C/C0), where C0 is your reference CO2 concentration and C is the concentration you are comparing it to. The usual C0 chosen is the pre-industrial value of around 275 ppm. This formula is from Myhre et al 1998 and was included in the IPCC's Third Assessment Report (TAR).

    So a doubling is 5.35 ln(2), or about 3.7 W/m2.
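    Rendered as a small Python function (assuming the 275 ppm pre-industrial reference given above; this just evaluates the regression fit, it is not a radiative transfer calculation):

```python
import math

# The Myhre et al. (1998) simplified forcing expression quoted above.
def delta_F(C, C0=275.0):
    """Radiative forcing in W/m^2 for CO2 concentration C (ppm) relative to reference C0 (ppm)."""
    return 5.35 * math.log(C / C0)

print(round(delta_F(550.0), 2))   # doubling of 275 ppm: 5.35 * ln(2) ~ 3.71 W/m^2
print(round(delta_F(400.0), 2))   # ~2.0 W/m^2 for 400 ppm vs. pre-industrial
```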

    This formula is in turn a regression curve fit to the results from a number of Radiative Transfer Codes. These are programs that perform numerical solutions to the equation of radiative transfer. Essentially they divide the atmosphere up into lots of layers and calculate the radiation fluxes between each layer, taking into account the properties of each layer - temperature, pressure, gas composition etc. - and the transmission, absorption, emission and scattering of EM radiation in each layer, based on detailed spectroscopic data for each of the gases present from databases such as HITRAN. They perform this calculation, summing what is happening to each layer. And they do this either for each single spectral line - a large computational task - or by dividing the spectra up into small bands. The accuracy of these programs has been well demonstrated since the early 1970s.

    It is important to understand that these are not climate models. They perform a single, although large, calculation of the radiative state of a column of air at one instant, based on the physical properties of that air column at that instant.

    The second stage of the problem is to work out how temperatures change based on the radiative change. Back-of-the-envelope calculations can get you into the ballpark, which is what people did up until the 1960s. The very first, extremely simple climate models assumed a CS value. Current Climate Models, which are far from simple, now actually derive the CS as a result of the model. The radiative changes are fed into the model, along with lots of known physics - conservation of energy, mass & momentum; thermodynamics; cloud physics; meteorology; ice behaviour; atmospheric chemistry; carbon cycle chemistry; ocean models etc. These are then left to run, to see how the system evolves under the calculations. The result then, among other things, indicates the CS value.

    Climate Models, however, are not the only other way to estimate CS. The Wiki entry you cite gives another example, of a class of examples that are probably better than the climate models - the behaviour of past climates. In order to determine CS you don't have to have just a CO2 change. Anything that will produce a forcing change - volcanic activity, dust, changes in solar output - will provide data points to amass a broad estimate of what CS actually is.

    One trap to watch out for is that CS isn't always expressed the same way. Usually it is expressed as 'deg C per doubling of CO2', but sometimes in the literature it is expressed as 'deg C per W/m2 of forcing'.

    So what we are looking for is multiple evidence streams indicating similar values for CS. And broadly they do. Although these estimates often have longer tails of possible outlier values, the central point of the probability distribution of the results from most sources, the majority of them derived from observations of present and past climate, is fairly strongly at around the 3-3.5 range.

    Hope this helps.

  18. engineer8516 at 16:38 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    thanks for the replies and links.

    @scaddenp

    I'm not sure if I'm understanding you correctly... so the climate models estimate the temp increase from doubled CO2. Then taking the estimated temp increase and dividing it by 3.7 gives climate sensitivity. So that equation is just the equation for slope, i.e. rise over run, and it isn't directly used to calculate climate sensitivity from historical data. The reason I'm confused is because I think the Wikipedia article on climate sensitivity says that the equation can be used directly, which would imply that there is a physical foundation for it.

    "The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C...Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach." - wikipedia

  19. The two epochs of Marcott and the Wheelchair

    scaddenp @55, are there large spikes in individual proxies?  Yes, and there are also large spikes in multiple individual proxies simultaneously to within temporal error (see my 53).

    KR @54:

    1)  A 0.9 C spike is approximately a 2 sigma spike, not a fifty sigma spike.  That is, the 300 year average of an interval containing that spike will be 0.3 C (approx 2 SD) above the mean.  If you want to argue it is more than that, you actually have to report the frequency of such spikes in the unpruned Monte Carlo realizations.  Marcott et al did not report it (although I wish they had), and nobody else has reproduced it and reported it, so we just don't know.

    2)  We don't see any density function excursions in the Monte Carlo realizations because:

    a)  Marcott et al did not plot the PDF of centennial trends in the realizations (or their absolute values); and

    b) In the spaghetti graph you cannot see enough to track individual realizations over their length to determine their properties.

    Perhaps you are forgetting that the realizations are reconstructions with all their flaws, including the low resolution in time.  That means a Tamino style spike in the realization will be equivalent in magnitude to his unperturbed reconstruction, not the full 0.9 C spike.  As such, a spike starting from the mean temperature for an interval would not even rise sufficiently above the other realizations to be visible in the spaghetti graph.

    3)  Pruning the realizations is a statistical blunder if you are plotting the PDF for any property.  It is not a blunder, or wrong in any way if you want to see if a statistically defined subset of realizations have a particular property.

    4)  If I throw two fair dice, the maximum likelihood result of the sum of the dice is seven.  That does not mean I will always get a seven each time over one hundred throws.  In fact, with high probability, over one hundred throws I will get a 7 only about 17 times (16.66%).  Also with significant probability, I will get a 12 about 3 times.  As it happens, the probability of a realization lying on the mean of the realizations at any time is about 5%. Ergo, about 95% of the time any particular realization will not lie on the mean, but be above it or below it.  Most realizations will lie on the mean more frequently than any other temperature, but on no temperature relative to the mean very often at all.
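    A quick numerical check of that dice arithmetic (a simulation with an arbitrary seed; the exact probabilities are 1/6 for a seven and 1/36 for a twelve):

```python
import random
from collections import Counter

# Check of the dice illustration: expect roughly 17 sevens and 3 twelves per 100 throws.
random.seed(1)  # arbitrary seed so the example is repeatable
counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100))
print("sevens:", counts[7], " twelves:", counts[12])
print("expected: sevens ~", round(100 / 6), " twelves ~", round(100 / 36))
```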

    That is the reason Marcott et al compare modern temperatures to the PDF of temperatures in the realizations rather than the mean.  Directly comparing with the mean is, unfortunately, tempting, but wrong.  So also is treating the mean as the course of temperature over the Holocene.  Rather, it is a path that statistically constrains what Holocene temperatures could have been, given what we know.

  20. The Fool's Gold of Current Climate

    The Serendipity graphs above are also on the hopeful side, because they consider the atmosphere-ocean CO2 exchange only. They do not consider the earth system response. So far, what is known about it is that we can expect only positive feedbacks: methane release from clathrates and permafrost, decreased albedo from melting Arctic ice, and a warmer ocean degassing CO2 because warm water can hold less of it.

    The only problem is that the quantity of those feedbacks is unknown (maybe with the exception of ice albedo). I expect those figures (abstract and outdated already; we need to update the starting level to 400 ppm) to become more pessimistic (more warming in the pipeline) once those positive feedbacks are quantifiable.

  21. Making Sense of Sensitivity … and Keeping It in Perspective

    Just remember that formula is post-hoc. You get sensitivity out of a climate model run by solving for k from deltaT/deltaF, which gives you a useful way to estimate temperature for a given forcing. However, the GCMs do not derive temperature from that formula internally.

  22. Bob Lacatena at 14:28 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer --

    Spencer Weart has this reference to the first such calculation in 1967 by Manabe and Wetherald.

    You might want to look over this timeline.

    I'd also very strongly suggest reading Spencer Weart's The Discovery of Global Warming.  It's interesting reading, and it adds a lot of depth to both an understanding of the science and how old and broadly based climate science is.

  23. Bob Lacatena at 14:22 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    [engineer -- Your other post just went onto the next page.]

  24. Bob Lacatena at 14:21 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    ∆T = k log2(CO2final/CO2initial)

    Where k is the climate sensitivity in degrees C per doubling of CO2.

    I myself have never found the derivation for that, either. We at SkS should probably make a concerted effort to find it, as it would be well worth looking at and referencing.

    It may have arisen primarily from experimental observations, or else through "experimentation" using the MODTRAN radiative transfer computations (developed by the US Air Force, one of the pioneers in this stuff, due to their interest in making infrared missiles work properly in the atmosphere).  If it was determined through physical principles, it would need to take into account the varying density of the atmosphere (with altitude), as well as the resulting variations in IR absorption and emission as balanced against the number of collisions per second with non-GHG molecules like O2 and N2 (and of course the number of collisions is affected by both density and temperature, i.e. the average velocity of each molecule).  Then there are other complications such as bandwidth overlaps with other greenhouse gases (like H2O), and broadening of the absorption spectrum (pressure broadening and Doppler broadening).
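    One way to see, numerically at least, where a log2 form can come from: combine the Delta F = 5.35 ln(C/C0) fit quoted elsewhere in this thread with a constant sensitivity in C per W/m2. This is only a consistency check of two expressions already on this page, not the missing derivation; the lambda value below is illustrative:

```python
import math

# Consistency check (not a first-principles derivation): combining the 5.35*ln(C/C0)
# forcing fit with a constant sensitivity lambda (C per W/m^2) reproduces the
# dT = k * log2(CO2_final / CO2_initial) form, with k = lambda * 5.35 * ln(2).
lam = 0.81   # illustrative sensitivity in C per (W/m^2), chosen so k comes out near 3 C

def dT_via_forcing(C, C0):
    return lam * 5.35 * math.log(C / C0)

def dT_via_log2(C, C0):
    k = lam * 5.35 * math.log(2.0)        # C per doubling of CO2
    return k * math.log2(C / C0)

for C in (400.0, 550.0, 1100.0):
    print(C, round(dT_via_forcing(C, 275.0), 3), round(dT_via_log2(C, 275.0), 3))
```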

    All in all, it's pretty complicated.

    I'll ask and see what people can turn up.

  25. engineer8516 at 14:17 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    I read through the link. The formula I was referring to was dT = climate sensitivity * dF.

    Hopefully this isn't a double post. I'm not sure what happened to my other one.

  26. Bob Lacatena at 14:12 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    You might also want to look at this page, courtesy of Barton Paul Levenson.  I don't think it's been updated since 2007, so it lacks a good 5 years worth of further research, but it gives you some idea of the breadth of the work that's been done in the area, and how much the end results give basically the same answer.

    [Be wary of any study that gives too high or too low a climate sensitivity.  Like anything else, the outcome depends on underlying assumptions, and not all papers that are published withstand scrutiny forever.  In fact many are quickly refuted.  Peer-review is only the first hurdle.  A good example is Schmittner et al. (2011), which found a lower climate sensitivity than many, but also assumed a lower temperature change from the last glacial to the current interglacial -- a lower temperature change obviously will yield a lower sensitivity, so the question shifts more towards establishing the actual temperature change in order to arrive at the correct sensitivity... as well as recognizing that his was the sensitivity of the planet exiting a glacial phase.]

  27. engineer8516 at 14:10 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    I looked at the link. I was referring to the formula dT = climate sensitivity * dF.

  28. Bob Lacatena at 14:05 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    It's not derived through a formula -- that would be like having a single derived formula that computes the expected age of a species of animal, based on the animal's biology.  It's just too complex for that. 

    The link I already gave you ("many methods of estimating climate sensitivity") gives some (not all -- in particular, that link skips over modeling, which is a very important and valuable technique) of the methods of computing sensitivity.  To really understand it you'd need to find copies of and read many of the actual papers.

    Another approach is to use the search box up top, and search for "climate sensitivity".

    The best thing you can do with climate sensitivity is to learn a lot about it.  Make it a multidimensional thing that you understand from many angles.

  29. engineer8516 at 13:32 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    Thanks for the replies. @Sphaerica, I wasn't trying to insult climate scientists. I was trying to figure out the basis for the assumption, and I wasn't implying that it was arbitrary.

    Also, do you guys know of any good links that go into the details of the derivation of climate sensitivity? Not how the value is estimated, but the derivation of the formula. I couldn't find any good sites on Google. Thanks again.

  30. The two epochs of Marcott and the Wheelchair

    Forgive me if I haven't been following this closely enough, but surely no spike in global temperatures is possible unless there are spikes of appropriately the same magnitude in individual proxies. So one question: are there any spikes on a centennial scale with the size of modern warming evident in individual proxies (like ice cores) that have sub-century age resolution? If there are none, then it hardly matters about the niceties of the multi-proxy processing, surely?

  31. Bob Lacatena at 12:58 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    One last thing.  You said:

    ...we're assuming that climate sensitivity behaves nicely...

    No, we're not.  Scientists aren't stupid, and they don't work from arbitrary assumptions.  There are reasons for believing the climate sensitivity behaves a certain way, based on physics, past climate and present observations.  It's not just some assumption that has been wantonly adopted because it makes life easier.  Nobody in any field or profession gets to do things that way.  Why would climate scientists?

  32. Bob Lacatena at 12:55 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    engineer,

    That's a good question, with a complex answer.

    It is absolutely true that climate sensitivity is not and would not be exactly constant. Climate sensitivity is a result of a wide variety of feedbacks which individually have different impacts.

    There are fast feedbacks, which are physical mechanisms that are somewhat predictable through physical modeling (for instance, the fact that warmer air will hold more moisture, thus adding the greenhouse gas effect of H2O to the air).

    There are also slow feedbacks that depend on physical, initial conditions.  The ice sheets, for example, during a glacial period contribute a lot to keeping the planet cool by reflecting large amounts of incoming sunlight.  When temperatures warm and the ice sheets retreat, that results in a positive feedback.  If you imagine the ice sheets spread over the globe, it is easy to see that those ice sheets are larger when they are further south.  As the ice sheets retreat, they get smaller and smaller, and each further latitude of melt reduces them by less, so that the feedback is not continuously proportional.

    CO2 as a feedback instead of a forcing is also a diminishing feedback.  As you add more and more CO2 to the atmosphere, the additional CO2 has less and less of an effect, so you need even more CO2 to accomplish the same amount of warming.

    So for any particular feedback, the initial climate state is important.

    But there are a lot of different, intermixed feedbacks.  CO2 and CH4 can be added to the atmosphere due to major ecosystem changes (forest die-offs or permafrost melt).  There is the melting of ice sheets.  Ocean circulation and stratification patterns can change.  The list goes on.

    As a result, given all of the varying feedbacks with varying effects under different conditions... it all averages out.

    There are many methods of estimating climate sensitivity.  Some look at past climates, to see what has happened before.  Some work with models that try to emulate the physical mechanisms.  Some directly observe how the climate changes in the very short term due to known and measured forcing changes.

    The thing is that all of these methods produce varying results in a broad range, but most converge on the narrower range of 2 to 4.5 C, and most converge on the same value of about 3 C.  Taken individually, nothing is exactly the same as the current climate, but since most studies, past and present, seem to fall into the same range, it suggests that there is validity to the broad assumption (Occam's Razor) that the climate generally behaves in about the same way.

  33. Making Sense of Sensitivity … and Keeping It in Perspective

    Engineer - that was done more or less by Broecker for his remarkably accurate 1975 prediction, but that is not how any modern climate model works. Instead, climate is emergent from the interaction of forcings with the various equations in the model. If you want to know what the climate sensitivity of a model is, then you work backwards from the temperature at the end point as calculated by the model compared to the CO2 forcing. You can also run the model with various forcings to see what the sensitivity to, say, a solar forcing of the same magnitude is (see for instance the ModelE results). Over a very big temperature range, there would be good reason to suppose sensitivity would change. E.g. when all ice is melted from both poles, the only albedo feedback would be weak ones from land cover change. Preserve us from having to worry about that for the next 100 years.

  34. The two epochs of Marcott and the Wheelchair

    Tom Curtis - To clarify my earlier comments regarding Monte Carlo analysis: 

    Given a set of proxies with date uncertainties, if there is a large (50 sigma?) signal involved, even if the initial dates are incorrect, at least some of the realizations in the perturbation space will accurately reflect that spike, being shifted to reinforce each other. And some will show more than that spike, due to unrelated variations being date-shifted into the same time period. The number, and the density of the Monte Carlo probability function, will depend on the size of the various date errors - but there will be a density function including the spike and more in the full realization space. This is very important - if an alignment that shows such a spike is possible under the uncertainty ranges, it is part of the Monte Carlo realization space.

    1000 runs is going to sample that space very thoroughly. And nowhere in the 1000 Marcott runs pre-19th century do you see a density function excursion of this nature. 

    [ Side note - pruning the Monte Carlo distribution would be completely inappropriate - the entire set of realizations contributes to the density function pointing to the maximum likelihood mean, and pruning induces error. Even extreme variations are part of the probability density function (PDF). The mean in the Marcott case is clearly plotted; "Just what were the mean values for the thousand years before and after that spike in that realization? Clearly you cannot say as its hidden in the spaghetti." is thereby answered. ]

    With very high probability, the real temperature history does not follow the mean of the monte carlo reconstructions...

    I disagree entirely. That is indeed the maximum likelihood of the history, based upon the date and temperature uncertainties. And the more data you have, the closer the Monte Carlo reconstruction will be to the actuality. Exactly? No, of course not, reconstructions are never exact. But quite close. Monte Carlo (MC) reconstructions are an excellent and well tested method of establishing the PDF of otherwise ill-formed or non-analytic functions. 

    It would certainly be possible for somebody to go through and make a maximum variability alignment of individual proxies as constrained by uncertainties in temporal probability. If that were done, and no convincing, global Tamino style spikes existed in that record, that would be convincing evidence that no such spikes existed.

    That is exactly what a MC exploration of date and temperature uncertainties performs. No such excursions are seen in any of the 1000 realizations. Meanwhile, the rather smaller +/- excursions over the last 1000 years are quite visible, as is the 8.2 Kya event of 2-4 centuries (and yes, it's visible as a ~0.1-0.15 C drop in Marcott). I believe it is clear that the Marcott data would show a larger 0.9 C global two-century spike event if it existed - and it does not show. 
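    A toy version of that argument, with made-up proxies and an assumed 150-year (1 sigma) dating error - a sketch of the idea only, not the Marcott et al. procedure:

```python
import numpy as np

# Toy sketch of the Monte Carlo argument: several proxies all record the same
# 200-year, 0.9 C spike, but each carries an (unknown) dating error. The MC step
# re-perturbs the dates; the stacked mean still shows a bump at that point, and
# some realizations show considerably more than the median one. All numbers are assumptions.
rng = np.random.default_rng(0)
years = np.arange(0, 4000, 20)                         # 20-year resolution toy time axis
true_spike = 0.9 * np.clip(1 - np.abs(years - 2000) / 100.0, 0.0, None)

n_proxies, n_real, date_sigma = 10, 1000, 150          # assumed 1-sigma dating error (years)
offsets = rng.normal(0.0, date_sigma, n_proxies)       # the proxies' true (unknown) date errors
proxies = [np.interp(years, years + off, true_spike) for off in offsets]

stack_peaks = []
for _ in range(n_real):
    shifted = [np.interp(years, years + rng.normal(0.0, date_sigma), p) for p in proxies]
    stack_peaks.append(np.mean(shifted, axis=0).max())

print("median stacked spike height:", round(float(np.median(stack_peaks)), 2), "C")
print("largest across realizations:", round(float(max(stack_peaks)), 2), "C")
```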

  35. Glenn Tamblyn at 12:34 PM on 6 April 2013
    Food Security - What Security?

    Agnostic, you mention one impact of warming in the summary but don't expand on it any further; warming of the oceans. I have never actually seen an assessment of the negative impact on biological productivity in the oceans of increased water temperatures. Have you any references on that?

  36. engineer8516 at 12:26 PM on 6 April 2013
    Making Sense of Sensitivity … and Keeping It in Perspective

    I have some questions regarding climate sensitivity.

    Basically, temp data and changes in forcing are used to calculate climate sensitivity, which is then used to calculate the rise in temp that would result from 3.7 W/m^2 of forcing from doubled CO2.

    My problem with this calculation is that we're assuming that climate sensitivity behaves nicely (i.e. is almost constant). My question is: how do we know that this is a reasonably valid assumption? Thanks.

  37. Food Security - What Security?

    Villabolo @ 4:

    Da Google sent me to this link...

    http://www.skepticalscience.com/North-Carolina-Lawmakers-Turning-a-Blind-Eye-to-Sea-Level-Reality.html

  38. Food Security - What Security?

    Concerning food storage, it is possible to store wheat and other grains for 10 to 30 years depending on the method.

    One method is conventional cans (about gallon size and larger) in an oxygen-free environment - nitrogen is used to replace the oxygen. So long as it's stored out of the heat, at room temperature, it will last for 20-30 years. Brown rice will only keep for 10 years.

    Abrupt changes in climate will give us a 'feast or famine' cycle where we'll be able to grow adequate amounts some years but suffer extensive loss on other years. The only way to alleviate that is to store large quantities of grains during the 'good' years to make up for the bad years.

    Eliminating biofuel production will also give us a large safety margin.

    Going vegetarian will help even more but it's not going to happen voluntarily.

  39. The two epochs of Marcott and the Wheelchair

    Sphaerica @48, I assume you do not think I am following the steps to denial.

    The fact is that some people, most notably Tamino, who at least has made a check of the claim, are claiming that the Marcott et al. reconstruction excludes Tamino style spikes.  That is more than is claimed by the authors of the paper.  Before we can accept that Marcott et al does exclude such spikes, we need to check the evidence.  So far the evidence is tantalizing but far from conclusive.  Ergo we should not use that claim as evidence in rebutting AGW skepticism.  When "skeptics" get to step two, we need only point out (truthfully) that they are over-interpreting Marcott et al's remark about frequency and that they have not shown that such a spike could exist and not show up in the Marcott et al reconstruction.  In short, they have no evidence for their claim at step two.  Although many AGW "skeptics" like to treat an absence of refutation of their position as proof their position is true, we need not accept such irrationality and cannot reach those who continue to accept it once it is pointed out to them.

    However, this discussion is indeed a distraction and absent a significantly new argument or evidence that changes my opinion, I shall withdraw from further discussion of this point.

  40. The two epochs of Marcott and the Wheelchair

    KR @43:

    "Hence I would consider that a single realization at that point represents a value that is too high for the data, an outlier, that consists of perturbations that happen to overlay variations in a reinforcing fashion at that point."

    I think you are missing the point of the monte carlo reconstructions.  We know that there are errors in the temperature reconstruction and dating of the proxies.  That means at some points the unperturbed reconstruction will be too low, or too high, relative to reality.  At those points, particular realizations in the Monte Carlo reconstructions which are low or high (respectively) are closer to reality.  Not knowing the precise course of reality, we cannot know whether or not a particular spike in a particular realization, whether high or low, maps reality closely or not.

    One way you could approach the general problem would be to do 20,000 Monte Carlo realizations and to prune away all realizations with an overall probability, relative to the original data and errors, of less than 5%.  That will leave you with approximately 1000 realizations, all of which could plausibly be the real history of temperature over the Holocene.  You could then examine those thousand plausible realizations to see if any contained Tamino style spikes.  If not, then the data excludes such spikes.  If they do, the data does not.

    As it happens, Marcott et al did not prune their realizations based on global probability. Consequently about 50 of their realizations are not plausible temperature reconstructions.  Importantly, though, even plausible reconstructions will contain short intervals which are implausible given the data.  Indeed, about 5% of their length, on average, will be implausible based on the data.  Therefore we cannot look at a single peak and conclude, from the fact that the realization is clearly an outlier in that time period, that the realization is an outlier over the entire Holocene.  You need the entire realization to come to that conclusion, and due to the nature of 1000-realization spaghetti graphs, we do not have the entire history of any realization.

    So, for all we know from the spaghetti graph, there are plausible realizations containing Tamino style spikes within  it.  In fact, what we can conclude from the Marcott et al realizations is that:

    1) With very high probability, the real temperature history does not follow the mean of the monte carlo reconstructions;

    2)  With very high probability, the 300 year mean of the real temperature history lies within 0.27 C of the mean approximately 95% of the time; but that

    3) With high probability the 300 year mean of the real temperature history is further than 0.27C from the mean about 5% of the time.

    The question is whether the approximately 275 years over which we expect the 300 year mean of the real temperature to be more than 0.27 C above the mean are the result of sustained multicentury warmth, or whether they could be from a Tamino style spike.  (Note, a Tamino style spike has a 300 year mean temperature increase of 0.3 C.)  On the information to date, the latter is improbable, but not excluded.

    "But this is all really a side-show. We have proxies down to decadal and near-annual resolution (ice cores and speleotherms, in particular), and none of them show a global 'spike' signal of this nature. The only reason the question was raised in the first place, IMO and as noted here, is as obfuscation regarding global warming. Current warming is unprecedented in the Holocene, and it is due to our actions - it's not a 'natural cycle'."

    Well, yes, they don't show global anything because they are regional proxies.  I assume you mean stacks of such proxies.  Well, consider the only two high-resolution (20-year) proxies used by Marcott et al over the period 8 to 9 Kya:

    In particular, consider the spike of 3 C at 8.39 Kya in the Dome C ice core, which is matched by a 1.5 C spike in the Agassiz-Renland ice core.  That looks suspiciously like a Tamino style spike.  Or consider the 3 C trough at 8.15 Kya in the Agassiz-Renland ice core and the 2 C trough at 8.23 Kya in Dome C.  Although the resolution of these ice cores is actually annual, at 8 Kya plus years the 1 sigma error of the age estimate relative to 1950 is greater than 150 years.  Ergo it is quite possible those two troughs should align, creating a Tamino style trough.

    These two possible candidates do not show up in the mean of the Marcott et al reconstruction.

    It would certainly be possible for somebody to go through and make a maximum variability alignment of individual proxies as constrained by uncertainties in temporal probability.  If that were done, and no convincing, global Tamino style spikes existed in that record, that would be convincing evidence that no such spikes existed.  But until it is done, there is sufficient variability in individual regional proxies that we cannot make the claim that the individual proxies exclude that possibility.

  41. Food Security - What Security?

    @ Matt #1

    "Remember North Carolina's 2012 bill that would have required people to only use past trends to predict sea level rise?"

    Can you provide me a reference to that bill?

     

  42. Daniel Bailey at 09:20 AM on 6 April 2013
    The Fool's Gold of Current Climate

    It is the mark of wisdom to also consider the Temperature Change portion of Andy's graphic above.  And note that this is the result of a complete cessation of human-sourced emissions (brought to zero and held there for 300 years).

    Questions, anyone?  Bueller?

  43. The Fool's Gold of Current Climate

    william: What comes out of his talk is that if we stop putting carbon dioxide into the atmosphere, it could reduce in the atmosphere rather rapidly.

    Actually, if we had stopped emissions dead, in 2010, the atmospheric concentration of CO2 would have declined to about 340 ppm by 2300. For comparison, that's the amount it was in 1980. The CO2 will reduce, over 190 years, at approximately one-sixth of the rate that we are currently increasing it. That's the most drastic case imaginable. Graph below is from Serendipity.

  44. Food Security - What Security?

    PS: Lester Brown has *lots* of raw data for his book 
    "Full Planet, Empty Plates; The new geopolitics of food scarcity"
    on his website (as excel files): http://www.earth-policy.org/data_center 

  45. Food Security - What Security?

    Not everybody thinks we will have 10 billion humans: Jorgen Randers (Club of Rome) predicts a peak of 8 billion due to lower fertility in cities; see the short introduction to his report to the Club of Rome, "2052, a global forecast for the next 40 years", picking up after 40 years of "Limits to Growth": http://www.clubofrome.org/?p=703 . Randers says that this lower-than-generally-assumed population will cause lower growth than expected and push back the more catastrophic climate change effects to the second half of the century (if nothing is changed, which is what he explicitly assumes after 40 years of environmental activity with limited success ...).

    It is also interesting to view the three videos given at the Smithsonian Institution for the 40 years of "Limits to Growth", with each of the three speakers (Meadows, Randers, Brown) giving a different priority to the three dangers from the "Limits to Growth": resource scarcity (oil, ...), pollution (climate change), population (food, water, ...).

    Dennis Meadows (Oil; Resource Scarcity):
    http://www.youtube.com/watch?v=f2oyU0RusiA
    Jorgen Randers (Climate Change; Pollution):
    http://www.youtube.com/watch?v=ILrPmT6NP4I
    Lester Brown (Food+Water; see his book http://www.earth-policy.org):
    http://www.youtube.com/watch?v=KPfUqEj5mok

  46. Matt Fitzpatrick at 07:26 AM on 6 April 2013
    Food Security - What Security?

    The "10 billion by 2065"(pdf) projection appears to have been made by the U.S. Census Bureau under the Bush administration in 2004, based on 2002 data, in a report with zero mentions of climate change, and zero references to climate change publications. This leads to eyebrow raising predictions, like Chad being among the fastest growing nations on Earth through 2050, more than tripling its population, even as Lake Chad shrinks to a record minimum in the west and desertification creeps into the east.

    Remember North Carolina's 2012 bill that would have required people to only use past trends to predict sea level rise? This Census Bureau report reminded me a lot of that.

    So it'll be interesting to see what effects climate change predictions will have on the Census Bureau's next world population projection revision - assuming it doesn't ignore climate change this time. Surely projected birth and mortality rates should change, at least on a regional basis. Combined with migration triggered by climate change, I'd expect the distribution of population growth to be on a different track, and perhaps even the total population curve.

  47. The two epochs of Marcott and the Wheelchair

    Thanks KR, I will have a look at them.

  48. The two epochs of Marcott and the Wheelchair

    Dissident - NOAA has a great number of climate reconstructions here, most with the data tables. Others can be found with a simple Google Scholar search, such as on "Holocene temperature reconstruction". 

    Available Holocene reconstructions include those based on ice cores, pollen deposition, speleothems, alkenones, benthic foraminifera, corals, and so on. 

  49. grindupBaker at 06:21 AM on 6 April 2013
    The Fool's Gold of Current Climate

    @Michael Whittemore(20) "...a little warming from induced CO2 could be a good thing, which Dana also seems to suggest" - so there's the obvious corollary that humans this century using every last drop of coal & problem-recovery unconventionals is a tad selfish, now that we know future humans might have used it to mitigate a glacial when it started, rather than having it dissolved uselessly in the oceans after a long hot spell.

  50. The Fool's Gold of Current Climate

    It is far too flippant to call Riley's comments greenwash.  If one can take his points as being valid, and I see no reason why he would lie about them, this is the other side of fossil fuels.  We have increased growth of plant life due to increased carbon dioxide in the atmosphere and a reduced use of resources from nature which were to the detriment of the natural environment.   It is still probably a valid point that if we continue to put ever-increasing amounts of carbon dioxide into the atmosphere, we will very likely cause a climate flip, and that would likely be disastrous.  What comes out of his talk is that if we stop putting carbon dioxide into the atmosphere, it could reduce in the atmosphere rather rapidly.  The other point is that the more we switch to wind, solar and other energy sources such as wave and tidal currents, the better off we will be all around.  We won't be testing the theory of sudden climate change with gay abandon and we won't be putting a strain on the woodpeckers.  I think we should take his comments seriously, examine them as scientists should, and combine them into a bigger picture.



