
Comment Search Results

Search for "CO2 varies with sea temp"

Comments matching the search "CO2 varies with sea temp":

  • At a glance - Is the CO2 effect saturated?

    MA Rodger at 19:42 PM on 25 April, 2024

    Eclectic @14,


    You say the work of these jokers Kubicki, Kopczyński & Młyńczak failed the WUWT test, being too bonkers even for Anthony Willard Watts to cope-with. I would say Watts has happily promoted work just as bonkers in the past.


    And as you say, there is no WUWT coverage of this Kubicki et al 2024 paper although Google shows it is mentioned once in one of the comment threads, as is an earlier paper from the same jokers. Indeed, there are two such earlier papers from 2020 and 2022. Thankfully, these are relatively brief and thus they easily expose the main error these jokers are promoting.


    In Kubicki et al (2020) they kick-off by misusing the Schwarzschild equation. The error they employ even gets a mention within this Wiki-ref which says:-



    At equilibrium, dIλ = 0 even when the density of the GHG (n) increases. This has led some to falsely believe that Schwarzschild's equation predicts no radiative forcing at wavelengths where absorption is "saturated".



    They then measure the radiation from the Moon through a chamber either filled with air or with CO2 and show there is no difference and thus, as their misuse of Schwarzschild suggests, that the Earth's CO2 is "saturated." In preparing for this grand experiment, they research the thermal properties of the Moon as an IR source and thus tell us:-



    The moon. The temperature of its surface varies a lot, but for the part illuminated by the Sun, according to encyclopaedic information, it may slightly exceed 1100ºC.



    This well demonstrates that these jokers are on a different planet to us, as it is well known that our Moon only manages 120ºC under the equatorial noon-day sun.
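    As a rough check of that figure, a short sketch in Python (assuming a solar constant of ~1361 W/m^2 and a lunar Bond albedo of ~0.11; these values are assumptions for illustration, not taken from the comment itself):

    # Rough subsolar equilibrium temperature of an airless Moon.
    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0                # solar constant at 1 AU, W/m^2 (assumed)
    ALBEDO = 0.11             # approximate lunar Bond albedo (assumed)

    T_subsolar = (S * (1 - ALBEDO) / SIGMA) ** 0.25
    print(round(T_subsolar - 273.15))   # ~109 degC: roughly the observed ~120 degC
                                        # noon-day figure, nowhere near 1100 degC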

  • CO2 effect is saturated

    MA Rodger at 18:41 PM on 24 March, 2023

    Gootmud @680,


    Responding to the individual detail of your post not addressed so far:-


    ☻ The blanket analogy is not useful when the detailed mechanism of AGW is considered. The temperature of the air against the outer blanket will remain effectively constant when extra blankets are added. The emission-to-space altitude of the planet at the CO2 IR-emitting frequencies (and thus the temperature which dictates the level of emission-to-space) varies with the level of CO2 in the atmosphere. CO2 is well-mixed up to 50km and more.


    ☻ Convection is not a great player in cooling the planet. The atmospheric cells of the troposphere are responsible for the trade winds, and if convection were significant, these would be permanently of super-hurricane strength. It takes about two weeks for a packet of air to rise to the tropopause in these cells, and potentially half the convection is seen in cyclones, which are relatively rare events.


    ☻ Pretty-much all CO2 molecules excited by IR lose their excitation through collision. There are many, many more CO2 molecules excited by collision, and the level of IR-emission is thus determined by air temperature, which sets the level of collision.


    ☻ Sympathetic emission from an excited CO2 molecule (your laser effect) occurs at all altitudes.


    ☻ The greenhouse effect from increased CO2 is not linear except when CO2 levels are very small. For higher levels, the relationship is roughly logarithmic. Thus the forcing resulting from CO2 levels increasing from 200ppm to 400ppm would roughly equal that of 2,000ppm to 4,000ppm. There is actually a boost above that logarithmic relationship for high levels of CO2, beginning at ~1,500ppm, as an emissions frequency at around 10 microns begins to be significant. In this, you may find Zhong & Haigh (2013) 'The Greenhouse Effect & Carbon Dioxide' a useful read.
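    To make the logarithmic point concrete, a minimal sketch using the commonly quoted simplified expression dF ≈ 5.35 ln(C/C0) W/m^2 (an approximation for moderate CO2 levels, not a figure from the comment itself):

    import math

    def co2_forcing(c_new_ppm, c_old_ppm):
        """Simplified logarithmic CO2 forcing, dF = 5.35 * ln(C/C0) in W/m^2."""
        return 5.35 * math.log(c_new_ppm / c_old_ppm)

    print(co2_forcing(400, 200))     # ~3.7 W/m^2
    print(co2_forcing(4000, 2000))   # ~3.7 W/m^2: the same doubling, the same forcing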

  • There's no tropospheric hot spot

    Cedders at 07:42 AM on 21 August, 2022

    Thanks to you both for trawling through this. I was wary that linking to the document might boost its search engine ranking, but here it is via archive.org (16 of the 24 pages in the printed version). It doesn't have the professional gloss of a Heartland publication, but any documentation with one or two specious arguments helps some members of the public justify their preferred position.


    Both papers look useful in understanding important features of atmospheric physics before thinking about how greenhouse forcings affect them. I'm not sure why search engines didn't find them for me; or maybe I was looking specifically for greenhouse effects. I find Seidel et al the easier of the two to read, and it provides the simplest rebuttal: there really isn't much diurnal temperature variation to influence OLR in any case. Gristey et al confirms 'diurnal temperature range becomes negligible at around 100 m altitude' (above surface), suggesting it's surface temperature that is most important in their Sahara example, followed by clouds and convection.


    Gristey et al also suggests to me that the effect of increases in LLGHGs and specific humidity will reduce transmittance, so also reducing OLR coupling to surface 'hotspots' and further reducing what small diurnal range there is at, say, the tropopause. I am a bit confused why the OLR varies proportionately more over the day than the temperature at 500 hPa (Seidel fig 3), but suppose the atmospheric window (IR emitted from the surface that never interacts with atmosphere) is important.

    On the logic of rising emission layers: 'Being thinner (less O2 & N2 but same CO2), it would presumably have a lower specific heat capacity, so for the same IR flux it would cool quicker,' - well put, that is where I thought there might be some merit in the new argument - 'but being at a higher/colder altitude it would be shooting off less cooling IR.' I wasn't convinced by that caveat, because shouldn't the OLR/cooling IR at the TOA remain what you expect at 255 K? So 'upper atmosphere adjusting quicker' might have a grain of truth in it, but adjusting to what (space mostly?), and doesn't it imply regressing to diurnal mean more quickly? In any case, there's no empirical support for increased diurnal range.


    'And in the literature I haven't heard of any consideration of the diurnal range up in the upper troposphere beyond cloud formation which suggests if there is some effect, it is all rather obscure.'


    Very obscure, but that doesn't mean it doesn't serve the contrarian's purpose. I probably would never have thought about the 'hotspot' itself, not being prone to expeditions in high-altitude balloons or worried about that part of the atmosphere, were it not for various contrarians using the very lack of empirical data to argue for a flaw in established theory. These arguments are about fine details, emphasising them as if they were important, to distract from a central point that someone wants to avoid.

  • Skeptical Science New Research for Week #35, 2021

    MA Rodger at 22:40 PM on 6 September, 2021

    Eclectic @4,


    Pangborn presents a long list of puerile misconceptions which, if you are not familiar with the subject, can take a while to rattle out into the light of day.


    Thus in his latest masterpiece of nonsense linked above we have in Section 1 para 1:-



    "The radiation energy travels from ghg molecule to ghg molecule (or between surface and ghg) at the speed of light but dwells in each molecule for a few microseconds making the molecule warmer. The dwell time is also called the relaxation time and cumulative dwell time is what causes the GHE"



    It is not "a few microseconds" but on average "many microseconds", and, as atmospheric molecules do collide every "few microseconds", the radiation energy is almost always transferred from the flapping CO2 molecule to other atmospheric molecules. The far-more-numerous atmospheric collisions can set CO2 flapping as well as robbing the energy from a flapping one. As collisions are a product of temperature and are very, very numerous, they are overwhelmingly responsible for setting CO2 molecules flapping, often enough for some of the flapping CO2 molecules to radiate away the equivalent energy trapped from radiation by the CO2. The "cumulative dwell time" of individual molecules is certainly not "what causes the GHE."


    Section 1 para 2 is a bit garbled. The main error is in saying radiation escapes into space "Increasingly with altitude" . There is a quite distinct threshold altitude below which no radiation can escape, and above which all radiation can escape. The temperature of this threshold altitude (which varies with wavelength and GHG concentrations) is what determines how much radiation cools the planet.


    Section 1 para 3 arrives at a 1200-to-one ratio for surface/tropopause H2O levels, which looks like a back-of-fag-packet calculation as it is far too low.


    Section 2 talks of a "notch and 'hash'" in the Earth's IR spectrum. (I assume notch=hash.) The notches in his Fig 1 are actually due to the higher altitudes where CO2 & O3 are still absorbing their radiation but where H2O is so diffuse it is letting it out into space. The CO2 "notch" is not 18 W/m^2 but actually about 30 W/m^2, so causing directly about +8ºC of GH-effect. It is only through such planet-wide warming that the warming is "shared with (redirected to) WV molecules which radiate it to space", because it is now warmer at the altitudes where radiation can escape to space, and higher temperatures mean more flapping molecules.


    And so the fool goes on. Pure nonsense through and through.

  • 2021 SkS Weekly Climate Change & Global Warming Digest #1

    Dr. S. Jeevananda Reddy at 17:47 PM on 11 January, 2021

    Negelj


    Global Warming: It is an estimate of the annual average part of the temperature trend. The trend from 1880 to 2010 is 0.6°C per century, of which the global warming component is 0.3°C; for 1951 to 2100 it is 0.45°C, according to a linear trend. But in reality it is not so, as the energy component is constant, with the sunspot cycle superposed on it. However, the reliability depends upon the data used. For example, the number of stations around 1850 was < 100; by around 1980 [satellite data collection started around this time] there were more than 6,000; and with the availability of satellite data the number of stations came drastically down to around 2,500. The satellite data covered both the urban-heat-island effect and the rural-cold-island effect and showed practically no trend – the US raw data series also showed this. However, this data was removed from the internet [Reddy, 2008 – Climate Change: Myths & Realities, available online] and replaced with a new adjusted data series that matches the ground data series. Here the cold-island effect is not covered. With all this, what I want to say is that the warming associated with solar power plants is added to global warming. How much? This needs collection of data for all the solar power stations. A met station covers only a small area but acts like a UHI effect – I saw a report on surface temperatures in downtown Sacramento at 11 a.m., June 30, 1998, which presents high variation from area to area based on land use [a met station refers to that point only]. So the effect of solar and wind power plants is similar to that of heatwaves and coldwaves. Here the general circulation pattern plays the main role.


    Nuclear Power: Nuclear power production processes contribute to the "global warming process", while hydropower production processes contribute to the "global cooling process". The nuclear power production processes don't fit the "security, safety & economy" concepts on the one hand, nor the "environment & social" concepts on the other. Unlike other power production processes, in nuclear power production the different stages of the nuclear fuel cycle are counted as separate entities when assessing the cost of power per unit, and only the power production component is accounted for in the estimation of cost of power per unit; carbon dioxide is released in every component of the nuclear fuel cycle except the actual fission in the reactor. Fossil fuels are involved in the mining, transport, milling, conversion and processing of ore, the enrichment of the fuel, the handling of the mill tailings, the fuel can preparation, the construction of the plant and its decommissioning and demolition, the handling of the spent waste, its processing and vitrification, and the digging of the hole in rock for its deposition, etc., and in the manufacturing and transportation of the equipment required at all these stages. In all these stages radiological and non-radiological pollution occurs – in the case of the tailings pond it runs into hundreds of years. Around 60% of the power plant cost goes towards the equipment, most of which has to be imported. The spent fuel storage is a critical issue, yet no solution has been found. Also, the life of reactors is very short and the dismantling of such reactors is costly & risky, etc., etc.


    Michael Sweet/ Negelj


    In the '70s and '80s I worked on and published several articles relating to radiation [global solar and net] and evaporation/evapotranspiration – referred to in my book of 1993 [based on articles published in international and national journals]. Coal-fired power plants reduce ground-level temperature by reducing incoming solar radiation. Solar panels, in contrast, create an urban-heat-island condition and thus increase the surrounding temperature. In both cases these changes depend upon several local conditions, including general circulation patterns. Ground condition plays a major role in the radiation at the surface that defines the surface temperature [hill stations, inland stations & coastal stations] – the albedo factor varies. It also varies with soil conditions – black soil, red soil. Sea breeze/land breeze relates to the temperature gradient [soil quickly warms up and quickly releases the heat, while water slowly warms up and slowly releases heat], and the general circulation pattern existing in that area plays the major role in advection.


    Response to Moderator


    See some of my publications for information only:


    Reddy, S.J. (1993): Agroclimatic/Agrometeorological Techniques: As Applicable to Dry-land Agriculture in Developing Countries, www.scribd.com/Google Books, 205p; book review in Agricultural and Forest Meteorology, 67 (1994): 325-327.
    Reddy, S.J. (2002): Dry-land Agriculture in India: An Agroclimatological and Agrometeorological Perspective, BS Publications, Hyderabad, 429p.
    Reddy, S.J. (2008): Climate Change: Myths & Realities, www.scribd.com/Google Books, 176p.
    Reddy, S.J. (2016): Climate Change and its Impacts: Ground Realities. BS Publications, Hyderabad, 276p.
    Reddy, S.J. (2019a): Agroclimatic/Agrometeorological Techniques: As Applicable to Dry-land Agriculture in Developing Countries [2nd Edition]. Brillion Publishing, New Delhi, 372p.


    2.1.2 Water vapour


    Earth’s temperature is primarily driven by the energy cycle, and then by the hydrological cycle. Global solar radiation reaching the Earth’s surface and net radiation/radiation balance at the Earth’s surface are generally estimated as a function of hours of bright sunshine. Total cloud cover [average of low, medium & high clouds] has a direct relation to hours of bright sunshine (Reddy, 1974). The cube root of precipitation shows a direct relation to total solar radiation and net radiation (Reddy, 1987). In all of these, latitude plays a major role (Reddy & Rao, 1973; Reddy, 1987). Evaporation presents a relation with net and global solar radiation (Reddy & Rao, 1973), wherein relative humidity plays an important role: evaporation reduces with increasing relative humidity. If ‘X’ is the global solar radiation received under 100% relative humidity, then with dryness [as relative humidity comes down] it may reach a maximum of 2X; net radiation is likewise reduced with increasing relative humidity. That means water vapour in the atmosphere is the principal component that controls the incoming and outgoing radiation and thus the temperature at the Earth’s surface. The Thar Desert presents high temperatures with negligible water vapour in the atmosphere, as maximum energy reaches the Earth’s surface. However, these impacts differ under inland (dryness), hill (declining temperature with height – lapse rate) & coastal (wetness) locations and with the Sun’s movement (latitude and declination of the Sun — seasons) (Reddy & Rao, 1973). The IPCC integrated these under the “climate system”, with the advective condition given by the general circulation pattern [GCP].
    Cold-island effect [I coined this term; see Reddy (2008)] is part of human-induced climate change associated with changes in land use and land cover. Since the 1960s, to meet the food needs of an ever-increasing population, intensive agriculture started – conversion of dryland to wetland, creation of water resources, etc. In this process, increased levels of evaporation and evapotranspiration contributed to a rise in water vapour up to around the 850 mb level in the lower atmosphere. Unusual changes in water vapour beyond the 850 mb level [for example at the 700 mb level] became a cause for thunderstorm activity (Reddy & Rao, 1978). Wet bulb temperature (°C) at the surface of the Earth provides the square root of the total water vapour (g/cm2) in the vertical column of the atmosphere; and wet bulb temperature (°C) is a function of dry bulb temperature (°C), relative humidity (%) and the square root of station-level pressure (height) relative to a standard value in mb [p/1060] (Reddy, 1976). Thus, unlike CO2, water vapour has a short life while steadily increasing with land use and land cover changes. However, the met network in these zones has been sparse, and thus the cold-island effect is not properly accounted for in global average temperature computations. Though satellite data takes this into account, that data series was withdrawn from the internet and a new adjusted data series introduced that matches the adjusted ground data series. Annual state-wise temperature data series in India where intensive agricultural practices exist, namely the Punjab, Haryana & UP belt, show a decreasing trend in annual average temperature – cooling. Some of these are explained below:


    Reddy (1983) presented a daily soil water balance model that computes daily evapotranspiration, known as the ICSWAB model. The daily soil water balance equation is generally written as:


    ΔMn = Rn – AEn – ROn – Dn


    In the above equation, the terms from left to right represent the soil moisture change, rainfall or irrigation, actual evapotranspiration, surface runoff and deep drainage on a given day (n). The term actual evapotranspiration [AEn] is estimated as a function of f(E), f(S) & f(C), which represent functions of the evaporative demand on day n, the soil factor and the crop factor, respectively. As these three factors are mutually interactive, a multiplicative type of function is used.


    AEn = f(En) x f(S) x f(C)


    However, the crop factor does not act independently of the soil factor. Thus it is given as:


    AEn = f(En) x f(S,C) and f(S,C) = K x bn


    Where f(S,C) is the effective soil factor, K is the soil water holding capacity [which varies with soil type] in mm, and bn is the crop growth stage factor [which varies with crop & cropping pattern], varying between 0.02 and 0.24 — fallow to full crop cover conditions (with leaf area index crossing 2.75). Evaporative demand is expressed by the terms evaporation and/or evapotranspiration. Evaporation (E) and evapotranspiration (PE) are related as:


    PE = 0.85 x E [with mesh cover] or = 0.75 x E [without mesh cover].


    However, the relationship holds good only under non-advective conditions [i.e., under wind speeds less than 2.5 m/sec]. Under advective conditions E is influenced more by advection than PE is. In the case of PE, by definition, no soil evaporation takes place and thus PE relates to transpiration only – where the crop grows on conserved soil moisture with negligible soil evaporation. With the presence of soil evaporation, the potential evapotranspiration reaches as high as 1.2 x PE, or E with mesh cover. McKenney & Rosenberg (1993) studied the sensitivity of some potential evapotranspiration estimation methods to climate change. The widely used methods, Thornthwaite and Penman, gave 750 mm and 1500 mm respectively, wherein the Thornthwaite method basically uses temperature and Penman uses several meteorological parameters (Reddy, 1995).
    In this process the temperature is controlled by solar energy, but it is modified by moisture under different soil types [water holding capacity]. This modified temperature causes actual evapotranspiration and thus water vapour. This is a vicious circle. For example, the average annual temperature at Anantapur (red soils) is 27.6°C; at Kadapa (deep black soils) it is 29.25°C; and at Kurnool (medium soils) it is 28.05°C. That means local temperature is controlled by soils.
    Reddy (1976a&b) presented a method of estimating precipitable water in the entire column of the atmosphere at a given location using wet bulb temperature. The equations are given as follows:


    Tw = T x [0.45 + 0.006 x h x (p/1060)^1/2]


    W = c' x Tw^2


    Where T & Tw are the dry and wet bulb temperatures in °C; h is the relative humidity in %; p is the annual normal station-level pressure in mb [1060 is the normal pressure in mb, a constant]; W is the precipitable water vapour in g/cm2; and c' is the regression coefficient.
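    For illustration, a direct transcription of the two relations above (the regression coefficient c' is site-specific, so the value used below is only a placeholder):

    import math

    def wet_bulb_temp(T_c, rh_pct, p_mb):
        """Tw = T x [0.45 + 0.006 x h x (p/1060)^1/2], temperatures in degC."""
        return T_c * (0.45 + 0.006 * rh_pct * math.sqrt(p_mb / 1060.0))

    def precipitable_water(Tw_c, c_prime):
        """W = c' x Tw^2, precipitable water in g/cm^2 (c' is a fitted coefficient)."""
        return c_prime * Tw_c ** 2

    Tw = wet_bulb_temp(T_c=30.0, rh_pct=50.0, p_mb=1010.0)
    print(round(Tw, 1))                            # ~22.3 degC
    print(precipitable_water(Tw, c_prime=0.01))    # ~5 g/cm^2 with the placeholder c'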
    WMO (1966) presented methods to separate trend from natural rhythmic variations in rainfall and to assess the cycles, if any. (The late) Dr. B. Parthasarathy from IITM/Pune used these techniques in Indian rainfall analysis. Reddy (2008) presented such an analysis with the global average annual temperature anomaly data series of 1880 to 2010 and found a natural 60-year cycle varying between -0.3 and +0.3°C and a trend of 0.6°C per century [Reddy, 2008]. This is based on the adjusted data series, but in the USA raw data [Reddy, 2016] there is no trend. The hottest daily temperature data series of Sydney in Australia shows no trend [Reddy, 2019a]. Thus, the trend needs correction if the starting and ending parts are in the same phase of the cycle – below and below, or above and above, the average parts. During the 1880 to 2010 period two full 60-year cycles are covered and thus there is no need to correct the trend, as the trend passes through the mean points of the two cycles.


    3.2.4 What is the global warming part of the trend?


    According to IPCC AR5, this trend of 0.6°C per century is not global warming alone but consists of several factors:
    a. More than half is the [human-induced] greenhouse effect part:
    i. It consists of a global warming component & an aerosols component, etc. If we assume the global warming component alone is 50% of the total trend, then it will be 0.3°C per century under a linear trend;
    ii. The global warming starting year is 1951, & thus the global warming from 1951 to 2100 [150 years] is 0.45°C under a linear trend;
    iii. But in nature this can’t be linear, as the energy is constant; thus the CSF can’t be a constant but should be decreasing non-linearly;
    iv. Under the non-linear condition, by 2100 the global warming will be far less than 0.45°C and thus the trend will be far less than half;
    b. Less than half the trend is the ecological changes [land use and land cover change] part – mostly local & regional factors:
    i. This consists of the urban-heat-island effect and the rural-cold-island effect;
    1. Urban-heat-island effect – with the concentrated met network, overestimates warming;
    2. Rural-cold-island effect – with the sparse met network, underestimates cooling;


    2.2.1 Uncertainty on “Climate Sensitivity Factor”


    The term “Climate Crisis” is primarily linked to global warming. To know whether there is really global warming, and if so how much, the climate sensitivity factor plays the main role. Climate sensitivity is a measure [°C/(W/m2)] of how much warming we expect (both near-term and long-term) for a given increase in CO2. According to Mark D. Zelinka (2020), “Equilibrium climate sensitivity, the global surface temperature response to the CO2 doubling, has been persistently uncertain”.
    Recent modelling data suggest the climate is considerably more sensitive to carbon emissions than previously believed, and experts said the projections had the potential to be “incredibly alarming”, though they stressed further research would be needed to validate the new numbers. Johan Rockström, the director of the Potsdam Institute for Climate Impact Research, said: “Climate sensitivity is the holy grail of climate science. It is the prime indicator of climate risk.”
    The role of clouds is one of the most uncertain areas in climate science because they are hard to measure and, depending on altitude, droplet temperature and other factors, can play either a warming or a cooling role. For decades, this has been the focus of fierce academic disputes. Catherine Senior, head of understanding climate change at the Met Office Hadley Centre, said more studies and more data are needed to fully understand the role of clouds and aerosols. With these vital disputes unresolved, how can anyone say there is global warming without solving this issue? That is why I said the “global warming hysteria factor is climate crisis”.


     

  • CO2 lags temperature

    Philippe Chantreau at 03:44 AM on 3 February, 2020

    I believe that I had read Map's initial comment correctly and that he is indeed referring to what some call the Mid-Pleistocene transition. This is an area of active research and, although his posts are poorly formulated, I do not see that Map deserves scorn for enquiring about it.

    The recent regime of 100,000-year glacial/interglacial cycles was preceded in the paleo record by a longer period where the 41,000-year cycle dominated. Wikipedia has a good explanation and plenty of links to the scientific literature, including some recent ones.

    Citing the wiki page:

    "There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milanković orbital forcing periods is so close that orbital forcing is generally accepted (emphasis mine). The combined effects of the changing distance to the Sun, the precession of the Earth's axis, and the changing tilt of the Earth's axis redistribute the sunlight received by the Earth. Of particular importance are changes in the tilt of the Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m² to 550 W/m²). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms like CO2 may explain this mismatch."

    Further down: "During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). The reasons for dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in the Earth's climate system. Recent work suggests that the 100K year cycle dominates due to increased southern-pole sea-ice increasing total solar reflectivity."

    Worth noting:

    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL071307

    There are plenty of other interesting references in the Wiki.

    Map,

    To have a productive exchange here, you cannot be vague with statements like "I have read somewhere" and such. These are used all the time by people who argue in bad faith and trigger the corresponding response from other posters, for which you cannot blame those who respond. Scientific references are a must. Specific inquiries and precise questions are helpful.

    It should be noted that the regime in the paleo record has now been completely replaced and that we are in entirely new conditions because of the massive injection of CO2 into the atmosphere over the past 100 years.

  • CO2 is just a trace gas

    KR at 11:30 AM on 20 May, 2019

    Rovinpiper - Some time ago I ran through the numbers on this. CO2 takes about 10^-6 seconds to emit excess energy as infrared radiation. At sea level each air molecule collides with another about 10^9 times per second. This means that an excited GHG molecule will undergo about 1,000 collisions before it is statistically likely to emit, meaning that yes, the atmosphere as a whole is warmed by GHG absorption, and the emissions are due to the statistical emissions of the air mass as a whole.
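    A back-of-envelope check of those figures (the round numbers are taken straight from the comment above):

    # How many collisions, on average, before an excited CO2 molecule gets to radiate?
    emission_lifetime_s = 1e-6       # ~10^-6 s to emit the excess energy as IR
    collision_rate_per_s = 1e9       # ~10^9 collisions per second at sea level

    print(collision_rate_per_s * emission_lifetime_s)   # ~1000 collisions: the absorbed
    # energy is almost always handed off to the surrounding air before it can be re-emitted.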

    And now a (very) brief explanation of how this works:

    The rest of the equation is tied to the lapse rate, the rate at which the air cools with rising altitude, and the statistical likelihood of an IR emission escaping to space. The absorption and emission of energy repeats throughout the atmosphere until GHGs decrease with pressure to the point that 50% or more of the IR escapes to space, which is where convection stops. This is the tropopause, the separation between the convective troposphere and the static stratosphere.

    The emission rate is determined by temperature, and the lapse rate (about 6.5°C/km, varying widely with humidity, temperature, etc.) means that the emitting gases at the tropopause are cooler than the Earth's surface. Very importantly, changing the GHG concentrations changes that altitude. And that change in altitude means that there is an imbalance between incoming and outgoing energies until the entire atmospheric column up to the tropopause has warmed or cooled to match the incoming energy.

    Global Warming Linked To Increase In Tropopause Height
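    A toy gray-body illustration of that point, under assumed numbers (288 K surface, an 11 km emission level raised by 200 m); this is just the arithmetic of "higher means colder means less emission", not a radiative-transfer calculation:

    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    LAPSE = 6.5         # K per km, the typical tropospheric lapse rate quoted above

    def emission_temp(surface_T_K, emission_alt_km):
        return surface_T_K - LAPSE * emission_alt_km

    T_before = emission_temp(288.0, 11.0)   # 216.5 K
    T_after = emission_temp(288.0, 11.2)    # 215.2 K once the emission level rises 200 m
    print(SIGMA * (T_before**4 - T_after**4))   # ~3 W/m^2 less outgoing radiation,
    # an imbalance that persists until the column below has warmed.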

    So the surface is hotter than the tropopause (linearly with altitude), the tropopause emissions have to match incoming solar energy to stabilize, and our emissions have raised the tropopause. We're therefore warming.

  • On climate change, zero-sum thinking doesn't work

    scaddenp at 19:31 PM on 7 April, 2018

    "we should understand how CO2, which is heavier than Air, can get in the Upper Atmosphere". 

    Indeed, and you can get this understanding from pretty elementary texts on gas diffusion.  The short answer is that given the strength of gravitational force on earth and the temperature of the atmosphere, kinetic energy from molecular collisions dominates over gravity. The video I showed with bromine, much heavier than CO2, shows it diffusing upward, not settling at the bottom, and it eventually becomes well mixed. No turbulence required but convection and turbulence certainly speed the process.  

    The bromine experiment by itself shows your intuition is wrong about CO2. Notice also how evenly CO2 is distributed vertically through the atmosphere. If your hypothesis that molecular weight overrides diffusion were correct and planes were needed to move stuff up, then you would predict that heavy hydrocarbons would be at the base of Titan's atmosphere (no planes to mix anything) and lighter methane would be at the top. Instead all the gases are well-mixed vertically, as on Earth.

    If you want to learn more about the distribution and sources of CO2, then have a look at the resources (images, video) available from the NASA CO2-measuring satellite (OCO-2) on the NASA page, as CO2 varies with season etc.

  • CO2 limits won't cool the planet

    Aaron Davis at 03:40 AM on 31 December, 2017

    MA Rodger @26

    Albedo is something I have data for [Kukla_and_Robinson_1980 table 2]. I've calculated the average albedo for:

    • 60N to 82N: Feb-Mar (FM) 64.0%, Aug-Sep (AS) 35.1%
    • 60S to 70S: FM 21.3%, AS 64.2%

    While the cold months (in the dark, mostly) have high albedo, about the same from North to South, the warm months actually have higher albedo in the North (35 vs 21%). Since this is also when CO2 is low due to plant uptake in the North, compared with the South where CO2 stays at baseline and changes only 1 ppm, a higher Northern albedo would move the temperature in the same lower direction as the lower Northern CO2. So, if anything, the albedo and CO2 combination should overstate the seasonal effect relative to CO2 alone.

    I am not familiar with how you arrived at the figure of 9e18 Joules corresponding to a 15 ppm change in CO2. It is quite low compared to the total annual global heat accumulation figure I've been using (12.5e21 Joules per year). If that's all we would get out of all the cost and effort to reduce CO2 over the next 5-10 years by 15 ppm, it appears you would agree it's hardly worth the effort.

    If the variations in the amount of energy reaching the ground (insolation) vary over periods of more than a year, the analysis should still be good, as I am comparing differences over the same year, but that is something I'd like to verify as well. The idea here is not to get a 15 ppm sensitivity. It's hard enough to justify the utility of the low-confidence ECS analysis. My point here is to support the principal hypothesis of this section, "CO2 limits won't cool the planet", which I gather from your remarks you agree with, to at least some extent.

    I appreciate your interest and share your objective to seek the truth.

  • Climate's changed before

    NN1953VAN-CA at 16:35 PM on 17 November, 2015

    A few centuries or even millennia of observation may not give the real trend in global climate, as there have been abrupt changes after long steady trends. My point is that CO2 does not affect climate change to the degree some are trying to prove.

    Looking at atmospheric CO2 content over a time span of several centuries:
    CO2 was stable for several thousand years at 250 – 280 ppm.
    Vostok and Greenland are the places with the least temperature change expected. (I am aware it’s not global.) When I mention ‘least temperature change’: at any time and for any given period, day/night temperature variation on the equator or seasonal variation at mid latitudes varies much more than in polar regions; again, averages do not change much in either.
    Vostok temperature varied 3 degrees with CO2 content stable, and in the last 60 years, with CO2 having gone to 400 ppm (relative to 280 ppm, a 40% increase), the temperature is stable. Please see the table.

    Vostok station

    Vostok surface temperature graph

    This is the recent average surface temperature at Vostok – there is a lack of data for 1962, 1964, 1996, and 2004; other than that, there is no temperature increase.
    The next graphs are for Greenland:

    Greenland temperature graphs

     


    Please check the top graph – right side, from 1950 to 20xx.
    Atmospheric CO2 gradually rose to 400 ppm, and over the same time span Greenland temperature varies but doesn’t follow the CO2 trend.

    Global temperature reconstruction done by the Loehle, 2007 and Loehle and McCulloch, 2008 studies (proxy locations and temperature graphs):

    The average temperature change, from +0.5 to -0.8, does not follow the CO2 content, which is stable until 1950.
    Yes, there are other forcings, but somehow all the other forcings are overcoming CO2 at its present level, as there should be a much greater GT change for a 40% CO2 increase. If so, which other forcing works now that was not present when CO2 was stable and temperature rose? Because if ‘other forcings’ are preventing a temperature rise, it means the CO2 GHG effect is not as great as GHG theory is trying to show.
    Recent GT should reflect a higher upward trend if CO2 is 25% of the GHG effect and has risen by 40%.
    Besides, GW is not that bad; there are parts of the world which would greatly benefit from some temperature rise, e.g. a great part of the northern hemisphere that is presently uninhabitable is going to become much more human-friendly. Of course there are just as many parts of the planet which will become uninhabitable or end up under the sea. It will get back on its own, as it has at times before.

  • Heat from the Earth’s interior does not control climate

    Tom Curtis at 23:46 PM on 29 May, 2015

    CBDunkerson @32:

    1) Essentially the greenhouse effect comes from condensing and non-condensing gases. The non-condensing gases (CO2, CH4, N2O, O3, etc) have concentrations that do not primarily depend on GMST, although they are influenced by it. Of them only CO2 and CH4 had appreciable effects in the 1980s, ie, the time period covered by Schmidt et al, and in that period CH4 represented only 1% of the total greenhouse effect. As the vast majority of that 1% came from anthropogenic emissions from 1750-1980, I decided it was easier to just ignore it, and fold it and the other minor non-condensing greenhouse gases in with the condensing gases.

    The vast majority of the "greenhouse feedback" represents the greenhouse effect from water vapour and clouds. These are the condensing greenhouse gases, where temperature very tightly controls concentration. As a result, their presence in the atmosphere is always a feedback on other energy sources plus the CO2 greenhouse effect. In particular, absent the solar energy input, the greenhouse feedback would be zero; and absent the CO2 greenhouse effect, it would be substantially less (Lacis et al, 2010). I put it as a separate item because its behaviour is so different at temperatures consistent with solar input.

    It is, of course, not intended to indicate feedbacks only from the greenhouse effect, or all feedbacks from the greenhouse effect.

    2) I thought I had already clarified this point in the paragraph starting, "The most important thing...". In all cases the temperature response to a given factor is:

    T = (j*/σ)^0.25, where j* is the energy input in W/m^2 and σ is the Stefan-Boltzmann constant

    For j*= 0.09 W/m^2, T = 35.49 oK

    For j* = 240 W/m^2, T = 255.06 oK

    But for j* = 240.09, T = 255.09 oK

    The crux is that the relationship between energy input and temperature is far from linear, so any energy input with a big impact at low temperatures has a negligible impact at high temperatures.
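    Those three temperatures follow directly from that formula; a quick check (black body, emissivity = 1):

    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

    def temp_from_flux(j_wm2):
        """T = (j*/sigma)^0.25 for a black body."""
        return (j_wm2 / SIGMA) ** 0.25

    for j in (0.09, 240.0, 240.09):
        print(j, round(temp_from_flux(j), 2))
    # 0.09   ->  ~35.5 K
    # 240.0  -> ~255.1 K
    # 240.09 -> ~255.1 K (an extra 0.09 W/m^2 on top of 240 W/m^2 shifts T by only ~0.03 K)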

    3)  No.  Values are effectively for 2010 in that I used the IPCC AR5 value for total greenhouse effect.  As such, these values include an anthropogenic forcing larger than any of the non-solar energy impacts.

    The two major sources of inaccuracy in determining the GMST from a given energy input are the assumption that the Earth is a black body (emissivity = 1), and the assumption that the Earth has a constant temperature at all locations. (I mentioned these briefly among the missing factors.) Of these, the fact that the Earth's emissivity is slightly less than 1 will increase the GMST by about 2 to 8 oK depending on by how much the emissivity is overstated. Probably closer to 2 than 8, but absent a global radiation budget model I cannot determine the exact value.

    In contrast, unequal temperatures (which certainly exist) will reduce the estimated GMST.  In an extreme case where the Earth has a permanently sunlit hemisphere, and a permanently dark hemisphere with constant temperature in each hemisphere, but no energy shared between hemispheres so that the dark hemisphere is much cooler than the sunlit hemisphere, the GMST would fall to 181.31 oK, a drop of 108.33 oK.  That is an interesting case in that it approximates to conditions on the Moon.  It also shows how large an effect unequal temperatures can have.  The Earth certainly has unequal surface temperatures, and they are even unequal at the tropopause from which most IR radiation escapes.  Therefore this reduction in expected temperature certainly is a factor.  However, again without a complex and accurate model, it is impossible to determine how much of a factor.  Indeed, in this case you would need a full climate model, as temperature variation also varies with time of day and season.

    Given these two significant, and opposite, factors which cannot easily be determined, it is surprising the above calculations are as accurate as they are. Certainly the minor inaccuracy is nothing to be concerned about against that backdrop. In fact, the errors in the calculated values relative to the observed values are less than the range of differences between estimates of the observed values.

  • CO2 effect is saturated

    Tom Curtis at 13:00 PM on 23 December, 2014

    Digby @383, if you look at the right hand panel of the second figure in KR's post you will see three "typical" temperature profiles.  The temperature profile in my post @376 corresponds to the green profile in KR's post, ie, middle latitude.  As you can see, the profile varies based on latitude, but also on season and local conditions (including local humidity).  The profile over desert, for example, would be different to that over ocean.

    KR's reference to a temperature range, therefore, does not represent a range of temperatures in the tropopause. It represents a range of temperatures of the tropopause at different latitudes (as shown in the right hand panel of his second figure). While it would be possible with a sufficiently distant instrument to get a whole-hemisphere IR spectrum for the Earth, the actual instruments used are in low Earth orbit and so can only profile a limited area at a time, so the brightness temperature of the base of the CO2 trough will vary depending on where and when the profile was taken.

  • CO2 lags temperature

    KR at 13:29 PM on 19 December, 2014

    Looks like one of the heavily massaged graphs that Smokey/dbs/dbstealey, WUWT moderator and sock puppet extraordinaire, keeps posting. Congratulate your guy; he's (re)discovered that atmospheric CO2 varies with the growth and die-off of global seasonal vegetation. Which we already knew.

    The short term and the use of 'isolate' are the give-aways; removing the long term rise in CO2 and ignoring mass-balance, isotope, oxygen level, and all the other evidence demonstrating an anthropogenic cause for rising CO2.

    It's simply amazing how much deliberate effort goes into these denial graphs. At best (!) confirmation bias, searching for a combination that confirms what they believe despite the evidence, or at worst, flatly attempting to lie with a misrepresentation of the data. Really no way to tell which, unless the person presenting this junk is a known lobbyist...

  • How we know the greenhouse effect isn't saturated

    mgardner at 00:30 AM on 20 February, 2014

    HK et al

    I did some more searching and came up with this: Troposphere

     

    If I understand correctly, we take something like 5km to be the altitude above which attenuation of IR by CO2 is negligible--"where energy radiates freely to space".

    Now, without telling me about how it all varies with season and latitude, and all of the complications involved in doing the calculations, can I find out roughly what that altitude would be if we double the CO2 concentration? Is it 100 m higher, or 1 km higher, or 4 km, or what? I can eyeball that the temperature changes by 5-10°C over 1 km.

    I would also like to know, again, order of magnitude, what the thickness of an 'opaque' 'layer' just below that original 5 km altitude would be. And what the attenuation would be for IR radiation in some CO2 band through that 'layer'.

    This is what I'd like to see represented on a graphic. I'm beginning to realize that there may not be one anywhere with that kind of resolution, and perhaps I will set up a chalkboard, draw some diagrams, take a picture, and post it here to have you guys check it out.

     

     

  • CO2 effect is saturated

    StealthAircraftSoftwareModeler at 11:38 AM on 18 June, 2013

    Tom Curtis @ 201

    Thanks for the long reply. I have dug into what you have said and have some additional questions:

    You stated: “WUWT makes absurd false statements such at that at least 200 ppmv is required in the atmosphere for plant life to grow (CO2 concentrations dropped to 182.2 ppmv at the Last Glacial Maximum, giving the lie to that common claim).”

    I have done a Google search on CO2 and plant growth and have found many sources (some unrelated to climate, and on plant research) that indicate plant growth is stunted at 200 ppmv CO2. At 150 ppmv a lot of plants are not doing very well. Based on this, WUWT doesn’t seem absurd to me; why do you think so?

    Sources:

    http://www.es.ucsc.edu/~pkoch/EART_229/10-0120%20Appl.%20C%20in%20plants/Ehleringer%20et%2097%20Oeco%20112-285.pdf

    CO2 Science

    As for the rest of your post, I went to the very nice calculator (http://forecast.uchicago.edu/Projects/modtran2.html) pointed out to me by scaddenp @ 46 from http://www.skepticalscience.com/imbers-et-al-2013-AGW-detection.html. It models the IR flux of various gases and looks pretty cool. I ran the calculator to produce the table below, which shows the upward IR flux in W/m^2 for various levels of CO2. With no CO2 and using the 1976 standard US atmosphere (I left the tool’s default settings in place and only changed to the 1976 US atmosphere and the amount of CO2), the upward IR flux is 286.24 W/m^2. The first 100 ppmv reduces the upward IR flux to 264.17 W/m^2. If CO2 doubles from the current 400 ppmv to the hypothesized 800 ppmv, then the upward IR flux drops to 255.75 W/m^2. From a “zero CO2” atmosphere, the total reduction in IR flux at 800 ppmv CO2 is 30.49 W/m^2. Of this total amount, 72.4% is captured by the first 100 ppmv of CO2. If CO2 increases from 400 ppmv to 800 ppmv, based on my math it appears that 91% of the heat-trapping effect of CO2 is already “baked in” at 400 ppmv of CO2. This seems to line up very closely with what WUWT is stating, unless I made a mistake.

     

    CO2 ppmv   Upward IR flux (W/m^2)   Cumulative share of reduction   Incremental share
      0              286.24                        —                          —
    100              264.17                      72.4%                      72.4%
    200              261.41                      81.4%                       9.1%
    300              259.74                      86.9%                       5.5%
    400              258.58                      90.7%                       3.8%
    500              257.67                      93.7%                       3.0%
    600              256.91                      96.2%                       2.5%
    700              256.29                      98.2%                       2.0%
    800              255.75                     100.0%                       1.8%
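    For transparency, the percentage columns can be re-derived from the flux values alone (the fluxes themselves are the MODTRAN outputs described above):

    flux = {0: 286.24, 100: 264.17, 200: 261.41, 300: 259.74, 400: 258.58,
            500: 257.67, 600: 256.91, 700: 256.29, 800: 255.75}   # W/m^2, from the table

    total_reduction = flux[0] - flux[800]        # 30.49 W/m^2 for 0 -> 800 ppmv
    prev = flux[0]
    for ppm in sorted(flux)[1:]:
        cumulative = (flux[0] - flux[ppm]) / total_reduction
        step = (prev - flux[ppm]) / total_reduction
        print(f"{ppm:4d} ppmv  cumulative {cumulative:6.1%}  this step {step:5.1%}")
        prev = flux[ppm]
    # The first 100 ppmv accounts for ~72% of the 0-800 ppmv reduction;
    # going from 400 to 800 ppmv adds the remaining ~9%.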

    Rob Honeycutt @ 203 and @ 204

    Like Tom Curtis, you also assert that WUWT “is absurd”, yet using the very sources provided by other posters on this web site, I seem to have confirmed what WUWT is saying about CO2, namely, that the majority of the effects of CO2 are already captured due to the logarithmic absorption of increasing CO2. Based on the MODTRAN calculator, doubling CO2 to 800 ppmv is only going to trap an additional 2.83 W/m^2, which is 0.21% of the solar energy hitting the top of the atmosphere. I fail to see how this can possibly be so bad – either things are already that bad now, or the additional 2.83 W/m^2 isn’t going to matter at all. And natural variability has to be greater than 0.2%, especially since total solar output varies by 0.1% over a solar cycle (http://en.wikipedia.org/wiki/Solar_constant). Given that global temperatures really haven’t increased much over the last 17 years, I suspect that things may not be “that bad.” If I am missing something, please help me out. Thanks! Stealth

     

  • The anthropogenic global warming rate: Is it steady for the last 100 years?

    Dumb Scientist at 12:02 PM on 12 April, 2013

    How can the anthropogenic warming be approximately linear in time when we know that atmospheric CO2 has been measured to increase almost exponentially? Implicit in that statement is the expectation that the warming (i.e. the rate of surface temperature increase) should follow the rate of increase of greenhouse gas concentration in the atmosphere. This rather common expectation is incorrect. An accessible reference is that from Britannica.com: "Radiative forcing caused by carbon dioxide varies in an approximately logarithmic fashion with the concentration of that gas in the atmosphere. ...

    No, I'm not ignoring the last century of physics. It's exasperating to be lectured about the ancient fact that CO2's radiative forcing in Earth's current atmosphere depends approximately on the logarithm of its concentration. My article linked to a graph of CO2's radiative forcing, which accounts for this logarithmic dependence. Notice that CO2's radiative forcing increases faster after 1950, because increasing CO2 faster also increases its logarithm faster. That's what makes the forcing "slightly more curvy than linear".
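    That point can be sketched in a few lines, under assumed numbers (the 5.35 ln(C/C0) W/m^2 expression is the usual simplification, and the growth rates below are made up for illustration): a constant exponential growth rate gives a forcing that is exactly linear in time, while a growth rate that itself speeds up, as CO2 growth did after mid-century, makes the forcing curve steepen.

    import numpy as np

    C0 = 280.0
    years = np.arange(0, 151)
    k = np.where(years < 70, 0.002, 0.006)   # assumed growth rate per year: slower, then faster
    C = C0 * np.exp(np.cumsum(k))            # concentration rising exponentially at a varying rate

    F = 5.35 * np.log(C / C0)                # simplified logarithmic forcing, W/m^2
    # With a constant k this would be exactly linear in time (5.35 * k * t);
    # because k itself rises at year 70, the forcing curve steepens there instead.
    print(F[140] - F[70] > F[70] - F[0])     # True: the later interval gains more forcing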

    As shown in Figure 2a, the green line is quite nonlinear and shows the acceleration of greenhouse gas forcing after 1950 referred to by DS, but the aerosol cooling also increased after 1950. The net anthropogenic forcing is the small difference of the two large terms.

    That same radiative forcings graph also accounts for aerosols. Notice that the black line includes aerosols and also increases faster after 1950.

    Because the aerosol cooling part is uncertain, we actually do not know what the net anthropogenic forcing looks like. There is no obvious argument that one can appeal to on what the expected warming should be. There is nothing obviously wrong if the anthropogenic warming is found to be almost linear in time.

    Perhaps the IPCC's estimates are wrong, but subtracting the standard NOAA AMO index to determine anthropogenic warming is equivalent to assuming that anthropogenic warming is steady before and after 1950. If it isn't, you'll never know because subtracting the AMO will just subtract signal after 1950.

    That was probably the source of the circular argument criticism from DS: "Tung and Zhou implicitly assumed that the anthropogenic warming rate is constant before and after 1950, and (surprise!) that's what they found. This led them to circularly blame about half of global warming on regional warming." It is important to note that the trend we were talking about is the trend of the Adjusted data, and not the presumed anthropogenic predictor.

    No, that wasn't the source of my criticism. Dana1981, KR and bouke correctly pointed out that your circular argument results from adding the AMO(t) regressor, which is correlated with surface temperatures after 1950 if you used the standard NOAA AMO index.

    Concerning Dana181's statement that most models use radiative forcing that show acceleration after 1970s, I just want to make the following observation. The models that adopted the kind of net radiative forcing that varies in time in approximately the same manner as the observed global mean temperature---with cooling in the 1970s and accelerated warming in the 1980s to 2000--were trying to simulate the observed warming using forced response alone (under ensemble average). So the net heating used has to have that time behavior otherwise the model simulation would not have been considered successful. Here we are questioning the assumption that the observed warming, including the accelerated warming in the later part of the 20th century, is mainly due to forced response to radiative heating.

    That's only true for inverse models of aerosol forcings. It's important to note that they're compared to independent forward calculations which are based on estimates of emissions and models of aerosol physics and chemistry.

    Dumb Scientists claim of circular argument on our part consists of two parts. The first part deals with the linear regressor used, which is discussed here, and the second part deals with the AMO index used, which will be discussed in my second post. ... the choice of the AMO Index (whether the detrending should be point by point or by the global mean)...

    If you used the AMO index with global SST removed that KR mentioned, then your result is really interesting. I assumed that you used NOAA's linearly detrended N. Atlantic sea surface temperatures, in which case the anthropogenic warming would be hiding in your AMO(t) function. Again, that's because warming the globe also warms the N. Atlantic, and anthropogenic warming was faster after 1950.

    We have tried many other predictors with similar results. Using a nonlinear anthropogenic regressor would still yield an almost linear trend for the past 100 years, if the Residual is added back. And so this procedure is not circular.

    It's only circular if you used NOAA's standard detrended AMO index. If so, you added a regressor that's correlated with surface temperatures since 1950. Again, in that case the warming would be hiding in your AMO(t) function.

  • It's not us

    Carbon500 at 07:11 AM on 4 October, 2012

    Response to Sphaerica from 'Inuit Perspectives on Recent Climate Change' - transferred to this thread at moderator's request.
    Sphaerica:
    Just to reinforce my point – more observations from the real world.
    Ice conditions in the Baltic Sea vary a lot from one year to another. The maximum ice covered area varies between
    52,000 and 422,000 square kilometres (12-100 per cent of the total Baltic Sea area)
    Baltic Sea Portal: itameriportaali.fi/en/tietoa/jaa/jaatalvi/en_GB/jaatalvi
    Clearly the Baltic Sea has remained free of the malign influence of CO2.
    Here’s more:
    JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 106, NO. C3, P. 4493, 2001 doi:10.1029/1999JC000173
    'Influence of atmospheric circulation on the maximum ice extent in the Baltic Sea'
    Anders Omstedt
    Swedish Meteorological and Hydrological Institute, Norrköping, Sweden
    Deliang Chen
    Department of Earth Sciences, University of Göteborg, Göteborg, Sweden
    This work analyzes long-term changes in the annual maximum ice extent in the Baltic Sea and Skagerrak between 1720 and 1997. It focuses on the sensitivity of the ice extent to changes in air temperature and on the relationships between the ice extent and large-scale atmospheric circulation. A significant regime shift in 1877 explains the decreasing trend in the ice extent. The regime shift indicates a change from a relatively cold climate regime to a relatively warm one, which is likely a result of changed atmospheric circulation. In addition, the analysis shows that a colder climate is associated with higher variability in the ice extent and with higher sensitivity of the ice extent to changes in winter air temperature. Moreover, the ice extent is fairly well correlated with the North Atlantic Oscillation (NAO) index during winter, which supports the results of earlier studies. However, the moving correlation analysis shows that the relationship between the NAO index and the ice extent is not stationary over time. A statistical model was established that links the ice extent and a set of circulation indices. It not only confirms the importance of the zonal flow but also implies the impact of meridional wind and vorticity. The usefulness of the statistical model is demonstrated by comparing its performance with that of a numerical model and with independent observations. The statistical model achieves a skill close to that of the numerical model. We conclude that this model can be a useful tool in estimating the mean conditions of the ice extent from monthly pressures, allowing for the use of the general circulation model output for predictions of mean ice extent.
    Finally, the globe is warming? Is it?
    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.gif
  • Inuit Perspectives on Recent Climate Change

    Carbon500 at 07:00 AM on 4 October, 2012

    Sphaerica:
    Just to reinforce my point – more observations from the real world.
    Ice conditions in the Baltic Sea vary a lot from one year to another. The maximum ice covered area varies between 52,000 and 422,000 square kilometres (12-100 per cent of the total Baltic Sea area)
    Baltic Sea Portal: itameriportaali.fi/en/tietoa/jaa/jaatalvi/en_GB/jaatalvi
    It would seem that the Baltic Sea has remained free of the malign influence of CO2.
    Here’s more:
    JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 106, NO. C3, P. 4493, 2001 doi:10.1029/1999JC000173
    'Influence of atmospheric circulation on the maximum ice extent in the Baltic Sea'
    Anders Omstedt
    Swedish Meteorological and Hydrological Institute, Norrköping, Sweden
    Deliang Chen
    Department of Earth Sciences, University of Göteborg, Göteborg, Sweden
    This work analyzes long-term changes in the annual maximum ice extent in the Baltic Sea and Skagerrak between 1720 and 1997. It focuses on the sensitivity of the ice extent to changes in air temperature and on the relationships between the ice extent and large-scale atmospheric circulation. A significant regime shift in 1877 explains the decreasing trend in the ice extent. The regime shift indicates a change from a relatively cold climate regime to a relatively warm one, which is likely a result of changed atmospheric circulation. In addition, the analysis shows that a colder climate is associated with higher variability in the ice extent and with higher sensitivity of the ice extent to changes in winter air temperature. Moreover, the ice extent is fairly well correlated with the North Atlantic Oscillation (NAO) index during winter, which supports the results of earlier studies. However, the moving correlation analysis shows that the relationship between the NAO index and the ice extent is not stationary over time. A statistical model was established that links the ice extent and a set of circulation indices. It not only confirms the importance of the zonal flow but also implies the impact of meridional wind and vorticity. The usefulness of the statistical model is demonstrated by comparing its performance with that of a numerical model and with independent observations. The statistical model achieves a skill close to that of the numerical model. We conclude that this model can be a useful tool in estimating the mean conditions of the ice extent from monthly pressures, allowing for the use of the general circulation model output for predictions of mean ice extent.

    Finally, you state that the globe is warming. Is it?
    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.gif
  • Newcomers, Start Here

    bmac3130 at 00:55 AM on 22 June, 2012

    Please do not misunderstand. I am not the one questioning the science. I am a believer in what seem to me to be the obvious truths of global climate change and mankind's significant role in that process. The question I posted is from a thread I started on the environment in an Amazon discussion board, and is one which one of the skeptics I have been talking with posted. Since I am a recent convert to believing there is a climate change problem occurring (I used to be a right-wing, conservative, fundamentalist Christian pastor), I am not as well versed in some of the scientific data available as I would like to be.

    I ended up providing the questioner with a link to the National Research Council's "Climate Change Science" report.

    I then added this:

    "Here is a quote from the report which I provided a link for, which addresses some of the points which you brought up in your post.

    "That the burning of fossil fuels is a major cause of the CO2 increase is evidenced by the concomitant decreases in the relative abundance of both the stable and radioactive carbon isotopes and the decrease in atmospheric oxygen. Continuous high-precision measurements have been made of its atmospheric concentrations only since 1958, and by the year 2000 the concentrations had increased 17% from 315 parts per million by volume (ppmv) to 370 ppmv. While the year-to-year increase varies, the average annual increase of 1.5 ppmv/year over the past two decades is slightly greater than during the 1960s and 1970s."

    The key for me is the fact that while an increase of 1 to 2 ppm/yr by itself would not raise temperatures greatly, the cumulative increase over time would begin to raise Earth's temperatures and cause greater issues. An increase of 55 ppm from 1958 to the time of the writing of this report is more significant and concerning. If these numbers continue increasing at the current rate of 1 to 2 ppm/yr, by 2058 the level of CO2 in the atmosphere will be between 415 ppm (low end) and 515 ppm (high end). (I did those numbers in my head as I was typing this, so they might be off a little, but I am pretty sure they are right.) So the increase in the one hundred year period between 1958 and 2058 would be 31.7% (low end) and 63.49% (high end). Now we are starting to get into areas of greater concern than simply stating that the increase is 1 to 2 ppm/yr."
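    (A quick back-of-the-envelope check of those in-my-head numbers, assuming a 315 ppmv starting point in 1958 and a constant 1-2 ppmv/yr rise as in the quote above; a sketch only:)

    # Rough check of the projection: constant growth from 315 ppmv in 1958 to 2058.
    start_ppm, years = 315.0, 100
    for rate in (1.0, 2.0):                  # ppmv per year, low and high end
        end_ppm = start_ppm + rate * years
        pct = 100.0 * (end_ppm - start_ppm) / start_ppm
        print(f"{rate:.0f} ppmv/yr: {end_ppm:.0f} ppmv by 2058, a {pct:.1f}% increase")
    # -> 415 ppmv (31.7%) and 515 ppmv (63.5%), matching the figures above.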

    Thank you all for this site and for the information which you have provided me so far. :)

    Brent McCay
  • ConCERN Trolling on Cosmic Rays, Clouds, and Climate Change

    Cole at 05:54 AM on 8 May, 2012

    Muon,
    The Agee results were spurious (not genuine, authentic, or true), given the inability to properly observe GCC changes and, as a result, the inability to show proper correlations in either direction.

    From the paper:

    'Research on the GCR-cloud correlations must continue, particularly in view of the two physical mechanisms mentioned above (as well as the uncertainty in the reliability of the ISCCP lower troposphere cloudiness to show the proposed correlations).'

    The paper itself isn't useless, but your waving it around as proof is. The rest of your papers are just as ineffectual for the same reason. Now, let's get back to examining what fits.


    Bond et al. (2001), in studying ice-rafted debris in the North Atlantic Ocean, determined, in Svensmark’s words, that “over the past 12,000 years, there were many icy intervals like the Little Ice Age” that “alternated with warm phases, of which the most recent were the Medieval Warm Period (roughly AD 900-1300) and the Modern Warm Period (since 1900).” And as Bond’s 10-member team clearly indicates, “over the last 12,000 years virtually every centennial time-scale increase in drift ice documented in our North Atlantic records was tied to a solar minimum”:
    http://www.sciencemag.org/content/294/5549/2130.short

    Parker (1999) noted that the number of sunspots had also doubled over the prior 100 years, and that one consequence of the latter phenomenon would have been “a much more vigorous sun” that was slightly brighter. Parker pointed out that spacecraft measurements suggest that the brightness (B) of the sun varies by an amount ΔB/B = 0.15%, in step with the 11-year magnetic cycle. He then pointed out that during times of much reduced activity of this sort (such as the Maunder Minimum of 1645-1715) and much increased activity (such as the twelfth century Mediaeval Maximum), brightness variations on the order of ΔB/B = 0.5% typically occur. He further noted that the mean temperature (T) of the northern portion of the earth varied by 1 to 2°C in association with these variations in solar activity, stating finally that “we cannot help noting that ΔT/T = ΔB/B.”

    http://www.co2science.org/articles/V2/N14/C3.php .
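    (Taking the quoted relation at face value, a quick order-of-magnitude check; the representative temperature of about 288 K below is my assumption, not Parker's figure:)

    # Check of dT/T = dB/B using an assumed representative temperature T ~ 288 K.
    T = 288.0                               # K
    for db_over_b in (0.0015, 0.005):       # 0.15% (11-yr cycle) and 0.5% (secular), as quoted
        dT = db_over_b * T
        print(f"dB/B = {db_over_b:.2%}  ->  dT ~ {dT:.1f} K")
    # -> roughly 0.4 K and 1.4 K; the latter sits in the 1-2 degree C range cited above.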

    Digging deeper into the subject, Feynman and Ruzmaikin (1999) investigated twentieth century changes in the intensity of cosmic rays incident upon the earth’s magnetopause and their transmission through the magnetosphere to the upper troposphere. This work revealed “the intensity of cosmic rays incident on the magnetopause has decreased markedly during this century” and “the pattern of cosmic ray precipitation through the magnetosphere to the upper troposphere has also changed.”

    With respect to the first and more basic of these changes, they noted that “at 300 MeV the difference between the proton flux incident on the magnetosphere at the beginning of the century and that incident now is estimated to be a factor of 5 decrease between solar minima at the beginning of the century and recent solar minima,” and that “at 1 GeV the change is a factor of 2.5.” With respect to the second phenomenon, they noted that the part of the troposphere open to cosmic rays of all energies increased by a little over 25 percent and shifted equatorward by about 6.5° of latitude. And with the great decrease in the intensity of cosmic rays incident on earth’s magnetosphere over the twentieth century, one would have expected to see a progressive decrease in the presence of low-level clouds and, therefore, an increase in global air temperature, as has indeed been observed:
    http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/20689/1/98-1743.pdf

    A number of other pertinent papers also appeared at this time. Black et al. (1999) conducted a high-resolution study of sediments in the southern Caribbean that were deposited over the past 825 years, finding substantial variability of both a decadal and centennial nature, which suggested that such climate regime shifts are a natural aspect of Atlantic variability; and relating these features to other records of climate variability, they concluded that “these shifts may play a role in triggering changes in the frequency and persistence of drought over North America.” Another of their findings was a strong correspondence between the changes in North Atlantic climate and similar changes in 14C production; and they concluded that this finding “suggests that small changes in solar output may influence Atlantic variability on centennial time scales:
    http://www.sciencemag.org/content/286/5445/1709.short

    Van Geel et al. (1999) reviewed what was known at the time about the relationship between variations in the abundances of the cosmogenic isotopes 14C and 10Be and millennial-scale climate oscillations during the Holocene and portions of the last great ice age. This exercise indicated “there is mounting evidence suggesting that the variation in solar activity is a cause for millennial scale climate change,” which is known to operate independently of the glacial-interglacial cycles that are forced by variations in the earth’s orbit about the sun. They also reviewed the evidence for various mechanisms by which the postulated solar-climate connection might be implemented, finally concluding that “the climate system is far more sensitive to small variations in solar activity than generally believed” and that “it could mean that the global temperature fluctuations during the last decades are partly, or completely explained by small changes in solar radiation:
    http://www.gg.rhul.ac.uk/elias/teaching/VanGeel.pdf
    Noting that recent research findings in both palaeoecology and solar science “indicate a greater role for solar forcing in Holocene climate change than has previously been recognized,” Solanki et al. (2000) developed a model of the long-term evolution of the sun’s large-scale magnetic field and compared its predictions against two proxy measures of this parameter. The model proved successful in reproducing the observed century-long doubling of the strength of the part of the sun’s magnetic field that reaches out from the sun’s surface into interplanetary space. It also indicated there is a direct connection between the length of the 11-year sunspot cycle and secular variations in solar activity that occur on timescales of centuries, such as the Maunder Minimum of the latter part of the seventeenth century, when sunspots were few in number and earth was in the midst of the Little Ice Age.
    http://www.nature.com/nature/journal/v408/n6811/abs/408445a0.html

    In discussing their findings, the solar scientists say their modeled reconstruction of the solar magnetic field “provides the major parameter needed to reconstruct the secular variation of the cosmic ray flux impinging on the terrestrial atmosphere,” because, as they continue, a stronger solar magnetic field “more efficiently shields the earth from cosmic rays,” and “cosmic rays affect the total cloud cover of the earth and thus drive the terrestrial climate.”

    Next, using cosmic ray data recorded by ground-based neutron monitors, global precipitation data from the Climate Predictions Center Merged Analysis of Precipitation project, and estimates of monthly global moisture from the National Centers for Environmental Prediction reanalysis project, Kniveton and Todd (2001) set out to evaluate whether there is empirical evidence to support the hypothesis that solar variability (represented by changes in cosmic ray flux) is linked to climate change (manifested by changes in precipitation and precipitation efficiency) over the period 1979-1999. In doing so, they determined there is “evidence of a statistically strong relationship between cosmic ray flux, precipitation and precipitation efficiency over ocean surfaces at mid to high latitudes,” since variations in both precipitation and precipitation efficiency for mid to high latitudes showed a close relationship in both phase and magnitude with variations in cosmic ray flux, varying 7-9 percent during the solar cycle of the 1980s, while other potential forcing factors were ruled out due to poorer statistical relationships.

    http://www2.geog.ucl.ac.uk/~mtodd/papers/grl_2001/grl_total.pdf

    Carslaw et al. point out that cosmic ray intensity declined by about 15 percent during the last century “owing to an increase in the solar open magnetic flux by more than a factor of 2.” They further report that “this 100-year change in intensity is about the same magnitude as the observed change over the last solar cycle.” In addition, we note that the cosmic ray intensity was already much lower at the start of the twentieth century than it was just after the start of the nineteenth century, when the Esper et al. (2002) record indicates the planet began its nearly two-century-long recovery from the chilly depths of the Little Ice Age.

    http://www.sciencemag.org/content/298/5599/1732

    These observations strongly suggest that solar-mediated variations in the intensity of cosmic rays bombarding the earth are indeed responsible for the temperature variations of the past three centuries. They provide a much better fit to the temperature data than do atmospheric CO2 data; and as Carslaw et al. remark, “if the cosmic ray-cloud effect is real, then these long-term changes of cosmic ray intensity could substantially influence climate.” It is this possibility, they say, that makes it “all the more important to understand the cause of the cloudiness variations,” which is basically the message of their essay; i.e., that we must work hard to deepen our understanding of the cosmic ray-cloud connection, as it may well hold the key to resolving what they call this “fiercely debated geophysical phenomenon.”

    Now I've shown you correlations without having to rely on GCC data (that we cannot observe). You guys seem to be grasping at straws trying to dismiss this. Finally, to quote the press release from CERN: 'Climate models will need to be substantially revised.'
  • Solar Cycle Length proves its the sun

    tompinlb at 06:04 AM on 29 March, 2012

    The authors of the paper discuss in some detail the method they use to calculate the length of the solar cycle. If you wish to criticize their approach, it would be more helpful if you raised specific objections to their method. You state that their method yields different results than Friis-Christensen 1991, but you do not address the substance of the authors’ method.

    You make a comment that we must “assume the theory is not simply astrology.” The authors are respected scientists, not astrologers. Is this an ad hominem criticism?

    There are various theories that posit causal relationships between solar activity and changes in global temperature. One of the more interesting posits a relationship between solar magnetic flux, the incidence of cosmic rays in the earth’s atmosphere, and the formation of low level clouds especially in the tropics. The research exploring these relationships is a work in process, but there are clear theories of causality that are being investigated. This is hardly astrology.

    You say that “we expect temperatures to significantly track solar cycle length if the theory is true.” And then you proceed to say what you think this would mean, and how it would show up, peaking in 1930, etc. But you do no analysis here, and you ignore the actual analysis and results of the authors’ work. In this paper they demonstrate that for a number of stations in Norway and the North Atlantic, temperatures do in fact significantly track solar cycle length. Do you take issue with their methods and their statistical analysis?

    When skeptical scientists point out that global temperatures in the last ten to fifteen years have not risen as fast as they had been projected to rise by the IPCC models, while CO2 continues to increase steadily, advocates of catastrophic anthropogenic global warming respond that other factors affect global temperatures, that the heat is hidden in the ocean, etc. So when you say that temperatures should significantly track solar cycle length, you do not allow for other influences. So this appears to be a straw man argument.

    The authors postulate a mechanism that solar influences affect the absorption of heat by the tropical oceans, and that this heat in turn affects surface temperatures as it is distributed through the oceanic circulation. They also acknowledge that the lag between solar heating influences and surface temperature varies depending on how many years it takes for these influences to reach various parts of the earth.

    These authors make specific forecasts that can be tested, and we will see whether temperatures do in fact decline in the areas they identify. I give them credit for that. It would be more helpful to the progress of science if those who argue for the singular importance of CO2 to global temperatures would make falsifiable hypotheses that can be rigorously tested. Many of us thought that the IPCC’s prediction of a tropospheric hot spot would be a testable hypothesis for the presence of water vapor amplification/positive feedback, which is assumed in their models, but instead it seems most defenders of the consensus science now argue that it is really there but there are problems with the thousands of radiosondes that can’t find it.

    Your criticisms and straw man argument do not address the substance of the arguments that are being made by the authors. And when you conclude that their work has no more substance than projecting temperatures based on hemlines, one must conclude that you prefer to argue with humour and insults rather than seriously address the paper.
  • Weather vs Climate

    Eric (skeptic) at 22:35 PM on 11 October, 2011

    KR, a permanent shift in the jet stream (southward in the NH) due to ice sheets growing in the NH is a forcing, just like the change in (global average) albedo caused by the ice sheets. It must therefore be added to the diagram shown in the Pielke thread, which claims the only forcings are from ice albedo and GHGs. If the southward migration of the jet (NH) only caused more snow and a larger ice sheet, then the chart in the diagram would be correct. If the change in jet configuration were due merely to temperature changes, specifically GAT, that chart would be more or less correct. But neither of those conditions is true. Also, that diagram is missing the forcing from dust.

    In short, the estimates of the first order forcing effects of ice sheets must include the permanent changes in weather, not just ice and snow albedo changes. There are many papers on the topic of LGM climate discussing LGM weather. Among the cooling influences are increased meridional flow (more heat loss in higher latitudes) and an increased water cycle (although that one is less certain). Here's one such paper: http://www.cnrm.meteo.fr/recyf/IMG/pdf/Laine_et_al09.pdf, noting that the results are very model-dependent.

    The two consequences of permanent weather change forcing are that the diagram in the Pielke thread is incomplete and that the CO2 change sensitivity cannot be calculated statically. Thus it requires the same models used for modern climate sensitivity studies, except with much more uncertainty, since modern studies can be verified against present-day weather and paleo-weather models cannot. Another consequence is that the paleo sensitivity evidence is not an independent line of evidence, but another use of the same models used for modern sensitivity derivations.

    Muoncounter, seasonal cycles show that weather is a feedback to solar-driven temperature changes. But for the glacial period, summer is gone, replaced by a short rainy season. The cause of the change is the ice sheets, specifically their orographic and local temperature effects. Note that this is not global temperature or average temperature or anything that can be used to estimate sensitivity, but a specific local temperature contrast between the cold ice sheet and the warm land areas, with a southerly jet and a stronger storm track.

    I should have used clearer language. The change in paleo weather varies by season and there are seasonal cycles. But the change is secular. ENSO is mostly a cycle that redistributes heat between parts of the ocean and atmosphere and then reverses and returns the heat to the source. But ENSO can also exhibit secular changes that amount to a forcing. As a simple hypothetical example, if we entered a regime of El Nino all the time, then we would have less outgoing longwave radiation all of the time and thus global warming, and in that (hypothetical) case ENSO would be a forcing. The connection is that ENSO modulates precipitation (El Nino = more precip) and precip is negatively correlated with OLR. Regional differences are quite pronounced, so it can be a bit difficult to see the global effect.

    La Nina is a major cause for the Texas drought. If we entered a semi-permanent La Nina, then Texas would have a semi-permanent drought all other things being equal.
  • Pielke Sr. and SkS Disagreements and Open Questions

    Tom Curtis at 10:57 AM on 11 October, 2011

    Eric (skeptic) @51, the supposition that weather should be considered as a forcing rather than a feedback presumes that the weather does not respond sensitively to changes in global mean surface temperature. This is not true, for example, of the southern Sub Tropical Front (mentioned by you in 47). The Sub Tropical Front is at least partly driven by the presence of strong westerlies in the latitudes south of Australia and Africa. The location of those westerlies is known to be sensitive to global mean temperature, moving south as climate warms, and north as climate cools - behaviour mimicked by the STF. The sensitivity of the STF's location to temperature is also shown by the fact that its location varies with season.

    The same is also true of the location of the jet streams, whose position is primarily determined by the location of the boundaries between Hadley cells, Ferrel cells and Polar cells (see diagram below). Although their location is influenced by geographical barriers such as the Himalayas, the global circulation is their primary determinant, and that circulation is determined by the GMST. As temperatures rise, the Hadley cell becomes larger, pushing the jet streams further from the tropics. As the climate cools, the Hadley cell shrinks, bringing the jet streams closer to the tropics.



    In both cases, because the "weather" effect is temperature sensitive, it is properly treated as a feedback.

    Treating these weather effects as feedbacks rather than as semi-permanent geographical features allows the issue to be stated this way:

    "Measuring temperature and forcing differences between glacial and interglacial will not determine current climate sensitivity because climate sensitivity varies greatly with different temperatures and geographical arrangement."

    So stated, the claim is obviously open to empirical testing. As it happens, climate sensitivity has been tested across the range of Phanerozoic distributions of continents and climate conditions, and has resulted in very similar values:





    While it is true that changes in ice albedo will have far less effect now than during the LGM, Hansen determined the climate sensitivity for greenhouse gases independently of that for albedo, so that determination is not affected by that difference. Further, current conditions are more susceptible to the water vapour feedback than at the LGM.
  • The Planetary Greenhouse Engine Revisited

    Patrick 027 at 07:32 AM on 30 June, 2011

    The EUV dissociates/ionizes all molecular gas within the thermosphere, and so, whatever the characteristics of the molecular gases below the thermosphere, they will be heated from above; and if the layer from the surface to the thermosphere is perfectly transparent, this heat can be radiated to space only by the surface, where it can be carried only by conduction. In this case all the heat actually radiated by the mesosphere must be subtracted from the albedo, and Te increases.

    You were correct up to the part about the mesosphere. (Except I'm not sure you can say that *all* molecular gas is ionized - I think some neutral (as well as multiatomic) molecules do remain, but I'll have to double check.)
    Heating of the mesosphere just adds to the downward flux of heat that must be carried to the surface before emission to space. If you don't change the total amount of solar heating but only rearrange it, whether among different layers of air or between them and the surface, Te stays constant. Of course you can change the effective TOA albedo by changing atmospheric solar absorption of some layers so that more or less can be reflected by other layers or the surface, but that changes the total solar heating, which of course will change Te. Generally reducing absorption of solar radiation in the upper atmosphere should tend to reduce Te because of a greater potential for reflection by clouds or the surface or the scattering of the air itself, although this might not be a strong effect if the layers below mostly absorb the same radiation.
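    (For reference, the Te in question follows from global energy balance alone; a minimal sketch with illustrative values for the solar constant and albedo:)

    # Effective radiating temperature: Te = [S0 * (1 - albedo) / (4 * sigma)]^(1/4).
    # Te depends only on the total absorbed solar flux, not on which layer absorbs it.
    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0           # solar constant, W m^-2 (illustrative)
    for albedo in (0.29, 0.31):
        Te = (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
        print(f"albedo = {albedo:.2f}  ->  Te ~ {Te:.0f} K")
    # Rearranging where the heating occurs leaves Te unchanged; changing the albedo
    # (i.e. the total absorbed flux) is what changes Te.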

    Yes, if the photon is absorbed/emitted with an EM forcing. Not at all if the photon is created/destroyed with a thermal forcing.

    The vast vast majority of radiation emitted by the Earth - surface or atmosphere - is from thermal processes, and in accord with the Planck function for the temperature and optical properties of the material. Aurora are different (I think 'fluorescence' applies), but that involves a very very very small amount of energy.

    And you can have stimulated emission without isothermal conditions, too. You don't need isothermal conditions for either.

    The Planck function is a function of temperature, and not of temperature gradient or time derivative. When temperature varies in space and time, the material in each location in space and time still has a temperature.

    If you still think otherwise, please explain how CO2 molecules at any temperature found in the Earth's atmosphere, which are continually colliding and thus at any given moment with some fraction in various excited states, could be prevented from emitting radiation, when it otherwise happens spontaneously (as has been observed).

    This is the last time I will address this issue; look it up in physics books if you need more.

    We can continue to argue until infinity if we don’t know the order of magnitude of all the contributions, or their weighted contributions,

    But you don't know; I at least know some things.

    because you continue to consider very marginal (pretty negligible) the role of the fluid dynamics.

    Not at all. Did you not notice my discussion (maybe some of this on the RealClimate thread) on the mechanics behind the tendency for warmer air to rise and cooler air to sink? On the conversion between APE and kinetic energy, where, when APE is in the form of heat, APE to kinetic energy is thermally direct and acts like a heat engine, while the reverse is a heat pump (converts work to heat while pumping heat from lower to higher temperature)? In case you needed this link, the mechanism by which kinetic energy is produced from APE is the flow from higher to lower pressure - a pressure gradient is a force per unit distance per unit area, and work is done when there is flow across isobars - energy = force * distance; and if I'm not mistaken the kinetic energy (per unit mass) gain or loss is equal to the loss or gain in pressure, divided by density - thus in accord with Bernoulli's law/principle ( http://en.wikipedia.org/wiki/Bernoulli's_principle ) - or, for vertical motion, the combination of change in pressure and gravitational potential energy is converted to or from a change in kinetic energy. Regarding the deviation of the upper atmosphere from radiative equilibrium, I did discuss the work done on the upper atmosphere (running heat pumps there, driving thermally indirect circulation) by fluid mechanical waves which propagate vertically from the troposphere, which has the heat engines that supply their kinetic energy. Now I could go into the Coriolis effect, geostrophic balance, gradient wind balance, cyclostrophic balance, baroclinic instability, Hadley cells, etc., but it's not necessary - not because they're unimportant, but because they aren't needed in a simple first-order explanation of radiative-convective equilibrium.
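    (As a concrete check of the "kinetic energy per unit mass equals pressure change over density" statement, a sketch with made-up numbers using the incompressible Bernoulli relation for horizontal flow:)

    import math

    # 0.5*v^2 + p/rho = const along a horizontal streamline, so dKE per unit mass = -dp/rho.
    rho = 1.2          # air density, kg m^-3 (illustrative near-surface value)
    p_drop = 100.0     # pressure drop along the flow, Pa (about 1 hPa, made-up)
    v0 = 10.0          # initial wind speed, m s^-1 (made-up)

    ke_gain = p_drop / rho                      # J per kg of air
    v1 = math.sqrt(v0 ** 2 + 2.0 * ke_gain)     # speed after crossing the isobars
    print(f"KE gain = {ke_gain:.1f} J/kg, speed {v0:.1f} -> {v1:.1f} m/s")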

    (In the full four-dimensional climate system, horizontal radiant heating variations, in combination with the vertical variation, and along with latent heating, produce APE which drives Hadley cells, monsoonal circulations, and midlatitude storm track activity - the latter provides kinetic energy to the zonal mean Ferrel cell, which itself is thermally indirect. This extratropical storm track activity in particular involves colder air sliding under warmer air and thus can have a stabilizing effect (this large-horizontal-scale overturning can produce lapse rates that are stable to localized overturning). The troposphere and surface in high winter latitudes in particular are heated by horizontal transport from lower latitudes, and, especially when/where there is land or sea ice, or where ocean circulation is otherwise not supplying this heat, the heat goes through the troposphere and down to the surface, and I think both have net radiant cooling. Because of this pattern, the lower troposphere can be especially stable at high winter latitudes.)

    (Off on a tangent - when you create a warm air mass surrounded by cooler air, the warmer air rises and flows out over the cooler air, which sinks and slides underneath the warm air. But if this is taking place on relatively large horizontal scales, the Coriolis effect will eventually stop this circulation before it is completed; geostrophic adjustment occurs, with the wind blowing parallel to isobars; the temperature contrast is stabilized and some APE remains. However, in the right conditions, there remains baroclinic instability, wherein wavy displacements (PV anomalies) of the air alter the wind field and induce other displacements such that the displacements at different vertical levels mutually amplify each other; this transfers some APE from the preexisting temperature contrast and puts it into the temperature variations of the waves, which convert some of that APE into kinetic energy. Aside from some other things, this is how extratropical storm tracks work.)

    (Having achieved geostrophic balance, warmING air tends to rise and coolING air tends to sink, even if the warmING air is still cooler than the coolING air.)

    All in all I think I've paid more attention to dynamics than you have. But it remains that to a good first approximation the upper atmosphere (at least up through the stratopause, maybe most of the mesosphere) is, in a global annual average, in radiative equilibrium. The effects of circulation cause some interesting deviations from that, which I think may be much more important when considering the seasonal and latitudinal variations in temperature, rather than a global average or globally representative temperature profile. (Note in figure 10 of http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3829.1
    "The HAMMONIA Chemistry Climate Model: Sensitivity of the Mesopause Region to the 11-Year Solar Cycle and CO2 Doubling"
    Schmidt et al.
    how the distribution of solar heating and LW radiant cooling match up in the lower mesosphere. Note also that heating rates are proportional to fluxes absorbed or emitted and inversely proportional to mass, so the fluxes involved in large heating or cooling rates higher up are much smaller than they might appear in a graph which uses geometric height rather than pressure as a vertical coordinate. Which isn't to say that the small fluxes are not important at those heights, but they are small compared to what is going in and coming out of some layers far below.)

    In the preceding post I saw that, e.g., the heating power yielded by a column radiator within a room is about 75% by convection, 25% by radiation, as certified by the engineering physics laboratories. What occurs, really, within the atmosphere; what will be the ratio convective/radiative? We cannot say anything (at least we could guess something) without a well-advised synthesis of the fluid dynamics and the radiative transfer which, actually, represents the one way to obtain weighted answers and so to have realistic reasons for neglecting or not some aspect.

    But that work has been done. If you don't believe my account, read it yourself.
  • How climate change deniers led me to set up Skeptical Science website

    JMurphy at 03:55 AM on 3 May, 2011

    Interesting how some not only feel the need to defend those who they feel are on the same side as themselves, but also accuse others of a version of ad hominem (while turning a blind eye to any such examples from said fellow-travellers). Even more interesting is how such people also believe that websites like Judith Curry's are paragons of scientific discussion carried out in a rational, level-headed and objective version of reality - as opposed to Skeptical Science, I presume.

    Anyway, from the latest thread there on Curry's site, these fine examples:


    Ya know, you really haven’t got a clue about the true objectives of the UN via its front organization the IPCC and the white-coated wiseguys, but you do now.

    What if CO2 is beneficial? Yes, but not just more likely, it has been proven with experiments.

    Temperature and mean sea level indices are objective and non- manipulable? News to me.

    There have been far too many inaccurate climate predictions which seem to somehow be swept under the carpet and expediently forgotten when the forecast is ‘busted’ and there are seemingly no penalties for such alarmist behaviour which continues unabated.

    For me, Judith, this thread shows that neither you or the lawyer understand the idea of the “null hypothesis”.

    Climate change has always happened and it always will. It’s unstoppable.

    (I’m sure “Travesty Trenberth”, “Juggler Jones” and “Hysteria Hansen” have thought the same recently.)

    Of course ivory tower scientists and impractical wafflers have no clue about realities of the world.


    And the reaction to anyone who varies slightly (and I mean slightly) from the tribal viewpoint:


    Oh Bart – such obvious posturing from the clown prince of verbosity.

    Yes, I had a problem replying to silly Bart's silly description of Aus


    But were any of them "warmists", "lukewarmists", whatever-term-you-wish-to-make-up-to-define-those-you-oppose-ists? Who knows.
    And what fine examples of courtesy and respect...
  • Climate Emergency: Time to Slam on the Brakes

    ranyl at 21:25 PM on 8 March, 2011

    Not sure that the paleoclimatic records support a CS of 3°C... they seem to point to it being somewhat higher!

    "Together, it is clear that during the Cretaceous and Paleogene climate sensitivity commonly exceeded 3°C per CO2 doubling."
    "Fossil soils constrain ancient climate sensitivity"
    Dana L. Royer, PNAS, January 12, 2010, vol. 107, no. 2, pp. 517–518.

    Birgit Schneider and Ralph Schneider, "Global warmth with little extra CO2", Nature Geoscience, vol. 3, January 2010, p. 6:
    "The conclusion of a high Earth system sensitivity is particularly worrying if there is a potential for the hitherto slow components of the climate system to respond more quickly in the face of rapidly increasing CO2 emissions."

    In this paper the long-term CS (over 1000 yr, say), with all natural variation taken into account, is ~8-12°C:

    "If changes in carbon dioxide and associated feedbacks were the primary agents forcing climate over these timescales, and estimates of global temperatures are correct, then our results imply a very high Earth-system climate sensitivity for the middle (3.3 Myr) to early (4.2 Myr) Pliocene ranging between 7.1 ± 1.0 °C and 8.7 ± 1.3 °C per CO2 doubling, and 9.6 ± 1.4 °C per CO2 doubling, respectively."
    "High Earth-system climate sensitivity determined from Pliocene carbon dioxide concentrations", Mark Pagani et al., Nature Geoscience, vol. 3, January 2010.

    "The surface in our PE control simulation is on average 297K warm and ice-free, despite a moderate atmospheric CO2 concentration of 560 ppm. Compared to a pre-industrial reference simulation (PR), low latitudes are 5 to 8K warmer, while high latitudes are up to 40K warmer."
    "Warm Paleocene/Eocene climate as simulated in ECHAM5/MPI-OM", M. Heinemann et al., Clim. Past, 5, 785–802, 2009.

    On average the PE was 9.4°C hotter, with large polar amplification and CO2 basically doubled, so that makes the CS 9.4°C.

    It must be remembered that the 1000-yr CS from paleo data is higher than the 100-yr CS used in models; the 100-yr CS is about 60% of the 1000-yr value.

    So for the long-term 9.4°C, that gives 5.64°C, and so on.
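    (The arithmetic behind those figures, laid out explicitly; this is a sketch of the back-of-the-envelope calculation in this comment, not of any of the papers' methods:)

    import math

    # CS per doubling = dT / log2(CO2_final / CO2_initial); then take ~60% of the
    # long-term (Earth-system) value as the rough "100-yr" figure used above.
    dT = 9.4                    # PE warmth relative to pre-industrial, deg C (as quoted)
    co2_ratio = 560.0 / 280.0   # "basically double"
    cs_long = dT / math.log2(co2_ratio)
    cs_fast = 0.6 * cs_long
    print(f"long-term CS ~ {cs_long:.1f} C per doubling, ~100-yr CS ~ {cs_fast:.2f} C")
    # -> 9.4 C and about 5.6 C, as stated above.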

    "If the temperature reconstructions are correct, then feedbacks and/or forcings other than atmospheric CO2 caused a major portion of the PETM warming."
    "Carbon dioxide forcing alone insufficient to explain
    Palaeocene–Eocene Thermal Maximum warming", Richard E. Zeebe et al., Nature Geoscience, published online 13 July 2009, DOI: 10.1038/NGEO578.

    In this one the CS is about 9-12°C again, but as the quote shows, the authors take the CO2 CS as a given and thus say another factor is necessary, rather than the CS being higher.

    There are plenty more of these, including the recent article in Science again suggesting CS is underestimated, and the article last year by Gavin Schmidt saying it was 30-40% down.

    It makes no sense to me that the CS should always come out to a standard figure, as it is dependent on multiple non-linear feedbacks, the size of which varies depending on the initial conditions. How can an Earth with no ice albedo feedback have the same CS to GHG as one with loads of ice? Or one with no permafrost the same CS as one with lots melting?

    At present we have a polar ocean melting, and lots of permafrost to melt. It is also clear from this paper that temperature changes can be dramatic and tipping-point in nature ("Another look at climate sensitivity", I. Zaliapin and M. Ghil, Nonlinear Processes in Geophysics), so trying to get a statistical CS from paleodata isn't going to be easy, as the CS is dependent on initial conditions. Many studies suggest it is higher than thought or modelled, and it is more likely that the PDF of CS is a range of humps and bumps, ranging long-term from 6-12°C. Which hump the world is currently at is hard to say, but with polar sea ice to go and permafrost etc., it is likely to be on the high side of things.


    "The conclusion from this analysis—resting on data for CO2 levels, paleotemperatures, and radiative transfer knowledge—is that Earth’s sensitivity to CO2 radiative forcing may be much greater than that obtained from climate models (12–14)."
    "Lessons from Earth’s Past", Jeffrey Kiehl, Science, 14 January 2011, vol. 331, p. 158.

    So yeah, things are urgent, very very urgent, for a CS as high as suggested from the past means 350 ppm gives a 95% probability spread of temperature rise by 2100 of 1.8-3°C.

    Now how are we going to get 40 ppm of CO2 out of the atmosphere, especially considering that some models suggest that the CO2 that has gone into the sinks will be released, and there is of course the climate-warming CO2 feedback, which gives out about 10-20 ppm per 1°C.

    How high can CO2 go before the accumulated heating makes exceeding 2°C a certainty? A 400 ppm peak? Even that seems risky business, all things considered.

    Of course the present CO2 is 390 ppm, so to peak at 400 ppm would mean adding only another 5 years or less of carbon into the atmosphere. Divide that up fairly around the world and it means the West has 1 year of emissions to play with for a carbon budget, so not much, and considering all the adaptation that will be needed, not much at all.
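    (Laid out as numbers, using this comment's own figures of 390 ppm today, a 400 ppm peak, and roughly 2 ppm per year; purely illustrative:)

    # Headroom to a 400 ppm peak at an assumed ~2 ppm/yr growth rate.
    current_ppm, peak_ppm = 390.0, 400.0
    growth_per_year = 2.0                    # ppm/yr, roughly the recent rate
    years_left = (peak_ppm - current_ppm) / growth_per_year
    print(f"~{years_left:.0f} years of current global emissions before hitting 400 ppm")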

    Does anyone think that this is in any way doable?

    Even bold plans like ZeroCarbonBritain by 2030 give a peak CO2 of 434 ppm, and that isn't counting all the extra carbon needed to replace everything (cars with electric cars, power infrastructure, all white goods with efficient ones, changing the face of farming, etc.).

    And also remember that the biosphere is basically our only hope of drawing CO2 down (carbon capture is a ruse to keep using fossil fuels; there isn't enough energy in the world to run special CO2 exchange machines, and where do we put the carbon dioxide, for it seems to be leaking from all the sites it has been buried at so far!!).

    Is a fossil fuel free society even possible anymore?

    For that would take changing the whole economic system as the current system has to grow and the only way for that to occur is by using fossil fuels.

    The only way to get CO2 out of the atmosphere quickly is to stop putting it in quickly, and the only way to do that is to stop using power.
  • Seawater Equilibria

    Chris G at 02:37 AM on 11 January, 2011

    Chemist1,
    Look, Dr Franzen is describing the arc of a falling object as a parabola, and you are interjecting air resistance as a function of altitude, temperature, and humidity. Besides, it's more accurately an arc-segment of an ellipse.

    Yeah, we know there is more to it than what Dr Franzen describes above, but as he stated, it's a starting point. Unless you really feel that a student should be taught drag coefficients for variously shaped bodies, and how drag varies with atmospheric composition and approximately with the square of the velocity, at the same time that they are taught about projectile paths using the mechanics of gravitational constants (which aren't constant, btw), your points are extraneous. You can get a working estimate of what a thrown rock will do using a parabolic curve and 9.8 m/s^2, and you can get a working estimate of CO2 exchange between air and sea with the above.
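    (In the spirit of the analogy, the "working estimate" really is that simple; a toy thrown-rock example with made-up launch values:)

    import math

    # Flat ground, no drag, constant g = 9.8 m/s^2.
    g = 9.8                        # m s^-2
    v0, angle_deg = 15.0, 45.0     # made-up launch speed and angle
    angle = math.radians(angle_deg)
    t_flight = 2.0 * v0 * math.sin(angle) / g
    horizontal_range = v0 ** 2 * math.sin(2.0 * angle) / g
    print(f"flight time ~ {t_flight:.1f} s, range ~ {horizontal_range:.0f} m")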
  • Stratospheric Cooling and Tropospheric Warming

    Tom Curtis at 18:22 PM on 8 December, 2010

    Bob, I have two points, and I will start with your second explanation first.

    Below is an image of the outgoing emission spectra over the Sahara, the Mediterranean, and over Antarctica:



    You will notice the small spike at the center of the CO2 absorption/emission pattern in each case. That spike represents emissions by stratospheric CO2, which, being warmer than tropospheric CO2, has a higher brightness temperature. The important point to notice is that the spike is confined to the center of the CO2 absorption/emission band.

    Looking at your figure 2 brought my attention to the fact that the majority of excess absorption in the troposphere with increased CO2 takes place on the wings of the band, not at the center. It appears from your figure 2 that there is no reduction in CO2 absorption at the center of the band. But because it is at the center of the band where stratospheric CO2 emits and absorbs, it follows that the reduction in IR on the wings of the band will have no tendency to cool the stratosphere.

    Clearly, in the non-equilibrium state, adding extra CO2 will reduce the brightness temperature at the center of the CO2 band as well as at the wings, and will consequently reduce stratospheric temperatures. But as the troposphere achieves radiative equilibrium, it may be that the loss of IR radiation on the wings of the band undercompensates for, exactly compensates for, or overcompensates for the increased emissions outside that band due to increased surface temperature. In the first case, the equilibrium brightness temperature at the center of the CO2 band will be less than it was before introducing more CO2, thus cooling the stratosphere. In the second, it will have no effect; and in the third it will slightly warm the stratosphere. As to which case will actually apply, you will have to ask an expert; and it may be that the models insufficiently clarify the situation. For practical purposes, though, it appears that the cooling of the stratosphere due to the second method is a temporary effect, which declines to close to zero as the atmosphere achieves radiative equilibrium.

    (As an aside,the emission spectrum for Antarctica is especially interesting; showing, as it does, that the tropospheric CO2 was warmer than the surface. In the situation at the time of this observation, the effect of CO2 in the atmosphere would have been to cool the surface of Antarctica, rather than to warm it. ;) )

    On to the first method:

    Where you say, "... this vibration is related to the energy content of CO2, it is not related to the temperature of the gaseous mixture", this is not strictly correct. The energy stored as vibration is not measured by the temperature, but there is an equilibrium relationship between the heat stored as molecular vibrations and the temperature of the gas. The actual relationship varies from gas to gas, and depends on the degrees of freedom of their vibrational modes.
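    (As a rough illustration of that equilibrium relationship, a sketch using a simple Boltzmann factor for the 667 cm^-1 CO2 bending mode; the numbers are illustrative and degeneracy is ignored:)

    import math

    # Approximate fraction of CO2 molecules with the 667 cm^-1 bending mode excited,
    # from the Boltzmann factor exp(-h*c*nu/(k*T)); degeneracy ignored.
    h = 6.626e-34        # J s
    c = 2.998e10         # cm/s, so a wavenumber in cm^-1 gives a frequency in s^-1
    k = 1.381e-23        # J/K
    nu = 667.0           # cm^-1, CO2 bending mode

    for T in (220.0, 288.0):             # roughly stratospheric and near-surface temperatures
        frac = math.exp(-h * c * nu / (k * T))
        print(f"T = {T:.0f} K: excited fraction ~ {frac:.3f}")
    # The warmer the gas, the larger the excited fraction, which is why emission
    # tracks the local gas temperature.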

    Because of the relationship between heat stored as vibration and heat stored as translational energy, adding more CO2 at the same temperature will not cool a gas (ignoring considerations of pressure and volume), for the added CO2 will have the same proportion of energy stored as internal vibrations. Adding a cooler or warmer amount of CO2 will, of course, temporarily cool or warm the stratosphere, but the stratosphere will quickly return to equilibrium.

    What is happening in any gas is that the energy stored as vibration interacts with, and seeks to achieve equilibrium with, two sources of energy. The first is the energy from collisions within the gas, which is a function of temperature. The second is the radiant energy it emits (which is a function of its temperature) and receives (which is a function of the temperature of the source of the radiation it captures). If the temperature of the gas is less than the temperature of the source of its radiant energy, the energy it radiates will be less than that which it receives, increasing its vibrational energy. This excess will then be passed on to the ambient gas, increasing its temperature. If the radiant energy it receives has a lower "temperature" than the ambient gas, it will emit more energy than it receives, draining its pool of vibrational energy. This shortfall will then be made up by collisions with other gas molecules, cooling the ambient gas.

    Applying this to your model, and assuming all energy transfers are radiant, the effect is that the stratospheric gas will reach an equilibrium temperature equal to the brightness temperature of the tropospheric CO2. If the stratosphere were cooler than that, then the stratospheric CO2 would be a net absorber of radiant energy, thus warming the stratosphere. If it were warmer, the CO2 would be a net emitter, thus cooling the stratosphere. (In reality, the temperature would be determined by convection and the adiabatic lapse rate, which would dominate at stratospheric altitudes were it not for a major source of radiant energy to those levels.)

    So, once again, I come back to ozone. Were it not for ozone being a net absorber of energy in the stratosphere, CO2 would not be a net emitter of energy in the stratosphere. And it is only by being a net emitter that CO2 can cool.
  • The human fingerprint in the seasons

    Norman at 16:18 PM on 8 December, 2010

    muoncounter,

    This claim has a flaw in the reasoning: "Solar warming should result in the tropics warming faster than the poles. What we observe instead is the poles warming around 3 times faster than the equator. All these pieces of evidence paint a consistent picture - greenhouse gases, not the sun, are driving global warming."

    I linked to this article on a previous thread, but a very important point should not be missed. I will post a quote and then a hyperlink to the article.

    "Understanding Arctic temperature variability is essential for assessing possible future melting of the Greenland ice sheet, Arctic sea ice and Arctic permafrost. Temperature trend reversals in 1940 and 1970 separate two Arctic warming periods (1910–1940 and 1970–2008) by a significant 1940–1970 cooling period. Analyzing temperature records of the Arctic meteorological stations we find that (a) the Arctic amplification (ratio of the Arctic to global temperature trends) is not a constant but varies in time on a multi-decadal time scale, (b) the Arctic warming from 1910–1940 proceeded at a significantly faster rate than the current 1970–2008 warming, and (c) the Arctic temperature changes are highly correlated with the Atlantic Multi-decadal Oscillation (AMO) suggesting the Atlantic Ocean thermohaline circulation is linked to the Arctic temperature variability on a multi-decadal time scale."

    Of significance: "the Arctic warming from 1910–1940 proceeded at a significantly faster rate than the current 1970–2008 warming."

    From what I have read, CO2 levels were much lower in 1910 compared to today. Yet the Arctic had a higher amplification than it does currently; this would be very strong evidence that the greater warming at the poles is not due to atmospheric carbon dioxide levels but to some other, unrelated phenomenon.

    Here is a link to the article.

    Peer-reviewed and accepted.
  • Ice-Free Arctic

    Camburn at 12:11 PM on 8 November, 2010

    The St. Roch sailed the northern Northwest Passage in 1944, with Henry Larsen as captain. They left Halifax, Nova Scotia and docked in Vancouver, British Columbia 86 days later.

    Using fixed wing aircraft, the Alfred Wegener Institute for Polar and Marine Research found the ice to be thicker than anticipated. This was done in 2009.

    This is empirical data, not modelled output nor guesses from the current satellites.

    The approx 60 year ice cycle is not dependent on the PDO. Within that 60 year cycle there is also a ten year cycle.

    Interesting information to study. One other thing that must be taken into consideration is the effect of magnetic flux on high latitude temperatures. There are numerous published papers that show that cause and effect.

    Here is something from the US Weather Bureau.
    "The Arctic seems to be warming up. Reports from fishermen, seal hunters and explorers … all point to a radical change in climate conditions, and hitherto unheard-of high temperatures in that part of the earth's surface. … Ice conditions were exceptional. In fact so little ice has never before been noted. The expedition all but established a record, sailing as far north as 81 degrees 29 minutes in ice-free water. … Many old landmarks are so changed as to be unrecognizable. Where formerly great masses of ice have been were found, there are now often moraines... At many points where glaciers formerly extended far into the sea, they have entirely disappeared."
    The date was October 1922.

    Anecdotal evidence shows that we have a lot to learn about the Arctic and its ice, and about the reasons for the increase and decrease of said ice.

    While CO2 potentially plays a part, it is far from the only reason that the ice varies on a decadal scale.
  • CO2 lags temperature

    sentient at 04:46 AM on 24 October, 2010

    You know, in science, there was once this thing we called the Theory of Multiple Working Hypotheses. Anathema (a formal ecclesiastical curse accompanied by excommunication) in modern climate science. So, in juxtaposition to the hypothesis of future global climate disruption from CO2, a scientist might well consider an antithesis or two in order to maintain one's objectivity.

    One such antithesis, which happens to be a long-running debate in climate science, concerns the end Holocene, or just how long the present interglacial will last.

    Looking at orbital mechanics and model results, Loutre and Berger (2003), in a landmark paper (meaning a widely quoted and discussed paper) for the time, predicted that the current interglacial, the Holocene, might very well last another 50,000 years, particularly if CO2 were factored in. This would make the Holocene the longest-lived interglacial since the onset of the Northern Hemisphere Glaciations some 2.8 million years ago. Five of the last six interglacials have each lasted about half of a precession cycle. The precession cycle varies from 19-23 kyr, and we are at the 23 kyr part now, making 11,500 years half, which is also the present age of the Holocene. That is why this discussion has relevance.

    But what about that sixth interglacial, the one that wasn't on the half-precessional "clock"? That would be MIS-11 (or the Holsteinian), which according to the most recently published estimate may have lasted on the order of 20-22 kyr, with the longest estimate ranging up to 32 kyr.

    Loutre and Berger’s 2003 paper was soon followed by another landmark paper by Lisiecki and Raymo (Paleoceanography, 2005), an exhaustive look at 57 globally distributed deep Ocean Drilling Project (and other) cores, which stated:

    “Recent research has focused on MIS 11 as a possible analog for the present interglacial [e.g., Loutre and Berger, 2003; EPICA community members, 2004] because both occur during times of low eccentricity. The LR04 age model establishes that MIS 11 spans two precession cycles, with δ18O values below 3.6‰ for 20 kyr, from 398-418 ka. In comparison, stages 9 and 5 remained below 3.6‰ for 13 and 12 kyr, respectively, and the Holocene interglacial has lasted 11 kyr so far. In the LR04 age model, the average LSR of 29 sites is the same from 398-418 ka as from 250-650 ka; consequently, stage 11 is unlikely to be artificially stretched. However, the June 21 insolation minimum at 65°N during MIS 11 is only 489 W/m2, much less pronounced than the present minimum of 474 W/m2. In addition, current insolation values are not predicted to return to the high values of late MIS 11 for another 65 kyr. We propose that this effectively precludes a ‘double precession-cycle’ interglacial [e.g., Raymo, 1997] in the Holocene without human influence.”

    To bring this discussion up to date, Tzedakis, in perhaps the most open peer review process currently being practised in the world today (the European Geosciences Union website Climate of the Past Discussions), published a quite thorough examination of the state of the science related to two earlier interglacials, MIS-19 and MIS-11, to which the present one, the Holocene (or MIS-1), is compared. These other two interglacials, which have occurred since the Mid Pleistocene Transition (MPT), also occurred at eccentricity minimums. Since its initial publication in 2009, and its republication after the open online peer review process again in March of this year, this paper is now considered a landmark review of the state of paleoclimate science. In it he also considers Ruddiman’s Early Anthropogenic Hypothesis, with Ruddiman a part of the online review. Tzedakis’ concluding remarks are enlightening:

    “On balance, what emerges is that projections on the natural duration of the current interglacial depend on the choice of analogue, while corroboration or refutation of the “early anthropogenic hypothesis” on the basis of comparisons with earlier interglacials remains irritatingly inconclusive.”

    As we move further towards the construction of the antithetic argument, we will take a closer look at the post-MPT end interglacials and the last glacial for some clues.

    An astute reader might have gleaned that even on things which have happened, the science is not particularly well settled, which makes consideration of the science being settled on things which have not yet happened dubious at best.

    Higher resolution proxy studies from many parts of the planet suggest that the end interglacials may be quite the wild climate ride from the perspective of global climate disruption.

    Boettger et al. (Quaternary International 207 [2009] 137–144) put it this way in their abstract:

    “In terrestrial records from Central and Eastern Europe the end of the Last Interglacial seems to be characterized by evident climatic and environmental instabilities recorded by geochemical and vegetation indicators. The transition (MIS 5e/5d) from the Last Interglacial (Eemian, Mikulino) to the Early Last Glacial (Early Weichselian, Early Valdai) is marked by at least two warming events as observed in geochemical data on the lake sediment profiles of Central (Gröbern, Neumark–Nord, Klinge) and of Eastern Europe (Ples). Results of palynological studies of all these sequences indicate simultaneously a strong increase of environmental oscillations during the very end of the Last Interglacial and the beginning of the Last Glaciation. This paper discusses possible correlations of these events between regions in Central and Eastern Europe. The pronounced climate and environment instability during the interglacial/glacial transition could be consistent with the assumption that it is about a natural phenomenon, characteristic for transitional stages. Taking into consideration that currently observed ‘‘human-induced’’ global warming coincides with the natural trend to cooling, the study of such transitional stages is important for understanding the underlying processes of the climate changes.”

    Hearty and Neumann (Quaternary Science Reviews 20 [2001] 1881–1895) abstracting their work in the Bahamas state:

    “The geology of the Last Interglaciation (sensu stricto, marine isotope substage (MIS) 5e) in the Bahamas records the nature of sea level and climate change. After a period of quasi-stability for most of the interglaciation, during which reefs grew to +2.5 m, sea level rose rapidly at the end of the period, incising notches in older limestone. After brief stillstands at +6 and perhaps +8.5 m, sea level fell with apparent speed to the MIS 5d lowstand and much cooler climatic conditions. It was during this regression from the MIS 5e highstand that the North Atlantic suffered an oceanographic ‘‘reorganization’’ about 118±3 ka ago. During this same interval, massive dune-building greatly enlarged the Bahama Islands. Giant waves reshaped exposed lowlands into chevron-shaped beach ridges, ran up on older coastal ridges, and also broke off and threw megaboulders onto and over 20 m-high cliffs. The oolitic rocks recording these features yield concordant whole-rock amino acid ratios across the archipelago. Whether or not the Last Interglaciation serves as an appropriate analog for our ‘‘greenhouse’’ world, it nonetheless reveals the intricate details of climatic transitions between warm interglaciations and near glacial conditions.”

    The picture which emerges is that the post-MPT end interglacials appear to be populated with dramatic, abrupt global climate disruptions occurring on decadal to centennial time scales. Given that the Holocene, one of at least three post-MPT “extreme” interglacials, may not be immune to this repetitive phenomenon, and given that it is half a precession cycle old now and perhaps unlikely to grow that much older, this could very well be the natural climate “noise” from which we must discern our anthropogenic “signal”.

    If we take a stroll between this interglacial and the last one back, the Eemian, we find in the Greenland ice cores that there were 24 Dansgaard-Oeschger oscillations, abrupt warmings occurring over just a few years to mere decades that averaged 8-10°C (D-O 19 scored 16°C), against a nominal difference between Earth’s cold (glacial) and warm (interglacial) states on the order of 20°C. D-O events recur on average every 1,470 years, the range being 1-4 kyr.

    Sole, Turiel and Llebot, writing in Physics Letters A (366 [2007] 184–189), identified three classes of D-O oscillations in the Greenland GISP2 ice core: A (brief), B (medium) and C (long), reflecting the speed at which the warming relaxes back to the cold glacial state:

    “In this work ice-core CO2 time evolution in the period going from 20 to 60 kyr BP [15] has been qualitatively compared to our temperature cycles, according to the class they belong to. It can be observed in Fig. 6 that class A cycles are completely unrelated to changes in CO2 concentration. We have observed some correlation between B and C cycles and CO2 concentration, but of the opposite sign to the one expected: maxima in atmospheric CO2 concentration tend to correspond to the middle part or the end the cooling period. The role of CO2 in the oscillation phenomena seems to be more related to extend the duration of the cooling phase than to trigger warming. This could explain why cycles not coincident in time with maxima of CO2 (A cycles) rapidly decay back to the cold state. ”

    “Nor CO2 concentration either the astronomical cycle change the way in which the warming phase takes place. The coincidence in this phase is strong among all the characterised cycles; also, we have been able to recognise the presence of a similar warming phase in the early stages of the transition from glacial to interglacial age. Our analysis of the warming phase seems to indicate a universal triggering mechanism, what has been related with the possible existence of stochastic resonance [1,13, 21]. It has also been argued that a possible cause for the repetitive sequence of D/O events could be found in the change in the thermohaline Atlantic circulation [2,8,22,25]. However, a cause for this regular arrangement of cycles, together with a justification on the abruptness of the warming phase, is still absent in the scientific literature.”

    In their work, for at least 13 of the 24 D-O oscillations (indeed, other workers suggest the same for them all), CO2 was not the agent provocateur of the warmings but served to ameliorate the relaxation back to the cold glacial state, something which might have import whenever we finally do reach the end of the Holocene. Instead of triggering the abrupt warmings, it appears to function as somewhat of a climate “security blanket”, if you will.

    Therefore in constructing the antithesis, and taking into consideration the precautionary principle, we are left to ponder if reducing CO2’s concentration in the late Holocene atmosphere might actually be the wrong thing to do.
  • The Big Picture (2010 version)

    Berényi Péter at 01:00 AM on 29 September, 2010

    #134 I wrote: "Modeled and empirical evidence indicates that the actual climate sensitivity is ~3°C for a doubling of CO2 or an equivalent radiological forcing." - see How sensitive is our climate?

    I see. Climate sensitivity is at least 3°C (Lorius 1990), but not more than 2.3°C (Tung 2007). Fine. The science is settled.

    Seriously. As I have already mentioned multiple times, not all "forcings" are created equal. They act at different parts of the climate system (soot: snow-covered surfaces; CO2: upper troposphere to stratosphere) and influence different processes (SW absorption vs. LW emission). The sensitivity of average surface temperature can be radically different for such agents, even if their magnitudes, converted to the common currency of an energy flux anomaly, happen to be the same. The regional distribution of the climate response also varies widely depending on the particular kind of forcing applied.

    With paleoclimatic studies it is a bit more difficult. It is quite easy to see that general climate sensitivity should be higher in a world where permanent continental ice caps reach down to 40°N than in our present-day setup. Therefore "climate sensitivity" does not only depend on the kind of forcing but also on structural aspects of the climate system, which themselves change slowly over geologic time (mountain ranges, configuration of continents, oceanic currents, presence or lack of ice sheets, etc.).
  • The contradictory nature of global warming skepticism

    Eric (skeptic) at 01:01 AM on 13 September, 2010

    Marcus (#66), on your first point, no linear trend in neutron counts: the effect of neutron counts on clouds is highly nonlinear; see my first link in #63. You also claim "delta T is linear" while neutron counts are cyclical. Delta T is not linear, thanks to many natural internal and external factors, but the lack of cycles in delta T is a valid point. More on that later.

    On your second point, the highly nonlinear relationship between clouds and neutron counts, along with other nonlinear relationships between clouds and other natural factors, particularly ocean current cycles, will eliminate the possibility of a linear change in clouds.

    Your third point, that the "net effect" of clouds is zero, is a red herring. As my second link in #63 shows, the effect of clouds varies mainly according to what's underneath them. What you didn't address is the lowering of water vapor by GCR-induced clouds. Global average water vapor depends mainly on sea surface temperature and the annual cycle. Local water vapor is diurnal and there are many local factors. Here's one paper on water vapor:
    http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281996%29009%3C0427%3AIVOUTW%3E2.0.CO%3B2


    In it the 1983, 1987 and 1992 El Nino peaks show up clearly. The 1987 peak is suppressed even though that El Nino was stronger than the one in 1992. A plausible reason is the uptake of excess water vapor by GCR nuclei in 1987. There could be other reasons for the difference as well. The sea surface temperature dominates, and the neutron count effect, being nonlinear, will only show up at some latitudes under some conditions.

    The rest of the time, and elsewhere, the cloud effect from GCR might not have much effect. Other effects come from UV (various, including ozone) and from magnetic effects other than GCR. Although these are complex, it is not useful to say that, since there is no linear trend in the past, we have to ignore them or consider them random fluctuations. The reason is that since they influence weather nonlinearly, they influence sensitivity to CO2 and will do so differently as CO2 warming increases. They also have overridden, and will easily override, CO2 warming and may make it moot at some point in the future.
  • 10 key climate indicators all point to the same finding: global warming is unmistakable

    Glenn Tamblyn at 19:25 PM on 29 July, 2010

    Geo Guy @ 6

    Two questions.

    Firstly, what is the fascination with GeoCraft.com among sceptics? I have had the last graph you refer to pushed at me dozens of times by sceptics, always associated with the question, implied or not: if CO2 varies so differently with temperature in the distant past, well...?

    Addressing that in a moment, I would first like an answer as to why GeoCraft.com attracts sceptics like Pooh Bear to honey. The site is self-described as belonging to one Monte Hieb.

    www.takeonit.com lists Monte Hieb as:

    "Monte Hieb is the author of several popular web pages skeptical of Anthropogenic Global Warming, serving as a evangelist for the viewpoint (he does not state his qualification in climatology or a related science). He is an employee at the West Virginia Office of Miner’s Health, Safety, and Training. "

    So what is the fascination with Monte Hieb?

    Now to the graph, yet again; I am getting tired of repeating this one.

    This graph (and better versions of it are available elsewhere) is not the full answer, because it doesn't take into account the fact that solar output was lower in the distant past. The world needed more CO2 to compensate for the cooler Sun.

    If your question was honestly meant then I suggest you read some of John's posts here. A Search for 'Royer' will turn up a number of them. One is here at http://www.skepticalscience.com/CO2-has-been-higher-in-the-past.html.

    Then you might like to look at Wiki, looking for the 'Faint Young Sun Problem' and working from there.

    However, if your 'question' was a little more disingenuous than that (and citing Monte Hieb implies it was), then Skeptical Science was probably the wrong site on which to 'ask' that question.

    So please, Why Monte Hieb's site?
  • The nature of authority

    chris at 09:17 AM on 25 July, 2010

    johnd at 04:21 AM on 24 July, 2010

    ”…clouds have been determined as having an overall nett cooling effect…”

    One needs to be careful. As Palle et al. (2006) have described, an albedo change due to secular cloud variation doesn't necessarily imply a surface temperature response, since clouds have warming ("heat trapping") as well as cooling (albedo) effects. Palle and colleagues have more recently (Palle et al., 2009) described the total albedo variation (expected to be mostly cloud-related) and found that it has been pretty trendless over the last 10 years.

    E. Pallé et al (2006) Can Earth's Albedo and Surface Temperatures Increase Together? Eos Trans. AGU, 87(4), doi:10.1029/2006EO040002
    link to paper

    Palle et al. (2009) Inter-annual variations in Earth's reflectance, 1999-2007 J. Geophys. Res. 114, D00D03
    link to abstract


    ”But there is still that indecision as to whether temperature is a function of clouds or clouds a function of temperature.”

    We may find that there is rather little relationship between Earth temperature and cloud cover, largely because a warmer atmosphere maintains a higher concentration of water vapour (KR has described this), and so cloud cover has no necessary systematic relationship with temperature. After all, the Earth has warmed since the middle of the 19th century by an amount (0.8-0.9°C) that supports the conclusion that the climate sensitivity cannot really be below 2.0°C (i.e. the temperature rise is that expected even without factoring in the slow response times of the climate system and the cooling effects of atmospheric aerosols, although one should consider non-CO2 contributions like nitrous oxide, methane and black carbon). So there pretty much has to be a positive feedback from water vapour, as predicted by our knowledge of the greenhouse effect. Otherwise one might ask: “where is this supposed cooling effect of clouds?”
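
    For illustration, a minimal back-of-the-envelope sketch in Python of the kind of reasoning in the paragraph above. The CO2 concentrations and the 3.7 W/m2-per-doubling figure are illustrative assumptions of mine, not values given in the comment; the point is only that the observed warming, treated as an equilibrium response to CO2 alone, implies a sensitivity of roughly 2°C per doubling.

        import math

        # Illustrative assumptions (not from the comment):
        F_2X = 3.7          # W/m^2 per CO2 doubling (standard approximation)
        CO2_THEN = 285.0    # ppm, assumed mid-19th century value
        CO2_NOW = 390.0     # ppm, assumed ~2010 value
        DELTA_T = 0.85      # C, observed warming quoted above (0.8-0.9 C)

        forcing = F_2X * math.log(CO2_NOW / CO2_THEN) / math.log(2.0)
        sensitivity = DELTA_T * F_2X / forcing   # no lag, no aerosols assumed

        print(f"CO2 forcing so far: {forcing:.2f} W/m^2")
        print(f"Implied sensitivity: {sensitivity:.1f} C per doubling")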

    So far (as far as I’m aware) there is only one direct analysis of the cloud response to warming surface temperatures. This study (Clement et al., 2009) tends to support the conclusion that the cloud feedback is a positive one (i.e. a warmer equatorial sea surface results in reduced cloud cover). However, more data are needed on this.

    A. C. Clement et al. (2009) Observational and Model Evidence for Positive Low-Level Cloud Feedback Science 325, 460 – 464
    link to abstract

    Likewise, many of the determinations of climate sensitivity (the Earth's equilibrium surface temperature response to increased greenhouse gas concentrations) are phenomenological, in that they assess the relationship between CO2 and surface temperature during ice age transitions or in the deep past. In these analyses all of the feedbacks (whether positive or negative) are “lumped in”. Since these analyses pretty uniformly find a climate sensitivity near 3°C, it’s difficult to support a significant negative cloud feedback (unless there is a positive feedback we’ve not yet discovered).

    R. Knutti and G. C. Hegerl (2008) The equilibrium sensitivity of the Earth's temperature to radiation changes Nature Geoscience 1, 735-743
    link to paper

    johnd at 09:26 AM on 24 July, 2010

    ”Thus it is reasonable to expect that as atmospheric water vapour content varies, so too would that of clouds.”

    No, that’s not a “reasonable” expectation. The fact that a warming atmosphere (so far) tends to maintain a near-constant relative humidity means that cloud cover doesn’t necessarily vary with water vapour content. A warmer atmosphere holds more water vapour than a cooler one, but there is no reason to expect the extent of cloud cover to vary with temperature on that account.

    That’s not to say that there may not be more rainfall in a warming world (Allan and Soden, 2008). But remember that rain clouds are just a proportion of total clouds. We expect in a warming world that rainfall will decrease in the equatorial regions of the Earth (consistent with Clement et al.’s observation of reduced cloud cover above a warming sea surface) and that we will have increased rainfall at higher latitudes. That’s pretty much what is observed (Zhang et al., 2007).

    Thus the latitude band from around the equator to around 30°N has become drier (reduced rainfall; enhanced drought) as the Earth warmed during the 20th century, much as predicted. This latitudinal band of reduced precipitation will widen as the Earth continues to warm (and so, for example, Amazonia is expected to dry progressively towards the south).

    The higher latitudes (especially above 50°N and below 10°) have seen enhanced precipitation. Global warming and shifts in precipitation regimes are expected (and already observed) to lead to an amplification of extreme precipitation events (e.g. Allan and Soden, 2008).

    X. Zhang et al. (2007) Detection of human influence on twentieth-century precipitation trends Nature 448, 461-465
    link to abstract

    R. P. Allan and B. J. Soden (2008) Atmospheric warming and the amplification of precipitation extremes Science 321, 1481-1484
    link to abstract
  • What causes Arctic amplification?

    Arkadiusz Semczyszak at 19:08 PM on 4 May, 2010

    The Screen & Simmonds 2010 paper is quite short for such fundamental conclusions. Its advantage, however, is its reference list.
    But did the authors really make use of, for example, this work: Chylek, P., Folland, C. K., Lesins, G., Dubey, M. K. & Wang, M., Arctic air temperature change amplification and the Atlantic multidecadal oscillation. Geophys. Res. Lett. 36 (2009)?
    "Analyzing temperature records of the Arctic meteorological stations we find that (a) the Arctic amplification (ratio of the Arctic to global temperature trends) is not a constant but varies in time on a multi-decadal time scale, (b) the Arctic warming from 1910-1940 proceeded at a significantly faster rate than the current 1970-2008 warming, and (c) the Arctic temperature changes are highly correlated with the Atlantic Multi-decadal Oscillation (AMO) suggesting the Atlantic Ocean THERMOHALINE CIRCULATION is linked to the Arctic temperature variability on a multi decadal time scale."
    Recall Chylek et al. 2006 - Greenland warming of 1920–1930 and 1995–2005 - which reports that instrumental measurements indicate a large and rapid warming of the coast of Greenland in the 1920s: the average annual temperature rose by 2 to 6 degrees C in less than 10 years. "Temperature increases in the two warming periods are of a similar magnitude, however, the rate of warming in 1920–1930 was about 50% higher than that in 1995–2005." This rapid increase in temperature, at a time when CO2 emissions to the atmosphere were 9-fold lower than in 2003 (Marland et al., 2006), speaks of a natural cause for such a powerful warming.

    The same is found here: http://ocean.am.gdynia.pl/p_k_p/pkp_19/Marsz-Stysz-pkp19.pdf
    "... the STRONG CORRELATION between the sea surface temperature (SST) in the region of the Gulf Stream delta and anomalies in surface air temperature (SAT) in the Arctic over the period 1880-2007. SEA ICE MAY EITHER INCREASE OR LIMIT THE HEAT FLOW FROM THE OCEAN TO THE ATMOSPHERE."
    "THE GENESIS OF THE 'GREAT WARMING OF THE ARCTIC' IN THE 1930S AND '40S IS THE SAME AS THAT OF THE PRESENT DAY."

    The two works are really worth reading. They are long and full of very interesting calculations - in contrast to Screen & Simmonds 2010 - but ...

    Marsz, also in 2009, in "Present warming - oceanic climate control", said:
    "Changes in SST in the Sargasso Sea, explains about 70% of the variability of SAT anomalies in the NH in the period 1880-2008 and 68% of the variability of global SAT anomalies [...]. In times of growth of SST in the Atlantic and North Atlantic sea sector of the Arctic, associated with INTENSIFICATION of the INTENSITY of THERMOHALINE CIRCULATION [!], there is an increase in air temperature in the NH. This increase is particularly strong in the higher latitudes - the Arctic and temperate zone."
    And I will return again to Fig. 11 - the maps from this work: K.E. Trenberth, J. Fasullo, L. Smith, 2005: Trends and variability in column-integrated atmospheric water vapor. Climate Dynamics 24: 741–758. The largest increase in humidity over the past decades is seen precisely where the Gulf Stream returns energy to the atmosphere ... Here the difference, relative to the whole Arctic, is significant.
    The energy balance resulting from the local greenhouse effect caused by water vapor - the "positives" - is often strongly underestimated.

    Conclusion: as Frank said, it is the import of energy that determines the current scale and pace of rising temperatures in the Arctic.
  • Tracking the energy from global warming

    Ken Lambert at 00:15 AM on 28 April, 2010

    Chris #76

    We are all waiting for BP to respond to Posts #73 thru #76.

    Meanwhile your points ignore the 'missing heat' divergence over the last 5-6 years, as exemplified at the start of this discussion.

    BP's argument is that the OHC for the top 700m of ocean is a direct measure of the integrated TOA forcing imbalance WRT time because there are no other serious heat storages in the system other than the oceans.

    So far, only von Schukmann has found 'missing heat' down to 2000m - but we lack a convincing theory of a mechanism to get it there.

    It should be noted that sea level rise over the last 5-6 years on your above chart has flattened to a slope of 1-2mm/year consistent with flattening temperatures.

    The 11-year solar cycle varies the solar forcing by at most about 0.25 W/sq.m, whereas Dr Trenberth postulates a TOA imbalance of 0.9 W/sq.m due to CO2/GHG and other heating and cooling effects.

    If you are claiming that a solar drop of 0.25 W/sq.m or less has flattened the temperatures, and if the other heating and cooling forcings (aerosols etc.) remain the same, then the CO2/GHG effects must be effectively negated by a 0.25 W/sq.m drop in solar, which implies that the imbalance is nearer to 0.25 W/sq.m than 0.9 W/sq.m (mainly based on a CO2 component of about 1.6 W/sq.m).

    Again, the 'missing forcing' would be about 0.65 W/sq.m.
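
    A minimal Python sketch of the arithmetic above, using only the figures quoted in the comment (0.25 and 0.9 W/sq.m); the 'missing forcing' is simply their difference.

        # Values in W/m^2, taken from the comment above.
        TOA_IMBALANCE = 0.9       # Trenberth's postulated top-of-atmosphere imbalance
        SOLAR_CYCLE_SWING = 0.25  # maximum swing attributed to the 11-year solar cycle

        # If a ~0.25 W/m^2 solar drop were enough to flatten temperatures while all
        # other forcings stayed fixed, the rest of the postulated imbalance would be
        # unaccounted for:
        missing = TOA_IMBALANCE - SOLAR_CYCLE_SWING
        print(f"'Missing' forcing implied by the comment: ~{missing:.2f} W/m^2")
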
  • Ocean acidification: Global warming's evil twin

    JRuss at 07:22 AM on 8 April, 2010

    The solubility of carbon dioxide in water varies greatly with temperature and pressure. At the surface of the ocean [760 mm Hg], the solubility of CO2 in grams per 100 g of pure water is 0.3346 at 0 C, 0.2318 at 10 C, 0.1688 at 20 C, 0.1257 at 30 C, etc. Global warming warms the ocean surface, causing release of CO2; global cooling allows the ocean to absorb CO2 as it cools.
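
    For illustration, a minimal Python sketch that linearly interpolates the solubility figures quoted above; the interpolation scheme and the example temperatures are assumptions of convenience, not anything stated in the comment.

        # Solubility figures quoted above (g CO2 per 100 g pure water at 760 mm Hg).
        SOLUBILITY = {0: 0.3346, 10: 0.2318, 20: 0.1688, 30: 0.1257}

        def co2_solubility(temp_c):
            """Linearly interpolate solubility between the tabulated temperatures."""
            temps = sorted(SOLUBILITY)
            if temp_c <= temps[0]:
                return SOLUBILITY[temps[0]]
            if temp_c >= temps[-1]:
                return SOLUBILITY[temps[-1]]
            for lo, hi in zip(temps, temps[1:]):
                if lo <= temp_c <= hi:
                    frac = (temp_c - lo) / (hi - lo)
                    return SOLUBILITY[lo] + frac * (SOLUBILITY[hi] - SOLUBILITY[lo])

        # Warming surface water from 15 C to 20 C lowers its CO2 solubility noticeably:
        print(co2_solubility(15.0), co2_solubility(20.0))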

    But in the deep oceans there are lakes of almost pure CO2. Presumably much of this comes from undersea volcanoes, where the pressure is high enough for CO2 to liquefy and, being denser than water, fall to the ocean floor. Slowly, this CO2 migrates to the ocean surface.

    During the global cooling from 2004 to 2009, our oceans did absorb more CO2, resulting in a decrease in pH. Now that our globe is again warming, I expect the oceans to release vast quantities of CO2 and thus increase the pH of the ocean's surface. During a period of global warming, the amount of oceanic CO2 release can be greater than that from all the power plants and vehicles man has made.
  • A peer-reviewed response to McLean's El Nino paper

    Marcel Bökstedt at 06:46 AM on 19 March, 2010

    Albatross> Kyle L. Swanson, George Sugihara, and Anastasios A. Tsonis, Long-term natural variability and 20th century climate change (PNAS, September 22, 2009, vol. 106, no. 38, 16120–16130) looks very interesting, but is also not so easy to read. I'll try to summarize it, and hope that those who know more (that's you, Albatross) can correct my misunderstandings.

    The way I see it, they use known climate models, run under conditions assuming NO CO2 increase, to deduce how global temperature varies from year to year in dependence on sea surface temperature. This step is not about how the global temperature varies over the whole period - by the definition of the model run the average is supposed not to vary at all - but about the year-to-year variations around this average.

    The weak point is that you are working with models, not with actual, measured data; the strong point is that you can get precise information inside each model.

    You do this for a number of popular models and somehow average them, to get a prediction (regression coefficients) of how a particular distribution of sea surface temperature in a certain year will influence the global mean temperature in that year. It's important that these coefficients do not see global warming, since we are interested in the natural variations about the general trend.

    The outcome of the theory is this set of regression coefficients. Then there are a number of internal consistency checks (including testing the models against each other). I'm a bit unsure about the next step, but I believe that the computed regression coefficients are then used with actually measured sea surface temperatures. This should give the internal variability (unrelated to global warming), which depends on the distribution of the sea surface temperature.

    The result is not perfect, but it does suggest that the known variations are completely explainable in terms of SST, which is new to me!
  • Have American Thinker disproven global warming?

    Tom Dayton at 17:28 PM on 26 February, 2010

    Gary, you wrote "according to the paper by Ramanathan...'an increase in greenhouse gas such as CO2 will lead to a further reduction in OLR.... Notice there is no clarifying statement about having to use model simulated graphs to 'correct' for surface temperatures and water vapor before seeing that OLR reduction." And you wrote "he made the general statement that OLR would decrease with increased CO2 in the atmosphere. i could be reading too much into the Ramanathan paper."

    Yes, Gary, you are reading far, far too much into that statement. That statement was made in a journal for climate scientists, who know perfectly well that the total effect on OLR depends on all the mechanisms that come into play when CO2 is increased, and on mechanisms independent of CO2 that come into play nonetheless concurrently. It is so well known that it need not be stated for that audience. Indeed, if the author had stated it, the editor probably would have insisted it be removed to shorten the article and reduce clutter. Professional journals are not like textbooks, Science News or Scientific American, let alone a newspaper or the American Thinker blog. Journals rarely need or want "clarifying statements" about rudimentary knowledge, unless the editors strongly expect that the audience will include substantial numbers of people outside the normal, professionally specialized, audience of that particular journal.

    What if the authors had tried to make a "clarifying statement"? Hmmm.... Given the complex set of variables involved in determining the precise amount of OLR in response to the CO2 increase, they would not have been able to give a single answer, because the answer varies across situations, depending on the precise details of the situation being predicted, and there is an infinite number of situations.

    Instead they would have to, let's see... construct a model that they and others could run separately for each situation. They and others also would use that same model as a component of models for predicting temperature responses to increased CO2.

    But responsible scientists would want to verify that model's OLR predictions against real world observations! They would have to run it, then show its results...say as a graph line...maybe displayed underneath a graph line of the observed OLR over the same time period. They might even label that graph Figure 1.b and c.

    Gary, nobody is taking issue with you for not knowing all that. You would have known it if you had spent significant time writing articles for professional scientific journals in any field, even as a mere grad student. But you don't have that experience, so no foul.

    What people are taking issue with is your quick leap to very public and strong proclamation before investigating sufficiently. When faced with your own "obvious" conclusion that flies in the face of thousands of professional, specialized scientists who have spent many decades researching that topic, the stronger your feeling of certainty is, the more you should suspect that you, not they, are missing a fundamental piece of knowledge. And the harder you should dig to verify your own conclusion. That's what I do. That's what John Cook does. That's what most of the commenters on this blog do. Sometimes (and sometimes often) we don't dig deep enough to verify our opinions, and so write a comment that is wrong. But we write a comment, not a whole, highly publicized blog post. And we usually prefix our comment with "I think I'm missing something, but it seems to me...," and other folks correct us. Often not gently.

    That difference between your behavior and our typical (not perfect) behavior is what the Dunning-Kruger effect is about.
  • It's the sun

    Patrick 027 at 14:49 PM on 8 June, 2009

    ________________________

    MORE ABOUT REFRACTION: COMPLEX N:

    Let the complex index of refraction be N.

    Previously I used n to represent the real component of the index of refraction. It would be better to refer to that as nr.

    (From class notes):

    Imaginary component of the index of refraction ni:

    The absorption coefficient (equal to the absorption cross section per unit volume)

    =

    4 * pi * ni / L#

    where L# is the wavelength in a vacuum of radiation with the same frequency v.
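
    For concreteness, a minimal Python sketch of that relation; the numerical values of ni and the wavelength below are purely illustrative assumptions.

        import math

        def absorption_coefficient(ni, vacuum_wavelength_m):
            """Absorption coefficient (per metre) = 4*pi*ni / L#, as quoted above."""
            return 4.0 * math.pi * ni / vacuum_wavelength_m

        # Example (illustrative only): ni = 0.01 at a 10 micrometre (thermal IR) wavelength.
        alpha = absorption_coefficient(0.01, 10e-6)
        print(f"absorption coefficient ~ {alpha:.0f} per metre")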

    ----

    The real and imaginary components of the index of refraction,

    N = nr + i*ni ,

    do not vary independently of each other over v or L#. nr and ni are related by the Kramers-Kronig relations.

    ----

    My understanding is that, when (magnetic) permeability is not different from that of a vacuum, the complex dielectric coefficient is equal to the square of the complex index of refraction.

    ------------------------
    **** IMPORTANT CLARIFICATION/CORRECTION ****

    The statement that I / nr^2 was conserved in the absence of absorption, reflection, or scattering is true at least in so far as the group velocity and phase propagation are in the same direction.

    However, my intent was that the change in I over a distance dx along a ray path would then be given by letting I# = I / nr^2, and then the differential formula would be:

    d(I#(Q,v,P)) = Ibb#(v,T) * ecsv * dx - I#(Q,v,P) * (acsv + scsv) * dx + Iscat * dx

    where Ibb#(v,T) = Ibb(v,T) * nr^2 is the blackbody intensity for the index of refraction (given per unit frequency dv and per unit polarization dP) and:

    ecsv, acsv, and scsv are the emission, absorption, and scattering cross sections per unit volume,

    Iscat * dx is the radiation scattered into the path from other directions,

    and any reflection (removing and/or adding to I along the path) at an interface within dx is included in the scattering terms.

    And of course, ecsv = acsv if in local thermodynamic equilibrium.
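
    A minimal numerical sketch of the differential relation above, under simplifying assumptions of my own (local thermodynamic equilibrium so ecsv = acsv, no scattering, constant Ibb# along the path, arbitrary units): the intensity simply relaxes toward the blackbody value with an e-folding length of 1/acsv.

        def integrate_intensity(i0, ibb, acsv, path_length, steps=10000):
            """Euler integration of d(I#) = Ibb#*acsv*dx - I#*acsv*dx along a path."""
            dx = path_length / steps
            i = i0
            for _ in range(steps):
                i += (ibb - i) * acsv * dx
            return i

        # Starting colder than the medium, I# approaches Ibb# (here 1.0) along the path.
        print(integrate_intensity(i0=0.2, ibb=1.0, acsv=0.5, path_length=10.0))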

    ---

    Such a relationship, constructed with I# = I / nr^2, is based on Snell's law, where nr*sin(q) = constant (not quite true, actually (???) - see below) is the relationship that determines q as a function of N, when N varies only in one direction z (planes of constant N are normal to the z direction) and q is the angle from the z direction.

    I# = I / nr^2 is derived from Snell's law, by determining that the solid angle dw that encompasses a group of rays expands or compresses with variation in the index of refraction, specifically so that dw is proportional to 1/nr^2.

    **** HOWEVER:

    When N is complex, Snell's law actually still uses the complex N, not just its real component. I haven't entirely figured out what that means for q, though I have the impression that nr*sin(q) = constant should be at least approximately true.

    Snell's law itself is based on the requirement that the phase surfaces of incident and transmitted waves line up at an interface, and that the phase speed is inversely proportional to N when N = nr. What is the phase speed when N has a nonzero imaginary component?

    And then there is also the complexity of what happens if group velocity is not in the same (or exact opposite) direction as the wave vector (the wave vector is normal to phase planes and thus is in the direction of phase propagation).

    --------

    So for the time being, let I# = I / n^2, where n is whatever N-related value that works in that relationship and also:

    d(I#(Q,v,P)) = Ibb#(v,T) * ecsv * dx - I#(Q,v,P) * (acsv + scsv) * dx + Iscat * dx

    At least when N = nr and N is not a function of direction Q, n = N = nr.

    ________________________

    REFLECTION AND EVANESCENT WAVES:

    When the entirety of wave amplitude and wave energy is reflected by an interface (as in total internal reflection), some of the energy actually penetrates across the interface. The portion of the wave across the interface that is associated with the reflected portion of the wave is called an evanescent wave. The energy and amplitude of the evanescent wave decay exponentially away from the interface into the region where the reflected wave cannot propagate (in the time average, there is no group velocity component normal to the interface within the region of the evanescent wave). The evanescent wave is mathematically required for continuity of the electric and magnetic fields, and for the time-integrated divergence of the energy flux to be zero over a wave cycle when there is a constant incident energy flux (and no absorption or emission).

    If there is another interface, beyond which the wave could propagate, then wave energy can emanate from that interface to the extent that the evanescent wave penetrates to it. This flux of energy must pull energy from the evanescent wave, which results in reduced reflection from the first interface. This is how wave energy can 'leak' through a barrier. The same general concept applies to mechanical waves, including fluid mechanical waves in the atmosphere and ocean, and to quantum mechanical waves - electrons tunnel through barriers via evanescent waves.

    Absorption within the region of the evanescent wave allows some net energy flow across the interface and thus reduces the reflectivity, so that even when a nonzero 'transmissivity' is not allowed except for tunneling, the reflectivity can be less than 1.

    I'm not sure, but I think absorption on the side of a transmitted wave might also reduce reflectivity even when transmission is allowed. (??)

    ________________________

    I think I actually have the equations necessary to find some answers to questions just raised (if unable to find them elsewhere), but it will take time and it isn't pertinent to the matter here (I've already gone off on these tangents far enough).

    ________________________
    GROUP VELOCITY:

    Group velocity in geometric space, specifically, is equal to the gradient of the angular frequency (omega = 2*pi*v) in the corresponding wave vector space (a wave vector is a vector with components of wave number; wave number is equal to 2*pi / wavelength measured in the corresponding dimension; the wavelength in the direction of phase propagation is equal to 2*pi divided by the magnitude of the wave vector).

    Where the wave vector = [k,l,m], and k, l, and m are the wave numbers in the x, y, and z directions

    Angular frequency = omega

    group velocity in x,y,z space = [ d(omega)/dk , d(omega)/dl , d(omega)/dm ]

    phase speeds cx, cy, cz in directions x,y,z

    cx = omega/k
    cy = omega/l
    cz = omega/m

    In the direction of phase propagation, the phase speed c and wavelength L are related as:

    c = v*L

    --------

    IT IS POSSIBLE for some materials, at some values of v, to have a real component of the index of refraction less than 1. This (tends to or approximately??) corresponds to a phase speed that is greater than the speed of light in a vacuum. This does not violate special relativity; the group velocity is the direction and speed of energy flux and information transport, and the group velocity will not be larger than the speed of light in a vacuum.
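
    A standard textbook illustration of that last point (assumed here for illustration, not taken from the comment): a collisionless-plasma dispersion relation, omega^2 = omega_p^2 + c^2 k^2, gives a real index below 1, a phase speed above c, and a group speed below c.

        import math

        C = 3.0e8          # m/s, speed of light in vacuum
        OMEGA_P = 2.0e9    # rad/s, assumed plasma frequency (illustrative)

        def phase_and_group_speed(k):
            """Phase and group speed for omega^2 = omega_p^2 + c^2 k^2."""
            omega = math.sqrt(OMEGA_P**2 + (C * k)**2)
            v_phase = omega / k          # exceeds c when the index is below 1
            v_group = C**2 * k / omega   # stays below c
            return v_phase, v_group

        vp, vg = phase_and_group_speed(k=10.0)   # wavenumber in rad/m (illustrative)
        print(f"index = {C / vp:.3f}, phase speed = {vp:.3e} m/s, group speed = {vg:.3e} m/s")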



    ________________________

    Relativity: effects include gravitational lensing and gravitational redshift, and effects related to relative motion (of the Earth and Sun, for example).

    One effect of relative motion is that small dust particles orbiting the Sun tend to fall in toward the Sun over time (their semimajor axes shrink): as they orbit, they receive radiation more from their leading side because of their motion, so radiation pressure tends to slow them down. Of course, radiation pressure also pushes them out - on the other hand, any stable orbit would be such that the outward radiation pressure only partly cancels the gravitational acceleration, giving a slower but still stable orbit in the absence of the radiation-pressure torque on the orbit from orbital velocity. Larger objects of a given density have more mass per unit surface area and are less affected by radiation pressure.


    ________________________

    Another way of looking at magnification by atmospheric refraction (considering mainly N = nr cases):

    For flat surfaces, as radiation leaving some vertical level moves upward across falling n values, total internal reflection keeps a portion of rays trapped; the other rays spread out to fill the full hemisphere of solid angle, reducing the intensity and, if the radiation was initially isotropic, keeping the total upward flux per unit horizontal area proportional to n^2 at each level (so long as n only either decreases or remains constant with height).

    But for concentric spherical surfaces (with N decreasing outward to N = 1), the cone of acceptance defined for flat interfaces is narrower than the cone of rays that is able to escape upward to any given level, because as the rays curve over and downward, the interfaces - the locally defined horizontal surfaces - curve downward. Thus, the height to which any ray can reach is raised, and a greater solid angle of rays escapes all the way out to N = 1. This means a greater total upward flux per unit horizontal area reaches to any given height and to N = 1. But the intensity of the radiation still falls by the same amount as it passes to lower N (being proportional to N^2); so the greater flux requires a greater solid angle - hence, the underlying surfaces that are the source of the fluxes occupy a greater solid angle from any given viewing point - they appear larger - hence they are magnified.

    Some rays will still be trapped, so the magnification must be less than N^2.

    ----------

    From some class notes: N of the air, in the visible part of the spectrum, at STP (standard temperature and pressure) is about 1.0003. Near sea level conditions are broadly similar to STP.

    If N ~= 1.0003, then N^2 ~= 1.0006. In that case, the effect on I would be a 0.06 % increase relative to I in a vacuum. The magnification of the surface as seen from space cannot be more than that (and is also limited by the vertical extent of different N values). The magnification of higher layers will tend to be less because N will tend to decrease with height toward 1.





    ------------------

    A particularly important point about the climatic effects of increasing surface area and changes in N with height is that they would not be a source of significant positive or negative feedback (at least for Earthly conditions).

    As the greenhouse effect increases, the distribution of radiative cooling to space does shift upward. But a doubling of optical thickness per unit mass path would only shift the distribution within the atmosphere of transmissivity to space upward by about 5 km, give or take ~ 1 km (it is less at heights and locations where the temperature is colder, more where warmer). Doubling CO2 would have that effect only over the wavelengths in which it dominates (covering roughly 30 % of the total radiant power involved), and not quite even that, since at most wavelengths, emission cross sections are smaller with increasing height, at least for the troposphere and maybe lower stratosphere, and also, there would be some overlaps with clouds. Also, the tropopause level forcing would be due mostly to expansion in the wavelength interval in which CO2 significantly blocks radiation from the surface, water vapor, and clouds, given the shape of the CO2 absorption spectrum and that the central part of the CO2 band is saturated at the tropopause level with regards to further radiative forcing. The water vapor feedback is less than a doubling of water vapor, and water vapor density decreases much faster with height than air density within the troposphere. The height of the troposphere will shift upward to lower pressure as the temperature pattern shifts, so the highest cloud tops will be higher, but not by much in terms of affecting the emitting surface area.

    An upward shift in the LW radiative cooling distribution of about 5 km would result in about a 0.1 K cooling; it seems that a vertical shift less than 5 km likely causes a warming of about 3 +/- 1 K, 20 to 40 times larger (PS the vertical shift includes all LW radiative feedbacks; if there were no positive LW feedbacks, the warming would be reduced, but so would the vertical shift. If SW feedbacks were excluded, then maybe the warming would be at the lower end of the range given, I think).

    What would truly be required to result in a 0.1 K cooling by this process is if all LW radiative cooling that occurred within the troposphere and at the surface were confined to be below 5 km from the tropopause initially (or else, to have the tropopause level rise to accommodate the shift?), and then to have the whole distribution raised 5 km, so that there would then be no LW cooling below 5 km from the surface. This would actually cause warming of roughly 30 K, given a lapse rate of 6 K/km. Thus the cooling by area increasing with height would be roughly just 1/3 % of the warming.

    There is yet one other way to vertically shift the LW cooling distribution: thermal expansion. The atmospheric mass in the troposphere would only expand about 1 % in response to a 3 K warming, which would only contribute a 0.002 K cooling for an initial 10 km effective level for LW cooling. (This mass expansion is a separate issue from the increasing height of the tropopause mentioned in the previous paragraph - the latter is a rise in the tropopause relative to the distribution of mass - essentially a transfer of mass from the stratosphere to the troposphere. This is actually an expected result of global warming in general, although nothing on the order of 5 km so far as I know.)
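
    One way to reproduce the ~0.1 K and ~0.002 K figures above, assuming (my assumption, not stated in the comment) that they reflect the slightly larger emitting area of a shell raised above a sphere of Earth's radius, with an effective emission temperature near 255 K; for fixed emitted power, temperature scales as area^(-1/4).

        # Assumed values (illustrative):
        EARTH_RADIUS_KM = 6371.0
        T_EFFECTIVE = 255.0   # K, typical effective emission temperature

        def cooling_from_raised_emission_level(delta_km):
            """Cooling needed to emit the same power from a slightly larger shell."""
            area_ratio = ((EARTH_RADIUS_KM + delta_km) / EARTH_RADIUS_KM) ** 2
            return T_EFFECTIVE * (1.0 - area_ratio ** -0.25)

        print(f"5 km shift:   ~{cooling_from_raised_emission_level(5.0):.2f} K cooling")
        print(f"0.1 km shift: ~{cooling_from_raised_emission_level(0.1):.4f} K cooling")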

    ---------

    Feedbacks involving changes in the index of refraction of the air, and forced changes in the index of refraction of the air, can also be expected to be insignificant.

    ________________________

    MORE ON THE 'I CAN SEE YOU AS MUCH AS YOU CAN SEE ME' PRINCIPLE (THAT IS REQUIRED IF THE SECOND LAW OF THERMODYNAMICS IS TO APPLY):

    SPECULAR REFLECTION:

    For simplicity of illustration, consider an interface between materials, where on side A, N = NA, and on side B, N = NB, and in both cases, let the imaginary component be zero. Let N be invariant over direction within each material.

    Identify directions Q by two angles: Q = (q,h). q is the angle from the normal (perpendicular) to the interface (specifically, measured from the direction leaving the interface on the side that q is taken), and h is the angle around the normal to the surface; h going from 0 to 360 deg at constant q traces a circle in a plane parallel to the interface; rather than reversing the direction h is measured across the interface to keep the same overall coordinate system (right-handed or left-handed) for each side A and B, measure h on each side in the same sense (clockwise or counterclockwise) as viewed from just one side.

    Consider four ray paths that approach this interface. Two rays, 1 and 2, are incident from side A with directions Q1 and Q2, respectively. Two other rays, 3 and 4, are incident from side B with directions Q3 and Q4.

    Q1 = (qA,h0)
    Q2 = (qA,-h0)
    Q3 = (qB,-h0)
    Q4 = (qB,h0)

    So rays 1 and 2 have the same q = qA, rays 3 and 4 have the same q = qB, and 1 and 4 have the same h = h0, while 2 and 3 have the same h that is the opposite of the h of the other two rays.

    Let NA*sin(qA) = NB*sin(qB).

    A portion of each ray is reflected (denoted r) and a portion is transmitted (denoted t). For example, from the incident ray 1, the reflected ray is 1r and the transmitted ray is 1t.

    Notice:

    If all incident rays intersect the interface at the same point, then:

    ray 1t goes backwards along the path of ray 3
    ray 3t goes backwards along the path of ray 1

    ray 1r goes backwards along the path of ray 2
    ray 2r goes backwards along the path of ray 1

    ray 2t goes backwards along the path of ray 4
    ray 4t goes backwards along the path of ray 2

    ray 3r goes backwards along the path of ray 4
    ray 4r goes backwards along the path of ray 3

    etc.



    The incident rays have spectral (monochromatic) Intensities I# = I/N^2 of I1, I2, I3, I4.

    Along paths 1, 2, 3, and 4, the I/N^2 toward the interface are:


    I1
    I2
    I3
    I4

    IF the reflectivity from side A is RA and the reflectivity from side B is RB then:

    Along paths 1, 2, 3, and 4, the I/N^2 away from the interface are:

    RA*I2 + (1-RB)*I3
    RA*I1 + (1-RB)*I4
    RB*I4 + (1-RA)*I1
    RB*I3 + (1-RA)*I2

    Suppose each ray is emanating from a blackbody, and each blackbody has the same temperature. In that case, the net intensity (forwards - backwards) = 0 along each path so that there is no net heat transfer, assuming the second law of thermodynamics holds for the consequences of reflection and refraction.

    RA*I2 + (1-RB)*I3 - I1 = 0
    RA*I1 + (1-RB)*I4 - I2 = 0
    RB*I4 + (1-RA)*I1 - I3 = 0
    RB*I3 + (1-RA)*I2 - I4 = 0

    Also, I1 = I2 = I3 = I4. Then:

    RA + (1-RB) = 1
    RA + (1-RB) = 1
    RB + (1-RA) = 1
    RB + (1-RA) = 1

    Each relationship yields the same conclusion: RA = RB = R. Reflectivity is the same for any two rays approaching the same interface from opposite sides in which each of their transmitted rays goes backwards along the other incident ray.

    Reflectivity can vary with polarization P, so this only applies if either the incident rays are completely unpolarized or polarized specifically to fit some variation of N over P (as in perfect blackbody radiation), or if the intensity is evaluated for each polarization separately.

    The formulas (one for parallel and one for perpendicular polarizations) for reflectivity (Fresnel relations) as a function of NA/NB and qA or qB (each q determines the other) give the same R for RA and RB for each polarization. The Fresnel relations are not determined by the second law of thermodynamics; they are determined (at least in part) with the constraint that the electric and magnetic fields only vary continuously in space; they each have only one value at any one location and time, including on the interface. I'm not sure if it is necessary for deriving the relations, but the Fresnel relations would also have to fit with the conservation of radiant energy (except for allowed sources and sinks such as emission and absorption).
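
    A minimal numerical check of that symmetry, using the standard Fresnel formulas for real indices; the particular index and angle values below are illustrative assumptions. For a pair of angles related by Snell's law, the reflectivity computed from side A equals that computed from side B, for each polarization.

        import math

        def fresnel_reflectivities(n1, n2, theta1):
            """Reflectivities (Rs, Rp) for a ray in medium n1 striking medium n2."""
            sin_t = n1 * math.sin(theta1) / n2
            if abs(sin_t) > 1.0:
                return 1.0, 1.0              # total internal reflection
            theta2 = math.asin(sin_t)
            c1, c2 = math.cos(theta1), math.cos(theta2)
            rs = ((n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2)) ** 2
            rp = ((n1 * c2 - n2 * c1) / (n1 * c2 + n2 * c1)) ** 2
            return rs, rp

        NA, NB = 1.5, 1.0                          # illustrative real indices
        qA = math.radians(30.0)
        qB = math.asin(NA * math.sin(qA) / NB)     # Snell's law pairs the two angles

        print(fresnel_reflectivities(NA, NB, qA))  # from side A
        print(fresnel_reflectivities(NB, NA, qB))  # from side B: same two numbers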

    An interesting further point:

    Along ray 1, the I/N^2 coming back from the interface = R * I2 + (1-R) * I3

    What happens to the radiation going toward the interface along I1 ? R * I1 is reflected backward along ray path 2, and (1-R) * I1 is transmitted backward along ray path 3.

    In other words, the distribution of where the radiation going forward along a path reaches matches the distribution of the sources of the radiation that goes backwards along the same path (before weighting by the strength of each source).

    ------

    There should be a similar pattern of behavior for scattering. That is, if I is isotropic, then the I scattered out of a path must equal the I scattered into the path over the same unit of path dx. Otherwise, anisotropy in I# could spontaneously arise, which would break the second law of thermodynamics (clever use of such spontaneous anisotropy could run a perpetual motion machine).

    Of course, a scattering cross section per unit volume, scsv, can vary over direction, but it should (except for macroscopic scatterers where absorption cross section on one side can be matched with scattering cross section on the opposite side) be the same for a pair of opposing directions along a ray path.

    More generally:

    What I expect is that the distribution over directions of radiant intensity that is scattered out of a ray going in the x direction over a distance dx should fit the distribution of the source directions of radiant intensity that is scattered into the ray in the same dx to go in the negative x direction, before weighting by anisotropy of the source direction intensities.

    When the same type of scattering has the same scattering cross section in all directions (and the scattering distribution has full rotational symmetry about the direction from which I is scattered), I think this can be shown to be true: Consider a fraction of radiation that is scattered by an angle q' into the solid angle dw in the direction Q2, from an initial direction Q1 and solid angle dw. The same angle of scattering applies to radiation that is scattered from the direction Q2 into the direction Q1. Thus the same fraction of radiation is scattered into Q2 from Q1 as is scattered from Q1 into Q2.

    ________

    CONCLUSION (in the expectation that more complex forms of scattering and reflection and dependence on complex N values that vary over direction, and where group velocity is the direction of propagation of intensity, while phase propagation may be in a different direction - that these situations still fit the general concepts illustrated above):

    At a given frequency v, and if necessary, polarization P (could be a function of position along paths):

    At a reference point O along a path in the direction Q (which can bend as refraction requires), with distance forward along the path given by x:

    Considering the forward intensity I#f in the solid angle dw, the distribution of where I#f goes is proportional to the derivative of the transmission with respect to x. It can be visualized by considering a distribution of distance x over dw, where each cross section per unit area normal to the path over the unit distance dx (including reflectivity at any interface within dx) occupies a fraction of dw equal to itself multiplied by the transmission over x. The solid angle dw is the sum of many fractions of dw that are each filled by cross section at a different distance. The fractions are the fractions of I#f that are intercepted at the corresponding distances. They are also the fractions of the total visible cross section from point O at the corresponding distances. This is essentially a distribution of dw over x that describes the distribution over x of what is visible along x from point O and of how much visibility of I#f at O exists at x. d(dw)/dx can be integrated along x to give a visible cross section per unit area of 1. I#f(O) * d(dw)/dx is the interception at x per unit dx of whatever I#f passes through O. Integration of d(I#b(x)) * d(dw)/dx gives the total backward intensity I#b(O) that reaches O; it sums the proximate sources of I#b(O) over the fractions of dw that correspond to a distribution over x.

    This can be extended farther. Suppose we are interested in the absorption of I#f. For all the fractions of dw that are scattering or reflection cross sections per unit area, the fate of those fractions of I#f can be traced farther, through successive scatterings and reflections, until every last bit is absorbed. This will be a distribution of dw that may extend outside of the path and over other paths, perhaps over some volume. It is the distribution of the absorption of I#f(O). Assuming local thermodynamic equilibrium within each unit volume, It is also the distribution of the emission of I#b(O) - multiplying the distribution density by I#bb(T) and integrating over the distribution gives the I#b(O) value, where I#bb(T) is the blackbody intensity (normalized relative to refraction) as a function of T, however it varies over space.

    I#f(O) also has an emission distribution that can be found, tracing back in the opposite direction from point O.

    The net I# at point O in the forward direction is I#(O) = I#f(O) - I#b(O), and will be in some way proportional to the difference in temperature T (specifically, the difference in the weighted average of Ibb#(T) over the emission distribution) between the emission distribution of I#f(O) and the emission distribution of I#b(O).

    Assuming the forward direction is from higher to lower temperature within the emission distributions, if the two distributions extend over a larger range in T (at a given overall average T), then the net radiant transfer I#(O) will be larger; if the two distributions extend over a smaller range in T (at a given overall average T), then the net radiant transfer I#(O) will be smaller. I#(O) can be changed by either changing the emission distributions or the temperature distributions. Increasing overlap of the distributions will tend to reduce I#(O) (the distribution of emission for either I#b(O) and/or I#f(O) can wrap around the point O when there is sufficient scattering and/or reflection).

    EXAMPLES:

    -----------
    PURE EMISSION AND ABSORPTION (loss to space is 'absorption' by space for climatological purposes):

    The emission distributions of each of I#f(O) and I#b(O) are entirely along the ray path, with zero overlap. Each distribution density decays exponentially from point O in optical thickness coordinates; the distributions are more concentrated near point O in (x,y,z) space when there is greater cross section per unit (x,y,z) volume.

    If there is an emitting/absorbing surface (approximately infinite optical thickness per unit distance), that the ray path intersects, increasing the opacity of the intervening distance between the surface and O reduces the portion of the distribution that is on the surface, pulling it into the path between the surface and O while concentrating the distribution within the path toward point O.

    For a given temperature variation along the path, if the temperature is increasing in one direction, increasing the cross section per unit volume decreases the net I#(O). IF the temperature fluctuates sinusoidally about a constant average, then increasing the cross section per unit volume may have little effect on net I#(O), until the emission distribution becomes concentrated into a small number of wavelengths; if the temperature is antisymmetric about O, the maximum net I#(O) will occur when the emission distribution is mostly within 1 to 1/2 wavelength, so the nearest temperature maximum and minimum dominate the emission distribution; beyond that point, further concentration of the emission distribution reduces I#(O).

    -----------
    PURE SCATTERING within a volume between perfect blackbody surfaces at temperatures T1 and T2 (with 100 % emissivity and absorptivity):

    When the scattering cross section density is zero, net I#(O) is the difference between Ibb#(T1) and Ibb#(T2), except for paths that are parallel to the surfaces or that, with variations in N, never intersect both surfaces.

    The emission distributions will only be at the surfaces and not in the intervening space. The effect of scattering will be to redirect radiation so that a portion of the emission distribution for I#(O) from one side of O will come from the other side of O; this reduces the net I#(O) by mixing some of both Ibb#(T1) and Ibb#(T2) into both I#f(O) and I#b(O).

    Without multiple scattering, forward scattering makes no difference for a direction Q that is everywhere normal to the surfaces (assuming constant N everywhere).

    Increasing either the scattering cross section density, the deflection angles of forward scattering, the portion of single-scatter backscattering, or the angle from the normal to the surfaces, will increase the portion of the emission distribution for I#(O) from one side of O that is shifted to the other side of O.

    Interestingly, scattering could introduce some net I# into paths that do not intersect both surfaces, such as those that intersect the same surface twice due to total internal reflection. But increasing scattering will eventually reduce I# for even those cases.

    Partially reflecting surfaces between the emitting surfaces will have the same general effect as scattering; the emission distributions will not fill a volume with specular reflection alone, but they will spread out over a branching network of paths that penetrates space both back and forth.

    ****

    Having an emitting surface that has emissivity and absorptivity less than 100% would be analogous to placing either some partially reflective surfaces and/or a layer of high scattering cross section density immediately in front of the surface, so that none of the points O considered would be between that layer and the emitting surface.

    -------------------------

    In the case of perfect transparency between surfaces with reflectivities R1 and R2, the I#f from surface 1 to surface 2 along any path that intersects both surfaces would be (PS this assumes that R1 and R2 are not direction dependent, or happen to be constant for all the directions that occur when tracing back and forth along a given path at a point O; at least in the case of no directional dependence, this applies to both specular and diffuse reflection):

    (1-R1) * Ibb#(T1) + R1*(1-R2)*Ibb#(T2) + R1*R2*(1-R1)*Ibb#(T1) + ...

    =

    SUM(j=0 to infinity)[ Ibb#(T1) * (1-R1)*(R1*R2)^j + Ibb#(T2) * R1*(1-R2)*(R1*R2)^j ]

    ---

    And the I#b would be:

    SUM(j=0 to infinity)[ Ibb#(T2) * (1-R2)*(R1*R2)^j + Ibb#(T1) * R2*(1-R1)*(R1*R2)^j ]

    ---

    So the net I# would be:

    SUM(j=0 to infinity)[ Ibb#(T1) * (1-R1)*(R1*R2)^j + Ibb#(T2) * R1*(1-R2)*(R1*R2)^j ]
    -
    SUM(j=0 to infinity)[ Ibb#(T1) * R2*(1-R1)*(R1*R2)^j + Ibb#(T2) * (1-R2)*(R1*R2)^j ]

    =

    SUM(j=0 to infinity)[ Ibb#(T1) * (1-R1)*(1-R2)*(R1*R2)^j - Ibb#(T2) * (1-R1)*(1-R2)*(R1*R2)^j ]

    =

    [ Ibb#(T1) - Ibb#(T2) ] * SUM(j=0 to infinity)[(1-R1)*(1-R2)*(R1*R2)^j ]


    =

    [ Ibb#(T1) - Ibb#(T2) ] * (1-R1)*(1-R2) / [1-(R1*R2)]

    -------

    When R2 = 0,

    I# =

    [ Ibb#(T1) - Ibb#(T2) ] * (1-R1)

    ---

    When R2 = R1 = R,

    I# =

    [ Ibb#(T1) - Ibb#(T2) ] * (1-R)*(1-R) / [(1-R)*(1+R)]

    =

    [ Ibb#(T1) - Ibb#(T2) ] * (1-R)/(1+R)

    -------

    Increasing either R1 or R2 decreases I#.
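
    (As a quick numerical check of the algebra above - a minimal Python sketch that sums the multiple-reflection series term by term and compares it with the closed form; the R1, R2 and Ibb# values are arbitrary illustrations:)

        def net_I_series(Ibb1, Ibb2, R1, R2, nterms=200):
            # Partial sum of SUM(j=0 to infinity)[ (Ibb1 - Ibb2)*(1-R1)*(1-R2)*(R1*R2)^j ]
            total = 0.0
            for j in range(nterms):
                total += (Ibb1 - Ibb2) * (1 - R1) * (1 - R2) * (R1 * R2) ** j
            return total

        def net_I_closed(Ibb1, Ibb2, R1, R2):
            # Closed form: [Ibb1 - Ibb2] * (1-R1)*(1-R2) / [1 - R1*R2]
            return (Ibb1 - Ibb2) * (1 - R1) * (1 - R2) / (1 - R1 * R2)

        print(net_I_series(100.0, 60.0, 0.3, 0.5))   # ~16.47
        print(net_I_closed(100.0, 60.0, 0.3, 0.5))   # 16.47..., series and closed form agree
        print(net_I_closed(100.0, 60.0, 0.5, 0.5))   # R1 = R2 = 0.5 case: 40*(1-R)/(1+R) = 13.33...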

    -------------------------

    MIX OF SCATTERING/REFLECTION AND ABSORPTION/EMISSION (between two opaque surfaces with nonzero emissivities and absorptivities):

    -----------
    Weak scattering and reflection relative to absorption/emission:

    Adding scattering and reflection concentrates the emission distributions closer to O, pulling them off of and away from any opaque surfaces, spreads them out from the ray path into a volume of space, and can cause some overlap of the two distributions. Specular reflection will not cause the emission distributions to fill a volume but will have other effects broadly similar to scattering. For the same scattering cross section density, single scattering dominated by forward scattering with small deflections has the weakest effect. Most of the photons may only scatter or reflect once if the emission/absorption cross section density is large enough relative to the scattering/reflection cross section density.

    -----------
    Weak emission/absorption relative to scattering/reflection:

    Adding emission/absorption cross section density pulls the emission distributions off of the opaque surfaces; that portion which is lifted off the surfaces is concentrated near the point O. Adding more emission/absorption cross section pulls more of the emission distributions off the opaque surfaces and increases the concentration near O.

    With sufficient scattering relative to emission, multiple scattering will tend to diffuse an intensely anisotropic I# into nearly isotropic I#; this will make the emission distributions for both I#f(O) and I#b(O) into nearly spherical regions that are both centered near O, so that there will be great overlap of the two distributions, reducing the net I#(O). With less emission/absorption cross section density, the spheres expand; with zero emission/absorption within space between opaque surfaces, the distributions are left on the surfaces (both distributions nearly evenly divided among surfaces if scattering and/or reflection is sufficient).

    -----------
    Varying proportions of emission/absorption cross section and scattering/reflection cross section:

    All cross sections tend to restrict the emission distributions, but within the larger distribution, pockets of higher densities of emission cross section will have a greater density of the emission distribution. Pockets of sufficiently high densities of scattering and reflection cross sections may reflect and deflect the emission distribution around themselves.

    ------------------------
  • It's the sun

    Patrick 027 at 15:07 PM on 18 May, 2009

    Dan Pangburn (cont.) -

    "Without significant net positive feedback AOGCMs do not predict significant global warming."

    Approx. 1 deg C for doubling CO2 may or may not be considered significant; it is certainly a significant relationship if CO2 varies by a large enough amount.

    "Zero feedback results in 1.2°C from doubling of atmospheric carbon dioxide per p631 of ch8 of UN IPCC AR4 "

    That sounds about right.

    What I want to emphasize here is that the logarithmic proportionality of radiative forcing to atmospheric CO2 level has nothing directly to do with whether or not there are positive or negative feedbacks to radiative forcing or whether tipping points might be crossed as radiative forcing is changed.
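
    (For a rough sense of the numbers - a minimal Python sketch using the commonly quoted simplified logarithmic fit F = 5.35*ln(C/C0) W/m2 and a representative no-feedback response parameter of about 3.2 W/m2 per K; both are standard textbook approximations assumed here, not values taken from the comment above:)

        import math

        def co2_forcing(C, C0=280.0):
            # Simplified logarithmic fit for CO2 radiative forcing (W/m2)
            return 5.35 * math.log(C / C0)

        LAMBDA_PLANCK = 3.2  # assumed no-feedback (Planck) response parameter, W/m2 per K

        F_2x = co2_forcing(560.0)               # doubling: ~3.7 W/m2
        dT_no_feedback = F_2x / LAMBDA_PLANCK   # ~1.2 K, roughly the zero-feedback figure quoted
        print(F_2x, dT_no_feedback)

    (The logarithm only describes how the forcing scales with concentration; whether feedbacks then amplify or damp the response to that forcing is the separate question referred to above.)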

    -----------

    Regarding the time scale dependence of feedbacks: there could be some exceptions, but the general tendency is for Earth's climate to vary the most in response to externally-imposed forcings with time scales ranging from perhaps many decades to perhaps hundreds of thousands of years, or something similar to that.

    Simplified hypothetical examples (with a qualitative resemblance to reality, but I don't actually know some of the real numerical values) to illustrate the point:

    Suppose at time 0, there is a sharp change in radiative forcing of +4 W/m2 - perhaps from an increase in solar radiation absorbed over the Earth's surface (for an albedo of 0.3, and taking into account that the surface area of a sphere is 4 times its cross sectional area, a 4 W/m2 solar forcing actually requires about a 23 W/m2 increase in solar TSI, quite a bit larger than any variation known to occur outside the long-term solar brightening over 100s of millions of years that is a characteristic of stellar evolution; recent solar TSI variations (over the period of time relevant to AGW) may be a tenth of that or perhaps less).

    BEFORE CONTINUING THAT, BACKGROUND INFO:
    ------------
    (PS: often what is used for 'radiative forcing' is the tropopause-level radiative forcing with an equilibrated stratosphere. I think this is the value that is close to 4 W/m2 (actually maybe 3.7 W/m2, give or take a little) for a doubling of CO2, and I think that includes the SW effects of CO2, which are much smaller than the LW effects but are present (CO2 can absorb some SW radiation).) Radiative forcing at any level is the sum of a decrease in net outward (upward minus downward) LW radiation (mainly emitted by Earth's surface and atmosphere) at that level and an increase in absorbed SW (essentially all solar) radiation below that level; the climatic response involves changes in temperature that change the LW radiant fluxes to balance the forcing plus any radiative feedbacks that occur (which can be LW and/or SW). The variation in radiative forcing over vertical distance is equal to a radiatively forced heating or cooling.

    Top-of-atmosphere (TOA) radiative forcing is the sum of a decrease in LW emission to space and an increase in all absorption of SW radiation. An increase in solar TSI of 2 W/m2 results in a (globally averaged) TOA SW forcing of 0.35 W/m2 if the TOA albedo (the fraction of all SW radiation incident at TOA that is reflected to space) is 0.3. But the tropopause level forcing will be less than the TOA forcing because some of that 0.35 W/m2 is absorbed in the stratosphere - and it generally will be a larger fraction than the fraction of all SW radiation absorbed in the stratosphere, because solar UV fluxes are proportionately more variable than total TSI.
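
    (A minimal sketch of the arithmetic behind those two numbers, in Python - the 0.3 albedo and the factor of 4 are as stated above; the neglect of stratospheric absorption is a simplification:)

        def toa_solar_forcing(d_tsi, albedo=0.3):
            # Globally averaged TOA SW forcing from a TSI change (W/m2)
            return d_tsi * (1.0 - albedo) / 4.0

        def tsi_change_needed(forcing, albedo=0.3):
            # TSI change needed to supply a given globally averaged forcing (W/m2)
            return forcing * 4.0 / (1.0 - albedo)

        print(toa_solar_forcing(2.0))      # ~0.35 W/m2, as above
        print(tsi_change_needed(4.0))      # ~22.9 W/m2, the "about 23 W/m2" figure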

    An increase in the greenhouse effect involves increasing the opacity of the atmosphere over portions of the LW spectrum. Aside from LW scattering
    ...
    (which is minor for Earthly conditions, but can also contribute to a greenhouse effect in theory under some conditions (such as with dry ice clouds), but in a different way than atmospheric absorptivity and emissivity (by reflecting LW radiation from the surface or lower layers of air back downward); for Earthly conditions, scattering is much more important at shorter wavelengths)
    ...,
    each layer of atmosphere emits and absorbs LW radiation to the extent that it lacks transparency to radiation from behind it (in either direction). The surface also emits and absorbs LW radiation, almost as a perfect blackbody (but not perfectly; it does reflect a little LW radiation from the atmosphere back to the atmosphere). Along a given path at a given wavelength, absorptivity = emissivity when in local thermodynamic equilibrium (a good approximation for the vast majority of the mass of the atmosphere and surface), where emissivity is the intensity of emitted radiation divided by the blackbody radiation intensity (a function of wavelength and temperature, and index of refraction, but that last point can be set aside for radiation in the atmosphere) for the temperature of the layer or surface, and the absorptivity is the fraction of radiant intensity absorbed along a path. As a path's optical thickness increases, either by geometric lengthening or by increasing the density of absorbing gases or cloud matter, absorptivity and emissivity both rise exponentially from zero toward 1 (the transmitted fraction decays exponentially), or toward a lower number if there is reflection or scattering involved.
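
    (In other words, for a non-reflecting, non-scattering path the absorptivity (= emissivity) is 1 - exp(-tau), where tau is the optical thickness - a minimal Python sketch:)

        import math

        def absorptivity(tau):
            # Fraction of incident intensity absorbed along a path of optical thickness tau
            # (equal to the emissivity of that path under local thermodynamic equilibrium,
            # with no scattering or reflection)
            return 1.0 - math.exp(-tau)

        for tau in (0.1, 0.5, 1.0, 2.0, 5.0):
            print(tau, round(absorptivity(tau), 3))   # 0.095, 0.393, 0.632, 0.865, 0.993

    (Doubling the absorber amount doubles tau, so the transmitted fraction exp(-tau) decays while the absorptivity saturates toward 1, or toward a lower ceiling when scattering or reflection removes part of the beam.)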

    Positive TOA LW forcing is caused by a decrease in LW emission to space from increased opacity, which hides a greater portion of the (globally and time-averaged) larger LW fluxes from the (globally and time-averaged) warmer surface and lower atmosphere from space, replacing them with reduced LW fluxes from generally cooler upper levels of the atmosphere (the warmth of the upper stratosphere is in a very optically thin layer at most LW wavelengths, and the thermosphere is too optically thin to have much effect).

    For relatively well-mixed gases (such as CO2), increasing concentration also cools the stratosphere by increasing the stratosphere's emission to space and decreasing the upward LW flux that reaches the stratosphere. Thus, the tropopause level radiative forcing from an increase in CO2 is actually greater than the TOA level radiative forcing.

    (The SW forcing from CO2 absorption of SW radiation tends to heat the stratosphere, but the LW effect dominates. If there were an increase in SW absorption in the troposphere, this would add to tropopause level forcing, but it would (along with stratospheric SW absorption) reduce forcing at the surface.)

    Increasing LW opacity also tends to increase radiative forcing at the surface by increasing downward emission from the lowest (and generally, on average, warmest) layers of the atmosphere, by making them more opaque (they replace a fraction of the smaller LW fluxes from the upper layers and the lack of LW flux from space with a larger increase in their emitted LW flux).

    Increasing solar TSI has a positive radiative forcing at the surface, which is smaller than that at the tropopause level because some SW radiation is absorbed in the troposphere.

    Other points:

    Volcanic stratospheric aerosols have a larger negative SW forcing at the surface and tropopause than at TOA because they absorb some solar radiation as well as scatter it.

    An increase in albedo at one level (at the surface or within the atmosphere) tends to produce a negative SW forcing, but it will be larger below that level than above to the extent that the increase in upward SW radiation above increases SW absorption (heating) above that level. An increase in absorption of SW radiation (such as by water vapor) only results in a positive TOA forcing in so far as it reduces the amount of SW radiation reflected to space (by intercepting SW radiation both before and after scattering), and will result in a negative forcing at lower levels.

    ----

    The stratosphere has a low heat capacity and tends to reach equilibrium with radiative forcing on short timescales (sub-seasonal, as I recall). Radiative forcing with stratospheric adjustment includes changes in LW radiation within and from the stratosphere resulting from stratospheric temperature changes. This tends to reduce the difference between TOA and tropopause-level forcing from before stratospheric adjustment. It is useful to use tropopause-level forcing with stratospheric adjustment because the remaining climatic response will tend to be more similar among different forcing mechanisms (solar forcing warms the stratosphere and thus stratospheric adjustment increases forcing at the tropopause; the opposite is the case with CO2). There can still be differences in efficacy (the climate sensitivity to global and annual average forcing from one forcing agent relative to a reference forcing agent) - for example, black aerosols on snow and ice (I am not 100% sure, but I think the effect may be amplified because the warming is concentrated in regions where there is a strong positive feedback, resulting in greater global-average warming per unit global average radiative forcing), and also, perhaps, how the effects of solar, volcanic, well-mixed greenhouse gas, and stratospheric ozone depletion forcings affect the circulation patterns of the stratosphere and troposphere and the interactions between them (NAM, SAM, circumpolar vortex). Also, solar forcing can change the ozone level in the stratosphere - but so can climate change in general (temperature-dependent chemical reactions, polar stratospheric clouds, circulation patterns that bring ozone from the tropics to the high latitudes and then downward).

    Why is tropopause level radiative forcing so important?

    In the global average, solar heating, although somewhat distributed among the surface and atmosphere, is displaced downward relative to the distribution of radiative cooling to space. In pure radiative equilibrium, this would be balanced by radiative fluxes among the surface and different levels of the atmosphere. However, the temperature gradient required for such radiative equilibrium is unstable to convection in the lower atmosphere. Thus, the climate tends to approach a radiative-convective equilibrium, in which, to a first approximation, a net convective flux (including surface evaporative cooling and latent heating upon condensation/freezing of water) cools the surface and heats the troposphere, balancing a net radiative heating of the surface and a net radiative cooling distributed within the troposphere. Localized vertical convection, where it occurs, causes the troposphere's vertical temperature distribution to approach neutral stability - a temperature decline with height near the adiabatic lapse rate (the rate at which temperature decreases due to expansion of some mass of gas with decreasing pressure, in the absence of a heat flux into or out of that mass). Because of condensation, the lapse rate that applies (except near the surface, below cloud level) is the moist adiabatic lapse rate - it is less than the dry adiabatic lapse rate because of latent heating upon ascent. It diverges most from the dry rate when latent heating per unit of vertical lifting is greatest - which is at higher temperatures (found lower in the atmosphere). Thus the moist adiabatic lapse rate varies over the globe and with weather conditions and seasons, though a good representative value is 6 or 6.5 K per km.
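
    (To put rough numbers on that - a minimal Python sketch using the standard dry rate g/cp and a standard pseudoadiabatic lapse-rate formula with a Tetens-type saturation vapour pressure; the 850 hPa pressure level and the sample temperatures are illustrative assumptions:)

        import math

        def moist_adiabatic_lapse_rate(T, p=85000.0):
            # Approximate saturated (pseudoadiabatic) lapse rate in K/m at temperature T (K)
            # and pressure p (Pa).
            g, Rd, Rv, cp, Lv = 9.81, 287.0, 461.5, 1004.0, 2.5e6
            eps = Rd / Rv                                      # ~0.622
            Tc = T - 273.15
            e_s = 611.2 * math.exp(17.67 * Tc / (Tc + 243.5))  # saturation vapour pressure, Pa
            r_s = eps * e_s / (p - e_s)                        # saturation mixing ratio
            num = g * (1.0 + Lv * r_s / (Rd * T))
            den = cp + (Lv ** 2) * r_s * eps / (Rd * T ** 2)
            return num / den

        print(9.81 / 1004.0 * 1000.0)                          # dry adiabatic rate: ~9.8 K/km
        for T in (303.0, 283.0, 263.0):                        # warm, mild, and cold parcels
            print(T, round(moist_adiabatic_lapse_rate(T) * 1000.0, 1), "K/km")

    (At warm tropical temperatures the moist rate comes out near 3 to 5 K/km and it climbs back toward the dry rate as temperature falls, which is why a representative mid-value of about 6 or 6.5 K/km is often quoted.)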

    Because radiative fluxes by themselves would drive the lower atmosphere toward being convectively unstable, the surface and various levels within the troposphere tend to warm up or cool off together in response to forcings - they are convectively coupled. Any increase in radiative forcing at the tropopause level corresponds to some change in radiative heating below the tropopause level. If this radiative heating is concentrated at some level, it will, without changes in convective heat fluxes, warm up that level, decreasing vertical stability above and increasing it below, thus slowing convective heat transport up to that level from below and increasing it from that level to above. Convection thus spreads the heating effect vertically throughout the depth that convection can occur. So the surface and all levels within the troposphere warm up by similar amounts. The warming may be a bit less at the surface because the moist adiabatic lapse rate decreases with increasing temperature (assuming the cloud base level (lifting condensation level) does not rise on average, etc., because the dry adiabatic lapse rate applies to convection below that level and it is larger and is less sensitive to temperature).

    Complexities of response:

    1.
    This is complicated by spatial and temporal variations.

    1a.
    The radiative forcing (and its vertical variations) for any given change is not generally evenly distributed over space and time; just as each additional unit of any one substance (gas or otherwise) will, beyond some point, have a decreasing marginal effect, different agents can overlap with each other; additional CO2 will have less effect in cloudy and humid air masses (although the tropopause level forcing will depend much more on high-level clouds and upper tropospheric humidity than on low-level clouds and humidity, since the CO2 in the cold air above a warm cloud or warm humid air mass will still block some LW radiation emitted from those warmer layers;
    ...
    it is also worth pointing out for other reasons that reduction in CO2 radiative forcing by H2O vapor will be greater for surface forcing than for tropopause level forcing at least in part because H2O vapor relative concentration decreases generally exponentially with height, whereas CO2 is well mixed).
    ...
    There is, however, a (climate-dependent) average distribution of optical properties and their alignment with temperature variations, and thus of radiative forcing, and the resulting temperature change takes time (short-term weather phenomena can actually be described to a large extent without taking much radiation into account, except for the diurnal solar heating cycle). Clouds and humidity cannot realistically be rearranged relative to the horizontal and vertical distribution of temperature with infinite freedom; some things are linked by simple physics and some things correspond predictably because of the basic structure of the atmosphere and its long-term climate (diurnal and annual cycles, land-sea and other geographical heating contrasts, the Coriolis effect, Hadley cells, Walker circulation, monsoons, subtropical dry belts, midlatitude storm tracks, wind-driven and thermohaline ocean circulation, mesoscale convection phenomena, characteristics of variability in the QBO, ENSO, NAM and SAM, PDO, AMO, etc., inertial oscillations, inertio-gravity waves, Rossby waves, ...). The global average radiative forcing by mathematical definition corresponds to a global average radiative heating rate below the level considered; if the level forms a closed surface, that heating, however horizontally distributed, cannot simply leak out without some change in the climate itself - increased temperature to increase the net LW flux out to balance the radiative forcing plus any radiative feedbacks.

    --------------
    (When in climatic equilibrium, the Earth loses heat to space by LW emission at the same rate as it absorbs SW radiation (plus a TINY fudge factor for geothermal and tidal heating). This is a necessary but not sufficient condition for a climatic equilibrium, because climate change can in principle involve spatial and seasonal rearrangements of radiative heating and cooling and of the convection/advection that balances them, which, when averaged over fluctuations, could result in zero global-time-average change in radiant fluxes. However, there are tendencies for the climate to behave in some ways and not others for any given set of solar, greenhouse, aerosol, geographic, biologic, and orbital (Milankovitch) forcings, etc. A longer-term equilibrium climate can be defined that includes the patterns/textures of cyclical and/or chaotic shorter-term variability, both from internal variability and from forcing cycles and fluctuations on the shorter time scales (annual and daily cycles, volcanic eruptions); when the statistics of such short-term episodic events do not vary over longer time periods, the resulting short-term climate fluctuations can be incorporated into a description of the longer-term equilibrium climate.)
    --------------

    1b.


    There are daily, seasonal, latitudinal and regional, and weather-related and interannual variations in the distribution of convection and vertical stability in particular. Because much or most latent heating is associated with precipitation that reaches the surface, regions of descent are often dry; descent is also often slow over large areas, so adiabatic warming may be balanced by radiative cooling. Horizontal heat transport in the air from regions where much heat is convected from the surface can produce regions where the air is stable to localized overturning; this is especially true of polar regions in winter, where the surface and lowermost air is often or generally colder than some of the higher tropospheric air. Over land, there is a significant diurnal temperature cycle at and near the surface that is not matched by a similar cycle above - this is because a majority of solar heating is concentrated near the surface over a smaller heat capacity (in sufficiently deep water, there is a large heat capacity that damps short-term temperature cycling; finite thermal conductivity into soil and rock limits the depth available to supply heat capacity for radiative cycling as a function of frequency); thus, the daily high temperature near the surface is more coupled convectively to the temperatures in the rest of the troposphere than the nighttime/morning low temperature is.
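
    (For the "depth available to supply heat capacity" point, the relevant scale is the thermal skin depth sqrt(2*kappa/omega) for a cycle of angular frequency omega - a minimal Python sketch; the soil diffusivity is a rough assumed figure:)

        import math

        def skin_depth(kappa, period_seconds):
            # e-folding depth of a temperature cycle diffusing into a half-space (m)
            omega = 2.0 * math.pi / period_seconds
            return math.sqrt(2.0 * kappa / omega)

        KAPPA_SOIL = 5e-7            # m2/s, rough thermal diffusivity of soil (assumed)
        DAY = 86400.0
        YEAR = 365.25 * DAY

        print(skin_depth(KAPPA_SOIL, DAY))    # ~0.1 m: only the top ~10 cm cycles daily
        print(skin_depth(KAPPA_SOIL, YEAR))   # ~2 m: the annual cycle reaches a few metres

    (The shallower the layer that participates in a cycle, the less heat capacity is available to damp it - part of why land, unlike deep or well-mixed water, shows a large diurnal temperature range at the surface.)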

    Horizontal temperature gradients can and do supply potential energy for large-scale overturning even when the air is locally stable to vertical convection, but this occurs more readily when the air is less stable; when air is more stable, a smaller amount of overturning is sufficient to eliminate horizontal temperature gradients by adiabatic cooling of rising air and warming of sinking air. There is a sort of large-scale convective/advective coupling of temperature change patterns, as either reduced horizontal temperature gradients or increased vertical stability will tend to reduce the large-scale overturning (the Hadley cells, monsoons, Walker circulations, and the synoptic-scale circulations of strengthening baroclinic waves (the midlatitude storm track pressure systems and the jet stream undulations that correlate with them)) - when any overturning on any scale increases, it reduces the tendency for more overturning by mixing heat horizontally and/or stabilizing the air to local vertical convection; a decrease in overturning has the opposite effect, so there is a tendency to approach an equilibrium overturning rate or at least fluctuate about such a rate; however, the spatial arrangement and category of overturning are a bit less constrained, allowing for internal (unforced) variability. And some circulation patterns (cumulus clouds and hurricanes in the short term, ENSO and some forms of storm track variability) can reinforce and strengthen themselves with feedbacks involving self-reinforcing distributions of latent heating and self-reinforcing momentum fluxes (but beyond some point, the midlatitude storm tracks are anchored to the way solar radiation varies with latitude, hurricane activity is regulated by sea surface temperatures and large-scale circulation tendencies and temperature gradients, etc., and ENSO is in a way limited in magnitude by the width of the Pacific ocean - the warm water normally in the western tropical Pacific can only slosh back as far as the Americas)...

    The simple 1-dimensional globally representative model (describing everything in terms of a balance between vertical fluxes) also implies that the stratosphere is exactly in radiative equilibrium, but this is only approximately true for the global average. Some kinetic energy produced by overturning in the troposphere actually propagates (via Rossby waves and gravity waves) into the stratosphere and mesosphere and drives circulations there - that kinetic energy is converted to heat in the process, though it is a small amount. The larger effect, as I understand it, is large regional deviations from radiative equilibrium - sinking regions are adiabatically warmed, causing them to be warmer than the radiative equilibrium temperature, so they radiatively cool; rising regions do the opposite.

    (PS the QBO is a nearly-cyclical fluctuation of winds in the equatorial stratosphere that is driven by noncyclical fluxes of momentum from the troposphere, carried by a family of equatorial waves (including in particular Rossby-gravity and Kelvin waves); the cycle is self-organizing - the vertical distribution of winds in the stratosphere regulates where the momentum in different directions carried by different kinds of waves is actually deposited, so that regions of westerly and easterly flow alternately appear at higher levels and slowly propagate downward.)

    2. While the temperature response of the surface and troposphere together tends to follow the (global-average) tropopause level forcing, the distribution of radiative forcing will affect the convection rates and thus the circulation patterns.

    However, except when a forcing is too idiosyncratic, the general tendency of the climate response to a positive tropopause level radiative forcing is:

    At the surface, greatest warming is in higher latitudes in winter where the albedo-feedback is strongest (the summer reduction of sea ice causes winter warming because the solar radiation is absorbed by water without much temperature increase, but this stored heat must then be released in the colder months before ice can reform). In the tropics, increased evaporative cooling is a negative feedback (at least over moist surfaces), but this is balanced by increased latent heating at higher levels - at low latitudes, the greatest warming will tend to be in the mid-to-upper troposphere because of the decrease in the moist adiabatic lapse rate. The stability of the air at high latitudes could help explain why high latitude warming is concentrated near the surface. Because of the opposite tendencies in the large-scale horizontal temperature gradients between lower and higher levels of the troposphere, the effect on baroclinic wave activity (midlatitude storm tracks) is not immediately clear, but more water vapor will be available for latent heating (the horizontal temperature gradient is a necessary condition for baroclinic waves but it is not their only fuel source), and perhaps the reduced vertical stability at higher levels might contribute to a poleward shift in activity (possibly with a positive cloud feedback on the storm tracks' subtropical flanks) - but there are other factors, including changes in the stratosphere and stratosphere-troposphere mechanical interactions (also affected by ozone depletion). The tropopause height will also increase (but is that more for greenhouse forcing than solar forcing?). Because of the dominance of the ocean in the Southern midlatitudes, the wind-driven upwelling of cold water (which, coming from below, will not warm much until the temperature signal of climate change has spread sufficiently through the deeper ocean), and the relative stability of much of the Antarctic ice sheet (at least for a while) (as opposed to Arctic sea ice in particular), the near-surface high latitude polar warming will not be especially large relative to low latitudes in the Southern Hemisphere, at least during the first few centuries (??). (Northern hemisphere land masses also have a seasonal snow albedo feedback.)

    The similarity of radiative feedbacks might overwhelm some differences in radiative forcings. The water vapor feedback in particular will have a much stronger radiative forcing at the surface than at the tropopause level (but the tropopause level water vapor feedback is sizable compared to the externally imposed forcing). Because of this, changes in vertical convection rates due to different forcing mechanisms might be more similar. (However, setting aside the radiative implications of the diurnal temperature cycle over land, the global average net convective cooling of the surface cannot get any larger than the direct solar heating of the surface; and precipitation (aside from dew and frost) can only balance evaporative surface cooling, which cannot exceed total convective cooling. Increasing the greenhouse effect will tend to increase precipitation but it cannot do so beyond such limits; aerosol cooling tends to decrease precipitation in greater proportion to its effect on temperature, so balancing greenhouse warming with aerosol cooling would reduce precipitation in the global average. Where there is a regionally-concentrated forcing, such as by the Asian Brown Cloud, in which there is some tropospheric radiatively-forced warming but a negative radiative forcing at the surface, the temperature response at different levels on the same regional scale will not be coupled so much by convection; convection may be reduced in that region with perhaps some increase elsewhere, depending on how much radiative forcing of each sign occurs, etc. ...)

    The greenhouse effect tends to decrease the diurnal temperature cycle near the surface by decreasing the relative importance of solar heating in the radiative energy budget - by increasing downward LW radiation by increasing LW opacity, and maybe by increasing LW radiation in both directions by increasing temperature (but only to the point that the net LW flux from the surface doesn't increase (??)). This is related to the larger diurnal temperature cycle found at higher elevations and on clear nights with dry air. Wind can reduce the diurnal temperature variation by producing turbulence to mix heat downward at night when the surface is radiatively cooling.

    (Some feedbacks to global warming could regionally alter the surface temperature relative to temperature at other levels by affecting the rate of evapotranspiration.)

    (When there is sufficient solar heating on land, the surface temperature is actually warmer than the air temperature just above it. The surface impedes effective convection, leaving thermal conduction and diffusion to transport heat and humidity from the surface to the air within that very thin layer of air next to the surface. This doesn't destroy the convective coupling of surface temperature to air temperature, but it adds another link in the chain.)

    ------------

    (to be continued...)
  • Arctic sea ice melt - natural or man-made?

    Arkadiusz Semczyszak at 01:37 AM on 2 December, 2008

    Thanks, Patrick,
    Chris, Philippe Chantreau
    I can’t agree with You,
    The density of stomatal pores in fossil plant leaves is a fundamentally valid tool (not only in the ice core context) for arguing that we do not have a real unbalanced surplus of anthropogenic CO2, that CO2 does not have a long lifetime in the atmosphere, and that variability of GHGs, similar to the present, has always existed; in short, it confirms that temperature rises first and the CO2 concentration of the air rises later… In summary: the melting of glaciers is not only man-made…
    The density of stomata varies with such factors as: the temperature, humidity, and light intensity around the plant and also the concentration of carbon dioxide. The mature leaves on the plant detect the conditions around them and send a signal that adjusts the number of stomata that will form on the developing leaves.
    Not all plants can be used for these experiments. Only some species show a linear relationship between CO2 and stomata. They are tested and calibrated in greenhouses under a very wide range of conditions. The first research on this was done in 1974… The results reported by Gregory Retallack (Nature, 411:287, 17 May 2001), from his study of the fossil leaves of the ginkgo, were cited in the IPCC reports…
    “The reliability of this method testing on a total of 285 previously published SD and 145 SI responses to variable CO(2) concentrations from a pool of 176 C(3) plant species.” – Wagner said for students…

    The resolution of this method is limited and "smoothed" because "…although the mechanism may involve genetic adaptation and therefore is often not clearly expressed under short CO(2) exposure times." – "…don't show wild and massive up and down jumps…"
    (Wagner et al, 2002) "…to vary by around 295 +/- 10 ppm over a period of around 2000 years" – that is an inadmissible "shortening". The variability observed in Fig. 2 is between ~275 – 330 ppmv CO2, and with the standard deviations ~245 – 340 ppmv (the greatest drop, certainly with the standard deviations, within a few years! ~7750 BP = 280 – 340 ppmv CO2; over ~30-40 years, 250 – 320 ppmv around 8700 BP; at the greatest change, ~245 – 320 ppmv CO2 in < 150 years, ~8450 – 8600 BP). The range of variability in the analyzed period for the ice core is ~10 ppmv… So roughly 55 ppmv (a 95 ppmv range including standard deviations) versus 10 ppmv – is that a "relatively small disagreement"?
    A very interesting comparison is with Fig. 3C in Baker et al. 1998. The correlation, even the r-squared, between the European fossil stomata and the % C4 in America should be > 80 percent… If that is true, the range of CO2 variability in the Holocene would be between ~200 – 340 ppmv CO2, with an especially rapid and large change between 4800 – 3400 BP. That is nicely confirmed by the δ13C composition of stalagmite calcite (Fig. 3A) and…
    … for example, from recent news about this variability: the "sedimentary total organic" record in „Holocene weak summer East Asian monsoon intervals in subtropical Taiwan and their global synchronicity" (http://www.clim-past-discuss.net/4/929/2008/cpd-4-929-2008.pdf - see especially Fig. 3). The four centennial periods "of relatively reduced summer East Asian monsoon", ~8–8.3, 5.1–5.7, 4.5–~2.1, and 2–1.6 kyr BP, show a very interesting correspondence with all the indices in Baker et al. and Wagner et al.…
    In closing, I think the percentage of C4 plants may be the "fairest" proxy for reconstructing CO2 levels (small influence of warmth, rain, other precipitation, etc.)

    E. Steig and J. Severinghaus, on RealClimate on 27.04.2007, say that what is very important is that the CO2 concentration in the last 650,000 years was never above 290 ppmv…, "I'd be very interested to know what they thinks will be achieved trying to cheat us in this way"…

    T. B. van Hoof et al (2008) – "CO2 levels varied by around +/- 10-15 ppmv" (often > 30 ppmv, more with the standard deviations, within a few years!), building on earlier studies: "Coupling between atmospheric CO2 and temperature during the onset of the Little Ice Age" (van Hoof 2004). There is one more thing: the shapes are confirmed by the D47 core (although there it is only ± 6 ppmv), and both are compared with other research on fossil stomata (in L. Kouwenberg's dissertation). Interesting is Fig. 2.6 (chapter 2) – rises in temperature in the Mann and Jones 2003 reconstruction (likewise Moberg, Esper, etc.) around ~1180, 1250, and 1320 AD preceded increases in the CO2 level… – "a temperature response rather"?
    Kouwenberg, in her research conclusions, said:
    “Four native North American conifer species (Tsuga heterophylla, Picea glauca, P. mariana, and Larix laricina) show a decrease in stomatal frequency to a range of historical CO 2 mixing ratios (290 to 370 ppmv). [!]”
    Well, well…
  • Arctic sea ice melt - natural or man-made?

    chris at 04:22 AM on 20 November, 2008

    Re #333

    Arkadiusz,

    ONE: plant leaf stomatal index of past CO2 levels.

    I'm not sure where you get your "..CO2 jumping - even above 100 ppmv by ~ forty - fifty years."! You've brought two stomatal index papers to our attention. These are:

    F. Wagner et al. (2002) "Rapid atmospheric CO2 changes associated with the 8,200-years-B.P. cooling event" Proc. Natl. Acad. Sci. USA 99, 12011-12014

    and:

    T. B. van Hoof et al (2008) "A role for atmospheric CO2 in preindustrial climate forcing" Proc. Natl. Acad. Sci. USA 105, 15815-15818

    In the first one (Wagner et al, 2002) plant leaf stomatal proxy CO2 levels were reconstructed to vary by around 295 +/- 10 ppm over a period of around 2000 years (8700 BP - 6800 BP). That's certainly not consistent with Beck's massive jumps over short periods.

    In the second one reconstructed atmospheric CO2 levels varied by around +/- 10-15 ppm over a period of around 500 years.

    So neither of these is really consistent with Beck. One might conclude that the ice core CO2 data is somewhat "smoothed" out by averaging of atmospheric CO2 over a period of several years before the air in the firn is sealed off (see my post #330 above and the discussion of the formation of polar ice). However, the stomatal index reconstructions don't show wild and massive up and down jumps of atmospheric CO2.

    One should also be a little careful in assessing the stomatal index CO2 reconstructions. As you may know, the analysis is based on observing the size/number of the stomatal pores in plant leaves, with the assumption that as CO2 levels rise the plants respond by reducing their stomatal pore density (I think that's right!). However there is still quite a bit of controversy amongst the practitioners themselves as to how reliable the proxy CO2 levels are. If you look at the two papers cited one can see that the error bars are very large (e.g. in the Wagner et al paper they encompass up to almost the entire range of CO2 variation). One might also question whether the stomatal index varies with temperature; e.g. the cold spell near 8200 BP studied by Wagner et al (2002) is thought to be due to the collapse of the remnant of the Laurentide ice sheet as part of the late stage of the deglaciation into the Holocene. The effect on the plant stomatal index might have contributions from a temperature response rather than from a drop in atmospheric CO2.

    But whatever the relatively small disagreement between the stomatal plant proxies for CO2 and the ice core measures, all of the paleoproxy reconstructions of atmospheric CO2 show reasonably steady CO2 levels before the preindustrial age. They certainly don't display "Beck-style" massive up and down jumps.

    And of course we know exactly why Beck's "analysis" shows massive up and down jumps. Much of the data he presents is from data measured in cities. If one looks at some of the original papers that Beck trawls through for his "analysis", one finds that "atmospheric CO2 levels" jump by 40 ppm from the morning to the afternoon, for example.

    That's what happens in cities! We don't need to pretend to be taken in by Beck's ludicrous misrepresentation.

    A more detailed critique is described here (see post #172):

    http://www.skepticalscience.com/solar-activity-sunspots-global-warming.htm





    TWO: historical paleotemperatures in Fontainebleau.

    I don't see your point in directing us to this paper. It seems a nice paper:

    i.e. N. Etien et al. (2008) "A bi-proxy reconstruction of Fontainebleau (France) growing season temperature from A.D. 1596 to 2000" Clim. Past, 4, 91-106

    It shows a very typical proxy temperature evolution over the last 140 years that indicates that the region of Fontainebleau in France is warmer now than it's been in the past. As the authors conclude their abstract:

    "The persistency of the late 20th century warming trend appears unprecedented."

    What did you consider significant? The delta 13C spikes that you note are not necessarily very significant. Again the authors state that one needs to be careful in interpreting delta 13C data from timber; they say:

    "...This argument acts against the use of delta-13C measurements for long term temperature reconstructions despite the fact that it can slightly improve reconstructions for the 20th century."

    Again, this paper seems entirely consistent with our understanding of the climate in Europe during the last 200 years. It doesn't really bear on Beck's nonsensical analysis at all. Remember that when we are considering atmospheric CO2 concentrations we are interested in the globally averaged levels, and aren't interested in the sort of local effects that make Beck's analysis completely useless.
  • Volcanoes emit more CO2 than humans

    Patrick 027 at 13:13 PM on 19 October, 2008

    ...
    or

    5. internal variability greater than thought

    __________


    About efficacy of forcings:

    I haven't actually read much about that but here's what I would expect:

    Consider a forcing by

    Solar TSI
    LW (greenhouse) forcing
    volcanic stratospheric aerosols
    tropospheric aerosols
    surface albedo

    For any given forcing - let's start with radiative forcing - there is:

    1. a global average TOA (top of atmosphere) value, R-TOA.

    2. a global average tropopause value, R-tp

    3. a global average surface value, R-sfc.

    4. Some spatial-temporal (seasonal, perhaps interannual) variation in any of R-TOA, R-tp, R-sfc, which I will simply refer to here as R-var.

    5. Some climatic response which results from the effect of R and feedbacks.

    -

    To start with, we might assume as an approximation that the global-average climatic response is similar for a given R-tp or R-TOA, whatever the forcing. Then we might look for deviations from that.

    Differences:

    R-TOA is the forced net change in downward minus outgoing radiation 'at' the top of the atmosphere.

    R-tp is different from R-TOA; both are different from R-sfc. First:

    1.
    An increase in solar TSI - if it is the same % increase at all wavelengths - is a forcing that heats the atmosphere and surface in a distributed (uneven) way. R-TOA is the sum of all of this heating; R-tp is only the heating below the tropopause and is therefore somewhat less than R-TOA; R-sfc is only the surface heating and is therefore less than R-tp.

    Typically changes in solar TSI are greater in UV in particular, so a larger fraction than otherwise of solar forcing goes into heating the upper atmosphere, thus decreasing R-tp even further.
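
    (A toy illustration of the resulting ordering R-TOA > R-tp > R-sfc for a solar forcing - the absorption fractions below are made-up placeholders chosen only to show the bookkeeping, not measured values:)

        # Toy bookkeeping for a 2 W/m2 TSI increase with albedo 0.3
        d_tsi = 2.0
        albedo = 0.30
        f_above_tropopause = 0.15   # assumed fraction of the extra absorption above the tropopause
        f_in_troposphere = 0.20     # assumed fraction of the extra absorption within the troposphere

        R_TOA = d_tsi * (1.0 - albedo) / 4.0                            # all extra absorption, ~0.35 W/m2
        R_tp = R_TOA * (1.0 - f_above_tropopause)                       # heating below the tropopause only
        R_sfc = R_TOA * (1.0 - f_above_tropopause - f_in_troposphere)   # heating of the surface only
        print(R_TOA, R_tp, R_sfc)                                       # R_TOA > R_tp > R_sfc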

    2.
    Greenhouse forcing is a reduced cooling to space, which is a heating of the surface and/or lower atmosphere. The cooling to space of the stratosphere and above, however, increases, while the heating of higher atmospheric layers by the surface and/or lower troposphere decreases. Thus for greenhouse forcing, R-TOA will be a little less than R-tp. Starting at minimal LW opacity, R-sfc might be greater than R-tp (?), but at least for CO2, my impression is that increases from the current amount result in greater R-tp than R-sfc. Water vapor is a feedback, but applying the same concepts to water vapor, I think, at least under some conditions, R-sfc is greater than R-tp for water vapor. This is at least in part due to water vapor's increasing concentration toward the surface. Ozone concentration is also variable, so greenhouse effects of ozone changes may be a bit different than for the 'typical' well-mixed greenhouse gas.

    The exact relationship between R-tp, R-sfc, and R-TOA for even well-mixed greenhouse gases (like CO2, CH4, N2O, CFCs) (they have some spatial and seasonal variations, but not to the degree of ozone or water vapor) could vary, because they have different spectra, because temperature (and water vapor, ozone, and cloud content) varies with height (and other dimensions), because they may overlap with each other and with other things in different ways due to the above differences, and because they have different initial amounts before changes occur.

    3.
    a decrease (to keep the same sign of forcings for more straightforward comparison) in volcanic stratospheric aerosols - this would reduce albedo. The aerosols reflect SW (solar) radiation back up from the stratosphere, thus cooling the troposphere and surface but possibly heating the air above, and perhaps heating the stratosphere a little bit (? I think the stratosphere, or some part of it, actually warms up after relevant eruptions - this might be due to the nonzero absorption of solar radiation by the aerosols themselves). Some of the solar radiation is scattered downward or sideways - for a near-overhead sun (middle of the day, summer midlatitudes, or low latitudes), this can increase the path length before reaching the surface, thus increasing the portion absorbed in the air... So scattering of radiation is complicated (but not so much that it isn't understood), but reducing volcanic aerosols results in an R-sfc and R-tp greater than R-TOA, and I suspect R-sfc would be greater than R-tp.

    4.
    tropospheric aerosols

    4a.
    A decrease in the albedo from reduced scattering by aerosols:

    R-sfc will be greater than R-tp and R-TOA as some of the reflected and scattered radiation had been absorbed by air and clouds.

    4b.

    An increase in the atmospheric heating by increased absorption of aerosols:

    R-TOA and R-tp will be positive while R-sfc is negative.

    4c. scattered radiation can be subsequently absorbed in the air; the total effect of aerosols is not simple, but again, it isn't an impossible riddle either.

    5.
    Decrease in albedo due to surface conditions: The change in albedo actually at the surface may have to be greater than that which results at TOA, due to clouds, but also time of day and year issues, and latitude. Anyway, reflected solar radiation has a second chance to be absorbed by the air, so the decrease in albedo because of surface conditions may result in R-sfc greater than R-tp and R-tp greater than R-TOA (but perhaps only slightly).

    HOWEVER:

    R-tp may be (as it is in IPCC work) defined as that which occurs after the stratosphere and above have reached thermal equilibrium with the forced heating or cooling (R-TOA - R-tp) which occurs there (PS notice this is not the same as the equilibrium which would result after the climate response including the troposphere and surface). If R-TOA is greater than R-tp, then the stratosphere, etc., will have warmed, so R-tp will be a little higher as a result due to increased downward LW radiation (or a decrease in net upward LW radiation). If R-TOA is less than R-tp, the opposite will be true. In other words, R-tp will get closer to the original R-TOA (but I don't think it would be equal to the original R-TOA - I expect it to still be less or greater than R-TOA, whichever was the case to begin with). R-sfc might also shift in the same direction, but not as much, so long as there are any greenhouse agents within the troposphere.

    Of course, in the full climatic response, however tropospheric heating (R-tp - R-sfc) is distributed within the troposphere, or however much it is, as an upper layer warms up it reduces convective heat transport from below; thus the tendency is for the full effect of R-tp to propagate by convection to the surface, whatever R-sfc was. However, a larger R-tp - R-sfc and/or a smaller or negative R-sfc value will tend to reduce convection from the surface - HOWEVER, after all feedbacks have occurred, the radiative heating/cooling distribution may be different again. ***I think this would be less true for regionally-concentrated forcings (pockets of high aerosol concentrations, for example), because advection into and out of the area would prevent a radiative-convective equilibrium on the regional scale, so perhaps this is partly why I hear of atmospheric brown clouds (dark absorbing aerosols) in particular reducing vertical motion by increasing stability.


    So a global average R-tp will tend to result in some global average tropospheric and surface temperature increase. Some other effects due to the vertical distribution may change the feedbacks that occur and thus the resulting temperature changes in the surface and troposphere - but to my knowledge that is not a big effect (?).

    The horizontal (and seasonal, if and when it matters (ozone)) variations could also affect the actual global average results. For example - the R-tp and R-TOA of albedo reduction from BC landing on snow/ice will likely be a little smaller than the R-sfc value (some radiation reflected from the surface can be reflected back to the surface by clouds, aerosols, and air molecules); furthermore and perhaps much more importantly, the effect is concentrated where a positive feedback is also concentrated (snow-ice albedo feedback). Thus the climate sensitivity could be expected to be larger to BC on snow/ice forcing than to some other forcings, to the extent that the forced heating is not entirely advected away from similar locations.


    As far as how anthropogenic well mixed greenhouse gases (WMGHG - to adopt the acronym I saw in a paper - this includes CO2, CH4, N2O, CFCs - well, at least a couple of CFCs) compare to solar radiative forcing - the geographic distribution of R-tp is going to be at least a little similar on a broad scale - the LW forcing is highest in the subtropics because of the relatively dry cloud-free air and higher lapse rates; high cloud tops in the tropics prevent greenhouse gases below them from having any direct effect on R-tp; lower tropospheric and surface temperatures in general and smaller lapse rates at higher latitudes reduce the difference in outgoing LW radiation (at least at tropopause level - and the tropopause is lower there, too) that would result from changing greenhouse gas concentrations (as do the lower surface temperatures). Solar forcing will generally be greatest at low latitudes, during the day, and/or in summer, where there are fewer clouds and reflective aerosols and darker surfaces (ocean, forests), etc. - for example, the dry subtropics (but unlike WMGHGs, solar forcing would not be as large over dry light-colored landscapes as it would be over dark oceans). R-tp will be higher than otherwise when there is less stratospheric ozone. There is a latitudinal and seasonal ozone variation - there tends to be more ozone at higher latitudes in winter/spring, I think - because while stratospheric ozone is produced more at low latitudes, winter stratospheric circulation brings it into high latitudes, and actually 'piles it up' there, in part (if not in whole) because the stratosphere is thicker at higher latitudes (lower tropopause)...

    ----

    Of course anthropogenic GHG forcing is expected to result in a cooler stratosphere (observed - although stratospheric ozone depletion also has a similar effect - but each can be calculated, so it should be possible to attribute portions of the cooling), and greater warming at night than during the day near and at the surface over land (there is not much of a diurnal cycle to begin with over oceans because of their heat capacity) (also observed, at least somewhat). Positive solar forcing that would warm the surface and troposphere would also warm the stratosphere (not observed). However, because of this, there could be effects on atmospheric circulation that are different than for GHGs, which might affect climate sensitivity (but how much and in what direction?).***

    (Quietman - if you want to show a reduced climate sensitivity by way of greater total forcing, you might try looking into how solar forcing, including non-TSI or non-UV effects, affects not only the stratosphere, but also the ionosphere, and for example the E-region dynamo, and how geomagnetic effects also affect the E-region dynamo and solar-magnetospheric-ionospheric interactions, and what any resulting circulation pattern changes would be, and if and how that propagates downward. I am not saying that I expect you to be successful, but it's a thought - while I have my doubts, I think it's got a lot more potential than submarine volcanism, solar jerk, tides on the sun, Spencer's PDO+ENSO work, Spencer's cloud forcing work, urban heat island dominance, or the idea that there hasn't been a recent spurt of global warming above and beyond internal variability.)
  • Empirical evidence for positive feedback

    Wondering Aloud at 02:33 AM on 14 December, 2007

    You mention Tung and Camp here; in a paper by Camp and Tung published in Geophysical Research Letters volume 34 this year, these same two authors found a link between total solar irradiance and temperature change, including a short time lag. I suspect this is part of the same research. If this research is valid, then one of the things it suggests is that total solar irradiance explains a significant chunk of late 20th century warming, perhaps all of it detected by satellite measurements. While this research may support your point here, it somewhat contradicts your argument in the "it's the sun" segment.

    From reading your link, it appears to me that they are showing a positive feedback for changing total solar irradiance. This is likely to mean CO2 would also have a positive feedback, but it is not actually evidence of that. As the sun varies, the amount of energy reaching us varies not only in amount but in the distribution of wavelengths. Since high solar activity correlates with higher temperatures, it may be that UV or x-ray radiation has a larger proportional effect on climate than visible and IR. If so, they would also have a place to look for a mechanism for the phenomenon they report.

    Am I the only one out here who is waiting hopefully for global warming? We are having a heat wave, warmest day in weeks at -8.

