
Comment Search Results

Search for Argo data

Comments matching the search Argo data:

    More than 100 comments found. Only the most recent 100 have been displayed.

  • The Big Picture

    peppers at 02:50 AM on 21 March, 2023

    Hi One World, and also Rob made some sea level comments as well.
    I'm sorry I don't have more time, and some of you commit large swaths of time here and I appreciate that.
    Using NASA data, presuming they have the best resources and equipment, satellites and those Argo sea probes et al. to gather original data, they show sea level rise since 1993 to be 3.8 inches.
    https://sealevel.nasa.gov/
    But what I think you are trying to indicate, and what the graphs mostly show, is that the rate of sea level rise increases as the warming continues higher and higher. An exponential effect is presented. So past temperature increases will not explain future expected gains. It is either an exponential increase, or the suggestion is that the increase is delayed, so that as we go up in temperature the rise happens decades later and there is a build-up.
    I think we have finished with the runaway suggestions for nature. The train bearing down on a child and all that. I see that as a tactic to get people to listen and pay attention, but nothing true in our environment. Nature balances. She reacts. This CO2 rise is a reaction, and right now she is reacting to this human population boom, which is unprecedented in history. And all the energy use associated with all these new counts of people on earth, living longer and healthier than ever, is increasing CO2 counts and enriching our surface world in all the ways CO2 can do that.
    NASA used several (4-5) scenarios to predict sea rise: 1. tracking if nothing is done, 2. some is done, and 3. complete zero new emissions is achieved (which cannot happen until the population levels out in 60 or so years).
    By 2100, NASA modeling shows a 0.4 to 0.8 meter rise using scenario 2 (some is being done). Doing what we can will be instrumental in keeping high tide from being higher than usual in that future time. I've tried to stay with the median predictions, so this is not a conversation about the outer 5% extremes.
    Scientific American believes no new storms are created but the severity of moisture-based storms may increase by 2-4 miles per hour. Sea level rise is about the most serious threat.
    I understand better where you are coming from. I still have the higher philosophical orientation to grapple with.
    If mankind has finally achieved the goal of conquering the mission of dreams pondered throughout the pain-filled ages, of solving misery and pain and finding medical success beyond any expectations, is this worth it? A sea level rise?
    The highest gain has been with infant mortality, which has plummeted from 400-500 per thousand in the high middle ages to 5.5 infants per thousand today. Think of all the occasions of birth deaths which also took the mother too, to quantify misery. That and antibiotics alone have caused this phenomenon of CO2 rise. Life spans have increased 61%, living conditions have soared, medicine is in a wonderland of abilities and birth-to-adulthood stats are beyond anyone's wildest dreams. The question is: is that worth a side effect of sea level rising a foot and a half, maybe 2 feet at high tide?
    This endeavor appears to goad and cajole and shame people using fossil fuel, and I suppose that is the fastest way to get attention. But I do not believe it to be honest. This appears to be unwittingly human caused, and one must decide if it is worth the subsequent consequences ahead. It is not from derelict and wanton people; it is from the results of scientific achievement, sought after for ages and finally achieved within the science that coincided with the industrial revolution. The origin of this is important to be able to consider context for this issue. If I were there and had the choice in my hands, I'd have us standing exactly where we are today. Reducing CO2 is still important, but I wouldn't be bullying any brothers from any mothers over this. It is important, but not that important all things considered.


  • SkS Analogy 22 - Energy SeaSaw

    scaddenp at 11:29 AM on 7 May, 2021

    In the Argo age, you could argue that OHC (www.ncei.noaa.gov/access/global-ocean-heat-content/) is both a less noisy dataset (so the significance of trends is established over shorter time frames) and a better indicator of climate.
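
    As a rough illustration of the point about trend significance (ordinary least squares on synthetic, purely illustrative series; real OHC data are at the NCEI link above), a quieter series pins down the same underlying trend with a much smaller uncertainty:

      # Sketch: why a less noisy series yields significant trends over shorter windows.
      # Synthetic numbers only; real OHC data: www.ncei.noaa.gov/access/global-ocean-heat-content/
      import numpy as np

      def trend_and_se(x, y):
          """Ordinary least-squares slope and its standard error."""
          xc = x - x.mean()
          slope = np.sum(xc * (y - y.mean())) / np.sum(xc**2)
          resid = y - (y.mean() + slope * xc)
          se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum(xc**2))
          return slope, se

      rng = np.random.default_rng(0)
      years = np.arange(2005, 2021, dtype=float)
      signal = 0.7 * (years - years[0])                  # same underlying trend (arbitrary units/yr)
      noisy = signal + rng.normal(0, 3.0, years.size)    # surface-temperature-like noise level
      quiet = signal + rng.normal(0, 0.7, years.size)    # OHC-like (smaller) noise level

      for name, series in (("noisy", noisy), ("quiet", quiet)):
          b, se = trend_and_se(years, series)
          print(f"{name}: trend = {b:.2f} +/- {2 * se:.2f} per year (2-sigma)")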

  • It hasn't warmed since 1998

    michael sweet at 07:16 AM on 24 April, 2021

    Vonyisz,


    To answer your question, "what do we know about changes in energy across the ocean today?": The ARGO floats measure most of the ocean to a depth of 2,000 meters.  This part of the ocean is pretty well known.  The areas under sea ice are harder to measure but are not that extensive (and they are measured to some extent).  Deeper than 2,000 meters is hard because there are not many old measurements.  Fortunately, the change in temperature there is small, hundredths to thousandths of a degree.


    This article  gives information on ocean temperatures to a depth of 4757 meters near Argentina.  They were using equipment designed to measure currents and realized that they had sensitive temperature measurements also.  Apparently these current measurements are done in many locations and scientists will use them to determine deep ocean changes for the past 10-15 years.   These detailed measurements can be used to calibrate other older records.  


    The bottom line is that the deep ocean has not changed very much yet.  Because it is so hard to measure, the changes are not well characterized.  Recent data will start to track deep ocean changes.  Because the changes are small, they do not affect the big picture of AGW.


    A lot is known about ocean flow also.  This article details changes in large eddies.  Other currents are monitored regularly.  Scientists often report that they are surprised by how fast everything is changing.  They are optimistic at first.

  • Hurricanes, wildfires, and heat dominated U.S. weather in 2020

    iskepticaluser at 08:13 AM on 24 February, 2021

    Jamesh ~


    The ocean heat content (OHC) measure of heat build-up is particularly relevant (since over 90% of excess heat trapped by our thickened greenhouse blanket is stored in the oceans), and millions of readings from ARGO ocean-profiling floats plus advances in statistical analysis of those and other observations are giving us a clearer picture of its evolution (though there are still discrepancies between the estimates of different analyses, quite common in a relatively new observational science). A recent paper by Cheng et al. ("Upper Ocean Temperatures Hit Record High in 2020") reports a full-depth OHC increase since 1960 of 380 ± 81 ZJ (that's zettajoules: 1 ZJ = 10^21 joules, a billion trillion joules; a 100 W light bulb consumes 100 joules of energy per second).


    Most worrying, the RATE of increase in OHC since 1986 is almost eight times that of 1958-1985, at 9.1 ZJ per year, or roughly 10 ZJ per year for the entire Earth system (OHC plus the heat needed to warm the land and atmosphere and melt ice world-wide).


    This excess energy STAYS IN THE SYSTEM, cycling between ocean and atmosphere to drive everything from deeper droughts and deluges to increasingly severe fire seasons to changing ocean and atmospheric circulation patterns, novel disease distributions, rising sea levels and attendant economic, social and, increasingly, political turmoil.


    As to the SCALE of the problem, consider this. Average 1986-2019 global energy consumption - backed by everything from hydro to wind to nuclear, oil, coal and cow dung - is 0.48 ZJ per year. That means that minute by minute, hour by hour and year by year, since 1986 the planet has trapped an amount of excess heat equal to TWENTY-ONE TIMES the energy consumed by the global economy.
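
    A quick back-of-the-envelope check of that ratio, using only the figures quoted above:

      # Back-of-envelope check of the "twenty-one times" figure, using the numbers quoted above.
      earth_heating_zj_per_yr = 10.0       # ~10 ZJ/yr total Earth-system heat gain since 1986
      global_energy_use_zj_per_yr = 0.48   # average 1986-2019 global energy consumption
      print(f"excess heat / energy use = {earth_heating_zj_per_yr / global_energy_use_zj_per_yr:.0f}x")

      # For scale: 1 ZJ = 1e21 J, and a 100 W bulb uses 100 J every second.
      bulb_joules_per_year = 100 * 3.156e7
      print(f"1 ZJ = {1e21 / bulb_joules_per_year:.1e} years of one 100 W bulb")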


    Given the early climate-change impacts we are already suffering, WE HAVE TO REVERSE COURSE. If atmospheric GHG (and cooling aerosol) concentrations were somehow stabilized at current levels, the planet would continue heating up (though at declining rates) until atmospheric temperatures were high enough to re-establish incoming/outgoing radiative balance at the edge of space.


    But if we want to forestall worsening impacts, let alone eventually bring global temperature levels back down to those to which human civilization and biological diversity are adapted, we have to somehow DRAW DOWN those GHG levels from the current 415 to around 350 ppm CO2.

  • Is Nuclear Energy the Answer?

    Preston Urka at 13:36 PM on 31 August, 2020

    michael sweet @199 per your comment


    "Nuclear supporters frequently claim data from 1999 to support nuclear in 2020. Here Preston Urka uses outdated and incomplete (and uncited) data. Previously he compared 2019 solar costs in the UAE to nuclear costs from 2009. It is easy to make something look good using outdated and incomplete data."


    You just can't read the links and caveats can you?


    I clearly state where my data came from - is it the best data - no, but it is what I had available. I note the discrepancies. Your comments really are not in the spirit of courteous discourse.


    I note you also have used Wikipedia (in preference to IEA data no less! where the data is available!!!).


    ---


    "Nuclear supporters like Preston Urka are claiming nuclear can supply a portion of electricity only. Electricity is only about 20% of all power."


    First, the IEA disagrees with you about electricity being 20% of all power. I suggest you re-research that number. With electrification of industry, transport, etc., this share will only increase.


    Second, I will let you in on a secret - EV batteries work just as well on electricity from NPPs as from RE! - I know! Who knew???? It also turns out that other appliances and tools are the same. Amazing!


    Third, nuclear produces a lot of heat.



    • Process heat is useful in industry.


      • About 70-90% of industry's total energy use is process heat.

      • Using process heat directly is more efficient than generating electricity and using electricity to generate heat.

      • Note there are no current deployments, but what is needed is engineering, not ground-breaking science.


    • Process heat can also be used to create synthetic fuels for transportation and agriculture (in addition to EV-type juice). 

    • Process heat can be used directly in transportation.


      • The US Navy has the greenest submarines in the world!


        • I believe many on this website are Australian - you guys should stop buying those nasty, carbon-dioxide emitting diesels. The idea that you will convert a green French submarine into a dirty emissions scow is horrifying!


      • The Russians have green icebreakers!

      • Large cargo ships (currently burning bunker oil and accounting for 1-3% of global emissions) can easily be converted to nuclear - using existing military designs - or some of the newer micro reactors.


    • Wind? - no, no process heat from wind. Need to lose energy in converting electricity to heat.

    • Solar PV? - no, no process heat from solar PV. Need to lose energy in converting electricity to heat.

    • Solar CSP? - Yes, but solar CSP tends to be in sunny arid deserts. For example, one of the biggest US chemical plants is Dow Chemical in Midland, Michigan. It is not in a sunny desert, but a cloudy northern climate.


      • Pipe the coolant north! - 1,000 km from the Mojave to Midland? (Not sure on this distance michael sweet, better check me!) - not a great idea, efficiency-wise.

      • Move the plant south? - ok, but you just blew the carbon budget.

      • Close the old plant, and open a new southern plant? - ok, but you just blew the carbon budget.

      • Again, I believe this website has an Australian connection - How many 10MW+ Solar CSP plants are in Australia? Are any of 10MW+ solar CSP plants providing process heat to Australian industry? 1MW? - I mean 1MW is just a large diesel generator. And solar is soooooo cheap!


    • Lastly, nuclear process heat can go from 350 C to 1200 C - a huge range of industrial processes (most of which start around 600 C).


      • Solar CSP - well, from 250 C to about 650 C - just where process heat starts getting useful.

      • ammonia starts at 400 C

      • glass starts at 500 C

      • cement starts at 800 C

      • thermo-electrolysis to produce hydrogen at 850 C

      • aluminum starts at 940 C

      • silica glass starts at 1000 C



    Lastly, I have never claimed, in this forum, any other forum, or in person that nuclear can supply electricity only.

  • YouTube's Climate Denial Problem

    nigelj at 11:15 AM on 6 April, 2020

    dudo39 @8


    Your comments are mostly misguided. Sorry about that, you will get over it.


    We already know and accept water vapour is a greenhouse gas, but you have to be able to explain why it's increased in the atmosphere in recent decades, and the IPCC has determined this is because the CO2 forcing increases evaporation. The proven underlying driver of the warming is CO2, with water vapour as a feedback. We know the spectral properties of the water molecule, so we know how much warming this water vapour causes in comparison to the CO2 molecule.


    The one area of doubt is the effect of clouds, but most published research finds they have a slightly positive warming effect overall or are neutral. They cannot be sharply negative or there would be no warming.


    You do not need one million Argo floats to sufficiently sample ocean temperatures. Ocean temperature trends are broadly similar to atmospheric and land-based trends, which is what you would expect, so this provides evidence that there are more than enough Argo floats and that sensor 'drift' is not a significant issue.
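
    As a rough illustration of why a few thousand floats can constrain a global-mean anomaly (toy numbers; real uncertainty analyses also account for spatial correlation and coverage gaps):

      # Standard error of a mean of N roughly independent samples falls as 1/sqrt(N).
      # Toy numbers only, to show the scaling; not an actual Argo error budget.
      import math

      point_spread_c = 0.5         # assumed spread of individual 0-2000 m anomaly estimates (deg C)
      n_floats = 4000
      effective_n = n_floats / 10  # crude allowance for spatial correlation between floats

      for n in (n_floats, effective_n):
          print(f"N = {n:7.0f}: standard error of the global mean ~ {point_spread_c / math.sqrt(n):.3f} deg C")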


    The issue with weather stations in northern Russia obviously has little significance for global temperatures, and you provide no link to back up your assertions about Russia. The urban heat island effect is taken into consideration and temperatures are adjusted downwards where it's an issue. Research has determined it's not a huge issue anyway. Regarding temperature adjustments, read this article.


    Since you are so concerned about facts, the global temperature dataset as a whole has been adjusted down because of a known issue with ship versus buoy measurements. This is the reality, and is the complete opposite of the false denialist claims that global temperatures have been adjusted upwards. Read this article.


    Now go away and spread your useless, badly informed doubt somewhere else preferably in a hole in the ground.

  • Skeptical Science New Research for Week #2, 2020

    Mathieu at 08:38 AM on 16 January, 2020

    A study using zettajoules always makes my critical mind go wild, alarms going off.

    99.8% of the globe does not know what a zettajoule is and cannot convert it to a more familiar metric, degrees Celsius.

    Doing the maths I saw that the ''shocking'' study (from some headlines I saw on MSM) is actually a 0.1 C increase in the last 60 years. OK? But it becomes wilder still. The margin of error is 0.003%. I mean, come on! Do you really take us all for fools?

    With 4,000 buoys from Argo you can't pretend to have such a low margin of error. It is a deceptive claim, as every buoy needs to measure the temperature of a patch of sea the size of Portugal AND 2 km deep.

    Does anyone here believe a reading in Lisbon actually gives the temperatures of Porto, Faro or Lagos? It is ridiculous and does not help the narrative at all. 

     

    I'm 100% for waking people up, but the data and the claims need to be strong and realistic. In the end, anyone with some understanding of margins of error can see that the result of the ''study'' IS within the margin of error, and what does that say? Nothing here, move along...
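
    For readers who want the zettajoule-to-Celsius conversion being discussed, a sketch with round numbers (the ocean area, seawater density and specific heat below are approximate assumptions, not values from the study):

      # Convert an ocean heat content change (in ZJ) to an average temperature change
      # of the upper 2,000 m. Round-number assumptions; a sketch, not the study's method.
      ocean_area_m2 = 3.6e14    # approximate ocean surface area
      depth_m = 2000.0
      density = 1030.0          # kg/m^3, seawater
      specific_heat = 3990.0    # J/(kg K), seawater

      heat_capacity = ocean_area_m2 * depth_m * density * specific_heat   # J per kelvin
      delta_ohc_j = 300e21      # ~300 ZJ, the order of magnitude gained by the upper 2,000 m in recent decades

      print(f"heat capacity of upper 2,000 m ~ {heat_capacity:.1e} J/K")
      print(f"average warming ~ {delta_ohc_j / heat_capacity:.2f} deg C for {delta_ohc_j / 1e21:.0f} ZJ")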

  • 3 clean energy myths that can lead to a productive climate conversation

    ThinkingMan at 08:40 AM on 10 April, 2019

    Michael Sweet challenged my 3 April post’s main point: The full cost of RELIABLE electricity service structured around wind turbines SIGNIFICANTLY EXCEEDS the full cost of reliable service based on a combined cycle natural gas turbine (CCGT). This post begins the process of supporting the statement. At least one more post will be needed to complete the process.

    RELIABLE is a key word in the initial post. Reliable service has for decades been characteristic of Australia, New Zealand, Japan, Korea, Taiwan, North America, Europe and elsewhere. Thus, globally, electricity users are now accustomed to getting all the electricity they want when they want it. Lights glow when switched on, and stay on until switched off. Personal devices, laptop computers, Teslas and other battery operated items get charged when needed. Stop lights function full time, and electric trains run on schedule. Refrigerators and freezers work round the clock. Meals are cooked when needed. Stores are open, fully illuminated and operational when shoppers visit (ditto schools, hospitals and bureaucracies). Work schedules are regular, and one puts in a full day every day.

    Whereas society is accustomed to reliable electricity, wind turbines generate unreliable electricity. Their output is intermittent, variable and unpredictable. And, other traits can differ from electricity produced by conventional generators.

    How unreliable is wind electricity? In Texas, the wind turbine capacity factor routinely fluctuates FIVEFOLD during 24-hour periods. Fivefold means the highest capacity factor is 5x the lowest. For example, the capacity factor was 63.7% at 4 a.m. (an off-peak time) on 31 Dec 2018 and 12.1% at 5 p.m. (a peak demand time) on 30 Dec. More than one quarter of the time, the capacity factor is less than 20%. One quarter is equivalent to 6 hours per day. The 6 hours tend to occur during business hours—when electricity demand is strong. Each year, seasonal forces reduce the capacity factor 35% while concurrently raising demand 45%. For the source data, go to the "Hourly Aggregated Wind Output" entry on www.ercot.com/gridinfo/generation.

    Wind electricity is also unreliable in New England. For the source data go to: https://www.iso-ne.com/isoexpress/web/reports/operations/-/tree/daily-gen-fuel-type

    For the benefit of readers not familiar with industry jargon, capacity factor is a measure of utilization. When generation equals rated capacity, the capacity factor equals 100%. A 50% capacity factor indicates rated capacity is half utilized. 10% indicates one tenth.
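
    For concreteness, a minimal sketch of that definition (hypothetical numbers, not ERCOT or ISO-NE data):

      # Capacity factor = actual generation / (rated capacity x hours), as defined above.
      def capacity_factor(generation_mwh, rated_capacity_mw, hours):
          return generation_mwh / (rated_capacity_mw * hours)

      # Hypothetical example: a 1,000 MW wind fleet producing 3,000 MWh over one day.
      cf = capacity_factor(generation_mwh=3000.0, rated_capacity_mw=1000.0, hours=24.0)
      print(f"capacity factor = {cf:.1%}")   # 12.5%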

    Because wind turbine output is erratic and frequently mismatched with electricity demand, wind turbines must be supplemented with additional equipment so society gets reliable electricity service. The additional equipment adds capital and operating costs to the system, thereby raising the full cost of service.

    Actual experience and data suggest the cost of reliable electricity correlates with wind & solar’s combined share of electricity supplies. In Europe, electricity rates are highest in the two countries most dependent on renewables. The two countries are Denmark and Germany. Furthermore, rates rose more in Denmark and Germany than elsewhere in Europe while these two countries installed the bulk of their wind capacity. In Australia, rates are highest and rose fastest in the state most dependent on wind & solar (South Australia). Germany, Denmark and South Australia have the highest electricity rates in the WORLD (source: https://www.afr.com/news/australian-households-pay-highest-power-prices-in-world-20170804-gxp58a ). In the United States, electricity rates in the top 10 wind producing states as a group ROSE 7x faster than the U.S. average. The comparison period is 2008-2013 (source: https://www.forbes.com/sites/jamestaylor/2014/10/17/electricity-prices-soaring-in-top-10-wind-power-states/#70c08fbe6112 ). The conflict between experience and claims about the cost of wind electricity prompted me to look into estimates of wind turbine costs. Insights will follow in a future post.

  • Freedom of Information (FOI) requests were ignored

    Daniel Bailey at 09:45 AM on 26 December, 2018

    Actually, pretty much all of the data (raw or otherwise) and model code is openly available.

    The raw data:

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
    ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/
    ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/
    http://dss.ucar.edu/datasets/ds570.0/
    http://www.antarctica.ac.uk/met/READER
    http://eca.knmi.nl/
    http://www.zamg.ac.at/histalp/content/view/35/1
    http://amsu.cira.colostate.edu/
    Link to SORCE
    http://daac.gsfc.nasa.gov/atdd
    http://oceancolor.gsfc.nasa.gov/
    http://www.psmsl.org/
    http://wgms.ch/
    http://www.argo.net/
    http://icoads.noaa.gov/
    http://aeronet.gsfc.nasa.gov/
    http://aoncadis.ucar.edu/home.htm
    http://climexp.knmi.nl/start.cgi?someone@somewhere
    http://dapper.pmel.noaa.gov/dchart/
    http://ingrid.ldgo.columbia.edu/
    http://daac.gsfc.nasa.gov/giovanni/
    http://www.pacificclimate.org/tools/select
    http://gcmd.nasa.gov/
    http://www.clivar.org/data/global.php
    http://www.ncdc.noaa.gov/oa/ncdc.html
    http://www.ipcc-data.org/maps/
    http://climatedataguide.ucar.edu/
    http://cdiac.ornl.gov/
    http://www.cru.uea.ac.uk/cru/data/
    http://www.hadobs.org/

    Next, the processed data:

    http://data.giss.nasa.gov/gistemp
    http://clearclimatecode.org/
    http://hadobs.metoffice.com/hadcrut4/index.html
    http://www.ncdc.noaa.gov/cmb-faq/anomalies.php#anomalies
    http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/ann_wld.html
    http://www.berkeleyearth.org/
    http://vortex.nsstc.uah.edu/data/msu/
    http://www.ssmi.com/msu/msu_data_description.html
    http://www.star.nesdis.noaa.gov/smcd/emb/mscat/mscatmain.htm
    ftp://eclipse.ncdc.noaa.gov/pub/OI-daily-v2/
    http://www.cpc.noaa.gov/products/stratosphere/temperature/
    http://arctic.atmos.uiuc.edu/cryosphere/
    http://nsidc.org/data/seaice_index/
    http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm
    https://seaice.uni-bremen.de/sea-ice-concentration/
    http://arctic-roos.org/
    http://ocean.dmi.dk/arctic/icecover.uk.php
    http://www.univie.ac.at/theoret-met/research/raobcore/
    http://hadobs.metoffice.com/hadat/
    http://weather.uwyo.edu/upperair/sounding.html
    http://www.ncdc.noaa.gov/oa/climate/ratpac/
    http://www.ccrc.unsw.edu.au/staff/profiles/sherwood/radproj/index.html
    http://cdiac.ornl.gov/trends/temp/sterin/sterin.html
    http://cdiac.ornl.gov/trends/temp/angell/angell.html
    http://isccp.giss.nasa.gov/products/onlineData.html
    http://eosweb.larc.nasa.gov/project/ceres/table_ceres.html
    http://sealevel.colorado.edu/
    http://ibis.grdl.noaa.gov/SAT/SeaLevelRise/index.php
    http://dataipsl.ipsl.jussieu.fr/AEROCOM/
    http://gacp.giss.nasa.gov/
    http://www.esrl.noaa.gov/gmd/aggi/
    http://www.esrl.noaa.gov/gmd/ccgg/trends/
    http://gaw.kishou.go.jp/wdcgg/
    http://airs.jpl.nasa.gov/AIRS_CO2_Data/
    http://www.usap-data.org/entry/NSF-ANT04-40414/2009-09-12_11-10-10/
    http://climate.rutgers.edu/snowcover/index.php
    http://glims.colorado.edu/glacierdata/
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
    http://oceans.pmel.noaa.gov/
    http://cdiac.ornl.gov/oceans/
    http://gosic.org/ios/MATRICES/ECV/ecv-matrix.htm
    http://www.ncdc.noaa.gov/bams-state-of-the-climate/2009-time-series/

    Now, the model code:

    http://www.giss.nasa.gov/tools/modelE/
    ftp://ftp.giss.nasa.gov/pub/modelE/
    http://simplex.giss.nasa.gov/snapshots/
    http://www.cesm.ucar.edu/models/
    http://www.ccsm.ucar.edu/
    http://www.ccsm.ucar.edu/models/ccsm3.0/
    http://www.cgd.ucar.edu/cms/ccm3/source.shtml
    http://edgcm.columbia.edu/
    http://www.mi.uni-hamburg.de/Projekte.209.0.html?&L=3
    http://www.mi.uni-hamburg.de/SAM.6074.0.html?&L=3
    http://www.mi.uni-hamburg.de/PUMA.215.0.html?&L=3
    http://www.mi.uni-hamburg.de/Planet-Simul.216.0.html?&L=3
    http://www.nemo-ocean.eu/
    http://www.gfdl.noaa.gov/fms
    http://mitgcm.org/
    https://github.com/E3SM-Project
    http://rtweb.aer.com/rrtm_frame.html
    http://www.sciencemag.org/cgi/content/full/317/5846/1866d/DC1
    http://www.pnas.org/content/suppl/2009/12/07/0907765106.DCSupplemental
    http://geoflop.uchicago.edu/forecast/docs/Projects/modtran.html
    http://geoflop.uchicago.edu/forecast/docs/models.html
    http://www.fnu.zmaw.de/FUND.5679.0.html
    http://www.pbl.nl/en/themasites/fair/index.html
    http://nordhaus.econ.yale.edu/DICE2007.htm
    http://nordhaus.econ.yale.edu/RICEModelDiscussionasofSeptember30.htm
    https://github.com/rodrigo-caballero/CliMT
    http://climdyn.misu.su.se/climt/
    http://starship.python.net/crew/jsaenz/pyclimate/
    http://www-pcmdi.llnl.gov/software-portal/cdat
    http://www.gps.caltech.edu/~tapio/imputation
    http://holocene.meteo.psu.edu/Mann/tools/MTM-SVD/
    http://www.atmos.ucla.edu/tcd/ssa/
    http://holocene.meteo.psu.edu/Mann/tools/MTM-RED/
    http://www.cgd.ucar.edu/cas/wigley/magicc/

    Source code for GISTEMP is here:

    https://data.giss.nasa.gov/gistemp/sources_v3/
    https://data.giss.nasa.gov/gistemp/news/
    https://data.giss.nasa.gov/gistemp/faq/
    https://data.giss.nasa.gov/gistemp/
    https://simplex.giss.nasa.gov/snapshots/

    Related links:

    https://data.giss.nasa.gov/gistemp/faq/
    https://data.giss.nasa.gov/gistemp/faq/#q209
    https://podaac.jpl.nasa.gov/
    https://daac.gsfc.nasa.gov/
    https://earthdata.nasa.gov/about/daacs
    http://www.wmo.int/pages/prog/wcp/wcdmp/index_en.php
    http://berkeleyearth.org/summary-of-findings/
    http://berkeleyearth.org/faq/
    https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature
    https://www.climate.gov/maps-data/primer/climate-data-primer
    https://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php
    https://www.ncdc.noaa.gov/ghcnm/v3.php?section=quality_assurance
    https://www.ncdc.noaa.gov/ghcnm/v3.php?section=homogeneity_adjustment
    https://www.ncdc.noaa.gov/crn/
    https://www.ncdc.noaa.gov/crn/measurements.html
    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2009JD013094
    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2011JD016761
    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2015GL067640
    https://www.clim-past.net/8/89/2012/
    https://www.carbonbrief.org/explainer-how-data-adjustments-affect-global-temperature-records

    Global surface temperature records use station temperature data for long-term climate studies. For station data to be useful for these studies, it is essential that measurements are consistent in where, how and when they were taken. Jumps unrelated to temperature, introduced by station moves or equipment updates, need to be eliminated. The current procedure also applies an automated system that uses systematic comparisons with neighboring stations to deal with artificial changes, which ensures that the Urban Heat Island effect is not influencing the temperature trends. In the same fashion that a chef turns raw ingredients into a fine meal, scientists turn raw data into a highly accurate and reliable long-term temperature record.
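
    A toy sketch of the neighbor-comparison idea described above (purely illustrative; the operational pairwise homogenization algorithm is far more elaborate):

      # Toy jump detection: difference a station against a neighbor so that the shared
      # climate signal cancels and an artificial step (e.g. a station move) stands out.
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1950, 2021)
      climate = np.cumsum(rng.normal(0.01, 0.1, years.size))     # shared regional signal

      station = climate + rng.normal(0, 0.05, years.size)
      station[years >= 1985] += 0.6                               # artificial jump: station move
      neighbor = climate + rng.normal(0, 0.05, years.size)

      diff = station - neighbor
      # Pick the split point that best separates the difference series into two means.
      scores = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(5, years.size - 5)]
      i_break = 5 + int(np.argmax(scores))
      offset = diff[i_break:].mean() - diff[:i_break].mean()
      print(f"breakpoint near {years[i_break]}, offset ~ {offset:.2f} deg C")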

    Although adjustments to land temperature data do have larger consequences in certain regions, such as in the United States and Africa, these tend to average out in the global land surface record.

  • Hurricanes aren't linked to global warming

    MA Rodger at 20:44 PM on 10 October, 2017

    wili @78,

    I did have some useful NOAA(?) numbers for the energy fluxes associated with tropical cyclones but they are not falling to hand. However there is literature that presents data. Although this is a bit less authoritative-looking, the literature (& this is from papers to hand rather than from a proper search) does seem quite definitive that hurricanes act to warm the planet rather than cool it although the mechanisms are not that simple.

    Tropical cyclones do, simplistically, pump energy out of the ocean, which will cool the planet. They also mix warm surface waters down into the ocean which, as the post-cyclone surface is cooler and thus easier to warm, will allow ocean warming. (These hurricane-warmed ocean depths won't just sit there but will enhance poleward heat fluxes, as discussed below.) The net size of the ocean-atmosphere flux from global tropical cyclones has been assessed using ARGO data at +1.9 PW during the passage of storms, but becomes a net negative -0.3 PW when the subsequent enhanced warming following the storm is included. The global figures, when divided between hurricanes and lesser storms, show that it is hurricanes which are responsible for the net total being negative (the net for hurricanes alone is about 0.75 PW of ocean heat uptake, equal to a global 1.5 Wm^-2), with 0.8 PW of ocean cooling during the storm followed by 1.5 PW of subsequent ocean warming. For lesser storms the net effect remains a cooling: 1.0 PW of cooling during the storm with 0.6 PW of subsequent warming. This suggests that in a world with more hurricanes but fewer less-powerful tropical storms (a possibility that many denialists deny), there will as a result be bigger heat fluxes into the oceans.

    A further mechanism for cooling the planet is that the ocean mixing caused by tropical cyclones will impact poleward heat transfer to some extent, enhancing it in the oceans, reducing it in the atmosphere. But when the effect is set up in a climate model, the impact becomes a net warming effect due to the spread of humid atmospheres and such-like. So, of the ~2ºC global warming resulting from poleward heat fluxes (which are roughly 5 PW in each direction), perhaps some 0.2ºC results from tropical cyclones and would be boosted by increased cyclone activity. (That could be equated to a climate forcing using ECS=3 of +0.25Wm^-2).
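
    For reference, the unit conversions behind those figures (a sketch; Earth's surface area of ~5.1e14 m^2 and ~3.7 Wm^-2 per CO2 doubling are round numbers assumed here, not values taken from the papers):

      # Unit conversions behind the figures above.
      earth_area_m2 = 5.1e14

      pw = 0.75e15                  # 0.75 PW net flux attributed to hurricanes
      print(f"0.75 PW over the globe = {pw / earth_area_m2:.1f} W/m^2")        # ~1.5

      dT, ecs, f2x = 0.2, 3.0, 3.7  # 0.2 C from cyclone mixing; ECS = 3 C per ~3.7 W/m^2 doubling forcing
      print(f"0.2 C at ECS = 3 C is roughly {dT / ecs * f2x:.2f} W/m^2")       # ~0.25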

    So in terms of A-bombs, the increase in that +0.25Wm^-2 of warming from today's tropical cyclones will be small and will also be A minus.

  • It takes just 4 years to detect human warming of the oceans

    barry at 08:16 AM on 26 September, 2017

    OPOF,

    Yes, the results would likely be less tight using different/longer time periods, and the uncertainty greater. But I think that would be good science. 2014 and 2015 were successive hottest years in the record. That's going to skew results.

    From the opinion piece:

    These analyses show that during 2015 and 2016, the heat stored in the upper 2,000 meters of the world ocean reached a new 57-year record high (Figure 1). This heat storage amounts to an increase of 30.4 × 10^22 Joules (J) since 1960 [Cheng et al., 2017], equal to a heating rate of 0.33 Watts per square meter (W m^-2) averaged over Earth’s entire surface—0.61 W m^-2 after 1992.
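
    As a rough check of the quoted heating rate (a sketch assuming Earth's surface area of about 5.1e14 m^2 and a ~57-year span):

      # 30.4e22 J accumulated over ~57 years, spread over Earth's whole surface.
      joules, years_span, earth_area_m2 = 30.4e22, 57, 5.1e14
      seconds = years_span * 3.156e7
      print(f"{joules / seconds / earth_area_m2:.2f} W/m^2")   # ~0.33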

    It seems they consider the data useful enough for earlier periods, but perhaps not for the calculation they performed. ARGO became fully global in 2007, for example. But they did not include 2016 - presumably because of the El Niño skewing results.

    If data limitations 'forced' them to use the one 12-year period, that limitation might also have earned a comment, and perhaps some added uncertainty. I was a little surprised to see no commentary on why this particular choice of time period and not another, like 2003-2014, for example. 2003-2014 has a 50% lower trend than the period they chose. The actual trend may not make a difference, but it would have been good to see that discussed, as the period they chose had 2 consecutive record warm years at the end. But even the noise might be peculiar to the period selected.

    Had it been a study rather than an 'opinion', I expect they'd have explored these things.

  • New study finds that climate change costs will hit Trump country hardest

    michael sweet at 08:03 AM on 31 August, 2017

    NorrisM,

    The Solutions Project has documented in detail that renewable energy is the cheapest way to provide all energy in the future.  Here is a summary (on SkS) of their proposal that I wrote.  Claims that renewable energy will be more expensive than fossil fuels are false; it is cheaper.

    Your argument comparing solar panel waste which can all be recycled, with radioactive nuclear waste that has to be sequestered for millions of years is absurd.  I will let other readers decide for themselves what they think.

    Your land area arguments are a red herring.  As NigelJ states, most of the land is farms with occasional wind turbines.  How much land is permanently sequestered by nuclear accidents in Russia and Japan??  Nuclear proponents always seem to forget the nuclear disasters.  Solar farms can be built in deserts or on other low value land (or existing buildings, parking lots and other structures).

    Turbines are sent by ships, just like other cargo.  Currently, old turbines in developed countries are being replaced by upgraded models.  The old turbines are still useful so they are rebuilt and sold as a cheap source of energy to the developing world.  When they reach the end of their lives they will be recycled.  Some people falsely argue that the turbines have worn out.

    Nuclear engineers have been promising cheap reactors since before I was born ("too cheap to meter").  I am 58 now and nuclear is bankrupt.  Your article describes the water cooled reactors that bankrupted Westinghouse.  Engineers describe them as "unbuildable".

    While reading background material on EROEI for solar and wind I found this article.  It responds to an article similar to the one you referenced that MARodgers links above and describes some of the many errors in the analysis.

    Just for starters, the data for solar panels comes from an article written in 2006 (updated in 2007) while the wind power data comes from a masters thesis published in 2004 and a paper from 1998.  These papers are also used in the article I linked above.  I don't know about where you live, but in the USA there have been significant developments in wind and solar since 1998 and 2006.  These data are updated yearly.  I do not know why the authors decided to use ancient data, but for me that disqualifies your reference.  It seems to me that the authors are trying to justify a conclusion, not reach a true answer.  Other readers can make their own judgements.  The article I link calculates an EROEI of above 10 for roof top solar in Switzerland.  Somewhere with better sun (say New Mexico) would have an EROEI of at least 20 for utility farms.  

    As for your excuse for not providing references, if you are too lazy to Google data and read the background you should not post to a forum that requires posters to support their arguments.  It is very time consuming for me to look up data to reply to your idle claims.  If you put in the time to research your claims maybe you would realize that they are specious.

    As I said before, nuclear supporters generally just post reams of false data and do not read the links that are posted in return.  They need to get over it.  Nuclear is bankrupt.  They cannot build a reactor on time and on a budget.  

    Current nuclear plants' operation and maintenance costs alone are more than the total costs of a wind or solar facility, including the mortgage for the renewable facility.  Current users in South Carolina pay 25% of their utility bills for nuclear plants that have been abandoned.  They will pay even more in the future as they are stiffed for the capital costs of the abandoned plants.  Meanwhile, wind and solar cause the price of energy to plummet where they are built.

    Lomborg argues that solar is not economic because the price of electricity plummets after solar facilities are built.  The solar facilities are making money and the electricity is cheaper.

    "People like to claim that green energy is already competitive. This is far from true. For instance, when solar energy is produced, it is all produced at the same time — when the sun shines. The energy thus floods the market and becomes less valuable. Models show that when solar makes up 15% of the market, the value of its electricity is halved. In California, when solar reaches 30% of the market, its value drops by more than two-thirds."

    Lomborg is just a shill for the fossil fuel industries.

  • Increasing CO2 has little to no effect

    Tom Curtis at 09:13 AM on 1 May, 2017

    vatmark @312, sorry for my delayed response.  I am suffering from poor health at the moment, and am finding it difficult to respond to involved posts in a timely manner.  Unfortunately this may mean a further delay in responding to two other posts directed to me by you on another thread, for which I also apologize.

    1)

    "This does not convince me that climate models are doing it right by using backwards calculations where emitted radiation is causing the temperature of layers below."

    I should hope not, as that is not what General Circulation Models (GCMs) do.  Rather, they divide the ocean and atmosphere into a number of cells, and for each time step solve for all energy entering, absorbed and emitted from that cell, including energy transfers by radiation, latent heat, diffusion and convection.  In doing so, they maintain conservation of energy and momentum (or at least as close an approximation as they can maintain given the cellular rather than continuous structure of the world).  When they do this, properties of the simplified models of the greenhouse effect used primarily for didactic purposes are found to emerge naturally, thereby showing those simplified models to capture essential features of the phenomenon.
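
    A cartoon of that cell-by-cell bookkeeping (a one-dimensional toy with assumed coefficients, nothing like a real GCM):

      # 1-D column of cells exchanging heat by diffusion, heated at the bottom and
      # radiating from the top; each time step conserves the energy moved between cells.
      # All coefficients are assumed, toy values.
      import numpy as np

      n_cells, dt, steps = 20, 3600.0, 24 * 365      # cells, 1-hour step, one model year
      heat_capacity = 1.0e6                          # J/(m^2 K) per cell (assumed)
      exchange = 100.0                               # W/(m^2 K) between adjacent cells (assumed)
      solar_in, sigma_eff = 240.0, 0.6 * 5.67e-8     # absorbed W/m^2; effective emissivity x Stefan-Boltzmann

      T = np.full(n_cells, 250.0)                    # initial temperatures (K); cell 0 = surface
      for _ in range(steps):
          flux = exchange * np.diff(T)               # exchange between adjacent cells (warm -> cold)
          tendency = np.zeros(n_cells)
          tendency[:-1] += flux
          tendency[1:] -= flux
          tendency[0] += solar_in                    # heating of the surface cell
          tendency[-1] -= sigma_eff * T[-1] ** 4     # radiative loss from the top cell
          T += tendency * dt / heat_capacity

      print(f"surface {T[0]:.0f} K, top {T[-1]:.0f} K")   # a steady temperature gradient emerges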

    2)

    "He says that observed heat from the earth is not in balance, the heat flux from the sun that heats earth is larger than the amount of heat that earth emit to space. I find that logical, the earth is not equally warm throughout, and then it has to emit less energy. Only when the system is equally warm in every point inside, it emits as much heat to space as it receives."

    You have taken a requirement for a body heated externally and equally from all directions, and assumed it is a universal condition.  It is not.

    To take a simple example, consider a spherical body having the same thermal conductivity throughout, bathed in a fluid of uniform temperature, but having a significant heat source at the center.  According to you it must have the same temperature throughout before energy in can equal energy out.  But, based on Fourier's law of conduction, if there is no temperature gradient, there is no movement of energy by conduction.  It follows that, based on your theory, the heat from the source at the center can never leave, which must result in an infinite energy build-up at the center.
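
    To make the step explicit (a standard textbook result, assuming a uniform sphere of radius R and conductivity k with a source of power Q at the centre), the steady-state solution of Fourier's law is

      T(r) = T_s + \frac{Q}{4 \pi k} \left( \frac{1}{r} - \frac{1}{R} \right),

    so the interior must be warmer than the surface temperature T_s even when energy out exactly balances energy in: a permanent temperature gradient, not uniform temperature, is what carries the central heat to the surface.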

    Your assumed requirement does not even describe such very simple models.  It has been falsified, in fact, since Fourier's experiments that led to his seminal work.  It certainly does not apply to the complicated situation of an atmosphere, or a large, massive rotating spheroid heated intensely from one side and situated in a heat bath of near zero degrees absolute, ie, to the Earth.

    Your claim is also refuted by the Earth itself, which has existed for long enough, with a very stable energy source, that it is in a near steady state.  If your supposed condition held, then there would be no significant difference in temperature with altitude.  Despite that, ice has existed at altitude in the tropics for hundreds of thousands of years. 

    3)

    "Hansen wrote about satellite measurements showing an imbalance of 6.5W/m^2 averaged over 5 years. Then he says it was thought to be implausible and they made instrumentation calibrations to align the devices with what the models say, 0.85W/m^2."

    Satellite measurements currently suffer a disadvantage, in that while they are very accurate in showing relative changes in Total Solar Irradiance (TSI) and Outgoing Longwave Radiation (OLR), they are fairly inaccurate in showing absolute values.  This was known from design specifications, and also from comparison of the data from instruments of the same, or different, design over the same period, as here:

    That means that, while we can know the annual change in the energy imbalance quite accurately, we cannot know its absolute value from satellites alone.  Two different methods are used to compensate for this.  In the past, the values from climate models were used of necessity.  Since the advent of Argo, the rise in OHC is sufficiently well known that it can be used to calibrate the absolute energy imbalance.  Hansen discusses both methods (which approximately agree, and certainly agree far better with each other than either does with the value from the satellites).  Further, the specific use of computers you mention was not Hansen's, but that of Loeb (2006).

    4)

    "How can forcings be known accurately if they are not a result of measurements? Not any of the studies show how any numbers of forcing has been achieved."

    Hansen does not say the forcings are known accurately.  Rather, he shows the probability density functions of the forcings:

    As can be seen, the 95% confidence limits of the greenhouse gas forcing amount to a range of about 1 W/m^2, or approximately a third of the best estimate forcing.  In contrast, the aerosol forcing has a 95% confidence limit range of about 3 W/m^2, or just over twice the best estimate.

    5)

    "And I can´t find any descriptions of the heat flow the way I think it should be done, or rather, the way I like it."

    Given the level of understanding of thermodynamics shown by you in your claims about equal temperature, it is neither a surprise nor a problem that you cannot find descriptions of heat flow the way you like.  GCMs do use, however, the standard laws of thermodynamics, and of heat flow in its various forms.

  • Models are unreliable

    SemiChemE at 13:39 PM on 23 March, 2017

    Tom Curtis @1034, Thanks for understanding the point I was trying to make and giving a better explanation than I could have (see post #1035) for why paleoclimate data from >30 million years ago may not be useful for predicting the Earth's climate sensitivity to CO2 in modern times.

    As for my conclusion, your post suggests I was not clear in stating it, since your argument appears to be about the likely range of climate sensitivities. I did cite a paper (or papers) reflecting a lower climate sensitivity, but my point in doing so was to highlight potential flaws in the models that might cause them to make improper predictions about future climate trends.

    My intended conclusion was that climate models are still quite crude and unreliable for predicting the future climate. I do have hope that the models will get better over time, especially in light of modern data collection techniques (e.g. satellites, Argo floats, etc.), which will enable modellers to narrow the acceptable ranges of the parameters that are currently used to adjust the model outputs.

    I also argued that paleoclimate data is not sufficient to completely validate any given model due to:
    1. Limited accuracy and precision
    2. Poor temporal resolution
    3. Significant gaps in global coverage
    4. Limited visibility into important historical factors, including cloud behavior, aerosol and particulate variations, ocean currents, etc.

    Finally, while I believe my statements about Paleoclimate data to be true, I am certain there is a literal army of climate scientists working to address these shortcomings and I would welcome any suggestions for a good summary on the latest state of the art in understanding our planet's climate history.

  • Over 31,000 scientists signed the OISM Petition Project

    Tom Curtis at 10:06 AM on 26 February, 2017

    Deaner @38, and Kirdee @37, you may be waiting a long time for a detailed rebuttal of the paper accompanying the OISM petition.  That is because the paper constitutes a Gish gallop.  It is so dense with cherry picks, data taken out of context and other errors that it would take a paper just as long simply to provide links to the related rebuttals.  Given that all of the claims can be (and have been) rebutted on SkS in relation to other issues, the time that would be involved in tracking down all the references and composing a rebuttal is not sufficiently well rewarded.

    To give you an idea of what I mean, I will consider just a few claims made by the paper.

    The paper leads with a Sargasso Sea proxy from Keigwin (1996):

    It is a real proxy, and I do not know of any problems with Keigwin (1996).  What I do know (and which should be obvious) is that no proxy from a single location is a proxy of global temperature.  To think it is would be as absurd as thinking that temperatures in Darwin, Australia must vary in sync with those of Boston, Massachusetts.  Because temperatures in different regions do not vary in sync, when taking a global average they will regress towards the mean.  Large variations will be evened out, and global mean temperature peaks (and troughs) are unlikely to coincide with peaks (and troughs) of individual regions.
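
    A toy illustration of that regression-to-the-mean point (synthetic series, purely illustrative):

      # Averaging many regional records whose excursions are not synchronized flattens
      # the peaks that individual records show. Synthetic series only.
      import numpy as np

      rng = np.random.default_rng(2)
      n_regions, n_years = 73, 3000
      kernel = np.ones(100) / 100                    # smoothing to mimic slow regional excursions
      regional = np.array([np.convolve(rng.normal(0, 1, n_years), kernel, mode="same")
                           for _ in range(n_regions)])

      global_mean = regional.mean(axis=0)
      print(f"typical regional range : {np.mean(regional.max(axis=1) - regional.min(axis=1)):.2f}")
      print(f"range of global average: {global_mean.max() - global_mean.min():.2f}")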

    Robinson, Robinson and Soon (hereafter RRS) will have nothing of that, and conclude from a single proxy that:

    "The average temperature of the Earth has varied within a range of about 3°C during the past 3,000 years. It is currently increasing as the Earth recovers from a period that is known as the Little Ice Age, as shown in Figure 1. George Washington and his army were at Valley Forge during the coldest era in 1,500 years, but even then the temperature was only about 1° Centigrade below the 3,000-year average."

    In contrast to their finding, if you look at a genuine multi-proxy reconstruction of Holocene temperatures (in this case 73 proxies from diverse regions), you see that global temperatures have varied within a 1 to 1.5 C temperature range, and that "Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history", including, as it happens, the MWP:

    RRS have created an entirely false impression by using clearly inadequate, and cherry picked, data.

    Next consider their use of Oerlemans (2005) regarding glacier length, which RRS show as follows:

    For comparison, here is the actual figure (2B) from Oerlemans (2005):

    You will notice that RRS show the figure inverted.  You will also notice that while the all-glaciers figure (in red) jogs down towards the end, it is only the "Alps excluded" figure that jogs up at the end, as shown (once allowing for the inversion) by RRS.  From that evidence, they have deliberately chosen the more restricted data, and chosen it because it better fits their narrative (because it is smoother).

    What is worse, they knew and neglected the fact that Oerlemans (2005) used the data to reconstruct global temperatures.  The result is very different from the impression they are trying to create:

    Temperatures are seen to be more or less stable from 1600, with the slight rise starting around 1850 in keeping with what has gone before.  The 20th century, however, is marked by an unprecedented, rapid rise in temperature.  That has led to an unprecedented and rapid retreat of glaciers.

    Once again RRS create a false impression by cherry picking the data, and by forcing us to rely on an intuitive, but false, understanding of the relationship between glacier length and temperature (a relationship modulated by slope and precipitation, factors Oerlemans takes into account but for which we have no information).  Worse, they portray the data from approximately 70 glaciers (ie, the total number of glaciers used excluding those from the Alps) as though it were the full 169 glaciers considered.

    I could go on, but you will already see from my brief treatment of just two points how extensive a full treatment of RRS would be.  You will also have noted the dishonest tactics used repeatedly by RRS in their paper.

  • NOAA was right: we have been underestimating warming

    Philippe Chantreau at 04:52 AM on 6 January, 2017

    Cooper13, anything and everything will be seized by some as an opportunity to claim that the data are unreliable, regardless of the vacuity of the argument. We live in a post-reality, post-information world that is no better connected to reality than the Melanesian cargo cults. There is nothing that can be done to prevent some from making noise about the unreliability of this or that; the noise itself is their goal and achievement.

  • Most of the last 10,000 years were warmer

    Tom Curtis at 09:44 AM on 12 June, 2016

    Mike Hillis @80, yes, "here we go again with the air bubbles".  Just because you do not understand the mechanisms involved does not mean they are invalid, nor that they have not been properly explained in Alley (2000a) and (2000b), as noted @69 above.

    Specifically, Alley (2000a) states:

    "Temperature gradients cause gas-isotope fractionation by the process of thermal diffusion, with heavier isotopes migrating toward colder regions. Diffusion of gases through pore spaces in firn is faster than diffusion of heat, so the isotope signal reaches the bubble-trapping depth before the heat does, and the isotope anomaly is recorded as the air is trapped in the bubbles (8). The degree of enrichment reveals how big the temperature difference was, and thus the magnitude of any abrupt climate change. In addition, the number of annual layers between the record in the ice and in the bubbles of an abrupt climate change is a known function of temperature and snow accumulation; using snow-accumulation data, one can learn the absolute temperature just before the abrupt climate change (8)."

    Alley (2000b) states:

    "New gas-isotope techniques pioneered by Severinghaus, Sowers, Brook et al., (1998) o!er this high-frequency calibration at important transitions. On cold ice sheets with little or no melting, the transformation of snow to ice takes decades to millennia, and there are tens of meters of old snow called firn above the depth at which bubbles are formed. Wind-mixing is limited to the top few meters, so tens of meters of firn exist in which gases exchange with the free atmosphere by di!usion alone.

    Gravitational fractionation causes the gases trapped by bubbles forming at the bottom of the firn to be very slightly heavier than the free atmosphere, in predictable and regular ways (Sowers et al., 1989). However, following abrupt climate changes, the trapped-gas composition is also perturbed slightly by thermal diffusion."

    Reference 8 in Alley (2000a) is just Severinghaus, Sowers, Brook et al., (1998), which states:

    "Observations of thermal diffusion in nature have thus far been limited to gas-filled porous media such as sand dunes and firn, where advective mixing is inhibited by the small pore size such that transport is mainly by diffusion. These observations confirm that transient seasonal and geothermal temperature gradients fractionate air approximately as predicted.


    After an abrupt climate warming, transient temperature gradients lasting several hundred years should arise in the upper part of the firn owing to the thermal inertia of the underlying firn and ice. The heavy gas species such as 15N14N and 40Ar should therefore be preferentially driven downwards, towards the cold deeper firn, relative to the lighter species (14N2 and 36Ar). An important point is that gases diffuse about 10 times faster than heat in firn, so the thermally fractionated gas will penetrate to the bottom of the firn long before the temperature equilibrates, and the gas composition will closely approach a steady state with respect to the firn temperature such that equation (1) should be valid."

    Your gross misunderstanding of the process involved clearly shows that you did not read the linked references (which are only the scientific articles we are discussing in the first place).  Your strongest argument appears to be your invincible ignorance.

    By 2010 this method was sufficiently refined that Kobashi et al (2010) and Kobashi et al (2011) could use it to construct an independent GISP 2 paleo temperature record.  In 2000, however, Alley only used it as a supplement to his basic d18O paleothermometer.  But because he used it as a supplementary thermometer, the depth at which the nitrogen and argon isotopes can be used to effectively provide a temperature record becomes important.

    I will note that I have incorrectly used the term "diffraction" when I should have used the term "fractionation".  That is such a massive error compared to your identifying the wrong isotopes, the wrong method, and falsely claiming the gas isotopes would equilibrate with the ice isotopes, that it must obviously prove your point /sarc

    Your complete befuddlement with regard to gravitational fractionation as a paleothermometer is as nothing compared to your comments on total gas content.  You are very specific that "[Cuffey and Clow] found that the elevation of the Greenland Summit did not change at all during the Holocene."  This is a point you are so secure on that you make it the foundation of your claim that you had the relevant knowledge to exclude any possible elevation correction in the recent record.

    It even has some basis in Cuffey and Clow, who state, "The corresponding elevation (relative to sea level) histories show a near-constant Holocene elevation".  The only problem is that the full sentence states, "The corresponding elevation (relative to sea level) histories show a near-constant Holocene elevation for AL = 50 km, and a decreasing Holocene elevation for larger retreats (Figure 2b)."  (My emphasis)  This relates to the discussion of three assumptions about modeling of elevation based on marginal retreats of the ice.  As is clear, they are only committed to "near-constant Holocene elevation" if marginal retreats match the 50 km assumption, however they find that:

    "Which of these elevation curves is most reasonable?  Geologic evidence indicates the Greenland Ice Sheet margin had extended about 200 km in the southwest part of the island and about 150 km eastward during the last glacial [Funder, 1989; Funder and Larsen, 1989]. At the latitude of Summit, the western margin position is not well known but was probably at least 100 km extended. Thus we expect the histories with 100 km > AL > 200 km to be most appropriate."

    They go on to say:

    "The GRIP data of Raynaud and colleagues likewise show a large decrease in gas content during deglaciation, and in addition show a sizable gas content increase from early to late Holocene (Raynaud et al., this issue). If elevation changes of the ice sheet are responsible for these trends, then the ice sheet surface must have been at lower elevation during the glacial, have risen during deglaciation, and have decreased elevation again through the Holocene. This pattern compares best with our elevation curves for marginal retreat distances of 100 to 150 km. However, problems with interpretation of total gas content as elevation preclude a firm conclusion at this time."

    In other words, "[Cuffey and Clow] found that the elevation of the Greenland Summit did not change at all during the Holocene" only if you assume they found marginal retreats nearly a third less than those they actually found, and totally ignore the gas content data, which was reproduced in Vinther et al (2009):

    [Graph: GRIP total gas content, reproduced from Vinther et al (2009)]

    So, your defence against my claim that you certainly did not consider elevation changes is to assert that Cuffey and Clow did not find any elevation changes so that your failure to consider them was entirely justified.  By that defence you prove that you did not consider elevation changes, which Cuffey and Clow found to exist, and which gas data show to have fluctuated significantly in recent times (see GRIP total gas content in the graph above).  You thereby prove, not only that you are wrong about the elevation changes, but that you indeed did not consider them as I originally asserted.  Your defence merely proves my assertion justified. LOL

    For the record, the most recent air content data for the GRIP ice core is 169 BP (ie, 1781), and indicates an elevation at the Greenland Summit over 100 meters above current levels (as seen above).

    Also for the record, I note that Mike Hillis continues to argue by mere assertion; and has now resorted to asserting findings for scientific papers that are directly contradicted by those papers.  Just how much sloganeering is permissible at Sks?

  • Most of the last 10,000 years were warmer

    Mike Hillis at 13:01 PM on 8 June, 2016

    MA Rodger said @70:

    [Is it truly a "basic error with the OP" to state that "it takes decades for snow to consolidate into ice"? The OP statement quoted here is correct in itself. Further it does explain why Alley (2004) stopped his temperature reconstruction in 1855 which is a fundamental part of the post. I do not see here any "basic error."]

    It's not an error to say it takes time to consolidate snow to ice, but it is misleading, because the snow does not need to consolidate in order to measure the 18O isotopes, and because this is long before the argon/nitrogen bubble study there is no reason to consider the entrapment of bubbles in the ice via consolidation. The OP claims that there is no isotope proxy after 1855 for this reason, which is utterly wrong.

    MA Rodger said @70:

    [Perhaps a challenge should be set for Mike Hillis. Isotope data can be and is taken from snow and this data can be and is dated and added to ice core isotope data. But where is this data published authoritatively as a temperature series? Alley (2004) provides such a temperature series but only to 1855. Is there an ice core/snow temperature series that continues to a later date?]

    I already gave a link to this data series in posts 58 and 59. Those measurements were done on the ice cores by the University of Washington, after they had been moved to the US (Alley et al. made their measurements in a sheltered laboratory in Greenland). I see Tom Curtis links to that study in @73.

    Alley stopped his temperature data at 1855 in his paper on the Younger Dryas, not because that's the latest data he had, but more likely because he was writing about an event at the beginning of the Holocene and anything after 1855 was not relevant to the Younger Dryas. I'm sure that Alley had at his disposal all the data after 1855, and I'm sure he was also aware of the 1999 University of Washington measurements on the same cores.

  • Most of the last 10,000 years were warmer

    MA Rodger at 04:05 AM on 8 June, 2016

    Tom Curtis @71,

    Kobashi et al (2011) wasn't actually what I had in mind (although it does fit the description I presented @70). The method of Kobashi et al. is to use nitrogen & argon isotope ratios in the air bubbles. So this does not fit with the Mike Hillis assertion that the oxygen isotopes from H2O in snow layers provide a temperature proxy that can be seamlessly affixed to the ice isotope data, or that it is effectively the same data series.

    Kobashi et al. (2009) describes the method they use for their most recent proxy data thus:-

    "Our latest data for isotopes is 1950 C.E. as the air occlusion process is not completed for recent decades. For the period 1950-1993, the surface temperature is estimated heuristically by a forward model (Goujon et al. 2003) running various surface temperature scenarios to find the best fit with the borehole temperature record."

  • Temperature tantrums on the campaign trail

    Nick Palmer at 04:36 AM on 26 March, 2016

    Thanks Andy. I got fooled by a persistent denialist I was arguing with who was quoting a post Lubos Motl did about the Karl et al paper. The denialist was arguing that it was "typical alarmist bad science" to use "biased" ship's intake data and splice it on to the "much more accurate ARGO data". Motl himself did not make that mistake.

  • Models are unreliable

    Tom Curtis at 10:03 AM on 23 February, 2016

    FrankShann @960, you quote as your source the Oxford English Dictionary, but my print version of the Shorter Oxford gives an additional meaning of predict as "to mention previously", ie, to have said it all before.  That is equally justified as a meaning of 'predict' by its Latin roots, which are never determinative of the meaning of words (although they may be explanatory of how they were coined).  The actual meaning of words is given by how they are in fact used.  On that basis, the mere fact that there is a "jargon" use of the word means that 'predict' has a meaning distinct from 'forecast' in modern usage.  Your point three refutes your first point.

    For what it is worth, the online Oxford defines predict as to "Say or estimate that (a specified thing) will happen in the future or will be a consequence of something".  That second clause allows that there can be predictions which do not temporally precede the outcomes.  An example of the latter use is that it could be said that "being in an open network instead of a closed one is the best predictor of career success".  In similar manner, it could be said that forcings plus basic physics is the best predictor of climate trends.  This is not a 'jargon usage'.  The phrase 'best predictor of' turns up over 20 million hits on Google, including in popular articles (as above).  And by standard rules of English, if x is a good predictor of y, then x predicts y.

    As it happens, CMIP5 models with accurate forcing data are a good predictor of GMST.  Given that fact, and that the CMIP5 experiments involved running the models on historical forcings up to 2005, it is perfectly acceptable English to say that CMIP5 models predict GMST up to 2005 (and shortly after, with less accuracy, based on thermal inertia).  On this usage, however, we must say they project future temperatures, as they do not predict that a particular forcing history will occur.

    As a side note, if any term is a jargon term in this discussion, it is 'retrodict', which only has 15,000 hits on google.

    As a further sidenote, you would do well to learn the difference between prescriptive and descriptive grammar.  Parallel to that distinction is a difference between prescriptive and descriptive lexicographers.  The curious thing is that only descriptive lexicographers are actually invited to compose dictionaries - while those dictionaries are then used by amateur prescriptive lexicographers to berate people about language of which they know little.

    The only real issue with Dana's using 'prediction' is if it would cause readers to be confused as to whether the CMIP5 output on GMST was composed prior to the first date in the series or not.  No such confusion is likely so the criticism of the term amounts to empty pedantry.

     

  • Republicans' favorite climate chart has some serious problems

    FrankShann at 22:03 PM on 22 February, 2016

    The core issue here is merely semantic. Modellers use "predict" to mean how well their model performs on past data, even when the model has been adjusted (tweaked) in the light of knowledge of the dependent variable. This meaning of "predict" is jargon. The general meaning (well over the 97% consensus mark) is that predict means to make a statement about something that is unknown (such as GMST in 2030-2040, or the presence of gravity waves). For example, I take information about the results of football matches over the last 10 seasons and make a model that "predicts" the winners. Then I try different variables or transformations and get better "prediction". But the vast majority of people do not regard this as prediction - for prediction, they require my model to say which teams will win *next* season. That is, GMST after 2015, not 1880-2015. I was suggesting (and still suggest) avoiding jargon and using the widely accepted meaning of "predict" (as defined in the Oxford Dictionary) - which means that CMIP5 describes rather than predicts GMST for the vast majority of 1880 to 2015.

    Tom Curtis @19. I did not say global mean surface temperature (GMST) was "fed into" CMIP5, but I agree that I should have made it clear that I am not suggesting that global mean surface temperature (GMST) is an independent variable in the CMIP models. I *am* suggesting that the models have been adjusted in the light of how well they predict GMST (and other variables) over some or all of the period from 1880 to 2014 (the period shown in the graph). Knowledge of GMST during the period has influenced the development of the models.

    Tom Curtis @20. I am not attacking Dana's post - it is very helpful indeed (as I said @3). Also, I heartily agree that climate models are remarkably useful predictors of future climate, and vastly superior to the denialist attempts. I merely suggest that Dana consider altering one word (predict to describe) so that the post is more plausible to readers who are not statistical modelers (the vast majority of the population), so it reads "Climate models have done an excellent job describing [instead of predicting] how much temperatures at the Earth’s surface would warm", because this statement is supported by a graph plotting CMIP5 against GMST from 1980-2015 (and it still refutes John Christy's misleading implication that climate models do not describe past GMST well). The link you mention to the excellent Comparing Global Temperature Predictions article (which I printed and gave to my friends in 2011) occurs at the end of the post, far removed from the predict/describe statement.

    I am disappointed at the response to my efforts to help. I am not a climate scientist, but I have extensive experience with statistical modeling and scientific publication (I'm a member of the International Advisory Board of The Lancet). I tried to help because I think climate change is extremely important and that Skeptical Science is a very useful resource - and that it might benefit from advice from a non-climate scientist about how a post could be misinterpreted by other people who are not climate scientists. Perhaps I won't bother in future.

     

  • Republicans' favorite climate chart has some serious problems

    FrankShann at 12:03 PM on 22 February, 2016

    @7 and @11. Predict means to state what will happen in the *future* (Latin prae- "beforehand" + dicare "say"), or to state the existence of something that is *undiscovered* (such as my example of gravitational waves, or Bob's crude oil). 

    I am well aware of the sloppy use of "predictor variables" in statistical modelling (I am a regular user of Stata), but the correct term is independent variables and *not* predictor variables. The latter term is used (incorrectly) by some scientists because once a model has been developed the (known) independent variables are sometimes used to predict the (unknown) dependent variable.

    Bob says that prediction includes "any output of a model or theory that relates to data that was not given to the model before-hand (either explicitly as input, or implicitly as data used to derive a relationship included in the model)". Taylor (2012) says CMIP5 includes near-term "simulations focusing on recent decades and the future to year 2035. These 'decadal predictions' are initialized based on observations and will be used to explore the predictability of climate and to assess the forecast system's predictive skill." Global mean surface temperature observations to (about) 2014 were available before CMIP5, and were used to develop the model. The model has had direct or indirect input from the independent variable (GMST) during its development - this is completely different to the prediction of gravitational waves or the location of new deposits of crude oil.

    The graph presented by Dana is "based on observations" up to about 2014, so it models or describes these observations but it does not predict them (either in the future or as unknowns) . As the Taylor (2012) CMIP5 paper says, the model's predictive skills will be tested in the period from now up to 2035 and beyond.

    So I think Dana's usage is incorrect both technically and in terms of the common meaning of "predicting". Even if the usage were technically correct, if "the goal of Skeptical Science is to explain what peer reviewed science has to say about global warming" to the general public, then it should use words in the same sense as the general public, or make it very clear that jargon is being used.

    The post should say that the graph shows that (CMIP5) "climate models have done an excellent job describing [or modeling] how much temperatures at the Earth’s surface would warm" - not predicting.

  • The gutting of CSIRO climate change research is a big mistake

    Tom Curtis at 12:24 PM on 13 February, 2016

    ryland @25, if your charge is that Dr Marshall has not properly detailed his plans so that I lack essential knowledge on the issue, well then I agree.

    If your charge is that my 'speculation' was no more than that, you are wrong.  First, contrary to your implication, I have not speculated that RV Investigator will wind up climate research, or that the ARGO float program will continue its current rate of deployment.  I have merely pointed out that we have no specific assurance on these points, and that therefore Dr Marshall's "assurances" have not been to the point.  Second, even on that limited basis, my querying as to whether Dr Marshall's "assurances" have been sufficiently informative to actually reassure has been based on fact.

    Take the RV Investigator.  It was recently hired out to oil and gas companies because government funding of the ship was limited to 180 days of the year.  While concurrent research in addition to the oil exploration was conducted, that research was restricted to ecological research, as mentioned in the above article.  For a voyage commenced in November last year (possibly that above if delayed, or possibly a follow on voyage), research was again restricted to ecological research.  That RV Investigator conducts voyages in which climate research is not undertaken is a fact.  Not speculation.  Therefore Marshall's assurance that "The RV Investigator, operated by CSIRO for scientists from Australia and around the world as a state of the art research facility will continue to operate scientific voyages, gathering data every day at sea" provides no assurance of continued climate research by RV Investigator.  It may, under Marshall's plans - but the evidence for that has simply not been provided.

    Or consider the number of staff cut.  We are told that 100 of the 350 overall cuts will be from just two sections of Ocean and Atmosphere, the two most closely involved with climate research.  The sections of Ocean and Atmosphere are:

    • Coastal Development and Management
    • Earth System Assessment
    • Engineering and Technology
    • Ocean and Climate Dynamics
    • Marine Resources and Industries 

    Of these, Earth System Assessment and Ocean and Climate Dynamics are the most closely entwined with climate research.  I do not have direct figures for the number of staff in each, but across all five there are 420 staff.  If they are evenly divided, that means there are 168 staff in those two divisions, a calculation that ignores the number of administrative staff.  So on those figures, we are looking at a 60% cut in the climate-related research, although it is probably higher than that.  That is a lot more than the 24% you would estimate from the figure actually given by Dr Marshall.
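
    For anyone who wants to check the arithmetic, assuming the even split across divisions stated above:

    total_staff = 420     # all five Oceans and Atmosphere divisions
    cuts = 100            # positions lost, all from two of the divisions

    staff_in_two = total_staff / 5 * 2                                      # 168 if evenly divided
    print(f"cut within the two divisions: {cuts / staff_in_two:.0%}")       # ~60%
    print(f"cut quoted against all 420 staff: {cuts / total_staff:.0%}")    # ~24%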

    Unless we think the other three divisions are mere cyphers, there is no shadow of a doubt that Dr Marshall has deliberately concealed the impact of the cuts by quoting the larger, irrelevant figure rather than the current staffing levels of the two divisions that will actually experience the cuts.

    Finally, with regard to the computer model: if Dr Marshall were leaving sufficient staff to keep the model properly updated, the fact that the model is open source would have been irrelevant to his point.  That he thought it relevant, and defended the cuts on that basis, makes it plain that he does not envisage more than a skeleton staff maintaining the software, and therefore fewer staff than are required to keep the model up to date.

  • The gutting of CSIRO climate change research is a big mistake

    ryland at 09:50 AM on 13 February, 2016

    Tom Curtis @ 23.   Unlike mancan@18 I am surprised at the unusual amount of speculation and supposition in your discussion.  I too read the SMH but I also read the Australian and trust neither to give a totally unbiased report. You express reservations on the statements made by Dr Marshall on staff cuts, the RV, Argo and the climate models that have no basis in fact.  They are in fact pure speculation

    On the RV, Dr Marshall stated: "The second area of correction is our ability to support climate measurement in Australia. Cape Grim and RV Investigator are not under threat from these changes." Your interpretation is: "RV Investigator is a multi-function research vessel and can continue its voyages very easily without any research on climate (focussing instead on ecology, for instance)". What evidence have you that any of this will occur?  As far as I can determine it is again speculation with no basis in fact.

    On Argo, Dr Marshall: "We will also continue our contribution to the international Argo floats program which provides thousands of data points for temperature and salinity of our oceans; and we’ll be investing more in autonomous vehicles, using innovation to collect more data than ever before."

    Your comment is : "Nor does a continued contribution to the Argo floats program assure us that the level of contribution will remain the same."

    Any evidence that it won't? Marshall certainly gives no indication it will be changed. He specifically states "we'll be investing more".

    On climate models Dr Marshall states: "Our climate models have long been and will continue to be available to any researcher and we will work with our stakeholders to develop a transition plan to achieve this."

    You say "the phrasing of the assurance regarding the climate model suggests that it will not be used by CSIRO researchers, merely that it will be available to others (of which more later). More important, it contains no assurance of the continued development and testing of the model, without which it will be obsolete in 4-5 years."

    This is purely your interpretation of Marshall's phrasing.  Another interpretation could well be  "that as the statement says models will continue to be available etc, these models will be fit for purpose".  

    On the staff cutting to which you refer Dr Marshall said: "In our Oceans and Atmosphere business we have about 420 staff, not 140 as reported by some media, and after these changes we expect to have about 355, contrary to media reports."

    Your comment "This, however, seems like misdirection to me. Specifically, the 100 full time positions lost from the Oceans and Atmosphere section will be lost from just two out of five units. The question is, how many staff are their in the two units that will sustain the losses? Larry Marshall does not answer, and the answer is probably 140". "

    "Seems like misdirection to me" is a purely subjective assessment with no apparent basis in fact Why is there "probably 140"? That number is specifically referred to by Dr Marshall as being incorrect.  

    In conclusion, why is the climate science community, of which SkS is certainly a member, so vehemently hostile to any actions it considers a threat to its beliefs and activities?  The furore the appointment of Bjorn Lomborg generated, and the current hand-wringing and prophecies of doom about proposed cuts at CSIRO, epitomise the "to the ramparts" attitude of the climate science community at anything it perceives as a threat to its beliefs and importance.  To the unbiased observer this could appear to be more like knee-jerk paranoia than anything else.

  • The gutting of CSIRO climate change research is a big mistake

    Tom Curtis at 18:37 PM on 12 February, 2016

    Further information, and comments.

    First, Larry Marshall clarified the restructure on Monday 8th here.  Amongst other things he said:

    "In our Oceans and Atmosphere business we have about 420 staff, not 140 as reported by some media, and after these changes we expect to have about 355, contrary to media reports. We asked business unit leaders to focus their operational plans on growth, and growth within finite resources will always initially lead to making choices about what to exit. However, as painful as any redundancy is, for the majority of the 5,200 CSIRO employees there will be no change to their current circumstances as a result of these plans, and we will also recruit new people with new skills."

    This, however, seems like misdirection to me.  Specifically, the 100 full time positions lost from the Oceans and Atmosphere section will be lost from just two out of five units.  Both are heavily focussed on climate research.  The question is, how many staff are there in the two units that will sustain the losses?  Larry Marshall does not answer, and the answer is probably 140.  Marshall merely distracts us by inflating the denominator.

    Marshall goes on:

    "The second area of correction is our ability to support climate measurement in Australia. Cape Grim and RV Investigator are not under threat from these changes. The Cape Grim air pollution monitoring station which is a source of much of our greenhouse gas information will continue to be that source. Our climate models have long been and will continue to be available to any researcher and we will work with our stakeholders to develop a transition plan to achieve this. The RV Investigator, operated by CSIRO for scientists from Australia and around the world as a state of the art research facility will continue to operate scientific voyages, gathering data every day at sea. We also have an air archive which is a resource available to any researcher to investigate air changes over time. We will also continue our contribution to the international Argo floats program which provides thousands of datapoints for temperature and salinity of our oceans; and we’ll be investing more in autonomous vehicles, using innovation to collect more data than ever before."

    While happy to hear that Cape Grim will survive, I am less than sanguine about the other reassurances.  RV Investigator is a multi-function research vessel and can continue its voyages very easily without any research on climate (focussing instead on ecology, for instance).  Nor does a continued contribution to the Argo floats program assure us that the level of contribution will remain the same.  Finally, the phrasing of the assurance regarding the climate model suggests that it will not be used by CSIRO researchers, merely that it will be available to others (of which more later).  More important, it contains no assurance of the continued development and testing of the model, without which it will be obsolete in 4-5 years.

    Ryland above refers us to the Senate Estimates hearings, for which (unfortunately) a transcript is not yet available.  The SMH, however, reported on the hearings.  From them we learn that:

    1)  An original document planning this restructuring indicated the need for the loss of only 35 positions from Ocean and Atmosphere, which can reasonably be taken as the number of cuts necessary to implement the restructure without loss of significant, relevant capacity.  Apparently the increase from 35 to 100 positions was a top-down decision made without familiarity with the research being cut.

    '"Those numbers of 100 are very round," said one senior researcher, who had watched the live stream of the hearing and whose work may face the chop. "What was the rationale for coming up with them? We still don't know."'

    2)  The board was told of the level of cuts involved in the restructure just two days before the public announcement.  From that it is clear that this was not a decision made in consultation with the board, and ergo also not a decision whose rationale has been tested by independent scrutiny.

    3)  The executives making the decision had not adequately informed themselves of the details of the operations and research they were cutting.  This is evident in their having made several errors about that research in responding to Senate Estimates.  In particular:

    "For instance, they initially said the key Australian Community Climate and Earth System Simulator (ACCESS) model jointly worked on by the Bureau of Meteorology and CSIRO was "open-sourced", allowing for wide-ranging contributions that might offer the opportunity for savings."

    A belief that the software was open access may well have contributed to a belief that the CSIRO "climate models have long been and will continue to be available to any researcher" even while cutting the staff that operate those models (see Marshall's clarification, and discussion above).

    This is fairly crucial in that Senate Estimates is the only independent scrutiny of the suitability of the restructure, and for the executives to not have the basic facts underlying the restructure at their fingertips for Senate Estimates shows the numbers were chosen independently of an actual analysis of the number of staff needed to retain the capability Marshall claims will be maintained.  His clarification is therefore revealed as more a statement of faith than something of which he can genuinely reassure us based on analysis.  Worse, his faith inflated by a factor of three the number of cuts an actual analysis showed to be appropriate.

  • No climate conspiracy: NOAA temperature adjustments bring data closer to pristine

    Kevin C at 02:06 AM on 10 February, 2016

    jmath: I wonder if you could help me by providing some evidence for a couple of your claims, in particular:


    The unadjusted data like the ... ocean buoy data show for instance the 2015 was not the hottest year on record.


    This claim is puzzling. I'm not aware of Woodfortrees providing a buoy-only dataset. So I went to the raw ICOADS data here and calculated my own, using just the WMO buoys and no adjustments.

    Of course I may have made a mistake, so then I went to the University of Hawaii here, and downloaded their data. This is based on a different and independent set of buoys - the ARGO profiling buoys.

    The results are plotted below:

    As you can see, the results from two different sets of buoys calculated by different methods show remarkable agreement. Given that I used the raw WMO buoy data and my own code you can check for yourself that no adjustments were involved.
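
    For concreteness, a minimal sketch of that kind of calculation - grid the raw buoy SSTs with no adjustments, remove a per-cell climatology, and take an area-weighted mean of the observed cells. This is an illustrative reconstruction, not the actual code; the 5-degree grid and the variable names are assumptions.

    import numpy as np

    def buoy_only_anomalies(lat, lon, month, sst):
        """lat/lon/sst: raw buoy observations; month: integer month index
        starting at 0. Returns one global-mean SST anomaly per month."""
        lat, lon, sst = map(np.asarray, (lat, lon, sst))
        month = np.asarray(month, dtype=int)
        nmon = int(month.max()) + 1

        # bin observations into 5-degree cells, with no adjustments of any kind
        iy = np.clip(((lat + 90.0) // 5).astype(int), 0, 35)
        ix = np.clip(((lon % 360.0) // 5).astype(int), 0, 71)
        total = np.zeros((nmon, 36, 72))
        count = np.zeros((nmon, 36, 72))
        np.add.at(total, (month, iy, ix), sst)
        np.add.at(count, (month, iy, ix), 1.0)
        with np.errstate(invalid="ignore"):
            cell = total / count                 # NaN where a cell has no buoys

        # per-cell climatology by calendar month, then anomalies
        anom = np.full_like(cell, np.nan)
        for m in range(12):
            anom[m::12] = cell[m::12] - np.nanmean(cell[m::12], axis=0)

        # cosine-of-latitude weighted mean over the cells actually observed
        w = np.cos(np.deg2rad(np.arange(-87.5, 90.0, 5.0)))[:, None] * np.ones((1, 72))
        series = []
        for t in range(nmon):
            ok = ~np.isnan(anom[t])
            series.append((anom[t][ok] * w[ok]).sum() / w[ok].sum() if ok.any() else np.nan)
        return np.array(series)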

    Secondly, from the same sentence:


    The unadjusted data like the 14 satellites, the radiosondes ... show for instance the 2015 was not the hottest year on record.


    You seem to be claiming that Woodfortrees includes unadjusted satellite records. However the series up on Woodfortrees are heavily adjusted. The adjustments are documented in the publications of both the UAH and RSS groups, for example here.

    I cannot find any radiosonde data at all on Woodfortrees, however RATPAC-A shows 2015 as the hottest year on record at the surface by a wide margin.

  • Ted Cruz fact check: which temperature data are the best?

    David Lewis at 16:52 PM on 19 January, 2016

    Wouldn't the Argo float data indicating an almost steady rise in ocean heat storage be the most significant indicator that the planet is heating up?  It's physics, isn't it? 

    A bit of a shift in heat distribution in the ocean takes place, i.e. El Nino, and a major shift in global surface temperature results. The ocean is a big dog and average global surface temperature, or even less, average mid tropospheric temperature, are tiny tails. 

    The only thing that could account for the ongoing accumulation of heat in the oceans is that there is a planetary energy imbalance. 

    Running satellite data through a model to compute average global mid troposphere temperature is basically irrelevant compared to this, no matter what it says. 

  • Surface Temperature or Satellite Brightness?

    Kevin C at 21:10 PM on 16 January, 2016

    We can also check different subsets of the weather stations against each other, as I do in the video. We can check different SST platforms against each other, such as weather buoys against Argos. We can check island weather stations against surrounding SSTs. We can check in situ observations against skin temperature data from infrared satellites. We can check in situ observations against reanalyses based on satellites (including MSUs) or barometers and SSTs. All of these have been done, and more such comparisons are in the pipeline.

    The UKMO Eustace project will be relevant in future too.

  • A Buoy-Only Sea Surface Temperature Record Supports NOAA’s Adjustments

    Zeke Hausfather at 01:41 AM on 31 December, 2015

    Hi dazed and confused,

    Thanks for the good questions; science should always be skeptical (hence the name of this site!), so there is no harm in pushing for clarification.

    I agree that in a perfect world we would have created a separate ship-only record to compare to our buoy-only record to better assess the magnitude of buoy bias. However, that's not really what we were focusing on for this project.

    We wanted to look at whether ERSST v3 or v4 was more accurate in recent decades. The main differences between the two are the buoy adjustments and the NMAT-based ship corrections. Both of these issues arise from the fact that the network is composed of inhomogeneous sensors; ships themselves are not easily intercomparable as they don't all have the same instrument and engine room configurations, and buoys and ship engine intake valves are clearly different instruments.

    However, there are relatively homogeneous instruments available: the buoys themselves. They have nearly identical sensor setups across all the buoys, and should provide a relatively unbiased estimate of SSTs in areas where buoys are present. Thus buoys provide a good test for ERSST v3 vs. v4: whichever one is more similar in trend to the unbiased buoy-only record should be the more accurate one, at least for the period of overlap with buoys.
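
    A sketch of the test being described: compute the trend of each ERSST version and of the unadjusted buoy-only series over their common period, and see which version comes closer (series and function names here are placeholders):

    import numpy as np

    def trend_per_decade(years, series):
        # least-squares slope, degrees C per decade
        return np.polyfit(years, series, 1)[0] * 10.0

    def closer_to_buoys(years, buoy_only, ersst_v3, ersst_v4):
        target = trend_per_decade(years, buoy_only)
        diff_v3 = abs(trend_per_decade(years, ersst_v3) - target)
        diff_v4 = abs(trend_per_decade(years, ersst_v4) - target)
        return "ERSSTv4" if diff_v4 <= diff_v3 else "ERSSTv3"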

    As you can see in our results, ERSST v4 is effectively identical to the buoy-only record, telling us both that the buoy corrections employed in v4 are accurate (and remove the ship-buoy transition bias present in v3), and that the NMAT-based ship corrections don't introduce any detectable trend bias relative to the buoy-only record, at least in grid cells containing both ships and buoys.

    Our intent wasn't to evaluate Kennedy's work on buoy-ship differences; rather, it was to evaluate the effectiveness of the corrections in the new version of ERSST. For that the buoy-only record provides a useful test, even if it's not independent of the ERSST records.

    There is interesting future work to use this and other datasets to compare ERSST and HadSST in more detail, but that will likely take the form of an academic paper rather than a blog post. This initial blog post was more of a rapid response to the politically-inspired criticism of NOAA, essentially pointing out that their results in recent years (and the resulting increase in trend) tend to make ERSST as a whole more similar to relatively homogeneous series like buoys or ARGO floats.

  • A Buoy-Only Sea Surface Temperature Record Supports NOAA’s Adjustments

    Kevin C at 19:59 PM on 30 December, 2015

    D&C:

    The reason for not comparing to the ship-only record is that the ERSST ship only record is not distributed as gridded data, which would be required. You can't draw conclusions from time series graphs alone due to the coverage issue - you need the gridded data. Further a ship only record does not include the ship-buoy transition bias, and so only contains some of the bias.

    The work has not been peer-reviewed at this point. However, neither have any of the critiques of ERSSTv4. If you are suggesting that climate science can be critiqued in political events and the media, but that scientists may only respond in the peer-reviewed literature, then the public will be systematically exposed to non-evidence based positions. This is exacerbated by the fact that the media favour political material over scientific material, and tend not to be interested in later followups.

    On the other hand, the Argo data are independent of our buoy-only record and are from peer-reviewed work. We also have published all of our code and data for you to review. Finally, Tom Karl of NOAA, who has appropriate expertise in the area, had sufficient confidence in our analysis to show it at AGU:

    The differences between ERSSTv4 and HadSST3 are an open question. Before Zeke added the Argo analysis I had no view on which was more realistic for the post-1995 period. With the Argo analysis, I'm leaning towards ERSSTv4 being the better record for this period.

    There is one part of your argument which is confusing to me: I think that you are arguing that because the buoys are upweighted in ERSSTv4, it is unsurprising that it shows good agreement to the buoy only record?

    In which case you are arguing that ERSSTv4 is already, to a close approximation, a buoy only record for the recent period. I'm fine with that. In which case the NOAA adjustments to the SST record play no significant role in the post-1995 trend, because the buoy records are unadjusted. So there can be no objection to the ERSSTv4 trends being a result of NOAA adjustments.

    That's a valid position. Our work is merely an independent reproduction of the recent record using a rather different methodology, which explicitly rather than implicitly excludes any adjustments, and using a minimal implementation (100 lines of code) to allow easier review.

  • A Buoy-Only Sea Surface Temperature Record Supports NOAA’s Adjustments

    Miriam O'Brien (Sou) at 01:25 AM on 1 December, 2015

    A big thank you to you both, Zeke and Kevin. This is really useful and not a surprise, given the work I know went into preparing ERSSTv4. I like it that you've now included a comparison with Argo data, which contradicts what some people have been claiming. I'll be referencing this article from time to time.

  • Spoiled ballots, spoiled views: an election snapshot from Powys, Wales, UK

    Langham at 19:54 PM on 15 June, 2015

    Scaddenp @ 74.

    UK government energy policy does not envisage reliance on renewables to the extent demanded in your probably unachievable and certainly unrealistic scenario - see the recent output from DECC on the subject, which clearly envisages a mixture of energy sources, less reliant on FF than at present but also making use of biomass and CO2 sequestration technologies - as well as nuclear.

    In the real-world circumstances, at the present time, it may well be that the UK and perhaps other nations turn increasingly to marine power, where this is available, to complement or replace land-based wind-power systems - and my assertion is absolutely true. The UK has a target for renewable power, and if some of that is met from marine sources, then the land-based component is reduced in direct proportion.

    Perhaps in certain middle-eastern or equatorial countries with abundant sunshine and sparse populations it is possible to envisage an energy policy based entirely on renewable (in this case solar) power, but in densely populated European countries, 100% reliance on renewable energy generated locally, with present technology, is just an idle pipedream.

    The concept of rural serenity seems, for reasons which are obscure to me, to aggravate some here. Nevertheless, a sufficient number of people in the UK and I imagine elsewhere prize it sufficiently that, while it would be delusional to imagine there can be any form of complete embargo on all rural development -  clearly there isn't - nevertheless any government or planning authority is ill-advised to interfere with it lightly.

  • Research downplaying impending global warming is overturned

    Tom Curtis at 08:28 AM on 4 June, 2015

    1) Dana, congratulations on publication of the response.

    2) @3, it is certainly inappropriate to average surface temperature records with satellite temperature records to produce a combined record.  Not only that, even averaging surface temperature records is dubious.  As there is considerable overlap between the stations used for the different records, the effect is to downweight the effect of stations not represented in all records.  As HadCRUT4 ignores the Arctic, de facto assuming arctic temperature increase equals the global average, averaging temperature records also downweights arctic temperature trends.

    In fact, if you want to use a single record it is difficult to justify using anything other than the record which employs the most raw data and uses the best statistical approach.  At the moment this is the BEST record.  HadCRUT4, which is the most commonly used record, is the worst on both of those criteria (ie, it has the least raw data, and employs the worst statistical method).  Ergo, as good practice, scientists should currently employ either only the BEST record or (as replication is important) all records shown separately.

    3)  When I read the abstract of the paper, I thought it one of the most damning critiques of another paper I have ever read.  Well worth a read:

    "Monckton of Brenchley et al. (Sci Bull 60:122–135, 2015) (hereafter called M15) use a simple energy balance model to estimate climate response. They select parameters for this model based on semantic arguments, leading to different results from those obtained in physics-based studies. M15 did not validate their model against observations, but instead created synthetic test data based on subjective assumptions. We show that M15 systematically underestimate warming: since 1990, most years were warmer than their modelled upper limit. During 2000–2010, RMS error and bias are approximately 150% and 350% larger than for the CMIP5 median, using either the Berkeley Earth or Cowtan and Way surface temperature data. We show that this poor performance can be explained by a logical flaw in the parameter selection and that selected parameters contradict observational estimates. M15 also conclude that climate has a near-instantaneous response to forcing, implying no net energy imbalance for the Earth. This contributes to their low estimates of future warming and is falsified by Argo float measurements that show continued ocean heating and therefore a sustained energy imbalance. M15’s estimates of climate response and future global warming are not consistent with measurements and so cannot be considered credible."

  • Models are unreliable

    MA Rodger at 03:45 AM on 6 May, 2015

    Moderator Response @903.

    I think accusing Klapper of being "insulting" is a bit strong. I would accept that refusing even to read a replying comment is outrageously discourteous. But my characterising Klapper's comments as "pretty-much wrong on every point" without explanation - now that could be construed as being insulting, although if asked I am happy to provide such explanation.

    Klapper @901.

    I do not see that I did put words in your mouth. You are on record as objecting to the statement "We have OHC data of reasonable quality back to the 1960s" by saying "I've looked at the quarterly/annual sampling maps for pre-Argo at various depths and I wouldn't agree that's true for 0-700 m depth and certainly not true for 0-2000 m. There's a reason Lyman & Johnson 2014 (and other studies) don't calculate heat changes prior to 2004 for depths greater than 700 m; they are not very meaningful." If you are stating that pre-Argo 0-2000 m data is certainly not of reasonable quality, and that trying to use it would be not very meaningful, this can only suggest that you are saying it is not useful data and thus it is junk. And others elsewhere have inferred the same from less well defined statements of your position on pre-Argo OHC data, inferences that did not meet objection from you.

    I have in the past seen the early OHC data point maps. Sparse data is not the same as no data, is it?

    And if you don't read something, how can you know what it is saying? Indeed, was I "hectoring" @900?

  • Models are unreliable

    Klapper at 21:43 PM on 5 May, 2015

    @MARodger:

    "...to happily junk 90% of it because it doesn't meet some level of precision..."

    You're putting words in my mouth I didn't say. What I did say and have said all along is that the new ARGO data are much, much better. Have you looked at the 5 year data point maps in question? I didn't bother reading your whole post, just as I didn't bother reading the last part of Tom Curtis #897 for the same reason: it's a non-quantitative hectoring lecture.

  • Models are unreliable

    Klapper at 14:20 PM on 5 May, 2015

    @Scaddenp #895:

    "... (and why is 2014 in age of Argo relevant?)..."

    I think you're referring to my comparison of the 5 year '68 to '72 inclusive data density map at 1500 m. I could have given you any 5 year period from 2005 on for the ARGO (i.e. 2005 to 2009 inclusive), but it's not important whether I used 1 year or 5 from the ARGO era, or whether it was 2011 or 2014 or whatever. The point is the data density now in the deep ocean is many orders of magnitude better than the 60's to 90's.

  • Models are unreliable

    Klapper at 14:16 PM on 5 May, 2015

    @Rob Honeycut/scandenp #894/#895:

    You're complaining not because I didn't utilize the data, which I did, but I think because I don't embrace it as much as I should. It is what it is and I accept that, however, it's not only myself that has doubts about reliability of the data. See these comments from Kevin Trenberth et al 2012:

    "...(XBTs) were the main source from the late 1960s to 2004 but, because depth or pressure of observations werent measured, issues in drop rate and its corrections plague these data and attempts to correct them result in varied outcomes.”

    My key point was that the data are certainly far better with the ARGO collecting system, and that analyses using these later systems should carry more weight than analyses based on the '60s/'70s/'80s data.

  • Models are unreliable

    scaddenp at 08:20 AM on 5 May, 2015

    Klapper - all measurement systems have issues. The question to ask is what can be determined from the measurements available and to what accuracy. This is dealt with in a number of papers, particularly here. See also the supplementary materials in the Levitus papers on OHC content. What do you perceive to be the errors in this analysis?

    Your earlier response on dismissing pre-Argo data simply pointed to the sparsity of deeper data (and why is 2014 in the age of Argo relevant?). To dismiss the 0-700 m warming because 700-2000 m data are sparse, however, means having a plausible mechanism for 700-2000 m cooling while 0-700 m heats.

    Looking over your posting history, it appears to me that you have made an a priori choice to dismiss AGW and seem to be trying to find something plausible, anything!, for dismissing inconvenient data rather than trying to understand climate. If this is correct, then do you have an idea of what future data might cause you to revise your a priori choice?

  • Models are unreliable

    Klapper at 04:17 AM on 5 May, 2015

    @CBDunkerson #892:

    I used datasets compiled using XBT inputs. As you can see, my graph shows ocean heat content changes going back to 1959 (the pentadal dataset starts in 1957, so a centred 5 year trend first occurs in 1959). However, given that the XBTs have problems with depth resolution, based on sink rates, they are nowhere near as good as the ARGO floats. Unfortunately the ARGO network only reached a reasonable spatial density in 2004 or 2005.

  • Models are unreliable

    CBDunkerson at 22:37 PM on 4 May, 2015

    Klapper wrote: "I don't want 'perfect data', I want the best data."

    Great! So what pre-Argo data is there which is better than the XBT results? None? Then guess what "the best data" for that time period is. :]

  • Models are unreliable

    Klapper at 07:44 AM on 2 May, 2015

    @KR #867:

    "...a combination of poor statistics and impossible expectations about 'perfect' data..."

    I don't want "perfect data", I want the best data. I think all posters would agree that thanks to Aqua/Terra/GRACE/ARGO etc. we have the much better data available in the 20th century than previously.

  • Models are unreliable

    KR at 06:46 AM on 2 May, 2015

    From Klapper"I've looked at the quarterly/annual sampling maps for pre-Argo at various depths..."

    Well, there are good reasons for NOAA to display 0-2000 data as pentadal (5-year) averages:

    0-2000 Global Ocean Heat Content - NOAA

    [Source - NOAA, slide 2]

    What Klapper appears to be expressing with his short term trends and dismissal of earlier OHC data is a combination of poor statistics and impossible expectations about 'perfect' data. 
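
    A pentadal value here is just a centred five-year average; an illustrative sketch:

    import numpy as np

    def pentadal_mean(annual_values):
        # centred 5-year running mean; the first and last two years are undefined
        x = np.asarray(annual_values, dtype=float)
        out = np.full_like(x, np.nan)
        out[2:-2] = np.convolve(x, np.ones(5) / 5.0, mode="valid")
        return out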

  • Models are unreliable

    scaddenp at 06:22 AM on 2 May, 2015

    Klapper, at the moment, your dismissal of pre-Argo data seems to be an argument from personal incredulity. If you believe the Levitus estimates of error margins on OHC to be incorrect, then can you please show us where you think the fault in their working is?

  • Climate sensitivity is low

    Klapper at 05:35 AM on 2 May, 2015

    @KR #347:

    "...We have OHC data of reasonable quality back to the 1960s"

    I've looked at the quarterly/annual sampling maps for pre-Argo at various depths and I wouldn't agree that's true for 0-700 m depth and certainly not true for 0-2000 m. There's a reason Lyman & Johnson 2014 (and other studies) don't calculate heat changes prior to 2004 for depths greater than 700 m; they are not very meaningful.

  • Climate sensitivity is low

    Klapper at 03:39 AM on 2 May, 2015

    @KR #338:

    "I don't think you can make any significant conclusions from such a short period of data".

    The quality data for OHC only begin once the ARGO system reached a reasonable spatial density (say 2004 at the earliest). However, I will look for some longer OHC/global heat gain data/estimates to match longer periods, say a 15 year period from 2000 to 2014 inclusive. The average for that period is a TOA energy imbalance of 0.98 W/m2 from the CMIP5 ensemble (multi-runs per model) mean rcp4.5 scenario.

  • Climate sensitivity is low

    Klapper at 00:09 AM on 2 May, 2015

    @MA Rodger #334:

    To cross-check my model vs actual comparison for TOA energy imbalance, I extracted data at the KNMI Data Explorer site from the CMIP5 Model Ensemble RCP 4.5 (all runs): the variables rsut, rlut, and rsdt, monthly data. I averaged the monthly global data into annual global numbers and calculated the TOA energy imbalance per year as rsdt - rsut - rlut (incoming solar minus reflected solar minus outgoing longwave).
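
    A sketch of that calculation, assuming the three monthly global-mean series have already been downloaded (variable names as in CMIP5; simple 12-month means, ignoring month-length weighting):

    import numpy as np

    def annual_toa_imbalance(rsdt, rsut, rlut):
        # net downward flux at the top of the atmosphere, W/m2:
        # incoming solar minus reflected solar minus outgoing longwave
        net = np.asarray(rsdt) - np.asarray(rsut) - np.asarray(rlut)
        nyears = net.size // 12
        return net[:nyears * 12].reshape(nyears, 12).mean(axis=1)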

    To compare to a published number, in this case I'll use the Hansen et al number from the GISS website linked above. I averaged the years from my model extraction data, in this case 2005 to 2010. The GISS number for the global TOA energy imbalance is 0.58 W/m2 +/- 0.15. This agrees with other published estimates for similar time periods.

    The average I get from my CMIP5 RCP 4.5 ensemble annual data, 2005 to 2010 inclusive, is 0.92 W/m2. The models appear to be running too hot by a substantial amount.

    My next experiment will be to compare these TOA CMIP5 data to OHC over a longer period, say 2000 to 2014 inclusive. Or maybe just OHC from 2005 to 2014, since the ARGO spatial density was essentially full coverage after 2004 or 2005. We can likely agree that the global energy imbalance shows up dominantly as ocean heat gain, although some of the imbalance goes into the atmosphere and melting continental ice.

  • Climate sensitivity is low

    scaddenp at 07:43 AM on 1 May, 2015

    Klapper - you are proposing to ignore OHC pre-Argo because there is only data to 700 m. However, if you wish to postulate that the huge change in OHC 0-700 m does not mean an energy imbalance, then you must also be proposing that there could somehow be cooling of the 700-2000 m layer to compensate for warming in the upper layer.

    I would also be interested in your opinion on the Loeb et al 2012 paper, given your claim that models and observations are at odds.

  • Climate sensitivity is low

    Klapper at 17:29 PM on 30 April, 2015

    @KR #327:

    The problem is the XBT data only go down to 700 m. If I had tried to use only 0-700 heat gain as my metric, I would be jumped on big time since I was "ignoring" the deeper ocean. The amount of sampling below 700 m prior to the ARGO network is extremely sparse, as noted in the Smith et al paper, particularly in the southern ocean.

  • Climate sensitivity is low

    Klapper at 17:25 PM on 30 April, 2015

    @Rob Honeycutt #326:

    "...assuming they have to be somehow correct..."

    Oh I don't assume they have to be correct at all. You can take the Net observations (satellite) from the Smith paper and throw them in the trashcan. However, that's not true of the ARGO data. Our knowledge of ocean heat is much much better since 2005 or 2004 than pre 2005 or 2004.

  • Climate sensitivity is low

    KR at 13:11 PM on 30 April, 2015

    Klapper - It is wholly unreasonable to discard ocean heat content data prior to 2005. While the XBT data have higher uncertainties than ARGO, and there have been several calibration issues with them that have recently been resolved, the sampling back to the 1960s is more than sufficient to establish long term growth in ocean heat content. There simply isn't enough deviation in temperature anomalies over distance to reject long term warming of about 0.6C/decade even with sparse XBT sampling. 

    For details on this, including evaluating the standard deviation of anomalies against distance, see Levitus et al 2012, specifically the "Appendix: Error Estimates of Objectively Analyzed Oceanographic Data", which speaks directly to this matter. The uncertainty bounds from Levitus et al are shown in Fig. 2 here. And they are certainly tight enough to establish warming. 

  • Climate sensitivity is low

    Klapper at 12:04 PM on 30 April, 2015

    @Tom Curtis #322:

    "...First, ...the TOA energy imbalance...from observations and models match closely except for the period of 1972-82"

    Where would you get observations from 1972 for the TOA energy imbalance? For that matter, exactly how accurate are the current observations for the TOA imbalance? There's a post over at the Guardian on the water vapour/climate change story by "MaxStavros" which claims the satellite numbers in raw form show an imbalance of 6.5 W/m2 at the TOA. Since we know that is impossible, the number has been adjusted down to something more believable. I can understand that the instruments on the satellite are precise but not accurate, but that means the "observations" are not that reliable. I'm guessing the most reliable number is ocean heat, but that is true only for the ARGO era, from 2004 or 2005. From the NODC data, the warming rate of the oceans, corrected to global area, is about 0.5 W/m2. This is close to other estimates. The following example is ocean plus melting plus land, but since most of the heat goes into the oceans we would expect the ocean-only and total figures to be close (and they are).
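
    The conversion from an ocean heat uptake rate to a global-mean flux is straightforward; a sketch with a purely illustrative uptake figure, not the actual NODC number:

    SECONDS_PER_YEAR = 3.156e7
    EARTH_SURFACE_AREA = 5.1e14       # m^2, whole globe, not just the ocean

    heat_uptake = 0.8e22              # J per year, illustrative value only
    flux = heat_uptake / SECONDS_PER_YEAR / EARTH_SURFACE_AREA
    print(f"equivalent global-mean imbalance: {flux:.2f} W/m2")   # about 0.5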

    Here's a quote from James Hansen et al 2012 at the NASA website: "We used other measurements to estimate the energy going into the deeper ocean, into the continents, and into melting of ice worldwide in the period 2005-2010. We found a total Earth energy imbalance of +0.58±0.15 W/m2 divided as shown in Fig. 1"

    http://www.giss.nasa.gov/research/briefs/hansen_16/

    Here's the problem with an energy imbalance of +0.58W/m2: the models show a much larger TOA energy imbalance. The GISS model shows +1.2W/m2, and the CMIP5 ensemble mean is +1.0 W/m2 for the 2000 to 2015 period.

  • Global warming hiatus explained and it's not good news

    DSL at 02:52 AM on 16 April, 2015

    Peter: "ARGO is unlikely to show heating. I think the floats only take a reading every 30 mins so that they are likely to miss an actual eruption; the warm water would likely just float up past them without being recorded. They’d also have to be situated over the correct position."

    This is the most ridiculous thing I've read in months.  Volcanic action that produces enough heat content to warm the El Nino 3/3.4 regions for months just magically slips by dozens of Argo floats.  This is what happens when a pet theory is forcefully driven through the actual data: Bizarro Physics.   

  • Global warming hiatus explained and it's not good news

    Peter Carson at 15:30 PM on 15 April, 2015

    Thanks scanddenp, in that you do provide scientific back-up to your arguments.

    1. Calculation: annual, using the low figure of the East Pacific Rise only and discounting the extra effect of the Galapagos Ridge.

    Height (2 km) x length (say 1,000 km) x Width (0.1 m) = 0.2 cu km

    EN happens one year in five on average. This gives about 1 cu. km per thousand km of ridge in the vicinity per El Nino.

    2. I don’t know of any sulphur isotope that could be used for dating or origin purposes. Please inform.

    3. Your “1/ El Nino should be correlated with change in undersea volcanic productivity. None that I can see.”

    Try Daniel A Walker (a geophysicist – I’ve spoken with him some years ago) Eos vol 76, 1995 p 33 to 36. “More evidence indicates link between El Nino and seismicity”.

    [Come in Spinner!]

    4. You seem confident in the current “theory” for EN. It should match data rather well - it’s not actually a theory but a description of events! It has no predictive capabilities whatsoever. For example, how does it explain how El Nino got its name, ie occurring near Xmas? It can’t.

    (I can! - but you’ll have to wait for me to put it onto my site.)

    What causes the Walker circulation to weaken? Why do the tropical westward winds drop preceding an EN, especially since the west now has a build-up of warm water pushed there? That should increase these winds!

    5. ARGO is unlikely to show heating. I think the floats only take a reading every 30 mins so that they are likely to miss an actual eruption; the warm water would likely just float up past them without being recorded. They’d also have to be situated over the correct position.

    Heat from volcanic activity on the bottom will not stay there waiting for its temperature to be taken but will rise.

  • Global warming hiatus explained and it's not good news

    scaddenp at 13:20 PM on 15 April, 2015

    Peter, if you want to propose a theory, then you have to ensure it is compatible with all known observational data, not just grab the bits that suit you. El Nino is not volcanic, pure and simple. There is a massive literature, with data, on its actual causes. The H2S that is painting fishing boats is from a biological source - the huge die-off that goes with the warmer temperatures. You can look this up, for goodness sake. H2S from organic die-off has a different isotopic signature from volcanic-source H2S, and there aren't undersea volcanoes where the H2S bubbles are observed.

    As for warmer water rising - are you aware of the ARGO network? Why is this then not observed? And why the massive discrepancy between the heat emitted from volcanoes and the actual heat content? How do you account for the actual spatial distribution of OHC?

    The extra heating from CO2 is actually observed. It is the correct magnitude to explain OHC and many many other things.

  • Measuring Earth's energy imbalance

    scaddenp at 07:14 AM on 13 March, 2015

    Chapter 2 of IPCC AR5 discusses the measurement in section 2.3.1. You might like to start with the references from there. I understand there are some difficulties with accuracy in the raw data, though, making it hard to get a precise measure of the magnitude. ARGO data may be a more accurate way to get at the TOA imbalance.

  • Another global warming contrarian paper found to be unrealistic and inaccurate

    Tom Curtis at 14:35 PM on 24 October, 2014

    Pierre-Normand @22, much of Roy Spencer's response depends on asserting the adequacy of one-dimensional models for assessing climate sensitivity.  That, in one respect, is a fair line of defence.  Spencer and Braswell (2014) used a one-dimensional model, ie, a single vertical profile of the top 2000 meters of the ocean using globally averaged values.  Because it uses globally averaged values, it necessarily treats all points of the global ocean as having the same values, and so much of Abraham's critique amounts to a critique of the adequacy of such models in this application.

    Spencer defends the adequacy of his model on the grounds that Hansen has purportedly claimed that, "... in the global average all that really matters for the rate of rise of temperature is (1) forcing, (2) feedback, and (3) ocean mixing."  Following the link, however, I find no such claim by Hansen.  He does claim that the global energy imbalance determines (in part) the final temperature rise from a forcing, but that is a far cry from asserting that treating only averaged values in a model will adequately determine when that will be (ie, determine the climate sensitivity factor).

    Interestingly, Hansen did say, "Ocean heat data prior to 1970 are not sufficient to produce a useful global average, and data for most of the subsequent period are still plagued with instrumental error and poor spatial coverage, especially of the deep ocean and the Southern Hemisphere, as quantified in analyses and error estimates by Domingues et al. (2008) and Lyman and Johnson (2008)."  It follows that, according to Hansen, Spencer's one dimensional model must be essentially useless over the period prior to 1970.  Indeed, Hansen goes on to write:

    "Earth's average energy imbalance is expected to be only about 0.5-1W/m2. Therefore assessment of the imbalance requires measurement accuracy approaching 0.1 W/m2. That target accuracy, for data averaged over several years, is just becoming conceivable with global distribution of Argo profiling floats. Measurements of Earth's energy imbalance will be invaluable for policy and scientific uses, if the observational system is maintained and enhanced."

    Based on that, given the monthly data required for the empirical validation of Spencer's model, according to Hansen the model would be useless for all periods prior to 2004 at the earliest.  (Note, long term averages are more accurate than monthly variations.  It is the latter, required by Spencer, that are inadequate prior to 2004; whereas estimates of the former would still be reasonable, although with wide error margins.)

    This brings us to the second basis on which Spencer claims adequacy, a claimed superior empirical fit to that of GCMs.  That superior fit, however, is unimpressive, both because it is purely a function of having tunable parameters, and because it does not take into account that while GCMs produce ENSO-like fluctuations, they do not produce them in sync with the observed ENSO fluctuations.  In contrast, Spencer imposes the observed ENSO fluctuations onto his model (which is not superior empirically until he does).  Thus, the purported superior empirical fit is not an outcome of the model but an input.

    All this, however, is beside the point.  While nearly all climate scientists would see a use for one dimensional models, very few (other than Spencer) would consider them adequate to determine climate sensitivity with any real accuracy.  They give ballpark figures only, and are known to lead to significant inaccuracies in some applications.

    Turning to more specific points, one of Abraham's criticisms is the use of an all-ocean world, a point Spencer responds to by appealing to the adequacy of one-dimensional models.  However, in using an all-ocean world, Spencer assumes that the total heat gain by the Earth's surface equals the ocean heat gain from depths of 0-2000 meters.  That is, he underestimates total heat gain by about 10%, and consequently overestimates the climate sensitivity factor by about the same margin (ie, underestimates ECS by about 10%).

    That is important because his estimated climate sensitivity factor with ENSO pseudo-forcing (Step 2) is 1.9 W m-2 K-1.  Correcting for this factor alone, it should be 1.7 W m-2 K-1, equivalent to an ECS of 2.2 C per doubling of CO2.  The step 3 ECS would be much lower, but it only gains a superior empirical fit to step 2 on one measure, and obtains that superior fit by the tuning of eight different parameters (at least).  With so many tuned parameters for a better fit on just one measure, the empirical support for the step 3 values is negligible.
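
    For readers following the arithmetic: converting a climate sensitivity factor (in W m-2 K-1) to an ECS uses the relation ECS = F2x / lambda, where F2x is the standard doubled-CO2 forcing of roughly 3.7 W/m2 (a textbook value, not one quoted in the comment). A minimal sketch:

        # ECS (deg C per doubling of CO2) from a climate sensitivity factor lambda (W m-2 K-1),
        # assuming a standard doubled-CO2 forcing of ~3.7 W/m^2.
        F_2X = 3.7

        def ecs_from_lambda(lam):
            return F_2X / lam

        print(round(ecs_from_lambda(1.9), 1))   # Spencer & Braswell's step-2 value -> ~1.9 C
        print(round(ecs_from_lambda(1.7), 1))   # the corrected value above -> ~2.2 C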

    A second of Abraham's criticisms is the failure to include the effects of advection.  Spencer's response that his model includes advection as part of the inflated diffusivity coefficients would be adequate if (1) the coefficients varied between all layers instead of being constant for the bottom 26 layers, and (2) they were set by empirical measurement rather than being tunable parameters.  The first point relates to the fact that advection may differentially carry heat to distinct layers, and hence the effects of advection are not modelled by a constant ocean diffusivity between layers, even on a global average.

    There may be other such nuances in relation to particular criticisms that I am not aware of.  The point is that the appeal to the features of a one-dimensional model does not justify Spencer and Braswell in ignoring all nuance.  Therefore some of Abraham's criticisms, and possibly all of them, still stand.

    Finally, I draw your attention to Durack et al (2014).  If their results are borne out, it will result in Spencer and Braswell's model, with current parameter choices, predicting an ECS 25-50% greater than the current estimates, ie, 2.7-3.3 C per doubling of CO2.  Of course, the parameters are tunable, and Spencer and Braswell will without doubt retune them to get a low climate sensitivity once again. 

  • Empirical evidence that humans are causing global warming

    jmath at 01:18 AM on 5 September, 2014

    This document is misleading:

    1) Water vapor accounts for 50% of greenhouse "effect" and is counted as a greenhouse gas.

    2) Clouds account for 25% of the greenhouse effect

    3) CO2 is 20%

    The graph is obviously more impressive by leaving out these 2 more significant contributors.

    I also find the graph showing ocean heat content increasing to be highly questionable.  Since the ocean holds 90% of the retained heat and we still know so little about it (in particular, the ARGO floats have been in existence for only about 15 years, and even they do not capture the entire ocean heat content), it is impossible to accept the graph as fact.  Even if we assume the graph is correct, ocean heat content is enormous and obviously contributes as part of the "blanket" that is unmentioned in your blanket analysis.  You seem to be implying that the atmosphere provides the full explanation for the slow movement of temperatures on the earth, when a large part clearly belongs to the oceans, which store 1000 times the heat content of the atmosphere.

    You are very good at presenting the first parts of the argument, i.e. the greenhouse effect and the increase in CO2.  However, the conclusion that our temperature increase over the last 50 years is due to CO2 is not proven, because you exclude 75% of the greenhouse gases and their changing composition, and you exclude the ocean, which is 99.9% of the heat storage.  If 90% of the heat went into the ocean and the ocean did not re-radiate that heat back in some form, the oceans would rise negligibly in temperature and the atmosphere would be similarly little affected.

    Also, a smart student would notice another big hole in the argument.  The earth is volcanically active and has a hot magma layer that is leaking heat into the environment under the ocean and the surface.  There are also other concerns, as pointed out by the IPCC, including albedo and aerosols, which could have major impacts on the heat retained by the earth's atmosphere.

    Your argument appears oriented to a 3rd grader, not even a high school student.  A smart high school science student would see the missing pieces, or be very upset when told you were excluding more than 75% of greenhouse gases and didn't mention the incredible role of the ocean.  I think you can still make the argument, but it has to include these complexities.

    Something I didn't understand from the beginning was the hubris of climate scientists in speaking as if this is so simple, even though even a high school student could see the holes in the argument.  The point you make that CO2 contributes heat is fine (with my caveats), but to go beyond that and ascribe it as the sole reason for temperature variation is reaching and weak.  You simply rush to the point without explaining the complexity of how the system will respond to the changes.  If your purpose is to show that the effect of CO2 is proven, I think you are better off leaving out the latter part of your argument.

    It would also be very important to show a graph of how the radiation at the surface has changed because of the changing CO2.  Showing a single graph implies that you could simply leave all the other elements the same and move the CO2 contribution up or down exactly based on the movement of CO2.  Having such a graph of real data would be the "proof" that would complete this argument nicely.

  • It hasn't warmed since 1998

    citizenschallenge at 12:07 PM on 25 August, 2014

    Update:

    "Varying planetary heat sink led to global-warming slowdown and acceleration"
    Xianyao Chen, Ka-Kit Tung

    Science 22 August 2014:
    Vol. 345 no. 6199 pp. 897-903
    DOI: 10.1126/science.1254937
    RESEARCH ARTICLE

    http://www.sciencemag.org/content/345/6199/897

    ===============

    August 21, 2014
    Cause of global warming hiatus found deep in the Atlantic Ocean
    Hannah Hickey

    http://www.washington.edu/news/2014/08/21/cause-of-global-warming-hiatus-found-deep-in-the-atlantic-ocean/

    Following rapid warming in the late 20th century, this century has so far seen surprisingly little increase in the average temperature at the Earth’s surface. At first this was a blip, then a trend, then a puzzle for the climate science community.

    More than a dozen theories have now been proposed for the so-called global warming hiatus, ranging from air pollution to volcanoes to sunspots. New research from the University of Washington shows that the heat absent from the surface is plunging deep in the north and south Atlantic Ocean, and is part of a naturally occurring cycle. The study is published Aug. 22 in Science.

    Subsurface ocean warming explains why global average air temperatures have flatlined since 1999, despite greenhouse gases trapping more solar heat at the Earth’s surface.

    ...

    The results show that a slow-moving current in the Atlantic, which carries heat between the two poles, sped up earlier this century to draw heat down almost a mile (1,500 meters). Most of the previous studies focused on shorter-term variability or particles that could block incoming sunlight, but they could not explain the massive amount of heat missing for more than a decade.

    “The finding is a surprise, since the current theories had pointed to the Pacific Ocean as the culprit for hiding heat,” Tung said. “But the data are quite convincing and they show otherwise.”

    Tung and co-author Xianyao Chen of the Ocean University of China, who was a UW visiting professor last year, used recent observations of deep-sea temperatures from Argo floats that sample the water down to 6,500 feet (2,000 meters) depth, as well as older oceanographic measurements and computer reconstructions. Results show an increase in heat sinking around 1999, when the rapid warming of the 20th century stopped.

  • Climate models accurately predicted global warming when reflecting natural ocean cycles

    scaddenp at 09:56 AM on 24 July, 2014

    This is my personal view on this paper. The paper takes a novel approach to testing the hypothesis that the poor match between the ensemble mean and observations is due to the fact that the model mean includes many different states of ENSO, whereas the observations are "one member of the ensemble". The paper does demonstrate that a mean created from runs which are in phase with the actual state of ENSO is a closer match to observed global temperature. This does underline the importance of ENSO for short-term global temperatures. I am sure everyone is very surprised by that result (not!).

    I do not think the paper can preclude (and the authors make no such claim) that there are other problems with the modelling. Beyond well-known problems with models, the question about accuracy of aerosol forcing seems to need more data (at least another year) from the Argo network. There could obviously be other errors and inaccuracies still hidden in modelling of feedbacks.

    However, what you can conclude is that there is not as yet conclusive evidence of some unknown failure in the models on the basis of a mismatch between ensemble mean and observations: It would appear that issue of ENSO is quite sufficient to explain the mismatch in global surface temperature for such a short term trend.

  • Water vapor is the most powerful greenhouse gas

    Arthur123 at 00:47 AM on 16 May, 2014

    I have not read all these comments. There has been so much fudging of historical climate data by GISS and other government outlets to make the past look colder than today that I don't really see any evidence the earth is really warming at any truly measurable rate. The HadCRUT data shows no warming, the ARGO ocean data shows no warming, Antarctica is gaining record ice, and the Arctic is showing some signs of ice growth too. Plus the USA just experienced one of the harshest winters of recent times. If CO2 increases water vapor in the atmosphere, then why are some blaming the west coast drought on AGW? The truth is these droughts have occurred many times in the historical past in the USA. A few years ago the Southeast was in severe drought. Not any more. The evaporation process in itself is a COOLING process, so more evaporation means more cooling, not less. When precipitation falls, latent heat is released back into the atmosphere. There is no net warming or cooling. The earth's atmosphere is a baroclinic system which is always trying to bring equilibrium to this dynamic system. It's this natural imbalance that keeps the system in motion and always unstable. 

  • Climate Models Show Remarkable Agreement with Recent Surface Warming

    scaddenp at 06:11 AM on 30 March, 2014

    I would also point out that Argo data is not an input into models. 

  • Climate Models Show Remarkable Agreement with Recent Surface Warming

    Rob Painting at 19:21 PM on 29 March, 2014

    "Some sources suggest that > 40% of Argo floats are either non- operational or produce questionable data"

    Let me guess, these 'sources' don't happen to be oceanographers, but are instead non-experts ideologically resistant to the whole idea of climate-driven policy?

    If readers are interested in the robustness of ocean heat measurements they should consider the IPCC AR5, Abraham et al (2013) & Von Schuckmann et al (2013). Yes the oceans are warming and the consequent thermal expansion of seawater is one of the main contributors to sea level rise.

    IPCC AR5 Chapter 3 states:

    "It is virtually certain that upper ocean (0 to 700 m) heat content increased during the relatively well-sampled 40-year period from 1971 to 2010"

    &

    "Warming of the ocean between 700 and 2000m likely contributed about 30% of the total increase in global ocean heat content (0 to 2000m) between 1957 and 2009. Although globally integrated ocean heat content in some of the 0 to 700m estimates increased more slowly from 2003 to 2010 than over the previous decade, ocean heat uptake from 700 to 2000 m likely continued unabated during this period."

    As for the models, see figure 3 in the post. CMIP5 seems to do a reasonable job of simulating surface temperatures over the last hundred years. With better forcing estimates going back in time they might do an even better job. It's certainly plausible based on the work of Schmidt et al (2014).  

  • Climate Models Show Remarkable Agreement with Recent Surface Warming

    Riduna at 10:56 AM on 29 March, 2014

    An interesting article on which I would make two observations.

    Firstly: the author points out that the accuracy of model output is dependent on reliable input – the GIGO factor. On this point one should perhaps question the reliability of Argo data. Just how reliable is it? Some sources suggest that > 40% of Argo floats are either non-operational or produce questionable data.

    Secondly: the author suggests that CMIP5 has been proven to be a reliable model when compared with observation – but is this really so? An improved model, maybe. (Overland et al (2014) questions the ability of CMIP5 to accurately show current or predict future temperature, particularly in higher latitudes.) Predicting average global surface temperature, even in the short term, is, as the author points out, an extraordinarily difficult and complex process, since it relies on the reliability and accuracy of a vast amount of data and on models which predict the interaction of these data. Even though we do not have access to such data, two things can be predicted with reasonable certainty.

    1. El Nino is very likely to become established within the next 6-18 months and may well be as strong as or stronger than the one experienced in 1997/98.

    2. We shall not need complex models and data to appreciate its effect on average global surface temperature, or the prognostications of so-called “skeptics” who have rashly declared a hiatus in global warming, or its demise.

    Nor does one need sophisticated models to tell us that, as long as we continue to pump increasing amounts of greenhouse gases into the atmosphere, temperatures will continue rising, first with dangerous, then with catastrophic consequences.

  • Climate Models Show Remarkable Agreement with Recent Surface Warming

    Klapper at 07:04 AM on 29 March, 2014

    "When you consider all of Earth's reservoirs of heat; the oceans, land, ice and atmosphere together, global warming hasn't slowed down at all"

    Then again, has it accelerated? The suggestion of this post is that warming has in fact accelerated if you include all the heat reservoirs, including ice melt and the deep ocean. I assume that means the predicted radiative imbalance has grown larger, which is what the models predict with rising GHGs.

    We have a number of metrics to verify this, not the least being ocean heat content. However, there are problems with the ocean heat metric, at least for the deep ocean, in that the data are extremely sparse prior to about 2005, when the ARGO network gained a robust density of floats.

    A better metric is sea level, since it includes both the thermosteric component and net ice melt, and it is relatively noise-free. There are problems with sea level too, of course: the satellite data only start in 1993, and the readily available tide gauge compilations end in 2009, so they are getting kind of stale.

    However, in neither of these sea level datasets do we see evidence of recent acceleration. If anything the reverse is true as evidenced by this paper at this link:

    http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2159.html#auth-1

    The linked paper explains the recent slowdown in sea level rise as related to changes in the hydrologic cycle, in turn related to ENSO. Regardless, corrected sea level shows no acceleration, so the claim that there has been a recent acceleration of warming is dubious.


  • Tropical Thermostats and Global Warming

    Tom Curtis at 11:52 AM on 18 March, 2014

    Following on from a discussion elsewhere, I would like to discuss Willis Eschenbach's hypothesis, in that he at least presents what at first glance looks like evidence for it.  The evidence is scattered through three posts at WUWT, and shows that SSTs above 30 C are uncommon.  Eschenbach argues that because those temperatures are uncommon, there is a "hard limit" on ocean temperatures, slightly above 30 C.

    Eschenbach's hypothesis faces an immediate hurdle in that his own data refutes it.  Here is his plot of "all" NH Argo surface temperatures (Fig 2, AOTM):

     

    The "all" is dubious in that there are far to few data points for "all" ARGO NH surface temperature records, and it is likely that Eschenbach has used a random sample of the data to make distributions clearer.  Regardless of that point, however, it is very clear from the graph that there is not a hard limit at 30-32 C.  Several temperatures are recorded above those values, and some very far above those values.  This is most clear in 2012 which shows a cluster of data points above 35 C.  Further, the period of peak temperature does not show a well defined limit.  Indeed, the upper limit on temperatures is less well defined in the warm months than in the cool months, the opposite of what we would expect if there were indeed a "hard limit".

    What we would expect with a genuine "hard limit" can be seen by comparing the NH warm temperatures with the lower range of the SH cool temperatures (Fig 2, Notes 2):

    You can clearly see a hard limit in low temperatures slightly below 0 C, representing the freezing point of sea water.  The key feature is that the lower limit of temperatures is far more sharply defined in the cool months than in the warm months.  That is in strong contrast to the upper temperature limit, which is more sharply defined in cool months than in warm months, a feature which by itself refutes Eschenbach's hypothesis.

    At this point I will make a short logical excursion.  As everyone knows, there is a "hard limit" on liquid water temperatures at 0 C, ie, the freezing point.  Despite that, the hard limit in sea water is obviously less than 0 C.  The reason is that increased salinity reduces the freezing point.  Therefore, the "hard limit" is only a hard limit under a certain set of conditions.  If you change those conditions, you also change the "hard limit".  It follows that even had Eschenbach been able to demonstrate a hard limit, he would not have demonstrated that Sea Surface Temperatures would not rise above that limit in the future, under different conditions.  Of course, that is a point purely of logical interest, in that Eschenbach has not demonstrated a "hard limit" to begin with.

    Returning to Eschenbach's evidence, he presents more evidence that he supposes supports a hard limit.  Specifically, he shows that the closer to the equator, the smaller the annual variation in temperatures (Fig 3, AOTM):

     

    He says of this graph,

    "As you can see, the warm parts of the yearly cycle have their high points cropped off flat, with the amount cropped increasing with increasing average temperatures."

    That, however, is not what you see at all.  Rather, at the warmest times of the year, the upper limit of temperatures is least well defined.  If anything, at that time you have a spike in temperatures.

    I suspect the misdescription is because Eschenbach refers to the Gaussians rather than the data.  He expects the Gaussians to show a series of sine waves, with those closer to the equator being warmer than those further away.  He thus interprets the actual series of successively smaller-amplitude sine waves, with the upper cycles nearly coinciding in value, as the tops of the cycles having been truncated.

    Unfortunately for his hypothesis, there is a well-known phenomenon in nature that shows a similar pattern to his Gaussians, ie, the daily TOA insolation relative to latitude:

    (Source)

    You will notice the near constancy of insolation at the equator, and also that insolation at 20 degrees North is higher in summer than it is at any time on the equator.  The reason is that, at 20 degrees North, when the sun is directly overhead, the days are longer than they are when the sun is directly overhead at the equator.  And with a longer day and the same peak forcing, we expect higher SST, which is what we see.  Curiously, Eschenbach draws attention to the fact that the peak temperatures are found not at the equator, but between 15 and 30 degrees North, in the summer.  But given the insolation data, that is just what we would predict.  So also, given the insolation data, we would predict that peak summer temperatures throughout the tropics and near tropics would match or exceed peak equatorial temperatures, and that the closer to the equator, the less variation in SST.
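
    This is easy to verify with the standard daily-mean insolation formula. The sketch below ignores orbital eccentricity and assumes a solar constant of about 1361 W/m2; it shows that, at the June solstice, 20 degrees North receives more energy per day at the top of the atmosphere than the equator does.

        # Daily-mean TOA insolation (W/m^2) vs latitude, ignoring orbital eccentricity.
        import numpy as np

        S0 = 1361.0  # solar constant, W/m^2

        def daily_insolation(lat_deg, declination_deg):
            phi, dec = np.radians(lat_deg), np.radians(declination_deg)
            cos_h0 = np.clip(-np.tan(phi) * np.tan(dec), -1.0, 1.0)  # sunset hour angle, clamped
            h0 = np.arccos(cos_h0)
            return (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(dec)
                                   + np.cos(phi) * np.cos(dec) * np.sin(h0))

        # June solstice (solar declination ~ +23.44 degrees):
        print(round(daily_insolation(0.0, 23.44)))    # equator: ~397 W/m^2
        print(round(daily_insolation(20.0, 23.44)))   # 20 N: ~471 W/m^2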

    Eschenbach also draws attention to the shape of the Gaussians (shown in Fig 6), noting in particular that "...summer high temperature comes to a point, while the winter low is rounded".  But again, he need search no further for an explanation than the insolation pattern:

    (Wikipedia)

    I mentioned in my introduction that Eschenbach presents data that "...at first glance looks like evidence for his hypothesis".  It should now be plain, however, that it is only at a first, and superficial, glance that this is true.  His most convincing evidence turns out to be a direct consequence of the patterns of insolation at or near the equator.  The more direct evidence is seen to contradict his claim of a hard limit, showing as it does a less defined limit to temperatures in the warmest months - the exact opposite of what is required by his hypothesis.  It is only by maintaining a superficial glance, and by not paying attention to the actual forcings, that his hypothesis appears to have any support at all.

  • Unprecedented trade wind strength is shifting global warming to the oceans, but for how much longer?

    HK at 06:02 AM on 14 February, 2014

    Klapper #61:

    Sea level rise from thermal expansion is not an accurate proxy for ocean heat uptake without a more careful analysis.
    Take a look at this graph in Wikipedia (the water’s density for each degree from 0°C to 100°C is listed further down the page). If you convert the density change to expansion you get this for some temperatures:

    4-5°C:    +0.001%

    6-7°C:    +0.004%

    10-11°C: +0.01%

    15-16°C: +0.016%

    20-21°C: +0.021%

    Heating water from 10 to 11°C makes it expand ten times more than if you heat it from 4 to 5°C! So the amount of thermal expansion doesn’t only depend on how much heat is being added, but where it is added, since warm water expands more than cold. These numbers are for fresh water. Salt water continues to contract down to its freezing point, but the principle is the same.
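
    The conversion from the density table to those expansion figures is straightforward. Here is a minimal sketch using approximate textbook densities for fresh water (rounded, so the output only roughly matches the list above):

        # Fractional expansion of fresh water per degree of warming, from tabulated densities (kg/m^3).
        density = {4: 999.97, 5: 999.96, 10: 999.70, 11: 999.61, 20: 998.21, 21: 997.99}

        def expansion_percent(t_low, t_high):
            return (density[t_low] / density[t_high] - 1.0) * 100.0

        for pair in [(4, 5), (10, 11), (20, 21)]:
            print(pair, round(expansion_percent(*pair), 3), "%")
        # Warming from 10 to 11 C expands the water roughly ten times more than warming from 4 to 5 C.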

    And where has the ocean heating taken place?
    If you study this and this and this graph thoroughly, you will find that a rising fraction of the heat is accumulated in the deeper and colder parts of the oceans.
    With a caveat for the quite uncertain pre-Argo data, I got this result for the fraction of the heating taking place deeper than 700 meters:

    1957-1994: 25%

    1994-2011: 38%

    2005-2013: 49%

    The tendency is quite clear: more of the heat accumulation happens in the deeper, colder parts of the oceans, where each unit of energy causes less thermal expansion than in the warmer, upper layers.
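
    The deep-ocean fractions above come from comparing the heat gained in the 0-700 m layer with the heat gained over 0-2000 m. A minimal sketch of that bookkeeping, using made-up OHC changes rather than the actual Levitus/Argo numbers:

        # Fraction of the 0-2000 m heat accumulation occurring below 700 m.
        # The OHC changes (in ZJ) are illustrative placeholders, not real estimates.
        delta_ohc_0_700 = 80.0      # heat gained in the 0-700 m layer over some period, ZJ
        delta_ohc_0_2000 = 130.0    # heat gained in the 0-2000 m layer over the same period, ZJ

        fraction_below_700 = 1.0 - delta_ohc_0_700 / delta_ohc_0_2000
        print(round(100 * fraction_below_700), "% of the heat gain is below 700 m")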

    And finally, this graph shows that the warming in the upper 100 meters has been almost zero for the last ten years. Each unit of energy added here would produce at least 10-20 times more expansion than in the deep and cold parts of the oceans!

    Conclusions:

    1. You can’t translate thermal expansion to heat accumulation without knowing where the heating takes place.

    2. Much of the SLR from increased melting of ice sheets in the 2000’s has been offset by decreased thermal expansion because more of the warming happens in colder water. That probably explains the apparent lack of acceleration seen here.


  • Unprecedented trade wind strength is shifting global warming to the oceans, but for how much longer?

    Klapper at 21:26 PM on 10 February, 2014

    This same effect could be part of the reason for the rapid warming post-1975. If you run a 30-year rolling linear regression trend on the Church & White tide gauge sea level dataset, you find the rate of sea level rise peaked in the window ending about 1965, at a rate actually above the trend ending at 2010 (the end of the dataset). Yet SAT was stagnant from 1945 to 1975.
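
    A 30-year rolling trend of this kind is simple to reproduce. Here is a minimal Python sketch; the sea level series below is synthetic placeholder data, not the actual Church & White dataset, so only the mechanics carry over.

        # 30-year rolling linear trend (mm/yr) on an annual sea level series.
        import numpy as np

        years = np.arange(1880, 2010)                  # placeholder time axis
        rng = np.random.default_rng(0)
        sea_level_mm = 1.7 * (years - 1880) + rng.normal(0, 5, years.size)   # fake data

        window = 30
        end_years, trends = [], []
        for i in range(years.size - window + 1):
            slope = np.polyfit(years[i:i + window], sea_level_mm[i:i + window], 1)[0]
            end_years.append(years[i + window - 1])    # year the 30-year window ends in
            trends.append(slope)

        print(end_years[-1], round(trends[-1], 2))     # rate for the window ending in 2009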

    You have very sparse data coverage for heat content down to 2000 m during this period (actually pretty much every period up to ARGO), maybe a handful of measurements per year in the South Pacific during the '60s, so you're left with sea level as your best guess at possible heat gain in the deep ocean.

    The implication of that possible heat gain in the -IPO between '45 and '65 is that its subsequent release in the '80s and '90s accounts for some part of the warming of that period, which would mean the warming of this period was certainly not all anthropogenic.

    I'm one of those skeptics who believes in an anthropogenic component to warming, but thinks the projections of future temperature rise have been exaggerated by the models. The information in this post points to a possible reason the models are "tuned" too hot: they have failed to account for the heat "burp" from the ocean in the '80s, and have based the warming of this period solely on radiative forcing and feedbacks to radiative forcing.

  • The Oceans Warmed up Sharply in 2013: We're Going to Need a Bigger Graph

    tcflood at 06:46 AM on 2 February, 2014

    I don’t understand how it can be said so authoritatively that the rate of ocean heating has been rapidly accelerating recently. If you look at the table of ocean heating rates at various depths as a function of time given in the posting directly below this post, it seems that from 0-700 m the rate of heating since 2004 has slowed compared to 1983-2004, and we don’t have any good data below 700m until the Argo data started flowing in (2005-2008?). 

    With so little data, where does the confidence of the heating acceleration claim come from?  

  • Three perfect grade debunkings of climate misinformation

    Poster at 13:53 PM on 25 January, 2014

    Tom Curtis  Apologies, but could you explain what appears (note I say appears, not is) to be a contradiction in your post @10 above.  With regard to a pause or hiatus or whatever in global temperatures you state: "Warren Hindmarsh @9, it is fairly easy to debunk the two fictions you mention by simply pointing out that they are fictions. This can be seen easily for the satellite data on the SkS trend calculator. Just set the platform to UAH, and the start year to 1997.0 and you will see the trend is 0.93 +/- 0.208 C/decade. The central estimate, therefore, is strongly positive, and while negative trends are not excluded, neither are trends 50% stronger than those predicted by the IPCC. Calling such a trend 'a halt in the warming' at best shows a complete lack of understanding of the meaning of error bars."

    I may well have misunderstood, but from what you write it seems you are advocating starting at 1997 to "properly debunk the fact that accurate satellite measurements have shown a halt in the warming over the last 17 years or so"?

    You then write: "Some people have misrepresented the ARGO data by only showing the 0-700 meter Ocean Heat Content, ie, by excluding 65% of the data. Excluding data like that because you do not like what it shows is fraudulent. Arguably, so also is the massive cherry-pick involved in selecting 1997 (a very strong El Nino year, with temperatures far above trend rates) as a start year for your comparison."

    This suggests that picking 1997 as a start year is somewhat fraudulent, which seems at odds with your earlier comment.  I expect I have misunderstood, and apologies in advance if I have, but would you clarify?

  • Three perfect grade debunkings of climate misinformation

    Tom Curtis at 23:30 PM on 23 January, 2014

    Warren Hindmarsh @9, it is fairly easy to debunk the two fictions you mention by simply pointing out that they are fictions.  This can be seen easily for the satellite data on the SkS trend calculator.  Just set the platform to UAH, and the start year to 1997.0, and you will see the trend is 0.93 +/- 0.208 C/decade.  The central estimate, therefore, is strongly positive, and while negative trends are not excluded, neither are trends 50% stronger than those predicted by the IPCC.  Calling such a trend "a halt in the warming" at best shows a complete lack of understanding of the meaning of error bars.
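
    The meaning of those error bars can be illustrated with an ordinary least squares fit. The sketch below uses synthetic stand-in data rather than the actual UAH record, and it ignores autocorrelation (which the SkS trend calculator is designed to account for, widening the real uncertainty), so it illustrates the idea rather than reproducing the quoted numbers.

        # OLS trend and a naive ~95% uncertainty range for a monthly anomaly series.
        import numpy as np

        months = np.arange(1997.0, 2014.0, 1.0 / 12.0)   # decimal years
        rng = np.random.default_rng(1)
        anomalies = 0.01 * (months - 1997.0) + rng.normal(0, 0.15, months.size)   # fake data

        n = months.size
        x = months - months.mean()
        slope = (x * (anomalies - anomalies.mean())).sum() / (x ** 2).sum()
        residuals = anomalies - (anomalies.mean() + slope * x)
        se_slope = np.sqrt((residuals ** 2).sum() / (n - 2) / (x ** 2).sum())

        print(f"trend = {slope * 10:.3f} +/- {2 * se_slope * 10:.3f} C/decade")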

    Even worse is the misrepresentation of the ARGO measurements, which are shown here in red:

     

    Some people have misrepresented the ARGO data by only showing the 0-700 meter Ocean Heat Content, ie, by excluding 65% of the data.  Excluding data like that because you do not like what it shows is fraudulent.  Arguably, so also is the massive cherry-pick involved in selecting 1997 (a very strong El Nino year, with temperatures far above trend rates) as a start year for your comparison.

  • Climate Science History - interactive style

    Paul D at 20:20 PM on 12 November, 2013

    Hi Lei@14.

    The timeline covers over 200 years of history and the Vostok Core was extracted in the 1990s. As you can see in the project, space is constrained and when designing anything like this, information has to be edited in order to fit in with a design.

    I have had many discussions with Skeptical Science contributors about what should and shouldn't be included; the reality is we all had different projects in our heads and this one is, I guess, the result of me leading it! If someone else had led it, the result would have been different.

    I take your point though. We have the Argo project in the timeline which is a project that provides data, so why not mention Vostok?

    It's probably one for the to do list!

    I'll point out that the project data is not 'static'. The data can be added to, and in theory that is my intention. For example, we did consider including the IPCC AR5 report, but I thought that because it is a 'history' project, AR5 isn't really history yet. It's too soon to include it, and it can be added later.

    The software is designed to automatically configure itself to new data, so that in the future it can be extended to as many years as is needed.

     

  • Does the global warming 'pause' mean what you think it means?

    grindupBaker at 17:06 PM on 21 October, 2013

    @Dean #16 You might want to first look one step back at the foundation. GEOPHYSICAL RESEARCH LETTERS, VOL. 40, 10 May 2013, pp 1754–1759 is 6 pages of "Distinctive climate signals in reanalysis of global ocean heat content" by Magdalena A. Balmaseda, Kevin E. Trenberth and Erland Källén, with the graph & description of the ORAS4 reanalysis. Sure, they use models, which can therefore be argued, but my inference is that these are used to interpolate in time & space between ocean temperature data from the 7,000 Argo floats & huge numbers of XBTs (I seem to recall a climate scientist saying 240,000 in a video), which are sparsely spread at depth and back in time decades ago. Where you and I would simply average between two distant measurements, I presume their fancy computers do a better job than a linear interpolation by simulating how the ocean moves. But basically it's underpinned by 7,000 Argo floats plus huge numbers of XBTs. With all the work they've done it just doesn't look like they could have messed it up so badly that the 137 ZettaJoules they graph being added to the oceans from 2000 to 2012 could be off by any amount that's a game-changer. I doubt very much that the climate sensitivity the IPCC uses is based only on what's been seen; I think the big increase since 2000 corresponds to the lower end of the IPCC feedbacks. I infer that the IPCC is using the models, and I infer that they show increasing feedbacks, so what we've seen so far hasn't even reached the lower end of the forcing-plus-feedbacks they expect.

  • Levitus et al. Find Global Warming Continues to Heat the Oceans

    scaddenp at 10:52 AM on 14 October, 2013

    Probably worth also having a scan of Von Schuckmann and Le Traon 2011, "How well can we derive Global Ocean Indicators from Argo data?"

  • Levitus et al. Find Global Warming Continues to Heat the Oceans

    Eric (skeptic) at 01:08 AM on 14 October, 2013

    KR and Dave123, thanks for the replies.  I had read a website a few months ago that explained the way Levitus et al assimilated the ARGO data into their climate model.  Basically the model would predict temperatures, they would compare those to the real temperature measurements, then adjust the model for a better fit in an iterative process.  The key point is that the model provided the energy change results, not the measurements directly.  When I find that website again, I will read through it and post what I find here.

    I read through the appendix of Levitus et al 2012 and saw a process for estimating the error of a data smoothing algorithm.  I'm not sure how I can evaluate that yet.

    The crux of the issue is determining how much heat has percolated into the deeper ocean.  KR, do you mean that the "mass effect" allows the integration of many estimates of heat transfer to determine an aggregate heat transfer over large regions?  That's true as long as the factors and effects are somewhat uniform. I'm not sure that is true in cases where both heat transfer and heat accumulation vary over relatively small regions.  Nonuniformity seems to be the case looking at figures S5 (which  is averaged longitudinally) and S6 (which is not).

    And Dave123, a linear regression through data points will only work if there is a somewhat linear rise (or fall) in temperature.  That's true for some basins to some extent and true for the world, but is only true sometimes for individual floats.  A higher variance in any particular float means a higher standard deviation and uncertainty.

    But as I said above, I owe you a better answer once I find the web site (it was not a paper, but a NOAA website describing the assimilation process). 

  • Levitus et al. Find Global Warming Continues to Heat the Oceans

    Dave123 at 19:40 PM on 13 October, 2013

    Eric - So what you've said is that after I pointed out some gaps, you went back and tried to fix it.  That's good.  If you truly aspire to be a compelling sceptic, you need to do this first, not as an afterthought.  And I'd suggest you should assume that the people writing those papers have done so.  Peer reviewers tend to be a little reactive when they see a paper that hasn't properly acknowledged prior work in a field.  It's one of the easiest things to 'ding' a paper for.

    Let me add one other point about accuracy that you seem to have missed and KR didn't discuss.  We are interested in the change of temperature in the deep oceans.  Thus the standard procedure of computing the temperature anomaly for each device and working range applies.  A given Argo float measures a hypothetical temperature of 8.023 C at a depth of 1500 meters.  The measurement is repeated on a regular basis over 5 years.  You can run a regression line through the temperatures, which does much the same thing as taking the average of all the temperatures and subtracting that average from each data point.  Now you have the anomalies, and those are independent of whether the "true" temperature was 8.021 or 8.025.  If a given thermocouple reads a little high, it always reads a little high.  For a narrow (or not so narrow) range of temperatures you can treat that error as a constant offset.  Looking at the anomaly also allows us to pool data across multiple floats and thus create a statistic that represents the total ocean heat content change.

    Add this to the law of large numbers and you get a statistically sound measurement of an increase in heat going into the lower depths. 
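
    The point about a constant calibration offset dropping out of the anomalies is easy to demonstrate. A minimal sketch with a hypothetical float record:

        # A constant calibration offset has no effect on the anomalies (or on their trend).
        import numpy as np

        true_temps = 8.0 + 0.002 * np.arange(60)     # hypothetical slowly warming record, deg C
        offset = 0.003                               # a sensor that always reads a little high
        measured = true_temps + offset

        true_anomalies = true_temps - true_temps.mean()
        measured_anomalies = measured - measured.mean()

        print(np.allclose(true_anomalies, measured_anomalies))   # True: the offset cancels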

     

     

  • Levitus et al. Find Global Warming Continues to Heat the Oceans

    Eric (skeptic) at 12:14 PM on 13 October, 2013

    In this thread I posted a link to Schmitt et al 2005 and stated that "The bottom line is we don't really know how much 'missing' atmospheric heat has wound up in the oceans."  Dave123 replied that "the paper makes the case that mixing was faster than anticipated" and quoted from the paper here.  He suggested checking cites, so I did, although not all 71 of them.  Searching within the cites for GCM came up dry, but searching for ARGO brought up this paper: website: scientiamarina.revistas.csic.es path: /index.php/scientiamarina/article/viewFile/1384/1488 which suggests that the Schmitt results were localized and were not present in the rest of their Atlantic Ocean cross-section.  I didn't pursue the cites further.

    But more apropos are the Levitus papers themselves, such as this one: World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010.  In order to fill in missing data for the 700-2000 m depth range they have to model how much heat flows down from the 0-700 m layer, which has good coverage.  The relatively small temperature changes that SASM asks about in post 28 are answered by Tom two posts later as being 0.1C over the period.  However, the annual energy change is roughly 10^22 Joules, so the temperature change for 6 x 10^23 cubic cm of seawater with 4 J/g/C is about 0.002C per year.
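
    The conversion behind that last sentence is just dT = dQ / (m c). A rough sketch with round numbers; note the answer depends on which volume you plug in, and the whole-ocean volume of roughly 1.3 x 10^24 cm^3 is my added figure, not one from the comment.

        # Temperature change per year for a given volume of seawater: dT = dQ / (mass * specific heat).
        energy_J_per_yr = 1.0e22          # ~10^22 J added per year (from the comment)
        specific_heat = 4.0               # J per gram per deg C (rounded)
        density_g_per_cm3 = 1.0           # ~1 g/cm^3 for seawater (rounded)

        for label, volume_cm3 in [("comment's 6e23 cm^3", 6.0e23), ("whole ocean, ~1.3e24 cm^3", 1.3e24)]:
            mass_g = volume_cm3 * density_g_per_cm3
            print(label, round(energy_J_per_yr / (mass_g * specific_heat), 4), "C/yr")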

    The answer to KR's reply a few posts later that the large number of sample points reduces the error: there are only 4 data points in each one degree grid square in the model in Levitus.  The mixing shown in the Schmitt paper and the other paper linked above requires simulation at the 1 degree cell resolution to simulate the mixing processes.

    The latter paper also notes that the accuracy of the temperature measurement was 0.002C, so measuring a 0.002C change is problematic.  Also, the ARGO network has about 3 degree spacing according to their website, which makes it basically impossible to simulate the mixing, so it must be parameterized.

  • Why climate change contrarians owe us a (scientific) explanation

    Dave123 at 09:42 AM on 13 October, 2013

    Eric, your citation of Schmitt (2005) puzzles me.  First, the paper makes the case that mixing was faster than anticipated.  

    The significant vertical dispersion of tracer observed in this thermohaline staircase supports the idea that salt fingers significantly enhance mixing in certain parts of the main thermocline. Our derived salt diffusivity of 0.8-0.9 x 10^-4 m2/s is an order of magnitude larger than that predicted for typical internal wave breaking within the mid-latitude thermocline. Indeed, for this low-latitude region, parameterization of mixing supported by the background internal wave field (32, 33) indicates that a diffusivity of only ~0.02 x 10^-4 m2/s should be expected. The tracer-derived diffusivity is also larger than that implied by microstructure measurements previously made in this staircase (16, 25, 34). However, it is in agreement with the salt finger model applied to those dissipation data (35), as well as to our new observations. Notably, the diapycnal tracer mixing rate observed in the western tropical Atlantic is 5 times that observed in the eastern subtropical Atlantic during NATRE, because of the presence of the thermohaline staircase. The staircase appears to transform the T-S structure of the thermocline waters entering the Caribbean, increasing the salinity and density of Antarctic Intermediate Water (36) and preconditioning it for sinking at higher latitudes. The efficient vertical transport within this strong tropical thermocline must be taken into account in oceanic and climate models, where the parameterization of diapycnal mixing continues to be a major uncertainty in assessing the ocean's ability to sequester heat, pollutants, and carbon dioxide.

    In other words, I don't think the paper can be read the way you're claiming.  Beyond that, this paper is 8 years old.  Did you do any work to check citations?  If this were a class paper, I'd certainly demand evidence that you'd followed the scholarly trail and were presenting an opinion grounded on more than one paper.  What work has been done since?

    Then there's this question.  There's instrumental data from the Argo floats.  You've made no mention of this, only models.  Why not?  If you don't like the data, wouldn't your exposition be sounder if you brought this up and disposed of it?

    If you want to create a synthesis, you need to do the work.  Especially here, where the aggregate of people do follow the literature, check citations, and routinely attempt sound scholarly syntheses.  Unless of course, you think picking a few papers is "good enough for a 'C'", in which case you'll not convince anyone here.

  • Dueling Scientists in The Oregonian, Settled by Nuccitelli et al. (2012)

    Philippe Chantreau at 01:52 AM on 3 October, 2013

    For starters, I believe that the Trenberth quote is inaccurate and I would ask for the original source. As I recall, the "travesty" applied to missing energy in the overall budget, which is an area of expertise of Trenberth. I'm sure that Trenberth elaborated on that and that there is context.

    If you look at the ARGO website, they state very clearly that the period of observation for ARGO data is still too short to calculate a trend. "The data is dominated by interannual variability" per ARGO website. There is no way to calculate an OHC trend except by using data before the deployment of ARGO, so your interlocutor is disingenuous.

    I am also pretty sure that claiming that Levitus used "a model" is a wild misrepresentation. Levitus, Antonov and their collaborators have been studying this for years and I doubt that anyone knows the observational data better than them. Perhaps your interlocutor is of the opinion that correcting for errors as Levitus and Antonov did, notably by using Wijffels et al, 2008, is "using a model."

    The truth is that Levitus is the most knowledgeable in the matter and his papers have hundreds of cites, some over a thousand. I don't have the time to dig deeper, but I believe that, if you do the digging, you can refute each and every one of your interlocutor's claims. The most obvious is that Argo does not show a cooling trend, because the time series are too short to show any trend.

    As for NOAA, their site is not available at the moment due to the government shutdown, so digging through their references is not possible.

    D&K has been looked at here and elsewhere and their wild claims of "step changes" are a little too much like magical thinking.

    To make a long story short: yes, your interlocutor is misrepresenting the science, while placing a big burden on you to show that he is. Anyone who is not scientifically literate following the discussion will get the impression that some science says one thing, some says another, and they'll go where their emotions/ideological preferences take them anyway. Typical modus operandi of the obfuscators these days.

  • Dueling Scientists in The Oregonian, Settled by Nuccitelli et al. (2012)

    Drewd006 at 18:54 PM on 2 October, 2013

    I have come across this blogger who is claiming: "The oceans are cooling just like the air is, as proven by the measurements of the 3,000 Argo buoys; the oceans are cooling at all measured levels, and have been since the buoys were launched"

    I cited: Levitus et al. 2012, Lyman et al. 2010, Von Schuckmann et al. 2009, Trenberth 2010, Purkey & Johnson 2010, and Trenberth & Fasullo 2010.

    And he responded, saying:

    "NOAA have just used Levitus's paper, we can forget them as they simply estimated the OHC using a model; there were no measurements (only ARGO after 2003).

    Lyman et al's paper has been debunked by R. S. Knox and D. H. Douglass and by NODC OHC data.

    Trenberth 2010; HAHA! This is the guy who said; "“The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”

    HE is debunked by FOUR other papers; Willis, and Loehle, and Pielke, and Von Schuckmann." 

    Is there any validity to this man's claims? If he is misrepresenting the science, I would love to know.

    Thank You

  • Why is the IPCC AR5 so much more confident in human-caused global warming?

    CBDunkerson at 07:28 AM on 28 September, 2013

    Zen, the Argo network is now providing much better data for ocean warming down to about 2000 meters. The average depth of the world's oceans is about 4267 meters... so we are currently measuring heat accumulation for less than half of the water in the oceans. This is the 'deep ocean' that Stocker was referring to.

  • How much will sea levels rise in the 21st Century?

    jja at 15:19 PM on 29 August, 2013

    Tom Curtis @ 49

    I am not manufacturing a worst-case scenario out of whole cloth.  The AR5 is fundamentally flawed due to unmodeled parameters (i.e. today's revelation that expected ocean acidification will lead to an additional 0.4C warming by 2100 due to DMS reductions, and ice albedo positive feedbacks happening 6 decades before the current worst-case scenarios).  Globally induced warming due to summer Arctic sea ice loss will produce non-linear behaviour in warming, resulting in a step-change in warming prior to 2050.  Hence the TCR is estimated to be higher for 2013-2050 than for 2050-2100.  The ECS taken as a whole is higher for 2XCO2 than from 2XCO2 to 4XCO2.  The carbon feedbacks listed in @16, coupled with the lack of CCS implementation and currently projected global CO2 emissions, produce a carbon burden of 1200 ppmv by 2100.  The methane burden will also increase significantly due to permafrost release and stratospheric OH depletion increasing CH4 persistence. 

    These are all real and occurring, and not modeled under RCP 8.5.

    The 2110 ZJ figure that you determined is conservative, due to the non-frontloading of Arctic albedo as well as the fact that you start from a lower figure than the one determined by Trenberth using the more accurate ARGO data (a TOA imbalance of 0.75 W/m^2).  The primary driver of this change in the rate of energy accumulation is the albedo change produced by the Arctic sea ice melt. 

    RCP 8.5 is the A1FI scenario as far as I can read the emissions profiles, not A2.

    The RCP temperature response profile models a relatively linear change in temperatures.  The Arctic albedo and associated carbon feedbacks, beginning in 2020 and peaking in 2040, will produce a decidedly non-linear response.  The front-loading of deposited energy and temperature increase will greatly increase the rate of OHC deposition in the early years: basically a doubling of current RCP 8.5 projections between now and 2050 and a quadrupling between 2050 and 2100 relative to current scenarios.

    The temperature rise from the LGM (post-Dryas) to the Holocene Climatic Optimum was approximately 4C and occurred over about 2,500 years.  We are talking about a 4C rise in 170 years (post-1880).  The reason it is different now is that we are at the tail end of a roughly 10,000-year interglacial.  If you want to look at historical analogies, the closest I can find is the ice shelf collapse that occurred at the end of the Eemian interglacial 125,000 years ago.

    The reason the ice albedo feedback calculation is different now is because the incremental annual change between 1982 and 2012 (and I project from 2012 to 2030) will provide as much global albedo shift as over 1,000 years of warming during the transition from the LGM toward the Holocene climate optimum.

  • Fasullo (2013) Seeks Some Levelheadedness Regarding Sea Level Variability

    John Fasullo at 01:32 AM on 29 August, 2013

    Hi Chris,

    Thanks for the feedback. Yes, the answer regarding other such drops in the historical record is complicated by the fact that our observations prior to GRACE and ARGO are incomplete, and prior to altimetry, are exceptionally so. We know that no comparable events occurred in the altimetry record. The gauge record of sea level is quite noisy on interannual time scales and so it is of limited use for identifying other events. There are comparable dips to that in 2010-11, but their credibility is highly questionable as they occur often and likely result from noise, not signal. And so what data to use?

    If we were to argue that other events would require anomalous rainfall over Australia's interior basins comparable to that in 2010-11, then we could use the rainfall record to infer such sea level drops. In this case 1973-74 appears to be a comparable interval. Nonetheless, the assumption that the dips can only occur when Australian rainfall is high is questionable. Perhaps models will provide some perspective on this, and this is a possibility we are now exploring. Stay tuned!

    In terms of 'dominant contributor', yes, we did establish a % contribution. It varies somewhat depending on which GRACE product you use and at what time you evaluate the contribution. At the peak of the event, Australia made up about 50% of the storage anomaly, with South America and North America contributing about 30% and 20%, respectively. But the anomalies in the Americas were relatively short-lived, whereas Australia's persisted for over a year. So at these longer time scales, Australia's influence was not only dominant, but solitary and unique.

    Thanks for the informative/constructive feedback everyone! It is nice to read a blog where facts are the focus.

    John

  • A grand solar minimum would barely make a dent in human-caused global warming

    StealthAircraftSoftwareModeler at 07:53 AM on 23 August, 2013

    All: my apologies. I made a mistake and started too many conversations @12. I tried to respond to all of the issues I had with Dana’s article and I should have kept it to just one. Let me back up and do some homework on ocean heating. I may even go look at Argo data and play with it a little bit.

    I’d like to restart and focus just on Dana’s multiple comments stating “the amount of solar radiation reaching the Earth’s surface is very stable.” I do not think this is true at all. I think Wild 2009 and 2012 (referenced above) both claim the opposite, and Kiehl and Trenberth’s energy budget (http://curryja.files.wordpress.com/2012/11/stephens2.gif) has a huge 34 W/m^2 window (30 times the effect of CO2 forcing). This clearly indicates there is a lot of variance in surface energy, or it is hard to measure, or both. In any case, it is not stable.

  • A grand solar minimum would barely make a dent in human-caused global warming

    KR at 02:58 AM on 23 August, 2013

    Stealth - I would agree with Tom Curtis: claiming that the uncertainty of multiple measurements from almost 4000 ARGO floats is identical to the error of a single measurement is absurd, and anyone with a science or engineering background (which you claim) should be well aware of this aspect of signal averaging, right along with the Central Limit Theorem.
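
    For what it's worth, here is a minimal sketch (with made-up numbers, not real ARGO profiles) of the standard result that the uncertainty of an average shrinks roughly as 1/sqrt(N):

        import numpy as np

        rng = np.random.default_rng(0)
        true_anomaly = 0.05    # hypothetical temperature anomaly, deg C
        sensor_error = 0.1     # assumed 1-sigma error of a single profile, deg C
        n_floats = 4000        # roughly the size of the ARGO array

        samples = true_anomaly + sensor_error * rng.standard_normal(n_floats)
        print("single-measurement error:   ", sensor_error)
        print("standard error of the mean: ", samples.std(ddof=1) / np.sqrt(n_floats))
        # ~0.1 / sqrt(4000) is about 0.0016 deg C, far smaller than any one sensor's error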

    OHC is an anomaly measure, not an absolute measure, and hence the link you provided regarding absolute measures is wholly irrelevant. Use of anomalies also removes any potential systematic bias in those measurements.

    Your last few comments appear to be (IMO) increasingly disingenuous.

  • A grand solar minimum would barely make a dent in human-caused global warming

    StealthAircraftSoftwareModeler at 02:05 AM on 23 August, 2013

    KR @14: Minnett 2000 and the A High-Accuracy, Seagoing Infrared Spectroradiometer paper (http://journals.ametsoc.org/doi/pdf/10.1175/1520-0426%282001%29018%3C0994%3ATMAERI%3E2.0.CO%3B2) point out that their “high accuracy equipment” has stated accuracies of 0.1K. The entire OHC anomaly, when converted from Joules back to a temperature change in the ocean, is on the order of 0.09C (I assume you can do the math and conversion, but if not let me know and I’ll show my work). Clearly, OHC is within the error bars of the measurement system, unless Argo can measure more accurately (unlikely). It could be there or it might not be – it simply is too small to measure. Then, when combined with the number of measurements and the uncertainties induced by thermal eddies and ocean mixing, it seems unlikely to me that measurements can tease out the true OHC.
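
    For reference, a back-of-envelope sketch of that Joules-to-temperature conversion; the ocean area, depth, density and specific heat are round-number assumptions, and the heat content anomaly is an illustrative order-of-magnitude value rather than an official estimate:

        # Rough conversion of an upper-ocean heat content anomaly to a mean temperature change.
        ocean_area  = 3.6e14     # m^2, approximate global ocean surface area
        depth       = 2000.0     # m, ARGO profiling depth
        density     = 1025.0     # kg/m^3, typical seawater density
        c_p         = 3990.0     # J/(kg K), typical seawater specific heat
        ohc_anomaly = 2.0e23     # J, illustrative multi-decadal 0-2000 m anomaly

        mass = ocean_area * depth * density
        delta_T = ohc_anomaly / (mass * c_p)
        print(f"mean warming of the 0-2000 m layer: {delta_T:.3f} K")   # on the order of 0.07 K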

    Finally, I believe OHC from NOAA suffers from modeling artifacts like surface air temperature (SAT). Please see Dr. Schmidt’s discussion of SAT at NASA GISS: http://data.giss.nasa.gov/gistemp/abs_temp.html

  • A grand solar minimum would barely make a dent in human-caused global warming

    Composer99 at 01:51 AM on 23 August, 2013

    Stealth:

    I would have thought your misconception regarding TOA vs surface energy balance, and their respective significance with regard to global warming, had been previously addressed, so I confess I am surprised to see you re-state it here.

    I should also add that you appear to engage in cherry-picking. Stephens et al 2012 has to be considered in context with other papers of the same nature. As far as I have seen you have provided no analysis doing so. At the very least if you feel Stephens et al by itself outweighs any (or indeed every) other similar paper, it is down to you to show your working.

    Finally, with respect to your assertions about NOAA vs ARGO, unless you yourself provide evidence (*) to support your claim, no one has any obligation to accept it as correct.

    On that note, the opening paragraph of Levitus et al 2012 states:

    We provide updated estimates of the change of ocean heat content and the thermosteric component of sea level change of the 0–700 and 0–2000 m layers of the World Ocean for 1955–2010. Our estimates are based on historical data not previously available, additional modern data, and bathythermograph data corrected for instrumental biases. We have also used Argo data corrected by the Argo DAC if available and used uncorrected Argo data if no corrections were available at the time we downloaded the Argo data. [Emphasis mine.]

    So the data you deride uses the very same floats that Svensmark claims "have not registered any increase in temperature" and finds... that they have.

    ----------

    (*) Evidence other than an appeal to your qualifications, I might add.

  • A grand solar minimum would barely make a dent in human-caused global warming

    KR at 01:37 AM on 23 August, 2013

    Stealth - Svensmark is in fact flatly wrong in claiming the ARGO probes "...have not registered any temperature rise", and you are equally off-base in claiming "...Heat Content is some output from a software model, not direct measurements...".

    Temperatures measured by the ARGO floats and the XBTs before them are rising in the raw data, and the ocean heat content (OHC) is simply observed temperature change scaled by the thermal mass of the ocean layer in question - not some kind of complex model. OHC cannot be dismissed by appealing to model complexities. 
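
    A minimal sketch of that scaling, using illustrative layer temperature anomalies rather than real ARGO values:

        # Heat content change of the global ocean column: sum over layers of mass * specific heat * dT.
        ocean_area = 3.6e14            # m^2, approximate global ocean area
        density, c_p = 1025.0, 3990.0  # kg/m^3 and J/(kg K), typical seawater values

        # (layer thickness in m, temperature anomaly in K) -- made-up numbers for illustration
        layers = [(700.0, 0.10), (1300.0, 0.03)]   # 0-700 m and 700-2000 m

        ohc = sum(ocean_area * dz * density * c_p * dT for dz, dT in layers)
        print(f"heat content anomaly: {ohc:.2e} J")   # about 1.6e23 J for these made-up anomalies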

    ---

    OHC may be one of the best measures of the top-of-atmosphere imbalance available - averaged over long time periods, global, and representing (for the full depth of the oceans) ~93% of the energy changes. And it is consistent with satellite observations of TOA flux (Loeb et al 2012). Adding up components of the Earth energy budget (evaporation, thermals, clouds, albedo, etc.) accumulates the uncertainties of each estimate - but OHC is a direct measure of the TOA imbalance.

    In addition to Sphaerica's link on ocean warming/IR, I would also point out a RealClimate discussion of the same work.

  • Why doesn’t the temperature rise at the same rate that CO2 increases?

    CBDunkerson at 21:23 PM on 23 July, 2013

    DAK4Blizzard, Glenn directly covered most of your questions and the fact that ARGO buoys have a maximum depth of 2000 meters should explain why data below that point isn't included. There is no 'lack of interaction' in the deeper oceans, and indeed various studies of deep ocean temperatures have found evidence that significant additional warming is accumulating there. We just don't have widespread or continuous readings for those depths, and thus estimates of total additional heat accumulation in the deep ocean have a wide uncertainty range.

    Thus, the chart in the article above is 'conservative' in excluding the deep ocean heat content change... but only because the data on that isn't available at the same level of detail as the other items shown.

  • Why doesn’t the temperature rise at the same rate that CO2 increases?

    Glenn Tamblyn at 17:56 PM on 23 July, 2013

    DAK4Blizzard

    The difference between 700 m and 2000 m is historical: it reflects an earlier sensor technology and a later one. Prior to the 2000s, detailed measurement of heat content down to 700 meters was obtained using data from Expendable Bathythermographs (XBTs), whose maximum operating depth was 700 meters. Heat content below 700 m was estimated from their data and from other, more sporadic deep sampling techniques.

    In the early 2000s, deployment of the ARGO array of smart robotic diving buoys began. These now drift around the oceans, diving to operating depth, sampling the water, surfacing, and relaying their data back to satellites. Their maximum operating depth is 2000 meters.

  • Models are unreliable

    JasonB at 01:38 AM on 18 July, 2013

    Stealth, on another thread:

    KR @ 248: I get the impression that you do not understand physics; fudge factors are not an accusation of fraud in any way. The comment of “fudge factors” is from Dr. Freeman Dyson, a world renowned physicist -- I am certain he is qualified to speak to both physics and fudge factors.

    But he is not qualified to speak on models, as Tom Dayton showed, and your original use of the phrase "Dyson's claim that GCMs are full of fudge factors" didn't sound like an innocuous use of physics jargon.

    As an example, in the CO2 forcing equation, ΔF = 5.35*ln(C/C0) W/m2, the 5.35 value is a “fudge factor”, and so is the natural logarithm function. Over a much larger sample of data, for example, a logarithm with a base of pi may fit better than one with a base of e. [Emphasis mine.]

    Erm... You may want to rethink that.
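
    (For what it's worth, changing the base of the logarithm only rescales the coefficient, since log_pi(x) = ln(x)/ln(pi); a quick numerical check:)

        import math

        C0, C = 280.0, 560.0                      # ppm: pre-industrial and doubled CO2
        dF_ln = 5.35 * math.log(C / C0)           # the usual form, natural log
        # The same curve expressed in base pi: the base change is absorbed into the coefficient.
        dF_pi = (5.35 * math.log(math.pi)) * math.log(C / C0, math.pi)
        print(dF_ln, dF_pi)                       # both are about 3.71 W/m^2 for a doubling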

    The “laws of physics” are not something given to us from on high. They are simply a mathematical representation of what we think is happening, or a way to describe how something behaves.

    They embody our understanding of physical reality. If you are going to base your arguments on ignoring the laws of physics on the basis that they may not be correct, it's only fair to say so. It's important to be able to give "models" that are based on what we don't think is happening the weight they deserve.

    Your statement of “They are full of physics” as a way to assert truth or correctness of the models, is both meaningless and ignorant.

    What he means is that they embody our understanding of physical reality. That's hardly meaningless and ignorant.

    Some of the other folks here are a little combative or condescending.

    Sorry about that. I get a bit testy when someone expresses doubt that they can get hold of the source code of the models when they obviously haven't attempted to do so, then, when pointed to the correct location to download the source code, proclaims that they've wasted enough time trying and questions whether those who gave links have actually attempted it themselves, while finding time to twist statements that they found "interesting" to mean other than what was intended. As the steps I gave showed, it was hardly an onerous exercise.

    All the while not admitting that the accuracy of the coefficient 5.35 is actually far higher than suggested earlier.

    This is what I mean when I say “all models are wrong.” This doesn’t mean they are not useful

    You're hardly the first to make that observation.

    but it means they are limited in their effectiveness because of the underlying assumptions. How GCMs handle this effect is critical across every equation they use. My primary concern is that almost every equation in GCMs have various limitations or assumptions because they are calibrated to measurements made recently.

    Firstly, GCMs aren't parameter-fitting statistical models that are simply fitted to recent observations. Where parameterisation is performed, it is at a low level and it is based on observation of the particular physical process in question. The fact that, when combined, the model as a whole actually reproduces the observed record extremely well (and is unable to when anthropogenic influences are removed) is a testament to the model's accuracy.

    Secondly, no matter how complicated the model, and no matter how many "fudge factors" are involved, at the end of the day the model still has to satisfy some pretty basic physics. This is why a simple energy balance model can reproduce the temperature record remarkably well, even though it cannot tell you what the regional effects will be, or changes to ocean circulation, or Arctic sea ice. These limit its accuracy but there are still some pretty strong bounds on the possible behaviour of the more sophisticated models.
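
    For illustration, a minimal sketch of a zero-dimensional energy balance model of the kind referred to above; the heat capacity, feedback parameter and CO2 pathway here are illustrative assumptions, not values tuned to observations:

        import math

        # dT/dt = (F(t) - lam * T) / C_heat  -- zero-dimensional energy balance
        C_heat = 8.0    # W yr m^-2 K^-1, assumed effective heat capacity (ocean mixed layer)
        lam    = 1.2    # W m^-2 K^-1, assumed net feedback parameter (~3 K per CO2 doubling)
        dt     = 1.0    # yr, time step

        T = 0.0
        for year in range(1900, 2101):
            co2 = 280.0 * math.exp(0.005 * (year - 1900))   # toy exponential CO2 pathway, ppm
            F = 5.35 * math.log(co2 / 280.0)                # CO2 forcing, W/m^2
            T += dt * (F - lam * T) / C_heat
        print(f"warming by 2100 under this toy pathway: {T:.1f} K")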

    JasonB @258: I followed your instructions and still failed. Trying to extract the ModelE1 tarball using Windows and WinZIP fails.

    My instructions were to use 7-zip, not WinZIP. I'm surprised that someone with your experience developing software was stumped by a .tar.gz file.

    So now I have actual Fortran code for a 10 year old model.

    Well, what did you expect? That's the source code for one of the models that was used to create the forecasts for the still-current IPCC report. You want to check those forecasts, you look at the source code for those models. It's also an ancestor of one of the models used for the next IPCC report.

  • Patrick Michaels: Cato's Climate Expert Has History Of Getting It Wrong

    Ray at 03:11 AM on 15 July, 2013

    Several comments from Dr Pachauri, such as that there has been no warming for 17 years and that discussion of the science is essential, suggest that all the forecasts from the IPCC may not be set in stone. For example, the IPCC's sea level rise is 3.1 mm/year; data from NOAA for 2005-2012 give 1.1-1.3 mm/year, which is a bit less than half. The NOAA values come from satellite altimetry and ARGO. The latter values are actually about 0.3 mm/year, but a "correction factor" of 0.9 mm/year applied by NOAA increases the ARGO values. With error bars, the minimum and maximum values from NOAA are 0.2-2.2 mm/year, quite different from the IPCC. Is NOAA wrong? And if global temperature is increasing and ice melt is increasing, why is the rate of sea level rise not also increasing?
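
    (For context on how rates like these are obtained: they are usually least-squares linear trends fitted over the chosen window. A minimal sketch with synthetic numbers standing in for an altimetry series:)

        import numpy as np

        # Fit a linear trend (mm/yr) to a monthly sea level anomaly series.
        rng = np.random.default_rng(1)
        t = np.arange(2005, 2013, 1 / 12)           # decimal years, 2005 through 2012
        sea_level = 3.0 * (t - t[0]) + 5.0 * rng.standard_normal(t.size)   # synthetic: 3 mm/yr trend plus noise

        slope_mm_per_yr = np.polyfit(t, sea_level, 1)[0]
        print(f"fitted trend: {slope_mm_per_yr:.2f} mm/yr")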

  • A Looming Climate Shift: Will Ocean Heat Come Back to Haunt us?

    tcflood at 09:48 AM on 28 June, 2013

    Rob Painting: I have noticed that the skeptics are making a big deal (as usual) out of any uncertainty in an average 0.09 C temperature change measured by the Argo instruments, and out of the issue of how to integrate the relatively new and short-term data with the sparser older data. How robust are the assertions of heat increases at greater depth that are being made by Levitus 2012 and BTK13, etc.?

  • A Looming Climate Shift: Will Ocean Heat Come Back to Haunt us?

    dvunkannon at 11:51 AM on 26 June, 2013

    The way I am reading this, the effect of changes in the IPO is _not_ that heat buried in the deep ocean resurfaces, it is simply that less heat gets buried than otherwise.

    Does ARGO float data from the area of these gyres confirm that heat is descending near their centers? I would expect that floats near the gyre centers would show higher temperatures at depth than floats in other locations.

  • Is More Global Warming Hiding in the Oceans?

    Donthaveone at 15:04 PM on 25 June, 2013

    To scaddenp,

    I read the paper you provided. It does detail potential errors in the readings from the Challenger, and the authors appear to do their best to take these errors into account.

    They say all the errors add a warm bias to the measurements, and therefore the Challenger data are reduced in magnitude; obviously, the larger the reduction, the larger the trend over the 135 years becomes.

    So I suppose it comes down to how much confidence you have in the data, and according to the authors I would say that is not too much, when they say:

    Obviously, these local differences may represent any timescale in the 135-year interval, from a transient meander of the Gulf Stream in 1873 to a long-term change in the current's latitude. Similarly, regional to ocean-scale differences may be affected by interannual to decadal variability, including in the deep ocean, and hence our Challenger-to-Argo difference based on stations along the Challenger track must be viewed with caution.

    That said, I found it an interesting study, and according to the authors the results show warming on centennial time scales:

    The larger temperature change observed between the Challenger expedition and Argo Programme, both globally (0.33 C +/- 0.14, 0-700 m) and separately in the Atlantic (0.58 C +/- 0.12) and Pacific (0.22 C +/- 0.11), therefore seems to be associated with the longer timescale of a century or more. The implications of centennial-scale warming of the subsurface oceans extend beyond the climate system's energy imbalance.
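
    (As a rough arithmetic gauge of how much those error bars matter, one can compare each quoted difference to its stated uncertainty; a minimal sketch using only the figures quoted above:)

        # Quoted Challenger-to-Argo warming (deg C) and quoted uncertainties, 0-700 m.
        results = {"Global": (0.33, 0.14), "Atlantic": (0.58, 0.12), "Pacific": (0.22, 0.11)}
        for basin, (diff, err) in results.items():
            print(f"{basin}: difference is {diff / err:.1f} times its quoted uncertainty")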

    What the authors are saying is that the positive trend in OHC can be extended right back to the 1870s (the Challenger data).

    In summary, this paper uses data that cannot be considered accurate, but if we were to accept these results as they are, the trend shown in this data is similar to other studies, and it shows that the trend extends back well before man could have started to change the climate through CO2 emissions. This paper is not a new discovery; it adds to what is already known, which is that OHC, and therefore SLR, has been increasing at a steady rate for well over a century.

    I believe the headline "Is more global warming hiding in the ocean" to be an inaccurate description of what the paper discusses and declares.

    Thanks again for supplying the paper

    Cheers

  • Is More Global Warming Hiding in the Oceans?

    Donthaveone at 11:02 AM on 25 June, 2013

    I am sorry; maybe I misunderstood what the point of SkS was.

    dana1981 posted a newspaper story which, in a nutshell, claimed that OHC data taken some 120 years ago was compared to present-day Argo data, and from this comparison it was stated that OHC has risen by a certain amount and that this was due to AGW. The newspaper story gave no indication of how this comparison was achieved.

    I was of the opinion that such a comparison was unrealistic in terms of both the number of samples and the methodology, and stated as much in the hope of generating a discussion point. However, this did not occur; instead I was told my "tone" was not acceptable and that you cannot end a post with the word "cheers". I would be fascinated to know what the correct way of ending a post is, Dikran?

    Following on from this, a moderator made this statement:

    (-Moderation complaints snipped-)

    To Dikran,

    You stated in 9

    (-blockquote snipped-).

    (-Inflammatory snipped-). In regard to discussing the science: I have asked questions regarding the science around this issue; have you even attempted to respond to those questions?

    To scaddenp in 11,

    Thank you very much for the link. I have not read the paper as yet, but I will, and I will respond to you in due course.

    Regards

  • Is More Global Warming Hiding in the Oceans?

    Donthaveone at 15:49 PM on 24 June, 2013

    Do you think it is possible to take a very small sample of data from 135 years ago, manipulate it a bit, compare it to Argo data, and then draw a conclusion of any relevance?

    Just some questions:

    How was the equipment calibrated?

    What was the margin of error in the original data?

    Was the process of measuring the data the same for each and every measurement?

    How accurate was the hemp rope for measuring depth?

    Was the data reported correctly and consistently?

    Was the data rounded up or down or did they measure down to 3 decimal places?

    What were the currents at the time, could this have an effect on the results?

    Too many variables combined with a very small sample means this comparison is a futile exercise.

    Cheers

  • Global warming is here to stay, whichever way you look at it

    Tom Curtis at 07:47 AM on 31 May, 2013

    tcflood @16, details on the operation of Argo floats can be found here. As you will note, the Argo floats measure temperature from 0-2000 meters in depth. Other systems are used for temperature (and hence OHC) measurements below that depth. Some continue to be used above 2000 m, but they are nowhere near as numerous as the Argo floats.

    For actual OHC data, the NOAA National Oceanographic Data Center (NODC) is the best first stop.

  • Global warming is here to stay, whichever way you look at it

    tcflood at 07:36 AM on 31 May, 2013

    Let me mention that I have also read Balmaseda, Trenberth, and Källén in their accepted-for-publication article "Distinctive climate signals in reanalysis of global ocean heat content." They make a point of comparing the OHC record with and without Argo data, and they get slightly less warming without Argo. I can't tell if the Argo data are synonymous with "data below 700 m". Please help.
