How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
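The difference between a single event and a trend can be illustrated with a short sketch (synthetic, purely illustrative data - not a climate model):

```python
import random

random.seed(0)

# Synthetic annual temperature anomalies: a slow warming trend plus
# year-to-year noise, with one extreme (but rare) cold year added.
anomalies = [0.01 * y + random.gauss(0, 0.15) for y in range(120)]
anomalies[60] -= 1.5  # e.g. a single large volcanic eruption

def running_mean(series, window=30):
    """Trailing mean over up to `window` values - a crude 30-year climate trend."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = running_mean(anomalies)

# The raw series swings by roughly 1.5 degrees at the extreme year;
# the 30-year average barely moves.
raw_jump = abs(anomalies[60] - anomalies[59])
smooth_jump = abs(smoothed[60] - smoothed[59])
print(raw_jump, smooth_jump)
```

A single extreme year barely moves the 30-year average, which is why climate trends are defined over multi-decade windows rather than individual events.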

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings can adequately explain temperature variations prior to the last thirty years, but none of them can explain the rise over that period. CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Mainstream climate models have also accurately projected global surface temperature changes.  Climate contrarians have not.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

There's one chart often used to argue to the contrary, but it's got some serious problems, and ignores most of the data.

Christy Chart

Basic rebuttal written by GPWayne


Update July 2015:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Last updated on 31 December 2016 by pattimer.


Further reading

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Comments


Comments 851 to 900 out of 1028:

  1. I'm sorry about getting off topic..you're correct...however...i'm not the only one who is doing this for the responses. btw..you only win the argument if you convince the other person that they are wrong.

    anyhow...after doing some more research...i ran into the article...

    http://theresilientearth.com/?q=content/climate-models-%E2%80%9Cbasic-physics%E2%80%9D-falls-short-real-science

    this is another view that is along the lines of what i have been exactly saying. my experience in modeling agrees with everything that is said in the article. and scientist need to explain why what is being said in this article is not correct. it didn't even touch on the dreaded (actually a serious problem) divide by 0 which is all to common in computerization of real world models.

    the cmip5 models produce a wide range of results with a large error. the models are off what real world temperatures indicate. just looking cmip5 ar4 model graphs compare to real world temperature it appears that the models are more than two standard deviations off. I wanted to calculate this...but I was unable to find the necessary data to do so. if someone knows how to get this data or and some analysis of this i would very much like to review this.

    if it is more than 2 standard deviations off this means that the math is questionable or the theory is questionable.

    if the models produced a nice error i think 20% is acceptable..and the real world temperatures were following this within 2 standard deviations then this would significantly increase your confidence level. as things stand right now i do not have any confidence that the models results are accurate or properly represent the theory or math.

    Response:

    [Rob P] - Climate models show remarkable agreement with recent surface warming once natural variability and unanticipated changes in external forcings (such as increased volcanic sulfate aerosols) are taken into account. 

  2. I was unaware that reading blog posts by people who aren't climate scientists was considered research. Maybe you should read some actual papers.

  3. tristan

    here are the credentials of doug l hoffmann..blog originator

    http://theresilientearth.com/files/dlhoffman.html

    the man is highly qualified to evaluate computer models. you have to understand the the people who are writing and operating the programs for computer models are not climate scientists.

    the field requires experts in many areas and not one person can be expert in all of them.

    please present an argument addressing the issues cited

    Response:

    [JH] You are deluding yourself if you believe you know more about Global Climate Models (GCMs) than the commenters who are attempting to educate you on this thread. Please cease making disparaging remarks about their knowledge. If you do not, you will quickly relinquish your privilege of posting on this website. 

  4. I do not see any climate science credentials. Climate science modelling is a multidisciplinary topic, and as far as I can tell, Dr Hoffman has never published in the field of climate science. He only possesses one piece of the requisite expertise, and presents an opinion on a blog (and hence is not subject to the rigours of peer review and the scrutiny of others in the field).

    On the other hand, we have papers like Risbey et al. (2014) that discuss the very topic at hand.


    My question to you is this: Why do you eschew the published material on the topic and get your information from sources without the credibility that comes from working in a particular field?

  5. Rhoowl claimed "the cmip5 models produce a wide range of results with a large error. the models are off what real world temperatures indicate. just looking cmip5 ar4 model graphs compare to real world temperature it appears that the models are more than two standard deviations off."

    Multiple people have explained to you that your particular interpretation of the model results as "error" is incorrect.  It appears that you continue to refuse to read explanations of what the model results actually are, and what the models actually are.  To start with, you absolutely must read the Intermediate version of this "How Reliable Are Climate Models?" post.  When a bunch of people on SkS tell you that you don't understand something, it is your responsibility to make at least the effort to read all of the original post on which you are commenting; some posts have Basic, Intermediate, and Advanced tabbed panes.  If you don't trust what those blog posts say, I applaud you for your skepticism as long as you then read the peer reviewed original publications that those posts cite.

    One source of variability in GCMs' results is differences in the models' constructions, not just their parameters.  The CMIP5 ensemble of models is just that--multiple models, created by multiple people using different approaches.  Those differences in models' constructions are not weaknesses!  They are intentional--think of them as replications of experimental setups.  Robust "replication" does not mean just rerunning an experiment with exactly the same setup.  Instead it means running an experiment differently as long as, in principle, the results should be the same.  Using differences in experiment construction and running is a test of whether the original experiment's results really were due to the posited phenomena or were due to otherwise uninteresting quirks in the experiment.  Likewise, having differently constructed climate models safeguards against any one model's results being due to quirks in that particular model.

    A good place to start learning about verification & validation (V&V) of climate models is at Steve Easterbrook's blog Serendipity. Steve is a computer scientist and engineer who used to be the chief scientist at NASA's independent V&V center, now is a professor, and does climate research.  He has a good recent video of a TED talk (you should read the text surrounding that video on his blog), a short but good description of V&V, and a short description of massive and thorough comparisons of the outputs of 24 climate models. You would benefit from reading other posts of his that you can find by using his blog's Search field to look for "verification" or "validation."

    Also useful for you to read is Tamsin Edwards's series of four short blog posts the links to which are near the top of her post Possible Futures.

    Of course the bottom line is whether all those different models' results are the same.  But "the same" does not mean "exactly the same."  There is no absolute definition of "the same."  Not 1 in 10,000.  Not 2 standard deviations.  Not 20%.  This is true not just in climatology, but in every field.  All models are wrong, but some are useful.  For example, if you are trying to discover whether a drug helps an illness, and every experiment testing that shows it does not help, then it doesn't matter that some experiments show it makes no difference and some show it makes the illness worse.  For the purpose of those experiments, the unanimous, sensible conclusion is that you should not give that drug to anyone with that illness. 

    It is necessary to define "the same" climate model results as "similar enough to suit the purpose to which these models' results are being put."  But that's a topic for a future comment.  First please address, narrowly, what I've written here. 

  6. Rhoowl wrote,

    "you have to understand the the people who are writing and operating the programs for computer models are not climate scientists."

    Producing a useful model of anything is 99% based on understanding the domain you are modelling, and 1% putting some code together. The idea that a non-climatologist who knows about programming is particularly well-positioned to comment on the success or otherwise of a climate model is nonsense. The idea that climate science has a lack of intelligent people versed in both the necessary domain knowledge and the coding skill is also nonsense. Sure, you mustn't assume that the climate modellers are infallible, but your starting assumption should be that the people trying to educate you on this site know much more about this than you or some programmer.

    "btw..you only win the argument if you convince the other person that they are wrong."

    This is probably the silliest comment I have ever read on this site. For a start, you are wrong if you see this exchange as a contest people are trying to win. The people responding to you are trying to educate you, and if you refuse to be educated that is a reflection on you, not on the validity of their responses. I see no evidence that anyone has failed to understand your points (which have all been discussed before anyway), but I see plenty of evidence that you have not actually stopped to consider what you are being told.  Remaining stubbornly ignorant and then calling that result a win or a draw is simply foolish.

  7. @851

    The blog post by Dr Hoffmann is wrong in so many ways. Here are just two points from it.

    1. The illustration of rounding errors in computer programs would only be relevant if the errors had a systematic bias (i.e. they all rounded up or all rounded down). As Hoffmann's own output shows, they don't; the rounding errors are randomly signed and therefore tend to cancel each other out, both within an individual run and between runs.

    2. The discussion about modelling individual molecules in the atmosphere/planet is ludicrous; bulk matter has well defined properties that can be determined experimentally and used in a model without recourse to modelling individual molecules. We didn't know, or model, the individual atoms of the Apollo 11 space rocket, but that didn't affect our ability to predict its behaviour.
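Point 1 above can be demonstrated with a toy calculation (an illustrative sketch only - real model codes use far more careful numerical analysis):

```python
import random

random.seed(42)

N = 100_000
values = [random.uniform(0.0, 1.0) for _ in range(N)]

# Simulate limited precision by rounding every value to 6 decimal places.
rounded = [round(v, 6) for v in values]
errors = [r - v for r, v in zip(rounded, values)]

# Because the individual errors are randomly signed, the net signed
# error is tiny compared to the bound you'd get if they all had the
# same sign (a systematic bias).
net_error = abs(sum(errors))
worst_case = N * 0.5e-6
print(net_error, worst_case)
```

If the errors all had the same sign, the net error would instead grow linearly with the number of operations - the systematic bias the comment describes, and the only case in which rounding would matter at this scale.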

  8. And two more ...

    3. Hoffmann seems to think that global temperatures are inputs to GCMs. This is just factually wrong.


    4. He makes the usual "denier" mistake of equating the atmospheric temperature record with the "global" temperature record (i.e. he ignores 93% of the energy imbalance).

  9. robp dr Roy spencer has also reviewed this his conclusion don't agree with you graph. Tristan dr spencer is a climate scientist.

    http://www.drroyspencer.com/2013/06/epic-fail-73-climate-models-vs-observations-for-tropical-tropospheric-temperature/

    Td I have already agreed that the scenarios spread was not an model error....there are errors in the models. I have also read the intermediate blog. What you pointed to as verification was reviewing past Enso from only 18 climate models. What About the other climate models. This appears to be a weak verification. He only matced the trend an not absolute values. I reviewed Steve easterbrook material. Much of what he professes is that science twist need to aek to the public in general terms so it is more understandible. Much of what he said didn't address the issues I am presenting.

    leto I never claimed that the modelers do not have skill or infallibilty

    Quite the opposite actually. The losing argument was started by someone else previously. Perhaps this was out of line.

    Phil computers do have rounding errors, iteration problems with real numbers. Input has fudge factors. Is use fudge factors all the time when modeling. I enter objects that can't possibly exist just to make the program work...

    after further reading  about about the water co2 interaction it became clear that a grid resolution of 100km x 100km is too coarse to property model the cloud co2 interaction. The material is too anisotropic for that resolution. Zhou zhang bao and liu wrote a paper suggesting that grid resolution Be 1x1 mm. To properly model turbulence.. In the atmosphere. Obviously this would be an impossible task

    Response:

    [JH] Either English is not your first language, or you do not take the time to proof read what you have keyed in prior to hitting the "Submit" button. Either way, parts of your comment are nonsensical. In addition, some of your statements insult the intelligence of other commenters. If you keep going down this path, your future postings may be summarily deleted.

  10. Rhoowl:  Spencer followed up the claim that you linked with another claim, this time about "90 models", but it is likewise severely flawed. Hotwhopper clearly explained Spencer's biggest...um, "mistake"...of playing fast and loose with baselines. There is also the issue of Spencer falsely giving the impression that the RSS and UAH satellite trends for the tropics are consistent, when in fact the UAH trend for the tropics is three times lower than the RSS trend, and recently RSS has been shown to be correct in the tropics and UAH wrong.
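The baseline problem can be seen with made-up numbers: aligning two series on a single anomalous year shifts one series bodily and manufactures an apparent divergence (the values below are illustrative, not Spencer's data):

```python
# Two series with identical trends; the observations happen to have one
# anomalously cold starting year (e.g. an El Nino masking a volcano).
model = [0.02 * t for t in range(30)]
obs = [0.02 * t for t in range(30)]
obs[0] -= 0.3  # a one-year quirk in the observations

# Baseline on that single year: the whole obs series is shifted by 0.3,
# creating a spurious 0.3-degree offset between the two series.
obs_single = [o - obs[0] for o in obs]
gap_single = model[-1] - obs_single[-1]

# Baseline on a multi-year mean: the one-year quirk is diluted 10-fold.
obs_base = sum(obs[:10]) / 10
model_base = sum(model[:10]) / 10
gap_mean = (model[-1] - model_base) - (obs[-1] - obs_base)

print(gap_single, gap_mean)
```

The underlying trends are identical in both series; only the choice of baseline changes the apparent disagreement.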

  11. Rhoowl:  Your reply to my comment was not on the topic I had explained--differences between models.  Since you either will not or cannot focus on a topic long enough to have an actual conversation, I'm giving up on you.

  12. Jh I wish to reply to your comment and since it is off topic and is more personal in nature we should do this privately. My email is rhoowl at yahoo

    Response:

    [JH] Your request has been duly noted.

  13. Rhoowl:

    Phil computers do have rounding errors, iteration problems with real numbers.

    Indeed they do, which is why careful climate model programmers analyse their programs to ensure such errors are restricted, and the models (or their component parts) are tested to ensure they do not exert an undue influence. If they did, it would be obvious - a modelling program whose results were unduly influenced by rounding errors would give widely different results with very small changes to the input.

    Input has fudge factors. I use fudge factors all the time when modeling. I enter objects that can't possibly exist just to make the program work...


    Whatever you may do, it does not follow that climate modellers do it too.

  14. Tom Dayton @860, that Hotwhopper article is pretty damning of Roy Spencer's choices.  What it does not mention is that 1983 was massively affected by the El Chichon volcano (which shows up in the models), but that the effect in observed temperatures was cancelled, or more than cancelled, by the 1983 El Nino in the observational record, which by some measures was stronger than the 1998 El Nino:

     

     

    As ENSO fluctuations are random in time in the models, they do not coincide with observed fluctuations.  The consequence is that while the volcanic signal was obscured in the observations, it was not in the models and the discrepancy between models and observations in 1983 was not coincidence.  Nor was the greater relative temperature in UAH relative to HadCRUT4, as satellite temperature indexes respond more strongly to ENSO.

    Spencer knows these facts.  Therefore, his arbitrary choice of 1983 as the baseline year must count as deliberate deception.  He is knowingly lying with the data.

  15. Klapper - Levitus et al seems to think there's sufficient data for estimating OHC, as does NOAA. But if you disagree, then you really don't have sufficient data to argue about model fidelity. 

  16. Klapper, at the moment, your dismissal of pre-Argo data seems to be an argument from personal incredulity. If you believe the Leviticus estimates of error margins on OHC to be incorrect, then can you please show us where you think the fault in their working is?

  17. From Klapper: "I've looked at the quarterly/annual sampling maps for pre-Argo at various depths..."

    Well, there are good reasons for NOAA to display 0-2000 data as pentadal (5-year) averages:

    0-2000 Global Ocean Heat Content - NOAA

    [Source - NOAA, slide 2]

    What Klapper appears to be expressing with his short term trends and dismissal of earlier OHC data is a combination of poor statistics and impossible expectations about 'perfect' data. 

  18. @KR #867:

    "...a combination of poor statistics and impossible expectations about 'perfect' data..."

    I don't want "perfect data", I want the best data. I think all posters would agree that thanks to Aqua/Terra/GRACE/ARGO etc. we have much better data available in the 21st century than previously.

  19. Klapper @868...  Absolutely. And the data we have a decade from now will be better than the data we have today. Today's data certainly doesn't invalidate past data nor would better systems in the future mean current data is bad. The data we have is just what it is at any given point in time. It's always going to be imperfect. Data is imperfect. Models are imperfect. 

    But again, this is why models are used to constrain those uncertainties. That's "Trenberth's tragedy." Our current systems can't fully account for all the heat in the climate system. That doesn't mean it's not there. That just means that our systems are inadequate.

    What is abundantly clear, though, is that adding 4 W/m^2 to the climate system is going to warm the planet in a significant and potentially calamitous way.

  20. scaddenp: "If you believe the Leviticus estimates of error margins on OHC to be incorrect"

    Hmm, I'm not a biblical scholar, but I don't recall seeing any estimates of error margins in that book...

  21. Klapper - "...I want the best data"

    As do we all. And the best data for the last half of the 20th century, while subject to higher uncertainties than current measurements, is worth attention.

    Again, differences in the 5-10 (and, grudgingly on your part, perhaps 15) year periods you are looking at are short enough to be entirely unforced variation - with recent work on 21st century volcanic activity (not included in the CMIP5 forcings) that has direct implications for the TOA balance also worth considering. You've limited yourself to such a small dataset that frankly I cannot take any of your arguments seriously.

  22. Rob Honeycutt @868:

    "And the data we have a decade from now will be better than the data we have today."

    With conservative governments in Australia and Canada, and a conservative congress in the US, being so sure that the science is against them that they are doing all they can to cut science budgets (particularly for research on global warming), I would not be sure of this.

  23. @scaddenp #866:

    "...But if you disagree, then you really don't have sufficient data to argue about model fidelity.."

    I do disagree. Go to the NODC website. You can find an ocean heat data distribution mapping tool you can customize by period. For example, display 1500 metres depth for the period 1968 to 1972. Count the dots in a polygon formed by New Zealand, Ecuador, the Solomon Islands and the Antarctic Peninsula. It's not hard to do. Keep in mind each black dot represents one sample, e.g. May 15, 1969.

    You have maybe 4 or 5 single samples in this 5 year period between the equator and the Antarctic coast and 90 degrees and 150 degrees longitude west. This represents a huge area with essentially no data in a recent 5 year period.

    For a shocking contrast, now retrieve the same depth for 1 year (2014) and try and estimate how many samples were retrieved.

  24. ..never fear, Instagram is here![starts worry mode].... 

  25. @KR #871:

    "...You've limited yourself to such a small dataset that frankly I cannot take any of your arguments seriously.."

    Although I'm skeptical of the data quality before this century for the deep ocean, I downloaded the pentadal OHC data and ran a 5 year running trend to convert ZJ to W/m^2 heat input on a global area basis. The results are as follows (TOA CMIP5 ensemble forcing vs NODC Pentadal heat content change, both 5 year periods):

    1959 to 2000 - 0.23 W/m^2 from OHC, 0.49 W/m^2 from model forcing

    2000 to 2010 - 0.51 W/m^2 from OHC, 0.95 W/m^2 from model forcing

    While delta OHC is not global heat content change, it is the great majority of it. Two conclusions seem appropriate:

    1. The better the observational data quality, the bigger the discrepancy between model hind/forecasts and,

    2. The models run hot.

    I can post the graph here if someone lets me know where I might post to the internet so I have a URL link.
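As a cross-check on this kind of unit conversion: a heat-uptake rate in ZJ per year converts to a global-mean flux in W/m^2 using Earth's surface area and the length of a year (the 8 ZJ/yr figure below is a round illustrative number, not Klapper's data):

```python
SECONDS_PER_YEAR = 3.156e7   # ~365.25 days
EARTH_AREA_M2 = 5.10e14      # total surface area of Earth
ZJ_IN_JOULES = 1e21          # one zettajoule

def zj_per_year_to_wm2(rate_zj_yr):
    """Convert a heat-uptake rate in ZJ/yr to a global-mean flux in W/m^2."""
    return rate_zj_yr * ZJ_IN_JOULES / (SECONDS_PER_YEAR * EARTH_AREA_M2)

# Roughly 8 ZJ/yr of ocean heat uptake corresponds to about 0.5 W/m^2
# averaged over the whole planet.
print(zj_per_year_to_wm2(8.0))
```

The conversion factor works out to roughly 0.062 W/m^2 per ZJ/yr, which makes it easy to sanity-check figures quoted in either unit.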

  26. @Tom Curtis #872:

    "Political, off-topic or ad hominem comments will be deleted" (Comments policy)

  27. Klapper, is that really how you want to defend your persistent use of inappropriate comparisons?  I take that as an admission that it is indefensible (which I guess I knew anyway).

  28. @Tom Curtis #877:

    We should stay on topic and deal with the numbers. Do you have a suggestion of a linkable place I can post my graph?

  29. @Tom Curtis #345:

    In case you missed my last post directed at you on the other thread, I'd like you to expand on your reasoning for adjusting net CMIP5 TOA energy input forecasting based on "model drift".

  30. Klapper @879.

    There is plenty of advice on where you can up-load images that is easily found on-line, for instance here. Many require nothing more than an e-mail, a user name & a password. For instance (and I mention it as an exemplar rather than a recommendation) this website allowed me to upload an image in less than a minute. I would have displayed the resulting image in-thread but the image to hand that I up-loaded is political in nature.  

  31. Mal Adapted @870.
    You write "I'm not a biblical scholar, but I don't recall seeing any estimates of error margins in that book..."
    My understanding of Leviticus is that it is entirely about defining error and what happens when any such error occurs. Within an approach to error such as laid out in Leviticus, the concept of there being 'error margins' disappears within a binary reality: either there is error or there is not error :-)

  32. The uncertainties in Earth's total heat content data, 93% of which is ocean heat content, are shown in the image from the IPCC AR5 below:

     

  33. And the climate model vs observation (black solid line) of ocean heat content from the IPCC AR5 is shown here:

     

  34. @Rob Painting #882:

    Graph as discussed. TOA Energy imbalance net from CMIP5 model ensemble variables as discussed above.

    CMIP5 TOA variables and net imbalance

     

    CMIP5 TOA Energy Imbalance vs OHC

  35. @Klapper #884:

    I see my dropbox links do not work, which I suspected when I could not see the images in the preview. Back to the drawing board.

    Response:

    [RH] You might try http://tinypic.com

  36. @Rob Painting #882:

    Here is a graph I created by extracting from KNMI explorer the Watts/m^2 down and up (SW and LW) and calculating the net energy imbalance from these absolute variables (dashed black line; the variables are rlut, rsut, and rsdt). The "Global" (dark blue line) net from observations is really a fudge by assuming that OHC (which is derived from the Pentadal 0-2000) is 80% of the global (as per Tom Curtis' comment). Obviously that's not true over time but it suffices as a cross-check on how far from the observations the models might be deviating using this 80% factor to calculate a facsimile for global energy imbalance. The light blue line is the 0-2000 OHC from NODC pentadal, delta ZJ over 5 year running linear trend with ZJ/year converted to W/m^2 global basis.

    CMIP5 TOA Energy Imbalance vs OHC

    There is a difference between this and the Smith et al figure posted on the other thread by Tom. I think the difference may be what Tom alluded to as adjustments for "model drift" in the Smith et al TOA model net imbalance. Then again, I could have made some mistake in my processing.

    If I am correct the first observation I would make is that the better quality data we have on observations, the bigger the spread between OHC energy input and TOA model energy imbalance.
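For reference, the net TOA imbalance from the three CMIP5 radiation variables is conventionally incoming shortwave minus reflected shortwave minus outgoing longwave; a minimal sketch (the flux values below are illustrative round numbers, not model output):

```python
def toa_net_imbalance(rsdt, rsut, rlut):
    """Net downward energy flux at the top of the atmosphere, in W/m^2.

    rsdt: incoming (downwelling) shortwave at TOA
    rsut: reflected (upwelling) shortwave at TOA
    rlut: outgoing (upwelling) longwave at TOA
    """
    return rsdt - (rsut + rlut)

# Illustrative round numbers of roughly the right magnitude:
# ~340 W/m^2 in, ~100 reflected, ~239 emitted.
print(toa_net_imbalance(340.0, 100.0, 239.0))
```

Because the imbalance is a small residual of three large fluxes, small offsets in any one variable (such as the RSDT discrepancy discussed in the following comments) shift the computed net substantially.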

  37. Klapper @886, very briefly, the CMIP5 RSDT from KNMI is consistently larger than the equivalent estimated value from the SORCE TSI reconstruction currently considered to be the best TSI reconstruction by the IPCC.  In the late nineteenth century, the discrepancy is about 0.8 W/m^2.  Even if we align the two estimates over the late 20th century, what we get is an increasing overestimate by CMIP5 with time:

    So, the most obvious thing about your diagram is not that the discrepancy becomes largest where the observations are most accurate, but that it becomes largest where the solar component is known to be over represented in the models.  That alone accounts for approximately 0.2 W/m^2 of your discrepancy, and possibly more depending on how accurate the difference between model and observed solar input is over the full record.

    This, of course, continues to ignore the effect of the large number of small volcanoes in the early twenty-first century that are observed, but not included in the CMIP-5 data which will account for yet more of the discrepancy.

  38. @Tom Curtis #887:

    "... In the late nineteenth century, the discrepancy is about 0.8 W/m^2"

    "...Even if we align the two estimates over the late 20th century..."

    Yes, the models use a TSI history that is consistently 0.8 to 0.9 W/m^2 higher measured at the Earth's surface (or 3.2 to 3.6 W/m^2 of total solar flux) compared to the SORCE TIM-adjusted reconstruction. How do you explain that, with this alleged massive error over the whole of the 20th century, they manage to replicate surface temperature as well as they do?

    Is this the reason behind the "model drift correction" employed by Smith et al you alluded to earlier?

  39. Klapper @888:

    1)  Why are you focusing on the least germane part of my comment?  Surely the important thing here is the change in the RSDT discrepancy over the last 15-odd years.  The discrepancy over the full period is relevant only in illustrating that models use observational data of forcings that are approximately 10 years out of date (of necessity, given the time it takes to set up and run models, and delays related to publication time).  It follows that minor discrepancies over more recent periods between model predictions and up-to-date data are as likely to be due to the updating of the data as to any problem with the models.  In particular, over the last decade or so, we know that model forcings are too large relative to recent observations because of an unpredicted very low solar minimum and recent low solar activity, and because of recent small-scale volcanism (also not included in the models).  That you have run this entire argument without ever acknowledging this fact, even when it is pointed out to you, shows deliberate avoidance.

    2)

    "How do you explain that, with this alleged massive error over the whole of the 20th century, they manage to replicate surface temperature as well as they do?"

    More specifically, in relation to my point (1), models predict changes in temperature anomalies.  Slight changes in a forcing consistently applied over the whole duration will not affect the anomaly and therefore are not relevant.  They are relevant to absolute temperature values, as shown in the side box to this graph:

    You will notice that the multi-model mean is about 0.2 C less than (ie colder than) the observed values.  The primary effect of the 0.8 W/m^2 difference in solar insolation would be to reduce that further by a small amount.  (Note as an aside that the absolute value of the GMST is not well constrained by observations.  I have seen values of 14 C and 15 C quoted based on different temperature series.  Further note with respect to your "models always run hot" comment on another thread, in this and many other cases, they run cold.  It is only that deniers cherry pick only those cases where the models "run hot" for their criticisms.)
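
    The point about constant offsets and anomalies can be illustrated with a toy calculation (a minimal sketch; the temperatures below are made-up numbers, not observations or model output):

```python
# A constant offset in a model's absolute temperatures disappears once
# both series are expressed as anomalies relative to a common baseline
# period. Temperatures here are invented for illustration.
obs = [14.0, 14.1, 14.3, 14.2, 14.5]  # "observed" GMST, degrees C
model = [t - 0.2 for t in obs]        # a model running 0.2 C cold throughout

def anomalies(series, base_len=2):
    """Anomalies relative to the mean of the first base_len values."""
    baseline = sum(series[:base_len]) / base_len
    return [round(t - baseline, 3) for t in series]

print(anomalies(obs))    # [-0.05, 0.05, 0.25, 0.15, 0.45]
print(anomalies(model))  # identical: the constant offset cancels
```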

    3)  I have clearly linked to Smith et al, who in turn clearly cited Sen Gupta et al on climate drift.  I am not going to try to explain it further, as I do not understand it well enough.  I am going to acknowledge that the relevant experts think it a significant factor and correct for it, and note the corrected values.  You, on the other hand, seem intent on holding it dubious because you couldn't be bothered to do your own homework.

  40. For completeness, here are the absolute discrepancies at the top of the troposphere, the Smith et al corrected values, and the Smith et al observed values for the periods listed in Smith et al, for comparison:

    Period | CMIP5 | CMIP5 (Smith) | Obs (Smith)
    1861-1880 | 0.29 | xxxx | xxxx
    1961-2010 | 0.56 | 0.36 | 0.33
    1971-2010 | 0.67 | 0.46 | 0.48
    1993-2010 | 0.91 | 0.68 | 0.59
    2000-2010 | 0.92 | 0.73 | 0.62
    2005-2010 | 0.90 | xxxx | 0.58

    The factors explaining the difference between the Smith et al CMIP5 values and the CMIP5 absolute values are:

    1)  The CMIP5 values as downloaded from the KNMI Climate Explorer are, strictly speaking, top-of-troposphere (tropopause) values, whereas Smith et al may have obtained actual Top Of Atmosphere values.  The primary difference between top-of-tropopause and TOA values is that TOA solar values would be slightly higher, as would outgoing longwave radiation (due to the effect of the stratosphere).

    2) Smith et al are corrected for model drift.

    The primary factors relating to the difference between observed values and the absolute CMIP5 values are:

    a)  CMIP5 forcings are known to be overstated by 0.2-0.4 W/m^2 relative to anomaly values from the late 1990s onward due to low solar and background volcanic effects.

    b) CMIP5 absolute values of solar forcings are known to be overstated relative to observations by an unknown (by us) amount.  The amount is unknown in that in benchmarking values for the observations, all three relevant factors may have been adjusted, so that the solar values may have been greater than those from SORCE TIM, but would not have been less than the unadjusted SORCE-CMIP5 discrepancy of about 0.8 W/m^2.  This could account for the ongoing high bias of CMIP-5 absolute values.  (Note again, such a constant offset of a forcing would not affect appreciably changes in anomaly temperature values.)

    c) CMIP5 absolute values apparently need correction for model drift, although I cannot do more than note the stated necessity by relevant experts and refer you to the relevant literature on this point.

    Combining these three factors we have an explanation for the increased discrepancy in the 21st century that explains from half to all of the discrepancy observed.  We have a further explanation that potentially over-explains the persistent high bias of CMIP-5 absolute values.  Finally we have a factor that essentially eliminates the discrepancy prior to the 21st century.  If anything, given all this, the models are running too cold relative to known discrepancies.

    The important point is not that we have these explanations.  With further refinement of observations, the correction factors they imply are likely to shift so that the models are running hot again, or colder.  The important point is that the models have run within error of observations, and that there are factors that can explain both short term increases in the discrepancy and long term persistent features.  Ergo it is jumping the gun to conclude from this that the models are in error. 

  41. @Tom Curtis #889 & 890:

    "... It follows that minor discrepancies over more recent periods between model predictions..."

    I don't think they are minor, I think they help explain the recent lack of surface temperature gain in the observations compared to that projected by the models. The discrepancy from observations to models is currently 48% (0.90 to 0.62 W/m^2 TOA energy imbalance).

    "...because of recent small scale volcanism (also not included in the models).."

    I don't accept that argument. Forster and Rahmstorf 2011 did a multivariate regression on the effects of TSI, ENSO and AOD, albeit against surface temperature, not TOA imbalance, but their Figure 7 shows essentially no significant effect from aerosols after the mid-nineties (at least compared to ENSO and TSI). You'd be better off including ENSO in your arguments than small volcanoes, as I doubt the latter come close to the effect of the former. I suspect that's your next argument: ENSO deflated the observed TOA imbalance in the first decade of the twenty-first century, which the models didn't include.

    "...Slight changes in a forcing consistently applied over the whole duration will not affect the anomaly and therefore are not relevant.."

    That's a rather astounding statement, given it's untrue if you mean that changes in forcing won't affect the delta in the temperature anomaly.

    "...You will notice that the multi-model mean is about 0.2 C less than (ie colder than) the observed values..."

    Irrelevant. The forcing changes the warming rate, not the baseline, which is dependent on the starting temperature/starting heat content. The warming rate in the models is essentially the same as the observations for surface temperature, yet the magnitude of the solar input in the CMIP5 model inputs appears to be approximately 0.85 W/m^2 too high (if we can believe the SORCE TSI reconstruction). This is a serious issue that you chose to treat as unimportant, but it is. Either the models are using the wrong input, or the SORCE 20th century TSI reconstruction is wrong.

    "...Further note with respect to your "models always run hot" comment on another thread, in this and many other cases, they run cold..."

    Calculate the SAT trend in all of the models and tell me what percentage run "hot" and what percentage run "cold". Not many of them run cold, and we shouldn't waste our time on semantic arguments when the ensemble mean is consistently above the observations for TOA imbalance. Look at your own table above. The model forcing is higher than the observations in all but 1 of 10 period comparisons to the observations (5 CMIP5, 5 CMIP5 "adjusted").

    "....KNMI climate exporer (sic) are strictly speaking top of troposhere"

    What makes you think that? Maybe there's an issue with translation from Dutch, but the description in the CMIP5 "standard output" document for the "rlut" variable is:

    "at the top of the atmosphere (to be compared with
    satellite measurements)"

    And if the "rsdt" variable were top of the troposphere, it should be lower than the TSI reconstruction, not higher, as some incoming SW would not make the tropopause due to absorption in the stratosphere.

    "...(Note again, such a constant offset of a forcing would not affect appreciably changes in anomaly temperature values.)..."

    Once again, we are not talking about offsetting forcings (I agree that doesn't matter); we are talking about a difference in the net between input and output at TOA, which does affect anomaly values. It is not true that the net forcing in the models is the same as the observations.

    "...CMIP5 forcings are known to be overstated by 0.2-0.4 W/m^2..."

    "...Ergo it is jumping the gun to conclude from this that the models are in error."

    Both of the above statements cannot be true. The models, according to you, are (currently at least) in error. If the models are not in error, why do they need to correct the TOA imbalance numbers for model drift?

    I think my next step will be to compare the CMIP5 model TSI input to the ACRIM TSI reconstruction.

  42. Klapper wrote: "I don't want 'perfect data', I want the best data."

    Great! So what pre-Argo data is there which is better than the XBT results? None? Then guess what "the best data" for that time period is. :]

  43. @CBDunkerson #892:

    I used datasets compiled using XBT inputs. As you can see, my graph shows ocean heat content changes going back to 1959 (the pentadal dataset starts in 1957, so a centred 5-year trend first occurs in 1959). However, since the XBTs have problems with depth resolution, based on sink rates, they are nowhere near as good as the ARGO floats. Unfortunately the ARGO network only reached a reasonable spatial density in 2004 or 2005.

  44. Klapper... Don't throw the baby out with the bathwater just because he's not reached puberty yet.

  45. Klapper - all measurement systems have issues. The question to ask is what can be determined from the measurements available, and to what accuracy. This is dealt with in a number of papers, particularly here. See also the supplementary materials in the Levitus papers on OHC. What do you perceive to be the errors in this analysis?

    Your earlier response dismissing pre-Argo data simply pointed to the sparsity of deeper data (and why is 2014, in the age of Argo, relevant?). To dismiss 0-700 m warming because 700-2000 m data is sparse, however, means having a plausible mechanism for 700-2000 m cooling while 0-700 m heats.

    Looking over your posting history, it appears to me that you have made an a priori choice to dismiss AGW and seem to be trying to find something plausible, anything!, for dismissing inconvenient data rather than trying to understand climate. If this is correct, then do you have an idea of what future data might cause you to revise your a priori choice?

  46. Klapper,  I was looking at the NODC 0-2000 OHC data as a check on the empirical data.  Year by year, here is the comparison, with "world" equalling the 0-2000 meters adjusted by a scaling factor based on Smith et al. (The scaling factor is to multiply by 0.58/0.47, or equivalently, divide by 0.81.)

    For reproducibility, the 2005 value is based on the difference between the 2005.5 and 2006.5 OHC, which represents therefore the gain in OHC between those periods (ie, the gain in OHC for 2005).  Overall, there is an average difference between the models and observation in this period of 0.12 W/m^2.
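
    For anyone wanting to reproduce this kind of figure, the conversion from an annual OHC gain to a global-mean flux divides the energy by Earth's surface area and the seconds in a year. A sketch (the OHC increment below is illustrative, not an NODC value):

```python
# Convert a one-year gain in ocean heat content (joules) to the
# equivalent global-mean flux in W/m^2.
EARTH_SURFACE_AREA = 5.10e14  # m^2
SECONDS_PER_YEAR = 3.156e7    # s

def ohc_to_flux(delta_ohc_joules):
    """Global-mean flux (W/m^2) implied by a one-year OHC change."""
    return delta_ohc_joules / (EARTH_SURFACE_AREA * SECONDS_PER_YEAR)

# Example: an illustrative gain of 1.0e22 J in one year
print(round(ohc_to_flux(1.0e22), 3))  # ~0.62 W/m^2
```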

    For comparison, here are the five year means over that period shown in the graph:

    Period_____ | CMIP5 | Obs | Diff
    2005-2010 | 0.91 | 0.67 | 0.24
    2006-2011 | 0.95 | 0.42 | 0.53
    2007-2012 | 0.99 | 0.58 | 0.41
    2008-2013 | 1.02 | 0.83 | 0.20
    2009-2014 | 1.02 | 1.02 | 0.00
    Mean______ | 0.98 | 0.71 | 0.27

    The means of the five-year means exaggerate the discrepancy because they count the middle (low) values more often than the high endpoints.
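
    The weighting effect can be checked directly: when overlapping five-year windows are averaged together, each year is counted once per window containing it, so interior years dominate. A sketch (the window years are chosen for illustration):

```python
# Count how often each year appears across overlapping 5-year windows:
# endpoint years appear once, interior years up to five times, so an
# average of the window means over-weights the middle of the period.
from collections import Counter

years = list(range(2005, 2014))                            # 2005..2013
windows = [years[i:i + 5] for i in range(len(years) - 4)]  # five 5-year windows

weights = Counter(y for w in windows for y in w)
print(dict(sorted(weights.items())))
# {2005: 1, 2006: 2, 2007: 3, 2008: 4, 2009: 5, 2010: 4, 2011: 3, 2012: 2, 2013: 1}
```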

    In any event, it is clear that when you say "The discrepancy from observations to models is currently 48% (0.90 to 0.62 W/m^2 TOA energy imbalance)" it is not true.  The discrepancy, if we take the latest five year mean is in fact 0%.  Of course, a better observational basis may restore that discrepancy.

  47. Klapper @891:

    1)

    ""...because of recent small scale volcanism (also not included in the models).."

    I don't accept that argument."

    I really don't care about your propensity for avoiding inconvenient information.  Recent papers show that the volcanic effect has influenced temperature trends and TOA energy imbalance.  Thus we have Santer et al (2014):

    "We show that climate model simulations without the effects of early twenty-first-century volcanic eruptions overestimate the tropospheric warming observed since 1998. In two simulations with more realistic volcanic influences following the 1991 Pinatubo eruption, differences between simulated and observed tropospheric temperature trends over the period 1998 to 2012 are up to 15% smaller, with large uncertainties in the magnitude of the effect."

    Haywood et al (2013):

    "Using an ensemble of HadGEM2-ES coupled climate model simulations we investigate the impact of overlooked modest volcanic eruptions. We deduce a global mean cooling of around −0.02 to −0.03 K over the period 2008–2012. Thus while these eruptions do cause a cooling of the Earth and may therefore contribute to the slow-down in global warming, they do not appear to be the sole or primary cause."

    And most directly of all, Solomon et al (2011):

    "Recent measurements demonstrate that the “background” stratospheric aerosol layer is persistently variable rather than constant, even in the absence of major volcanic eruptions. Several independent data sets show that stratospheric aerosols have increased in abundance since 2000. Near-global satellite aerosol data imply a negative radiative forcing due to stratospheric aerosol changes over this period of about –0.1 watt per square meter, reducing the recent global warming that would otherwise have occurred. Observations from earlier periods are limited but suggest an additional negative radiative forcing of about –0.1 watt per square meter from 1960 to 1990. Climate model projections neglecting these changes would continue to overestimate the radiative forcing and global warming in coming decades if these aerosols remain present at current values or increase."

    If you add the -0.1 W/m^2 additional aerosol load after 2000 to the approximately -0.1 W/m^2 from the discrepancy between modelled and observed solar forcing, you get a CMIP5 absolute-value energy imbalance of 0.72 W/m^2 from 2000 to 2010, ie, only 16% greater than observed (Smith et al), and using drift-corrected figures the modelled TOA energy imbalance becomes 14.5% less than the observed values.  Forster and Rahmstorf used values from prior to these analyses and so cannot be expected to have incorporated them.  Therefore citing Forster and Rahmstorf is not a counter-argument.  It is merely an appeal to obsolete data.
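
    The percentages quoted follow from the 2000-2010 values in the table upthread (0.92 and 0.73 W/m^2 for CMIP5 raw and drift corrected, 0.62 W/m^2 observed) once the two roughly -0.1 W/m^2 corrections are applied. A quick arithmetic check:

```python
# Verify the ~16% and ~14.5% figures using the 2000-2010 values from the
# table upthread and the two ~0.1 W/m^2 corrections (aerosol and solar).
obs = 0.62                    # observed TOA imbalance, W/m^2 (Smith et al)
cmip5_raw = 0.92              # CMIP5 absolute value, W/m^2
cmip5_drift_corrected = 0.73  # CMIP5 drift-corrected value, W/m^2
adjustment = 0.1 + 0.1        # aerosol plus solar-discrepancy corrections

print(round((cmip5_raw - adjustment) / obs - 1, 3))              # 0.161, i.e. ~16% high
print(round(1 - (cmip5_drift_corrected - adjustment) / obs, 3))  # 0.145, i.e. ~14.5% low
```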

    2)  With regard to the SORCE data, the situation is very simple.  The SORCE reconstruction is essentially an earlier reconstruction, benchmarked against PMOD, which has been rebenchmarked against the SORCE data.  The effect of that is to shift the entire reconstruction down by the difference between the TSI as determined by PMOD and that as determined by SORCE.  Consequently the TOA shortwave down radiation is shifted down by a quarter of that value over the entire length of the reconstruction.  Because that shift occurs over the entire length of the reconstruction, the difference between twentieth-century values of the solar forcing and preindustrial values (ie, rsdt(y) minus rsdt(pi), where rsdt(y) is the downward shortwave radiation at the tropopause in a given year, and rsdt(pi) is the downward shortwave radiation at the tropopause in 1750) does not change, because both the twentieth-century values and the preindustrial values have been reduced by the difference between PMOD and SORCE.  Ergo there is no appreciable change in the solar radiative forcing in the twentieth century as a result of the difference.

    In contrast, for twenty-first century values, the models use a projection so that the difference between (model rsdt minus SORCE value) and the mean twentieth century difference is significant because it does represent an inaccurate forcing in model projections.

    The tricky bit comes about in a direct comparison of TOA energy imbalance.  In determining the "observed" energy imbalance, Smith et al, following Loeb et al, adjust the satellite-observed rsdt, rsut and rlut so that the net value matches the calculated increase in OHC from 2005-2010, and so as to maximize the likelihood of the adjustments given the error margins of the three observations.  Consequently, in all likelihood, they have adjusted the rsdt upward from the SORCE estimate.  Therefore, when comparing observations to models, we are dealing with two adjustments to rsdt.  First we have an implicit adjustment in the models that results in the radiative forcing being preserved in the models.  This implicit adjustment is equivalent to the average difference between the model rsdt and the SORCE reconstruction.  Secondly, we have another, smaller adjustment to the SORCE value that results from the benchmarking of the empirical values.  Because this adjustment is smaller than the first, it generates a persistent gap between the observed and modelled rsdt, resulting in a persistent difference in the energy balance.

    From the fact that this gap is persistent, the size of the TOA energy imbalance and that temperatures were rising from 1861-1880, it is evident that the gap (and hence the persistent bias) is less than 0.2 W/m^2.  I suspect, however, that it is at least 0.1 W/m^2 and probably closer to 0.2 than to 0.1 W/m^2.

    3) 

    ""....KNMI climate exporer (sic) are strictly speaking top of troposhere"

    What makes you think that?"

    The fact that the graph of rsdt shows a clear downward spike in 1992 (Pinatubo) and another, smaller one in 1983 (El Chichon).  That makes sense with increases in stratospheric aerosols, but is impossible if the data are truly from the TOA (rather than the TOA by convention, ie, the tropopause).

    4) 

    ""...CMIP5 forcings are known to be overstated by 0.2-0.4 W/m^2..."

    "...Ergo it is jumping the gun to conclude from this that the models are in error."

    Both above statements cannot be true. The models according to you are (currently at least) in error. If the models are not in error why do they need to correct the TOA imbalance numbers for model drift?"

    By "both of these statements cannot be true", you are really only indicating that you don't understand it.  In fact, every time you said it in the post above, you were wrong.

    So, let's start from basics.  Climate models are models that, given inputs in the form of forcings, produce outputs in the form of predictions (or retrodictions) of a large number of climate variables.  When you have such a model, if you feed it non-historical values for the forcings, it is not an error of the model if it produces non-historical values for the climate variables.  So, when we discover that forcings have been overstated for the first decade and a half of the twenty-first century, we learn absolutely nothing about the accuracy of climate models.  We merely rebut some inaccurate criticisms of the models.  It follows that the first sentence does not contradict, but rather provides evidence for, the second.

    With regard to model drift, had you read the relevant scientific paper (to which I linked) you would have learnt that it is impossible to determine, without exhaustive intermodel comparisons, whether drift is the result of poor model physics, too short a run-up time, or poor specification of the initial conditions.  Only the first of these counts as an error in the model.  Ergo, you cannot conclude from model drift that the models are flawed.  All you can conclude is that, if you accept that the model drift exists, then you ought to correct for it, and that uncorrected model projections will be rendered inaccurate by the drift.  Now here you show your colours, for while you steadfastly refuse to accept the drift-corrected TOA energy imbalance figures as the correct comparator, you want to count model drift as disproving the validity of models.  That is an incoherent position.  Either the models drift, and we should compare drift-adjusted projections to empirical observations, or they don't drift, in which case you can't count drift as a problem with the models.

  48. @Rob Honeycutt/scaddenp #894/#895:

    You're complaining not because I didn't utilize the data, which I did, but, I think, because I don't embrace it as much as I should. It is what it is and I accept that; however, I am not the only one with doubts about the reliability of the data. See these comments from Kevin Trenberth et al 2012:

    "...(XBTs) were the main source from the late 1960s to 2004 but, because depth or pressure of observations weren't measured, issues in drop rate and its corrections plague these data and attempts to correct them result in varied outcomes."

    My key point was that the data are certainly far better with the ARGO collecting system, and that analyses using these later systems should carry more weight than '60s/'70s/'80s analyses.

  49. @Scaddenp #895:

    "... (and why is 2014 in age of Argo relevant?)..."

    I think you're referring to my comparison with the 5-year '68 to '72 inclusive data density map at 1500 m. I could have given you any 5-year period from 2005 on for the ARGO (i.e. 2005 to 2009 inclusive), but it's not important whether I used 1 year or 5 from the ARGO era, or whether it was 2011 or 2014 or whatever. The point is that the data density now in the deep ocean is many orders of magnitude better than in the '60s to '90s.

  50. Klapper @898&899.

    It is bizarre that you are happy to present a trace of ΔOHC 1959-2010 @886, then to happily junk 90% of it because it doesn't meet some level of precision that you have decided is required. Indeed, discussing your dismissal of pre-2005 OHC data isn't going to be very helpful if you cannot make a better case for doing so. For instance, 'many orders of magnitude of data density in deep oceans' (which sounds exaggerated) can be translated into data uncertainty, so it doesn't justify the use of the rubbish bin. Further, inclusion or otherwise of such data is an aside to the central point of this interchange, which is the ability of the models to handle the global energy balance.

    I am of the opinion that you are pretty much wrong on every point presently being discussed (as per @891, for instance). I think scaddenp @895 has probably diagnosed the situation. As you continue to protest that you still hold a valid position, the explanations of why you are wrong become ever more detailed and technical, but that will probably not be helpful.

    There are two things required to establish your "the models run hot" assertion. Firstly, that model output is higher than measured values. This is possibly true, but not to the large extent that you are arguing. And secondly, that the inputs into the model are not the reason for those higher output values. It does appear that the inputs are the reason for the higher model output, to the extent that the models are probably running cool, the opposite of your position.



© Copyright 2017 John Cook