Can we trust climate models?

Posted on 24 May 2011 by Verity

This is the first in a series of profiles looking at issues within climate science, also posted at the Carbon Brief.

Computer models are widely used within climate science. They allow scientists to simulate experiments that would be impossible to run in reality, in particular projecting future climate change under different emissions scenarios. These projections have demonstrated how strongly the actions we take today shape our future climate, with clear implications for the climate policy decisions society makes now.

However, many climate sceptics have criticised computer models, arguing that they are unreliable or that they have been pre-programmed to come up with specific results. Physicist Freeman Dyson recently argued in the Independent that:  

“…Computer models are very good at solving the equations of fluid dynamics but very bad at describing the real world. The real world is full of things like clouds and vegetation and soil and dust which the models describe very poorly.”

So what are climate models? And just how trustworthy are they?

What are climate models?

Climate models are numerical representations of the Earth’s climate system. Their output is generated by solving equations that represent fundamental physical laws. These physical laws (described in this paper) are well established, and are replicated effectively by climate models.
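To make this concrete, here is a minimal sketch, with assumed illustrative numbers and not taken from any actual climate model, of how one physical law (conservation of energy) becomes a tiny numerical model of the Earth's temperature:

```python
# A zero-dimensional energy-balance sketch: absorbed sunlight must equal
# emitted infrared radiation at equilibrium. All parameter values below are
# assumed, round-number illustrations.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0          # incoming solar radiation at the top of the atmosphere (W m^-2)
ALBEDO = 0.3         # fraction of sunlight reflected back to space (assumed)
EMISSIVITY = 0.61    # effective emissivity standing in for the greenhouse effect (assumed)

def equilibrium_temperature():
    """Solve EMISSIVITY * SIGMA * T**4 = S0 * (1 - ALBEDO) / 4 for T."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (EMISSIVITY * SIGMA)) ** 0.25

print(f"Equilibrium surface temperature: {equilibrium_temperature():.1f} K")  # roughly 288 K
```

A full climate model applies the same principle, but across vastly more equations and interacting processes.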

Major components of the climate system, such as the oceans, land surface (including soil and vegetation) and ice/snow cover, are represented in the current crop of models. The various interactions and feedbacks between these components have also been added, using equations to represent the physical, biological and chemical processes known to occur within the system. This has enabled the models to become more realistic representations of the climate system. The figure below (taken from the IPCC AR4 report, 2007) shows the evolution of these models over the last 40 years.

Climate model development over the last 40 years

Models range from the very simple to the hugely complex. For example, ‘earth system models of intermediate complexity’ (EMICs) consist of relatively few components and can be used to focus on specific features of the climate. The most complex climate models are known as ‘atmosphere-ocean general circulation models’ (A-OGCMs) and were developed from early weather prediction models.

A-OGCMs treat the earth as a 3D grid system, made up of horizontal and vertical boxes. External influences, such as incoming solar radiation and greenhouse gas levels, are specified, and the model solves numerous equations to generate features of the climate such as temperature, rainfall and clouds. The models are run over a specified series of time-steps, and for a specified period of time.

As the processing power of computers has increased, model resolution has improved hugely, allowing grids of many millions of boxes and very small time-steps. However, A-OGCMs still have considerably more skill in projecting large-scale than small-scale phenomena.
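As a very rough illustration of the grid-and-time-step idea (a toy with assumed values, far coarser than any real A-OGCM), the sketch below steps temperatures on a one-dimensional latitude grid forward in time, letting heat diffuse from warm boxes to cold ones:

```python
import numpy as np

# Toy latitude grid: 36 boxes from pole to pole, stepped forward with simple
# explicit diffusion. The initial state and diffusion number are assumed values.
n_boxes = 36
lat = np.linspace(-87.5, 87.5, n_boxes)          # box-centre latitudes (degrees)
T = 288.0 + 25.0 * np.cos(np.radians(lat))       # warm equator, cold poles (K)
r = 0.2                                          # diffusion number per step (< 0.5 for stability)

for step in range(1000):                         # fixed number of time-steps
    interior = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T = np.concatenate(([T[0]], interior, [T[-1]]))  # hold the polar boxes fixed for simplicity

print(f"Equator-to-pole temperature contrast after the run: {T[n_boxes // 2] - T[0]:.1f} K")
```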

IPCC model projections

As we have outlined in a previous blog, the Intergovernmental Panel on Climate Change (IPCC) developed different potential ‘emissions scenarios’ for greenhouse gases. These emissions scenarios were then fed into the A-OGCMs. Combining the outputs of many different models allows their reliability to be assessed. The IPCC used outputs from 23 different A-OGCMs, developed by 16 research groups, to reach its conclusions.
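The multi-model idea can be sketched with entirely synthetic 'projections' standing in for real A-OGCM output; the ensemble mean gives a central estimate and the spread across models a rough measure of uncertainty:

```python
import numpy as np

# Hypothetical ensemble: each "model" is a random warming rate plus noise,
# loosely mirroring the 23 AOGCMs used in AR4. All numbers are invented.
rng = np.random.default_rng(0)
years = np.arange(2000, 2101)
n_models = 23

rates = rng.normal(0.03, 0.005, n_models)        # assumed warming rates (K per year)
runs = rates[:, None] * (years - years[0]) + rng.normal(0.0, 0.1, (n_models, years.size))

ensemble_mean = runs.mean(axis=0)
spread = runs.std(axis=0)                        # inter-model spread as a rough uncertainty band
print(f"Illustrative warming by 2100: {ensemble_mean[-1]:.1f} K "
      f"(+/- {spread[-1]:.1f} K across models)")
```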

Can we trust climate models?

'All models are wrong, but some are useful.' (George E. P. Box)

There are sources of uncertainty in climate models. Some processes in the climate system occur on such a small scale or are so complex that they simply cannot be reproduced in the models. In these instances modellers use a simplified version of the process or estimate the overall impact of the process on the system, a procedure called ‘parameterisation’. When parameters cannot be measured, they are calibrated or ‘tuned’, which means that the parameters are optimised to produce the best simulation of real data.

These processes inevitably introduce a degree of error. This can be assessed by sensitivity studies, i.e. systematically varying the model parameters to determine the effect of each parameter on the model output.
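A one-at-a-time sensitivity study can be sketched with the toy energy-balance model from earlier: vary one assumed, poorly constrained parameter (here the effective emissivity, standing in for a tunable parameterisation) and record how the simulated temperature responds:

```python
# One-at-a-time sensitivity sweep on the toy energy-balance model.
# All parameter values and the sweep range are assumed for illustration.
SIGMA = 5.67e-8
S0 = 1361.0
ALBEDO = 0.3

def toy_temperature(emissivity):
    """Equilibrium temperature of the zero-dimensional model for a given emissivity."""
    return (S0 * (1.0 - ALBEDO) / 4.0 / (emissivity * SIGMA)) ** 0.25

for eps in (0.58, 0.60, 0.62, 0.64):             # assumed range for the swept parameter
    print(f"emissivity = {eps:.2f} -> T = {toy_temperature(eps):.1f} K")
```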

Other sources of potential error are harder to predict or quantify, for example not knowing what the next scientific breakthrough will be, or how it will affect current models.

The IPCC AR4 report evaluated the climate models used for their projections, taking into account the limitations, errors and assumptions associated with the models, and found that:

“There is considerable confidence that AOGCMs provide credible quantitative estimates of future climate change, particularly at continental and larger scales.”

This confidence comes from the fact that the physical laws and observations on which climate models are built are well established and have not been disproven, so we can be confident in the underlying science of the models.

Additionally, the models developed and run by different research groups show essentially similar behaviour. Model inter-comparison allows robust features of the models to be identified and errors to be determined.

Models can successfully reproduce important, large-scale features of the present and recent climate, including temperature and rainfall patterns. However, it must be noted that parameter ‘tuning’ accounts for some of the skill of models in reproducing the current climate. Models can also reproduce past climates: for example, simulations of broad regional climate features of the Last Glacial Maximum (around 20,000 years ago) agree well with data from palaeoclimate records.

Climate models have successfully forecast key climate features. For example, model projections of sea level rise and temperature produced in the IPCC Third Assessment Report (TAR - 2001) for 1990 – 2006 show good agreement with subsequent observations over that period.
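A hindcast check of this kind boils down to comparing a projected series with what was subsequently observed. The sketch below uses invented numbers, not the actual TAR projections or observations, purely to show the mechanics:

```python
import numpy as np

# Invented projected and "observed" temperature anomalies for 1990-2006,
# compared with simple agreement metrics.
years = np.arange(1990, 2007)
projected = 0.02 * (years - 1990)                # assumed projected anomaly (K)
rng = np.random.default_rng(2)
observed = 0.018 * (years - 1990) + rng.normal(0.0, 0.05, years.size)  # assumed observations

bias = np.mean(observed - projected)
rmse = np.sqrt(np.mean((observed - projected) ** 2))
print(f"Mean bias: {bias:+.3f} K, RMSE: {rmse:.3f} K over 1990-2006")
```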

So it is a question of whether the climate science community’s understanding of the uncertainties is sufficient to justify confidence in model projections, and to justify basing policy on them. Whether we choose to accept or to ignore model projections, we take on a risk either way. As Professor Peter Muller (University of Hawaii) put it in an email to the Carbon Brief:

“Not doing anything about the projected climate change runs the risk that we will experience a catastrophic climate change. Spending great efforts in avoiding global warming runs the risk that we will divert precious resources to avoid a climate change that perhaps would have never happened. People differ in their assessment of these risks, depending on their values, stakes, etc. To a large extent the discussion about global warming is about these different risk assessments rather than about the fairly broad consensus of the scientific community.”

It should be noted that limits, assumptions and errors are associated with any model, including those routinely used in aircraft or building design, and we are happy to accept the risk that those models might be wrong.

For more information about climate models:



Comments



  1. Riccardo I don't think this paper sheds much light on the limitations of GCMs (especially ensembles of GCMs). As I said, GCMs only aim to predict forced climate change, so it shouldn't be a surprise to anyone that a statistical forecasting model that is directly aiming to predict the observed climate is more accurate. They haven't mentioned this point as far as page 10; perhaps they do afterwards. They provide very little justification for decadal timescales as those relevant to policymaking. As they say that the IPCC are more interested in long-term centennial timescales, one wonders why they mention the IPCC so often in the report, given that IPCC policy guidance is based on centennial timescales, not decadal ones (and thus their paper is only of tangential relevance). Their idea about reinitialising the GCMs is basically what is done in reanalysis, so it is hardly new. The paper also hints that this is what is done in the Mochizuki paper anyway (but one paper at a time! ;o) I like the analogy; similarly, why would F&K (or any skeptics) expect the sprinter to win the marathon? Statistical methods working well on decadal predictions does not mean they are more accurate on centennial scales. As a statistician myself, I would much rather extrapolate using a physics-based model than a purely statistical model (and a neural network is pretty much the last statistical model I would use). An interesting experiment would be to see how statistical methods fare in predicting the output of individual model runs (treating the model as statistically exchangeable with the real world) on both decadal and centennial scales. I suspect the models perform better on centennial scales (as they are designed to do), even if the model used for prediction is not the same model used for generating the synthetic observations. It would at least be a sanity check of their conclusions.
  2. "Diffenbaugh and Scherer analyzed more than 50 climate model experiments and found that large areas of Earth could experience a permanent increase in seasonal temperatures within only 60 years. Their analysis included computer simulations of the 21st century when global greenhouse gas concentrations are expected to increase, and simulations of the 20th century that accurately “predicted” the Earth’s climate during the last 50 years." Source: "Stanford Climate Scientists Forecast Hotter Years Ahead", Planetsave (http://s.tt/12BR7)
  3. The more I read this paper, the more surprised I am that it made it through review. There are five steps listed in the description of the GCM system (DePreSys); however, in the evaluation F&K only use the first four, but admit that the fifth step gives an improvement in performance. In other words, the GCM is only allowed into the fight with one arm tied behind its back. If I were a reviewer I would not have recommended this paper be published unless a fair comparison were performed with the GCM used in accordance with the maker's instructions. It may be true that this would limit the prediction horizon to nine years rather than ten, but that is far less of a problem than using an incorrect implementation of the GCM-based method.
  4. Section 3.3 of the F&K paper shows they definitely have no idea about the way in which GCMs are used. They try to use GCMs to predict station data, which obviously is a non-starter. The climate at particular stations depends a lot on the local geography (compare rainfall in Manchester and York, for example), whereas the value for the nearest gridbox in a GCM is an average over a very large area. For that very reason, impacts studies use statistical downscaling methods that aim to predict local (station-level) data from the GCM output for nearby grid boxes (e.g. European scale). Does this get a mention in the paper? No. The reason is that they are basing their work on one paper by Koutsoyiannis et al, which was subject to those same criticisms over at RealClimate.
  5. Dikran you're absolutely right on this last point; it doesn't make much sense to compare GCM output with station data. I also agree that they overstate their conclusions and that the IPCC shouldn't be mentioned at all. Probably they are new to the field of climate and slipped badly on this. But let me go back to my original point. I didn't mean that this paper has any particular value; though, it addresses a point that might be relevant when people try to study the possibility of medium-term projections. We all know that GCMs are not designed to do this and that they need some new idea to perform this task. F&K's idea is not new, right, and not the only one. But I hadn't seen it applied to decadal projections. I think it's worth a try, and in doing this I wouldn't be surprised if we learn something about the short/medium-term behaviour of the climate system.
  6. Riccardo I am not against testing the accuracy of GCMs on a decadal timescale, especially as the modellers seem to feel this is becoming a worthwhile activity (judging from the references given). The problem is that the F&K paper, if anything, sets progress back by its use of bogus statistical procedures and lack of understanding of GCMs. Even their combination of statistical models and GCMs is invalid because they optimise the combinations on the test data and perform a multitude of experiments without taking into account the fact that they are doing multiple hypothesis tests. The conclusions seem to be an alternation of overstatement and caveats. This is not the way to do science: do the experiment right and then report the results. If you can't draw a conclusion without caveats that effectively make the conclusion invalid, don't draw it in the first place. The paper suggests decadal predictions will be in the next IPCC report; I expect they will get the experiments basically right, so better to wait for that, IMHO.
  7. Kevin C @79. "To what extent is the response of a GCM constrained by the physics, and to what extent is it constrained by training?" This is exactly the question lurking (even subliminally) in the back of every moderately skeptical mind. I have played with the EDGCM model a bit and it is frustrating to not know what is going on in there. One approach might be to get a list of all the tuned or trained parameters and the values assigned to them (I have never seen such a list) and vary each by an even increment, one at a time. Something like this must have been done as the parameters were developed, but with a different objective. To know the answer to your question would be a huge benefit to climate science.
    Moderator Response: [Dikran Marsupial] This sort of sensitivity analysis has been done using a variety of models, for example, see the experiments at climateprediction.net.
  8. Dikran ironically, we're seeing a statistician (you) arguing against a paper based on statistical models and a physicist (me) defending it. Or maybe it's not so ironic. Perhaps it's just because I am a physicist that I see the limitations of the current approach and appreciate insights coming from different fields that could help overcome those limitations. The shorter the time span of interest, the more the chaotic behaviour of the climate system is apparent; deterministic models can't go into this realm. It is clearly shown in this paper: DePreSys performs poorly below one decade. This paper at least tells us something about this limit, which is useful information to have, and one of the (maybe) many possible ways to go beyond it.
  9. Riccardo it is as it should be, "familiarity breeds contempt" as they say, and I am familiar with statistics and you with physics! ;o) The reason I would prefer the physics-based model is that extrapolation is safer from a physics-based model than a statistical one. If you have a causal model, it is constrained to behave in unfamiliar conditions according to the nature of the causal relationship built into the model. With a statistical model, there is essentially very little to constrain how it will extrapolate. An even better reason to prefer the physics-based model is that if you perform enough model runs, the error bars on the projection will be very broad, so the model openly tells you that it doesn't really know. For a statistically based model, on the other hand, if you over-fit it (which is essentially what I suspect they have done in combining the GCM and statistical models tuned on the test set), the error bars on the predictions will be unduly narrow, and make the model look more confident of its prediction than it really ought to be. I don't think it really tells us anything about where the limit lies, as the GCM they used was not actually the full DePreSys procedure, but only part of it (which they admit doesn't work as well as the full version). It is a travesty that the paper got published without a fair comparison with the full version of DePreSys. As it stands it is essentially a straw-man comparison. There is no reliable evidence that the combination approach is an improvement because they tuned on the test set (again, I would not have given the paper my blessing as a reviewer while it still contained tuning on the test set, even with a caveat, as the biases that sort of thing can introduce can be very substantial). One thing they could have done would be to re-run the experiment multiple times using one model run as the observations and a subset of the others as the ensemble. Each time they could perform the analysis again and see how often the combination approach was better. That would give an indication of the "false positive" rate, as the GCM would be the true model, so you couldn't genuinely improve it by adding a statistical component. I suspect this would be quite high because of the multiple hypothesis testing and the tuning on the test data.
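A rough sketch of the 'perfect model' check suggested in the comment above, using entirely synthetic runs and invented numbers: treat one run as the observations and ask how often a purely statistical fit beats the ensemble mean of the remaining runs on held-out years.

```python
import numpy as np

# Synthetic "perfect model" experiment: every run shares an assumed warming
# trend plus noise. One run at a time plays the role of the observations.
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
n_runs = 20
true_signal = 0.02 * (years - years[0])                     # assumed forced trend (K)
runs = true_signal + rng.normal(0.0, 0.15, (n_runs, years.size))

wins = 0
train, test = slice(0, 50), slice(50, None)                 # fit on 1950-1999, test on 2000-2010
for i in range(n_runs):
    pseudo_obs = runs[i]
    ensemble_mean = np.delete(runs, i, axis=0).mean(axis=0) # ensemble of the remaining runs
    coeffs = np.polyfit(years[train], pseudo_obs[train], 1) # statistical "model": straight-line fit
    stat_err = np.mean((np.polyval(coeffs, years[test]) - pseudo_obs[test]) ** 2)
    ens_err = np.mean((ensemble_mean[test] - pseudo_obs[test]) ** 2)
    wins += int(stat_err < ens_err)

print(f"Statistical fit beats the ensemble mean in {wins}/{n_runs} pseudo-worlds")
```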
  10. BTW, while I was looking up the URL for trunkmonkey, I found a paper that might be of interest, "Climate Predictability on Interannual to Decadal Time Scales: The Initial Value Problem" which is next on my reading list.
  11. Dikran no doubt that physics-based models are better for long extrapolations because they are constrained. In the case of short-term extrapolations, though, this advantage ceases to be significant, while what is left of the chaotic behaviour (internal variability, if you wish) cannot be dealt with. There's a chance that internal variability can be better accounted for statistically. I think that we are not used to this time frame and we have some difficulty thinking about it in the proper way. This problem is common among specialists in any field. As an example not related to GCMs, think about the ocean heat content and the missing heat (if any) that Trenberth and others are pursuing, or ENSO-related variability. GCMs may even simulate this variability, but there is no way to have it at the right time, i.e. no way to forecast it. Assimilation may help to predict it and maybe even understand its origin. Probably I just like the idea behind the F&K paper while the paper itself doesn't give much insight. Though, I wouldn't simply dismiss it as a skeptic paper.
  12. trunkmonkey - parameterizations will vary from model to model. But note that things like the relationship of evaporation to windspeed are tuned from observations of windspeed and evaporation, not by fiddling a value to fit a temperature. You can get detail from the model documentation (see here for a list), e.g. AOM GISS. If it were possible to tune a climate model so that we could ignore GHGs, then I think that would have been a much cheaper proposition for opponents than funding disinformation, and a lot more convincing.
  13. I checked out climateprediction.net and was pleased to find an actual table of the parameters and their values for one of the Hadley models. It was a daunting list, and after looking at scaddenp's GISS documentation it is clear what an enormous undertaking separating the parameters from the physics would be. Riccardo mentions difficulties approaching things the right way, and this brings to mind the model handling of the THC. The cartoons at the top of this thread show schematically how the approach is rooted in the notion of Meridional Overturning Circulation. There is a large literature on MOC, and I was disappointed to see in the climateprediction.net references that people are still mired in this notion in 2010. IMO the entire notion of MOC reflects a Euro-American bias toward the Atlantic, sort of like we study Greece and Rome and ignore the Han Dynasty. To really understand how the THC works requires a continuous ocean view like the one above (if it works) by Alexandro Van de Sande. That the Atlantic bottom water is being actively pumped out is supported by satellite measurements that it is about a meter lower. The Antarctic beltway sure looks like a centrifugal pump to me... The models are at their best when the fluid dynamic equations spontaneously produce observed behavior. I don't think we can even evaluate the behavior if we are stuck on the notion of MOC.
    Moderator Response: [Dikran Marsupial] "would be" is the wrong tense. As I said, sensitivity analyses have already been done, including by climateprediction.net, and you can even download their results if you want to analyse them for yourself.
  14. trunkmonkey I'm confused. What's the difference between the graph you show and the common notion of MOC? I can't see any Atlantic-centrism here. It just describes a physical mechanism by which sea waters may sink and upwell, just Archimedes' principle if you wish.
  15. Why are you "separating the parameters from the physics"? They are physics too. Have you read the RealClimate FAQ on parameterization?
  16. Riccardo The common notion of MOC was developed in the Atlantic, where the Gulf Stream was identified long ago, and slowly evolved into the concept of a large convection cell extending through both hemispheres along a meridian somewhat west of Greenwich. I think in the 1960s people began to realize that this was part of a larger thermohaline circulation, but the details were vague. When the benthic foram and ice core data came online, people noticed that when Greenland was colder during DO events, Antarctica was warmer. This led to the idea that the MOC, and indeed the THC, were hemispheric temperature-balancing mechanisms, with the Greenland-Antarctic axis critical to the overall circulation. A large literature is devoted to how the MOC may have been shut down or restricted during Bond events and when vast glacial lakes suddenly dumped fresh water into the Nordic sinking area. I believe that the THC differs from the MOC in three important ways: the Atlantic is not the most important axis and the overall circulation is driven by the Antarctic beltway; the circulation loops in the Pacific and Indian oceans move in the opposite direction from the Atlantic; and the most important axis of the THC is latitudinal rather than longitudinal (meridional), that is, it serves to balance the ocean basins rather than the hemispheres. There, you see? I can be one of those math-challenged, arm-waving geologists!
  17. trunkmonkey actually the term THC is less general than MOC. The latter does not specify one particular physical mechanism, and thus includes wind and tidal drivers. Apart from this, they are commonly used interchangeably, and even though the term "meridional" may cause some misunderstanding, as you're showing, it does not mean in any way that it includes just the Atlantic latitudinal motion. But maybe we should get back on topic.
  18. Scaddenp: "Why are you 'separating the parameters from the physics'? They are physics too." (wink) Parameterisations are actually geology.
  19. I see your wink but I do not follow your comment at all. Care to explain?
  20. Sorry if this is the wrong way to ask; this is the closest article I could find so far, though. Is there another article that looks more specifically at model tuning? Or is this an appropriate place to ask questions about details like that as well?

    Moderator Response: [TD] Thank you for trying to find the appropriate thread! This thread is fine. Another relevant one is Models are Unreliable. Then there is question 6 in Dana's post "Answers to the Top Ten...."

  21. I'm trying to understand model tuning correctly. It seems, from most references I can find via the IPCC and papers like that by Mauritsen et al, that it is pretty common practice when tuning climate models to adjust cloud parameters to balance TOA energy. It sounds like it is again pretty universal that this is a very necessary step to prevent unrealistic drift of the TOA energy balance. Given that the TOA energy balance drives everything in our climate, getting that right is a prerequisite for reasonable model behaviour. I've got follow-up questions on interpreting this, but am I even correct in understanding things as expressed thus far?
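To illustrate the tuning step described in this comment (purely a toy with assumed values, nothing like real GCM tuning), one could adjust a single cloud-albedo parameter until the top-of-atmosphere imbalance of a simple energy-balance model is close to zero:

```python
# Toy tuning: bisect an assumed cloud-albedo parameter so that absorbed solar
# radiation balances outgoing infrared at a fixed, assumed surface temperature.
SIGMA, S0 = 5.67e-8, 1361.0
T_TARGET = 288.0                 # assumed surface temperature to hold fixed (K)
EMISSIVITY = 0.61                # assumed effective emissivity

def toa_imbalance(cloud_albedo):
    absorbed = S0 * (1.0 - cloud_albedo) / 4.0
    outgoing = EMISSIVITY * SIGMA * T_TARGET ** 4
    return absorbed - outgoing   # W m^-2; positive means too much energy retained

lo, hi = 0.1, 0.5                # assumed plausible tuning range for the parameter
for _ in range(50):              # bisection search
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if toa_imbalance(mid) > 0 else (lo, mid)

tuned = 0.5 * (lo + hi)
print(f"Tuned cloud albedo: {tuned:.3f}, residual imbalance: {toa_imbalance(tuned):.2e} W m^-2")
```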

  22. bcglrofindel, since you've gotten no reply to your query here, I suggest you ask on the "Unforced Variations" open thread at RealClimate.

  23. @Tom Dayton,

    Thanks, giving it a try on that thread as well.





