






How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating all the equations that describe the interactions between all the components of our climate system. Climate modelling is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.
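A simple climate model really can be just a few equations. The classic starting point is a zero-dimensional energy-balance model, which balances absorbed sunlight against outgoing infrared radiation. The sketch below is illustrative only, with textbook parameter values; it is not any model used in the IPCC assessments.

```python
# A minimal zero-dimensional energy-balance model: the simplest possible
# "climate model". It balances absorbed sunlight against outgoing infrared
# radiation to find Earth's equilibrium surface temperature.
# (Illustrative sketch with textbook approximations, not an IPCC model.)

SOLAR_CONSTANT = 1361.0   # W/m^2, incoming solar radiation at top of atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.612        # effective emissivity (a crude stand-in for the greenhouse effect)

def equilibrium_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO,
                            emissivity=EMISSIVITY):
    """Solve S(1 - a)/4 = e * sigma * T^4 for T (kelvin)."""
    absorbed = solar * (1.0 - albedo) / 4.0   # averaged over the whole sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

t = equilibrium_temperature()
print(f"Equilibrium surface temperature: {t:.1f} K ({t - 273.15:.1f} C)")
```

Setting the effective emissivity to 1 (i.e. no greenhouse effect) gives roughly 255 K, about 33 C colder than observed; that gap is the greenhouse effect, which complex models resolve explicitly rather than folding into one tuned parameter.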

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather forecast models rarely extend even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.
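The "trends, not events" point can be seen in a few lines of code: bury a small, steady warming trend under much larger year-to-year noise, and any single year tells you almost nothing, yet a multi-decade least-squares fit recovers the trend. The numbers below are entirely synthetic, purely for illustration.

```python
# Toy illustration of why trends matter: hide a small warming trend
# (0.02 C/year) under much larger year-to-year "weather" noise, then
# recover it with an ordinary least-squares fit over six decades.
# (Synthetic data; the trend and noise levels are invented.)
import random

random.seed(42)

TRUE_TREND = 0.02   # degrees C per year
YEARS = list(range(60))
temps = [TRUE_TREND * yr + random.gauss(0.0, 0.3) for yr in YEARS]

def linear_trend(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    return cov / var

slope = linear_trend(YEARS, temps)
print(f"True trend: {TRUE_TREND} C/yr, recovered trend: {slope:.3f} C/yr")
```

Individual years here swing by tenths of a degree either way, dwarfing the 0.02 C/yr signal, yet averaging over sixty years pins the slope down closely: that is what "trends smooth out single extreme events" means in practice.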

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then, either because we made observations at the time or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened: from early on, climate models predicted future global warming, and multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings can adequately explain temperature variations prior to the last thirty years, but none of them can explain the rise over that period. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
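The hindcast-then-validate logic can be sketched as follows. This toy substitutes a trivial one-parameter statistical fit where a real GCM uses physics, so it only illustrates the testing procedure: calibrate on an early slice of the record, then score the model on later years it never saw. All numbers here are invented for the illustration.

```python
# A sketch of hindcast-then-validate: calibrate a (deliberately trivial)
# model on an early slice of a synthetic temperature record, then check
# its skill on a later slice it never saw. Real GCMs are physics-based,
# not fitted regressions; only the testing logic is the point here.
import random

random.seed(7)

# Synthetic "observed" record: response to an accelerating forcing, plus noise.
years = list(range(1900, 2021))
forcing = [0.0002 * (y - 1900) ** 2 for y in years]        # hypothetical forcing index
observed = [0.5 * f + random.gauss(0.0, 0.1) for f in forcing]

split = years.index(1980)
train_f, train_t = forcing[:split], observed[:split]
test_f, test_t = forcing[split:], observed[split:]

# "Calibrate": least-squares sensitivity of temperature to forcing, past only.
sens = (sum(f * t for f, t in zip(train_f, train_t)) /
        sum(f * f for f in train_f))

# "Validate": predict the held-out years and measure the error.
predictions = [sens * f for f in test_f]
rmse = (sum((p - t) ** 2 for p, t in zip(predictions, test_t))
        / len(test_t)) ** 0.5
print(f"Fitted sensitivity: {sens:.3f}; hindcast RMSE on held-out years: {rmse:.3f} C")
```

Because the held-out years were never used in the calibration, a small error on them is genuine evidence of skill, not curve-fitting; that is exactly the standard applied when climate models are run from a past starting point forward to the present.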

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that fossil fuel giant Exxon's own scientists knew about climate change all along. A recent study of their internal modelling (Supran et al. 2023 - open access) found it to be just as skilful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over nearly 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending, continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial.

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

Click the thumbnail for the concise fact brief version created in collaboration with Gigafact:


Comments


Comments 401 to 425 out of 601:

  1. It also looks like your friend is confusing physical models (used by climate scientists) with statistical models. However, for a purely phenomenological approach, perhaps he should look at Benestad and Schmidt.
  2. A blogger has just posted an objection to climate models that I haven't come across before. Hopefully someone here knows more about it than me. The claim is that: "By concentrating on anomalies they (scientists) hide the fact that the models get the absolute global temperatures wrong by as much as 2C." I was under the impression that temperature anomalies are used because they correlate well over large distances, whereas absolute temperature doesn't. This sounds like the type of argument that McKitrick and Essex might have used. Any ideas, anyone? Thanks, Paul
    Response:

    [DB] "By concentrating on anomalies they (scientists) hide the fact that the models get the absolute global temperatures wrong by as much as 2C."

    The key word in your quote is "hide".  Your blogger is operating under the premise that there is a conspiracy, therefore ____________ is true.  This is a blatant obfuscation, easily proven wrong.

    From the NOAA page:

    1. Why use temperature anomalies (departure from average) and not absolute temperature measurements?

      Absolute estimates of global average surface temperature are difficult to compile for several reasons. Some regions have few temperature measurement stations (e.g., the Sahara Desert) and interpolation must be made over large, data-sparse regions. In mountainous areas, most observations come from the inhabited valleys, so the effect of elevation on a region’s average temperature must be considered as well. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.

      Using reference values computed on smaller [more local] scales over the same time period establishes a baseline from which anomalies are calculated. This effectively normalizes the data so they can be compared and combined to more accurately represent temperature patterns with respect to what is normal for different places within a region.

      For these reasons, large-area summaries incorporate anomalies, not the temperature itself. Anomalies more accurately describe climate variability over larger areas than absolute temperatures do, and they give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends.

    NASA has a nice synopsis on models, here.

  3. With reference to my post above, I've now been given a link to the article it is apparently based on. It's from lucia at The Blackboard Has anyone come across this before?
  4. DB, I appreciate your response. In fact, I have already pointed out the fact that anomalies rather than absolute temperatures are used for the reasons stated by NOAA. My problem was that I was not sufficiently confident of my facts regarding the models to say for certain that the raw data output does not appear in the form of absolute temperature. It certainly wouldn't make sense for it to do so given that the global temperature datasets are presented as anomalies, but I wanted to check up first. Thanks, Paul
    Response:

    [DB] Apologies; I didn't mean to imply that you hadn't.  My intent was to provide you with a sourced, concise reference.  Sphaerica gives some good links to resources on models here.

  5. Thanks for the further links, DB. I've now located a paper which discusses the discrepancy in time-averaged global mean temperatures between the different GCMs: An Overview of Results from the Coupled Model Intercomparison Project (CMIP) The paper states: "Both flux-adjusted and non-flux-adjusted models produce a surprising variety of time-averaged global mean temperatures, from less than 12°C to over 16°C. Perhaps this quantity has not been the subject of as much attention as it deserves in model development and evaluation." However, given that we're dealing with energy flux here, the appropriate unit is surely the kelvin. In this context, all of the models get within 2K of the actual global mean, which appears to be around 287K - that's within 0.7%!! I'd say that's pretty remarkable given all the various features which are incorporated into the models. Indeed, if they model the response to greenhouse gases anything like that well I'm sure the scientists will be delighted! Paul
  6. Camburn: "2. My impression of section 8 is that there is confidence in the output of the models concerning temperature because this is simple physics." This alone contradicts your claim that models don't have predictive ability. Of course you immediately contradict yourself to some degree: "As far as multiple model runs and picking the middle as a result. Being the models do not do well with clouds, hydro, etc which do affect not only weather, but climate as well, the outputs of the models should be in question." Well, of course model outputs are in question - does the actual sensitivity to a doubling of CO2 lie at the low end or high end of the 2.5-4C range that's constrained by a bunch of scientific study, including but not limited to research involving GCMs? Just because modeling of clouds is identified as being an area where models don't do as well as one would like (because of restrictions on resolution; there are people who do very interesting work modeling clouds using high-resolution models on small slices of the atmosphere) doesn't mean that there is no constraint on the magnitude of cloud feedbacks. You - and the denialsphere in general - say "cloud feedbacks aren't as well constrained as the radiative properties of CO2" (for instance) and conclude "therefore, the magnitude of cloud feedbacks is not constrained at all" and furthermore argue that cloud feedbacks must be strongly negative to the point of counterbalancing CO2 and water vapor forcing. That's just crap.
  7. 406, dhogaza, Let me see if I get this right: 1) Clouds are a factor that models do not handle as well as desired 2) Therefore clouds are not handled at all 3) Therefore models are unreliable and have no predictive skill 4) Therefore we don't know what climate sensitivity is 5) Therefore we don't even know global warming is happening (natural cycles!) 6) Therefore we can't be certain to what degree we should take action 7) Therefore we can't be sure if we should take action at all 8) Therefore we shouldn't take any action at all QED!
    Response:

    [DB] Moved to the appropriate thread.  My bad.

  8. dhogaza: The models do well with co2 because of the simple physics. However, there is a lot more to climate than just co2 levels. The hydro cycle is critical.
    Response:

    [DB] Please provide peer-reviewed evidence that models do not deal adequately with the hydrological cycle.  This is a climate science website; opinions are of no value without a scientific undercarriage to support them.

    Climate models have this scientific undercarriage; your opinions do not.

  9. 408, Camburn, Say something beyond the obvious, and make an actual point rather than a vague and wholly uncertain implication. And remember to support it with facts and references.
  10. In addition to their 'undercarriage,' models get better with time. People who run models learn from prior work. That seems to be a significant problem with the denialists - they just keep repeating the same old generic 'models are unreliable.' For example, listed here are several publications from a NASA water cycle study group. These folks are addressing the very issues that Camburn is looking for: evaporation, clouds, soil moisture, etc. But really: is there something likely to come out of this detail work that will undo the warming to date? That will undo the fact that forcing from atmospheric CO2 keeps rising? That these nonsensical objections (Warming paused! You can't be sure! There's no basis!) are just distractions from the real questions? Sphaerica's assessment is quite correct. We are in this situation. Meanwhile there's a tropical storm in the Atlantic in mid-November and the worst storm in 37 years in Alaska on the same day.
  11. Sphaerica: "Let me see if I get this right: 1) Clouds are a factor that models do not handle as well as desired 2) Therefore clouds are not handled at all 3) Therefore models are unreliable and have no predictive skill 4) Therefore we don't know what climate sensitivity is" Typically #4 is "4) therefore climate sensitivity is about 1C per doubling, max", no ??? :) ITSM that's where they always end up. Even Curry does it ... her "uncertainty monster" argument is that poor treatment of uncertainty causes climate scientists to *overestimate* sensitivity. Not "underestimate error bars" ...
  12. You've got to love the way uncertainties in parts of climate models get conflated with "models are unreliable", or "models do not have predictive ability". Say it's mid-August in Melbourne, and the daytime temperature is a respectable (and close to average) 15C. Can I forecast the exact temperature two weeks from now? No. But I can say that it's likely that the average temperature during September will be a bit higher than 15C. Some days will be cooler, but it's very likely, though not certain, that most will be warmer. As for October, I can forecast that nearly all days will have a max temperature higher than 15C, and for November and December, it's unlikely that any day will be below 15C. I know this because the underlying forcing, not visible in a short timeseries with large variability, shows up over a longer period of time. The underlying forcing beats the variability every time. I know that October will be warmer than August, although not every October day will beat every August day. In the same manner, I can be very confident that the 2010s and the 2020s will be warmer than the 1990s and 2000s, even though not every later year will beat every earlier year. The models forecast this very well, alongside a great deal of more complex factors. Some factors not so well, but claiming unreliability betrays an inability to understand the usefulness of models. Is the model unreliable because it cannot pick out the exact variability due to noisy variations in the short term? If you're forecasting the weather two months ahead, yes, but if you're forecasting the climate, no.
  13. 411, dhogaza, No, no, no. What if the cloud feedback is negative, as RW1 claims? Then climate sensitivity is 0.5C per doubling, or even 0.0C per doubling. Maybe even -1C per doubling! You darn science types are always making false assumptions and jumping to alarmist conclusions.
  14. So ... no Camburn. Am I only only person who thinks he was just going off on models in an effort to derail the other thread?
    Response:

    [DB] Camburn has elected to recuse himself from further participation.

  15. I have recently heard an allegation from skeptics that you always get the same results from models, regardless of the information you put in, and that they must therefore be extremely unreliable. Anyone have an idea regarding what they are talking about?
  16. peacetracker @415, models can be set up with forcings typical of the peak of the last "ice age" (the Last Glacial Maximum) and they will yield climate predictions featuring kilometer thick ice sheets over North America and Europe. They can be set up with forcings typical of the Paleocene-Eocene Thermal Maximum and will yield tropical water temperatures in Arctic seas. So not only do I not know what they are talking about, evidently if they claim climate models produce the same results regardless of input, neither do they.
  17. peacetracker - if you can tune models to produce what you want, then why can't skeptics take any one of the models (open source) and produce the current warming without needing anthropogenic factors?
  18. This is probably a reference to McIntyre and McKitrick 2005. I remember this being touted by 'skeptics' during Climategate 1.0 as proof that the models were doctored. See also the What evidence is there for the hockey stick? thread.
  19. Thanks to all for the feedback pbjamm - Have checked out your references and am guessing that's probably it. Cheers.
  20. skywatcher. The weather varies from day to day because of atmospheric pressure and wind blowing either hot or cold air from other areas of the planet on to the location that you're observing the weather from. Now, if manmade pollution is the main factor that governs climate change, and natural forcing agents are a much lesser factor, then why can't you predict the global mean temperature next year, 5 years, 10 years, 15 years etc. with a reasonable degree of confidence?
  21. "If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future." Interesting point of view. See my Nikkei example which demolishes that thought.
  22. "Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption." I note that there is no hyperlink to any report which actually proves that a model was produced prior to an eruption taking place, that the prediction was proved correct in terms of the volcanic eruption's effect on the global mean temperature.
  23. Jdey123 - the reasons why models have little skill with decadal-level prediction are well understood. It may improve, but this has little to do with the skill of models designed to predict climate, not weather. You do understand the difference between a climate model and a weather model? I would note that models are very successful at predictions within their domain, eg look here (noting the papers cited both in making the prediction and observing it). As to volcanoes - models respond to specific aerosol loadings at given altitudes and locations. Until a volcano erupts, you don't know what these will be. Instead, models use scenarios that put in volcanoes at the rate they are normally observed. If you look at any of the climate model predictions beyond the present, you will see downturn spikes in places (and they will be different for different models and for different runs of the same model). These are simulated volcanoes. They are not saying that there will be a volcano at this time and place, but if they didn't put periodic volcanoes into the scenario, then the temperatures would be too high. (A long span of very quiet volcanic activity is in effect a natural forcing.) If you think that code is "fitted" to reproduce volcanic change, then you could take code from before an eruption, put the volcano into the scenario, and rerun. Glory awaits you if this doesn't match the published outputs from scientists doing the very same thing.
  24. Ok, so my example including hyperlinks showing why stock market prediction is analogous to climate prediction and showing why extrapolating historical trends has been deleted. The post was on topic and scientific, so why has this been deleted?
    Response:

    [muon] This is not about the stock market. There are several threads dealing with the overall accuracy of past climate predictions - as well as the overall inaccuracy of predictions made by those in denial.
    You've been counseled multiple times on other threads to read, learn and follow the Comments Policy. As you were already told, posting on this forum is a privilege, not a right.


    [DB] Ok, you have now had 4 comments deleted since this one was posted, all of which amount to moderation complaints, trolling and taunting.  No more warnings.  Zero.

    Either adhere to the Comments Policy, a rule the vast majority of participants here have no difficulties whatsoever in adhering to, or you "choose to recuse yourself from this venue".

    Your call.

  25. Re Dow. Well actually I expect that the stock market does in fact respond to forcings, but there isn't a quantitative model to test. Climate IS different. There is a quantitative model based on known physics, not a deduction based on observation of a trend. The models are not one-dimensional. They make a huge number of predictions on a wide variety of parameters with spatial and vertical structures. These predictions vary in robustness, but all amount to tests of the model. The evolving climate is a continuous test of these predictions. Please learn how to do hyperlinks (see tips below comment box).
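The mountain-top and valley example from the NOAA excerpt quoted in the moderator response above can be put into a few lines of code: the two stations differ by many degrees in absolute temperature, yet their anomalies tell the same, comparable story. The station values below are invented for illustration.

```python
# Why anomalies, not absolute temperatures: two nearby stations with very
# different absolute temperatures can both show the same climate signal
# once each is expressed as a departure from its own long-term average.
# (All station values here are invented for illustration.)

def anomaly(observed, baseline):
    """Departure of an observation from that location's long-term average."""
    return observed - baseline

# Hypothetical July monthly means (C): a cool summer month at both sites.
valley_baseline, valley_obs = 24.0, 22.5   # valley station
summit_baseline, summit_obs = 8.0, 6.8     # nearby mountain-top station

valley_anom = anomaly(valley_obs, valley_baseline)   # -1.5 C
summit_anom = anomaly(summit_obs, summit_baseline)   # -1.2 C

# The absolute temperatures differ by about 16 C, but both anomalies are
# negative: both locations were cooler than normal, which is the signal
# that can be meaningfully averaged over a region.
print(f"Valley anomaly: {valley_anom:+.1f} C, summit anomaly: {summit_anom:+.1f} C")
```

Averaging the raw absolute temperatures would be dominated by which stations happen to report (valleys vs summits, deserts vs coasts); averaging anomalies removes that sampling problem, which is why large-area summaries use them.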



© Copyright 2024 John Cook