How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.
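
To make the simple end of that spectrum concrete, here is a minimal sketch in Python of a zero-dimensional energy-balance model - the kind of "few equations" calculation that would fit on a spreadsheet. The albedo and emissivity values are illustrative round numbers, not tuned results:

    # Minimal zero-dimensional energy-balance model (illustrative values).
    # Balances absorbed sunlight against outgoing thermal radiation to
    # estimate a global mean surface temperature.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # solar constant, W m^-2
    ALBEDO = 0.3       # fraction of sunlight reflected back to space
    EMISSIVITY = 0.61  # crude stand-in for the greenhouse effect

    def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
        """Temperature (K) at which absorbed solar power equals emitted power."""
        absorbed = S0 * (1 - albedo) / 4          # averaged over the sphere
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    print(round(equilibrium_temperature(), 1))    # ~288 K, close to observed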

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating the equations that describe the interactions between the many components of our climate system. Climate modelling is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.
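
By way of illustration only: real climate models solve coupled equations for fluid flow, radiation and moisture over millions of 3-D grid cells, but even a toy one-dimensional heat-diffusion loop, sketched below in Python, shows the basic pattern of updating every grid cell at every time step:

    # Toy 1-D diffusion step: each grid cell is one equation, updated every
    # time step. A real climate model does this for millions of 3-D cells
    # and many coupled variables (wind, humidity, radiation, ...).

    def diffusion_step(temps, k=0.1):
        """One explicit time step of 1-D heat diffusion with fixed ends."""
        new = temps[:]
        for i in range(1, len(temps) - 1):
            new[i] = temps[i] + k * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
        return new

    cells = [0.0] * 20
    cells[10] = 1.0          # a single warm cell
    for _ in range(100):     # heat spreads along the grid over time
        cells = diffusion_step(cells)
    print(max(cells))        # peak has flattened as heat diffuses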

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather forecast models rarely extend to even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then, either because we made observations or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened: from early on, climate models predicted future global warming, and multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
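
To make the "weather averaged over 30 years" idea concrete, here is a minimal Python sketch using synthetic data (the trend and noise values are invented for illustration) of how a running 30-year mean smooths out extreme single years while leaving the underlying trend visible:

    import random

    random.seed(0)

    # Synthetic annual anomalies: a 0.02 degC/yr trend plus year-to-year noise
    years = list(range(1900, 2021))
    anomalies = [0.02 * (y - 1900) + random.gauss(0, 0.2) for y in years]

    def running_mean(values, window=30):
        """Average each full 30-year window; single extreme years wash out."""
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    smoothed = running_mean(anomalies)
    # The smoothed series roughly recovers the imposed 0.02 degC/yr trend
    # despite the +/-0.2 degC noise in individual years:
    print((smoothed[-1] - smoothed[0]) / (len(smoothed) - 1))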

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings adequately explain temperature variations prior to the last thirty years, but none of them can explain the temperature rise of the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
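
The logic of that test can be sketched with toy numbers (the forcing values below are invented for illustration; real attribution studies use full climate models): run the hindcast with and without the CO2 contribution and see which version matches the observed record.

    # Toy hindcast: linear response to forcings (illustrative values only).
    # Real attribution uses full GCMs; this just shows the logic.

    decades  = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
    solar    = [0.00, 0.02, 0.01, 0.00, 0.01, 0.00, 0.00]   # K, toy values
    volcanic = [0.00, -0.02, 0.00, -0.05, -0.03, 0.00, 0.00]
    co2      = [0.05, 0.10, 0.18, 0.30, 0.45, 0.62, 0.80]
    observed = [0.05, 0.09, 0.20, 0.26, 0.44, 0.63, 0.81]

    def rmse(sim, obs):
        """Root-mean-square mismatch between simulation and observations."""
        return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

    with_co2    = [s + v + c for s, v, c in zip(solar, volcanic, co2)]
    without_co2 = [s + v for s, v in zip(solar, volcanic)]

    print(rmse(with_co2, observed))     # small: hindcast matches history
    print(rmse(without_co2, observed))  # large: recent warming unexplained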

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."
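
As a rough illustration of the kind of comparison involved (the series below are hypothetical, not the study's data), one can compute the linear warming trend of a published projection and of the observations over the same period and check that they agree within some tolerance:

    def linear_trend(years, temps):
        """Ordinary least-squares slope in degrees per year."""
        n = len(years)
        my = sum(years) / n
        mt = sum(temps) / n
        num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
        den = sum((y - my) ** 2 for y in years)
        return num / den

    years = list(range(1990, 2021))
    projection  = [0.018 * (y - 1990) for y in years]   # hypothetical model run
    observation = [0.019 * (y - 1990) + 0.05 * ((-1) ** y)   # hypothetical obs
                   for y in years]                           # with wiggles

    diff = abs(linear_trend(years, projection) - linear_trend(years, observation))
    print(diff < 0.005)  # True here: the trends agree within a rough tolerance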

Speaking of empirical evidence, you may be surprised to know that the giant fossil fuel corporation Exxon's own scientists knew all about climate change, all along. A recent study of their in-house modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending and continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

A concise fact brief version of this rebuttal was created in collaboration with Gigafact.


Comments


Comments 251 to 275 out of 310:

  1. @mistermack: false analogy. If you don't put your money on the horse (i.e. you don't trust the models), you are unaffected whether it wins or loses. Your life goes on as normal. If you don't "put money" on AGW (i.e. disbelieve the experts) and it turns out to be true - as the body of science strongly suggests - then you'll be affected. A better analogy would be if someone kidnapped a loved one, told you to pick the next Kentucky Derby winner, and warned you they'll kill the hostage if you pick the loser. Which horse would you pick then? The favorite (i.e. the one the experts say has a better chance of winning than the others according to odds calculations), or a long shot that experts say is unlikely to win?
  2. mistermack, are you suggesting that the hypothetical model for the Kentucky Derby would be based on the laws of physics ?
  3. Jmurphy, do you really believe that the models are just constructed from the laws of physics? Someone sat down with a physics textbook and developed the current models? Without peeking at the previous data even once? "based on" is a meaningless phrase here.
    Response: Look in the green box at the bottom of this Argument--the box labeled "Further Reading." Click those links. You will learn how physics is used in the models, and how and to what degree observations are used.
  4. mistermack, as well as looking at the Moderator's comment, you can investigate models further in WIKIPEDIA (Climate Models, Global Climate Models), NASA (The Physics of Climate Modeling), and THE DISCOVERY OF GLOBAL WARMING (Simple Models of Climate). How could the first models have peeked at the previous data ? You're right about one thing, though, they aren't just constructed using the laws of physics : there's a lot of maths in there too, as well as some chemistry.
  5. Oops, sorry - half my links were already in the green box. Sorry !
  6. #253: "told you to pick the next Kentucky Derby winner" That would be akin to predicting the next hurricane's landfall location from a study of prior landfalls. A more reasonably-posed analogy might be to conclude from a study of prior races that there are factors that categorize the field of entrants into 'likely' winners and 'likely' losers (forgive my alarmist language). For example (from wikipedia): "No horse since Apollo in 1882 has won the Derby without racing at age two." That would make a 'very unlikely' outcome. So from a study of climate, it is perhaps unreasonable to predict a specific heat wave, but not at all unreasonable to build a model that asks: "would these [extreme weather] events have occurred if atmospheric carbon dioxide had remained at its pre-industrial level of 280 ppm?", to which an appropriate answer is "almost certainly not." But what is all this about models not having a peek at prior data? On another thread, there have been comments to the effect that models and evidence somehow pollute each other; that makes no sense to me. What use is a model if it is not built on prior data and tested by subsequent data?
  7. Muoncounter, my point isn't that we shouldn't have models, or that you shouldn't seek to improve them by using previous data. I take all that as obvious. My problem is that when I look on this site, or anywhere, for good evidence that manmade CO2 is going to cause significant harm, the only evidence of any significance is that the models match the data, or the data matches the models. Since the models are developed to match the data, what do they expect? Don't quote it as evidence, that's all I'm saying.
    Response: The models are not developed merely to match "the data" in the pejorative sense you are using the term "the data." Please actually read the material that I and others have pointed you to, for explanations of exactly how observations are used in model construction. Your mere repetition of your contentions is not contributing to the discussion.
  8. To answer the point about physics and chemistry, it's stretching it rather a lot to say that since the models involve (or are based on) physics and chemistry, they must be right. Bridges fall, buildings collapse. Shuttles explode. Their design is always based on maths, physics and chemistry. They can get it wrong. But we have long experience of successful building. We have zilch of successful climate forecasting. So I think I'm right to be sceptical of the models' ability to get it right at this stage.
    Response: You are incorrect that we have "zilch" experience in successful climate forecasting. You really should actually read the posts. Be sure to click the "Intermediate" tab.
  9. mistermack, so we have a phenomenon, we build a theory (a model) and compare it to the observed phenomenon. If they agree I throw the model away because it is trivial; if they do not, I throw it away anyway. I'm puzzled. More seriously, the first model dates back to 1896. Not enough subsequent data to test it? Of course there is. It has since been refined and tested again, and so on for many decades. Apparently you're seeing just the last generation of models, as if they came out of nowhere.
  10. @mistermack: you have yet to demonstrate exactly how models are unreliable. You should provide evidence that supports your allegations, otherwise it's hard to take them seriously.
  11. mistermack, to add to the fact that models have been successful in forecasting: Predicting the past is still a prediction (see retrodiction). If you build a simulation of a physical system, it is appropriate to test that simulation by comparing it to past performance of the real system. This is true of any physical model, including models of bridges and space shuttles. The fact that you are comparing to past data does not mean that the simulation has past data "programmed in" as you are implying. What you are thinking of is a statistical model, where the inputs are directly mapped to outputs via a mathematical relationship derived directly from historical data. This is not how physical climate models are derived. Since the model is built on physical laws and not on direct statistics, there is no reason to assume that a particular model could ever recreate past climate behavior, unless that model has some basis in reality. If the basic physics underlying the model is significantly off, then no amount of tweaking would ever result in an accurate recreation of past performance. The fact that it can recreate past performance is therefore evidence that the model is correct, since the likelihood is very slim that the model would be able to accurately recreate real performance if it were significantly wrong in its recreation of physics. [A toy illustration of this distinction appears below the comment thread.]
  12. #260:"Bridges fall, buildings collapse. Shuttles explode. ... They can get it wrong. " You're forgetting a significant cause of such unpleasant events: Google search 'operator error accidents'. Such is not the case in a climate model, where there is no one to push the wrong button, run past a red signal or close a valve that should be left open.
  13. mistermack, expanding on e's answer, for more explanation of how observations are used to improve climate models, see the RealClimate "FAQ on Climate Models," in particular the questions "What is the difference between a physics-based model and a statistical model," "Are climate models just a fit to the trend in the global temperature data," and "What is tuning?"
  14. #263 e at 05:23 AM on 27 October, 2010 Since the model is built on physical laws and not on direct statistics, there is no reason to assume that a particular model could ever recreate past climate behavior, unless that model has some basis in reality The situation is not so nice as you paint it. Sub-grid processes, aerosols and the like are always parametrized in models, that is, these are not derived from first principles, but are chosen to reproduce the past. And as von Neumann said "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." Computational climate models tend to agree on past trends but diverge considerably in their predictions. It is a sure sign they do use the leeway provided by the parametrization process and use it disparately.
  15. BP, parameterization is not as freewheeling as you imply. See the RealClimate FAQ section "What is tuning?" and even the Wikipedia entry on parameterization (climate). Also see the list of parameters at climateprediction.net.
  16. #266: "Sub-grid processes, aerosols and the like are always parametrized in models, ... chosen to reproduce the past" Do you have a better procedure in mind? On the same page, von Neumann also said "I think that it is a relatively good approximation to truth — which is much too complicated to allow anything but approximations — that mathematical ideas originate in empirics."
  17. mistermack and BP, there is an example of parameterization at Science of Doom's page CO2 - An Insignificant Trace Gas? Part Four.
  18. Or not...from the same paper "Another solution is to bring trained computer scientists into research groups, either permanently or as part of temporary alliances. Software developer Nick Barnes has set up the Climate Code Foundation, based in Sheffield, UK, to help climate researchers. He was motivated by problems with NASA's Surface Temperature Analysis software, which was released to the public in 2007. Critics complained that the program, written in the scientific programming language Fortran, would not work on their machines and they could therefore not trust what it said about global warming. In consultation with NASA researchers, Barnes rewrote the code in a newer, more transparent programming language —Python — reducing its length and making it easier for people who aren't software experts to understand how it functions. "Because of the immense public interest and the important policy issues at stake, it was worth taking the time to do that," says Barnes. His new code shows the same general warming trend as the original program." Seriously, poptech, how could consistent trends among dozens of climate models and several global temperature averaging algorithms, each coded by separate groups, result from random coding errors? You are applying several general statements in the article to climate modelers specifically when 1) none of the most damning examples provided by that article relate to climate modeling and 2) you haven't even bothered to find out what procedures and cross checks climate modellers have in place. This is the definition of quote mining.
  19. Moderator, I hope that you allow this comment, b/c Poptech's latest post is an especially egregious example of poor form by a "skeptic". Poptech, This is what you should have said, which might have been somewhat closer to the truth, albeit still highly misleading: "Nature Admits Climate Scientists are Computer Illiterate" My retort would be (as exemplified by your post and as noted by Stephen @270 above): "Climate "skeptics" illiterate on the science and fact checking" Anyhow, 1) Nature did not admit that "climate scientists are computer illiterate" as you would so dearly love to believe. The title you elected to use is clearly your spin of the article's content. 2) The example from the University of Edinburgh that Wilson discusses (and which you bolded) does not seem to apply to climate scientists, but scientists in general. Yet, you oddly chose to conclude that he was referring to all climate scientists. Also, from the very same Nature article: "Science administrators also need to value programming skills more highly, says David Gavaghan, a computational biologist at the University of Oxford, UK." "The mangled coding of these monsters can sometimes make it difficult to check for errors. One example is a piece of code written to analyse the products of high-energy collisions at the Large Hadron Collider particle accelerator at CERN, Europe's particle-physics laboratory near Geneva, Switzerland." So using your (flawed) logic, are all CERN scientists, and by extension all physicists, computer illiterate Poptech? No, of course not. "Aaron Darling, a computational biologist at the University of California, Davis, unwittingly caused such a mistake with his own computer code for comparing genomes to reconstruct evolutionary relationships." So using your (flawed) logic are all computational biologists, and by extension all biologists, computer illiterate Poptech? No, of course not. "The CRU e-mail affair was a warning to scientists to get their houses in order, he says. "To all scientists out there, ask yourselves what you would do if, tomorrow, some Republican senator trains the spotlight on you and decides to turn you into a political football. Could your code stand up to attack?" So your post at #270 was seriously flawed, and a perfect example of confirmation bias and cherry picking which is often used by "skeptics" to distort and misinform. You would have been better off citing the example of Harry at CRU that they discussed...oh, but that has already been done months ago, and is not sufficient evidence to make sweeping generalizations. Please, do not insult us here with your misinformation.
    Response: Albatross, obviously I (Daniel Bailey) don't speak for John, but I see nothing wrong with it. As a teaching tool, it may also have more value if you cross-post it over on the "The value of coherence in science" thread. Thanks!
  20. #270: "As a result, codes may be riddled with tiny errors that do not cause the program to break down, but may drastically change the scientific results that it spits out." There's a crust of value in this observation, as it must apply equally to both sides. Hence, analysis of surface temperature measurements, all of the 'no, its not' repeated ad nauseum here, etc. must be subject to the same risk of error. If you can't trust a climate scientist, why should you trust a climate skeptic?
  21. Poptech, Oh now it is "natural scientists"...need I remind you of your original headline? Anyhow, you demonstrated above that you not only cherry picked your cut-and-pastes, but you distorted the content of the article to suit your means. Yes, that is poor form. EOS. Do all scientists (with the obvious exception of computer science) need to hone and improve their computer skills? Yes, I can certainly agree with that.
  22. A nice demonstration of how models work is given here, at Steve Easterbrook's site - it is of French origin and you can access the original video through that site also.
  23. "Seriously, why is the code and results different for each climate model if the science is "settled"?" You have that backwards. The fact that the coding differs and the same results are acheived is actually a good thing -- an indication that the science is settled with respect to effects of CO2 on climate. You know as well as I that the code would differ even if the models worked at the same spatial resolution, represented ocean-atmosphere coupling in the same way and had identical degrees of detail the terrestrial carbon cycle (not to mention other things). Different software languages are used, there are different limitations on computing resources, and scientists have to make innumerable little decisions about how to handle input data. The fact that all the models can only produce the increase in temp over the last century only if the effect of CO2 is included indicates that those coding decisions have no bearing on the issue of whether anthropogenic CO2 has an effect on climate. I also agree with the sentiment of the Nature article - scientists (and biologists in particular) need more expert training in coding. But, I think climate scientists are probably the ones on the cusp of the effort to code better and more openly.
  24. Found several articles lately about plant growth providing negative feedback against CO2 increases, including this recent NASA model update. I've also read a paper saying that rising temperatures will also make more nitrogen available for plant growth. These new models seem to make increased temperatures from a doubling of CO2 much more modest than previously claimed. Do these new models also include the negative feedback from the ocean equivalents of green plants, ie - photosynthetic creatures like algae, diatoms, coral, etc? The huge biomass in the oceans would seem to be more important than terrestrial plants? The Science Daily report on the NASA study http://www.sciencedaily.com/releases/2010/12/1012080 Here is the abstract from the paper http://europa.agu.org/?view=article&uri=/journals/gl/gl1023/2010GL045338/2010GL045338.xml&t=gl,bounoua "Several climate models indicate that in a 2 × CO2 environment, temperature and precipitation would increase and runoff would increase faster than precipitation. These models, however, did not allow the vegetation to increase its leaf density as a response to the physiological effects of increased CO2 and consequent changes in climate. Other assessments included these interactions but did not account for the vegetation down‐regulation to reduce plant's photosynthetic activity and as such resulted in a weak vegetation negative response. When we combine these interactions in climate simulations with 2 × CO2, the associated increase in precipitation contributes primarily to increase evapotranspiration rather than surface runoff, consistent with observations, and results in an additional cooling effect not fully accounted for in previous simulations with elevated CO2. By accelerating the water cycle, this feedback slows but does not alleviate the projected warming, reducing the land surface warming by 0.6°C. Compared to previous studies, these results imply that long term negative feedback from CO2‐induced increases in vegetation density could reduce temperature following a stabilization of CO2 concentration." Chris Shaker
  25. Older research on the topic, and it appears, more controversy http://www.sciencedaily.com/releases/2007/12/071211233441.htm "Interestingly, warming temperatures in response to rising carbon dioxide levels could make more nitrogen available, said Xiaojuan Yang, a doctoral student in Jain’s lab. This factor must also be weighed in any calculation of net carbon dioxide load, she said. “Previous modeling studies show that due to warming, the soil releases more carbon dioxide through increased decomposition,” she said. “But they are not considering the nitrogen effect. When the soil is releasing more CO2, at the same time more nitrogen is mineralized. This means that more nitrogen becomes available for plants to use.”" Chris Shaker
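
To make concrete the distinction e draws above (#263) between statistical fits and physics-based models, here is a toy Python sketch. All numbers are invented for illustration; the logarithmic forcing formula 5.35 ln(C/C0) is a standard approximation, and the sensitivity value is a single toy tuned parameter, echoing the parameterization discussion at #266.

    import math

    # Toy CO2 concentrations (ppm) and warming relative to pre-industrial (K)
    co2  = [290, 310, 330, 360, 390, 415]
    temp = [0.0, 0.15, 0.35, 0.60, 0.85, 1.05]

    # Statistical approach: fit a straight line through the data points,
    # with no physical content; coefficients come from the data alone.
    n = len(co2)
    mx, my = sum(co2) / n, sum(temp) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(co2, temp)) / \
            sum((x - mx) ** 2 for x in co2)

    # Physics-based approach: radiative forcing grows with the logarithm of
    # CO2 (a standard approximation); the sensitivity is the one tuned value.
    SENSITIVITY = 0.55  # K per (W m^-2), toy value chosen to match the record

    def physical_warming(c, c0=290.0):
        forcing = 5.35 * math.log(c / c0)  # W m^-2
        return SENSITIVITY * forcing

    # Both versions reproduce the past, but only the physical one carries a
    # mechanism constraining how it extrapolates to unseen CO2 levels:
    print(slope * (560 - mx) + my)  # statistical extrapolation to 560 ppm
    print(physical_warming(560))    # physics-based estimate at 560 ppm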
