





How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating all the equations that describe the interactions between all the components of our climate system. Climate modelling is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.
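To make the idea concrete, here is a toy zero-dimensional energy-balance model in Python – the classic "few equations on a spreadsheet" example, and emphatically not how a real GCM works. The emissivity value is a crude stand-in for the greenhouse effect, chosen only so the toy lands near the observed global-mean temperature:

```python
# Toy zero-dimensional energy-balance model (illustrative only; real GCMs
# solve millions of coupled equations across a 3-D grid).
# Equilibrium condition: (1 - albedo) * S / 4 = emissivity * sigma * T^4
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo
EPSILON = 0.612    # effective emissivity; crude stand-in for the greenhouse effect

def equilibrium_temp(albedo=ALBEDO, emissivity=EPSILON, solar=S):
    """Global-mean surface temperature (K) at which absorbed and emitted energy balance."""
    absorbed = (1 - albedo) * solar / 4.0   # W m^-2, averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temp(), 1))  # roughly 288 K, i.e. about 15 degrees C
```

Even this one-equation toy captures the key physical idea – absorbed sunlight must balance emitted infrared – that full climate models resolve in vastly more detail.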

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather forecast models rarely extend beyond even a fortnight ahead – a big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past: we know what happened back then, either from direct observations or from evidence preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test, and we can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming, and multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
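The "smoothing out" of rare extremes is easy to demonstrate. This short Python sketch (entirely synthetic data, invented numbers) fits a linear trend to 30 years of noisy temperature anomalies, then adds one freak hot year and refits; the trend barely moves:

```python
import random

random.seed(42)  # reproducible synthetic record

# 30 years of invented anomalies: 0.02 degrees C/yr warming plus weather noise.
years = list(range(30))
anoms = [0.02 * y + random.gauss(0, 0.1) for y in years]

def linear_trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

base = linear_trend(years, anoms)
spiked = anoms[:]
spiked[15] += 0.5  # one freak hot year, half a degree above its neighbours
spike = linear_trend(years, spiked)
print(f"trend without spike: {base:.4f}, with spike: {spike:.4f} (degrees C/yr)")
```

The single extreme year shifts the fitted slope by well under a hundredth of a degree per year, which is exactly why climatologists look at multi-decade trends rather than individual record years.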

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record showed that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate to explain temperature variations prior to the last thirty years, while none of them can explain the rise over those thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
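The hindcasting logic can be sketched in a few lines. This is a hypothetical illustration with fabricated numbers – not real climate data or any actual model: run a model over known past forcings, compare its output to what was observed, and accept it only if the error stays within a stated tolerance:

```python
# Hypothetical sketch of hindcast testing (all numbers fabricated for
# illustration; no real climate data or model is used here).
def hindcast_passes(model, forcings, observed, tolerance):
    """Run the model over past forcings; pass if RMSE vs observations is small enough."""
    simulated = [model(f) for f in forcings]
    rmse = (sum((s - o) ** 2 for s, o in zip(simulated, observed))
            / len(observed)) ** 0.5
    return rmse <= tolerance

# Toy "model": temperature anomaly responds linearly to a forcing index.
toy_model = lambda forcing: 0.5 * forcing
past_forcings = [0.0, 0.4, 0.8, 1.2, 1.6]       # invented forcing history
past_observed = [0.02, 0.18, 0.41, 0.63, 0.79]  # invented "observations"

print(hindcast_passes(toy_model, past_forcings, past_observed, tolerance=0.05))  # True
```

Real hindcasts compare full spatial fields of temperature, precipitation and more against the instrumental and geological record, but the accept/reject logic is the same: skill at reproducing the known past is a precondition for trusting a projection of the future.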

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that the huge fossil-fuel corporation Exxon's own scientists knew all about climate change, all along. A recent study of the company's own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We covered this important study in a blog post around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."


Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over the last almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The ever-changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges for climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

A concise fact brief version of this rebuttal was created in collaboration with Gigafact.


Comments


Comments 276 to 300 out of 446:

  1. It appears that not quite half of the photosynthesis occurring on the earth is in the oceans. Search for 'Global' http://en.wikipedia.org/wiki/Primary_production "Using satellite-derived estimates of the Normalized Difference Vegetation Index (NDVI) for terrestrial habitats and sea-surface chlorophyll for the oceans, it is estimated that the total (photoautotrophic) primary production for the Earth was 104.9 Gt C yr−1.[12] Of this, 56.4 Gt C yr−1 (53.8%), was the product of terrestrial organisms, while the remaining 48.5 Gt C yr−1, was accounted for by oceanic production." So, I hope to read that these negative CO2 feedback effects are also being modeled for algae as well. Chris Shaker
  2. Chris, first note that models are scenarios based on the amount of GHG in the atmosphere, i.e. if CO2 is 450 ppm, then climate looks like x. Now if, against all paleo ice-core data, we had a situation whereby a warming world pulled more CO2 from the atmosphere than a cooler one, the result would be that the emissions scenarios would need to be revised, i.e. for a given rate of fossil fuel consumption, the rate of accumulation of CO2 in the atmosphere would reduce. However, I am unaware of any modern or paleo data to support the idea that increased temperature would decrease CO2 - in fact all of the evidence I have seen to date shows the opposite.
  3. The "reliability" of climate models has never been backed up by satellite or weather balloon data. Even the UN says they are unreliable. I know this is a rather simple line of reasoning, but I'd rather not use abstract examples. Numbers can be manipulated to show almost anything, so unless the two basic forms of temperature measurement are wrong, the "unreliable" label will be firmly (and rightfully) attached to the "what if" models.
    Response: [Daniel Bailey] Incorrect on all counts. If you feel differently, please provide links to supporting sources.
  4. I see a lot of parallels between the modelling discussed in this article and the technical analysis used by some stock traders. Of course there are many differences, but at the heart of it, both types of modelers are trying to use intelligent analysis of past data to predict/project the future. I highly recommend the book, A Random Walk Down Wall Street (http://en.wikipedia.org/wiki/A_Random_Walk_Down_Wall_Street) for a very approachable explanation of some of the techniques that modelers use. The author's arguments that it is impossible to "beat" the stock market through this analysis isn't relevant to weather modeling. However, I think he does a good job of addressing the question of identifying success based on past data.
  5. This may be a naïve criticism - but how do you avoid the problem of circularity in using hind-casting to establish the accuracy of climate models? The assumptions for the models can only be based on observations of what has happened in the past, so to create a model based on these assumptions means that it is inevitable that it will accurately predict what has happened in the past. The more established patterns are encoded into the model, the more accurately it will predict the past. It would be ironic if some of the most powerful computers in the world were generating tautologies. This is not a problem for climate science alone. It is a problem for any time-based models. I have been involved in environmental predictions based on using multivariate regression analysis of GIS data correlated with soil types. This falls into the same problem, but it can be amended by later sampling of soils at predicted locations, and correlating the observed soil type with the predicted soil type and running a t-test to establish the reliability of the prediction. It would be useless to sample the same site that the model was based on. The only way that the same calibration could be carried out in time-based climate models is by comparing forecasts with what happens in the future and not the past. Are the models therefore proper science? Without reference to the future, they are unfalsifiable. The absence of controls is another issue. I understand the practical problem of testing the accuracy of long-term models in this way. The damage may have been done before the data is in. Can you post a link to papers which articulate the assumptions behind these models?
  6. Jue1234 - You might find Hansen's 1988 prediction page a useful answer. His predictions are holding fairly well through now. His initial climate sensitivity number was a bit too high - but adjusting for the better sensitivity estimates and actual emissions shows the model (simple as it was) still holds up. Models are not just based on hindcasting - that's a required check, but the assumptions going into the models are based on physics, not just mathematical modeling of previous behavior.
  7. Jue1234, see in the RealClimate post "FAQ on Climate Models," the "Questions" section, "What is the difference between a physics-based model and a statistical model?", "Are climate models just a fit to the trend in global temperature data?", and "What is tuning?" A relevant quote from those: "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time." Part II of that post then provides more details on parameterizations, including specifics on clouds.
  8. Thanks KR and Tom, I'll look at your links
  9. http://scholar.google.com/scholar?hl=en&q=climate+models+unreliable&as_sdt=0%2C24&as_ylo=2009&as_vis=0
    Response: [DB] Please, no link only.
  10. ( -Snip- )
    Response: [DB] Please, no link only. Future comments containing links without some description of why you are posting it and why you think it's relevant to the discussion on this thread will be deleted. Thanks!
  11. The title of the link describes some of the issues with models being unreliable in peer review, like this one: http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=84E9E44A-802A-23AD-493A-B35D0842FED8 The reason I am posting this link is to show there is considerable controversy and evidence of GCMs not being reliable or valid from 1900 to 2011.
  12. Chemist1 - all that link shows is what wallowing in a sewer will get you. Firstly, the paper does not support the claims the tin-hats make for it. Secondly, it was also wrong and was revised by Schwartz himself in 2008. It would help to link to the science paper instead of breathless political posturing. Controversy is one thing, but controversy supported by data and published in peer-reviewed science is another.
  13. Chemist1: You've linked to a 3-year-old Marc Morano paper written for the senator from the state of Oklahoma Petrodollars. This isn't 'controversy,' it's junk. For example: SURVEY: LESS THAN HALF OF ALL PUBLISHED SCIENTISTS ENDORSE GLOBAL WARMING THEORY - Excerpt: "Of 539 total papers on climate change, only 38 (7%) gave an explicit endorsement of the consensus." Even for 2007, that was blatantly false.
  14. Continuing from comment here: "What is an issue is the projected warming at NASA." and here: "comparing results of global climate models to records for individual stations is missing the point, and shows a fundamental misunderstanding of what those models tells us." The paper in question compares GCM output to data at selected stations using point-by-point, year-by-year comparison of observations to computed model output. The authors note the following regarding their prior work: criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism has been received by two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results. It would seem that if the methodology is challenged, the results are implicitly challenged. The comparison of yearly data to model output is clearly one of apples and oranges: It is little more than comparison of weather to climate. In addition, there's a deeper flaw in the methods used to process individual station data: In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence This was an issue in the early days of computer-aided mapping. Rather than spatially average a local cluster of stations first, this process allows any error or anomaly at a single station to propagate into the database. Why not filter for a noisy (or bad) datapoint before gridding? So it's not at all clear the original criticisms were answered by the newer paper.
  15. I thought it was well-known that GCMs only give reliable projections on global or regional scales, which is why regional climate models or statistical downscaling are used to get projections of local climate. I don't recall ever hearing a modeller claim that the models were accurate at station level resolution.
  16. 293 muoncounter, I noticed also out of the 26 references cited in the paper that the authors cited themselves 10 times and NASA was only mentioned once in the entire paper. I also find it rather less than scholarly to have a blog cited as one of the references, albeit Gavin Schmidt at realclimate.org., of which you have already described.
  17. As I posted in the other thread: I think the selection of sites in that paper is suspect. In Australia (the only one I commented on), 3/4 of the sites selected for comparison with GCMs are in the rather arid central part of Australia, an area that naturally gets rather extreme weather (either very hot & very dry, or merely quite hot & very wet). To compare data from such stations with a regionally-averaged GCM seems disingenuous, to say the least. You don't have to go all that far from those sites to get others with completely different weather conditions. (It'd be like picking three weather stations in the Namib & Kalahari Deserts, and saying "hey, these measurements don't agree with climate model predictions for southern Africa!" - or, for north american folks, like picking a few stations in Nevada, Arizona, and New Mexico and comparing the results to predictions for all of North America.)
  18. Here are a few more links straight from peer review regarding strengths, weaknesses and areas where serious improvement is needed: http://iopscience.iop.org/1748-9326/6/1/014008: " Abstract This study assesses the accuracy of state-of-the-art regional climate models for agriculture applications in West Africa. A set of nine regional configurations with eight regional models from the ENSEMBLES project is evaluated. Although they are all based on similar large-scale conditions, the performances of regional models in reproducing the most crucial variables for crop production are extremely variable. This therefore leads to a large dispersion in crop yield prediction when using regional models in a climate/crop modelling system. This dispersion comes from the different physics in each regional model and also the choice of parametrizations for a single regional model. Indeed, two configurations of the same regional model are sometimes more distinct than two different regional models. Promising results are obtained when applying a bias correction technique to climate model outputs. Simulated yields with bias corrected climate variables show much more realistic means and standard deviations. However, such a bias correction technique is not able to improve the reproduction of the year-to-year variations of simulated yields. This study confirms the importance of the multi-model approach for quantifying uncertainties for impact studies and also stresses the benefits of combining both regional and statistical downscaling techniques. Finally, it indicates the urgent need to address the main uncertainties in atmospheric processes controlling the monsoon system and to contribute to the evaluation and improvement of climate and weather forecasting models in that respect." 
http://onlinelibrary.wiley.com/doi/10.1002/wcc.60/full: "Abstract Computer models are powerful tools that allow us to analyze problems in unprecedented detail and to conduct experiments impossible with the real system. Reliance on computer models in science and policy decisions has been challenged by philosophers of science on methodological and epistemological grounds. This challenge is examined for the case of climate models by reviewing what they are and what climate scientists do with them, followed by an analysis of how they can be used to construct new trustworthy knowledge. A climate model is an executable computer code that solves a set of mathematical equations assumed to represent the climate system. Climate modelers use these models to simulate present and past climates and forecast likely and plausible future evolutions. Model uncertainties and model calibration are identified as the two major concerns. Climate models of different complexity address different questions. Their interplay helps to weed out model errors, identify robust features, understand the climate system, and build confidence in the models, but is no guard against flaws in the underlying physics. Copyright © 2010 John Wiley & Sons, Ltd." These last two links, with abstracts included, are not skeptical arguments against using GCMs or against climate change. Yet they highlight regional issues, weather events, patterns, microclimate, and the difficulty of making projections and finding climate trends based upon these. Not that these papers conclude nothing can be understood better or analyzed, but that models still contain lots of unreliability, which is the topic of this thread.
  19. Chemist1@300 Uncertainty is not the same thing as unreliability. Unreliability implies that the models have errors that lie outside the stated uncertainty of the projections. GEP Box said "all models are wrong, but some are useful". You have not established that the models are unreliable, nor have you demonstrated that the stated uncertainty of the projections is so high that they are not useful. It is well known that GCMs don't work well at smaller spatial scales, but that doesn't mean they are not accurate in projections of global climate variables.
    Response: (Daniel Bailey) FYI: Chemist1's long link-fest originally at 297 contained an extensive copy-paste with multiple allegations of impropriety; it and 3 subsequent responses were then deleted. That is why the numbering sequence on comments is off right now.
  20. Okay, so here is my edited kick-off on why the models are unreliable and why the IPCC report's over-reliance upon them, drawn from peer-reviewed papers, is a very poor decision. First link: http://www.nature.com/climate/2010/1003/full/climate.2010.19.html First quote: Initial studies of how the rivers will respond to ice loss show modest changes in stream flow — far from the IPCC report's dire scenario of rivers running dry. Even if the glaciers were lost completely, flows down the Indus would drop about 15 per cent overall, with little or no change in the dry-season flow, one recent study found9. Lall cautions, however, that climate models are poor at simulating rain and snowfall, especially for the Asian monsoons. “I wouldn't hold these models to be very accurate,” he says. In the absence of clear predictions of what's to come, close monitoring of changes in the mountains is all the more important, as rising temperatures will probably affect the whole water cycle, says Eriksson of ICIMOD. “There has been too much focus on the glaciers as such,” he says. “It's urgent to understand the whole [impact] of climate change on snow, ice and rainfall, and that is not happening.” Second quote, from a few sentences prior to the above: Cogley. Upmanu Lall, director of the Columbia Water Center at Columbia University in New York, agrees. Lall says the idea that the rivers could run dry because of shrinking glaciers seems to stem from a confusion about how much glaciers contribute to river flows, compared with the contribution from melting of the seasonal snowpack." Link 2 I am introducing because you can download it and place it into Google Earth and look at data and so-called trends yourselves as individuals. It also illustrates how short any decent temperature record keeping has been around and how few stations there have been. The download is free. http://www.climateapplications.com/kmlfiles.asp The above link can assist everyone in analysis
    Response: [Daniel Bailey] Please note: You are responsible for the content of your copy/pasting from other sites when you then post it here. Your comment previously appearing as number 297 contained multiple allegations of impropriety. Repeated violations of the Comments Policy will also be deleted and could subject you to further, more rigorous moderation. Be aware.
  21. Dikran the range of uncertainty is so wide that it makes long term projections or predictions unreliable. Short term projections are so-so, but not great, and some models are almost completely unreliable, which I will get into at a later date.
  22. Chemist1, Concerning your first link please note that it is a news article in Nature, not a peer reviewed study. Secondly, the error about the Himalayan glaciers in the IPCC report has long since corrected "by the IPCC". However, the Himalayan glaciers are receding, just not as fast as initially reported. There are many discussions concerning the Himalayan glaciers on this site. Please review Return to the Himalayas.
  23. "Dikran the range of uncertainty is so wide that it makes long term projections or predictions unreliable." That would be unsupported opinion, at odds with the models' reliability to date. What's your idea of "short term"? At decadal level, models are hopeless. At longer scales they do exceedingly well. Cutting to the chase - how much accurate prediction by models can you take before you change your mind?
  24. Chemist1, Your second linked abstract "Computer models are powerful tools ... " provides no actual scientific criticism of models. Their interplay helps to weed out model errors, identify robust features, understand the climate system, and build confidence in the models, but is no guard against flaws in the underlying physics. This is merely a caveat; the author suggests that models could have 'flawed' physics. Do you automatically assume that because an author says something could happen, it necessarily does? A far more robust criticism would be 'here is a model study with flawed physics.' I assume you did not use this because you have no such example at hand?
  25. Chemist1 It would help if you actually read posts before responding to them. Here I explained that a model is only unreliable if the observations lie outside the stated uncertainty of the prediction, and you have ignored this point and are back here again saying the models are unreliable because the error bars are broad. If a projection has broad error bars, so broad that they swamp the variation in the ensemble mean, it means the models are telling you they don't know what is going to happen. If you ask someone if the planet is going to warm and they say "I think it may warm, but I don't really know, it could warm by 4 degrees, but it could cool by 1 degree, but the most likely result is that it will warm by 2 degrees", then if the planet cools by 0.5 degrees, their prediction wasn't unreliable, it wasn't even wrong, because they told you what happened was within the stated confidence of the projection. Large uncertainty does not imply unreliability. In fact it means that models are more likely to be reliable as the model projections cover a wider range of possibilities. If you ignore the stated uncertainty of the projections, that is your error, not a shortcoming of the models. As it happens, the error bars on projections for large time and spatial scales are not broad compared to the variation in the ensemble mean. See e.g. IPCC report chapter on attribution. If you don't take on board points that are made, or at least counter them, don't be surprised if contributors stop responding to your posts.
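Dikran's criterion above – a projection only fails if the observation falls outside its stated uncertainty – can be put into a few lines of Python. The numbers here are invented purely for illustration, mirroring the hypothetical "warm by 2, anywhere between cooling 1 and warming 4" projection (treated as a symmetric ±3 range around the central estimate):

```python
# Sketch of the reliability criterion: a projection only fails if the
# observation falls outside its stated uncertainty. Numbers are invented.
def within_projection(observed, central, uncertainty):
    """True if the observed value lies inside the stated interval."""
    return central - uncertainty <= observed <= central + uncertainty

# Stated projection: most likely +2 degrees, with an uncertainty of +/-3.
print(within_projection(observed=-0.5, central=2.0, uncertainty=3.0))  # True: within the stated range
print(within_projection(observed=-1.5, central=2.0, uncertainty=3.0))  # False: outside it - a failed projection
```

Broad error bars therefore make a projection less informative, not less reliable; reliability is about whether reality stays inside the stated range.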



© Copyright 2024 John Cook