How reliable are climate models?

What the science says...

Select a level... Basic Intermediate

Models successfully reproduce observed temperatures since 1900: globally, over land, in the air and in the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
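The idea of smoothing out rare extreme events by averaging over 30 years can be sketched in a few lines of Python. This is a toy illustration on synthetic numbers (an invented trend and noise level), not real temperature data:

```python
import random

random.seed(0)
years = list(range(1900, 2021))
# Synthetic record: a slow trend plus large random year-to-year "weather".
temps = [0.01 * (y - 1900) + random.gauss(0, 0.3) for y in years]

def running_mean(values, window=30):
    """Average each full 30-value window; one output per window."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

trend = running_mean(temps)
# The smoothed series changes far less from point to point than the raw
# annual values do, while the long-term trend survives the averaging.
```

Individual years can swing sharply up or down, but the 30-year averages change slowly and still show the underlying warming, which is the sense in which climate is "weather, averaged out over time".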

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings can adequately explain temperature variations prior to the last thirty years, but none of them can explain the rise of the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
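The attribution logic in this paragraph — a hindcast fails when the CO2 forcing is left out — can be illustrated with a deliberately crude toy calculation. The forcing series, the 0.5 K per W/m² sensitivity, and the noise level below are all invented for the sketch and are not taken from any real model or dataset:

```python
import random

random.seed(42)
years = list(range(1900, 2001))
# Invented forcing series (W/m^2): a small, flat solar term and an
# accelerating CO2 term, loosely shaped like the 20th century.
solar = [0.1 for _ in years]
co2 = [1.5 * ((y - 1900) / 100.0) ** 2 for y in years]
sensitivity = 0.5  # K per W/m^2, an assumed toy value

# Synthetic "observations": the response to both forcings plus weather noise.
observed = [sensitivity * (s + c) + random.gauss(0, 0.05)
            for s, c in zip(solar, co2)]

def hindcast(include_co2):
    """Toy model run: temperature response with or without the CO2 forcing."""
    return [sensitivity * (s + (c if include_co2 else 0.0))
            for s, c in zip(solar, co2)]

def rmse(model):
    return (sum((m - o) ** 2 for m, o in zip(model, observed))
            / len(observed)) ** 0.5

# With CO2 excluded, the hindcast cannot reproduce the late-century rise;
# its error is several times larger than the run that includes CO2.
```

The point is not that this toy proves anything about the real climate, only that it shows the shape of the test: a run without the CO2 term leaves the late-century warming unexplained.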

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Mainstream climate models have also accurately projected global surface temperature changes.  Climate contrarians have not.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

A 2019 study led by Zeke Hausfather evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

There's one chart often used to argue to the contrary, but it's got some serious problems, and ignores most of the data.

[Image: the "Christy chart"]

Basic rebuttal written by GPWayne

Update July 2015:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Last updated on 9 September 2019 by pattimer.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The ever-changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.


On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.



Comments 251 to 300 out of 1297:

  1. @mistermack: false analogy. If you don't put your money on the horse (i.e. you don't trust the models), you are unaffected whether it wins or loses. Your life goes on as normal. If you don't "put money" on AGW (i.e. disbelieve the experts) and it turns out to be true - as the body of science strongly suggests - then you'll be affected. A better analogy would be if someone kidnapped a loved one, told you to pick the next Kentucky Derby winner, and warned you they'll kill the hostage if you pick the loser. Which horse would you pick then? The favorite (i.e. the one the experts say has a better chance of winning than the others according to odds calculations), or a long shot that experts say is unlikely to win?
  2. mistermack, are you suggesting that the hypothetical model for the Kentucky Derby would be based on the laws of physics ?
  3. Jmurphy, do you really believe that the models are just constructed from the laws of physics? Someone sat down with a physics textbook and developed the current models? Without peeking at the previous data even once? "based on" is a meaningless phrase here.
    Response: Look in the green box at the bottom of this Argument--the box labeled "Further Reading." Click those links. You will learn how physics is used in the models, and how and to what degree observations are used.
  4. mistermack, as well as looking at the Moderator's comment, you can investigate models further in WIKIPEDIA (Climate Models, Global Climate Models), NASA (The Physics of Climate Modeling), and THE DISCOVERY OF GLOBAL WARMING (Simple Models of Climate). How could the first models have peeked at the previous data ? You're right about one thing, though, they aren't just constructed using the laws of physics : there's a lot of maths in there too, as well as some chemistry.
  5. Oops, sorry - half my links were already in the green box. Sorry !
  6. #253: "told you to pick the next Kentucky Derby winner" That would be akin to predicting the next hurricane's landfall location from a study of prior landfalls. A more reasonably-posed analogy might be to conclude from a study of prior races that there are factors that categorize the field of entrants into 'likely' winners and 'likely' losers (forgive my alarmist language). For example (from wikipedia): "No horse since Apollo in 1882 has won the Derby without racing at age two." That would make a 'very unlikely' outcome. So from a study of climate, it is perhaps unreasonable to predict a specific heat wave, but not at all unreasonable to build a model that asks "would these [extreme weather] events have occurred if atmospheric carbon dioxide had remained at its pre-industrial level of 280 ppm?", to which an appropriate answer is "almost certainly not." But what is all this about models not having a peek at prior data? On another thread, there have been comments to the effect that models and evidence somehow pollute each other; that makes no sense to me. What use is a model if it is not built on prior data and tested by subsequent data?
  7. Muoncounter, my point isn't that we shouldn't have models, or that you shouldn't seek to improve them by using previous data. I take all that as obvious. My problem is that when I look on this site, or anywhere, for good evidence that manmade CO2 is going to cause significant harm, the only evidence of any significance is that the models match the data, or the data matches the models. Since the models are developed to match the data, what do they expect? Don't quote it as evidence, that's all I'm saying.
    Response: The models are not developed merely to match "the data" in the pejorative sense you are using the term "the data." Please actually read the material that I and others have pointed you to, for explanations of exactly how observations are used in model construction. Your mere repetition of your contentions is not contributing to the discussion.
  8. To answer the point about physics and chemistry, it's stretching it rather a lot to say that since the models involve (or are based on) physics and chemistry, they must be right. Bridges fall, buildings collapse. Shuttles explode. Their design is always based on maths, physics and chemistry. They can get it wrong. But we have long experience of successful building. We have zilch of successful climate forecasting. So I think I'm right to be sceptical of the models' ability to get it right at this stage.
    Response: You are incorrect that we have "zilch" experience in successful climate forecasting. You really should actually read the posts. Be sure to click the "Intermediate" tab.
  9. mistermack, so we have a phenomenon, we build a theory (a model) and compare it to the observed phenomenon. If they agree, I throw the model away because it is trivial; if they do not, I throw it away anyway. I'm puzzled. More seriously, the first model dates back to 1896. Not enough subsequent data to test it? Yes, of course. It has since been refined and tested again, and so on for many decades. Apparently you're seeing just the last generation of models, as if they came out of nowhere.
  10. @mistermack: you have yet to demonstrate exactly how models are unreliable. You should provide evidence that supports your allegations, otherwise it's hard to take them seriously.
  11. mistermack, to add to the fact that models have been successful in forecasting: predicting the past is still a prediction (see retrodiction). If you build a simulation of a physical system, it is appropriate to test that simulation by comparing it to past performance of the real system. This is true of any physical model, including models of bridges and space shuttles. The fact that you are comparing to past data does not mean that the simulation has past data "programmed in", as you are implying. What you are thinking of is a statistical model, where the inputs are directly mapped to outputs via a mathematical relationship derived directly from historical data. This is not how physical climate models are derived. Since the model is built on physical laws and not on direct statistics, there is no reason to assume that a particular model could ever recreate past climate behavior unless that model has some basis in reality. If the basic physics underlying the model are significantly off, then no amount of tweaking would ever result in an accurate recreation of past performance. The fact that it can recreate past performance is therefore evidence that the model is correct, since the likelihood is very slim that the model would be able to accurately recreate real performance if it were significantly wrong in its representation of the physics.
  12. #260:"Bridges fall, buildings collapse. Shuttles explode. ... They can get it wrong. " You're forgetting a significant cause of such unpleasant events: Google search 'operator error accidents'. Such is not the case in a climate model, where there is no one to push the wrong button, run past a red signal or close a valve that should be left open.
  13. mistermack, expanding on e's answer, for more explanation of how observations are used to improve climate models, see the RealClimate "FAQ on Climate Models," in particular the questions "What is the difference between a physics-based model and a statistical model," "Are climate models just a fit to the trend in the global temperature data," and "What is tuning?"
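The physics-versus-statistics distinction in the comments above can be made concrete with a toy system far simpler than climate: a cooling cup of liquid obeying Newton's law of cooling. Both "models" below match the past record; only the physics-based one extrapolates sensibly. All parameter values are invented for the sketch:

```python
import math

k, ambient, t0 = 0.1, 20.0, 90.0   # invented parameters of the "real system"

def true_temp(t):
    """The real system: exponential relaxation toward ambient temperature."""
    return ambient + (t0 - ambient) * math.exp(-k * t)

train_t = list(range(10))                    # the "historical record"
train_y = [true_temp(t) for t in train_t]

# Statistical model: a straight line least-squares fitted to the record.
n = len(train_t)
tbar = sum(train_t) / n
ybar = sum(train_y) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(train_t, train_y))
         / sum((t - tbar) ** 2 for t in train_t))

def stat_model(t):
    return ybar + slope * (t - tbar)

# Physics-based model: the governing law, with its parameters estimated.
def phys_model(t):
    return ambient + (t0 - ambient) * math.exp(-k * t)

# Both track the training window closely, but far outside it the straight
# line predicts a temperature below ambient, which the physics forbids.
```

Both models "hindcast" the record, yet at t = 60 the statistical fit has gone badly wrong while the physics-based model has not, which is the sense in which matching the past means much more for a physically grounded model.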
  14. #263 e at 05:23 AM on 27 October, 2010: "Since the model is built on physical laws and not on direct statistics, there is no reason to assume that a particular model could ever recreate past climate behavior, unless that model has some basis in reality" The situation is not so nice as you paint it. Sub-grid processes, aerosols and the like are always parametrized in models; that is, these are not derived from first principles, but are chosen to reproduce the past. And as von Neumann said, "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." Computational climate models tend to agree on past trends but diverge considerably in their predictions. It is a sure sign that they use the leeway provided by the parametrization process, and use it disparately.
  15. BP, parameterization is not as freewheeling as you imply. See the RealClimate FAQ section "What is tuning?" and even the Wikipedia entry on parameterization (climate). Also see the list of parameters at
  16. #266: "Sub-grid processes, aerosols and the like are always parametrized in models, ... chosen to reproduce the past" Do you have a better procedure in mind? On the same page, von Neumann also said "I think that it is a relatively good approximation to truth — which is much too complicated to allow anything but approximations — that mathematical ideas originate in empirics."
  17. mistermack and BP, there is an example of parameterization at Science of Doom's page CO2 - An Insignificant Trace Gas? Part Four.
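The "four parameters fit an elephant" worry raised above can be demonstrated in miniature: a 2-parameter and a 10-parameter fit both reproduce the same noisy "past", yet disagree sharply when extrapolated, because the extra parameters were spent fitting noise. This is purely a toy on invented data, not a claim about how GCM parameterization works:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)                     # the observed "past"
record = 0.5 * t + rng.normal(0.0, 0.1, t.size)   # noisy synthetic record

fit2 = np.polyfit(t, record, 1)    # straight line: 2 free parameters
fit10 = np.polyfit(t, record, 9)   # degree-9 polynomial: 10 free parameters

# Hindcast skill: the 10-parameter fit matches the past at least as well,
# since its parameter space contains the straight line as a special case.
err2 = np.mean((np.polyval(fit2, t) - record) ** 2)
err10 = np.mean((np.polyval(fit10, t) - record) ** 2)

# "Projection" at twice the observed range: the two fits diverge, because
# the high-order terms that absorbed noise blow up outside the data window.
proj2 = np.polyval(fit2, 2.0)
proj10 = np.polyval(fit10, 2.0)
```

Agreement on the past is therefore a weak constraint for a purely statistical fit with many knobs, which is exactly why the replies above stress that climate-model parameterizations are constrained by process-level physics rather than tuned freely against the trend.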
  18. Or not...from the same paper "Another solution is to bring trained computer scientists into research groups, either permanently or as part of temporary alliances. Software developer Nick Barnes has set up the Climate Code Foundation, based in Sheffield, UK, to help climate researchers. He was motivated by problems with NASA's Surface Temperature Analysis software, which was released to the public in 2007. Critics complained that the program, written in the scientific programming language Fortran, would not work on their machines and they could therefore not trust what it said about global warming. In consultation with NASA researchers, Barnes rewrote the code in a newer, more transparent programming language —Python — reducing its length and making it easier for people who aren't software experts to understand how it functions. "Because of the immense public interest and the important policy issues at stake, it was worth taking the time to do that," says Barnes. His new code shows the same general warming trend as the original program." Seriously, poptech, how could consistent trends among dozens of climate models and several global temperature averaging algorithms, each coded by separate groups, result from random coding errors? You are applying several general statements in the article to climate modelers specifically when 1) none of the most damning examples provided by that article relate to climate modeling and 2) you haven't even bothered to find out what procedures and cross checks climate modellers have in place. This is the definition of quote mining.
  19. Moderator, I hope that you allow this comment, b/c Poptech's latest post is an especially egregious example of poor form by a "skeptic". Poptech, This is what you should have said, which might have been somewhat closer to the truth, albeit still highly misleading: "Nature Admits Climate Scientists are Computer Illiterate" My retort would be (as exemplified by your post and as noted by Stephen @270 above): "Climate "skeptics" illiterate on the science and fact checking" Anyhow, 1) Nature did not admit that "climate scientists are computer illiterate" as you would so dearly love to believe. The title you elected to use is clearly your spin of the article's content. 2) The example from the University of Edinburgh that Wilson discusses (and which you bolded) does not seem to apply to climate scientists, but scientists in general. Yet, you oddly chose to conclude that he was referring to all climate scientists. Also, From the very same Nature article: "Science administrators also need to value programming skills more highly, says David Gavaghan, a computational biologist at the University of Oxford, UK." "The mangled coding of these monsters can sometimes make it difficult to check for errors. One example is a piece of code written to analyse the products of high-energy collisions at the Large Hadron Collider particle accelerator at CERN, Europe's particle-physics laboratory near Geneva, Switzerland." So using your (flawed) logic, are all CERN scientists, and by extension all physicists, computer illiterate Poptech? No, of course not. "Aaron Darling, a computational biologist at the University of California, Davis, unwittingly caused such a mistake with his own computer code for comparing genomes to reconstruct evolutionary relationships." So using your (flawed) logic are all computational biologists, and by extension all biologists, computer illiterate Poptech? No, of course not. 
"The CRU e-mail affair was a warning to scientists to get their houses in order, he says. "To all scientists out there, ask yourselves what you would do if, tomorrow, some Republican senator trains the spotlight on you and decides to turn you into a political football. Could your code stand up to attack?" So your post at #270 was seriously flawed, and a perfect example of confirmation bias and cherry picking which is often used by "skeptics" to distort and misinform. You would have been better of citing the example of Harry at CRU that they discussed...oh, but that has already been done months ago, and is not sufficient evidence to make sweeping generalizations. Please, do not insult us here with your misinformation.
    Response: Albatross, obviously I (Daniel Bailey) don't speak for John, but I see nothing wrong with it. As a teaching tool, it may also have more value if you cross-post it over on the The value of coherence in science thread. Thanks!
  20. #270: "As a result, codes may be riddled with tiny errors that do not cause the program to break down, but may drastically change the scientific results that it spits out." There's a crust of value in this observation, as it must apply equally to both sides. Hence, analysis of surface temperature measurements, all of the 'no, its not' repeated ad nauseum here, etc. must be subject to the same risk of error. If you can't trust a climate scientist, why should you trust a climate skeptic?
  21. Poptech, Oh, now it is "natural scientists"...need I remind you of your original headline? Anyhow, you demonstrated above that you not only cherry-picked your cut-and-pastes, but distorted the content of the article to suit your purposes. Yes, that is poor form. EOS. Do all scientists (with the obvious exception of computer scientists) need to hone and improve their computer skills? Yes, I can certainly agree with that.
  22. A nice demonstration of how models work is given here, at Steve Easterbrook's site - it is of French origin and you can access the original video through that site also.
  23. "Seriously, why is the code and results different for each climate model if the science is "settled"?" You have that backwards. The fact that the coding differs and the same results are acheived is actually a good thing -- an indication that the science is settled with respect to effects of CO2 on climate. You know as well as I that the code would differ even if the models worked at the same spatial resolution, represented ocean-atmosphere coupling in the same way and had identical degrees of detail the terrestrial carbon cycle (not to mention other things). Different software languages are used, there are different limitations on computing resources, and scientists have to make innumerable little decisions about how to handle input data. The fact that all the models can only produce the increase in temp over the last century only if the effect of CO2 is included indicates that those coding decisions have no bearing on the issue of whether anthropogenic CO2 has an effect on climate. I also agree with the sentiment of the Nature article - scientists (and biologists in particular) need more expert training in coding. But, I think climate scientists are probably the ones on the cusp of the effort to code better and more openly.
  24. Found several articles lately about plant growth providing negative feedback against CO2 increases, including this recent NASA model update. I've also read a paper saying that rising temperatures will also make more nitrogen available for plant growth. These new models seem to make increased temperatures from a doubling of CO2 much more modest than previously claimed. Do these new models also include the negative feedback from the ocean equivalents of green plants, i.e. photosynthetic creatures like algae, diatoms, coral, etc? The huge biomass in the oceans would seem to be more important than terrestrial plants? See the Science Daily report on the NASA study. Here is the abstract from the paper (Bounoua et al.): "Several climate models indicate that in a 2 × CO2 environment, temperature and precipitation would increase and runoff would increase faster than precipitation. These models, however, did not allow the vegetation to increase its leaf density as a response to the physiological effects of increased CO2 and consequent changes in climate. Other assessments included these interactions but did not account for the vegetation down‐regulation to reduce plant's photosynthetic activity and as such resulted in a weak vegetation negative response. When we combine these interactions in climate simulations with 2 × CO2, the associated increase in precipitation contributes primarily to increase evapotranspiration rather than surface runoff, consistent with observations, and results in an additional cooling effect not fully accounted for in previous simulations with elevated CO2. By accelerating the water cycle, this feedback slows but does not alleviate the projected warming, reducing the land surface warming by 0.6°C. Compared to previous studies, these results imply that long term negative feedback from CO2‐induced increases in vegetation density could reduce temperature following a stabilization of CO2 concentration." Chris Shaker
  25. Older research on the topic, and it appears, more controversy "Interestingly, warming temperatures in response to rising carbon dioxide levels could make more nitrogen available, said Xiaojuan Yang, a doctoral student in Jain’s lab. This factor must also be weighed in any calculation of net carbon dioxide load, she said. “Previous modeling studies show that due to warming, the soil releases more carbon dioxide through increased decomposition,” she said. “But they are not considering the nitrogen effect. When the soil is releasing more CO2, at the same time more nitrogen is mineralized. This means that more nitrogen becomes available for plants to use.”" Chris Shaker
  26. It appears that not quite half of the photosynthesis occurring on the earth is in the oceans. Search for 'Global' "Using satellite-derived estimates of the Normalized Difference Vegetation Index (NDVI) for terrestrial habitats and sea-surface chlorophyll for the oceans, it is estimated that the total (photoautotrophic) primary production for the Earth was 104.9 Gt C yr−1.[12] Of this, 56.4 Gt C yr−1 (53.8%), was the product of terrestrial organisms, while the remaining 48.5 Gt C yr−1, was accounted for by oceanic production." So, I hope to read that these negative CO2 feedback effects are also being modeled for algae as well. Chris Shaker
  27. Chris, first note that models are scenarios based on the amount of GHG in the atmosphere: i.e. if CO2 is 450ppm, then climate looks like x. Now if, against all paleo ice-core data, we had a situation whereby a warming world pulled more CO2 from the atmosphere than a cooler one, the result would be that the emissions scenarios would need to be revised: i.e. for a given rate of fossil fuel consumption, the rate of accumulation of CO2 in the atmosphere would reduce. However, I am unaware of any modern or paleo data to support the idea that increased temperature would decrease CO2 - in fact, all of the evidence I have seen to date shows the opposite.
  28. The "reliability" of climate models has been has never been backed up by satellite or weather balloon data. Even the UN says they are unreliable. I know this is a rather simple line of reasoning, but I'd rather not use abstract examples. Numbers can be manipulated to show almost anything, so unless the two basic forms of temperature measurement are wrong, the "unreliable" label will be firmly (and rightfully) attached to the "what if" models.
    Response: [Daniel Bailey] Incorrect on all counts. If you feel differently, please provide links to supporting sources.
  29. I see a lot of parallels between the modelling discussed in this article and the technical analysis used by some stock traders. Of course there are many differences, but at the heart of it, both types of modelers are trying to use intelligent analysis of past data to predict/project the future. I highly recommend the book A Random Walk Down Wall Street for a very approachable explanation of some of the techniques that modelers use. The author's argument that it is impossible to "beat" the stock market through this analysis isn't relevant to weather modeling. However, I think he does a good job of addressing the question of identifying success based on past data.
  30. This may be a naïve criticism - but how do you avoid the problem of circularity in using hind-casting to establish the accuracy of climate models? The assumptions for the models can only be based on observations of what has happened in the past, so to create a model based on these assumptions means that it is inevitable that it will accurately predict what has happened in the past. The more established patterns are encoded into the model, the more accurately it will predict the past. It would be ironic if some of the most powerful computers in the world were generating tautologies. This is not a problem for climate science alone; it is a problem for any time-based models. I have been involved in environmental predictions based on using multivariate regression analysis of GIS data correlated with soil types. This falls into the same problem, but it can be amended by later sampling of soils at predicted locations, correlating the observed soil type with the predicted soil type, and running a t-test to establish the reliability of the prediction. It would be useless to sample the same site that the model was based on. The only way that the same calibration could be carried out in time-based climate models is by comparing forecasts with what happens in the future, and not the past. Are the models therefore proper science? Without reference to the future they are unfalsifiable. The absence of controls is another issue. I understand the practical problem of testing the accuracy of long-term models in this way. The damage may have been done before the data is in. Can you post a link to papers which articulate the assumptions behind these models?
  31. Jue1234 - You might find the Hansen's 1988 prediction page a useful answer. His predictions are holding fairly well through now. His initial climate sensitivity number was a bit too high - but adjusting for the better sensitivity estimates and actual emissions shows the model (simple as it was) still holds up. Models are not just based on hindcasting - that's a required check, but the assumptions going into the models are based on physics, not just mathematical modeling of previous behavior.
  32. Jue1234, see in the RealClimate post "FAQ on Climate Models," the "Questions" section, "What is the difference between a physics-based model and a statistical model?", "Are climate models just a fit to the trend in global temperature data?", and "What is tuning?" A relevant quote from those: "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time." Part II of that post then provides more details on parameterizations, including specifics on clouds.
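The non-circular check asked about above (analogous to re-sampling soils at sites the regression never saw) can be sketched as an out-of-sample test: calibrate on one part of the record, then score the result on years the fit never touched. The record, trend, and noise below are synthetic values invented for the sketch:

```python
import random

random.seed(1)
years = list(range(1900, 2001))
# Synthetic record: a modest linear trend plus weather noise.
record = [0.007 * (y - 1900) + random.gauss(0, 0.1) for y in years]

split = years.index(1970)
cal_y, cal_T = years[:split], record[:split]   # used to build the "model"
val_y, val_T = years[split:], record[split:]   # held out, never touched

# "Model": a linear trend least-squares fitted to the calibration years only.
n = len(cal_y)
ybar = sum(cal_y) / n
Tbar = sum(cal_T) / n
slope = (sum((y - ybar) * (T - Tbar) for y, T in zip(cal_y, cal_T))
         / sum((y - ybar) ** 2 for y in cal_y))

pred = [Tbar + slope * (y - ybar) for y in val_y]
rmse = (sum((p - T) ** 2 for p, T in zip(pred, val_T)) / len(val_T)) ** 0.5
# Low error on the held-out years is evidence the fit captured something
# real, not a tautology about the data it was tuned to.
```

This is the same logic as hindcasting from a past starting point, or as Hansen's 1988 projections being scored against data that arrived afterwards: the skill measurement only counts on data the model construction never used.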
  33. Thanks KR and Tom, I'll look at your links
    Response: [DB] Please, no link only.
  35. ( -Snip- )
    Response: [DB] Please, no link only. Future comments containing links without some description of why you are posting it and why you think it's relevant to the discussion on this thread will be deleted. Thanks!
  36. The title of the link describes some of the issues with models being unreliable, raised in peer review, like this one: The reason I am posting this link is to show there is considerable controversy and evidence of GCMs not being reliable or valid from 1900 to 2011.
  37. Chemist1 - all that link shows is what wallowing in a sewer will get you. Firstly, the paper does not support the claims the tin-hats make for it. Secondly, it was also wrong, and revised by Schwartz himself in 2008. It would help to link to the science paper instead of breathless political posturing. Controversy is one thing, but controversy supported by data and published in peer-reviewed science is another.
  38. Chemist1: You've linked to a 3-year-old Marc Morano paper written for the senator from the state of Oklahoma Petrodollars. This isn't 'controversy,' it's junk. For example: SURVEY: LESS THAN HALF OF ALL PUBLISHED SCIENTISTS ENDORSE GLOBAL WARMING THEORY - Excerpt: "Of 539 total papers on climate change, only 38 (7%) gave an explicit endorsement of the consensus." Even for 2007, that was blatantly false.
  39. Continuing from comments here: "What is an issue is the projected warming at NASA." and here: "comparing results of global climate models to records for individual stations is missing the point, and shows a fundamental misunderstanding of what those models tell us." The paper in question compares GCM output to data at selected stations, using a point-by-point, year-by-year comparison of observations to computed model output. The authors note the following regarding their prior work: "criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism has been received by two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results." It would seem that if the methodology is challenged, the results are implicitly challenged. The comparison of yearly data to model output is clearly one of apples and oranges: it is little more than a comparison of weather to climate. In addition, there's a deeper flaw in the methods used to process individual station data: "In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence." This was an issue in the early days of computer-aided mapping. Rather than spatially averaging a local cluster of stations first, this process allows any error or anomaly at a single station to propagate into the database. Why not filter for a noisy (or bad) data point before gridding? So it's not at all clear the original criticisms were answered by the newer paper.
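  [Editor's illustration] The point about filtering before gridding can be sketched in a few lines. The sketch below is hypothetical: the station values, area weights, and outlier threshold are all invented for illustration, and the filter shown (a simple median-deviation screen) is just one way to keep a single bad sensor from polluting a Voronoi-weighted areal average.

```python
# Hypothetical sketch: screen anomalous stations against the network
# median BEFORE computing the area-weighted (Thiessen/Voronoi-style)
# average, so one bad reading cannot propagate into the gridded series.
# All numbers are invented for illustration.
from statistics import median

def robust_areal_mean(values, weights, max_dev=3.0):
    """Area-weighted mean that drops any station deviating from the
    network median by more than max_dev degrees (illustrative threshold)."""
    med = median(values)
    kept = [(v, w) for v, w in zip(values, weights) if abs(v - med) <= max_dev]
    total_w = sum(w for _, w in kept)
    return sum(v * w for v, w in kept) / total_w

# Four stations; the 45.0 reading is a spurious spike.
temps = [14.2, 13.8, 45.0, 14.6]   # annual means, degrees C
areas = [0.30, 0.25, 0.20, 0.25]   # Voronoi-cell area weights (sum to 1)

naive  = sum(v * w for v, w in zip(temps, areas))  # spike pollutes the average
robust = robust_areal_mean(temps, areas)           # spike filtered out first
```

With the spike included, the naive weighted mean is pulled up by more than 6 degrees; the filtered version recovers the value the three plausible stations imply.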
  40. I thought it was well-known that GCMs only give reliable projections on global or regional scales, which is why regional climate models or statistical downscaling are used to get projections of local climate. I don't recall ever hearing a modeller claim that the models were accurate at station level resolution.
  41. muoncounter @293, I noticed also that, out of the 26 references cited in the paper, the authors cited themselves 10 times, and NASA was only mentioned once in the entire paper. I also find it rather less than scholarly to have a blog cited as one of the references, albeit Gavin Schmidt's, which you have already described.
  42. As I posted in the other thread: I think the selection of sites in that paper is suspect. In Australia (the only one I commented on), 3/4 of the sites selected for comparison with GCMs are in the rather arid central part of Australia, an area that naturally gets rather extreme weather (either very hot & very dry, or merely quite hot & very wet). To compare data from such stations with a regionally-averaged GCM seems disingenuous, to say the least. You don't have to go all that far from those sites to get others with completely different weather conditions. (It'd be like picking three weather stations in the Namib & Kalahari Deserts, and saying "hey, these measurements don't agree with climate model predictions for southern Africa!" - or, for North American folks, like picking a few stations in Nevada, Arizona, and New Mexico and comparing the results to predictions for all of North America.)
  43. Here are a few more links straight from peer review regarding strengths, weaknesses, and areas where serious improvement is needed.

First abstract: "This study assesses the accuracy of state-of-the-art regional climate models for agriculture applications in West Africa. A set of nine regional configurations with eight regional models from the ENSEMBLES project is evaluated. Although they are all based on similar large-scale conditions, the performances of regional models in reproducing the most crucial variables for crop production are extremely variable. This therefore leads to a large dispersion in crop yield prediction when using regional models in a climate/crop modelling system. This dispersion comes from the different physics in each regional model and also the choice of parametrizations for a single regional model. Indeed, two configurations of the same regional model are sometimes more distinct than two different regional models. Promising results are obtained when applying a bias correction technique to climate model outputs. Simulated yields with bias corrected climate variables show much more realistic means and standard deviations. However, such a bias correction technique is not able to improve the reproduction of the year-to-year variations of simulated yields. This study confirms the importance of the multi-model approach for quantifying uncertainties for impact studies and also stresses the benefits of combining both regional and statistical downscaling techniques. Finally, it indicates the urgent need to address the main uncertainties in atmospheric processes controlling the monsoon system and to contribute to the evaluation and improvement of climate and weather forecasting models in that respect."

Second abstract: "Computer models are powerful tools that allow us to analyze problems in unprecedented detail and to conduct experiments impossible with the real system. Reliance on computer models in science and policy decisions has been challenged by philosophers of science on methodological and epistemological grounds. This challenge is examined for the case of climate models by reviewing what they are and what climate scientists do with them, followed by an analysis of how they can be used to construct new trustworthy knowledge. A climate model is an executable computer code that solves a set of mathematical equations assumed to represent the climate system. Climate modelers use these models to simulate present and past climates and forecast likely and plausible future evolutions. Model uncertainties and model calibration are identified as the two major concerns. Climate models of different complexity address different questions. Their interplay helps to weed out model errors, identify robust features, understand the climate system, and build confidence in the models, but is no guard against flaws in the underlying physics. Copyright © 2010 John Wiley & Sons, Ltd."

These last two links, with abstracts included, are not skeptical arguments against using GCMs or against climate change. Yet they highlight regional issues, weather events, patterns, microclimate, and the difficulty of making projections based upon these and finding trends in climate. These papers do not conclude that nothing can be understood better or analyzed, but that models still contain a lot of unreliability, which is the topic of this thread.
  44. Chemist1@300 Uncertainty is not the same thing as unreliability. Unreliability implies that the models have errors that lie outside the stated uncertainty of the projections. GEP Box said "all models are wrong, but some are useful". You have not established that the models are unreliable, nor have you demonstrated that the stated uncertainty of the projections is so high that they are not useful. It is well known that GCMs don't work well at smaller spatial scales, but that doesn't mean they are not accurate in projections of global climate variables.
    Response: (Daniel Bailey) FYI: Chemist1's long link-fest originally at 297 contained an extensive copy-paste with multiple allegations of impropriety; it and 3 subsequent responses were then deleted. That is why the numbering sequence on comments is off right now.
  45. Okay, so here is my edited kick-off on why the models are unreliable, and why the IPCC report's over-reliance upon them is a very poor decision, based on peer-reviewed papers.

First link, first quote: "Initial studies of how the rivers will respond to ice loss show modest changes in stream flow — far from the IPCC report's dire scenario of rivers running dry. Even if the glaciers were lost completely, flows down the Indus would drop about 15 per cent overall, with little or no change in the dry-season flow, one recent study found. Lall cautions, however, that climate models are poor at simulating rain and snowfall, especially for the Asian monsoons. 'I wouldn't hold these models to be very accurate,' he says. In the absence of clear predictions of what's to come, close monitoring of changes in the mountains is all the more important, as rising temperatures will probably affect the whole water cycle, says Eriksson of ICIMOD. 'There has been too much focus on the glaciers as such,' he says. 'It's urgent to understand the whole [impact] of climate change on snow, ice and rainfall, and that is not happening.'"

Second quote, from a few sentences prior to the above: "Upmanu Lall, director of the Columbia Water Center at Columbia University in New York, agrees. Lall says the idea that the rivers could run dry because of shrinking glaciers seems to stem from a confusion about how much glaciers contribute to river flows, compared with the contribution from melting of the seasonal snowpack."

Link 2 I am introducing because you can download it, place it into Google Earth, and look at the data and so-called trends yourselves as individuals. It also illustrates how short any decent temperature record keeping has been and how few stations there have been. The download is free, and the link can assist everyone in analysis.
    Response: [Daniel Bailey] Please note: You are responsible for the content of your copy/pasting from other sites when you then post it here. Your comment previously appearing as number 297 contained multiple allegations of impropriety. Repeated violations of the Comments Policy will also be deleted and could subject you to further, more rigorous moderation. Be aware.
  46. Dikran, the range of uncertainty is so wide that it makes long-term projections or predictions unreliable. Short-term projections are so-so, but not great, and some models are almost completely unreliable, which I will get into at a later date.
  47. Chemist1, concerning your first link, please note that it is a news article in Nature, not a peer-reviewed study. Secondly, the error about the Himalayan glaciers in the IPCC report has long since been corrected by the IPCC. However, the Himalayan glaciers are receding, just not as fast as initially reported. There are many discussions concerning the Himalayan glaciers on this site. Please review Return to the Himalayas.
  48. "Dikran the range of uncertainty is so wide that it makes long term projections or predictions unreliable." That would be unsupported opinion, at odds with the models' reliability to date. What's your idea of "short term"? At the decadal level, models are hopeless. At longer scales they do exceedingly well. Cutting to the chase: how much accurate prediction by models can you take before you change your mind?
  49. Chemist1, your second linked abstract ("Computer models are powerful tools ...") provides no actual scientific criticism of models. "Their interplay helps to weed out model errors, identify robust features, understand the climate system, and build confidence in the models, but is no guard against flaws in the underlying physics." This is merely a caveat; the author suggests that models could have 'flawed' physics. Do you automatically assume that because an author says something could happen, it necessarily does? A far more robust criticism would be "here is a model study with flawed physics." I assume you did not use this because you have no such example at hand?
  50. Chemist1, it would help if you actually read posts before responding to them. Here I explained that a model is only unreliable if the observations lie outside the stated uncertainty of the prediction, and you have ignored this point and are back here again saying the models are unreliable because the error bars are broad.

If a projection has broad error bars, so broad that they swamp the variation in the ensemble mean, it means the models are telling you they don't know what is going to happen. If you ask someone whether the planet is going to warm and they say "I think it may warm, but I don't really know; it could warm by 4 degrees, but it could cool by 1 degree, and the most likely result is that it will warm by 2 degrees", then if the planet cools by 0.5 degrees, their prediction wasn't unreliable; it wasn't even wrong, because what happened was within the stated confidence of the projection.

Large uncertainty does not imply unreliability. In fact it means the models are more likely to be reliable, as the projections cover a wider range of possibilities. If you ignore the stated uncertainty of the projections, that is your error, not a shortcoming of the models. As it happens, the error bars on projections for large time and spatial scales are not broad compared to the variation in the ensemble mean. See e.g. the IPCC report chapter on attribution.

If you don't take on board points that are made, or at least counter them, don't be surprised if contributors stop responding to your posts.


© Copyright 2022 John Cook