
How reliable are climate models?

What the science says...


Models successfully reproduce global temperatures since 1900, over land, in the air and in the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes extensive use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.
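To make the simple end of that spectrum concrete, here is a minimal sketch, in Python, of a zero-dimensional energy-balance model: the kind of "few equations" model you could just as well build in a spreadsheet. The parameter values are rounded textbook figures used purely for illustration, not tuned model output.

```python
# A zero-dimensional energy-balance model: the planet warms until the heat
# it radiates to space balances the sunlight it absorbs.
# Parameter values are rounded textbook figures (illustrative assumptions).

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30       # fraction of sunlight reflected back to space (assumed)
EMISSIVITY = 0.61   # effective emissivity, a crude stand-in for the greenhouse effect (assumed)

def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
    """Surface temperature (K) at which absorbed and emitted energy balance."""
    absorbed = S0 * (1.0 - albedo) / 4.0   # incoming sunlight averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(equilibrium_temperature())   # roughly 288 K, close to the observed global mean
```

Lowering the effective emissivity (more greenhouse gases trapping outgoing heat) raises the equilibrium temperature; exploring that kind of "what if" is, in miniature, what the million-line models do.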

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating the equations that describe the interactions between the components of our climate system. It is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather forecast models rarely extend even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system is heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then, either because we made observations or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test, and we can expect it to simulate what might happen in the future. And that's exactly what has happened: from early on, climate models predicted future global warming, and multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
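To illustrate that smoothing, the short sketch below (a toy example; the trend size and noise level are arbitrary assumptions, not real data) generates synthetic annual temperatures and applies a 30-year running mean. Extreme individual years largely cancel out, leaving the underlying trend visible.

```python
# Toy illustration of climate trends versus single events: a small warming
# trend buried in large year-to-year "weather" noise (all values assumed).
import random

random.seed(0)
years = list(range(1900, 2025))
temps = [0.01 * (y - 1900) + random.gauss(0, 0.15) for y in years]  # trend + noise

def running_mean(series, window=30):
    """Average each value over a trailing `window`-year period."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

smoothed = running_mean(temps)   # hot and cold outliers largely cancel;
                                 # the ~0.01 degree/year trend stands out
```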

Climate models have to be tested to find out if they work. We can’t wait 30 years to see whether a model is any good; instead, models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we can expect it to predict with reasonable confidence what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings adequately explain temperature variations before the rise of the last thirty years, but none of them can explain that recent rise. CO2 does explain it, and explains it completely, without any need for additional, as yet unknown forcings.
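The logic of that attribution test can be sketched in a few lines. The example below is conceptual only: the forcing and temperature series are synthetic placeholders rather than real published data, and actual attribution studies use full climate models rather than a regression. It shows the shape of the argument: a fit without CO2 cannot reproduce a record that contains a CO2-driven trend.

```python
# Conceptual sketch of the hindcast/attribution test: fit the temperature
# record with and without a CO2 term and compare the residual errors.
# All series below are synthetic placeholders (assumptions for illustration).
import numpy as np

n_years = 120
rng = np.random.default_rng(1)
solar = rng.normal(0.0, 0.1, n_years)      # placeholder natural forcings
volcanic = rng.normal(0.0, 0.2, n_years)
co2 = np.linspace(0.0, 2.0, n_years)       # placeholder steadily rising CO2 forcing
observed = 0.5 * co2 + 0.3 * solar + 0.2 * volcanic + rng.normal(0, 0.1, n_years)

def fit_rmse(predictors):
    """Least-squares fit of the observed record; return root-mean-square error."""
    X = np.column_stack(predictors + [np.ones(n_years)])  # add a constant term
    coeffs, *_ = np.linalg.lstsq(X, observed, rcond=None)
    return np.sqrt(np.mean((observed - X @ coeffs) ** 2))

print(fit_rmse([solar, volcanic]))        # natural forcings alone: large error
print(fit_rmse([solar, volcanic, co2]))   # with CO2 included: error drops sharply
```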

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."
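A sketch of the kind of comparison Hausfather and colleagues performed, under simplifying assumptions: estimate the linear warming trend of a model projection and of observations over the same window, then compare them. The series here are synthetic placeholders rather than the study's data, and the real evaluation also propagated trend uncertainties before judging projections "indistinguishable" from observations.

```python
# Compare the warming trend of a (placeholder) projection against
# (placeholder) observations over the same period.
import numpy as np

years = np.arange(1988, 2018)
rng = np.random.default_rng(0)
projection = 0.028 * (years - 1988) + rng.normal(0, 0.08, years.size)  # synthetic
observed = 0.030 * (years - 1988) + rng.normal(0, 0.08, years.size)    # synthetic

proj_trend = np.polyfit(years, projection, 1)[0] * 10  # degrees C per decade
obs_trend = np.polyfit(years, observed, 1)[0] * 10

print(f"projected {proj_trend:.2f} vs observed {obs_trend:.2f} C/decade")
```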

Talking of empirical evidence, you may be surprised to know that the scientists at fossil fuel giant Exxon knew all about climate change, all along. A recent study of their in-house modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost a century.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending, continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial:

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

Click the thumbnail for the concise fact brief version created in collaboration with Gigafact:


Comments


Comments 176 to 200 out of 399:

  1. Explanation of "initial value" (weather) versus "boundary value" (climate) models is provided by Steve Easterbrook at his site Serendipity. (A toy numerical sketch of that distinction appears after this comment thread.)
  2. KR at 11:51 AM on 5 June, 2010: KR, I have copied an email exchange relating to the change of computer which may be of interest to you. The email was not to me, so I blanked out the recipient, but the messages are available on the internet to view. The most recent reply is on top.

From: Jing-Jia Luo [mailto:jingjia.luo@...com]
Sent: Monday, 22 June 2009 2:35 PM
To: ======================
Cc: Toshio Yamagata
Subject: Re: Seasonal forecasts from 1 June 2009 (monthly mean maps)

Dear Peter, Nothing except the computer has changed since 1 April 2009; the forecast model is the same as before. When we repeated the forecasts initiated from 1 March 2009 (with the same model and initial conditions), the 9-ensemble mean did show certain differences, as I mentioned before. I am still not quite sure what the actual reasons for this difference are. One possible factor could be the different FORTRAN compiler. This means the executable code of the coupled model is different now, though the source code itself has not changed at all. I asked NEC system engineering. The answer is that there is basically no way to get the same results on the new Earth Simulator (like chaos). Theoretically, if we had infinite ensembles, the results might be equal if the new compiler does not change the code systematically. But who knows (sometimes a bug fix in the compiler can induce big changes in the model results). We are planning to redo the hindcast step by step (we are facing another technical problem: our model speed has become slower despite the much faster new machine). Best regards, Jing-Jia

On Mon, Jun 22, 2009 at 1:08 PM, wrote: Dear Jing-Jia, I regularly talk to wheat farmers in NW Victoria, Australia, at a place called Birchip. The Birchip Cropping Group are the most active farmer group in Australia, and they hold their annual Grains Expo in early July each year. This year, Australia's Governor-General will be attending. Over the years I have given talks about the various climate models, including SINTEX, and they have come to trust SINTEX forecasts. As you know, SINTEX has been successful at predicting the three recent positive IOD events, and the IOD seems to be the most important effect on rainfall at Birchip. I will certainly get questions regarding the change of forecast in SINTEX this year, and I would like to be able to answer as clearly as possible. Can you explain to me why the SINTEX forecasts changed so much? I don't understand why changing computers would make such a big difference. Normally one would expect very minor changes going from one computer to another. Were software changes required in order to change computers? Did data sets change? Any information you can give me will be helpful. Regards, Peter. Dr Peter ==============, Centre for Australian Weather and Climate Research (CAWCR), CSIRO Marine Laboratories

From: Jing-Jia Luo
Date: Wed, Jun 17, 2009 at 10:06 PM
Subject: Re: no skill for predicting the IOD before June [SEC=UNCLASSIFIED]
To: Harry ========
Cc: David Jones, Toshio Yamagata, Grant Beard, Oscar Alves

Dear Harry, So we are reaching some agreement. The averaged hindcast skill just gives you a rough reference. If you do the real-time forecasts carefully, I believe you should go beyond this (even without an increase in ensemble members); you have much more information/analysis than the mean hindcast skill tells. Concerning the smoothing issue: when we look at the monthly prediction plumes of 12 target months, we will focus on the signal beyond the intraseasonal disturbance. And we will look at the consecutive forecasts performed over several months. In this sense, we are also doing the smoothing. Or, like IRI, we can directly do a 3-month average to remove the noise in the prediction plumes. Because of the uncertainty caused by the new Earth Simulator, I do not know how much we can still trust the SINTEX-F model forecast, particularly for the current IOD prediction. I hope the POAMA model forecast will be correct. Let's watch the forecasts in the following months and see what happens in the real ocean. Best regards, Jing-Jia

It is said that the widespread use of Microsoft software will result in most people only being able to complete tasks in one way, that being the Microsoft way. I wonder if computers somehow exert the same power when it comes to processing data. Incidentally, my son quickly discovered that the reputedly leading secondary school we sent him to expected all the students to do all things the same way, which was theirs, the school's way. He is now progressing better in a school that is more accepting of, and better able to cultivate, a diversity of thought, thankfully missing out on the opportunity to claim membership of the old boys' club of what many old boys, and their parents, consider an elite school.
  3. doug_bostrom at 11:06 AM, just adding to my earlier reply to you. It may be of even more value for non-sceptics than for sceptics to wade through the "Cloud Swamp" and become more familiar with the complexities of clouds. I would be surprised if many do, yet it is probably more vital to understanding climate change than many, perhaps most, other issues, given it is the least understood.
  4. Tom Dayton at 11:52 AM, there is a fundamental similarity however, in that weather is about redistributing heat imbalances. But where does it start and stop in limiting the rate of incoming heat or removing heat from the system?
  5. Johnd, I'm wondering how a problem with a seasonal weather or interannual climate forecasting application of a GCM relates to the use of GCMs to produce projections of global climate. The two objectives are not the same. The initializing conditions allowing the model to produce specific regional forecasts of seasonal and interannual climate behavior will cause this application to suffer from the same issues as other weather forecasting systems. What's more, they'll be sensitive to small perturbations such as those alluded to in the correspondence you quote. In all probability the issue there is with the compiler change, btw. The goal of the SINTEX application you're worrying over is that of producing -forecasts- of specific weather and climate behavior in specific regions of the globe. That's not the same as applying a GCM to describe the gross behavior of the global climate over multi-decade periods. In short, your concern with this problem with SINTEX is not relevant, or at the least you've not shown how it is. More here on SINTEX applications for seasonal and interannual climate forecasts, for the curious: Seasonal Climate Predictability Using the SINTEX-F1 Coupled GCM
  6. doug_bostrom at 13:36 PM, Doug, where my interest really lies with the Japanese researchers is the work they are doing with the Indian Ocean Dipole. They are the ones who identified it about a decade ago, and its relevance to the Australian climate, and beyond, is gradually being appreciated. Previously our most eminent researchers had hopped on the El-Nino wagon and it became identified as supposedly the dominating influence over most of Australia. However, for those whose understanding of the Australian weather and climate was based on what can actually be observed or happens on the ground, rather than what is being said in the media, or even in peer-reviewed papers, a lot simply didn't reflect what was being, or had been, observed for generations. Then some other independent researchers and forecasters started working the IOD into their calculations and models, and suddenly a lot of what had been attributed to ENSO began to appear as being due to the IOD, at least over wide parts of Australia. This began to show how the independent cycles of each system could at times either enhance or offset each other, or remain neutral, thus throwing a completely different understanding into the picture. Now, as I understand it, research is being carried out in other parts of the world to determine if the ENSO system is as dominant an influence as previously considered. Given that ENSO is relevant to climate research and how the climate is modelled, particularly with models having to be validated by backcasting, if the understanding of ENSO changes, that may require some aspects of the models to change. That's where I see the relevance.
  7. Johnd, I think I see what you're driving at. ENSO is a driver of natural variability on a large regional scale, much of the Northern hemisphere really, and thus is of relevance on a fairly short interannual time scale. So models work better for modeling regional climate on short time scales as ENSO is better simulated. But on an interdecadal scale ENSO does not seem to be an important factor; ENSO does not offer a means of shedding heat from the globe, only rearranging it. That being said, the better models handle ENSO, the better they'll work at finer time and space resolutions. That about it?
  8. doug_bostrom at 16:23 PM, doug, basically yes. I'm not sure whether we will ever understand how all the complexities interrelate, or even if we can be sure we have the relationships in the right perspective, clear on cause and effect. Just an aside which relates to El-Nino and the IOD: El-Nino was given greater significance by Australian scientists when it was found that a high proportion of El-Nino events coincided with drought years in Australia, which was true; thus El-Nino became a supposedly reliable indicator for forecasting droughts in Australia. However, those who observed what was actually occurring on the ground noted that a much smaller proportion of all drought years coincided with El-Nino events, indicating that perhaps El-Nino was instead a relatively poor indicator for forecasting droughts, less than an even chance. When the appropriate IOD cycles were worked into consideration, the correlation jumped substantially, to about 80% I think. It was interesting comparing the initial conclusions reached by those whose primary focus was El-Nino, as against those whose primary focus was droughts.
  9. johnd, correct me if I'm wrong, I do not know much about precipitation in Australia. As far as I can understand, the IOD affects mainly the south and south-east of Australia while ENSO affects central and north Australia, or something like that. This should be the reason why, in general, we see different patterns in different regions. Australia is huge, after all, and bounded by two different oceans.
  10. Riccardo at 18:13 PM, basically the weather comes from the west and the northern tropics across the country; at times north-western cyclones bring rain to the south-east. The eastern regions bounded by the Pacific Ocean do come under the influence of systems originating there, but the mountains that follow the east coast all the way down, the Great Dividing Range, are aptly named and provide some barrier to systems heading inland. The effects of systems originating in the Indian Ocean mean that the weather over some of Australia, Indonesia, India and Africa is all interconnected, something observed from the early days of settlement, when settlers in the north who had lived in other regions bounded by the Indian Ocean made the connection. This is being recognised more so since the identification of the IOD about a decade ago, and it is all still being digested, still with some differences of opinion as to how it all relates. Given that some cycles take many decades to complete, the debate may continue for a long time yet. The El-Nino phenomenon is historically identified with South America, Peru, which is logical given how systems move around the globe. The Southern Ocean is also being given more consideration as to how it helps influence the mix.
  11. Just adding to earlier posts, a new type of El-Nino has been identified in recent years and is being worked into the models used by the Japanese who work on the SINTEX forecasts and research. I believe it is again these Japanese researchers who first identified it, and it is likely contributing to the reliability of their forecasts. It is a modified form of the ENSO pattern called ENSO Modoki or El Nino Modoki. The link below provides some information. The researchers believe that perhaps the conventional El-Nino is evolving into something different. However, only time will tell whether this is something new evolving, or just part of an even bigger cycle where these changes may be periodic, and it is our understanding instead that is in a state of evolving. El Niño Modoki http://www.agu.org/pubs/crossref/2007/2006JC003798.shtml
  12. John has asked me to make my comments on this thread rather than on his more recent “Rebutting skeptic arguments in a single line” thread (please see those I made today – Note A). Let me kick off here by posting a comment that software developer James Annan, who is presently involved with the Coupled Model Intercomparison Project (CMIP) (Note B), declined to post on his "Penn State Live - Investigation of climate scientist at Penn State complete" thread (Note C).

***************

I have been engaged in an interesting discussion over at William Connolley’s blog (Note 1) about the relevance of established VV&T procedures to climate models, and Steve Easterbrook appeared to be suggesting that CMIP was the appropriate solution. I understand that CMIP is a project for comparing climate model outputs, and I asked IPCC reviewer Dr. Vincent Gray for his views on this; he rejects the notion that the inter-comparison of the outputs of different models has anything to do with validation. As you may be aware, Dr. Gray is the author of “The Greenhouse Delusion”, a member of the New Zealand Climate Science Coalition (Note 1) and was responsible for having the IPCC admit that climate models had never been properly validated, despite the IPCC trying to suggest otherwise. (In response to his comment the word "validation" was replaced by "evaluation" no less than 50 times in the chapter on "Climate Models - Validation" in an early draft of the IPCC's "The Science of Climate Change".)

Since you are a member of the Global Change Projection Research Programme at the Research Institute for Global Change, working on CMIP3 with an eye on AR5, you may be aware of the extent to which VV&T procedures will be applied. Are you able to point me in the direction of the team responsible for these activities? William seems reluctant to let me participate further on his blog since I commented on his Green Party activism and his reported Wikipedia activities (Notes 2 & 3), so I hope that you will take up the debate about the relevance of VV&T for climate models.

I get the impression some of you involved in climate modelling see little further than your software engineering and the quality of that software. VV&T as applied in the development of telecommunications support systems, when I was involved in it, considered the full picture from end-user requirements definition through to final system integration and operation. (Contrary to what is claimed in “Engineering the Software for Understanding Climate Change” by Steve Easterbrook and Timothy Johns (Note 4), in the case of climate modelling systems the primary end user is not the scientists who develop these systems.) Although VV&T alone will not produce quality software, I recall plenty of instances when professionally applied and independent VV&T procedures identified defects in system performance due to deficient software engineering. Consequently, deficiencies were rectified much earlier (and much more cheaply) than would have occurred if left to the software engineers, with defects only identified during operation. It is possible but highly unlikely that VV&T doubled the cost of the software, as claimed by one software engineer on William’s blog, but it would certainly have cost many times more if those defects had remained undetected until operational use. I don’t expect to be the only person who has had such experiences.

I would be surprised if rectification of these software deficiencies led to “quality” software, but they did lead to software and operational systems that more closely satisfied the end users’ requirements, used throughout the system development program as the prime objective. Steve Easterbrook and Timothy Johns said (Note 4) “V&V practices rely on the fact that the developers are also the primary users”. It could be argued that the prime users are the policymakers who are guided by the IPCC’s SPMs, which depend upon the projections of those climate models. Steve and Timothy “hypothesized that .. the developers will gradually evolve a set of processes that are highly customized to their context, irrespective of the advice of the software engineering literature .. ”. I prefer the hypothesis of Post & Votta, who say in their excellent 2005 paper “Computational Science Demands a New Paradigm” (Note 5) that “ .. computational science needs a new paradigm to address the prediction challenge .. ”. They point out that most fields of computational science lack a mature, systematic software validation process that would give confidence in predictions made from computational models. What they say about VV&T aligns with my own experience, including “ .. Verification, validation, and quality management, we found, are all crucial to the success of a large-scale code-writing project. Although some computational science projects—those illustrated by figures 1–4, for example—stress all three requirements, many other current and planned projects give them insufficient attention. In the absence of any one of those requirements, one doesn’t have the assurance of independent assessment, confirmation, and repeatability of results. Because it’s impossible to judge the validity of such results, they often have little credibility and no impact ..”. Relevant to climate models, they say “A computational simulation is only a model of physical reality. Such models may not accurately reflect the phenomena of interest. By verification we mean the determination that the code solves the chosen model correctly. Validation, on the other hand, is the determination that the model itself captures the essential physical phenomena with adequate fidelity. Without adequate verification and validation, computational results are not credible .. ”. I agree with Steve that “further research into such comparisons is needed to investigate these observations” and suggest that in the meantime the VV&T procedures should be applied as understood by Post and Votta and as currently practised successfully outside of the climate modelling community.

NOTES:
1) see http://nzclimatescience.net/index.php?option=com_content&task=view&id=374&Itemid=1
2) see http://www.spectator.co.uk/columnists/all/6099208/part_3/i-feel-the-need-to-offer-wikipedia-some-ammunition-in-its-quest-to-discredit-me.thtml
3) see http://www.thedailybell.com/683/Wikipedia-as-Elite-Propaganda-Mill.html
4) see http://www.cs.toronto.edu/~sme/papers/2008/Easterbrook-Johns-2008.pdf
5) see http://www.highproductivity.org/vol58no1p35_41.pdf

**************

I’m disappointed (but not surprised) by the reluctance of software developers like James, William Connolley and Steve Easterbrook to have an open debate with sceptics about the extent to which their climate models have been validated.

NOTES:
A) see http://www.skepticalscience.com/Rebutting-skeptic-arguments-in-a-single-line.html#comments
B) see http://www-pcmdi.llnl.gov/projects/cmip/index.php
C) see https://www.blogger.com/comment.g?blogID=9959776&postID=2466855496959474407

Best regards, Pete Ridley.
  13. Pete Ridley (whoever you are) - it seems your points were addressed on the other thread - yet you repeat them here. Can you state your beef in a sentence or two, instead of a fully annotated paper? I think the strongest rebuttal to the skepticism of the model-making process is that models can both hindcast (the easy part) and predict - 22 years isn't as good as 5 decades, but how many years of successful modeling will you need before you are satisfied?
    Response: Quick comment - they're repeated here because I emailed Peter and asked him to move the discussion to this more relevant thread.
  14. actually thoughtfull and Pete Ridley, I suggest avoiding carrying over Pete's comments about user names from the previous thread. As far as I know there is no policy here of privileging "realistic-sounding full names" over others. Comments that disparage the choice to use a pseudonym are off-topic and will probably be deleted.
  15. Pete Ridley, evidence of predictive ability was described in the original post at the top of this page. I suggest you read it.
  16. And why do you want climate modelers to debate, other than as yet another delay tactic? Why don't you create a model that accounts for everything the AGW models do, and prove to the world that man-made CO2 is simply not an issue? The debate of ideas, not names, not titles, not one-liners, is the debate that matters to me. So Pete Ridley, where is your competing, complete non-AGW theory of climate change, complete with models that have excellent hindcast ability, and whose ability we can compare to Hansen 1988, or any of the newer, better models that have come out in the last 22 years?
  17. Pete Ridley at 22:54 PM on 21 July, 2010 I was intrigued by your note 2) which contains this: "A meta-analysis of all the articles written on the subject showed that the vast majority of experts believe that not only was the MWP widespread but that average temperatures were warmer than they are now" Perhaps this needs a bit of VV&T...?
  18. Pete, I do however have more than a passing acquaintance with this type of physical modelling. You ask for evidence of models' predictive skill. You are pointed at comparisons between model predictions and actual data. I don't know what other kind of evidence you could mean. Hansen (and everybody else) can't predict what future CO2 emissions will be, so of course he works with scenarios: "If you emit this, you will get this climate". You verify by comparing actual forcings (not just emissions but volcanoes, solar etc.) against the model's prediction for the scenario that most closely matches those forcings. Please also note that you need to distinguish between people giving you "opinions" and people giving you verifiable facts. The source of facts is important, not the person giving it to you.
  19. Pete Ridley - reading Hansen et al. 2006, it is quite clear that Hansen's most likely scenario (scenario B), with a particular CO2 increase plugged in, actually predicted temperatures over the last 22 years quite well. If you take Hansen's model and put in the actual CO2 numbers (5-10% less than his scenario), his model is even more accurate. This is discussed in the paper in the section labeled Early Climate Change Predictions, pages 1-3 of the PDF. It's a major part of the paper! The quality of a model lies in whether it makes correct predictions based on various inputs (given 'A', you will get 'B'). His scenarios covered a range of different inputs (CO2 production), and the prediction given closely matches the real-world result of that range of CO2 numbers. You can't ask for much more - it's a decent model, and predicts the correct result given a particular set of our actions, even at its 1988 level of simplicity. That's what a good model does!
  20. Hi folks, thanks to those of you who have responded with an attempt to debate in a reasonable manner the issue of evidence that is supposed to support the claim that climate model predictions/projections have been validated. I will respond to each of you in turn in subsequent comments. First, since for some reason my first comment on this blog (on the “Rebutting skeptic arguments in a single line” thread) has been removed, let me repeat it here.

The IPCC and other supporters of The (significant human-made global climate change) Hypothesis depend very much upon the “projections” of the computerised climate models. The validity of those projections has been challenged repeatedly by sceptics, but they are still depended upon to support the notion that our use of fossil fuels will cause catastrophic global climate change. I have been debating this on several blogs with software developers with an interest in this subject, such as Steve Easterbrook, William Connolley (Note 2) and James Annan (Note 3), but it seemed that as soon as I mentioned Dr. Vincent Gray the debate stopped. Dr. Gray is the author of “The Greenhouse Delusion”, a member of the New Zealand Climate Science Coalition (Note 4) and was responsible for having the IPCC admit that climate models had never been properly validated, despite the IPCC trying to suggest otherwise. (In response to his comment the word "validation" was replaced by "evaluation" no less than 50 times in the chapter on "Climate Models - Validation" in an early draft of the IPCC's "The Science of Climate Change".)

Yesterday I sent the following comment to Steve Easterbrook’s blog (Note 5) but he refused to post it. Is Skeptical Science prepared to?

________

Climate models are inadequate because of our poor understanding of the fundamental global climate processes and drivers. Improving the quality of the software will not improve their performance. You can't make a silk purse out of a sow's ear. Popular Technology has put together a list of respected scientists who recognise this fact. One of these, Freeman Dyson, says: "My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. Of course, they say, I have no degree in meteorology and I am therefore not qualified to speak. But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models." Have a read of what the rest have to say (http://www.populartechnology.net/2010/07/eminent-physicists-skeptical-of-agw.html). IPCC reviewer Dr. Vincent Gray is putting together an article on the subject of climate model validation. I’ll let you know when it’s available.

--------------

There is no one-liner that will rebut this criticism by a true climate science sceptic.

NOTES:
1) see http://www.thefreedictionary.com/rebut
2) see http://scienceblogs.com/stoat/2010/06/engineering_the_software_for_u.php
3) see http://julesandjames.blogspot.com/2010/07/penn-state-live-investigation-of.html
4) see http://nzclimatescience.net/index.php?option=com_content&task=view&id=374&Itemid=1
5) see http://www.easterbrook.ca/steve/?p=1785&cpage=1#comment-3436

Best regards, Pete Ridley
  21. Pete Ridley - a 'scientific model' is a simplified system for making reasonable projections and exploring system interactions, especially useful when it's not practical to subject the real system to repeated tests and inputs. Evaluating a model takes into consideration several things: - Ability to match previous observations (historic data) - Ability to predict future observations - Ability to estimate different future states based on different inputs (Given 'A', predict 'B') - Match of model internal relationships to known physical phenomena - Simplicity (no nested 'crystal spheres' for epicycles) The 1988 Hansen model was, by current standards, fairly simple. Ocean heat content/circulation, ice melt rates, some additional aerosol information, etc., weren't in it. But it still shows close predictive agreement with inputs matched to what has happened since 1988! That's a pretty decent model. And no, it's not 1-to-1 agreement. Short term variation (a couple of years) is really weather, not climate. You need to make running averages of >10 years to average out the short term fluctuations and identify the climate trend. On a side note, you complain about reliability of surface temperature measures. That's a fairly common skeptic argument, and has been discussed here and here, as well as in a very recent topic on cherry picking data. The surface temperature measures are reliable - that argument really doesn't hold water.
  22. Pete Ridley at 00:59 AM on 24 July, 2010 You state "contrary to reality which is a turning point or even downturn in global temperature trend". I assume you mean surface or lower tropospheric temperature trends as measured by land/vessel based stations, satellites or radiosondes? Could you please explain how you arrive at your conclusion? I have quite a few data sets available, and it is difficult to see any turning points or downturns in trend unless you are extremely selective or narrow on start and end times.
  23. Pete Ridley - A question for you. The Hansen 1988 model appears to satisfy all the criteria I know of for a reasonable scientific model. You seem to disagree. Can you tell us where the Hansen model fails these criteria? Or perhaps tell us what your definition of a scientific model might be?
  24. Peter, your faith in Vince is touching. Perhaps you should google for some other opinions? (Or look at his review comments at the IPCC and the editors' responses.) I've known Vince all my working life and I would trust him to do a coal analysis for me. You still seem to think Hansen's model is somehow flawed because it deals with "fictitious scenarios". Would you complain about, say, an automobile model not predicting speed because it can't tell how hard you will press the accelerator? With the Hansen model, however, you can rerun it with ACTUAL forcings instead of the scenario. What else can one demand of a model? You are also ignoring all the other model/observation matches in the above article. Where have the models failed?
  25. Pete Ridley, since you are failing to understand the repeated explanations of how Hansen's model has been shown to successfully predict temperatures, you should read the more detailed posting at RealClimate, "Hansen's 1988 Projections." With regard to your more general confusions about models, you should go to RealClimate's Index, scroll down to the section "Climate Modelling," and read all of the posts listed there. For example, in the post "Is Climate Modelling Science?", there appears:
    "I use the term validating not in the sense of ‘proving true’ (an impossibility), but in the sense of ‘being good enough to be useful’). In essence, the validation must be done for the whole system if we are to have any confidence in the predictions about the whole system in the future. This validation is what most climate modellers spend almost all their time doing."

