




How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications: engineering, for example, makes heavy use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating the equations that describe the interactions between the components of our climate system. It is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day - that’s weather forecasting. Weather forecast models rarely extend to even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate, or 'smooth out', single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past: we know what happened back then, either because we made observations at the time or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test, and we can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming. Multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
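The 'smoothing out' of single extreme events can be illustrated in a few lines of code. This is only a sketch with synthetic numbers (not real temperature data): a slow warming trend plus random year-to-year noise and one extreme cold year, averaged over 30-value windows.

```python
# Illustrative only: smoothing a noisy synthetic "temperature" series with
# a 30-point running mean to separate the trend from single extreme events.
import random

random.seed(0)

years = list(range(1900, 2021))
# Synthetic series: a slow warming trend plus year-to-year noise.
temps = [0.008 * (y - 1900) + random.gauss(0, 0.15) for y in years]
temps[50] -= 1.0  # one extreme (but rare) cold year

def running_mean(values, window=30):
    """Average each full window of `window` consecutive values."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

smoothed = running_mean(temps, 30)

# The single cold spike barely moves the 30-year averages it falls in,
# while the long-term warming trend survives the smoothing.
print(f"first 30-yr mean: {smoothed[0]:+.2f}, last: {smoothed[-1]:+.2f}")
```

The extreme year shifts each 30-year average it belongs to by only 1/30 of its size, which is why the trend remains visible while the event all but disappears.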

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings adequately explain temperature variations prior to the last thirty years, but none of them can explain the rise over the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
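The logic of hindcast testing can be sketched in a few lines. The numbers below are invented for illustration (they are not output from any real climate model); the point is only the procedure: run candidate models over a period with known observations and measure how well each reproduces the record.

```python
# A toy sketch of hindcast testing, with made-up numbers: two candidate
# "model runs" are compared against a known observed record, and the run
# that better reproduces the past is the one that passes the test.
import math

# Pretend observed temperature anomalies for five past decades.
observed = [0.00, 0.05, 0.15, 0.30, 0.50]

# Candidate 1: natural forcings only (no long-term trend in this toy).
natural_only = [0.00, 0.02, 0.01, 0.03, 0.02]
# Candidate 2: natural forcings plus a CO2-driven warming trend.
with_co2 = [0.00, 0.06, 0.14, 0.28, 0.48]

def rmse(model, obs):
    """Root-mean-square error between a model run and observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# The run that includes the CO2 forcing matches the past far better,
# which is the sense in which it "passes" the hindcast test.
print(f"natural only: {rmse(natural_only, observed):.3f}")
print(f"with CO2:     {rmse(with_co2, observed):.3f}")
```

Real hindcasts use many more variables and skill metrics than a single RMSE, but the structure of the test is the same.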

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that the scientists at fossil fuel giant Exxon knew all about climate change, all along. A recent study of the company's own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The ever-changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges for climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial:

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

Click the thumbnail for the concise fact brief version created in collaboration with Gigafact:


Comments


Comments 526 to 550 out of 1081:

  1. Clyde, so far, your case is entirely evidence-free regarding: 1: what you regard as a specialist computer modeller; 2: what expertise such a person would have that it is impossible for a climate modeller to acquire (who will spend most of their research life from at least PhD onwards writing code, usually beginning with a background in physics or earth/environmental science); 3: why somebody who has spent many years, perhaps even decades, coding climate models would not then be an expert in coding. Are there some secrets that they never let out of the computer science department? I've coded small climate models, and I've worked with a number of people who professionally code much more sophisticated ice sheet and climate models - you baselessly insult their intelligence and hard-won expertise. I've seen physicists attempting to code ice sheet models and making a pig's ear of it in the first instance because they didn't have the understanding of earth systems, glacier dynamics and climate to make such a thing realistic (they learned, gradually!). I've seen the reverse - the physics expertise is equally hard-won. I have not seen a computer scientist even have a go, because they would need to get to grips with two major things: how a climate system works, and the mathematics of the thermodynamics/mechanics required to calculate components of the model. Quite how you think the pure coder who can write very tidy Fortran/C++/Python could do this better than the existing experts is remarkable. It is much easier to begin with an understanding of climate and physics and graduate to writing computer code, which is fundamentally not that difficult to do, than the alternative.
  2. Clyde @552 links to an atrocious analysis by David Evans, who by all accounts (particularly his own) is an expert in computer modeling. Evans criticizes two model predictions which are supposedly representative of the IPCC's: the 1988 prediction by Hansen, and the projections of the IPCC First Assessment Report (FAR). He says of them:
    "The climate models have been essentially the same for 30 years now, maintaining roughly the same sensitivity to extra CO2 even while they got more detailed with more computer power."
    Oddly, in the IPCC Third Assessment Report (TAR) we read:
    "IPCC (1990) and the SAR used a radiative forcing of 4.37 Wm-2 for a doubling of CO2 calculated with a simplified expression. Since then several studies, including some using GCMs (Mitchell and Johns, 1997; Ramaswamy and Chen, 1997b; Hansen et al., 1998), have calculated a lower radiative forcing due to CO2 (Pinnock et al., 1995; Roehl et al., 1995; Myhre and Stordal, 1997; Myhre et al., 1998b; Jain et al., 2000). The newer estimates of radiative forcing due to a doubling of CO2 are between 3.5 and 4.1 Wm-2 with the relevant species and various overlaps between greenhouse gases included. The lower forcing in the cited newer studies is due to an accounting of the stratospheric temperature adjustment which was not properly taken into account in the simplified expression used in IPCC (1990) and the SAR (Myhre et al., 1998b). In Myhre et al. (1998b) and Jain et al. (2000), the short-wave forcing due to CO2 is also included, an effect not taken into account in the SAR. The short-wave effect results in a negative forcing contribution for the surface-troposphere system owing to the extra absorption due to CO2 in the stratosphere; however, this effect is relatively small compared to the total radiative forcing (< 5%). The new best estimate based on the published results for the radiative forcing due to a doubling of CO2 is 3.7 Wm-2, which is a reduction of 15% compared to the SAR. The forcing since pre-industrial times in the SAR was estimated to be 1.56 Wm-2; this is now altered to 1.46 Wm-2 in accordance with the discussion above. The overall decrease of about 6% (from 1.56 to 1.46) accounts for the above effect and also accounts for the increase in CO2 concentration since the time period considered in the SAR (the latter effect, by itself, yields an increase in the forcing of about 10%)."
    (My emphasis.) A 15% reduction in estimated climate sensitivity is not "roughly the same sensitivity". What is more, early climate models included very few forcings. Evans comments on that in his video, saying (falsely) that they only include CO2 and do not include natural forcings. However, models used in the Third and Fourth Assessment Reports most certainly used natural forcings, as well as a wide range of anthropogenic forcings. Therefore the claim that "[t]he climate models have been essentially the same for 30 years now" is simply false. More troubling is the graphic Evans uses: first, we have the label indicating the projections dependent on CO2 emissions, as if CO2 were the only forcing modeled by Hansen. Indeed, in the video, Evans explicitly states just that, i.e., that CO2 was the only modeled forcing. In fact Hansen included five different anthropogenic gases in each model run, so checking just CO2 emissions does not check how well reality conformed with any particular scenario. Far worse, he labels Scenario A as "CO2 emissions as actually occurred". What actually occurred, and was entirely unpredicted by Hansen, was that the Soviet Union collapsed, resulting in a massive reduction of very polluting Soviet Bloc industry, with a consequent massive reduction of CO2 emissions from the Soviet Bloc. As a result, current CO2 levels (ignoring seasonal variation) are only 390.5 ppmv, which compares to the 391 ppmv projected by Hansen for 2011 in Scenario B. In other words, Evans is claiming that CO2 emissions followed Scenario A whereas in reality they have not yet caught up to Scenario B.
Here are the current concentrations of the other greenhouse gases used in Hansen's model:

    Gas   | Actual   | Hansen (scenario)
    CH4   | 1810 ppb | 1920 ppb (Scenario C)
    N2O   |  323 ppb |  330 ppb (Scenario B)
    CFC11 |  240 ppt |  275 ppt (Scenario C)
    CFC12 |  533 ppt |  961 ppt (Scenario B)

So, for every gas modeled, the actual 2011 concentration is less than the projected Scenario B concentration, often much less. In two cases, even the Scenario C projected concentration is greater than the actual concentration; yet Evans says that Scenario A emissions are what happened. Given the size of the discrepancies, there are only two possibilities: either Evans did not bother looking up the data before making his assertion - an assertion he has made repeatedly while strongly emphasizing his expertise - or he is flat out lying. Since Clyde introduced Evans' rubbish to this discussion, he now needs to answer several questions: Do experts make assertions about data which they have not bothered looking up? Do they lie? And why, given that they are supposedly so skeptical, have no fake "skeptics" picked up on these errors and criticized Evans for them? Finally, for a proper analysis of those predictions, I recommend the posts by Dana on Hansen's 1988 predictions, and on the predictions of the First Assessment Report. I don't think Dana claims to be an expert on climate modeling, but at least he treats the data with integrity.
    Response:

    TC: I again request that we avoid dogpiling. As Jim Eager has withdrawn, I ask that only Bob Loblaw and Skywatcher, as people with directly relevant experience to the topic make further responses to him. I ask that posters forgive my slight hypocrisy also responding, but I am sure you will understand my distaste for the introduction of Evans' effort in what hopes to be a conversation guided by evidence and aiming at truth. In the event, future responses to Clyde other than by Bob Loblaw or Skywatcher will be deleted as dogpiling, unless either withdraws, in which case Muoncounter can take up the cudgels.

    [DB] Lest some think that this is moderation by fiat by TC, this action has the full support of the moderation staff of SkS. TC has merely implemented a jointly discussed and approved action.

  3. Tom Curtis “How reliable are climate models?” (# 527: 10.38 am 1 June 2012) To Tom Curtis: I’m seeking clarification here, so I hope these questions do not fall into the category of “dogpiling” (a term I’m loath to use, given its homophonic noun alternative). I note your criticism of David Evans’ chart overlaying Hansen’s 1988 predictions with NASA satellite data of global air temperatures, where you demonstrate that the current CO2 level corresponds closely to Hansen’s prediction for Scenario B. Is your main point in that part of your analysis that Evans should have noted that Scenario B “actually occurred”? Or is there more that I have missed? I’m a bit confused by your paragraph “So, for every gas modeled, the actual 2011 concentration is greater than the projected scenario B concentration, often much greater. In two cases, even the scenario C projected concentration is greater than the actual concentration; yet Evans says that Scenario A emissions is what happened.” Does the four-row table above that paragraph show the 1988 figures used in Hansen’s modelling? My last question is whether there is any issue with the Evans chart of the actual NASA air temperatures – the black line? I’d appreciate your help. Thank you.
  4. (snip) As to JoNova/David Evans misinformation - well, look around Skepsci for takedowns (e.g. the hot spot and Evans (snip). Hansen's 1984 model - yes, it had sensitivity wrong, for well understood reasons; see Lessons from past predictions: 1981 (and the rest of that series for interest). And yes, climate sensitivity is still uncertain, but very unlikely to be less than 2 (or more than 4) - but claiming a past prediction is falsified by data on sensitivity doesn't fly when sensitivity wasn't a robust prediction. (snip)
    Response: TC: With regret, comments snipped for compliance with the no-dogpiling rule. Explicit discussion of Hansen 1988 was retained as being a relevant response to my post, rather than to Clyde. If either of the two current respondents to Clyde wish to step aside in your favour, I shall restore your comment (if the HTML code is working as it should). In that respect I note that your expertise is in computer modelling of petroleum basins.
  5. TC - Jim Eager stepped aside 523 so I thought I would continue. Your excellent post was composed as I was composing mine.
    Response: TC: It happens. If it were me being asked to stand aside, I would step aside, in that your post was more directly relevant to Clyde's argument, and you are far more qualified on this topic than I. As it happens, Skywatcher beat you to the punch. I consider my post an aside to rebuff the use of outrageously flawed misinformation.
  6. With Jim Eager stepping back and several others chiming in, and another request to avoid dogpiling, for the moment I will restrict my comments to a few direct statements from Clyde. First: "Which is harder/more complicated - writing code for a GCM or writing HTML code for a website?" I consider this to be a rather misleading question. It's like asking "Which is harder to edit: a book written in English on the growth of multinational corporations, or a book written in German on how to rebuild the engine of a Leopard tank?" Both require editing skills, but one benefits from a knowledge of economics and business (and the English language), while the other is easier if you have a detailed understanding of internal combustion engines, tools (and can read and write German). ...but, to answer your question: given the specifications of the required procedures, any competent programmer can likely implement the algorithms. The major stumbling block is in determining the algorithms to use, and someone who knows climatology and numerical methods will do better at a GCM, while someone who knows HTML and graphics displays (or whatever the web page is supposed to do) will do better on the web page. In my personal experience (writing web pages by creating HTML in a text editor), the programming skills I developed in my climatology career made HTML a trivial exercise (for relatively simple web pages), and programmers that only know web development do a poor job at any sort of scientific/numerical programming.
  7. Peter42 @528, the "no dogpiling" rule is designed to prevent people - normally new questioners and/or skeptics - from facing an overwhelming number of responses and being unintentionally intimidated. It is certainly not intended to stop side conversations. That said, discussion of Hansen's 1988 predictions is probably off topic here, and should probably be carried across to Dana's excellent post on the topic. My point was not specifically about Hansen's predictions, except that Evans completely misrepresents them, either through ignorance where he claimed knowledge, or through willful deceit; and that therefore reference to his claims has no place in any intelligent discussion of this topic. That said, none of the three scenarios actually occurred, although what actually occurred more closely approximates to B than either other scenario. As Dana says:
    "Total Scenario B greenhouse gas radiative forcing from 1984 to 2010 = 1.1 W/m2. The actual greenhouse gas forcing from 1984 to 2010 was approximately 1.06 W/m2 (NASA GISS). Thus the greenhouse gas radiative forcing in Scenario B was too high by about 5%."
    Note that Dana uses a higher value for CO2 concentration than I do, presumably because he got his data from a different source. And yes, there are issues with Evans' relative placement of predictions and temperatures (to compare trends, the trend of the different projections should be centred on the trend line of the data at the initial point of the graph to avoid misleading visual cues) and with his choice of HadCRUT3 data rather than the more accurate GISTemp or NCDC temperature records.
  8. Re: Clyde's mention of Pielke Sr. and his "challenge": I have no interest in going to Pielke's web site to find out what sort of "challenge" he has issued. He does not allow comments at his blog (last I visited), and I have no interest in trying to "engage" in a one-sided conversation completely in his control. If you wish to place a comment here describing what you understand the challenge to be, then I would be willing to discuss it with you. Pielke Sr. has participated in discussions here at SkS (in some of the blog posts I linked to above), and at Real Climate, and I have debated with him during those discussions. He is free to return here where we can debate on even terms. Since you don't have the time to read the many blog posts I referred you to, I will only suggest that you read this comment of mine on one of those threads, which may explain why I have no respect for Pielke Sr. as a scientist.
  9. Re: Clyde's definition of "computer modeller" and pointers to blogs that purportedly show model weaknesses. You have utterly failed to provide a useful definition of "computer modeller". It is the equivalent of telling me that a "frobnitz gleabinator" is someone who can "gleabinate a frobnitz". As for your link to JoNova's site: I see no point in going to a blog written by someone with no basic understanding of climatology. Others have already posted critiques of that information, and I see no need to add to them now. As I mentioned in my earlier request, please provide links to real scientific literature (Pielke Sr. also doesn't count in this area of expertise), or at least web sites where real scientific information is presented. When you post such links, please provide at least a short description (in your own words) of just what it is I should expect to find there. Now, to continue this discussion, can you please provide me with answers to the following questions that I have already posed to you: 1) What is your definition of a "computer modeler"? 2) On what basis do you claim that any particular "climatology expert" is not knowledgeable about computer modeling, and how would this affect the work that they are doing? 3) What else would I need in my background to convince you that I know enough about "computer modeling"? (My background was presented in this comment.)
  10. Tom, that first graph you show is even more egregious for another reason - the projections are shown by Evans as being initialised from a single high point in the noise of the temperature record in 1988. In reality, Hansen's model runs begin before 1960, and the individual runs are already diverging by 1988, depending on the different settings [e.g. scenario A has no volcanic forcing after 1988] of the model: the scenarios A, B and C are spread over ~0.2°C around 1988, and A does not cross B or C after this point (contrast Evans' A and B separation with Hansen's A and B). By doing this, Evans greatly exaggerates the discrepancy between modelled and observed temperatures, a discrepancy not actually present (see the detail here at RealClimate, 3rd fig). Below is my estimate of the positions in 1988 of Hansen's Scenarios A, B and C (shaded grey circles), and Evans' start point for all three (blue square), based on GISS (used by Hansen) and UAH (used by Evans) data, with the temperature plots offset so they overlap in the WoodForTrees plotting package: the reality is that much of the visual discrepancy in Evans' chart is a consequence of his misrepresenting the positions of the model runs with respect to the 1988 temperature. He thus shifts all three model runs much too high compared to the temperature. Readers are left to ask why Evans chose to start all three model-run plots from the same spot, a position higher than any of the model runs as they were actually presented in Hansen's 1988 paper. They can then ask why Clyde thinks this is a good example of models not reproducing reality...
  11. Bob Loblaw 533 I read the comment you requested & understand your feelings about Pielke Sr. To the best of my knowledge his reason for turning off the comments on his blog was to avoid dealing with name-calling/childish behavior. So far I haven't had that problem with you or anybody else. This is not the only time he has issued the challenge & it's not the exact page I was looking for, but it does have the info needed if anybody wants to refute his claim. I would think if the models are as good as some say this should be an easy task. Read more here: "This label, of course, can be avoided if the researchers provide quantitative model and observational comparisons of multi-decadal regional and local predictions of changes in climate statistics, and show them to be skillful in terms of what metrics are needed by the impacts community. I invite anyone who has published such a study to present a guest post on this weblog alerting us to such a robust scientific study."
  12. Bob Loblaw 534 First let me say I was only curious as to which was more complicated, GCM or HTML coding. I have no experience in GCM coding (big surprise, I know -_^). In the very little HTML coding I've been involved with, it's a pain in the butt. 1) What is your definition of a "computer modeler"? Somebody who can write the code & has the computer to run the code. 2) On what basis do you claim that any particular "climatology expert" is not knowledgeable about computer modeling, and how would this affect the work that they are doing? I'm going by, say, a doctor. A heart surgeon can operate on a heart, but that doesn't mean they can write the code & run it on a computer. If my comment left the impression I don't think any scientist has the ability to do both, that wasn't what I meant. 3) What else would I need in my background to convince you that I know enough about "computer modeling"? (My background was presented in this comment.) I don't recall saying you don't know enough about "computer modeling." I only "know" you from the brief interaction we've had here. The reason I asked you the question about the different coding was because you said you have written scientific code before.
  13. I hope this is not off-topic for this thread. The reason I think it's not is that the paper gives more evidence of past warming being equal to or greater than today's, making climate models, in my view, not reliable enough to pass new laws & regulations. You only get a small part & have to pay to read the full paper. No, I didn't pay to read the full paper. Another paper with evidence of past warming being equal to, if not more than, today's. My apologies again for this mistyped hyperlink.
    Response: [DB] Your link is indeed off-topic for this thread. Future off-topic comments will receive moderation.
  14. Clyde - that natural fires occur is not evidence against arson, but it would be better to direct comments to Climate has changed before. Computer models do "predict" past warming - it's just that the forcings are different.
  15. Clyde #537 - I see you have attempted to answer Bob's questions and mine (both are similar). However, your answers consciously avoid any statement of why the expert in a field (heart surgery or climate) cannot become an expert modeller of a process in that field. Why is it that somebody who has attained skill in understanding the processes of how something works is precluded from encapsulating that knowledge in computer code? What I want to know is this: What is unique about a "modeller", such that neither a climatologist nor a heart specialist can ever become one? How, in your opinion, do you become a "modeller"? Exactly what are the unique skills a modeller has? You see, fundamentally, what a "modeller" is, in this context, is someone who has the ability to generate computer code that results in an approximate representation of one or more processes in the climate system. They will have the ability to test that code, and to validate that code against expected results using synthetic data, as well as against real-world data. They can then correct their code or adjust the uncertainties accordingly. They will be able to estimate the uncertainties in their results and evaluate the strengths and weaknesses of their model. This is a technical skill, but one that is eminently achievable by physical/environmental scientists. By doing so, they become specialist climate modellers. You have still provided not one shred of justification as to why such scientists cannot do this. I don't actually believe you are willing to answer these questions adequately. Your subsequent casual comment concerning a Greenland climate paper equally shows you have little understanding of climatology, palaeoclimate, forcings, and regional versus global variations, to add to your evident failure to substantiate your original disparaging claims about climate modellers. Did you think warming/cooling was globally monotonic?
    Response: [DB] Fixed link.
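The test-validate-correct loop described in comment 15 can be made concrete with a toy sketch. Everything below is illustrative and hypothetical -- a linear-trend "model" and fabricated observations, nothing like a real GCM or a real temperature record -- but it shows the three steps: check the code against synthetic data with a known answer, quantify the mismatch against noisy "observations", then decide whether to correct the code or widen the stated uncertainties.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(t, trend):
    """Toy 'model': temperature anomaly as a linear warming trend (degC)."""
    return trend * t

t = np.arange(100.0)                      # years

# Step 1: validate against synthetic data with a KNOWN answer.
truth = 0.02 * t                          # synthetic record: exactly 0.02 degC/yr
assert np.allclose(model(t, trend=0.02), truth)   # code reproduces the known input

# Step 2: compare against noisy "observations" and quantify the mismatch.
obs = truth + rng.normal(0.0, 0.1, size=t.shape)  # add 0.1 degC of weather noise
residuals = obs - model(t, trend=0.02)
rmse = float(np.sqrt(np.mean(residuals**2)))
print(f"RMSE vs observations: {rmse:.3f} degC")

# Step 3: if the RMSE greatly exceeded the known noise level, we would either
# fix the code ("correct their code") or widen the stated error bars
# ("adjust the uncertainties").
assert rmse < 3 * 0.1
```

Here the RMSE comes out close to the 0.1 degC noise we injected, so neither a code fix nor wider error bars is needed; the point is only that the check is routine, mechanical science, not evidence of untrustworthiness.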
  16. skywatcher 540 "They will have the ability to test that code, and to validate that code against expected results using synthetic data, as well as against real-world data. They can then correct their code or adjust the uncertainties accordingly." That's part of my reason for not trusting models. Correct their code or adjust uncertainties. If laws & regs are passed based on current models that will need adjustments & corrections, then why pass said laws & regs? Most of the adjustments & corrections I've read about are always to make the temp higher. (-Snip-) I've answered your other questions in my 537 post. You feel they're not "adequate." I noticed some jumped all over JoNova, & nobody has refuted the papers in my 520 comment. (-Snip-)
    Response:

    [DB] Imputations of impropriety snipped.

    Off-topic snipped.

    Please construct comments in better compliance with the Comments Policy.

  17. The link to "Only In It For The Gold" is old. It takes you to a page that redirects you to the site below. http://init.planet3.org/
  18. Clyde @ 541... "Most of the adjustments & corrections I've read about are always to make the temp higher." Um, I would suggest that's clearly not the case. In general, climate sensitivity estimates have come down slightly. Back in the 80s, Hansen was estimating 4.2C for climate sensitivity (based on models and empirical research), and since then that's been adjusted down closer to 3C for 2xCO2. Even more recently, research is showing that some of the very high estimates of CS are less likely, thereby pushing the most likely CS down a smidge from that, to around 2.8-2.9C.
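The sensitivity figures in the comment above are easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the standard simplified CO2 forcing expression from Myhre et al. (1998); the two sensitivity parameters are chosen here purely to reproduce the 4.2C and ~3C figures quoted above, not taken from any particular model.

```python
import math

# Simplified CO2 forcing expression (Myhre et al., 1998): dF = 5.35 * ln(C/C0)
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)          # W/m^2

dF_2x = co2_forcing(2.0, 1.0)               # forcing for doubled CO2, ~3.7 W/m^2
print(f"2xCO2 forcing: {dF_2x:.2f} W/m^2")

# Equilibrium warming dT = lambda * dF; lambda values below are chosen to
# match the sensitivities quoted in the comment (illustrative, not sourced).
for lam, label in [(1.13, "Hansen-era ~4.2C estimate"), (0.81, "modern ~3C estimate")]:
    print(f"{label}: dT = {lam * dF_2x:.1f} C per doubling")
```

The forcing per doubling (~3.7 W/m^2) is fixed by the logarithmic expression; the downward revision Rob describes is entirely in the sensitivity parameter, not in the forcing.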
  19. Clyde - I pointed you to RC over your 520 "paper". The reason why action is needed, even with uncertainties, is because the low end of the uncertainty range is bad and the high end is very, very bad (uncertainty cuts both ways). Heard of the precautionary principle? It's great that you are interested enough in truth to come here rather than just haunting disinformation sites, but it appears you have some predetermined opinions which are seriously uninformed. Please take time to look for the real answers (backed by published science) rather than just assuming things (like that climate scientists aren't competent modellers, that models can't explain past climate change, etc.). Take a good look over the skeptical argument list (top left button).
  20. Clyde @537: 1) You are still just providing a circular definition of "computer modeller". If you don't know what a circular definition is, look it up in the dictionary under "definition, circular". Or admit that you don't have a definition. 2) I'm not interested in analogies with heart surgeons or doctors. I want you to identify an actual, real "climate expert" that you know of, and explain why that person is not "knowledgeable about computer modeling, and how would this affect the work that they are doing" (to quote my original question). In other words, what is it you think that they are doing that is weakened by your belief that they have insufficient knowledge of "computer modelling"? Or admit that you don't actually have any specifics that you can use to back up your claim. 3) You said that I "don't know enough about computer modeling" in this comment here, where you said "Why is it that folks who critique AGW are dismissed if their not experts in climate science, but we should just accept a climate scientist's work on models when their not experts in computer modeling?" You've cast a pretty wide net with that general claim, and as the old saying goes, "I resemble that remark". - I have studied climatology through a Physical Geography program (B.Sc. and Ph.D.). - I have taught climatology in a major Canadian research university (in a Geography department). - I have published journal papers on my research in reputable scientific journals. - My research included writing/coding and using "climate models". I think this is sufficient to be called a "climate expert". - I took one first-year "computer science" course in the 1970s. - I stopped taking mathematics courses after first-year calculus and algebra. I think that makes me someone that you might think of as "not an expert in computer modelling". Yet, somehow, I still wrote computer models of climate. Please, tell me what it is you think I need in my background to convince you that I actually knew what I was doing?
Surely, with my weak "computer training", I must be an easy target for you to criticize. If you can't argue that I fit your broad, sweeping generalization, then who does? (Which takes us back to point 2.) Back up your claim, instead of just avoiding it. Or admit that you're wrong.
  21. Clyde: I'm not interested in going to Pielke's web site. Please provide a short description of what you think his "challenge" is, and I will discuss it with you here. ("Here" being subject to the assumption that it is relevant to this particular topic, which is the reliability of climate models. If it isn't, please pick another thread and point me to it.)
  22. Clyde @ 541: You say "That's part of my reason for not trusting models. Correct their code or adjust uncertainties." Are you really telling me that if I write a model, and I find that there are differences between it and measurements, and I either - figure out what my model is doing incorrectly, and make it better ("correct the code"), or - decide that this means that the uncertainties in my model are greater than I thought they were when I had the more limited (and less different) measurements to compare it to ("adjust uncertainties") ...that you would decide that I am a bad scientist and not to be trusted? What actions or characteristics would make you trust a scientist faced with data that differs from a model?
  23. Clyde #541: And with that, you show unequivocally that you really don't have an understanding of what a modeller does, and how a modeller goes about their work. In your #537, you did not answer the specific questions at all, as you stated that a modeller is simply someone who can "write the code". Climate modellers around the world can "write the code"! That part is easy! The hard part is validating the code. But you have, as yet, given absolutely no explanation as to why all these people who can "write the code" cannot write and validate a good climate model. You additionally, as Bob says, give no explanation as to why checking/changing a model, having found a discrepancy with real-world data, is anything other than good science. I wonder if you can furnish us with a specific example of the occasions where adjustments "make the temp higher", because to me it sounds like you are confusing temperature reconstructions with climate models. You are also, by this statement, indirectly attributing deliberate motivations to the approaches of scientists. Do you actually believe anybody wants temperature to be higher? In other respects, I concur entirely with what Bob says.
  24. I told you you would be wasting your time. Clyde isn't really answering your questions, and he isn't allowing himself to be pinned down. When you do pin him down, he just switches to another argument (which is then deleted). He's not here to learn.
    Response: TC: Indeed. If Clyde does not very shortly answer some of the questions directed at him with answers that would actually substantiate his initial claims, or else acknowledge those initial claims to have been in error, or misinformed, this discussion will be in danger of violating the "no excessive repetition clause" of the comments policy.
  25. Perhaps some personal experience may be illuminating. Lest this be perceived as "dogpiling", I'm happy to respond in the context of skywatcher's claim "It is much easier to begin with an understanding of climate and physics, and graduate onto writing computer code, which is fundamentally not that difficult to do, than the alternative." I am part of a team of three people that writes scientific software (not climate-related). It involves modelling, calibration, error estimation, and 3D graphics, and the consequences of mistakes can be extremely serious. Two of us -- myself included -- are computer science graduates. The third has a PhD in the field that the software is actually used in. My CS degree was very heavy in mathematics (an option I took because I love maths) and the software is very maths-intense, which is obviously an advantage. I understand how the software works, and can explain it to others. However, the scientific innovations in the software usually come from the guy with the PhD in the field. I normally take his working implementation and optimise the hell out of it, as well as do all the 3D graphics stuff, etc., but he usually comes up with the core algorithms. He had no formal computer science training, and learnt most of his coding "on the job". His implementation is still often far from perfect, especially performance-wise, and he isn't aware of a pretty large body of knowledge about how to implement things well, but it still works. If he didn't have us, I believe he could still have produced working software that would have done the job, although it would have been orders of magnitude slower, less "fancy" from a user's point of view, and probably much harder to maintain and understand. It certainly wouldn't have been less trustworthy just because it wasn't written by somebody with a CS degree.
OTOH, if we didn't have him, we could still have written some software (I know, because we had more primitive software 15 years ago when he joined), but it wouldn't have been as sophisticated and it certainly would have taken us a lot longer to think up the algorithms that he has developed over the years. Writing computer code can either be extremely easy or the most difficult thing a human being can attempt to do. It depends on the nature and complexity of the code. Scientific code is generally not that complex from a computer-science point of view -- the important parts are simply direct transcriptions of mathematical expressions -- and so, speaking as a computer scientist, it doesn't bother me in the least if no computer scientists are involved in the writing of GCMs. What they are possibly missing out on is optimised, multi-threaded implementations with whizz-bang 3D GUIs and easy-to-maintain code, but that doesn't change the correctness or reliability of the models. I also have no problem categorising people with a few decades of experience of writing code without CS degrees as "computer modellers". My PhD supervisor, like virtually all CS academics of his generation, had degrees in other disciplines (physics, in his case). It would be pretty absurd to classify me as a computer modeller but not the people who taught me!
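"Direct transcription of mathematical expressions" can be illustrated with the classic zero-dimensional energy balance model, C dT/dt = S(1 - alpha)/4 - eps*sigma*T^4. The sketch below is a teaching toy, not a GCM; the effective emissivity and heat capacity are illustrative tuned values, not measured constants.

```python
# Zero-dimensional energy balance model: C dT/dt = S(1 - alpha)/4 - eps*sigma*T^4
# Each line below is a direct transcription of the corresponding piece of math.
S     = 1361.0      # solar constant, W/m^2
alpha = 0.30        # planetary albedo
eps   = 0.61        # effective emissivity (tuned toy value standing in for the greenhouse effect)
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
C     = 2.0e8       # heat capacity per unit area, J/(m^2 K) (illustrative)

def dT_dt(T):
    absorbed = S * (1 - alpha) / 4.0   # mean absorbed solar flux
    emitted  = eps * sigma * T**4      # outgoing longwave flux
    return (absorbed - emitted) / C

# Simple Euler integration until the temperature settles.
T, dt = 255.0, 86400.0                 # start at the bare-rock temperature; 1-day steps
for _ in range(20000):
    T += dT_dt(T) * dt
print(f"Equilibrium temperature: {T:.1f} K")   # ~288 K with eps = 0.61
```

With eps tuned to 0.61 the model settles near Earth's observed ~288 K surface temperature, which is exactly the point being made: the code is trivial; the physics and the validation are where the expertise lives.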

© Copyright 2024 John Cook