




How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.

Modelling of the planet's climate system dates back to the late 1960s. It involves incorporating the equations that describe the interactions between all the components of our climate system. Climate modelling is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather forecast models rarely extend to even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes, over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.
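To see why a trend is robust against single extreme events, here is a toy numerical sketch (Python, with entirely made-up numbers; it illustrates the statistics only, and is not output from any climate model): one anomalously warm year barely moves a 30-year least-squares trend.

```python
# Toy illustration with made-up numbers: a single extreme year barely
# moves a 30-year least-squares trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1991, 2021)                      # a 30-year window
temps = 0.02 * (years - years[0])                  # assumed trend: 0.02 C/yr
temps = temps + rng.normal(0.0, 0.1, years.size)   # year-to-year "weather"
temps[10] += 0.5                                   # one extreme, uncommon event

slope = np.polyfit(years, temps, 1)[0]             # fitted trend, C per year
print(f"fitted trend: {slope:.3f} C/yr")           # stays close to 0.02
```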

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then, either because we made observations or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming. Multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings adequately explain temperature variations prior to the rise of the last thirty years, but none of them can explain that recent rise. CO2 does explain it, and explains it completely, without any need for additional, as-yet-unknown forcings.
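The logic of that hindcast test can be caricatured in a few lines of code. The sketch below is a deliberately crude zero-dimensional energy-balance model; every parameter value is an assumption chosen for illustration, not taken from any published model. It runs the same model with and without a CO2 forcing term, showing how the run without it fails to reproduce the late-century rise.

```python
# Crude zero-dimensional energy-balance sketch of the hindcast logic:
# without the CO2 forcing term, the late-20th-century rise is missing.
import numpy as np

LAM = 0.8   # assumed sensitivity parameter, K per (W/m^2)
TAU = 8.0   # assumed response timescale, years

def run_ebm(forcing):
    """Relax the temperature anomaly toward LAM * forcing each year."""
    temp = np.zeros(forcing.size)
    for t in range(1, forcing.size):
        temp[t] = temp[t - 1] + (LAM * forcing[t] - temp[t - 1]) / TAU
    return temp

years = np.arange(1900, 2021)
natural = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11)  # stand-in solar cycle
co2 = 0.02 * np.maximum(years - 1960, 0)                 # assumed CO2 forcing ramp

with_co2 = run_ebm(natural + co2)
without_co2 = run_ebm(natural)
print(f"2020 anomaly with CO2: {with_co2[-1]:.2f} K; without: {without_co2[-1]:.2f} K")
```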

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that the scientists of fossil fuel giant Exxon knew all about climate change, all along. A recent study of the company's own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The ever-changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges for climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial:

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

Click the thumbnail for the concise fact brief version created in collaboration with Gigafact:


Comments


Comments 576 to 600 out of 620:

  1. Bob, as you saw in my second comment, I realized after my first comment that I had in fact assumed a model. My model assumes the rise in CO2 is causing a rise in water vapor and a larger addition to global temperature (57% vs 43%) than the CO2. The question is how well this model applies to the future. There could be long-term positive feedback from other sources (e.g. methane) which I am not considering. I am just looking at short-term WV feedback and whether the present feedback will continue. I had a much longer post about WV feedback in models, but the preview erased my comments, possibly due to some bad format code in text I cut and pasted from two different papers. I referred to a report entitled "Climate Models: An Assessment of Strengths and Limitations", which answers one of my concerns about the unevenness of WV in a sidebar on p. 24. It refers to a paper by Ramaswamy which I could not find, but I found a similar paper by Held and Soden (2000): http://www.dgf.uchile.cl/~ronda/GF3004/helandsod00.pdf On p. 450 of HS00, they talk about the importance of circulation in determining the distribution of water vapor. I agree with their final remark indicating that satellite measurements of the distribution of WV should validate modeled WV distribution by 2010 or very likely by 2020 (they wrote that in 2000). There should be recent papers on that topic which I need to look for. What it boils down to is this: if water vapor is unevenly distributed, there will be less WV feedback, and that will be determined by circulation (in reality and in the models).
  2. Eric: "The question is how well this model applies to the future." No, the question I was asking was "What kind of model could be used to support a claim of "significant contributing factor", but would not also have an estimate of sensitivity built into it?" Perhaps you are now accepting that any models used in the short-term will have a sensitivity "built in"? Perhaps what you are wanting to do is to argue about the uncertainty in those results? The question of whether that model works well is a different issue from whether or not it has a "sensitivity built into it". Any model that quantifies the extent to which human activities (i.e., CO2 increase) have contributed to current warming must also have an associated "sensitivity built into it". It's simply a question of how you run the model and how you process the results. You are diverting the discussion into an evaluation of different feedbacks. The models, when used to look at recent warming, are basically the same models that are used to estimate 2xCO2 sensitivity. They incorporate the same feedbacks. They incorporate the same uncertainties in feedbacks. I have grabbed the Climate Models report you refer to, and it does talk about water vapour feedback uncertainty, but the question (in my mind) is: ...why do you decide that all the uncertainties are wrong, and the climate sensitivity of the models (which is what is used to decide on the uncertainty) can't be trusted - and indeed you are convinced the sensitivity is too high? You seem to trust the models in the short-term, you seem to feel that something is not handled properly in the long-term, and then you use that lack of trust to argue for a greater certainty/less uncertainty (at the low end) than the scientists come up with. It appears to me that you accept the WV feedback in the short term, and are convinced that the models do it wrong in the longer-term, and then conclude that the only possible correct answer is the one at the low end of the uncertainty. The documents you mention are expressions of that uncertainty, not an argument that the correct answer is at the low end. Your decision at the end of the logic/evidence chain, that the sensitivity is at the low end of the scientists' range, looks like you're just applying magic. [posting note: I've found that if you forget to close an href tag (i.e., leave off the closing > after the link), the editor will drop everything after that point in the text box. I've made the habit of doing ^A ^C to select everything and copy it to the clipboard before I hit "Preview". When I'm feeling particularly unsure, I paste it into a text editor to be sure I've got a copy.]
    Response: [Sph] If this happens, the content of your comment will probably still be there, but just be invisible. Simply post another comment asking a moderator to repair your post, and it will probably be done fairly quickly.
  3. Bob, first, I agree with your posting note; some of my best arguments have remained hidden inside unclosed HTML tags. To answer your "It appears to me" paragraph, I accept WV feedback in the short term, by which I mean the last few decades in total, since WV can fluctuate naturally over shorter intervals. Running the models for the longer run into the future results in circulation pattern changes and associated localized weather changes. Some of the uncertainty in those changes is known to be at the lower end of sensitivity. For example, the models underestimate the intensity of precipitation, they underestimate the penetration of cold air masses, and they underestimate storm intensity compared to finer-resolution models. These all result in underestimation of negative feedback, in particular underestimated latent heat flux.
  4. Eric: "Some of the uncertainty in those changes is known to be at the lower end of sensitivity." Alas, asserting this beyond the knowledge of the scientists is a "dog that won't hunt". If these things are "known to be at the lower end of the sensitivity", then they aren't "some of the uncertainty". You are engaged entirely in wishful thinking.
  5. Bob, "...wishful thinking." Hmmmmmm.
  6. New study of seven climate models finds skill demonstrated for periods of 30 years and longer, at geographic scales of continent and larger: Sakaguchi, Zeng, and Brunke.
  7. dvaytw, other than Christy, I have not heard of anyone using that for a baseline referent, let alone any climate models. Climate model referents are typically based on 30-year periods or more. Suggestion: Have your friend cite a source for that claim. Because it reeks of bunkum.
  8. I'm sorry if someone has already brought this up, but I don't want to read the entire comments section. An AGW denier friend of mine has brought up this question: "Why oh why do so many models use 1979-1982 as a base?" I dismissed his question for the anomaly-hunting it is, but I am curious if, assuming his observation is correct, anyone here happens to know the answer.
  9. dvaytw, Can you ask your friend if he can cite a single instance of a model that uses a 1979-1982 base? I have never heard of such a baseline. Usually climate models use a thirty-year base. Hansen uses 1950-1979 and others use more recent data. A few special data sets use a 20-year base because the baseline is changing so fast, due to AGW, that 30 years is not representative of the true baseline. It is difficult to counter an argument that is completely non-factual.
  10. On another thread, Snorbert Zangox asked:
    I wonder why, if that works so well, that the models cannot reproduce the past 16 years of temperatures not following carbon dioxide concentrations.
    This one's easy. They can. This is the output from a very simple 2-box model. I wrote it in R in ~20 lines of code (an illustrative sketch of this kind of model appears after this comment). All it does is find the response function which matches forcing to temperature over the past 130 years, with an additional term to account for the substantial impact of ENSO on temperatures. Red is model, blue is GISTEMP. You can see that the model also shows a similar 1998 peak, with a higher trend before and a lower trend after. Why? Because there have been more La Ninas over the past few years, and the difference between an El Nino and a La Nina is roughly equivalent to 15 years of warming (see for example this article). The model reproduces reality very well indeed. Now, since the ENSO cycle is chaotic, we can't predict when a run of La Ninas or El Ninos will occur, so this couldn't be predicted in advance. But if you look in the model runs for real climate models which reproduce ENSO well, you see exactly this sort of behaviour. The models predict it will happen from time to time, but not when. There is a second aspect to your question, which you reveal in the 16-year figure. I guess you are referring to the viral '16 years of no warming' story. Ask yourself the following two questions: 'Why do these stories always use HadCRUT and not GISTEMP?' and 'Why does no-one ever show a comparison of the gridded data?' Now look at this image which shows the change in temperature between the beginning and the end of the period from various sources. Which datasets have the best coverage? What is going on in the regions omitted in the HadCRUT data? You should now understand why HadCRUT shows less warming than GISTEMP over this period.
    I also wonder, admittedly without having thoroughly read the papers, whether we are using an elaborate logical scheme that is circular.
    No, because we are talking about completely different models. If a climate model was being used to determine the aerosol forcing, you would have a potential case; however, we are talking about completely different models. Atmospheric chemistry models are used to describe the behaviour of gases in the atmosphere and are based on physics and chemistry which is observed in the laboratory. The results are combined with radar, IR, microwave and optical measurements to determine the state of the atmosphere - so far everything is empirical. This empirical data is tested against economic variables to determine how well the atmospheric chemistry is predicted by industrial activity. The robust agreement provides a basis for reconstructing atmospheric data from industrial activity before the observation period. The chain of inference is linear, not circular. Furthermore, no climate models are involved. (There appear to be several other approaches. Some involve climate models as a consistency check.)
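For readers curious what the two-box model described in comment 10 might look like in code, here is a minimal sketch. It is written in Python rather than the commenter's R, and all inputs, timescales and parameter values below are placeholder assumptions, not the commenter's actual data or code: it fits fast and slow responses to a forcing series, plus a term for an ENSO index.

```python
# Sketch of a two-box response model with an ENSO term. Placeholder data:
# a real fit would use observed forcings, an ENSO index such as MEI, and
# GISTEMP temperature anomalies.
import numpy as np
from scipy.optimize import least_squares

TAU_FAST, TAU_SLOW = 2.0, 30.0   # assumed response timescales, years

def two_box(params, forcing, enso):
    """Fast + slow exponential responses to forcing, plus an ENSO term."""
    a_fast, a_slow, c_enso = params
    fast = np.zeros(forcing.size)
    slow = np.zeros(forcing.size)
    for t in range(1, forcing.size):
        fast[t] = fast[t - 1] + (a_fast * forcing[t] - fast[t - 1]) / TAU_FAST
        slow[t] = slow[t - 1] + (a_slow * forcing[t] - slow[t - 1]) / TAU_SLOW
    return fast + slow + c_enso * enso

n = 130                                  # ~130 years of annual data
forcing = np.linspace(0.0, 2.5, n)       # fake net forcing, W/m^2
enso = np.sin(np.arange(n) / 3.0)        # fake ENSO index
rng = np.random.default_rng(1)
obs = two_box([0.3, 0.4, 0.1], forcing, enso) + rng.normal(0.0, 0.05, n)

fit = least_squares(lambda p: two_box(p, forcing, enso) - obs, x0=[0.2, 0.2, 0.0])
print("fitted (a_fast, a_slow, c_enso):", fit.x)
```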
  11. So, if I am to accept the science presented here, I must accept that: 1) We have a comprehensive understanding of all the inputs, feedbacks, timing, and interactions of the global climate, 2) We have defined this in a computer model identically with that understanding, 3) We have no bugs or unintended effects programmed into our computer models (Microsoft is very jealous), 4) We have an absolutely accurate understanding of the current climate, and 5) We have done all of this without opening up the models for public review by those who might disagree with them. If not, we have, at best, an approximation with lots of guesses and estimations that might be full of holes and bugs, yet we used it to make predictions that we trust enough to propose spending trillions of dollars in response. Of course, at worst, we have crappy code that can cause more harm than good by convincing us of things we don't know enough about to doubt. Please forgive me if I am still skeptical of the computer models. I do enough modeling in computers to be dubious of anything you get out of an imperfect model.
  12. Jack... You're making completely erroneous assumptions. Even Ben Santer says that models don't do a great job. That is why they rely on model ensembles and multiple model runs. Santer also says that some models are better than others, but ensembles perform better than even the better models. AND weighting the better models makes the ensembles perform even better. It sounds more to me like you are looking for reasons to dismiss the science rather than honestly attempting to understand the science.
  13. Sigh. Are the models, in fact, untestable? Are they unable to make valid predictions? Let's review the record. Global Climate Models have successfully predicted:
    • That the globe would warm, and about how fast, and about how much.
    • That the troposphere would warm and the stratosphere would cool.
    • That nighttime temperatures would increase more than daytime temperatures.
    • That winter temperatures would increase more than summer temperatures.
    • Polar amplification (greater temperature increase as you move toward the poles).
    • That the Arctic would warm faster than the Antarctic.
    • The magnitude (0.3 K) and duration (two years) of the cooling from the Mt. Pinatubo eruption.
    • They made a retrodiction for Last Glacial Maximum sea surface temperatures which was inconsistent with the paleo evidence, and better paleo evidence showed the models were right.
    • They predicted a trend significantly different and differently signed from UAH satellite temperatures, and then a bug was found in the satellite data.
    • The amount of water vapor feedback due to ENSO.
    • The response of southern ocean winds to the ozone hole.
    • The expansion of the Hadley cells.
    • The poleward movement of storm tracks.
    • The rising of the tropopause and the effective radiating altitude.
    • The clear sky super greenhouse effect from increased water vapor in the tropics.
    • The near constancy of relative humidity on global average.
    • That coastal upwelling of ocean water would increase.
    Seventeen correct predictions? Looks like a pretty good track record to me.
  14. Another useful page on model reliability, which provides model predictions, the papers that made them, and the data that verifies them. Incomplete models with varying degrees of known and unknown uncertainties are just part of life - you should see the ones I build to help petroleum companies make multi-million dollar drilling decisions. A model is useful if it has skill - the ability to outperform a naive prediction. The trick is understanding what those uncertainties are and what robust predictions can be made. The models have no skill at decadal or sub-decadal climate (unless there is a very strong forcing). They have considerable skill in long-term trends. You need a mathematical model to calculate detailed climate change, but you don't need a complicated model to see the underlying physics and its implications, nor to observe the changes in climate.
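The "skill" notion in the comment above has a standard quantitative form: compare the model's mean squared error against that of a naive baseline. A minimal sketch (made-up numbers, purely illustrative) follows.

```python
# Skill score sketch: 1 - MSE(model) / MSE(naive baseline).
# Positive means the model beats the naive "no change" prediction.
import numpy as np

obs = np.array([0.10, 0.18, 0.15, 0.28, 0.33, 0.41])    # fake observed anomalies
model = np.array([0.12, 0.16, 0.20, 0.26, 0.35, 0.39])  # fake model hindcast
naive = np.full(obs.size, obs[0])                        # persistence baseline

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

skill = 1.0 - mse(model, obs) / mse(naive, obs)          # 1 = perfect; <= 0 = no skill
print(f"skill score: {skill:.2f}")
```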
  15. Jack, come on. You ask for precision and then vomit a bunch of hearsay. On models, a hypothetical: you're walking down the street carrying a cake you've spent hours making. You see someone step out of a car 900 yards down the street. The person aims what looks to be a 30-06 in your direction. You see a flash from the muzzle. What do you do? You're arguing to do nothing, because the precise trajectory of the slug is unknown -- the aim of the person may be bad, or you may not be the target, or the bullet may have been inferior, or other conditions might significantly affect the trajectory. You wouldn't actually do nothing, though. The only question is whether you'd drop the cake to roll to safety, or whether you'd try to roll with the cake. Your instinctual modeling would calculate a probable range for each variable, and the resulting range of probable outcomes would be fairly limited, and most of those outcomes would be unpleasant. So it is with climate modeling. The variables are variable, but the range for each is limited. Climate modelers don't choose one run and say, "Here's our prediction." They work through the range of probable scenarios. Solar output, for example, is likely not going to drop or rise beyond certain limits. We know the power of the various GHGs, and their forcing is likely going to be not far from the known power (see, for example, Puckrin et al. 2004). Climate feedbacks aren't going to go beyond certain bounds. Even the range of net cloud feedback--still understudied--is fairly well-established. To put it back in terms of the analogy, it's not like the shooter is wearing a blindfold, or is non-human, or the gun is a toy, or the shooter keeps changing from a couch to a sock to a human to breakfast cereal. It's not like the shooter is working under a different physical model than the target. And it's not like people haven't been shot before. There's plenty of geologic-timescale precedent that supports the theorized behavior of atmospheric CO2. Finally, your assumption that we must drop the cake (spend alleged trillions, with the assumption being that these alleged trillions will not re-enter the economy in various ways) is a bad one. We don't have to drop the cake. We just have to be really smart about our moves. We have to have vision, and that is sorely lacking under the current economic and political regime(s).
  16. Daniel Bailey & scaddenp: Thank you, that's helpful to me to see that. However, if Foster and Rahmstorf are correct, then the models are wrong recently because they missed some critical information. One of them has to be 'wrong', since they either explain why warming is hidden or predict that warming happened and isn't hidden. I'm not throwing the GCMs out the window because they need better tuning, but shouldn't we support identifying their weaknesses and correcting them so the GCMs can make better predictions with their next run? I'm not a scientist, I don't play one on TV, but I'm trying hard to better understand all this. I will try to ask better questions as I learn more, but I'm thick-skinned enough to tolerate being berated when I ask a stupid one. However, I am an economist by training, and I do a lot of computer modeling in my job, so I am quite familiar with those aspects of this topic. That's also why the economic argument about a lack of 'real' costs to changing policies is one I dismiss easily. It's the classic 'broken window' proposition: thinking that breaking a window benefits the economy by getting a glass repairman paid to fix the window, who then spends that money on a new TV, which means the worker who made the TV spends his increased wages on a... It only works if you assume that the money to pay the glass repairman was magically created and didn't devalue the remaining currency. Otherwise, you are pulling money from an investment that can increase economic efficiency to spend on a repair to get back to the same level of efficiency you had before the glass was broken. It has been shown in numerous ways to be a flawed proposition, and it also doesn't 'make sense' (no economy has been helped by being bombed by the US). Yet, it gets repeated often to justify spending money on things that don't increase efficiency but cost a lot. I freely admit there are times when it makes sense to spend the money that is being proposed here, but don't try to pretend that there aren't real financial costs.
  17. JackO'Fall @591, 1) If you are an economist, you know that the true cost to the economy of a fee and dividend carbon tax (or similar) is not measured by the cost of the fee alone; and is indeed a small fraction of it. Your characterizing such costs in terms of "trillions" of dollars is, therefore, unwarranted (to be polite). 2) If you are an economist worth your salt, you will recognize that uncompensated negative externalities make the economy inefficient, and you would be advocating a carbon tax to fund the medical costs, plus costs in lost income for those affected, associated with the burning of coal irrespective of your opinions on global warming. 3) If you were a modeler worth your salt, you would recognize the difference between a prediction, and a conditional prediction premised on a particular forcing scenario. A slight difference between a conditional prediction premised on a particular forcing scenario and observations when the actual forcings differed from those in the scenario does not make the model wrong. It just means the modelers were not perfect in predicting political and economic activity ten or more years into the future. The test of the model is the comparison between prediction and observations once the model is run on actual forcings.
  18. Tom Curtis @592, Re 1): Tax and dividend policies are deceptive, in my view. It is not just a wealth transfer, but it changes behavior (as you intend it to do). This change in behavior has ripples, and the ripples have efficiency costs. For example, a carbon tax in India or China would prevent most of the new coal plants from opening (if it didn't, I think it's fair to call the tax a failure). There is no 'second best' option available to replace those plants at a price that is viable (today). Thus, the tax is retarding the economic growth and advancement of millions of our poorest people without actually collecting any revenue from those power plants. There would be no redistribution of wealth as a result; instead there would be a lack of growth and no taxation to show for it. To produce that missing power with a 'greener' technology will indeed push the price tag into the trillions. Unless you have a cheaper way to produce that volume of power. (Side note: I don't want them to build those coal plants for a number of health reasons, but I recognize the economics of it for them and that until they are at a higher economic level, clean isn't a concern for them.) 2) Uncompensated or not, ALL negative externalities cause inefficiencies. Taxing them is helpful in reducing the net effect, but compensating for them is actually counterproductive (it creates an incentive to be 'harmed' and eliminates the incentive to avoid harm). To pay for the costs associated with coal, I would fully support a targeted tax on what causes the medical issues (clean coal [in spite of being a misnomer] produces a lot fewer harmful byproducts than dirty coal, but little difference in CO2). Taxing the carbon would be a very inefficient way to deal with that problem, compared to taxing the release of specific combustion byproducts. But, yes, in a general sense, taxing externalities, such as coal byproducts, is an efficient way to try to compensate for the negative consequences. 3) You are 100% correct that the method you propose to test a GCM would be the best. However, I have never seen a model used that way. Hindcasting is a distant cousin, at best, as the models were developed to account for the known inputs and known climate. Taking the exact models used in AR4 and updating all the unknown variables, specifically CO2 emissions (as opposed to CO2 levels), volcanic activity, and solar output for the following 8 years (unknown to the modeler at the time of finishing their model), you should be able to eliminate the range of results normally produced and create a single predictive result. That should be much more useful to compare than a range of predictions to cover the uncertainty. Comparing that 'prediction' with actual measurements would be the best way to test the GCMs and would even provide a result that 'could' be proven wrong. Having the possibility of being proven wrong by observations is actually a needed step for AGW; otherwise it hardly fits the definition of a theory. Though, I suspect the climate models of today are more accurate than those used in AR4 (at least I hope they are, otherwise our process is really broken). OTOH, if you are a modeler worth your salt, you will freely admit the range of shortcomings in models, the inherent problems with any computer model, the difficulty of trying to model as chaotic and complex a system as our climate, and the dangers introduced with any assumptions included.
At least, those are the caveats I accept in my modeling (except for the difficulty modeling the climate; I have much simpler tasks, but ones with more immediate and absolute feedback to test my predictions).
  19. Jack O'Fall: Please move any further discussion on the economics of climate change to an appropriate thread. It is completely off-topic on this thread. If you have references to economic analyses supporting your position, please provide them on an appropriate thread; otherwise you are engaged in unsubstantiated assertion, which will not get you very far here. (All: Any responses to Jack regarding economics should also be on an appropriate thread.) As far as the rest of the wall of text goes, please note that climate models are attempts to create forecasts based on the known physics, existing empirical data, and the reconstructed paleoclimate record. If you want to "disprove" AGW, meaningfully, you must show the physics is wrong, not the models.
  20. JackO'Fall, probably you do not realize how fundamentally wrong some of your contentions are, due to your admitted lack of background in climate modeling, climatology, and science in general. You are overconfident in your experience with modeling in general, too. Your expressed overconfidence is going to trigger some strong reactions. I hope you do not take offense and retreat, but instead get some humility and learn. Many lay people new to the global warming discussions enter with similar overconfidence. You need to learn the fundamentals. Start with this short set of short videos from the National Academy of Sciences: Climate Modeling 101. Your implication that climate modelers do not want to, or have not considered, improving their models is not just offensive to them, but reflects poorly on your understanding and ascribing of motivations. Modelers are consumed by the desire to improve their models, which you would know if you had even a passing familiarity with their work; every paper on models describes ideas for how the models can be improved. Just one example is the National Strategy for Advancing Climate Modeling. Then look at the Further Reading green box at the bottom of the original post on this page (right above the comments). Your contention that models are not open for public inspection is wildly wrong. A handy set of links to model code is the "Model Codes" section on the Data Sources page at RealClimate. Also see the last bullet, "Can I use a climate model myself?", on the RealClimate page FAQ on Climate Models. (You should also read the FAQs.) The Clear Climate Code project is an open source, volunteer rewriting of climate models. So far it has successfully reproduced the results of the GISTEMP model. But computer models are not needed for the fundamental predictions, as Tamino nicely demonstrated. Successful predictions were made long before computers existed, as Ray Pierrehumbert recently explained concisely in his AGU lecture, Successful Predictions. The vertical line in this first graph separates hindcasts from forecasts by a bunch of models. Your baseless, snide remark about "crappy code" reveals your ignorance of software engineering. You can start to learn from Steve Easterbrook's site. Start with this post, but continue to browse through some of his others.
  21. JackO'Fall Tom Curtis - "The test of the model is the comparison between prediction and observations once the model is run on actual forcings." JackO'Fall - "You are 100% correct that the method you propose to test a GCM would be the best. However, I have never seen a model used that way." Then I would suggest looking at the performance of various models here on SkS, both of "skeptics" and of climate researchers, as well as the considerable resources on Realclimate: Evaluation of Hansen 1981 Evaluation of Hansen 1988 2011 Updates to model-data comparisons And even a cursory look via Google Scholar provides a few items worth considering (2.4 million results?). I find your statement quite surprising, and suggest you read further. "OTOH, if you are a modeler worth your salt, you will freely admit the range of shortcomings in models, the inherent problems with any computer model, the difficulty with trying to model as chaotic and complex a system as our climate, and the dangers introduced with any assumptions included." If you are a modeler worth your salt, you will know that all models are wrong, but many are close enough to be useful. And the record of reasonable models in predicting/tracking climate is quite good. Your statements regarding models appear to be Arguments from Uncertainty - the existence of uncertainty does not mean we know nothing at all.
  22. No such thing as a "throwaway remark" in our brave new world; you can't just roll down the window and toss out litter without finding it stubbornly affixed to your reputation, later on. I hope Jack will deal with Tom and KR's remarks; judging by those I count perhaps a half-dozen assertions on Jack's part that appear to be baseless.
  23. "if Foster and Rahmstorf are correct" If they are correct, then short-term surface temperatures are dominated by ENSO, and so for climate models to have the accuracy you want, they would have to make accurate predictions about ENSO for decades in advance. In practice, ENSO is difficult to predict even months in advance. Is the current rash of La Ninas unusual historically? No, nor would a rash of El Ninos be, but I will bet that if it happens there won't be complaints about models underestimating the rate of warming. Climate models have no skill at decadal prediction, nor do they pretend to. It's a fake-skeptic trick to try and prove models wrong by looking at short-term trends. Climate is 30-year weather trends, and these are the robust outputs of models. There are attempts at decadal predictions - look at Keenlyside et al 2008. How is this one working out? If surface temperatures are dominated by chaotic weather, then instead of looking at surface temperature to validate models, how about looking at indicators that are long-term integrators, e.g. OHC and global glacial volume?
  24. @ Tom Dayton: I never said a modeler doesn't want to improve their model, just that I would like them to continuously improve and that the need to include new feedbacks is not bad. I threw that in there in hopes of showing that I'm not rooting against the models. Apparently I missed getting that across. My apologies. My time is limited, so I know I miss a lot of data out there (and don't have a chance to reply to a lot of what gets written back at me). However, I looked at the GISTEMP link; it doesn't look like a climate model, but an attempt to recreate the corrective actions that go into adjusting the raw data from the temperature stations and producing the GISS results. Still, cool that they are doing it. What I was referring to is a lack of source code with documentation for the GCMs. If it exists, I am clearly wrong and fully retract that statement. I also read the RealClimate FAQs on GCMs, as suggested. It seemed to agree with many of my basic contentions. (They have estimations that they know are off, they don't include everything we know, they are prone to drifting - though less than in the past - and they are primarily tuned by trial and error, not scientific principles [please note the word 'tuned'].) @KR: While both of Hansen's graphs seem to do a good job estimating future temperatures, that's not what I was referring to. I believe Tom Curtis was proposing re-running a 2004 scenario (for example), yet adding in known 'future' levels of things like CO2 and volcano emissions. If that exists, please let me know. Of course, the other link was inconclusive, to be polite. The range of uncertainty for those models is so large it doesn't really tell us anything. A result so broad that it would be difficult for it to ever be wrong is also not very right. @scaddenp: At the very least, if the ENSO correction is more natural variability than previously understood, that is very helpful. In terms of modeling that, it will probably increase the uncertainty range, but would allow for a better run at what I believe Tom Curtis proposed. That may be more helpful in time scales of less than 30 years (I'm not sure anyone will wait 30 years to see if the current models accurately predicted the future). My apologies for the off-topic discussion. I should not have made my initial off-the-cuff economic response, and certainly should not have replied more extensively.
  25. Jack, Anyone who has looked at this issue at all should know that GISS has two web links. One gives their code and documentation for determining the anomaly of surface temperatures and the other gives code and documentation for their climate model. That includes all the source code and documentation that you can desire, including old models. You need to do your homework before you criticize hard-working scientists' efforts. Look at GISS again and find the climate model link.





