How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, one way and another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.
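
To make the "few equations" point concrete, here is a minimal sketch of a zero-dimensional energy-balance model: incoming sunlight balanced against outgoing infrared, stepped forward in time. It is purely illustrative, not any particular research model; the solar constant, albedo and effective emissivity are standard textbook values, with the emissivity crudely standing in for the greenhouse effect.

```python
# Minimal zero-dimensional energy-balance model (illustrative sketch only).
# dT/dt = (absorbed solar - emitted infrared) / heat capacity
S0 = 1361.0       # solar constant, W/m^2
albedo = 0.3      # fraction of sunlight reflected back to space
epsilon = 0.61    # effective emissivity (a crude stand-in for the greenhouse effect)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
C = 4.0e8         # effective heat capacity of a ~100 m ocean mixed layer, J/m^2/K

T = 273.0         # starting temperature, K
dt = 86400.0      # time step: one day, in seconds
for step in range(365 * 200):             # run for 200 years, ample time to equilibrate
    absorbed = S0 * (1.0 - albedo) / 4.0  # average solar input per m^2 of surface
    emitted = epsilon * sigma * T**4      # outgoing infrared
    T += dt * (absorbed - emitted) / C    # update the temperature

print(f"Equilibrium surface temperature: {T - 273.15:.1f} C")  # roughly 15 C
```

Nudging the emissivity down slightly, a crude proxy for adding greenhouse gases, raises the equilibrium temperature. That, in essence, is what far more elaborate models compute in vastly greater detail.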

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating all the equations that describe the interactions between all the components of our climate system. Climate modelling is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can't tell you what the temperature will be on a specific day – that's weather forecasting. Weather forecast models rarely extend even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then either because we made observations or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming. Multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.

Read a more technical version in the "Further details" section below.


Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
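
To illustrate why trends rather than single years are the right target, here is a small sketch using synthetic data. The 0.02 C per year trend and 0.15 C year-to-year noise are assumed, illustrative values, not real observations: individual years are dominated by noise, yet a 30-year least-squares fit still recovers the underlying signal.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1991, 2021)              # a 30-year window
true_trend = 0.02                          # assumed underlying warming, C per year
anomalies = true_trend * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

# Individual years vary by ~0.15 C, comparable to a decade of underlying warming,
# but a least-squares fit over 30 years still recovers the trend.
slope, intercept = np.polyfit(years, anomalies, 1)
print(f"Fitted trend: {slope * 10:.2f} C per decade (true value 0.20)")
```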

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings adequately explain temperature variations prior to the rise of the last thirty years, while none of them can explain that recent rise. CO2 does explain it, and explains it completely without any need for additional, as yet unknown forcings.
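
A schematic of the hindcasting comparison might look like the sketch below. The anomaly series are placeholder numbers invented for illustration; in a real hindcast they would be observed global-mean temperatures and two model runs over the same years, one driven by natural forcings only and one that also includes the extra CO2.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square difference between two anomaly series."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

# Placeholder arrays, for illustration only (units: C, relative to a baseline).
observed         = [0.00, 0.05, 0.12, 0.20, 0.31, 0.45]
natural_only     = [0.00, 0.03, 0.02, -0.01, 0.01, 0.00]
natural_plus_ghg = [0.00, 0.06, 0.13, 0.22, 0.29, 0.43]

print("RMSE, natural forcings only:      ", rmse(observed, natural_only))
print("RMSE, natural + greenhouse gases: ", rmse(observed, natural_plus_ghg))
# The run that includes the extra CO2 tracks the observations far more closely;
# that is the sense in which hindcasting tests a model.
```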

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that the fossil fuel giant Exxon's own scientists knew all about climate change, all along. A recent study of their own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog-post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worthy of part-quotation:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending and continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

A concise fact brief version of this rebuttal was created in collaboration with Gigafact.


Comments


Comments 551 to 575 out of 864:

  1. Hi JasonB, you provide a very interesting perspective there, and I think you make the most important point as well:
    It certainly wouldn't be any less trustworthy just because it wasn't written by somebody with a CS degree.
    This is the key issue - is the code any less trustworthy because somebody wrote it who wasn't, at the core, a CS specialist? I concur with your answer that it is not. I don't doubt that (for example) code I wrote was a lot more 'clunky', poorly commented, inefficient and all the rest than a CS specialist's code! (though clunky 3D graphics were quite fun to do). Equally I suspect the coders of big GCMs are much more skilled at efficient algorithm generation than I ever was, as they need to be, running large computationally expensive programs. The core algorithms that controlled the scientific part of my programs were as you describe them - transcriptions of mathematical expressions, and computationally relatively straightforward to implement. Some algorithms are harder than others, of course! Ensuring they are correctly implemented is where detailed testing and validation comes in, to make sure the mathematics and physics are as good as you can make them. These are then documented and described in relevant publications, as with all good science. All part of the scientific coder's life. Thanks for your perspective.
  2. JasonB: Yes, an interesting and illuminating example. It seems that Clyde is "on hiatus", but to continue the discussion a bit: "Climate Models" (of the numerical/computer-based type) date back to the 1960s, when all computing was done on mainframes. Individuals wrote portions of code, but the mainframes also typically had installed libraries of common mathematical routines. The one I remember from my mainframe days is IMSL, which (from the Wikipedia page linked to) appeared on the scene in 1970, and is still actively developed. Such libraries were typically highly-optimized for the systems they were running on, and brought state-of-the-art code to the masses. (When I hear object-oriented aficionados talk about "reusable code" as if it is some novel concept, I think back to the days when I created my own linkable libraries of routines for use in different programs, long before "object oriented" was a gleam in someone's eye.) Of course, "state-of-the-art" was based on hardware that would compare badly to the processing power of a smart phone these days, and "the masses" were a small number of people that had access to universities or research institutes with computers. When I was an undergraduate student, one of my instructors had been a Masters student at Dalhousie University in Halifax (east coast) when it got the first computer in Canada east of Montreal. The university I attended provided computing resources to another university in Ontario that did not have a computer of its own. JasonB's description of developing algorithms and such is just what doing scientific computing is all about. The branch of mathematics/computing that is relevant is Numerical Methods, or Numerical Analysis, and it is a well-developed field of its own. It's not user interfaces and pretty graphs or animations (although those make programs easier to run and data easier to visualize), and a lot of what needs to be known won't be part of a current CS program. (My local university has four courses listed in its CS program that relate to numerical methods, and three of them are cross-references to the Mathematics department.) This is a quite specialized area of CS - just as climate is a specialized area of atmospheric science (or science in general). The idea that "climate experts" have gone about developing "climate models" without knowing anything about computers is just plain nonsense.
  3. I would also describe myself as a computer modeller (though not in climate, but petroleum basins). My qualifications are geology, maths and, yes, a few CS papers, notably postgrad numerical analysis. My main concerns are about the numerical methods to solve the equations in the code; their speed, accuracy and robustness. Validation is a huge issue. We also have CS-qualified software engineers who tirelessly work on the code as well. What they bring to the picture is rigorous code-testing procedures (as opposed to model testing, which is not the same thing), and massive improvement in code maintainability. Not to mention some incredibly useful insights into the tricky business of debugging code on large parallel MPI systems, and some fancy front-ends. The modelling and software engineering are overlapping domains that work well together. I suspect Clyde thought climate modellers were not programmers at all, imagining people tinkering with pre-built packages. So much skepticism is built on believing things that are not true.
  4. This is my initial comment on here, and firstly thanks to the site for a well-moderated and open forum. I am a hydrologist (Engineering and Science degrees) with a corresponding professional interest in understanding the basics (in comparison to GCMs, etc) of climate and potential changes therein. My main area of work is in the strategic planning of water supply for urban centres and understanding risk in terms of security of supply, scheduled augmentation and drought response. I have also spent the past 20 years developing both my scientific understanding of the hydrologic cycle as well as modelling techniques that appropriately capture that understanding and underpinning science. Having come in late on this post I have a series of key questions to help me place some boundaries and clarity on the subject. But I'll limit myself to the first and (in my mind) most important. A fundamental question in all this debate is whether global mean temperature is increasing. This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty. So, my first question that I'd appreciate some feedback from Posters on is: Q: Is there a commonly accepted (from all sides of the debate) dataset or datasets that the predictive models are being calibrated/validated against? Also happy to be corrected on any specific terminology (e.g. GMT).
  5. opd68 - Rather than validate against a single dataset, it is better to compare with a range of datasets, as this helps to account for the uncertainty in estimating the actual global mean temperature (i.e. none of the products are the gold standard; the differences between them generally reflect genuine uncertainties or differences in scientific opinion in the way the direct observations should be adjusted to cater for known biases and averaged).
  6. opd68 @554, no, there is not a measure of Global Mean Surface Temperature (GMST) universally accepted by all sides. HadCRUT3 and now HadCRUT4, NCDC, and GISTEMP are all accepted as being approximately accurate by climate scientists in general, with a few very specific exceptions. In general, any theory that is not falsified by any one of these four has not been falsified within the limits of available evidence. In contrast, any theory falsified by all four has been falsified. The few exceptions (and they are very few within climate science) are all very determined AGW "skeptics". They tend to insist that the satellite record is more accurate than the surface record because adjustments are required to develop the surface record (as if no adjustments were required to develop the satellite record /sarc). So far as I can determine, the mere fact of adjustments is sufficient to prove the adjustments are invalid, in their mind. In contrast, in their mind the (particularly) UAH satellite record is always considered accurate. Even though it has gone through many revisions to correct for detected error, at any given time these skeptics are confident that the current version of UAH is entirely accurate, and proves the surface record to be fundamentally flawed. They are, as the saying goes, always certain, but often wrong.
  7. Models do not predict just one variable - they necessarily compute a wide range of variables which can all be checked. With the Argo network in place, the predictions of Ocean Heat Content will become more important. They also do not (without post-processing) predict global trends, but values for cells, so you can compare regional trends as well as the post-processed global trends. Note, too, that satellite MSU products like UAH and RSS measure something different from surface temperature indices like GISS and HadCRUT, and thus different model outputs.
  8. Another thought, too, when you talk about "model validation". Validation of climate theory is not dependent on GCM outputs - arguably other measures are better. However, models (including hydrological models) are usually assessed in terms of model skill - their ability to make more accurate predictions than simple naive assumptions. For example, GCMs have no worthwhile skill in predicting temperatures etc. on decadal timescales or less. They have considerable skill in predicting 20-year+ trends.
  9. Thanks all for the feedback - much appreciated. For clarification, my use of the terms 'calibration' and 'validation' can be explained as: - We calibrate our models against available data and then use these models to predict an outcome. - We then compare these predicted outcomes against data that was not used in the calibration. This can be data from the past (i.e. by splitting your available data into calibration and validation subsets) or data that we subsequently record over time following the predictive run. - So validation of our predictive models should be able to be undertaken against the data we have collected since the predictive run. Dikran & scaddenp – totally agree re: importance of validation against a series of outcomes wherever possible, however I feel that in this case the first step we need to be able to communicate with confidence and clarity is that we understand the links between CO2 and GMT and can demonstrate this against real, accepted data. As such, in the first instance, whatever data was used to calibrate/develop our model(s) is what we need to use in our ongoing validation. Tom Curtis – thanks for that. The four you mention seem to be the most scientifically justifiable and accepted. In terms of satellite vs surface record (as per paragraph above) whatever data type was used to calibrate/develop the specific model being used is what should be used to then assess its predictive performance. From my reading and understanding, a key component of the ongoing debate is: - Our predictive models show that with rising CO2 will (or has) come rising GMT (along with other effects such as increased sea levels, increased storm intensity, etc). - To have confidence in our findings we must be able to show that these predictive models have appropriately estimated GMT changes as they have now occurred (i.e. since the model runs were first undertaken). As an example, using the Hansen work referenced in the Intermediate tab of this Topic, the 1988 paper describes three (3) Scenarios (A, B and C) as: - “Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely” (increasing rate of emissions - quotes an annual growth rate of about 1.5% of current (1988) emissions) - “Scenario B has decreasing trace gas growth rates such that the annual increase in greenhouse forcing remains approximately constant at the present level” (constant increase in emissions) - “Scenario C drastically reduces trace gas growth between 1990 and 2000 such that greenhouse climate forcing ceases to increase after 2000” From Figure 2 in his 2006 paper, the reported predictive outcomes haven’t changed (i.e. versus Fig 3(a) in 1988 paper) which means that the 1988 models remained valid to 2006 (and presumably since?). So we should now be in a position to compare actual versus predicted GMT between 1988 and 2011/12. Again, I appreciate that this is merely one of the many potential variables/outcomes against which to validate the model(s) however it is chosen here as a direct reference to the posted Topic material.
  10. opd68, I think part of your difficulty is in understanding both the complexity of the inputs and the complexity of measuring those inputs in the real world. For example, dimming aerosols have a huge effect on outcomes. Actual dimming aerosols are difficult to measure, let alone to project into their overall effect on the climate. At the same time, moving forward, the amount of aerosols which will exist requires predictions of world economies and volcanic eruptions and major droughts. So you have an obfuscating factor which is very difficult to predict and very difficult to measure and somewhat difficult to apply in the model. This means that in the short run (as scaddenp said, less than 20 years) it is very, very hard to come close to the mark. You need dozens (hundreds?) of runs to come up with a "model mean" (with error bars) to show the range of likely outcomes. But even then, in the short time frame the results are unlikely to bear much resemblance to reality. You have to look beyond that. But when you compare your predictions to the outcome... you now need to also adjust for the random factors that didn't turn out the way you'd randomized them. And you can't even necessarily measure the real world inputs properly to tease out what really happened, and so what you should input into the model. You may look at this and say "oh, then the models are worthless." Absolutely not. They're a tool, and you must use them for what they are meant for. They can be used to study the effects of increasing or decreasing aerosols and any number of other avenues of study. They can be used to help improve our confidence level in climate sensitivity, in concert with other means (observational, proxy, etc.). They can also be used to help us refine our understanding of the physics, and to look for gaps in our knowledge. They can also be used to some degree to determine if other factors could be having a larger effect than expected. But this statement of yours is untrue:
    This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty.
    Not at all. We have measured global temperatures and they are increasing. They continue to increase even when all other possible factors are on the decline. The reality is that without CO2 we would be in a noticeable cooling trend right now. There are also other ways (beyond models) to isolate which factors are influencing climate: Huber and Knutti Quantify Man-Made Global Warming; The Human Fingerprint in Global Warming; Gleckler et al Confirm the Human Fingerprint in Global Ocean Warming.
  11. opd68, Your definitions of calibration and validation are pretty standard but I'd like to make a few points that reflect my understanding of GCMs (which could be wrong): 1. GCMs don't need to be calibrated on any portion of the global temperature record to work. Rather, they take as input historical forcings (i.e. known CO2 concentrations, solar emissions, aerosols, etc.) and are expected to produce both historical temperature records as well as forecast future temperatures (among other things) according to a prescribed future emissions scenario (which fundamentally cannot be predicted because we don't know what measures we will take in future to limit greenhouse gases -- so modellers just show what the consequences of a range of scenarios will be so we can do a cost-benefit analysis and decide which one is the optimal one to aim for -- and which we then ignore because we like fossil fuels too much). There is some ability to "tune" the models in this sense due to the uncertainty relating to historical aerosol emissions (which some "skeptics" take advantage of, e.g. by assuming that if we don't know precisely what they were then we can safely assume with certainty that they were exactly zero) but this is actually pretty limited because the models must still obey the laws of physics; it's not an arbitrary parameter fitting exercise like training a neural net would be. 2. GCMs are expected to demonstrate skill on a lot more than just global temperatures. Many known phenomena are expected to be emergent behaviour from a well-functioning model, not provided as inputs. 3. Even without sophisticated modelling you can actually get quite close using just a basic energy balance model. This is because over longer time periods the Earth has to obey the laws of conservation of energy, so while on short time scales the temperature may go up and down as energy is moved around the system, over longer terms these have to cancel out. Charney's 1979 paper really is quite remarkable in that respect -- the range of climate sensitivities proposed is almost exactly the same as the modern range after 30+ years of modelling refinement. Even Arrhenius was in the ballpark over 100 years ago!
  12. opd68. Your process of calibrate, predict, validate does not capture climate modelling at all well. This is a better description of statistical modelling, not physical modelling. Broadly speaking, if your model doesn't predict the observations, you don't fiddle with calibration parameters; you add more physics instead. That said, there are parameterizations used in the climate models to cope with sub-scale phenomena (e.g. evaporation versus windspeed). However, the empirical relationship used is based on fitting measured evaporation rate to measured wind speed, not fiddling with a parameter to match a temperature trend. In this sense they are not calibrated to any temperature series at all. You can find more about that in the modelling FAQ at RealClimate (and ask questions there of the modellers). SkS did a series of articles on past predictions. Look for the Lessons from past predictions series.
  13. Once again, many thanks for the replies. Hopefully I'll address each of your comments to some degree, but feel free to take me to task if not. It also appears that I should take a step back into the underlying principles of our scientific 'model' (i.e. understanding) - for example how CO2 affects climate and how that has been adopted in our models. Sphaerica – thanks for the links. Totally recognise the complexity of the system being modelled, and understand the difference between physically-based, statistical and conceptual modelling. I agree that it is difficult and complex and as such we need to be very confident in what we are communicating due to the decisions that the outcomes are being applied to and the consequences of late action or, indeed, over-reaction. The GCMs, etc that are still our best way of assessing and communicating potential changes into the future are based on our understanding of these physical processes and so our concepts need to be absolutely, scientifically justifiable if we expect acceptance of our predictions. Yes, we have observed rising temperatures and have a scientific model that can explain them in terms of trace gas emissions. No problem there, it is good scientific research. Once we start using that model to predict future impacts and advise policy then we must expect to be asked to demonstrate the predictive capability of that model, especially when the predicted impacts are so significant. Possibly generalising, however my opinion is that the acceptance of science is almost always evidence-based. As such, to gain acceptance (outside those who truly understand all the complexities or those who accept them regardless) we realistically need to robustly and directly demonstrate the predictive capability of our models against data that either wasn't used or wasn't in existence when we undertook the prediction. In everyday terms this means comparing our model predictions (or range thereof) to some form of measured data, which is why I asked my original question. Tom C thanks for the specifics. So, my next question is: Q: there are models referred to in the Topic that show predictions up to 2020 from Hansen (1988 and 2006) and I was wondering if we have assessed these predictions against appropriate data from one of these 4 datasets up to the present?
  14. The IPCC report compares model predictions with observations to date, and a formal process is likely to be part of AR5 next year. You can get an informal comparison from the modellers here. One thing you can see from the IPCC reports, though, is that tying down climate sensitivity is still difficult. Best estimates are in the range 2-4.5, with, I think, something ~3 being the model mean. Climate sensitivity is a model output (not an input) and this is the range at present. Hansen's earlier model had a sensitivity of 4.5, which on current analysis is too high (see the Lessons bit on why), whereas Broecker's 1975 estimate of 2.4 looks too low. In terms of trying to understand what the science will predict for the future, we have to live with that uncertainty for now. I still think you're too fixated on surface temperature for validation. It's one of many variables affected by anthropogenic change. How about things like the GHG-driven change to radiation leaving the planet or received at the surface? How about OHC?
  15. Whoops, latest model/data comparison at RC is here
  16. Thanks scaddenp. That link is exactly what I was after. And not fixated, just referring to what is provided and communicated most often. Always best to start simple I find. My point about prediction is really what the models are about - if we aren't able to have confidence in their predictions (even if it's a range) then we will struggle to gain acceptance of our science re: the underlying processes. And the question of climate sensitivity is really the key to this whole area of science - i.e. we know that CO2 is increasing and can make some scientifically robust predictions about rates of increase and potential future levels. But that isn't an issue unless it affects our climate. So, the question is then if we (say) double CO2 what will happen to our climate and what implications does that have for us? If we have confidence in our predictive models we can then give well-founded advice for policy makers. And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback). (P.S. When you talk about climate sensitivity, I gather the values are referring to delta T for doubled CO2?)
  17. opd68, The Intermediate form of this post contains six figures (including Tamino's) demonstrating the results of exactly the kinds of tests you are talking about. The first one, Figure 1, even shows what should have happened in the absence of human influence. Since the models aren't "tuned" to the actual historical temperature record, the fact that they can "predict" the 20th century temperature record using only natural and anthropogenic forcings seems to be exactly the kind of demonstration of predictive capability that you are looking for. The objection usually raised with regard to that is that we don't know for certain exactly what the aerosol emissions were during that time, and so there is some scope for "tuning" in that regard. But I think it's important to understand that the aerosols, while not certain, are still constrained by reality (so they can't be arbitrarily adjusted until the output "looks good"; the modellers have to take as input the range of plausible values produced by other researchers) and there are limits to how much tuning they really allow to the output anyway due to the laws of physics. I think that if anyone really wants to argue that there is nothing to worry about, they need to come up with a model that is based on the known laws of physics, that can take as input the range of plausible forcings during the 20th century, that can predict the temperature trend of the 20th century using those inputs at least as skillfully as the existing models, and has a much lower climate sensitivity than the existing models do and therefore shows the 21st century will not have a problem under BAU. Simply saying that the existing models, which have passed all those tests, aren't "good enough" to justify action is ignoring the fact that they are the most skillful models we have and there are no models of comparable skill that give meaningfully different results. Due to the consequences of late action, those who argue there is nothing to worry about should be making sure that their predictions are absolutely, scientifically justifiable if they expect acceptance of their predictions, rather than just saying they "aren't convinced". In the absence of competing, equally-skillful models, how can they not be? Regarding climate sensitivity, which you are correct in assuming is usually given as delta T for doubled CO2, the models aren't even the tightest constraint on the range of possible values anyway. If you look at the SkS post on climate sensitivity you'll see that the "Instrumental Period" in Figure 4 actually has quite a wide range compared to e.g. the Last Glacial Maximum. This is because the signal:noise ratio during the instrumental period is quite low. We know the values of the various forcings during that period more accurately than during any other period in Earth's history, but the change in those values and the resulting change in temperature is relatively small. Furthermore, the climate is not currently in equilibrium, so the full change resulting from that change in forcings is not yet evident in the temperatures. In contrast, we have less accurate figures for the change in forcings between the last glacial maximum and today, but the magnitude of that change was so great and the time so long that we actually get a more accurate measure of climate sensitivity from that change than we do from the instrumental period.
So it is completely unnecessary to rely on modern temperature records to come up with an estimate of climate sensitivity that is good enough to justify action. In fact, if you look at the final sensitivity estimate that is a result of combining all the different lines of evidence, you'll see that it is hardly any better than what we already get just by looking at the change since the last glacial maximum. The contribution to our knowledge of climate sensitivity from modelling the temperature trend during the 20th century is almost negligible. (Sorry modellers!) So again, if anyone really wants to argue that there is nothing to worry about, they also need a plausible explanation for why the climate sensitivity implied by the empirical data is much larger than what their hypothetical model indicates. And just to be clear:
    And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback).
    No. It is influenced by some of the inputs that go into the models, but those inputs must be reasonable and either measured or constrained by measurements and/or physics. And the models constrain it less precisely than the empirical observations of the change since the last glacial maximum anyway -- without using GCMs at all we get almost exactly the same estimate of climate sensitivity as what we get when adding them to the range of independent lines of evidence.
  18. JasonB - all clear and understood, and I agree completely that the same clarity and scientific justification is required for the opposite hypothesis of increased CO2 having no significant effect on our climate. Science is the same whichever side you are on. I spend my working life having people try to discredit my models and science in court cases, and doing the same to theirs. I therefore think very clearly on what is and what is not scientifically justifiable and am careful to state only that which I know can be demonstrated. If it can't, I am only able to describe the science and processes behind my predictions/statements, which by necessity become less certain the more I am asked to comment on conditions outside those that have been observed at some stage. My entry to this conversation is because I keep hearing that the science is settled and I want to see that science. From what I have learned here (thank you!) the key question for me (which I will start looking through at the climate sensitivity post) is: - Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?
  19. opd68,
    Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?
    The forcing resulting from increasing CO2 levels is very accurately known from both physics and direct measurement. By itself it accounts for about 1.2 C per doubling. The forcing from water vapour in response to warming is also quite well known from both physics and direct measurement and, together with the CO2, amounts to about 2 C per doubling. Other feedbacks are less well known, but apart from clouds, almost all seem to be worryingly positive. As for clouds, they are basically unknown, but I think a very strong case can be made that the reason they are unknown is precisely because they're neither strongly positive nor negative. As such, any attempt to claim that they are strongly negative and will therefore counteract all the positive feedbacks seems like wishful thinking that's not supported by evidence. If anything, the most recent evidence seems to suggest slightly positive. One way to avoid all these complications is to simply use the paleoclimate record. That already includes all feedbacks because you're looking at the end result, not trying to work it out by adding together all the little pieces. Because the changes were so large, the uncertainty in the forcings is swamped by the signal. Because the timescales are long, there is enough time for equilibrium to be reached. The most compelling piece of evidence, for me, is the fact that the best way to explain the last half billion years of Earth's climate history is with a climate sensitivity of about 2.8 C, and if you deviate too much from that figure then nothing makes sense. (Richard Alley's AGU talk from 2009 covers this very well; if you haven't seen that video yet then I strongly recommend you do so.) Look at what the evidence tells us the Earth was like during earlier times with similar conditions to today. This is a little bit complicated because you have to go a really long way back to get anywhere near today's CO2 levels, but if you do that then you'll find that, if anything, our current predictions are very conservative. (Which we already suspected anyway -- compare the 2007 IPCC report's prediction on Arctic sea ice with what's actually happened, for example.) No matter which way you look at it, the answer keeps coming up the same. Various people have attempted to argue for low climate sensitivity, but in every case they have looked at just one piece of evidence (e.g. the instrumental record) and then made a fundamental mistake in using that evidence (e.g. ignoring the fact that the Earth has not yet reached equilibrium, so calculating climate sensitivity by comparing the current increase in CO2 with the current increase in temperature is like predicting the final temperature of a pot of water a few seconds after turning the stove on) and ignored all of the other completely independent lines of evidence that conflict with the result they obtained. If they think that clouds will exert a strong negative feedback to save us in the near future, for example, they need to explain why clouds didn't exert a strong negative feedback during the Paleocene-Eocene Thermal Maximum when global temperatures reached 6 C higher than today and the surface temperature of the Arctic Ocean was over 22 C. My view is that the default starting position should be that we assume the result will be the same as what the evidence suggests happened in the past. That's the "no models, no science, no understanding" position.
If you want to move away from that position, and argue that things will be different this time, the only way to do so is with scientifically justifiable explanations for why it will be different. Some people seem to think the default position should be "things will be the same as the past thousand years" and insist on proof that things will change in unacceptable ways before agreeing to limit behaviour that basic physics and empirical evidence shows must cause things to change, while at the same time ignoring all the different lines of evidence that should be that proof. I find that hard to understand.
  20. opd68 - you are correct that obviously sensitivity is ultimately a function of the model construction. "Our input assumptions" of course are known physics. You ask "are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?". The answer is yes, but I wonder what you are looking for that could give you that assurance? It's rather exhaustively dealt with in Ch. 9 of the AR4 IPCC report, from memory. If that didn't convince you then what are you looking for? These forcings and responses can be verified independently of GCMs.
  21. JasonB - clearest response/conversation I have had on that ever. Thank you. scaddenp - what I am looking for is each 'against' argument dealt with rationally and thoughtfully, which is why I'm working through this as I am.
  22. 563, opd68, Sorry, I've been too busy to follow the conversation and get caught up on everything that's been said, but this one comment struck me (and it's wrong):
    Once we start using that model to predict future impacts and advise policy then we must expect to be asked to demonstrate the predictive capability of that model, especially when the predicted impacts are so significant.
    We're not entirely using models to predict and advise. It's one tool of many, and really, if we wanted to we could throw them out (at least, the complex GCMs, I mean -- after all, all human knowledge is in the form of models, so we can't really throw that out). The bottom line is: 1) The physics predicts the change, and predicted the change before it occurred, and observations support those predictions. 2) Multiple, disparate lines of investigation (observations, paleoclimate, models, etc.) point to a climate sensitivity of between 2 C and 4.5 C for a doubling of CO2. 3) None of this requires models -- yes, they add to the strength of the assessment in #2, but you could drop them and you'd still have the same answer. The models are an immensely valuable tool, but there is no reason to apply the exceptional caveat that they must be proven accurate to use them as a policy tool. Poppycock. Human decisions, life-and-death decisions, are made with far, far less knowledge (conduct of wars, economies, advances in technology, etc.). To say that we need even more certainty when dealing with what may turn out to be the most dangerous threat faced by man in the past 50,000 years is... silly.
  23. Sphaerica, Whatever we use to illustrate and communicate our science must, in my opinion, be valid and justified. Otherwise we are simply gilding the lily. The fact that life and death decisions can be made with a paucity of information does not mean that we would be better off not doing so if we can. My opinion is simply that if we are using models to predict outcomes and inform our decisions then if we are confident in them and can demonstrate that to others: (1) we will more easily gain acceptance of the need for and impacts of our decisions, and (2) our decisions are more likely to be good ones. If the models can be so easily discarded, then we have spent a very long time and a lot of money & effort that could have been better employed elsewhere. If, however, they are a key element in improving our understanding and ability to communicate the problem then we can't afford to discount the need for them to be robust and demonstrably so. My point, which I'm still not sure was either wrong or silly, was simply that since we are using these tools I was interested in seeing how they were performing because that is how I increase my confidence in other peoples knowledge and build my own. Your point (1) in 'the bottom line' indicates to me that you think exactly the same way: a model of physics predicted the change and the observations supported those predictions - and you use this evidence to support your knowledge.
  24. For the modellers and the funders of modellers, the point is not to see if the science is right - we are way past that and, as Sphaerica says, it is not needed. What the models can do that other lines cannot, is evaluate the difference in outcomes between different scenarios; estimate rates of change; predict the likely regional changes in drought/rainfall, temperature, snow line, seasons and so on. Convincing a reluctant Joe Public that there is a problem is not the main purpose. And yes, I do agree that we need to understand the limitations, but the IPCC reports seem to be paragons of caution in that regard.
  25. Moving the discussion of models from posts by Eric here and here. I had asked here "What kind of model could be used to support a claim of "significant contributing factor", but would not also have an estimate of sensitivity built into it?" [For context, since we're jumping threads, we're talking about humans being a significant factor in recent warming, but Eric is questioning predictions of the future, in particular the idea that the sensitivity of doubling CO2 is most likely in the 3C range.] Eric has made the comment "Using a model without sensitivity built in: the rise in CO2 is 6% per decade so the rise in forcing from CO2 per decade is 5.35 * ln (1.06) which without any feedback (lambda is 0.28 K/W/m2) means 0.087C per decade rise due to CO2." (Look at the first link above to see the complete comment.) Don't you realize you've just assumed your conclusion? You've made the erroneous assumption that the model you present (rate of temperature increase dependent on rate of CO2 increase) does not have a built-in sensitivity. Any model that creates a T(t)=f(CO2(t)) relationship (t being time) also has a "built-in" relationship between temperature and CO2 levels at different equilibrium values that will, by necessity, imply a particular "sensitivity". As far as I can see, your answer is a tautology. You've "demonstrated" a transient model that doesn't provide a sensitivity value by simply saying that your transient model doesn't provide a sensitivity value. This falls into the "not even wrong" class. I'm not sure, but perhaps your second comment (second link above) is admitting this error, where you say "I should point out here that using the same simple (no model) equations I get 2.4C per doubling of CO2. I'm sure someone else will point this out, but fast feedback in my post above disproves my claim of low sensitivity that I made on the other thread." The issue in this sentence is the failure to realize that your "no model equation" is indeed a model. I really, fundamentally, think that you do not understand what a "climate model" is, how they are used to examine transient or equilibrium climate conditions, or how a "sensitivity" is determined. To reiterate what others have said: you seem to have a psychological block that "models are bad", and even though you follow much of the science (and agree with it), at the end you stick up the "model boogie man" and declare it all invalid.
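
For readers following the arithmetic in the comment above: the quoted figures can be reproduced with the standard simplified CO2 forcing expression, ΔF = 5.35 ln(C/C0) W/m², and the no-feedback response of 0.28 K per W/m² cited there. The sketch below is illustrative only, but it also makes the moderator's point: applying the same "simple equation" to a doubling of CO2 reveals the (no-feedback) sensitivity that is implicitly built into it.

```python
import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0)  [W/m^2]
def co2_forcing(ratio):
    return 5.35 * math.log(ratio)

lam_no_feedback = 0.28  # no-feedback response quoted in the comment, K per W/m^2

# A 6% rise in CO2 per decade, as in the comment:
per_decade = lam_no_feedback * co2_forcing(1.06)
print(f"Warming per decade, no feedbacks: {per_decade:.3f} C")      # about 0.087 C

# The same equation applied to a doubling of CO2 -- the 'sensitivity'
# that is implicitly built into this simple model:
per_doubling = lam_no_feedback * co2_forcing(2.0)
print(f"Warming per doubling, no feedbacks: {per_doubling:.2f} C")  # about 1.0 C
```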


