



How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.
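A "few equations" model really can fit on the back of an envelope. As a hedged illustration (this is a textbook-style toy, not any real GCM, and the albedo and effective-emissivity values below are assumed round numbers), a zero-dimensional energy-balance model simply balances absorbed sunlight against outgoing thermal radiation:

```python
# A minimal sketch of a "few equations" climate model: a zero-dimensional
# energy-balance calculation. Illustration only, NOT a real GCM; the albedo
# and effective-emissivity values are assumed round numbers.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30       # planetary albedo (assumed)
EMISSIVITY = 0.612  # effective emissivity standing in for the greenhouse effect (assumed)

def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
    """Solve S0/4 * (1 - albedo) = emissivity * SIGMA * T**4 for T."""
    absorbed = S0 / 4.0 * (1.0 - albedo)
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temperature(), 1))  # close to Earth's observed ~288 K mean
```

The complex models mentioned above solve balances like this one in millions of grid cells, coupled to fluid dynamics and chemistry, which is where the millions of lines of code come from.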

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating the equations that describe the interactions between the components of our climate system. It is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Weather-forecast models rarely extend even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.
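The smoothing idea can be shown with a few lines of code. This sketch uses an invented yearly series (the trend and noise sizes are assumptions for the demo, not real data) and averages it over a long window, the way climate trends average out weather:

```python
# Synthetic illustration of trends vs. events: invented yearly "temperatures"
# with a small built-in warming trend plus random noise. Nothing here is
# real data; the trend and noise sizes are assumptions for the demo.
import random

random.seed(0)  # deterministic noise so the demo is reproducible
years = list(range(1960, 2024))
temps = [0.015 * (y - 1960) + random.gauss(0, 0.15) for y in years]

def running_mean(values, window=30):
    """Average over a long window, smoothing out single extreme years."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

trend = running_mean(temps)
# Individual years can buck the trend, but the smoothed series shows
# which way the system is heading:
print(trend[-1] > trend[0])  # True: the long-term direction survives the noise
```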

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then either because we made observations or because evidence is preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming. Multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record showed that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate to explain temperature variations prior to the last thirty years, but none of them can explain the rise over those thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown forcings.
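The hindcasting logic can be sketched in miniature. This toy "tunes" a trivial model (a straight line) on the early part of a synthetic record, then checks it against the held-out later decades; the record and the model are invented stand-ins, whereas real hindcasting runs full physics-based models against real observations:

```python
# Toy hindcast: "tune" a trivial model (a straight line) on the early part
# of a synthetic temperature record, then check it against the held-out
# later decades. The record and the model are invented stand-ins; real
# hindcasting runs full physics models against real observations.
import math

years = list(range(1900, 2021))
# assumed synthetic record: steady forced trend plus a decadal-scale wiggle
observed = [0.008 * (y - 1900) + 0.1 * math.sin(y / 7.0) for y in years]

# Tune on 1900-1970 only (least-squares slope through that period)
train = [(y, t) for y, t in zip(years, observed) if y <= 1970]
n = len(train)
mean_y = sum(y for y, _ in train) / n
mean_t = sum(t for _, t in train) / n
slope = (sum((y - mean_y) * (t - mean_t) for y, t in train)
         / sum((y - mean_y) ** 2 for y, _ in train))

# Hindcast check: how far does the tuned line drift from the unseen 1971-2020 data?
errors = [abs(mean_t + slope * (y - mean_y) - t)
          for y, t in zip(years, observed) if y > 1970]
print(max(errors) < 0.25)  # True: the tuned model tracks the held-out decades
```

The point of the exercise is the train/test split: the model never sees the later data while being tuned, so agreement there is evidence, not circularity.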

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."

Talking of empirical evidence, you may be surprised to know that the scientists at Exxon, the huge fossil-fuel corporation, knew all about climate change all along. A recent study of their own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We had a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worthy of part-quotation:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over almost 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending and continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture-videos from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

Click the thumbnail for the concise fact brief version created in collaboration with Gigafact:


Comments


Comments 126 to 150 out of 248:

  1. Allyrooney, saying that "modeling is the main evidence cited by the IPCC" is like saying "epidemiological models are the main evidence for the germ theory of disease." The true root and bulk of the evidence is basic physics, with details added in the form of progressively more advanced physics. But the media and public have gotten the misimpression that (a) scientists are merely guessing that human-produced greenhouse gasses are responsible for the portion of warming that scientists' models can't otherwise predict; and (b) there might be no unusual temperature rise needing to be explained, because the temperature hockey stick graph might be wrong. You will save yourself a lot of time and frustration if you read a quick overview of the wide range of evidence from cce's The Global Warming Debate. (Be patient, his server is slow, and sometimes gets completely bogged down; try again later). Then get a quick history from Spencer Weart's The Discovery of Global Warming; his summary Introduction is nicely short, but the rest of his site is quite rich. If you want to continue reading background material after that foundation, look at the Start Here section on RealClimate, which has links to materials categorized by level of technical background required. But if instead you then want to pursue pointed questions, this SkepticalScience site is a great place to turn next. Note there are two types of posts here: the concise Skeptic Arguments linked at the top left of the page (click "View All Arguments"), and the longer "Posts."
  2. Allrooney, see also Tamino's demonstration that only the fine-tuning of predictions requires "fancy computer models."
  3. Thanks Tom I will a
  4. The actual physics of CO2 cannot be questioned. There is a secondary player with the CO2 emissions that I do not see discussed. What I would like to be pointed to is reference information on the actual heat generated by the oxidation of hydrocarbon fuels. Where can I find discussion about the retention, transmission, and conversion behaviors of the ~4 exajoules of infrared radiation released annually by burning fossil fuels?
  5. hotair, Human-produced direct heat is trivial compared to human-produced greenhouse gas forcing. For details see (in the post The Albedo Effect) the comment 56 by Steve L, and the subsequent comments 57 and 58 by me.
  6. The claim is made that the climate models "...have made predictions that have been subsequently confirmed by observations." This claim is refuted by the noted climatologist Kevin Trenberth; he states at http://blogs.nature.com/climatefeedback/recent_contributors/kevin_trenberth/ that the models referenced by the United Nations Intergovernmental Panel on Climate Change do not make predictions. It follows that: a) the UN-IPCC models are not falsifiable and b) the IPCC models are not scientific, by the definition of "scientific." Rather than make predictions, the IPCC models make what the IPCC calls "projections." A "projection" is a mathematical function which maps the time to the global average temperature. A "prediction" is a logical proposition which states the outcome of a statistical event. A "projection" supports comparison of the computed to the measured temperature and computation of the error. However, it does not support falsification of the model, for the apparatus is not present by which the projection may be proved wrong. A "prediction" provides this apparatus.
  7. The IPCC curves that you reproduce in figure 1 always looked wrong to me. Granted they're hard to read, but they show "observed" temperatures increasing much more than actual measurements from HadCRUT3, GISS, etc. ... by a factor of 2-3 in some periods, e.g. 1975-2000. I think these were published in "Nature" years ago and I recall there was controversy about them then.
    Response: I'm having trouble determining which is the observed temperature record in Figure 1 as the IPCC TAR doesn't say which explicitly for that particular graph. I'm guessing it's the HadCRUT3 record as that seems to be the favoured record used throughout TAR. If this is the case, then the temperature record shown is a slight underestimate of actual warming as the HadCRUT record excludes some of the regions on earth that are warming the most.
  8. "If this is the case, then the temperature record shown is a slight underestimate of actual warming..." Are you implying that IPCC uses temperature records that aren't published and we don't have access to? None of the surface air temperatures or the satellite temperature records that I'm aware of come close to showing the temperature increases in figure 1. Certainly HadCRUT reflects less than half the increase in figure 1.
    Response: "Are you implying that IPCC uses temperature records that aren't published?"

    Not at all. The IPCC TAR use the HadCRUT record, NCDC and NASA GISS. They just don't indicate which of these records are used in Figure 1 above. As for the trends in Figure 1, just eyeballing the graph, it looks like the trend in the last few decades is 0.2°C which is consistent with all three temperature records.
  9. A bit of nitpicking on my side, John, but in the "Further reading" section you say "Tamino compares IPCC AR4 model results[...] versus observations", whereas the picture is his Fig. 3: "compare the GISS data to the models listed in IPCC AR4 chapter 8 except for the CCCMA models". The one that reflects all the IPCC models is his Fig. 1. Both graphs are very similar, indeed. Cheers!
    Response: Thanks for the feedback, I've updated the wording to reflect this.
  10. By the way, have you read this RC post?: http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/ It seems that Hansen's 1988 model is indeed (slightly) overestimating the observed warming trend: "the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC) [...] it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world". Hansen's model shows 0.26 +/-0.05 ºC/dec, whereas the real world shows 0.19 +/-0.05 ºC/dec. However, for this comparison, as well as the climate sensitivity, it must be taken into account that "Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)". AR4 models give 0.21 +/-0.16 ºC/dec. Anyway, this was already highlighted by Hansen et al 2006: "Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2°C for doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3+/-1°C for doubledCO2, based mainly on paleoclimate data"
  11. We have no idea how reliable climate models are: IPCC AR4 8.6.4 How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models? "A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed." Any person on earth knows that clouds can warm and cool. IPCC knows that too. Cloud feedbacks are not well modelled. IPCC AR4 8.6.3.2 Clouds "In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity (e.g., Senior and Mitchell, 1993; Le Treut et al., 1994; Yao and Del Genio, 2002; Zhang, 2004; Stainforth et al., 2005; Yokohata et al., 2005). Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks (Colman, 2003a; Soden and Held, 2006; Webb et al., 2006; Section 8.6.2, Figure 8.14). Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates."
  12. The link to Hansen 2007 mentioned in figure 3 seems to be not working. Could you please provide current link or cite paper? Many thanks.
    Response: All fixed, thanks for the heads-up.
  13. I have been meaning to post here for a while after reading the post from Poptech who claims that "Only Computer Illiterates believe in "Man-Made" Global Warming". I am a computer scientist with 30 years' experience who has no doubt that the theory of AGW is correct. I want to deal specifically with Poptech's claims about computer science as he claims to be an "expert". As most of his post consists of unintelligible rant it is difficult to nail precisely what "straw man" the hapless Poptech is railing against but he does appear to have an issue with physicists or in particular climate scientists who program in FORTRAN. Computers have been used for solving problems in Physics since the beginning of the computer age. In fact most universities run degree courses which allow you to major in Physics and Computing. I did a variation of that degree in the mid 1970s majoring in Applied Maths, Physics and Computer Science. There is a whole range of computer algorithms designed for solving complex mathematical problems using computers and as any physicist will tell you mathematics is the language of physics. He also claims that because some climate scientists use the computer language FORTRAN, their code must be full of bugs. Why? Because Poptech cannot understand FORTRAN code? Because FORTRAN has been around for a while? He does not say. I no longer use FORTRAN but in my experience ability and training is a much better guide to good programming than choice of language. The principles of Computer Science are universal and not tied to any specific computer language. In fact computers are language agnostic as they execute machine code. Many of the changes to programming methodology over the years have addressed the issue of software bugs by promoting the use of tested library components or frameworks, structured coding techniques, the use of design patterns and object oriented programming techniques.
That is we break our complex code down into smaller testable units and ensure that they work correctly by testing them rigorously before combining them into the whole. This does not guarantee bug free code but these approaches have been proven to reduce bugs substantially. All these approaches are available to the FORTRAN programmer with the added advantage of having access to a well proven library of scientific and statistical routines. Does our "expert" check every time he flies as to what programming language the aircraft's control system is written in? Most are written in a specialist programming language called ADA which is of the same vintage as FORTRAN. His rant against climate models is really a rant against science of any form. But there is a built in uncertainty in nature so there will always be questions that cannot be answered with absolute precision whether those questions are answered using computer models or with pen and paper. It is the reason why every scientist needs a good handle on statistics because many questions can only be answered within a range of certainty. Sometimes a general question can be answered with more certainty than a more specific question. Actuaries working for health insurance companies use statistical computer models to work out the average health costs of a range of population groups so their employers can set insurance premiums. But they cannot tell you precisely how many people will get sick next week or more specifically if you are likely to need medical care. So it is with climate and weather. Contrary to Poptech's assertion, weather forecasts have actually become much more accurate over the last few years. With better computer climate models, use of satellite measurements and faster computers, weather bureaus now offer five day forecasts which were not reliable enough in past decades. 
Ironically some forecasters complain that climate change is affecting their forecasts as the changing climate is altering many of the assumptions based on the historical experience that is built into the models. Computer models which deal with climate change have not been designed to forecast the weather over the next century. They cannot tell you the summer temperature in 2050. They are tools for examining climate science the physics of which, contrary to Poptech's opinion, is well understood. They are able to give a range of projections which examine the effects of C02 as well as other factors on the long term climate. In that they have been remarkably successful.
  14. O.k., I'm sorry if my first post sounds aggressive towards one side or another, I just want this to get off my to-do list. It's 2010 now and even with El Nino, from what I can see from Climate4You (which I presume is one of the most objective sources there is for climate information), no dataset reaches the 1 degree limit, which Hansen's "B" scenario seems to have finally gone over. While indeed, if I'm not incorrect, that seems to have happened, we can only hope that we have learned through the decades (which Hansen 2006 seems to suggest :) ) and at this day and age have had the resources and the time to create the best damn models we can[/End the dramatic b-grade speech].
  15. cloneof, you'll notice that on a year-by-year basis, model output is noisy. For instance, a few years from now the Model B scenario shows a predicted dip in temperature of some two tenths of a degree, passing below your "1 degree limit", a feature we can probably agree is unlikely to be reproduced with exactitude by the actual climate. Equally, expecting Earth's annual temperature to track model output in any given year with faithful reproduction of the model output is bound to lead to disappointment. Rather than throw up my hands in sorrow over the matter, I think I'll go and try to discover why the model output graphs are not smoothed. It's a choice made by the authors, with good reason I suspect, if nothing else intended to convey that we're not to expect a monotonously predictable rise. I can well imagine the hue and cry over divergence from a smoothed result come to think of it.
  16. (This comment is my response to a comment by pdt on a different thread that is not the right place for this discussion.) pdt wrote
    It seems that at least some effects are still not really based upon a fundamental understanding of underlying physics. The effects of clouds are still apparently used as fitting parameters to climate data. The fits to climate data are then used to predict climate over other periods. I don't really have a problem with this in principle, but it does seem that these are not really fully based on fundamental physics and this type of fitting leaves open the possibility of trying to use the fitted parameters outside the region of validity (extrapolation rather than interpolation). Apparently things like clouds are not really understood in enough detail to truly predict climate from fundamental physics.
    Doug Bostrom correctly replied "It's...a rare matter of actual significant uncertainty." The answer is in the RealClimate post FAQ on Climate Models, the "Questions" section, "What is the difference between a physics-based model and a statistical model?", "Are climate models just a fit to the trend in global temperature data?", and "What is tuning?" A relevant quote from those: "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time." Part II of that post then provides more details on parameterizations, including specifics on clouds.
  17. Tom Dayton quoted from another source, "Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time." I'm not sure what "process-level parameterisations" means. Presumably one needs a model of cloud properties for a range of atmospheric conditions in order to predict climate trends with time. Either you get the properties from an understanding of the physics of cloud formation and their properties or you infer them from fitting to measured climate data. "Process-level parameterisations" sounds like the fitting. Again, I'm not judging it, I'm just trying to understand it. The language is just not familiar to me. My modeling experience is in a different field.
  18. pdt, a little more description of parameterization is in Timothy Chase's comment on Open Mind. Scroll down to "IV. Regarding the Nature of Modeling," and read the first two paragraphs.
  19. I understand the idea of modeling at that level and have done things like that myself in my professional life. The issue isn't that you will get something utterly unphysical when parameterizations are used outside the fitted range (though that can happen with a poorly formulated model), but that the accuracy may be lower for predictions than for the fit. For example, if a climate model is parameterized by fitting to measured climate data, then those parameterizations are used to predict climate in conditions that do not include the same levels or rates of changes of variables (e.g. CO2 concentrations), then there is very likely greater uncertainty in the predictions than the errors between the model and the actual climate in the fitted range.
  20. It's well worth tracking this matter of clouds through the history of GCM development, fortunately narrated pretty comprehensively by Weart. Mucho references to follow, if you're inclined.
  21. pdt, you seem to be implying that models "tune" the parameters to match climate, but the parametrization is done independently of the models, and the values are then used in the model. Also note that it is not blind fitting of a statistical function but usually determining the empirical value of coefficients in a functional form derived from the physics. Note also that for some (like clouds), the parametrization can be checked against the output of a model with full physics to check for accuracy - it's just not practical to use the full physics in a model run. It is also being improved all the time. Either way, even the early Hansen models were a far better guide to what the future held than just hand-waving about empirical guesses. Of course, there may still be unmodelled physics which is going to save us all - but would you want to bet on such a possibility? What the models show is that, with the best physics available to us, our continued emissions of GHGs are going to heat the earth rapidly, and we ignore that physics at our peril.
  22. Yep, pdt, I agree with you that the accuracy of the predictions likely will be lower than for the fit. So researchers keep trying to reduce the numbers of parameters they use, and to improve the estimates of the parameters they must use. The claim (not yours!) I was initially responding to was the misperception that the climate models' predictions are evaluated against the same data that the models were statistically fit to in the first place. By the way, there is more discussion of parameterization on Open Mind, especially starting with Ray Ladbury's comment. When you get down to Tim's comment below Ray's, skip it because Tim then posted a correction and then a final correction.
  23. Just asking around the people. Does Spencer & Braswell 2008 affect the credibility of the models how? I have been asking numerous people around and some of my more skeptical friends seem to wave this around and it appears to make some solid points. Anyone know how to answer to this one?
  24. cloneof, not sure which paper you're referring to, but take a look at this RealClimate post.
  25. Riccardo, I was talking about the paper released in 2008 by Spencer and Braswell that discussed a potential positive feedback bias caused by cloud variability. The paper makes a strong claim that this bias basically makes the models show too much positive feedback. The link you gave me talks about one of his non-peer-reviewed blog posts on how the PDO would affect climate. See that post's comment number 171. To this day I have not seen a debunking article nor any response from the modelling community about this paper. Considering this paper was released in the prestigious Journal of Climate and even Piers Forster couldn't but give him a green light, I must wonder.


© Copyright 2024 John Cook