


How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
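The "weather, averaged out over time" idea can be sketched with a toy running mean (all numbers below are synthetic, purely for illustration):

```python
import random

# Hypothetical yearly "temperatures": a slow trend plus noisy year-to-year weather.
random.seed(0)
years = range(1900, 2021)
temps = [0.01 * (y - 1900) + random.gauss(0, 0.5) for y in years]

def running_mean(series, window=30):
    """Average each value with its preceding `window` years - a crude climate trend."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

trend = running_mean(temps)
# The smoothed series swings far less than the raw "weather": extreme but rare
# single years are averaged away, leaving the underlying trend.
raw_swing = max(temps) - min(temps)
trend_swing = max(trend) - min(trend)
print(raw_swing > trend_swing)
```

The same logic is why a 30-year average is informative even though any single year is not.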

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the rise in temperature over the last thirty years, while none of them are capable of explaining the rise in the past thirty years. CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Mainstream climate models have also accurately projected global surface temperature changes.  Climate contrarians have not.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

There's one chart often used to argue to the contrary, but it's got some serious problems, and ignores most of the data.

Christy Chart

Basic rebuttal written by GPWayne


Update July 2015:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Last updated on 31 December 2016 by pattimer.


Further reading

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Comments


Comments 701 to 750 out of 1046:

  2. DSL, I think my argument is sufficiently objective to allow for criticism of failure in all directions (I'm not sure what you mean). I talk about the equations and the procedure, and give examples of approximations. I also gave examples from solid mechanics, a different field, after all. Numerical integration applies to all fields. I was trying to illustrate that modelling is not as perfect as it may seem, both to experts and to the layperson, and that it takes decades to develop good models.

    This site is for skeptics. So to your question "Let's imagine that the sign of the alleged failure was in the other direction. Would you still make the comment?": unless you disagree with my content, so what?

  3. Razo, pointing out that modeling is "not as perfect as it may seem" is a no-brainer.  Who has been saying it is?  Are you suggesting that climate modeling is useless?  Where is your comment going?  Or is that the extent of it?

  4. Razo: You really should do some homework before piously pontificating about climate models. The Intermediate version of the OP has a Reading List appended to it. If you read the first Spencer Weart article, you will discover that the first General Circulation Model was created in the mid-1950s. Would six decades of model development and enhancement satisfy your rather vague time criteria?

  5. Well, models from 30 years ago have been remarkably accurate. The Manabe model used by Broecker in the landmark paper 39 years ago nailed 2010 temperatures remarkably well. However, the model is too primitive to deal with much more than energy balance, which is reasonably well understood. The inner workings of climate internal variability, regional differences, etc., are not captured at all. What exactly do you mean by the comment?

    That all models are wrong is trivial. The question is, are they skillful? That is, do they allow you to make better predictions of the future than you could make without a model? How would you make a prediction of future temperatures without a model?

    Recent article on this here.

    If you are trying to imply that AGW is dependent on models, then please try reading the IPCC WG1 report first so we can have a more informed discussion.

  6. I find Fig. 1 of the response curious. Figure 1a, the natural forcings, shows quite a bad match with the climate model, especially for the 1850s. Fig. 1b, the man-made forcings, shows a great match in the 1850s, which is flat around zero. The model also shows a great match around the 80s and 90s, which is probably what it is calibrated to. In 1c, the combination of the two, the match is pretty good.

    Even though it's agreed that there is little effect of human activity in the 1860s, there is a significant correction in the model 1c at these dates. The poor modelling of natural forcing doesn't say much for the model. This may best describe the error of the model, say 0.2°C. The choice of calibration date appears to have a large effect on the results. I don't think AR4 models 'perturb' calibration dates.

    It's hard to understand the model in such detail, but it does appear that the models seem to only be able to predict recent warming.

  7. Razo - these diagrams are from TAR, based on Stott et al 2000. I would agree there is an issue and I would hazard a guess the cloud-response to sulphate aerosols is exaggerated at low concentrations.

    If you look at the corresponding diagram in FAQ 10.1, Fig 1 in latest report you will see that neither the CMIP3 nor CMIP5 model ensembles have this issue.

  8. Razo,

    You say "The model also shows a great match around the 80s and 90s, which is probably what it is calibrated to." This is fundamentally not how climate models work. You are coming in from another line of work and incorrectly applying your modeling methods to climate. The climate models are designed from the ground up using basic physics principles and are not "calibrated" to any period. If you wish to continue posting on an informed board you need to do your homework and stop making baseless assertions.

    You would be much better served by asking questions about how climate models work, which you obviously do not understand, than incorrectly complaining that those models do not work properly.  Real Climate has several basic links on how climate models work.

  9. Michael Sweet, I think my tone is reasonably passive, suggesting I'm open to correction. I am not a denier; I present myself as an educated skeptic, and I am here to learn. When I make a mistake, people expect me to have a PhD in GCMs. I think the point of calibration can be a little subtle, but it doesn't take away from my post.

    I did read about AR4 models:

    http://web.archive.org/web/20100322194954/http://tamino.wordpress.com/2010/01/13/models-2/

    Here they say

    ''But there’s a sizeable spread in the model outputs as well (especially in the early 20th century, since these results are set to a 1980-2000 baseline).''

    In this case the spread is very small in the 1980-2000 period. If it's just blind physics, it is improbable that the spread of the results would behave so differently in this time period.

    Also, in the basic section of the rebuttal above, they say models use hindcasting. So if they test with hindcasting and it doesn't work, one tries to improve their model. This is what I call calibrating. Am I wrong on this?

    I think I see the word 'calibrate' has a charged meaning, because deniers use it. Well, I didn't know that. LOL. Anyway, I presented my definition of calibrate. Please note the above question mark indicating a question; you invited me to ask.

     

  10. Razo writes "When I make a mistake, people expect me to have a PHD in GCM. "

    and yet on another thread, (s)he writes " I am not in the climate field, but I do have experience with numerical modelling. I simply think GCM should be able to predict trade winds and ocean warming."

    where (s)he is explicitly claiming to have a background that provides a position to criticise GCMs. As it happens, the criticism shows a fundamental lack of understanding of what GCMs are designed to do and what can be expected of them. Razo, you need to understand first and then criticise. Asking questions is a better way of learning than making assertions or criticisms, especially when you are not very familiar with the problems. Your tone is not "reasonably passive", as the quote above demonstrates.

  11. BTW Razo, baselining and calibration are not the same thing. Yes, the variability in the baselining period should be expected to underestimate the true variability of the model projections, even if the model physics is 100% correct.

  12. Well, here are a few examples of the use of the word 'calibration' or synonyms in climate change literature, found by simply googling 'calibration climate change model'. I think my use of the term and the idea are reasonably in line with the scientific community. Where am I going wrong on this? I have not read each of these exhaustively; I am only showing the use of the expression.

    1)http://www.iac.ethz.ch/groups/schaer/research/reg_modeling_and_scenarios/clim_model_calibration

    This is a Swiss institute for atmospheric and climate science:

    ''The tuning of climate models in order to match observed climatologies is a common but often concealed technique. Even in physically based global and regional climate models, some degree of model tuning is usually necessary as model parameters are often poorly confined. This project tries to develop a methodological framework allowing for an objective model tuning with a limited number of (expensive) climate model integrations.''

    2)

    http://journals.ametsoc.org/doi/abs/10.1175/2011BAMS3110.1

    American Meteorological Society

    ''Calibration Strategies: A Source of Additional Uncertainty in Climate Change Projections''

     

    3)http://www.academia.edu/4210419/Can_climate_models_explain_the_recent_stagnation_in_global_warming

    An article titled ''Can climate models explain the recent stagnation in global warming?''

    ''In principle, climate model sensitivities are calibrated by fitting the climate response to the known seasonal and latitudinal variations in solar forcing, as well as by the observed climate change to increased anthropogenic forcing over a longer period, mostly during the 20th century. It would be difficult to modify the model calibration significantly to reproduce the recent global warming slow down while still satisfying these other major constraints.''

  13. Razo,

    In the second question at Real Climate's FAQ on Global Climate Models they say:

    "Are climate models just a fit to the trend in the global temperature data? No. Much of the confusion concerning this point comes from a misunderstanding stemming from the point above. Model development actually does not use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions), and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used 'as is' in hindcast experiments for the 20th Century." (http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/)

    You say in reply to me: "If its just blind physics, it is improbable that the spread of the results would behave so differently in this time period."  It appears to me that you are saying that the results are too good to be based on just the physics.  

    I am not an expert in this area, but it seems to me that you are confusing your definitions of tuning and calibration for these models because you have not read the background material. This is very common, and most of the experienced posters have seen this many times. As I understand Tamino's reference to baselining, they are aligning the data from 1980-2000 for comparison; they did not use that data to tune the models. Perhaps the alignment you referred to above comes from how the data are graphed for comparison. Read the above Real Climate reference.

    In your post at 712, your first and second references are to Regional Climate Models, not Global Climate models.  They are not done the same way.  You need to clear up in your mind what you want to discuss.  Since you mentioned the "hiatus", it previously appeared that you were discussing Global Climate models.  

    Hans Von Storch is a respected scientist, but there are many different opinions on how well Global Climate Models are performing.  Tamino states at the end of the post you linked above "The outstanding agreement holds not just for the 20th century, but into the 21st as well — putting the lie to claims that recent observations somehow “falsify” IPCC model results."  (my emphasis) That was in 2010, but Tamino still feels that climate models are holding their own.

    There are several posters here that are more experienced than I am. They seem to be holding back. Perhaps if you make fewer statements about the physics being too good, people will be more friendly.

    You frequently make sweeping statements and confuse apples and oranges.  For example, your confusion of Regional and Global climate models above.  This makes you appear hostile.  Dikran Marsupial is very knowledgable and can answer your questions if you pose them in a less hostile tone.   

  14. Razo, none of the examples of the use of "calibration" you provide are referring to baselining, which rather makes my point: you are not using the term in the usual sense in climatology. There is a difference between baselining (which was the cause of the phenomenon you were describing) and calibration (a.k.a. tuning) in the sense of those three quotes. The models are not affected in any way by baselining; it is a method used in the analysis of model output and is not part of the model in any way.

    Please, do yourself a favour and try and learn a bit more before making assertions or criticisms (or at least pay attention to responses to your posts, such as mine at 711).

  15. I wasn't thinking that calibrating means curve fitting or trend fitting.

    I know there are different opinions. Some, who are not deniers, appear to agree with me.

    I can appreciate that baseline study is a little more complex. I think the topic of my post 706 is not affected by this.

    This is what I found on what a baseline climate is for. Amongst other things, they do say it's used for calibration.

    http://www.cccsn.ec.gc.ca/?page=baseline

    ''Baseline climate information is important for:

    -characterizing the prevailing conditions under which a particular exposure unit functions and to which it must adapt;

    -describing average conditions, spatial and temporal variability and anomalous events, some of which can cause significant impacts;

    -calibrating and testing impact models across the current range of variability;

    -identifying possible ongoing trends or cycles; and

    -specifying the reference situation with which to compare future changes.''

  16. Razo, climate modellers explicitly state that climate models have no skill at decadal-level prediction. You seem to think, from your experience as a numerical modeller, that they should be able to predict the trade winds (aka the ENSO cycle), but have you tried predicting weather beyond 5 days? Weather prediction (and ENSO prediction) are initial value problems limited by chaos theory. Climate prediction is a boundary value problem where internal variability is bounded by energy levels. (By analogy, you might get warm days in winter, but the average temperature for a month is going to be lower in winter than in summer.) Climate models are trying to predict what will happen to 30-year averages. Got a better way of doing it?

    They are incredibly useful tools in climate science, but if you want to evaluate the AGW hypothesis, then please do it properly rather than making uninformed stabs at things you haven't understood. Perhaps start with the IPCC chapters on the subject, where everything is referenced to the relevant published science?

  17. Razo - Baselines are not complex in the least. Simply put, you take two series and, using a baseline period, adjust them so that they have the same mean over that period, in order to see how they change relative to one another.

    For example, comparing GISTEMP and HadCRUT4 without adjusting for the fact that they use different baselines:

    GISTEMP/HadCRUT4

    And by setting them to a common baseline of 1980-1999:

    GISTEMP/HadCRUT4 common 1980-1999 baseline

    A common baseline is really a requirement for comparing two data series.
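The adjustment KR describes can be sketched in a few lines of Python (the numbers and the `rebaseline` helper are made up for illustration; this is not any library's API):

```python
def rebaseline(series, years, base_start, base_end):
    """Shift a series so its mean over [base_start, base_end] is zero."""
    base = [v for y, v in zip(years, series) if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [v - offset for v in series]

years = list(range(1970, 2011))
# Two hypothetical records of the same underlying signal, reported
# against different reference periods, hence offset by a constant.
signal = [0.02 * (y - 1970) for y in years]
series_a = [v + 0.30 for v in signal]   # anomalies relative to a colder baseline
series_b = [v - 0.10 for v in signal]   # anomalies relative to a warmer baseline

a = rebaseline(series_a, years, 1980, 1999)
b = rebaseline(series_b, years, 1980, 1999)
# After re-baselining to 1980-1999, the two series coincide:
# the constant offset carried no physical information.
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-9)
```

The offset removed here is exactly the arbitrary choice of reference period; the shape of each series, which is what carries the physics, is untouched.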

  18. Razo - I would point out that the reference you made here to "Can climate models explain the recent stagnation in global warming" is to an un-reviewed blog article. 

    The published peer-reviewed literature on global models, on the other hand, states something quite different, such as in Schmidt et al 2014. This is discussed in some detail here on SkS. Climate model projections are run with forcing projections, and that includes the CMIP3 and CMIP5 model sets discussed by the IPCC. And each model run represents a response to those projected forcings. 

    When (as per that paper) you incorporate observed, not projected, forcings, it is clear that the models are quite quite good. 

  19. Hey KR. Thank you.

    I wanted to point out to Dikran Marsupial that the point of my post 706 was that the model that includes man-made forcings only seems to be reducing the large error of the natural-forcings-only model in the 1850s when they are combined. This is about figures 1a, b, and c in the rebuttal on the intermediate page.

    So KR I have a couple questions: 1) Is the common base the measured values or the ensemble mean? 2) does it make a difference to the results if you change the baseline dates?

  20. michael sweet,

    When I am talking about calibration, I look at it more like this (slightly simplified version follows):

    Computer models are based on math. Math is equations. For curve fitting or trends, one guesstimates an equation based on a graph. For more physical models, the equations are derived from basic principles. In both cases some 'calibration' is done to establish the equation's parameters. In the latter case, sometimes it's easy, like g = 9.81 m/s².

    As I understand it, GCMs basically integrate the Navier-Stokes equations. These are big and complicated. They can, however, be broken up into different pieces, and a large part of the calibration can be done in parts. Some of the parameters are omitted, and some are estimated using yet another equation, and maybe curve fitting.

    On top of this, computers don't do math like humans. They usually break it into small steps which they perform fast. So the solution process itself is approximate.
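The "small steps" point can be seen with the simplest possible scheme: explicit Euler applied to dy/dt = -y, whose exact solution is known. This is a toy sketch of discretisation error, not how a GCM integrator actually works:

```python
import math

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t).
def euler(step, t_end=1.0):
    """Integrate dy/dt = -y from 0 to t_end with fixed-size explicit-Euler steps."""
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        y += step * (-y)   # one small approximate step
        t += step
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler(0.1) - exact)    # 10 steps
err_fine = abs(euler(0.001) - exact)    # 1000 steps
# Each step is approximate; shrinking the step shrinks the accumulated error.
print(err_fine < err_coarse)
```

The same trade-off (step size versus accumulated error, at vastly greater scale) is one of the approximations every numerical model lives with.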

  21. Razo: So? That doesn't make them unreliable or unskillful. You seem to be saying "it's complex, therefore they must be wrong". Much more importantly, you don't have to rely on models to verify AGW, nor to see that we have a problem. Empirically, sensitivity is most likely between 2 and 4.5. From bottom-up reasoning, you need a large unknown feedback to get sensitivity below 2. (Planck's law gets you 1.1, Clausius-Clapeyron gives you 2, with albedo to follow.) And as for models, the robust predictions from models seem to be holding up pretty well. (E.g. see here.)

  22. Razo,

    Calibration of Global Climate Models is difficult. I understand that they are not calibrated to match the temperature trend (either for forecast or hindcast). The equations are adjusted so that measured values like cloud height and precipitation are close to climatological averages for times when they have measurements (hindcasts). The temperature trends are an emergent property, not a calibrated property. This also applies to ENSO: when the current equations are implemented, ENSO emerges from the calculations; it is not a calibrated property.

    Exact discussions of calibration seem excessive to me. In 1894, Arrhenius calculated from basic principles, using only a pencil, and estimated the climate sensitivity as 4.5°C. This value was not calibrated or curve fitted at all; there was no data to fit to. The current range (from IPCC AR5) is 1.5-4.5°C, with a most likely value near 3 (IPCC does not state a most likely value). If the effect of aerosols is high, the value could be 3.5-4, almost what Arrhenius calculated without knowing about aerosol effects. If it were really difficult to model climate, how could Arrhenius have been so accurate when the stratosphere had not even been discovered yet? To support your claim that the models are not reliable you have to address Arrhenius' projection, made 120 years ago. If it is so hard to model climate, how did Arrhenius successfully do it? Examinations of other model predictions (click on the Lessons from Past Predictions box to get a long list) compared to what has actually occurred show scientists have been generally accurate. You are arguing against success.

    A brief examination of the sea level projections in the OP shows that they are too low. The IPCC has had to raise its projection for sea level rise in the last two reports and will have to significantly increase it again in the near future. Arctic sea ice collapsed decades before projections, and other effects (drought, heat waves) are worse than projected only a decade ago. Scientists did not even notice ocean acidification until the last 10 or 20 years. If your complaint is that the projections are too conservative, you may be able to support that.

  23. Razo wrote "I can appreciate baseline study is a little more complex. I think the topic of my post 706 is not effected by this."

    No, as KR pointed out, baselining is actually a pretty simple idea. It is a shame that you appear to be so resistant to the idea that you have misunderstood this, and are trying so hard to avoid listening to the explanation of why the good fit is obtained during the baseline period (and why it is nothing to do with the models themselves). You will learn very little this way, as most people don't have the patience to put up with that sort of behaviour. However, ClimateExplorer allows you to experiment with the baseline period to see what difference it makes to the ensemble. Here is a CMIP3 SRESA1B ensemble with a baseline period from 1900-1930:

    here is one with a baseline period of 1930-1960:

    1960-1990:

    1990-2020:

    I'm sure you get the picture.  Now the IPCC generally use a baseline period ending close to the present day, one of the problems with that is that it reduces the variance of the ensemble runs during the last 15 years, which makes the models appear less able to explain the hiatus than they actually are.

    Now as to why the observations are currently in the tails of the distribution of  model runs, well it could be that the models run too warm on average, or it could be that the models underestimate the variability due to unforced climate change, or a bit of both.  We don't know at the current time, but there is a fair amount of work going on to find out (although you will only find skeptics willing to talk about the "too warm" explanation).  The climate modellers I have discussed this with seem to think it is "a bit of both".  Does it mean the models are not useful or skillful?  No.

    Razo also wrote "I wanted to point out to Dikran Marsupial, that the point of my post 706 was that the model that includes man made forcings only seems to be reducing the large error of the natural forcing only model in the 1850s when they are combined."

    Well, perhaps you should have just asked the question directly. I suspect the reasons for this are twofold. Firstly, it is to a large extent the result of baselining (the baseline period for these models is 1880 to 1920): if you made the "error" of the "natural only" models smaller in the 1850s, that would make the difference in the baseline period bigger than currently shown, and hence this is prevented by the baselining procedure. The same baselining causes the "anthropogenic model" to have large "errors" from the 1930s to the 1960s. The primary cause is baselining. Now, if you have a better model that includes both natural and anthropogenic forcings, you get a model that doesn't have these gross errors anywhere, because the warming over the last century and a half has had both natural and anthropogenic components. So this is no surprise.

    Now it is a shame that you didn't stop to find out what baselining is and why it is used when you first saw it on Tamino's blog, rather than carry on trying to criticise the models with incorrect arguments. Please take some time to do some learning, don't assume your background means you don't have to start at the beginning (as I had to), and dial the tone back a bit.

  24. @scaddenp, in fact the Navier-Stokes equations are absolutely non-predictable. This is what the whole deal with Lorenz is all about. In fact, we can't even integrate a simple 3-variable differential equation with any accuracy for anything but a small amount of time. Reference: http://www.worldscientific.com/doi/abs/10.1142/S0218202598000597

    Now, if we assume that climate scientists are unbiased (I've been in the business; this would be a somewhat ridiculous assumption), the models would provide our BEST GUESS. But they are of absolutely NO predictive value, as anyone who has integrated PDEs where the results matter (i.e. engineering) knows.
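The Lorenz behaviour invoked here, and the bounded-attractor point raised in the replies below, can both be seen in a few lines. This is a toy sketch (explicit Euler, standard Lorenz parameters), not a claim about any GCM:

```python
# Two Lorenz trajectories started 1e-8 apart diverge rapidly (chaos), yet both
# remain on a bounded attractor: the details become unpredictable, the bounds do not.
def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz system (crude, but fine for a demo)."""
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

a = (1.0, 1.0, 1.0)
b2 = (1.0, 1.0, 1.0 + 1e-8)   # tiny perturbation of the initial condition
max_sep, max_coord = 0.0, 0.0
for _ in range(3000):          # 30 time units
    a, b2 = lorenz_step(a), lorenz_step(b2)
    sep = sum((p - q) ** 2 for p, q in zip(a, b2)) ** 0.5
    max_sep = max(max_sep, sep)
    max_coord = max(max_coord, *(abs(c) for c in a))

print(max_sep > 1.0)      # the 1e-8 difference has grown enormously
print(max_coord < 200.0)  # yet the trajectory never leaves a bounded region
```

This is the distinction drawn later in the thread: the initial value problem (which trajectory) is hopeless, while the boundary value problem (what region the system occupies, and its statistics) is not.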

  25. Oh, and this inability to integrate the model forward with accuracy doesn't even touch on the fact that the model itself is an extreme approximation of the true physics. Climate models are jam-packed with ad hoc parameterizations of physical processes. Now the argument (assuming the model was perfect) is that averages are computable even if the exact state of the climate in the future is not. It's a decent argument, and in general this is an arguable stance. However, there is absolutely no mathematical proof that the average temperature as a quantity of interest is predictable via the equations of the climate system, and there likely never will be. But, again, all of this is not a criticism of climate modelling. They do the best they can. The future is uncertain nonetheless.

  26. "The future is uncertain nonetheless."

    A statement of the exceedingly obvious. However, you have not written anything that would support the contention that the future is any more uncertain than the model projections state, or that the models are not useful or basically correct.

    "If we had observations of the future, we obviously would trust them more than models, but unfortunately …… observations of the future are not available at this time. (Knutson & Tuleya – 2005)."

  27. nickels - If you feel that the climate averages cannot be predicted due to Lorenzian chaos, I suggest you discuss this on the appropriate thread. Short answer: chaotic details (weather) cannot be predicted far ahead at all, because nonlinear chaos amplifies slightly varying and uncertain starting conditions. But the averages are boundary problems, not initial value problems; they are strongly constrained by energy balances, and far more amenable to projection.

    Steve Easterbrook has an excellent side-by-side video comparison showing global satellite imagery versus the global atmospheric component of CESM over the course of a year. Try identifying which is which, and whether there are significant differences between them, without looking at the captions! Details (weather) are different, but as this model demonstrates, the patterns of observations are reproduced extremely well - and that is based upon large-scale integration of the Navier-Stokes equations. The GCMs perform just as well regarding regional temperatures over the last century:

    IPCC AR4 Fig. 9.12, regional temperatures modeled with/without anthropogenic forcings

    [Source]

    Note the average temperature (your issue) reconstructions, over a 100+ year period, and how observations fall almost entirely within the model ranges. 

    Q.E.D., GCMs present usefully accurate representations of the climate, including regional patterns - as generated by the boundary constraints of climate energies. 

    ---

    Perhaps SkS could republish Easterbrook's post? It's an excellent visual demonstration that hand-waving claims about chaos and model inaccuracy are nonsense. 

  28. So with regards to baselines.

    I can understand the need to do it to compare models, less so with perturbations of the same model. But I'm surprised that people use it when comparing models with the actual data. It's one thing to calculate and present an ensemble mean; it's quite another to offset model runs or different models to match the mean of the baseline. This becomes an arbitrary adjustment, not based on any physics. Could you explain this to me please?

    Also, Dikran you say

    "Now the IPCC generally use a baseline period ending close to the present day, one of the problems with that is that it reduces the variance of the ensemble runs during the last 15 years, which makes the models appear less able to explain the hiatus than they actually are."

    You seem to be saying a higher variance is better. Having the hiatus within the 95% confidence interval is a good thing, but a narrower interval is better if you want to more accurately predict a number, or justify a trend.

    Another thing to add: as I understand it, if the projected values are less than 3 times the variance, one says there is no result. If it is over 3 times, one says there is a trend, and not until the ratio is 10 does one quote a value.  Looking at the variability caused by changing the baseline, as well as the height of the red zones in post 723, the variance appears to be about 2.0°C, and the range of values is about 6.0°C (from 1900 to 2100). Can one use these same rules here?

  29. Razo wrote "But I'm suprised that people use it when comparing models with the actual data."

    Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons.  There is also the point that the observations are not necessarily observations of exactly the same thing projected by the models (e.g. limitations in coverage etc.) and baselining helps to compensate for that to an extent.

    "its quite another to offset model runs or different models to match the mean of the baseline."

    This is not what is done; a constant is subtracted from each model run and each set of observations independently such that it has a zero offset during the baseline period.  This is a perfectly reasonable thing to do in research on climate change, as it is the anomalies from a baseline in which we are primarily interested.
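    As an illustrative sketch of that baselining procedure (the series and numbers here are invented, not real model output):

```python
# Anomaly baselining: subtract each series' own mean over a common
# reference period, so model runs and observations are compared as
# anomalies rather than as absolute temperatures.

def to_anomalies(series, years, base_start, base_end):
    """Return the series minus its mean over [base_start, base_end]."""
    base = [v for y, v in zip(years, series) if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [v - offset for v in series]

years = list(range(1961, 1991))
model_run = [14.2 + 0.02 * i for i in range(30)]  # absolute temps in °C (invented)
obs       = [13.6 + 0.02 * i for i in range(30)]  # cooler level, same trend (invented)

m_anom = to_anomalies(model_run, years, 1961, 1990)
o_anom = to_anomalies(obs, years, 1961, 1990)

# The 0.6 °C difference in absolute level vanishes; the shared trend remains.
print(max(abs(a - b) for a, b in zip(m_anom, o_anom)))  # effectively zero
```

    Nothing in the model is changed; a single constant per series is removed so that what remains is the response to forcings, which is what the models predict well.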

    "You seem to be saying a higher variance is better. "

    No, I am saying that an accurate estimate of the variance (which is essentially an estimate of the variability due to unforced climate change) is better, and that the baselining process has an unfortunate side effect in artificially reducing the variance, which we ought to be aware of in making model-observation comparisons.
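    A toy demonstration of that side effect (invented ensemble, not real CMIP output): re-baselining every run to a recent window forces the runs together there, shrinking the apparent ensemble spread inside that window.

```python
# Toy ensemble: each run is a shared trend plus an accumulating random
# walk (standing in for unforced variability). Re-baselining every run
# to a recent window pins the runs together there, shrinking the
# apparent ensemble spread inside that window.
import random

random.seed(42)
N_RUNS, N_YEARS = 20, 100
BASE = range(85, 100)  # baseline = the last 15 "years"

def make_run():
    level, out = 0.0, []
    for t in range(N_YEARS):
        level += random.gauss(0, 0.1)   # unforced random walk
        out.append(0.01 * t + level)    # plus a shared trend
    return out

runs = [make_run() for _ in range(N_RUNS)]

rebased = []
for r in runs:
    base_mean = sum(r[t] for t in BASE) / len(BASE)
    rebased.append([v - base_mean for v in r])

def spread(ensemble, t):
    vals = [r[t] for r in ensemble]
    mu = sum(vals) / len(vals)
    return (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5

t = 92  # a year inside the baseline window
print(spread(runs, t), spread(rebased, t))  # the second is much smaller
```

    The run-to-run variability has not actually decreased; the baselining has merely hidden it near the baseline window, which is why a late baseline can make the models look less able to span the hiatus than they really are.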

    "Having the hiatus within the 95% confidence interval is a good thing, but a narrower interval is better if you want to more accurately predict a number, or justify a trend."

    No, you are fundamentally missing the point of the credible interval, which is to give an indication of the true uncertainty in the projection.  It is what it is; neither broader nor narrower is "better": what you want is for it to be accurate.  Artificially making the intervals narrower as a result of baselining does not make the projection more accurate, it just makes the interval a less accurate representation of the uncertainty.

    "Another thing to add, as I understand if the projected values are less than 3 times the variance, one says there is no result."


    No, one might say that the observations are consistent with the model (at some level of significance), however this is not a strong comment on the skill of the model.

    "If it is over 3 times one says there is a trend, and not until the ratio is 10, does one quote a value."

    No, this would not indicate a "trend" simply because an impulse (e.g. the 1998 El-Nino event) could cause such a result.  One would instead say that the observations were inconsistent with the models (at some level of significance).  Practice varies about the way in which significance levels are quoted and I am fairly confident that most of them would have attracted the ire of Ronald Aylmer Fisher.

    "Can one use thse same rules here?"

    In statistics, it is a good idea to clearly state the hypothesis you want to test before conducting the test as the details of the test depend on the nature of the hypothesis.  Explain what it is that you want to determine and we can discuss the nature of the statistical test.

  30. Dikran Marsupial  wrote "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons."

    That's a kind of calibration. I understand the need. People here were trying to tell me that no such thing was happening, and that it's just pure physics. I didn't know exactly how it was calculated, but I expected it. I know very well that "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute...".

     

    "a constant is subtracted from each model run."

    That's offsetting. Please don't disagree. It's practically the OED definition. Even the c in y=mx+c is sometimes called an offset.

     

    "neither broader nor narrower is "better", what you want is for it to be accurate"

    I said narrower is better if you want to predict a number or justify a trend. I mean this regardless of the issue of the variance in the baseline region.

  31. Razo wrote "Thats a kind of calibration."

    Sorry, I have better things to do with my time than to respond to tedious pedantry used to evade discussion of the substantive points.  You are just trolling now. 

    Just to be clear: calibration or tuning refers to changes made to the model in order to improve its behaviour.  Baselining is a method used in the analysis of the model's output (and it does not change the model itself in any way).

    "I said, narrower is better if you want to  predict a number, or justify a trend. "

    No, re-read what I wrote.  What you are suggesting is lampooned in the famous quote "he uses statistics in the same way a drunk uses a lamp post - more for support than illumination".  The variability is what it is, and a good scientist/statistician wants to have as accurate an estimate as possible and then see what conclusions can be drawn from the results.

  32. Have the points in this video ever been addressed here?

    Climate Change in 12 Minutes - The Skeptics Case

    https://www.youtube.com/watch?v=vcQTyje_mpU

    From my readings thus far, I agree with this evaluation of the accuracy and utility of current climate models:

    Part of a speech delivered by David Victor of the University of California, San Diego, at the Scripps Institution of Oceanography as part of a seminar series titled “Global Warming Denialism: What science has to say” (Special Seminar Series, Winter Quarter, 2014):

    "First, we in the scientific community need to acknowledge that the science is softer than we like to portray. The science is not “in” on climate change because we are dealing with a complex system whose full properties are, with current methods, unknowable. The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty. I can understand why a politician says the science is settled—as Barack Obama did…in the State of the Union Address, where he said the “debate is over”—because if your mission is to create a political momentum then it helps to brand the other side as a “Flat Earth Society” (as he did last June). But in the scientific community we can’t pretend that things are more certain than they are."

    Also, any comments on this paper:

    Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

    Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

    SCIENCE * VOL. 263 * 4 FEBRUARY 1994

    Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.

    http://courses.washington.edu/ess408/OreskesetalModels.pdf

  33. Winston2014, two things:

    1. What does Victor's point allow you to claim?  By the way, Victor doesn't address utility in the quote.

    2. Oreskes' point is a no-brainer, yes?  No one in the scientific community disagrees, or if they do, they do it highly selectively (hypocritically).  Models fail.  Are they still useful?  Absolutely: you couldn't drive a car without using an intuitive model, and such models fail regularly.  The relationship between climate models and policy is complex.  Are models so inaccurate that they're not useful?  Can we wait until we get a degree of usefulness that's satisfactory to even the most "skeptical"?  Suppose, for example, that global mean surface temperature rises at 0.28C per decade for the next decade.  This would push the bounds of the AR4/5 CMIP3/5 model run ranges.  What should the policy response be ("oh crap!")?  What if that was followed by a decade of 0.13C per decade warming?  What should the policy response be then ("it's a hoax")?

    Models will drive policy; nature will drive belief.  

  34. DSL,

    "Models fail. Are they still useful?"

    Not for costly policies until the accuracy of their projections is confirmed. From the 12 minute skeptic video, it doesn't appear that they have been confirmed to be accurate where it counts, quite the opposite. To quote David Victor again, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy."

    "Models will drive policy"

    Until they are proven more accurate than I have seen in my investigations thus far, I don't believe they should.

    The following video leads me to believe that even if model projections are correct, it would actually be far cheaper (according to official figures) to adapt to climate change than to attempt to prevent it, based upon the "success" thus far of the Australian carbon tax:

    The 50 to 1 Project

    https://www.youtube.com/watch?v=Zw5Lda06iK0

  35. Winston @734, the claim that the policies will be costly is itself based on models, specifically economic models.  Economic models perform far worse than do climate models, so if models are not useful "... for costly policies until the accuracy of their projections is confirmed", the model based claim that the policies are costly must be rejected. 

  36. Victor, when you say it's cheaper to adapt, you're falling into an either-or fallacy.  Mitigation and adaptation are the extreme ends of a range of action.  Any act you engage in to reduce your carbon footprint is mitigation.  Adaptation can mean anything from doing nothing and letting the market work things out to engaging government-organized and subsidized re-organization of human life to create the most efficient adaptive situation.   If you act only in your immediate individual self-interest, with no concern for how your long-term individual economic and political freedoms are constructed socially in complex and unpredictable ways, then your understanding of adaptation is probably the first of my definitions.  If you do understand your long-term freedoms as being socially constructed, you might go for some form of the second, but if you do, you will--as Tom points out--be relying on some sort of model, intuitive or formal.

    Do you think work on improving modeling should continue?  Or should modeling efforts be scrapped? 

  37. Victor -> Winston — where "Victor" came from, I have no idea.

  38. Winston2014,

    Your 12-minute video talks about model uncertainty and climate sensitivity uncertainty. However, it cherry-picks only the lower "skeptic" half of the ECS uncertainty range. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5 degrees - up to 8 degrees - although with low probability.

    The cost of global warming is highly non-linear - very costly at the high end of the tail - essentially a catastrophe above 4°C. Therefore, in order to formulate the policy response, you need to convolve the probability function with the potential cost function, integrate it, and compare the result with the cost of mitigation.

    Because we can easily adapt to changes up to, say, 1°C, the cost of low sensitivity is almost zero - it does not matter. What really matters is the long tail of the potential warming distribution, because its high cost - even at low probability - results in high risk, demanding a serious preventative response.
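    A toy version of that probability-weighted cost calculation (all probabilities and costs here are invented for illustration; this is not a real ECS distribution or damage function):

```python
# Probability-weighted expected damage from a hypothetical ECS
# distribution. The cost curve is strongly convex, so the low-probability
# high-sensitivity tail dominates the expectation.

# Invented ECS distribution (°C : probability) with a long upper tail.
ecs_pdf = {1.5: 0.10, 2.0: 0.20, 3.0: 0.40, 4.5: 0.20, 6.0: 0.08, 8.0: 0.02}

def cost(ecs):
    # Invented convex damage function: negligible below ~1 °C of warming,
    # rising steeply toward catastrophe at the high end.
    return 0.5 * max(0.0, ecs - 1.0) ** 3

expected_damage = sum(p * cost(e) for e, p in ecs_pdf.items())
tail_damage = sum(p * cost(e) for e, p in ecs_pdf.items() if e >= 4.5)

print(round(expected_damage, 2))                # 14.42
print(round(tail_damage / expected_damage, 2))  # 0.88: the tail dominates
```

    Even though sensitivities of 4.5°C and above carry only 30% of the probability in this invented example, they account for nearly 90% of the expected damage - which is the point about the tail driving the risk calculation.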

    BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement, repeated after that 12-minute video, that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from multiple lines of evidence, paleoclimate being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?" question. The answer to that question is: model output, even if it fails, is irrelevant here.

    Incidentally, concentrating on models' possible failure due to warming overestimation (as in your 12-minute video) while ignoring that models may also fail (more spectacularly) by underestimating other aspects of global warming (e.g. Arctic ice melt) indicates cherry-picking on the single aspect that suits your agenda. If you were not biased in your objections, you would have noticed that model departures from observations are much larger for sea ice melt than for surface temperatures, and concentrated your critique on that aspect.

  39. On the costs of mitigation, an IEA Special Report, "World Energy Investment", is just out that puts mitigation costs in the context of the $48 trillion of investment required to keep the lights on under a BAU-esque scenario. They suggest the additional investment required to allow a +2°C future rather than a +4°C BAU-esque future is an extra $5 trillion on top.

  40. "The answer to that question is: models output, even if they fail, is irrelevant here."

    Not in politics and public opinion, which in the real world is what drives policy, as politicians respond to the dual forces of lobbyists and the desire to project to the voting public that they're "doing something to protect us." Model projections of doom drive the public perception side. The claim that policy is primarily driven by science is, I think, terribly naive. If that were the case, the world would be a wonderfully different place.

    "Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry picks the lower "skeptic" half of ECS uncertainty only. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5degrees - up to 8degrees - although with low probability."

    But isn't climate sensitivity uncertainty what it's all about?

    "Incidentally, concentrating on models' possible failure due to warming overestmation (as in your 12min video) while ignoring that models may also fail (more spectacularly) by underestimating over aspects of global warming (e.g. arctic ice melt), indicates cherry picking on a single aspect only that suits your agenda."

    Exactly, and skeptics can then use that to point out that the projections themselves are likely garbage. No one has yet commented on the rather damning paper I posted a link to in that respect:

    Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

    Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

    SCIENCE * VOL. 263 * 4 FEBRUARY 1994

    Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.

    http://courses.washington.edu/ess408/OreskesetalModels.pdf

    Also:

    Twenty-three climate models can't all be wrong...or can they?

    http://link.springer.com/article/10.1007/s00382-013-1761-5

    Climate Dynamics
    March 2014, Volume 42, Issue 5-6, pp 1665-1670
    A climate model intercomparison at the dynamics level
    Karsten Steinhaeuser, Anastasios A. Tsonis

    According to Steinhaeuser and Tsonis, today "there are more than two dozen different climate models which are used to make climate simulations and future climate projections." But although it has been said that "there is strength in numbers," most rational people would still like to know how well this specific set of models does at simulating what has already occurred in the way of historical climate change, before they would be ready to accept what the models predict about Earth's future climate. The two researchers thus proceed to do just that. Specifically, they examined 28 pre-industrial control runs, as well as 70 20th-century forced runs, derived from 23 different climate models, by analyzing how well the models did in hind-casting "networks for the 500 hPa, surface air temperature (SAT), sea level pressure (SLP), and precipitation for each run."

    In the words of Steinhaeuser and Tsonis, the results indicate (1) "the models are in significant disagreement when it comes to their SLP, SAT, and precipitation community structure," (2) "none of the models comes close to the community structure of the actual observations," (3) "not only do the models not agree well with each other, they do not agree with reality," (4) "the models are not capable to simulate the spatial structure of the temperature, sea level pressure, and precipitation field in a reliable and consistent way," and (5) "no model or models emerge as superior."

    In light of their several sad findings, the team of two suggests "maybe the time has come to correct this modeling Babel and to seek a consensus climate model by developing methods which will combine ingredients from several models or a supermodel made up of a network of different models." But with all of the models they tested proving to be incapable of replicating any of the tested aspects of past reality, even this approach would not appear to have any promise of success.

  41. Have these important discoveries been included in the models? Considering that it is believed that bacteria generated our initial oxygen atmosphere, a bacterium that metabolizes methane should be rather important when considering greenhouse gases. As climate changes, how many more stagnant, low-oxygen water habitats for them will emerge?

    Bacteria Show New Route to Making Oxygen

    http://www.usnews.com/science/articles/2010/03/25/bacteria-show-new-route-to-making-oxygen

    Excerpt:

    Microbiologists have discovered bacteria that can produce oxygen by breaking down nitrite compounds, a novel metabolic trick that allows the bacteria to consume methane found in oxygen-poor sediments.

    Previously, researchers knew of three other biological pathways that could produce oxygen. The newly discovered pathway opens up new possibilities for understanding how and where oxygen can be created, Ettwig and her colleagues report in the March 25 (2010) Nature.

    “This is a seminal discovery,” says Ronald Oremland, a geomicrobiologist with the U.S. Geological Survey in Menlo Park, Calif., who was not involved with the work. The findings, he says, could even have implications for oxygen creation elsewhere in the solar system.

    Ettwig’s team studied bacteria cultured from oxygen-poor sediment taken from canals and drainage ditches near agricultural areas in the Netherlands. The scientists found that in some cases the lab-grown organisms could consume methane — a process that requires oxygen or some other substance that can chemically accept electrons — despite the dearth of free oxygen in their environment. The team has dubbed the bacteria species Methylomirabilis oxyfera, which translates as “strange oxygen-producing methane consumer.”

    --------

    Considering that many plants probably evolved at much higher CO2 levels than are found at present, the result of this study isn't particularly surprising, but has it been included in climate models? Have the unique respiration changes with CO2 concentration for every type of plant on Earth been determined, and can the percentage of ground cover of each type be projected as climate changes?

    High CO2 boosts plant respiration, potentially affecting climate and crops

    http://www.eurekalert.org/pub_releases/2009-02/uoia-hcb020609.php

    Excerpt:

    "There's been a great deal of controversy about how plant respiration responds to elevated CO2," said U. of I. plant biology professor Andrew Leakey, who led the study. "Some summary studies suggest it will go down by 18 percent, some suggest it won't change, and some suggest it will increase as much as 11 percent." 

    Understanding how the respiratory pathway responds when plants are grown at elevated CO2 is key to reducing this uncertainty, Leakey said. His team used microarrays, a genomic tool that can detect changes in the activity of thousands of genes at a time, to learn which genes in the high-CO2 plants were being switched on at higher or lower levels than those of the soybeans grown at current CO2 levels.

    Rather than assessing plants grown in chambers in a greenhouse, as most studies have done, Leakey's team made use of the Soybean Free Air Concentration Enrichment (Soy FACE) facility at Illinois. This open-air research lab can expose a soybean field to a variety of atmospheric CO2 levels – without isolating the plants from other environmental influences, such as rainfall, sunlight and insects.

    Some of the plants were exposed to atmospheric CO2 levels of 550 parts per million (ppm), the level predicted for the year 2050 if current trends continue. These were compared to plants grown at ambient CO2 levels (380 ppm).

    The results were striking. At least 90 different genes coding the majority of enzymes in the cascade of chemical reactions that govern respiration were switched on (expressed) at higher levels in the soybeans grown at high CO2 levels. This explained how the plants were able to use the increased supply of sugars from stimulated photosynthesis under high CO2 conditions to produce energy, Leakey said. The rate of respiration increased 37 percent at the elevated CO2 levels.

    The enhanced respiration is likely to support greater transport of sugars from leaves to other growing parts of the plant, including the seeds, Leakey said.

    "The expression of over 600 genes was altered by elevated CO2 in total, which will help us to understand how the response is regulated and also hopefully produce crops that will perform better in the future," he said.

    --------

    I could probably spend days coming up with examples of greenhouse gas sinks that are most likely not included in current models. Unless you fully understand a process, you cannot accurately “model” it. If you understand, or think you understand, 1,000 factors about the process but there are another 1,000 factors you only partially know about, don't know about, or have incorrectly deemed unimportant in a phenomenally complex process, there is no possibility whatsoever that your projections from the model will be accurate, and the further out you go in your projections, the less accurate they will probably be.

    The current climate models certainly do not have all the forces that create changes in the climate integrated, and there are who knows how many more factors that have not even been realized as yet. I suspect there are a huge number of them if my reading about the newly discovered climate relevant factors discovered almost weekly is anything to judge by. Too little knowledge, too few data points or proxy data points with uncertain accuracy lead to a "Garbage in - Garbage models - Garbage out" situation.

  42. Can I suggest that we ignore Winston's most recent post.  The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gases in the atmosphere.

    Winston has not provided this, and I suspect he cannot (which in itself would answer the question of whether they were included in the models, and why).  Winston's post comes across as searching for some reason, any reason, to criticize the models, and is already clutching at straws. I suggest DNFTT.

     

  43. "BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement repeated after that 12min video that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from mutiple lines of evidence, e.g. paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?""

    Equilibrium Climate Sensitivity

    http://clivebest.com/blog/?p=4923

    Excerpt from comments:

    "The calculation of climate sensitivity assumes only the forcings included in climate models and do not include any significant natural causes of climate change that could affect the warming trends."

    If it's all about ECS and ECS is "determined" via the adjustment of models to track past climate data, how are models and their degree of accuracy irrelevant?

  44. "I suggest DNFTT"

    I'm not trolling. It's called playing the proper role of a skeptic, which is asking honest questions. Emphasis is mine.

    Here's my main point in which I am in agreement with David Victor, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for _policy_. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty."

  45. "Can I suggest that we ignore Winstons most recent post. The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gasses in the atmosphere."

    I didn't say they necessarily should be; my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet. 

  46. Winston2014:

    You can propose, suppose, or, as you say, "show" as many "factors that aren't modelled that may very well be highly significant" as you like.

    Unless and until you have some cites showing what they are and why they should be taken seriously, you're going to face some serious... wait for it... skepticism on this thread.

    You can assert you're "playing the proper role of a skeptic" if you like. But as long as you are offering unsupported speculation about "factors" that might be affecting model accuracy, in lieu of (a) verifiable evidence of such factors' existence, (b) verifiable evidence that climatologists and climate modellers haven't already considered them, and (c) verifiable evidence that they are "highly significant", I think you'll find that your protestations of being a skeptic will get short shrift.

    Put another way: there are by now 15 pages of comments on this thread alone, stretching back to 2007, of self-styled "skeptics" trying to cast doubt on or otherwise discredit climate modelling. I'm almost certain some of them have also resorted to appeals to "factors that aren't modelled that may very well be highly significant", without doing the work of demonstrating that these appeals have a basis in reality.

    What have you said so far that sets you apart?

    Response:

    [JH] Please specify to whom your comment is directed.  

  47. What makes you think uncertainty is your friend? Suppose the real sensitivity is 4.5, or 8? You seem to be overemphasising uncertainty, seeking minority opinions to rationalize a "do nothing" predisposition. Try citing peer-reviewed science instead of distortions by deniers.

    Conservatively, we know how to live in a world with slow rates of climate change and 300 ppm of CO2. 400 ppm was last seen when we didn't have ice sheets.

    I agree science might change - the Flying Spaghetti Monster or the Second Coming might happen instead - but that is not the way to do policy. I doubt you would take that attitude to uncertainty in medical science if it came to treating a personal illness.

    Response:

    [JH] Please specify to whom your comment is directed.  

  48. Also, if you are actually a skeptic, how about trying some of that skepticism on the disinformation sites you seem to be reading?

  49. Winston: "I didn't say they necsessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet."


    This is why you're not being taken seriously.  You're comparing the potential impact of new bacterial growth sites over decades to centuries with the impact of the Great Oxygenation Event, when a brand new type of life was introduced to the globe over a period of millions of years. 

    You fail to recognize the precise nature of the change taking place.  Rising sea level and changing land storage of freshwater will be persistent.  This is not a step change where we go from one type of land-water transitional space to another; it is a persistently changing transitional space.  Thus, adaptation in these spaces must be persistent.  How many suitable habitats will be destroyed for every one created?  It's inevitable that some--many--will be destroyed. 

    Further, what would additional oxygen mean for the radiative forcing equation?  Note that human burning of fossil carbon has been taking oxygen out of the atmosphere for the last 150 years.

    Your plant argument belongs on another page.

  50. Winston wrote "I didn't say they necsessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet. "

    That is trolling.  Come back when you can think of a factor that is likely to have a non-negligible impact on climate that is not included in the models, and until then stop wasting our time by trying to discuss things that obviously aren't.

    "I'm not trolling. It's called playing the proper role of a skeptic which is asking honest questions."

    It isn't an honest question, as you can't propose a factor that has a non-negligible impact on climate; you just posit that such factors exist with no support.  That is not skepticism.


© Copyright 2017 John Cook