
How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable
"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice, and the sun. This is clearly a very complex task, so models are built to estimate trends rather than individual events. For example, a climate model can tell you it will be cold in winter, but it can't tell you what the temperature will be on a specific day; that's weather forecasting. Climate trends are weather averaged out over time, usually 30 years. Trends are important because they smooth out single events that may be extreme but quite rare.
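The distinction between weather and a climate trend can be sketched in a few lines. This is a toy illustration with synthetic data (the warming rate and noise level are made up for the example); the 30-year window is the conventional climate-normal period mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)

# Synthetic annual temperature anomalies: a slow warming trend
# plus large year-to-year "weather" noise (illustrative values only).
anomalies = 0.008 * (years - 1900) + rng.normal(0, 0.3, years.size)

# A 30-year running mean smooths out single extreme years,
# leaving the underlying climate trend.
window = 30
trend = np.convolve(anomalies, np.ones(window) / window, mode="valid")

# The smoothed series varies far less than the raw weather does.
print(anomalies.std(), trend.std())
```

The point of the average is visible in the two standard deviations: single hot or cold years dominate the raw series but barely move the 30-year mean.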

Climate models have to be tested to find out whether they work. We can't wait 30 years to see if a model is any good; instead, models are tested against the past, against what we know happened. If a model can correctly reproduce trends from a starting point somewhere in the past, we can expect it to predict with reasonable confidence what might happen in the future.

So all models are first tested in a process called hindcasting: the models used to predict future global warming are run from a starting point in the past to see whether they can accurately reproduce known climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record showed that CO2 must cause global warming, because the models could not reproduce what had already happened unless the extra CO2 was included. All other known forcings are adequate to explain temperature variations prior to the last thirty years, but none of them can explain the rise over the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as yet unknown, forcings.
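The hindcast logic is essentially a held-out test: calibrate on the past, then check the calibration against data the fit never saw. A toy sketch of that idea, using entirely synthetic records (the CO2 curve, the assumed 3 C-per-doubling sensitivity, and the noise level are all invented for illustration; a real climate model is physics, not a curve fit):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2021)

# Synthetic CO2 and temperature records (illustration only, not real data):
# temperature responds logarithmically to CO2, plus weather noise.
co2 = 295 + (years - 1900) ** 2 * (119 / 14400)   # ppm, roughly 295 -> 414
true_sensitivity = 3.0                            # degC per doubling (assumed)
temp = true_sensitivity * np.log2(co2 / co2[0]) + rng.normal(0, 0.1, years.size)

# "Hindcast" test: calibrate on the pre-1970 record only, then check how
# well the calibrated relationship reproduces the held-out later record.
past = years < 1970
slope, intercept = np.polyfit(np.log2(co2[past] / co2[0]), temp[past], 1)
predicted = slope * np.log2(co2[~past] / co2[0]) + intercept

rmse = np.sqrt(np.mean((predicted - temp[~past]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f} degC")
```

If the calibrated model reproduces the held-out years to within the noise level, that is the kind of skill hindcasting looks for; if it drifts badly off the later data, the model fails the test.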

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. For example, here’s a graph of sea level rise:

Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits, or uncertainties, for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be continually refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.

Mainstream climate models have also accurately projected global surface temperature changes.  Climate contrarians have not.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

There's one chart often used to argue to the contrary, but it's got some serious problems, and ignores most of the data.

Christy Chart

Basic rebuttal written by GPWayne

Update July 2015:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Last updated on 31 December 2016 by pattimer. View Archives


Further reading


On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.



Comments 901 to 950 out of 1028:

  1. @MARodger:

    " happily junk 90% of it because it doesn't meet some level of precision..."

    You're putting words in my mouth I didn't say. What I did say, and have said all along, is that the new ARGO data are much, much better. Have you looked at the 5-year data point maps in question? I didn't bother reading your whole post, just as I didn't bother reading the last part of Tom Curtis #897, and for the same reason: it's a non-quantitative hectoring lecture.

  2. @Tom Curtis #896:

    I did a similar exercise, but using the quarterly 0-2000 OHC from 2005 to 2014 inclusive, calculating a rolling 5 year trend of delta GJ/yr and converting to W/m^2. The average of the W/m^2 for the rolling 5 year trend was 0.69 (OHC to 2000 metres is 0.56 W/m^2, corrected by .58/.47), very close to your 0.71. The average of a rolling mean of the CMIP5 RCP4.5 ensemble energy imbalance was 0.98 W/m^2, the same as your number. So our numbers agree over the most recent period. They both show the models are on the hot side.
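The unit conversion in this exchange (an OHC trend in J/yr to a global-mean flux in W/m^2 on a TOA basis) is just division by the seconds in a year and the Earth's surface area. A sketch with a hypothetical trend value (the 0.58/0.47 scaling is Klapper's own factor from the comment above, reproduced here only to show the arithmetic):

```python
SECONDS_PER_YEAR = 3.156e7    # ~365.25 days
EARTH_AREA_M2 = 5.10e14       # total surface area of the Earth (TOA basis)

def ohc_trend_to_flux(trend_joules_per_year):
    """Convert an ocean-heat-content trend (J/yr) to a global-mean flux (W/m^2)."""
    return trend_joules_per_year / SECONDS_PER_YEAR / EARTH_AREA_M2

# Hypothetical 0-2000 m OHC trend of 0.9e22 J/yr (NODC tabulates OHC in 10^22 J):
flux = ohc_trend_to_flux(0.9e22)
print(f"{flux:.2f} W/m^2")                  # -> 0.56 W/m^2

# Scaling the 0-2000 m portion up to a whole-system estimate with the
# 0.58/0.47 correction quoted in the comment above:
print(f"{flux * 0.58 / 0.47:.2f} W/m^2")    # -> 0.69 W/m^2
```

With those round numbers the arithmetic reproduces the 0.56 and 0.69 W/m^2 figures quoted in the comment, which suggests the trend being discussed is on the order of 0.9e22 J/yr.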

    You are correct that over the most recent few years the gain in OHC has picked up.

    I didn't bother reading the last part of your very long winded post #897. It didn't seem to have much quantitative content and if I wanted long hectoring lectures I would be posting somewhere else.


    [JH] If you can't stand the heat, it's best you get out of the kitchen.

  3. @Moderator #902:

    "...[JH] If you can't stand the heat, it's best you get out of the kitchen..."

    I'm happy to respond to and discuss quantitative arguments. They are the only ones from which you learn something. My point was that hectoring lectures are a waste of time for both the lecturer and the lecturee. Ultimately our opinions about other people can't change the numbers, can they? Opinion has a place in these technical discussions, since oftentimes the numbers are highly subject to interpretation, but repetitive judgements on the motives of your opponent are a kind of opinion that doesn't move the discussion forward. If anything they move the discussion backward, since you're less likely to pay attention to someone who hectors you.

    I came here as a respite from the monotony of righteous opinion at the Guardian; however, it's not much different here. In some ways it's worse, in that technical blogs like Skeptical Science tend to be echo chambers, and technical opinion, such as you find it at the Guardian, tends to be more free-ranging than here.

    Hopefully this opinion will be allowed by the moderators.


    [JH] I believe it is time for you to move on to a different venue. You have exhausted our patience and have insulted both Tom Curtis and MA Rodger. Posting on Skeptical Science is a privilege, not a right.

  4. Klapper - After taking a look at your past comments, I would have to agree with scaddenp. You've apparently decided that CO2 warming, AGW, and all climate modelling are somehow incorrect, and have spent considerable time splitting hairs and decimal points attempting to find some reason to reject them. 

    Most recently you've been rejecting all OHC data before 2005; you've refused to consider recent volcanic activity that produced a different actual forcing history than was run in the models; you've both accepted and rejected model drift in the same (contradictory) argument; you've made impossible claims for perfection by ignoring uncertainties; and you have in essence dismissed the fact that models accurately project climate responses to forcings, which can be seen if you use historic forcings (Schmidt et al 2014, also discussed here).

    In short, you've been rejecting every piece of evidence contrary to your conclusions while misinterpreting data you feel supports them. The best description I've seen of such confirmation bias is Morton's demon - IMO you are so afflicted, and quite frankly it's not worth the time to watch you run in circles. 

    Klapper @902, science is not arithmetic.  Specifically, in science the numbers represent actual states in the world.  They have meaning.  If you base your arguments on rejecting or ignoring those meanings, as evidently you do, it makes discussion impossible.  In this case, you ignore the fact that empirical values must be set by observation, and consequently (apparently) are distressed that I cite observational papers in support of a claim that a particular number has a particular value (897:1); are distressed that I find it necessary to explain the basic meanings of certain scientific terms, and how that relates to a simple quantitative argument (897:2); apparently are distressed about my pointing to a simple observational proof about the meaning of some data (897:3); and are further distressed by my again having to explain some very basic facts about models and their relation to the world (897:4).  If you were distressed by my explaining these simple facts, be assured I was extremely annoyed that your logic-chopping responses made it necessary.  In all, my impression is that you are keen to seize on numbers that you think "refute AGW", and are happy to misrepresent those numbers by willful invalid comparisons (ie, comparing non-equal time periods) to exaggerate discrepancies, to leave out numbers that weaken your case (eg, the post-2010 NODC data), and above all, to ignore the meaning of the numbers if understanding it would require you to let go of your argument.  If you wish to stop wasting my time by discussing science you clearly do not understand on SkS, be my guest.

  6. Moderator Response @903.

    I think accusing Klapper of being "insulting" is a bit strong. I would accept that refusing even to read a replying comment is outrageously discourteous. But my characterising Klapper's comments as "pretty-much wrong on every point" without explanation - now that could be construed as being insulting, although if asked I am happy to provide such explanation.

    Klapper @901.

    I do not see that I did put words in your mouth. You are on record as objecting to the statement "We have OHC data of reasonable quality back to the 1960s" by saying "I've looked at the quarterly/annual sampling maps for pre-Argo at various depths and I wouldn't agree that's true for 0-700 m depth and certainly not true for 0-2000 m. There's a reason Lyman & Johnson 2014 (and other studies) don't calculate heat changes prior to 2004 for depths greater than 700 m; they are not very meaningful." If you are stating that pre-Argo 0-2000 m data is certainly not of reasonable quality, that trying to use it would be not very meaningful, this can only suggest that you are saying it is not useful data and thus it is junk. And others elsewhere have inferred the same from less well defined statements of your position on pre-Argo OHC data, inferences that did not meet objection from you.

    I have in the past seen the early OHC data point maps. Sparse data is not the same as no data, is it?

    And if you don't read something, how can you know what it is saying? Indeed, was I "hectoring" @900?

    Klapper @903: BS. Tom Curtis and others make a number of substantive arguments backed by the literature. If you don't want to address the arguments, it's likely because you can't do so without officially abandoning your pet theory. There is lots to be learned from lectures, by the way. And quantitative arguments have meaning only to the extent that we understand the quantities being argued. In this case, the quantities are physical. Physics always wins, eventually.

  8. All @90x:

    I have been reviewing Hansen et al 2011, and think it would be a useful exercise to update this paper with the very latest data on sea ice (from PIOMAS), continental ice melt (from GRACE), atmospheric heat gain (but from TLT, not surface temperature), and land heat flux (let's use HadCRUT4 as the delta T flux driver). Here's my first step, atmospheric heat gain (the easy one), using Hansen's method and replicating his work with one slight difference: I used monthly rather than annual data.

    You can see the two methods don't give a significant difference (you can cross-check my work against Hansen's Figure 12). You can also see that for the recent years, the atmospheric input is close to zero for both GISS SAT as the delta T and RSS TLT.

    Any comments? This looks like a weekend project, but a great learning opportunity all the same.

  9. @Klapper 908:

    In addition to the atmospheric component of global heat changes, I've worked up a graph showing the ice melt component. Arctic sea ice is from PIOMAS data, Greenland and Antarctica are from GRACE data and before that some "generic" estimates of ice mass loss for these 2 continental sheets. Antarctic ice melt is my own model based on average thickness and the delta in ice area. Crude but as you can see, either way ice melt is not a big component of the global heat flux equation.

    Compare this with Hansen et al 2011 Figure 12. Ice melt is a very small contribution to the global TOA energy imbalance, and recent increases in ice sheet melt are to a degree cancelled out by recent increases in global sea ice. Note I am using the same method as Hansen, a rolling 6 year trend, to calculate ice mass changes, and thereby heat fluxes on a global TOA basis.


    [RH] Reduced image site to fit page formatting.

  10. Klapper @909.

    Do remember there is significant land ice melting that isn't sat on either Greenland or Antarctica. See AR5 Figure 4.25 which suggests such sources added some 4,500Gt to ice loss in 18 years = 250Gt/y. That would add about 0.005 W/sq m to your totals.
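The ~0.005 W/m^2 figure follows from the latent heat of fusion of the melted ice, spread over the Earth's surface and over a year. A quick check of the arithmetic (constants are standard textbook values):

```python
LATENT_HEAT_FUSION = 3.34e5   # J/kg to melt ice
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.10e14       # total surface area (TOA basis)

# ~4,500 Gt over 18 years (AR5 Figure 4.25) is about 250 Gt/yr:
melt_kg_per_year = 250 * 1e12

# Energy required to melt that ice, as a continuous power...
power_w = melt_kg_per_year * LATENT_HEAT_FUSION / SECONDS_PER_YEAR

# ...expressed as a global-mean flux.
flux = power_w / EARTH_AREA_M2
print(f"{flux:.4f} W/m^2")    # ~0.005 W/m^2
```

The result, about 0.005 W/m^2, confirms that glacier melt outside Greenland and Antarctica is a tiny term in the global energy budget, though not literally zero.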

  11. @MA Rodger #910:

    No, I hadn't forgotten it. I just don't have a database to calculate it from (that I know of). Neither do I know of a database which estimates below-2000 m heat content. I think I will do my own calculations of the land heat flux, and present all that I have calculated so far (0-2000 m OHC, sea ice melt, continental ice melt, troposphere heat gain/loss) as a total W/m^2 forcing. The truth is that some components, like montane ice melt, are less than the thickness of the line on the graph at these scales (as becomes readily apparent when looking at either my graphs or Hansen et al 2011 Figure 12).


    [JH] What exactly are you trying to accomplish with your calculations? What do they have to do with OP? Perhaps you should consider creating your own website to fully display what you are doing rather than expecting SkS readers to give you feedback on your "works-in-progress" on our comment threads.  

  12. Moderator Response @911.

    The exercise being embarked on is described by Klapper @908 thus: "I have been reviewing Hansen et al. 'Earth's energy imbalance & Implications' (2011) and think it would be a useful exercise to update this paper with the very latest data." I am assuming this exercise addresses specifically Section 9 of that paper, although I fear the selective intent indicated @911 suggests some part of the method employed in Hansen et al (2011) Section 9 is being airbrushed away. There are also some issues with the use of TLT, the use of global land ΔSAT for the land heat uptake, etc. However, while this analysis is not being addressed with the rigour it requires, all the palaver is a tiny bit pointless. The update will, after all, simply show that the net energy imbalance is still overwhelmingly recorded in ΔOHC(0-2000), which weighs in at roughly 1.0 W/m2 (2010-2014) compared with 0.65 W/m2 (1993-2010) & 0.42 W/m2 (2005-2010) in Hansen et al (2011).

  13. @MA Rodger #912:

    "...suggests some part of the method employed in Hansen et al (2011) Section 9 is being airbrushed away..."

    Hansen made an error in Section 9 of Hansen et al 2011. Here's his quote: "The third term is heat gain in the global layer between 2000 and 4000 m for which we use the estimate 0.068 ± 0.061 W/m2 of Purkey and Johnson (2010)."

    The 0.068 W/m2 from Purkey & Johnson is not for the 2000 to 4000 m zone, but the 1000 to 4000 m zone, and only in the Southern Ocean, which means he is double counting some heat (1000 to 2000 m in the Southern Ocean). The Purkey & Johnson paper is clear about this, as is the text elsewhere in Hansen et al 2011. Keep in mind that since the P&J 2010 paper shows negligible abyssal warming in the northern oceans (see Figure 8 in that paper), it's not clear the 1000 to 2000 m overlap in the Southern Ocean is a wash with the "missing" 2000 to 4000 m in the northern oceans.

    As for your last comment, "which weighs in at roughly 1.0W/m2 (2010-2014)": I checked the numbers and the W/m^2 is 0.89. However, keep in mind the CMIP5 model imbalance is still increasing, so the model projected imbalance averaged over 2010 to 2014 is 1.04; plus, sea ice has been gaining from 2010 to 2014, so you have at least one of your factors going negative, albeit one of those with negligible leverage.

    You've stated there are "problems" with using TLT as a calculator for net energy change in the atmosphere but surely it's better than what Hansen did which was to use a metric representing maybe the lowest 5 metres of the atmosphere. TLT and TMT combined would likely be the best and certainly more representative of the atmosphere than any SAT data set which has much poorer spatial/volume coverage.

    As for land heat, I would not use Hansen's method. I think actual data from the boreholes is better than a heat flux model. Using the 2002 paper by Beltrami et al, the heat flux from borehole temperature profiles was 0.039 W/m^2, but that is land only. If you recalculate this on a global TOA basis (and exclude the ice-covered continents, Greenland and Antarctica) you end up with an average heat flux into land of only 0.010 W/m^2, making it another negligible component of the global energy balance. The recent decadal flattening of the surface temperature would indicate land heat flux is likely no higher now than the 1950 to 2000 average, which is the basis of the Beltrami number.
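Re-expressing a land-only flux on a global (TOA) basis is just an area ratio. A sketch using round-number areas (these area values are mine, chosen for illustration, not taken from Klapper's calculation):

```python
EARTH_AREA_M2 = 5.10e14       # total surface area of the Earth
LAND_AREA_M2 = 1.49e14        # total land area
GREENLAND_M2 = 2.2e12
ANTARCTICA_M2 = 1.40e13

# Borehole-derived flux into ice-free land (Beltrami et al. 2002), land-only basis:
land_flux = 0.039             # W/m^2

# Spread that flux over the whole Earth to put it on a TOA basis,
# excluding the ice-covered continents where boreholes don't apply.
ice_free_land = LAND_AREA_M2 - GREENLAND_M2 - ANTARCTICA_M2
global_flux = land_flux * ice_free_land / EARTH_AREA_M2
print(f"{global_flux:.3f} W/m^2")   # -> 0.010 W/m^2
```

With these round numbers the area-weighting reproduces the 0.010 W/m^2 figure quoted in the comment, confirming that land heat uptake is a minor term compared with the ocean.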

    In summary, my point was that the models run too hot. It is true that recently OHC gains come close to the model TOA imbalance, but then that happened back in 2002-2003 also and then the rate of ocean heat gain faltered somewhat. The bottom line is that even now, using very short periods to estimate the imbalance, the projected TOA imbalance of the models is higher than the actual TOA global energy imbalance as best we can calculate them.

    Klapper @913, you are mistaken about Purkey and Johnson.  Specifically, while they mention 0.027 W/m^2 below 4000 meters globally, and 0.068 below 1000 meters "south of the Subantarctic Front of the Antarctic Circumpolar Current" in the abstract, in Table 1 they also mention 0.068 W/m^2 globally for below 2000 meters.  That the two values coincide does not make it a mistake to use the second figure (from Table 1) in estimating OHC change below 2000 meters.

    Indeed, Purkey and Johnson quantify the below-1000 meter Antarctic change in OHC because prior estimates of OHC failed to determine the change in polar waters.  Ergo, arguably, the 0.032 W/m^2 between 1000 and 2000 meters in the Antarctic should also be added.  Indeed, Purkey and Johnson do in fact argue that in their conclusion, estimating the total contribution from Antarctic Bottom Water (AABW) at 0.1 W/m^2 globally averaged.  Therefore Hansen may have underestimated the additional contribution to OHC based on Purkey and Johnson's paper.  Whether he has or not depends on exactly which 0-2000 meter estimate he used, and whether or not it included the 0-2000 meter warming in the Antarctic.

  15. @Tom Curtis #914:

    " table 1 they also mention 0.068 W/m^2 globally for below 2000 meters..."

    If that's the case then Hansen has still made a mistake has he not? He used 0.068W/m^2 for 2000 to 4000, but that number is from 2000 to bottom is it not? In other words he double counted the 0.027W/m^2.

  16. @All #91x:

    Here is a graph of the total inputs. OHC 0-2000 is from NODC, pentadal in the early years and quarterly since 2005, land is from Beltrami et al 2002, > 2000 m OHC is from Purkey & Johnson, ice melt is from PIOMAS/my own model for Antarctica sea ice, Ice sheet melt is from GRACE and earlier some generic Wikipedia numbers, atmosphere is from RSS TLT. I started using 5 year trends, then switched to 6 since that is what Hansen used, but then switched back to 5.

    [Graph: Klapper's TOA imbalance comparison, CMIP5 models vs. observational estimates]

    Five years might still be a bit short. OHC seems susceptible to even quarterly steps in the rolling trend. For example the 5 year trend centered on August of 2012 is 0.89 W/m2, while May of 2012 is 0.80 W/m2. I think this is the effect of ENSO causing wobbles in heat gain.

  17. Klapper @915, that is far from obvious.  Specifically, Purkey and Johnson write:

    "We make an estimate of the total heat gain from recent deep Southern Ocean and global abyssal warming by adding the integral of Qabyssal below 4000 m to the integral of QSouthernOcean from 1000 to 4000 m (Table 1). The 95% confidence interval for Qabyssal is calculated as the square root of the sum of the basin standard errors squared times 2, again using this factor because the DOF exceed 60. The warming below 4000 m is found to contribute 0.027 (±0.009) W m−2. The Southern Ocean between 1000 and 4000 m contributes an additional 0.068 (±0.062) W m−2, for a total of 0.095 (±0.062) W m−2 to the global heat budget (Table 1)."

    That is exactly what Hansen did, and as he was following the recommendation of Purkey and Johnson, prima facie he was correct to do so.  Further, Levitus et al (2009), from which Hansen obtains his 0-2000 meter data, write:

    "We acknowledge that ocean temperature data are sparse in the polar and subpolar regions of the world ocean but we still refer to our OHC estimates as global. We do this because the OHC estimates are volume integrals so that only relatively small contributions are expected from the polar regions to our global estimates." 

    That appears to suggest that like Levitus et al (2005) (of which Levitus09 is an update), Levitus et al (2009) does not include OHC for the Southern Ocean.

    If you want to make the case that the 1000-2000 meters of the Southern Ocean from Purkey and Johnson should not be included, you need to show either by obtaining a definitive statement from Levitus, or by comparing the gridded data, that Levitus et al (2009) included 0-2000 meter data for the Southern Ocean.  Absent that, and given that excluding warming through too sparse sampling is as much of an error as including too much by double dipping, it appears that Hansen has proceeded correctly.

    Finally, even if he has inadvertently "double dipped" the error involved is appreciably less than the error of the total calculation, and less than the error you made in correcting him @913.

  18. @Tom Curtis #917:

    "...and less than the error you made in correcting him @913.."

    I didn't actually correct him at 913; I stated I thought he was double counting, but did not quantify the amount he was double counting by. However, in my graph presented in #916, I use the 0.068 W/m^2 as the total below 2000 metres, as per Purkey & Johnson Table 1, and eliminate the 0.027 W/m^2 for the >4000 m component of heat gain, so my assumption in my graph is that >2000 m OHC gain since 1990 = a constant 0.068 W/m^2.

  19. Klapper,

    I would caution you to slow down with your inputs into this comment thread and consider what you are responding to, rather than bash out the first rebuttal that comes into your head.

    As an example of this: the exchange began @910 with my noting that you are ignoring mountain glaciers, which @911 you indicate you are happy to do, and which @912 I called 'airbrushing away part of the method.' Your response @913 sets out with this in mind but fails to deliver. Instead we get an account of what you see as failings in Hansen's analyses.

    The moderator response @911 suggested it was inappropriate for you to be posting this work-in-progress on this thread. Given these recent responses of yours are proving so inadequate, I am inclined to agree.

    Perhaps then it would be better to forgo direct comment on your analysis and rather ask why you feel any result obtained has any bearing on your claim that "the models run too hot". I ask this because you could well be setting yourself up to support the opposite.

  20. @MA Rodger #919:

    "...with my noting that you are ignoring mountain glaciers..."

    As you pointed out the montane glacier effect was insignificant to the graph I was producing, being 0.005 W/m^2.

    "...The moderator response @911 suggested it was inappropriate for you to be posting such this work-in-progress down this thread..."

    The moderator can delete my posts if they're inappropriate. I'll consider this project done, and won't post my next project, which is to compare TOA net forcing with ENSO, the AMO and the PDO.

    "... I ask this becuse you could well be setting yourself to support the opposite..."

    I like to post numbers and let people decide for themselves. I still think the models run hot after this exercise. The average error over the last 35 years, model TOA forcing versus OHC et al., is +0.27 W/m^2, so the models are definitely warmer over this analysis period.

    In your #912 post you state OHC to 2000 metres weighs in roughly at 1 W/m2, but I disagree. The actual number is 0.89 for a trend centred on Nov 2012 and 0.80 for a trend centered on August 2012. However, I think the issue with your choice of period is that it starts with a strong La Nina and ends with an El Nino (which I intend to investigate further). So is it representative of longer term forcing elements?


    [JH] Please do not use the SkS comment threads as a "blackboard" for your next project. 

  21. @Rodger MA #919:

    There is one point on my graph where the models do seem to make more sense than the empirical data, namely the early 90s, when the models show a sharp TOA forcing reduction due to a 2 W/m^2 spike in upwelling shortwave.

  22. Klapper @920.
    You write "I like to post numbers and let people decide for themselves", which is rather generous of you. But you do also present an opinion, writing "I still think the models run hot after this exercise." This implies that in light of your "exercise" you see nothing to contradict your contention that "models run hot." Indeed, you go on to argue that your "exercise" lends support to your 'hot model' thesis: "The average error over the last 35 years, model TOA forcing versus OHC et al., is +0.27 W/m^2, so the models are definitely warmer over this analysis period." (My emphasis)

    But your 'hot model' thesis is surely to do with the modelled global temperatures being too hot because their modelled temperature rises too fast. Wouldn't such a 'hot model' run with a reduced TOA energy imbalance?

  23. @MA Rodger #922:

    "Wouldn't such a 'hot model' run with a reduced TOA energy imbalance?"

    If the TOA energy balance were zero (incoming = outgoing), we would for that instant be neither gaining nor losing energy, so although there may be temperature change through heat transfer from one global heat sink to another, total global heat stays the same. An AOGCM with a positive TOA imbalance (incoming > outgoing) is gaining heat, whether in the atmosphere, in the ocean, or through ice conversion to water (or all three), and over longer periods you would expect to see a temperature rise in both the ocean and the atmosphere. The higher the positive imbalance, the faster the temperature rise, although the SAT rise obviously depends on the proportion of heat sinking into the ocean versus the atmosphere.

    In fact this is what we see in SAT. Over the last 35 years the ensemble modelled temperature rise is 0.25C/decade, while SAT only rose about 0.15 to 0.18C/decade depending on SAT dataset. So both the TOA spread from model to empirical and the SAT warming rate spread agree the models look to be running too hot.

  24. Klapper @923.

    You conclude "So both the TOA spread from model to empirical and the SAT warming rate spread agree the models look to be running too hot." but your inclusion of "TOA spread" in this statement is entirely unsupported.

    Imagine a world and a model-of-that-world with the model running hot. We impose forcings of equal size onto both. The model SAT rises faster because it runs 'hot'. But all things being otherwise equal, that would reduce the TOA imbalance as a higher SAT leaches more energy back into space. Thus my comment questioning whether high TOA could not be seen as a symptom of a 'cool' model.

    But in climatology, things are never 'otherwise equal'. A 'hot' model with increased SAT presumably results in higher 'forcing levels' due to higher positive feedback. Now if the world and the 'hot' model had an SAT that were equal, their conforming increase in SAT would have equalised different proportions of the initial forcing, as the level of feedback is different. In the model, because of the larger feedbacks, this equalisation will be less: there will be more of the forcing remaining, hence more TOA imbalance. So we can propose that the TOA imbalance would differ because proportionately less forcing would be equalised in the model. The difference would be most dramatic in a 'well-equalised' situation, where most of the forcing has been equalised. But let us assume the forcing is roughly half equalised in the world, with the 'hot' model showing 33% more TOA imbalance, 33% less forcing equalised, but the same SAT. This implies the ECS in the model is 50% too high.

    But if in the model both TOA imbalance were higher and SAT were higher (this last the Klapper definition of a 'hot model' and exemplified by the CMIP5 projections 2006-2014) , ECS would have to be now greatly different to balance the books. @923, a value is suggested for the model ΔSAT = 150% of world values yielding a model:world ECS of 2.25:1. But this does not actually relate to the post-2006 period. To compensate for both TOA imbalance and the differences in SAT we are therefore talking, what, ECS proportionately 3:1, 4:1, more.cmip5
    Given the CMIP5 models perform well prior to 2006, is it at all likely that the ECS in the models is so wrong? So how can we simply attribute the post-2006 performance to their being 'hot models'?
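    The back-of-envelope reasoning in the comments above can be checked against the standard zero-dimensional energy balance, N = F - λ·ΔT, with ECS = F2x/λ. Here is a minimal sketch; all numbers are illustrative assumptions, not observations or model output:

```python
# Zero-dimensional energy balance: N = F - lam * dT, where N is the TOA
# imbalance (W/m^2), F the imposed forcing, lam the feedback parameter,
# and dT the realised surface warming. ECS = F_2X / lam is the warming
# once N returns to zero after a CO2 doubling.

F_2X = 3.7  # approximate forcing from doubled CO2, W/m^2

def toa_imbalance(forcing, dT, ecs):
    """Remaining TOA imbalance for a given forcing, warming and ECS."""
    lam = F_2X / ecs  # feedback parameter, W/m^2 per K
    return forcing - lam * dT

# Assume a 2 W/m^2 forcing, half 'equalised' in the real world (ECS = 3 K),
# i.e. the warming is chosen to leave a 1 W/m^2 imbalance.
ecs_world = 3.0
dT_world = 1.0 / (F_2X / ecs_world)
print(toa_imbalance(2.0, dT_world, ecs_world))  # ~1.0 W/m^2

# A 'hot' model with 50% higher ECS but the same SAT retains more imbalance:
print(toa_imbalance(2.0, dT_world, 4.5))  # ~1.33 W/m^2, i.e. ~33% more
```

    This reproduces the 33%/50% figures quoted above: with the same forcing and the same realised SAT, the higher-ECS model equalises a smaller fraction of the forcing and so shows the larger TOA imbalance.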

  25. This is what a regular contributor to the Telegraph web site has to say.

    " . . . the IPCC pseudoscience is based upon 54 years' teaching of incorrect physics.
    It all came from Carl Sagan who made 4 basic mistakes but was supported in the Cold War Space Race.
    Atmospheric Science then invented spurious physics which it uses to justify the Perpetual motion machine in the models but there is zero experimental proof.
    I come from engineering where heat transfer has 90 years of experimental and theoretical proof. it's easy to see where Hansen et al went wrong but they were mentored by Sagan."

    Did Sagan make 4 basic mistakes?




    [RH] Your post should have been deleted as a "link only" post, which is against the SkS posting policy. Given that others have already jumped in and explained the materials, we'll let this one stand. 

    If you wish to continue posting please review the comments policy.

  27. Postkey:

    In 1896 Arrhenius worked through the basic calculations for AGW and closely estimated the amount of warming we would see by today.  He also predicted it would warm more at night than day, more in winter than summer, more in the Northern Hemisphere, more in the Arctic and more over land than over sea.  How could Sagan have made a mistake that affected Arrhenius's work 60 years earlier?

    If Sagan had actually made a mistake, the contributor is in effect arguing that the scientists who work for Exxon, BP, and Shell are too stupid to recognize that mistake and correct it.  Obviously, these companies have scientists who review all the AGW data and correct errors that hurt their story.  Do you really think that Exxon cannot find any scientists who could expose a simple mistake?  The IPCC report is approved by all the countries in the world.  The summary is approved line by line.  Exxon, Saudi Arabia and other interested fossil fuel companies have lawyers there for the entire discussion.

  28. Postkey @925

    The section you quote is essentially a conspiracy theory: that errors in physics originally made by Sagan have been continually suppressed.

    It is worth noting that there have, in the past, been a number of scientific papers published that have challenged or questioned the accepted model of climate change; these include papers by Richard Lindzen, Christy and Spencer, Murray Salby, and Gerlich and Tscheuschner. This provides us with evidence of the absence of a conspiracy: the scientific community is perfectly willing to publish a variety of views on climate change, even if further examination shows these papers to be wrong, unlikely or implausible.

    Thus, had Sagan actually made "4 basic mistakes", and had these been hushed up "in the Cold War Space race", there is no way that these mistakes would not have found their way into the scientific literature by today. The fame of any scientist able to disprove today's consensus on climate change would be immense (if only for the amount of physics they would actually have to overturn in order to do so).

    A brief viewing of the on-line biographies of Carl Sagan and James Hansen shows almost no intersection; Sagan was an advisor to the NASA space program in the 1970s, whilst Hansen was employed at GISS (which is a division of NASA, but not the one Sagan was advising).

    It is worth noting that Sagan is perhaps an easy target; as a science communicator and educator it is often necessary to simplify the science (it is for that reason, for example, that grossly inaccurate "pictures" of the atom persist today for educational purposes). Thus his public pronouncements may have been less rigorous. But as Michael Sweet mentions above, the development of climate science does not spring from Sagan.

  29. Postkey, how shall I say this? The quote you gave is a pile of idiotic nonsense. The radiative physics of the greenhouse effect do not violate the laws of thermodynamics. They predict how much infrared radiation must reach the surface, and that can be measured. It has been measured and is the subject of numerous science papers. It is measured in real time at a variety of locations. It has been measured in the Arctic during the winter, which precludes any IR source other than the atmosphere. The person you quote may not be an engineer, because they normally know better. Saying one "comes from engineering" is rather vague. 

    There is no perpetual motion machine in the atmosphere, those who try to argue such idiocy do not understand the physics. SkS has entire threads about the subject. Search the site.

    The conspiracy theory mentioning Sagan's name is complete bollocks, as he never had anything to do with the climate part of NASA. Without being more specific, it is not possible to answer about Sagan's alleged "mistakes." Considering how incompetent that Telegraph person seems to be, the "mistakes" accusations are likely based on incomprehension of physics. I would caution you that trying to engage someone like this will likely be a complete waste of time. You can see indications of that through the 2nd law thread on SkS.

  30. Thanks, for all of your replies.

  31. These are the 4 basic 'mistakes'!

    1. To assume surface exitance, a potential energy flux in a vacuum to a radiation sink at 0 deg K, is a real energy flux.

    2. To misuse Mie theory to claim that clouds forward scatter when in reality the light becomes diffused.

    3. To claim black body surface IR causes a planet's Lapse Rate temperature gradient, when it is caused by gravity.

    4. To completely cock up aerosol optical physics by assuming van der Hulst's lumped parameterisation indicated a real physical process. In reality, there are two optical processes and the sign of the Aerosol Indirect Effect is reversed. It is in fact the real AGW and explains Milankovitch amplification at the end of ice ages.

    Point 4 has messed up Astrophysics as well.

  32. Postkey @931, you should always clearly indicate when words are not yours by the use of quotation marks.  In particular, it is very bad form to quote a block of text from somebody else (as you did from point 1 onwards) without indicating it comes from somebody else, and providing the source in a convenient manner (such as a link).  For everybody else: from point 1 onwards, Postkey is quoting Alec M from the discussion he previously linked to.

    With regard to Alec M's allegations, although Carl Sagan did a lot of work on Venus' climate, Mars' climate, the climate of the early Earth, and the potential effect of volcanism and nuclear weapons on Earth's climate, he did not publish significantly on the greenhouse effect on Earth.  The fundamental theory of the greenhouse effect as currently understood was worked out by Manabe and Strickler in 1964.  As can be seen in Fig 1 of Manabe and Strickler, they clearly distinguish between lapse rates induced by radiation and those induced by gravity (that being the point of the paper) - a fundamental feature of all climate models since.  So Alec M's "mistake 3" is pure bunk.  By claiming it as a mistake he demonstrates either complete dishonesty or complete ignorance of the history of climate physics.

    With regard to "mistake 2", one of the features of climate models is that introducing a diffusing element, such as SO2 or clouds, will cool the region below the element and increase temperatures above it.  The increase in temperature above the diffusive layer would be impossible if the clouds were treated as forward scattering only.  So again, Alec M is revealed as either a liar or completely uninformed.

    The surface exitance (aka black body radiation) was and is measured in the real world with instruments that are very substantially warmer than absolute zero.  Initially it was measured as the radiation emitted from cavities with instruments that were at or near room temperature.  As it was measured with such warm instruments, and the fundamental formulas worked out from such measurements, it is patently false that the surface exitance is a "potential energy flux in a vacuum to a radiation sink at 0 deg K".  Indeed, the only thing a radiation sink of 0 deg K would introduce would be a complete absence of external radiation, so that the net radiation equals the surface exitance.  As climate models account for downwelling radiation at the surface in addition to upwelling radiation, no mistake is being made and Alec M is again revealed as a fraud.

    With regard to his fourth point, I do not know enough to comment in detail.  Given that, however, the name gives it away.  A parametrization is a formula used as an approximation of real physical processes which are too small for the resolution of the model.  As such it may lump together a number of physical processes, and no assumption is made that it represents a single real process.  Parametrizations are examined in great detail for accuracy in the scientific literature.  So neither Sagan nor any other climate scientist will have made the mistake of assuming a parametrization is a real physical process.  More importantly, unlike Alec M's unreferenced, unexplained claim, the parametrization he rejects has a long history of theoretical and empirical justification.

    Alec M claims "My PhD was in Applied Physics and I was top of year in a World Top 10 Institution."  If he had done any PhD not simply purchased on the internet, he would know scientists are expected to back their claims with published research.  He would also know they are expected to properly cite the opinions of those they attempt to use as authorities, or to rebut.  His chosen method of "publishing" in comments at the Telegraph, without any citations, links or other means to support his claims, shows his opinions are based on rejecting scientific standards.  They are in fact a tacit acknowledgement that if his opinions were examined with the same scientific rigour with which Sagan examined his own, they would fail the test.  Knowing he will be unable to convince scientists, he instead attempts to convince the scientifically uninformed.  His only use of science in so doing is to deploy obscure scientific terms to give credence to his unsupported claims.  Until such time as he both shows the computer code from GCMs which purportedly makes the mistakes he claims, and further shows the empirical evidence that it is a mistake, the proper response to such clowns is laughter.

  33. Michael, Postkey,

    Yes, in my reply @928 I suggested that to disprove climate science you would need to overturn or reject a large proportion of well-established physics; it seems Alec M has had to resort to trying to do just that.

    "The Climate Alchemists from 1989 have imagined a spurious bidirectional photon diffusion argument for which there has never been experimental proof."

    This comment (somewhat idiosyncratically phrased) is incorrect: photon emission is omni-directional in gases (due to the fact that molecules in gases are, by the very nature of gases, free to rotate) and this is sufficient to account for downwelling radiation - which has itself been measured. However, "experimental proof" can also be gained by looking at a domestic light bulb. It would seem that when it comes to evidence, AlecM is confusing "looking but not finding" with "not bothering to look".

  34. The commenter AlecM over at the Torygraph is certainly a blowhard and well out of control. The full post he wrote, which Phil @935 quotes from, bears being reproduced in full, as we get the names of the physicists he blames for his perversion of science. And we also get the name of the scientist he rates as the US top cloud physicist. If GL Stephens did uncover a fatal flaw in climatology in 2010 and had been unable to publish it, it is not as though he has had problems publishing other works since then.

    Houghton's figure 2.5 plots black body radiation against atmospheric temperature/height (unfortunately the actual page is missing from this google preview) but it's probably the IR-induced convection that the blowhard is saying ensures the GH effect is tiny.

    And an optical pyrometer? Isn't that a thermometer?

    I have measured radiative heat transfer in process plant, made optical pyrometers, done the theory ad nauseum in the days when we used slide rules and Carslaw and Jaeger.

    What we now have is a grotesque parody of science based on the 1989 mistake by Goody and Yung where they arbitrarily assumed Schwarzschild's 'two-stream approximation' could translate to bidirectional photon diffusion, forgetting he knew he was dealing with Irradiances.

    Houghton, taught standard physics, knew this correct physics and in Fig 2.5 of 1977 'Physics of Atmospheres' showed why there can be no Enhanced GHE. When he co-founded the IPCC he supported the EGHE.

    We now have climate models based on 40% more SW thermalisation than reality, the only way the Hansen group could get the numbers to add up in the incorrect physics. The other part of the scam, to use ~double real low level cloud optical depth as a hind-casting parameter to get the right 'positive feedback', was discovered in 2010 by G L Stephens. He hasn't been able to publish this.

    PS The Alchemists don't know what an optical pyrometer measures. I do. As for the computer code, I have examined GISS-E and it's not the problem; that is bad physics.


    [JH] Please resist the tempatation to repost the pseudo-science poppycock being posted on the comment threads of other websites. 

  35. MA Rodger @934, Houghton's Fig 2.5 is shown in the google preview of the 3rd edition of his work. I assume it is the same as that shown in the first edition, given that figs 2.4 and 2.6 are unchanged between the two editions. In addition to plotting the radiative equilibrium temperature, Houghton also plots the convective equilibrium temperature (or an approximation with a lapse rate of -6 C per km). If Alec M thinks that plot "showed why there can be no Enhanced GHE", he merely demonstrates he has no understanding of atmospheric physics (as if we needed further proof of that). Consulting the 3rd edition, published in 2002, i.e., one year after the Third Assessment Report, also demonstrates neatly that Houghton saw no contradiction between the physics he continued to teach essentially unchanged after he joined the IPCC and the physics he had taught beforehand.

    The preview of the first edition is also interesting in that it contains, on page 10, Houghton's explanation of why climate scientists often (though not in GCMs) treat IR radiation from an atmospheric layer as consisting of an upward and a downward flux, rather than as radiating in all directions. The reasoning is simple. As Alec M himself points out, "Standard Physics predicts net unidirectional radiant flux from the vector sum of Irradiances at a plane". But if you have radiation from a sphere with equal temperatures at all points, then any given point above the surface of the sphere will receive equal radiative flux coming in at φ degrees from all downward directions, for all φ. The flux at a given φ will come equally from a circle on the sphere with its center directly beneath the point. If you sum the vectors of all those fluxes, only the vertical components of those vectors will not cancel out. As this applies to all φ, it follows that the integral of all the fluxes from the surface at any point above the surface consists of a net flux with a vertical component only. Similar reasoning applies for any point below the surface (assuming it is a transparent region). Because of this, the radiation from a given level of the atmosphere can be treated, for simplicity, as consisting of only vertical components.
    This simplifying assumption does not hold if large temperature differences exist between different regions of the surface. Temperatures are not always uniform in the atmosphere, but they are often approximately so, making the simplifying assumption a good approximation. Despite that, it is not used in GCMs and so is not a necessary assumption for the theory of the greenhouse effect. (Note: any time we express the black body radiation in terms of W/m^2 rather than W/m^2/steradian, we are making this simplifying assumption.)
    So, it turns out that one of the biggest problems Alec M has with climate science is a simplifying assumption that is not necessary for the science, is explained in a book he purports to have read, and, as it happens, follows reasoning first developed (in relation to gravity) by that well known alchemist, Isaac Newton.

    Finally, as a note for Postkey, your most recent comment has been deleted by the moderators. That may only be because you are in effect allowing Alec M to comment here by proxy whilst ignoring the SkS comments policy, although I can think of a number of comments policies you are also violating by just posting full quotes. If you want help understanding where Alec M is in error, quote only the relevant text, explain what you do not understand about the quoted material, and make sure you post on the appropriate thread. The last may take a bit of reading to find the appropriate thread, but that same reading may well answer your question. I and several other commenters here are always glad to help people who are seeking understanding, but we have no interest in carrying on a discussion by proxy with a conspiracy theorist and pseudoscientist such as Alec M.


    [JH] Postkey would do well to follow your advice. If he does not, his/her future posts are likely to be deleted.

  36. Wow. This AlecM guy is a hoot! I love this comment the best...

    "PS when I used the old term Emittance instead of Exitance, Wikipedia was altered to remove Emittance! This showed the disinformation process in action. We are being conned!"

    It's illuminating in terms of his state of mind.


    [JH] Further discussion of comments posted by AlecM on another website will be deleted for being "off-topic".  

  37. There appears to be no acknowledgement here of the difficulties raised by Edward N. Lorenz of MIT. For example, from a 2011 Royal Society paper on uncertainty in weather and climate prediction: “The richness of the El Nino behaviour, decade by decade and century by century, testifies to the fundamentally chaotic nature of the system that we are attempting to predict. It challenges the way in which we evaluate models and emphasizes the importance of continuing to focus on observing and understanding processes and phenomena in the climate system.”


    [TD]  Enter the word chaos in the Search field at the top left of this page.  Also read The Difference Between Weather and Climate.  Note that many posts have Basic, Intermediate, and Advanced tabbed panes.

  38. NanooGeek,

    Can you please link the paper you are citing?  A quick Google search for Edward N. Lorenz of MIT indicates that he died in 2008.  It is very unlikely that he published your quote in 2011.  His last paper was published in 2008 and I see nothing in his CV that resembles your citation.  His CV shows nothing published by the Royal Society after 1990.

  39. michael sweet @938.

    It seems the quote comes from Slingo & Palmer (2011), 'Uncertainty in weather and climate prediction', which addresses the legacy of E. N. Lorenz's work.

  40. The paper seems to reinforce what modellers already say: "models have no skill at decadal-level prediction". While models appear to capture ENSO behaviours, there is no way they can predict it. If you compare models to observations over short time frames (<30 y), they won't match well. However, climate is about the long-term averages of weather, and at that the models do quite well.
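    The point about long-term averages can be illustrated with a toy series. The warming trend (0.01 °C/yr) and noise level here are arbitrary assumptions chosen for the sketch, not measured values:

```python
import random

# A small warming trend buried in interannual noise is hard to see year
# to year, but emerges clearly in a 30-year running mean.
random.seed(0)  # deterministic noise for reproducibility
trend = 0.01    # assumed underlying warming, deg C per year
annual = [trend * (y - 1900) + random.gauss(0, 0.15)
          for y in range(1900, 2021)]

def running_mean(series, window=30):
    """Trailing running mean; output is shorter by window - 1 points."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smoothed = running_mean(annual)
# Adjacent years can easily move 0.3 C against the trend, but the 30-year
# means recover roughly trend * 91 ~ 0.9 C of warming across the record.
print(smoothed[-1] - smoothed[0])
```

    Any single decade of `annual` can show a flat or even negative fit, yet the smoothed series tracks the imposed trend closely, which is why sub-30-year model/observation mismatches say little about model skill.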

  41. "While models appear to capture ENSO behaviors, there is no way they can predict it. "

    We may be getting close to doing just that at the Azimuth Project forum —

    Others are making progress as well [1].

    [1] H. Astudillo, R. Abarca-del-Rio, and F. Borotto, Long-term non-linear predictability of ENSO events over the 20th century, arXiv preprint arXiv:1506.04066, 2015.

  42. I'm looking for a good graphic showing the surface temperature models with the latest observed temperatures added. Anyone know of one?

  43. dvaytw: Climate Lab Book has a comparison that is updated frequently.

  44. "Climate Lab Book has a comparison that is updated frequently."

    Thank you, Tom.  I would like to ask a question about this.  A guy is giving me crap that:

    "So when you post a picture from AR5 which was published in 2013, the question people may ask, is this predictions vs observed from the original 1990 predictions, or is it predictions from 5 minutes ago which were modified and have had the goal posts moved?"

    On the graph, it shows a cut-off between historical and RCP's at 2005 (I assume "historical" means post-predictions?)  However, I looked up CMIP5 and their site says

    "February 2011: First model output is expected to be available for analysis".  

    This is confusing to me.  So these projections are from around 2011?  Why does the chart say 2005?  And how do these projections compare with earlier models?  Also, the guy in the argument is claiming:

    "The IPCC has revised its predictions and on each occasion it was down from what was previously predicted."

    Far as I can tell, this isn't true... but I can't find a nice graph with all five ARs' projections compared... best I can come up with is the first four (and that one clearly shows SAR as lower than the other three, so already he's wrong on that).  

    Any help here?


  45. Others can probably give you a more detailed answer, but models predict the outcomes from given forcings. Predicting future forcings is uncertain, so it is hardly unreasonable to update a model run done in 2005 with the actual forcings to 2015 to see how it fared. That is a very different thing from tuning a model to reproduce a particular time series, which is of course of little value.

    When doing obs/model comparisons, the interesting question is how well the model performed given the actual forcings, rather than how well researchers predicted when volcanoes would erupt or how much CO2 humans would produce.

  46. dvaytw, I'll expand scaddenp's answer: Models are fed actual ("historical") values of anthropogenic and natural forcings up through some date that the modelers decide has reliable forcing data. The actual running of the models can be years after that cutoff date. The vertical line you see in model projection graphs demarcates that cutoff date. For dates beyond that cutoff, the modelers feed the models estimates of future forcings. Although those are "predictions" of those future forcings, the modelers rarely are very confident of those "predictions," because those modelers are not in the business of predicting forcings. Indeed, those models themselves are not predicting forcings; these models take forcings as inputs.

    An ensemble of model runs such as CMIP5 generally uses the same forcings in all the model runs. See, for example, the CMIP5 instructions to the modelers. Differences across model run outputs therefore are due to different constructions of the different models, and tweaks across runs within the same model. (CMIP5 has more than one run of each of the models.) The goal is replication in the sense of seeing whether the fundamental characteristics of the outputs are robust to what should be minor differences in approaches. See AR5 WG1, Chapter 11, Box 11.1 (pp. 959-961) for more explanation. See Figure 11.25 (p. 1011) for detailed graphs of only the CMIP5 projections.

    Modelers almost never rerun old models with new actual (historical) forcing data, because too much time, money, and labor are required to run the models. Instead they run their latest, presumably improved, version of their model. But several authors have made statistical adjustments to model results to approximate the effect of rerunning those models with actual forcings.

    Dana wrote a post with separate sections for the different reports' projections, but it is three years old so does not show the recent upswing in temperature.

    I know there are graphs combining all the IPCC reports' different projections, but I can't find one at this moment. Somebody else must know where one is.

  47. Ed Hawkins has posted a good article on how choice of baseline matters, including a neat animation.

  48. Hi,

    I was wondering if anyone could help me here. I've been inundated by this chart and others like it from my skeptic friends. It compares computer models to observed temperature only using UAH and RSS. Obviously it's cherry-picking since there are other temperature sets but does anyone have a chart similar to this that shows all the major data sets?

    Models vs. Reality


    [TD] Resized image.

  49. spunkinator99: Among other problems with that graph, it is baselined improperly so that the UAH and RSS lines begin near the model mean and diverge over time. Spencer used the same deceptive tactic in later constructing a graph of 90 model runs, as Sou explains at Hotwhopper. Tom Curtis pointed out why 1983 was such an obvious choice for Spencer's distortion.

    Ed Hawkins at Climate Lab Book updates his comparison graph frequently. John Abraham's recent article's graph is bigger and so easier to read, and shows the earlier (CMIP3) model runs as well as CMIP5.

    None of those shows the correct model lines, because those model lines were for surface air temperature, whereas observations are of sea surface temperature where the ocean is not ice covered. The correct model lines are shown in an SkS post.
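    The baselining trick described above is easy to reproduce with synthetic numbers. In this sketch everything is invented for illustration (a shared trend plus a fixed noise pattern): pinning two anomaly series to a single year where the 'observations' ran high manufactures a divergence that a proper multi-decade reference period does not show.

```python
# Synthetic model and 'observed' anomaly series sharing the same trend;
# the observations carry a fixed +/-0.1 noise pattern that is high in year 0.
model = [0.02 * t for t in range(40)]
obs   = [0.02 * t + (0.1 if t % 4 == 0 else -0.1) for t in range(40)]

def rebaseline(series, start, end):
    """Express a series as anomalies from its start..end mean."""
    ref = sum(series[start:end]) / (end - start)
    return [x - ref for x in series]

# Proper: a common 20-year reference period for both series.
m_ref, o_ref = rebaseline(model, 0, 20), rebaseline(obs, 0, 20)

# Improper: pin both series to a single year where obs happened to be high.
m_pin = [x - model[0] for x in model]
o_pin = [x - obs[0] for x in obs]

# Pinning to the high year pushes the whole obs series down relative to the
# model, quadrupling the apparent end-of-record gap in this toy example.
print(round(o_ref[-1] - m_ref[-1], 2), round(o_pin[-1] - m_pin[-1], 2))
```

    With the common reference period the final-year gap is -0.05; pinned to the high year it becomes -0.2, even though the two series share the same underlying trend.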

  50. spunkinator99, see also the post countering the myth that satellites show no warming.


© Copyright 2017 John Cook