
Recent Comments


Comments 47551 to 47600:

  1. Water vapor is the most powerful greenhouse gas

    Jose_X - Short answer, yes. The sum of equilibrium forcings and temperature changes for three successive 2x CO2 steps will equal that for a single 8x CO2 step. Or for any other subdivision.

    What is being changed is the total emissivity of the atmosphere, which by the Stefan-Boltzmann law and the amount of incoming solar energy sets the climate temperature.

    The only possible differences would be if 2^3*concentration did not equal 8*concentration (mathematical nonsense), or if the temporal evolution of feedbacks differed with increment size (at equilibrium, there should be no difference), or passing some hysteresis point (say, driving into an Icehouse Earth state that requires a huge amount of forcing change to switch out of - which would require a forcing overshoot and reversal). So no, there should be no differences whatsoever in equilibrium total forcing, or in equilibrium temperature, dependent on the path to that increase.
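    A quick numeric check of this additivity (a sketch: it uses the standard simplified CO2 forcing fit, dF = 5.35*ln(C/C0) from Myhre et al 1998, which is an assumption here rather than anything quoted in the thread):

```python
import math

def delta_f(c_new, c_old, a=5.35):
    """Simplified CO2 radiative forcing change in W/m^2 (Myhre et al 1998 fit)."""
    return a * math.log(c_new / c_old)

c0 = 280.0  # ppmv; any starting concentration works

# Three successive doublings, each starting from the previous end point...
three_steps = sum(delta_f(c0 * 2 ** (k + 1), c0 * 2 ** k) for k in range(3))

# ...versus a single 8x jump.
one_step = delta_f(8 * c0, c0)

print(three_steps, one_step)  # both ~11.13 W/m^2: the subdivision drops out
```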

  2. michael sweet at 23:51 PM on 15 March 2013
    Watts Interview – Denial and Reality Mix like Oil and Water

    Dana,

    You need to take much more personal credit for Watts claiming to be  a "lukewarmer".  I define "Lukewarmer" as a new name deniers call themselves because everyone knows their "skeptic" arguments have been shown to be bunk.  They think that if they put on a new hat they can go on as they always have.  SkS has been so successful in countering their false claims that Watts no longer wants to be associated with his own legacy!  

    Keep up the good work!  Don't let them get away with putting on a new hat, they are still just deniers.

  3. Watts Interview – Denial and Reality Mix like Oil and Water

    Like Jose, I like the graph in DS#10.  I followed the links, but am curious about a couple of things.

    Since it represents a trend, what are the respective confidence limits?  Are they (the three trends) close as far as confidence limits go?

    Since the graph has been updated since 2007 when it was first done, have the trend lines been recalculated, or have the lines just been extended?

    Was this paper peer reviewed?  I am assuming it is, but can't find it.

    Are there enough data points to say there is a trend?  On a different thread, I was told that 16 years was insufficient to generate a trend, but here we have 45 years, with 6 years removed for volcanic activity, leaving 39 years to generate 3 different trends?

  4. Watts Interview – Denial and Reality Mix like Oil and Water

    To be clear, I work for a 400+ employee engineering consultancy company in the Netherlands and my function title is Sr. Specialist in Energy and Sustainability. I have 10 years work experience, for what it's worth. My conclusion is that solar and wind energy will grow, but cannot by themselves solve the GHG emission problem. That problem can only be solved by drastic cuts in energy usage (= lifestyle change = not a credible solution pathway) OR a dramatic shift to nuclear power (entirely feasible and sustainable long term in all respects). In my humble opinion, if SS were to promote this view, then SS would claim the high ground of a science-based position on sustainable energy systems. If not, then you have opened this site up to unnecessary criticism, which would weaken your cause and mine.

    Best regards,

    Joris

  5. Watts Interview – Denial and Reality Mix like Oil and Water

    In this interesting article, it is stated that renewable energy can substitute for fossil fuels and doesn't even need fossil-fueled backup. There is then a link to a treatment on this site of the IPCC report on the potential of renewable energy. However, the IPCC report - while full of interesting information - does not at all inspire confidence that renewable energy is able to replace fossil fuels. The IPCC report in fact states in so many words that renewable energy sources will *not* likely reduce GHG emissions as much as is necessary. The claim of 'almost 80% renewables' rests on no more than a single outlier report by Greenpeace, which is itself deeply unsatisfying and superficial.

    I love this website and consult it frequently as a valuable resource for understanding why and how climate change deniers are wrong. However, the treatment on this site of renewable energy and the challenge of moving to them for 100% of our energy supply is very, very poor indeed, I'm sorry to say. I urge the website owner to overhaul that part of the site thoroughly by noting (for example) very carefully the serious problems with the content of the IPCC renewable energy report, as detailed comprehensively by Ted Trainer here:
    http://bravenewclimate.com/2011/08/09/ipcc-renewables-critique/

    Another option would be for this site to refrain from tackling the question of sustainable future energy systems altogether, which is obviously not its speciality. As it stands, the treatment of energy systems on this site damages the reputation of SS as a credible source, which I lament. Hopefully, it will be understood that this message is constructive criticism.

    Best regards,

    Joris

  6. AndrewDoddsUk at 21:57 PM on 15 March 2013
    Watts Interview – Denial and Reality Mix like Oil and Water

    WheelsOC -

    Years of what could be termed 'discussions' with Creationists would lead me to refine #11 to 'when completely and utterly debunked, leave the argument for a while, waiting until you hope people have forgotten, then bring it back'.

    This goes past intellectual bankruptcy into the concept of negative credit scores.


  7. Watts Interview – Denial and Reality Mix like Oil and Water

    Jose @8

    "I really like the first graph at DS#10."

    In this case, credit goes to John Nielsen-Gammon, Texas State Climatologist, who first used this kind of analysis here and here.

  8. Shakun et al. Clarify the CO2-Temperature Lag

    Thanks for your answer Tom. I'm impressed with the quick response. What you have written is a bit beyond me, but it looks like a good reply. I suspected the Shaviv objection was false. A lesson from this: it seems refutation of sceptical arguments can become a very complex business. This is the first time I've been out of my depth on the topic.

  9. Shakun et al. Clarify the CO2-Temperature Lag

    OneHappy @152, Shakun et al use 30 (37%) proxies from the SH, and 51 (63%) proxies from the NH.  Because of the method used by Shakun et al, that does mean the global reconstruction is weighted in favour of NH temperatures.  Further, in the SH, 13 (43%) proxies are extra-tropical, while 17 (57%) are tropical; whereas in the NH, 24 (47%) are tropical and 27 (53%) extra-tropical.  As tropical areas cooled less than polar areas, that difference in weighting also means NH temperatures run warm (show less temperature difference between glacial and interglacial than do the SH temperatures).  In fact, the proxies are predominantly (59 out of 81) from a band from 40 degrees north to 10 degrees south, a band that saw minimal temperature change relative to other parts of the planet.  That may well have led to an underestimate of global temperature differences between glacial and interglacial.

    How very odd that Shaviv did not comment on these other distortions, especially given the importance of the latter to his discussion of climate sensitivity.

    The fact is that with a limited number of unevenly distributed proxies, no method will prevent some distortion.  Focusing on just one of these (NH vs SH) is not good science; it is simply (at best) a failure to recognize the issues involved.

    That being said, the oddest thing is that Shaviv does not show a comparison between Shakun et al's global temperature reconstruction, and that obtained by averaging the hemispheres.  Perhaps the reason is that when you compare them, you get this:

    Does that look like deliberate manipulation to you?  Or that it would compromise the results?

    What it looks like to me is that Shakun et al took a reasonable approach (area weighting on grid cells) and that the difference between that and alternative approaches was so negligible that it was not worthwhile employing more complicated methods.
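    A toy illustration of why a roughly 37/63 hemispheric mix falls out of any area-weighting scheme with this proxy distribution (the latitudes and anomalies below are random stand-ins, not Shakun et al's data or method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Shakun-style proxies: a latitude and a temperature
# anomaly for each record; 51 NH and 30 SH records, as in the paper.
lats = np.concatenate([rng.uniform(0, 70, 51), rng.uniform(-70, 0, 30)])
temps = rng.normal(0, 1, lats.size)

# Area weighting on a latitude grid: each proxy weighted by cos(latitude).
w = np.cos(np.radians(lats))
global_mean = np.average(temps, weights=w)

# The same number re-expressed as a hemispheric mix.
nh, sh = lats >= 0, lats < 0
nh_mean = np.average(temps[nh], weights=w[nh])
sh_mean = np.average(temps[sh], weights=w[sh])
f_nh = w[nh].sum() / w.sum()  # effective NH fraction, ~0.63 here
print(np.isclose(global_mean, f_nh * nh_mean + (1 - f_nh) * sh_mean))  # True
```

    In other words, the "mixing ratio" Shaviv recovered is just what an uneven proxy distribution produces under honest area weighting; it is not evidence of manipulation.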

  10. Watts Interview – Denial and Reality Mix like Oil and Water

    The other misrepresentations are bad enough, but surely Watts knows by now that "BP" refers to 1950, not 2000 or 2013 or some other date. Sure, I can understand how someone might make that faulty assumption initially, as the naming convention isn't exactly intuitive, but at this point it just seems like it would have to be an intentional error by Watts. There isn't a good excuse for making this mistake repeatedly.

  11. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Tom Curtis 31 >> That the distribution of the ensemble predictions is skewed needs to be conveyed because science does not proceed by only noting the points that help you make a point, and that fact was conveyed both by figure 3 and by the note about the mean.

    OK, so maybe the paper wasn't suggesting that the models tending far from the average be removed (contrary to what I guessed in Jose_X 32).

  12. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin:

    Look at what the article said:

    >> Some models (particularly cccma_cgcm3_1 [1 in Figure 3] and ncar_ccsm3_0 [6 in Figure 3]) predict more overall global surface warming than observed, although most models simulate the observed average global surface warming accurately.  Due to those overpredictions, on average the models simulate a 0.167°C per decade average global surface warming trend from 1961-2010, whereas the observed trend is approximately 0.138 ± 0.028°C per decade, approximately 20% lower.

    As Tom and/or others pointed out:

    a) It appears that some models are off from the others. If we remove those stray cases, the ensemble average gets rather close to the "observed trend". The study highlights that point, perhaps suggesting future improvements to IPCC projections might be in filtering out the models that are far off the mode before calculating the new mean. [Haven't read the paper.]

    b) The error bars you quoted are, I think, from the attempt to pin down the observed trend, because there is inherent error in observation. They aren't the error bars of the models. If the observations were exact, there would be no error bars around that 0.138 value. On the other hand, a particular model ensemble might predict a trend of .167/decade with, say, a 95% confidence envelope through the first 3 decades of +/- .1. So if we had this model and the current observed values with error bars, then we'd have this: the observed might be as high as 0.138+0.028=0.166 while the model predicts that the temp might be as low as 0.167-0.1=0.067. In this case, the actual temp -- best we can observe -- is possibly much higher than the lower bounds of the models.
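    Put as a sketch (the +/- 0.1 model envelope is the hypothetical number from the paragraph above, not a figure from the paper):

```python
# Interval version of the argument above, C/decade.
obs_mean, obs_err = 0.138, 0.028   # observed trend with observational error
mod_mean, mod_err = 0.167, 0.100   # model ensemble trend, hypothetical envelope

obs_lo, obs_hi = obs_mean - obs_err, obs_mean + obs_err  # 0.110 .. 0.166
mod_lo, mod_hi = mod_mean - mod_err, mod_mean + mod_err  # 0.067 .. 0.267

# The intervals overlap broadly, so the observations do not contradict the
# models; comparing obs_hi (0.166) to mod_mean (0.167) alone is misleading.
print(max(obs_lo, mod_lo) <= min(obs_hi, mod_hi))  # True: the ranges overlap
```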

  13. Water vapor is the most powerful greenhouse gas

    KR 160:
    >> To be really clear, I'm speaking of total forcings, not deltas, as the deltas will be dependent on the temperature at the time of the delta - if you are looking at varying temporal evolution without equilibrium, all bets are off.

    To clarify a bit on what you mean by "deltas", would you say that the following is a description of deltas that are off the table if each "slug" was carried out to equilibrium in the runs and if both the slugs and the overlapping large jump avoided feedbacks?

    > So, not only does the RF of a large slug not equal the RF of the sum of a series of small slugs of the same size, but the RF varies depending on whether you are adding, or removing the slug.

    My question was about the nature of RF. Specifically, I am interested in knowing whether doing a 2x CO2, then when equilibrium is reached doing another 2x CO2 from that new point, and then another, gives three values that add up to the same value as a single 8x CO2 calculation (or perhaps for some other GHG or other ratio). If the answer is that the values would differ nontrivially, then I have to wonder about the meaning of even a single RF used in a model (though I'm not worried if the model approximations are linear and reflect reality within a limited domain we would work in), and about whether what we get from the ideal situation of doing a 2x in one shot to calculate RF is meaningful to a planet that is adding CO2 in very much smaller increments (smaller relative to the ability of the planet to keep up, if that is true). A primary goal of mine is to understand the model decently.

  14. Watts Interview – Denial and Reality Mix like Oil and Water

    1: WRT DS#1, looking just at those two quotes, I agree with the thrust of shoyemore #2. I think the two Watts quotes misrepresent scientists, but they are consistent with each other and in painting a picture that Watts is more rational than the scientists. "Hey, Watts is rational and one of us normal people who realizes the climate is complex and obviously wasn't going to behave as predicted by scientists. The scientists are alarmists and don't even realize obvious things. The scientists are backpedalling and can't be trusted." Do you have other quotes by Watts making predictions that are incorrect?

    2: I really like the first graph at DS#10. I hadn't seen that before. Will use it.

    One simple improvement to this graph to me would be to have the frame showing the 3 trend lines display a little longer. Another small improvement might be to make the colored squares larger (or grow in some animated fashion as you transition to the 3 trend lines) so we can more easily verify the 3 trend lines (the skeptic that I am) by more easily seeing the colors and that the points do come from where alleged.

    The impact of the graph might also be improved if juxtaposed with several other graphs: (a) the one showing clustering of El Niño and La Niña, (b) the escalator, and (c) the pic (or vid) showing an animated removal of cyclical effects from the temperature, leaving a mostly rising temp. Putting the above 4 graphs into a little animated story would be nice. (a) suggests cycles are real and logical. The current graph, also showing the cycles are logical due to their clustering and periodicity, then highlights that a move to a higher trend might almost be inevitable. (c) offers an animated backup confirmation that the cycles are the problem. And (b) shows that in the absence of these further explanations, many of us will find it easy to fool ourselves.

    3: DS#7 is a good point but also presents a lose-lose situation in the short term. If the climate scientists are right, you can say they are lucky, that alarmism is having a lucky streak, that they have simple minds and any day now the climate will prove them wrong. OTOH, if they miss by too much, that clearly wouldn't be good either.

    The slog is to try to offer as much evidence as possible, as accessibly as possible (eg, as is the goal and much success of this website), and within that context show that their decent predictions make sense while many contrarians have been far more incorrect, something that would become clear only as time ticks away, unfortunately. A reality is that the skeptical mind without time to become an amateur climate scientist ultimately will wait out nature if they suspect scientists are untrustworthy and likely to exaggerate.

    Another point is that it is important to try to avoid over-shooting on the high end, downplaying error bars, and downplaying our always somewhat limited understanding. People frequently judge success subjectively based on expectations being met or not. We know the story about crying wolf. While individual contrarians will cry wolf and come and go, the scientific community as a whole would be a greater loss if it placed itself in a position to be dismissed. The label "alarmism" effectively paints scientists as full of naivety or even as full of hubris, supposedly over-estimating dangers at every turn with lots of self-assuredness. Plus, if you are a bit conservative and undershoot a little, what are others going to do? Pick the top side and essentially promote action? Hopefully. Or they may undershoot more so and make it clear that the closest predictions were those of the still-conservative scientists. Of course, it's hard to do science in earnest and not try to be as accurate as possible, but the reality is we are biased creatures and we should continue to be careful and guard against actual alarmism.

    The FAR report, even if using models less accurate than what we have now, did well in their summary by stating the following in a section titled "How much confidence do we have in our predictions"

    > Uncertainties in the above climate predictions arise from our imperfect knowledge of

    > future rates of human-made emissions
    > how these will change the atmospheric concentrations of greenhouse gases
    > the response of climate to these changed concentrations

    > ... Secondly, because we do not fully understand the sources and sinks of the greenhouse gases, there are uncertainties in our calculations of future concentrations arising from a given emissions scenario

    > Thirdly, climate models are only as good as our understanding of the processes which they describe, and this is far from perfect

  15. Shakun et al. Clarify the CO2-Temperature Lag

    DSL: no method, little more than a blog post: http://www.sciencebits.com/Shakun_in_Nature

  16. Shakun et al. Clarify the CO2-Temperature Lag

    OneHappy, can you provide a link to Shaviv's methodology?

  17. Shakun et al. Clarify the CO2-Temperature Lag

    On ScienceBits 21 April 2012 Nir Shaviv raised this objection to the Shakun et al. paper: "in order to recover their average "global" temperature, I needed to mix about 37% of their southern hemisphere temperature with 63% of their northern hemisphere temperature." So he is accusing them of deliberately manipulating the data by weighting it to get the result they wanted (ie that globally on average temperature lags CO2). I am interested in two aspects of this objection. 1) Is he correct, and if so how much does this compromise Shakun's results? 2) Assuming Shaviv is correct, would this mean that temperature does not lag CO2 only during the start of a period of warming (but it would during the mid and latter period), or would this apply across the entire period?

  18. Watts Interview – Denial and Reality Mix like Oil and Water

    The argument in talking point #3 is a guest post by Don Easterbrook, who still refuses to acknowledge getting the dates wrong in the ice core data (he isn't even consistent; his graphic indicates that the last known date in the record would be 1905 yet he refers to the end of the ice core data as 1950 in the body of his text).

    Even though Easterbrook is well aware of these things, I posted a comment to that effect. We'll see if it gets through WUWT moderation unmolested.

    All this is evidence that there should be a Denial Strategy #11: never give up on a bad argument no matter how often or thoroughly it's debunked.

  19. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin: a model projection is an estimate, from basic principles (Planck's law, Newton's laws of motion and gravitation, the laws of thermodynamics), known current conditions, and projections of future forcings, of the future changes in the climate system.  Because of limited computer power they must be run at resolutions in which micro-behaviour is not modelled, where micro-behaviour includes such things as tornados and hurricanes.  As a result, such micro-behaviour must be matched to the resolution of the model by parametrization.  Further, there is uncertainty about the exact values of some current conditions.  Each model represents an estimate of the correct parametrization and value of uncertain conditions.  Those estimates are not predicted by theory, and though modellers try to constrain them with observations, they cannot entirely do so.

    The result is that our best prediction from basic physical principles is uncertain.  Each model represents a sample from the range of possible parametrizations given current knowledge, and hence provides a sample from the range of possible predictions from basic physics given our current limitations in computer capacity and knowledge.

    Because of that, our best possible prediction from basic physics is determined by the statistical properties of the ensemble of models.  As such, our best prediction is the mean of the ensemble, with the uncertainty of the prediction being a function of the range of the predictions by individual models.

    If you look at the GM section of figure 3 above, you will see that the mode of the distribution of GM trend predictions is very close to the values observed, but that two models drag the mean away from the mode.  The distribution is skewed.  In that situation I would have thought it was better to quote the median model trend rather than the mean of the trends, but there are certainly other ways to show this data, including (as the authors did) showing the full range of model projections relative to the observed trends.  When you look at that comparison, it becomes obvious that the observations have not falsified the ensemble prediction.  Not even close!

    In that context, you are focusing on a single comparison to the exclusion of the full range of data presented to try and create the impression that there is a very large discrepancy between the ensemble prediction and observations.  In fact, there is only a small discrepancy between ensemble predictions and observations because the observations lie close to the mode (and median) of the individual predictions within the ensemble.  That the distribution of the ensemble predictions is skewed needs to be conveyed because science does not proceed by only noting the points that help you make a point, and that fact was conveyed both by figure 3 and by the note about the mean.

    You, however, faced with a useful discussion of the full issue, have chosen to ignore the majority of the data presented to make a case that is not supported by the full range of data.  It seems to be a specialty of yours.

  20. No alternative to atmospheric CO2 draw-down

    An additional note on reserves: the "possible reserves" include all proven reserves, all probable reserves (defined as reserves having a 50% chance of being commercially recovered with current technology and prices), and all possible reserves (defined as having a 10% chance of being recovered at current technology and prices).  Obviously as technology improves and prices rise, recovery rates will go well above the 50 and 10% figures.  Further, as noted by MA Rodger, these reserves do not include the vast majority of tar sands, oil sands and shale oils, and nor do they include unconventional gas (clathrates and gas recoverable only by fracking or underground gasification).

    The total resource base estimate by the IEA includes all fossil fuels currently estimated to be in the ground, excluding the majority of unconventional oil resources (tar sands etc) and clathrates.  Gas recoverable only by fracking and gas from underground gasification will be included as part of current gas and coal TRB respectively.

    I suspect these distinctions are academic, in any event.  Once we get up towards 3,500 GtC total emissions, Mean Global Surface Temperatures are likely to be 6 degrees above the pre-industrial average out to 10 thousand years from now (peaking somewhere between that and 10 C above the preindustrial).  I do not expect the ability or will to keep on burning fossil fuels will long survive in that sort of climate.

  21. No alternative to atmospheric CO2 draw-down

    MA Rodger @49, thank you for pointing out my error.  As it happens, I made it consistently, ie, at each point where I should have mentioned Pg C, I mentioned Pg CO2.  Consequently the entire post is correct once the substitution for the correct figure is made.

    I should note that the figure of one trillion tonnes of carbon as the achievable lower limit of emissions comes from Allen et al 2009, and certain related papers.  A count of the best estimate of emissions to date is kept at trillionthtonne.org.  They indicate that at current emission rates, the trillionth tonne will be emitted in June, 2041.  Just 28 years!

    With regard to the fossil fuel reserve, 5,000 GtC is approximately the World Energy Council 2010 estimate of possible reserves, which with emissions to date comes to 3,575 GtC.  Possible reserves include reserves which have not been proven, or are uneconomic with current technology and prices, and for which estimates of the likelihood of future recovery are uncertain.  The International Energy Agency 2011 reports a Total Resource Base of fossil fuels which, together with emissions to date, represents cumulative emissions of 16,700 GtC.  Not all of that will be recoverable under any circumstance, but new discoveries are likely to add to it, especially as that figure does not include oil sands, tar sands and shale oil.  If we are determined to exploit every economic fossil fuel resource regardless of consequences, given a few centuries we will, I think, go well beyond the 5,000 GtC estimate used by Archer.  (Figures and sources taken from my spreadsheet.)

  22. Cornelius Breadbasket at 07:39 AM on 15 March 2013
    Watts Interview – Denial and Reality Mix like Oil and Water

    Thank you dana - I'm very pleased that even a layman like me can grasp a little science :)

  23. Pielke Jr and McIntyre Assist Christy's Extreme Weather Obfuscation

    This debate is being reprised at The Conversation with an attack on the Climate Commission's Angry Summer report by the Pielke Jr associated Risk Frontiers group at Macquarie Uni. Similar bait and switch tactics being employed. The CC has issued a statement, Pielke Jr has weighed in.

    http://theconversation.edu.au/weighing-the-toll-of-our-angry-summer-against-climate-change-12793

  24. Watts Interview – Denial and Reality Mix like Oil and Water

    Cornelius @4 - also true.  They are causally related (global warming causes climate change), so the terms are often used interchangeably, but they're not the same thing.

  25. Cornelius Breadbasket at 06:46 AM on 15 March 2013
    Watts Interview – Denial and Reality Mix like Oil and Water

    I've been led to understand that Global Warming and Climate Change are two different things. Global Warming means global temperature increase, which causes Climate Change - a shift in long-term weather patterns.  Watts' 'proponents shifted the term' argument is very easy to deflate when you explain it like this.

  26. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Composer99,

    What I was using is the reality of the facts.

    The observed trend over the period is 0.138 +/- 0.028 degrees C/decade.

    The reported average model trend is 0.167 degrees C/decade.

    The fact is 0.138 + 0.028 = 0.166.

    The fact is 0.166 is less than 0.167.

    All I'm saying is that it is this article that is not very convincing.
  27. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin, since you keep going on about short term trends (the flattish last 10 years), let's see if I understand what you mean.

    Am I correct that, deep down, you reject the idea that this trend is mostly due to a negative/neutral ENSO state and believe it is due to some other part of the climate system? And furthermore, that if we only understood this "other part" of the climate system we would realise AGW isn't the problem that we thought? Is this what you believe?

    Or alternatively, do you believe that ENSO has undergone a fundamental change (something models should have found but haven't) and that it will remain mostly low and temperatures will be stable from now on?

  28. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin:

    Since when is it my responsibility to report to the paper's authors (or, since your claim follows from the OP text rather than from the paper, to Dana) what Tom feels are issues with the way the paper handles the observational datasets?

    What I was taking issue with was not the content of the paper itself, but your comment upthread, which you defended because you weren't "trying to say anything 'statistically speaking'".

    You are questioning the quantified analysis using... what, exactly? Your gut feelings?

    As I said, not very convincing.

  29. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Composer99,

    Have you expressed these statistical concerns to the author?  After all, it was the author who compared an averaged trend with observed trend.  As noted earlier by Tom Curtis, these trends are from different models, and averaging them isn't the best thing. 

    I don't have all the data.  I don't want to do all the calculations.  I don't need to.  I, again, was just making the point that the author's chosen comparison does not help make his point.

    >> You are making a claim about trends that are computed using statistical techniques. So if you're not trying to say anything about the statistics, your claim won't be particularly convincing.

    As noted above, the author made a comparison of an average trend to the observed trend.  It is interesting that his average does not include any +/-, which calls into question the statistical legitimacy of this averaging.  As such, any comment regarding this comparison does not require a statistical test.

    My claim doesn't have to be particularly convincing; the data already is!

  30. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin:

    I was not trying to say anything "statistically speaking", [...]

    There's your problem right there. You are making a claim about trends that are computed using statistical techniques. So if you're not trying to say anything about the statistics, your claim won't be particularly convincing.

  31. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Tom,

    I didn't say anything about falsification. 

    >> 1)  I am aware the authors chose the index to compare with.  What I am saying is that it is a wrong choice for straightforward reasons.

    But you didn't comment on this, except in regards to my comment.  Why? 

    >> 2)  Only one Global Mean Surface Temperature record is in fact global.  The NCDC record does not include the poles, for example.  Therefore, when comparing with NCDC, an NCDC mask of the model results should be used.

    Same as above.  You have a problem with the paper, but point it out when commenting on my comment.

    I was not trying to say anything "statistically speaking", I was just pointing out, using the comparison the author chose, that the trends the models predict do not seem to be that good.

  32. Watts Interview – Denial and Reality Mix like Oil and Water

    shoyemore @2 - the statements are contradictory when taken in context.  The first (red Watts) essentially says that we expect linear warming, and the fact that we haven't seen it has scientists scrambling to switch to the term 'climate change'.  The second says the climate is complicated and we shouldn't expect linear warming.

    It's the intent of the first quote and subsequent baloney that makes them contradictory.

  33. Watts Interview – Denial and Reality Mix like Oil and Water

    I am not sure if the first one is a real contradiction on Watts' part - you could call the one on the right a lie or a strawman, and the one on the left exaggerated ("hundreds" of variables?), but the two statements are not mutually exclusive. At least, they do not seem so to me.

    But far be it from me to defend Anthony Watts. I think he gets far too much attention, anyway.

  34. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Kevin:

    1)  I am aware the authors chose the index to compare with.  What I am saying is that it is a wrong choice for straightforward reasons.

    2)  Only one Global Mean Surface Temperature record is in fact global.  The NCDC record does not include the poles, for example.  Therefore, when comparing with NCDC, an NCDC mask of the model results should be used.

    3)  The meaning of statistical significance is that if the observations lie within the 95% confidence intervals of the prediction, the theory is not falsified by the data.  If they exceed them, it may be falsified given certain other conditions.  Saying that an index very close to the limit shows a problem simply means you do not understand statistical significance.  This is especially so as you have reversed the appropriate comparison by comparing the mean of the prediction with the confidence limit of the observations (it should be the other way round).

    4)  If you look at the GM section of figure 2, it is very clear that all three indices used lie, for the most part, within the 1 sigma (68%) confidence interval of the predictions.  I know that you are desperate to beat that fact into a "falsification" of the models, but all that is being falsified is any belief that you are capable of a sensible analysis.

  35. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Tom Curtis,

    I did not specify which observed trend; the author did that.  Regardless, from your data, while only one doesn't encompass the model's prediction, another has its upper limit right on the model's prediction, and another just above it (by 0.002) at 0.169.  This still shows a problem with the models.  It is not just due to a small sampling time.

  36. Watts Interview – Denial and Reality Mix like Oil and Water

    Mr. Watts is now clearly showing the traits of someone in denial-- just look at all the examples that Dana found in a short interview.  He is also demonstrating his ignorance of climate science.

    Alas, those in denial have a slew of tricks and techniques that they draw upon to misinform and mislead others.

    But in doing so they almost always make some critical mistakes-- not only factual mistakes, that alone would be bad enough, but they have trouble formulating an internally consistent and coherent message.  What is more they tend to present logical fallacies.  Maybe that is one way that those in denial try and deal with their cognitive dissonance.

    So what we have here is a failure by someone in denial to communicate coherently (with apologies to Axl Rose). Along those lines, this is very much a war on science and scientists by those in denial.  The likes of Mr. Watts seem oblivious to the fact that they are fighting a losing battle with the laws of physics.

  37. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    Observed trends, Jan, 1961- Dec, 2010:

    GISS: 0.151 +/- 0.027 C/decade.  (Upper confidence interval: 0.178 C/decade)

    NOAA: 0.142 +/- 0.025 C/decade.  (Upper confidence interval: 0.167 C/decade)

    HadCRUT3:  0.140 +/- 0.029 C/decade.  (Upper confidence interval: 0.169 C/decade)

    HadCRUT4: 0.139 +/- 0.027 C/decade.  (Upper confidence interval: 0.166 C/decade)

    So, one out of four temperature indices just fails to scrape in the confidence interval.  That index is known to not have global coverage, and in particular to have poor coverage of the Arctic, Asia, and North Africa (all areas showing very high temperatures in 2010).  Indeed, the only index of the four to have truly global coverage is also the one that most closely matches the predicted trend.

    Kevin does point toward a genuine problem, however, though it is not what he thinks it is.  It is about time climate scientists started using a HadCRUT3 (or 4) mask on their predictions when comparing predicted temperatures and trends to the Hadley products.  It is known that they do not have global coverage, and it is known that that affects the temperature trends.  The continued reliance on Hadley CRU products without producing a Hadley-mask prediction is the equivalent of comparing North American continent temperature predictions to USHCN CONUS temperature products.  It is not a prediction of the thing being measured.
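    The upper-limit comparison in those numbers, spelled out (values copied from the list above; 0.167 is the ensemble-mean trend quoted in the post):

```python
# Four observed trends (C/decade) with their quoted uncertainty, checked
# against the 0.167 C/decade ensemble-mean model trend.
trends = {
    "GISS":     (0.151, 0.027),
    "NOAA":     (0.142, 0.025),
    "HadCRUT3": (0.140, 0.029),
    "HadCRUT4": (0.139, 0.027),
}
model_mean = 0.167

for name, (mean, err) in trends.items():
    upper = mean + err
    verdict = "reaches" if upper >= model_mean else "misses"
    print(f"{name}: upper limit {upper:.3f} {verdict} the model mean")
# Only HadCRUT4 (0.166) falls just short; GISS, NOAA and HadCRUT3 reach it.
```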

  38. Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming

    >> Due to those overpredictions, on average the models simulate a 0.167°C per decade average global surface warming trend from 1961-2010, whereas the observed trend is approximately 0.138 ± 0.028°C per decade, approximately 20% lower.

    The max of the observed trend is 0.138 + 0.028 = 0.166.

    What does it mean when the max of the observed trend is less than the model's prediction?

    Since this covers 49-50 years, it is a substantial amount of time.  I would say that the model is out of whack!

  39. Hans Petter Jacobsen at 21:04 PM on 14 March 2013
    Living in Denial in Norway

    I agree with Esop @19 that the return of cold winter weather has affected the opinion in Norway. Cross country skiing, which requires cold winter weather, is still a part of the national identity for many Norwegians, myself included.

    Norway is a young nation, and polar explorers like Fridtjof Nansen and Roald Amundsen were important for the national identity both before and after Norway was separated from Sweden in 1905. Nansen tried to reach the North Pole in an expedition that lasted for 3 years. They did not manage to ski all the way to the Pole, but he and his men survived, and they were heroes when they returned home. Most Norwegians know about the expedition and the hardships that the men went through in the Arctic ice. An ice-free North Pole may therefore change people's opinion. In 2010 a modern Norwegian explorer sailed around the Arctic in a small fiberglass sailboat, and he got much attention in the media. It took Amundsen 3 years to sail the western part of this route and 2 years to sail the eastern part of it, even though his vessels were designed for the pack ice. I assume that someone will sail to the North Pole in a small fiberglass sailboat soon, and that this will have a greater impact on opinion in Norway than anywhere else.

  40. No alternative to atmospheric CO2 draw-down

    Hi Tom Curtis @49.

    Thanks for turning Archer's figure 1 round the right way up. In the past I have always met it on its side which does make fully appreciating its content a bit stressful.

    The text of Archer 2009 is a different matter, although you did get me checking who was right.

    The pulses of CO2 he models are in Pg carbon (GtC in my-speak) and not in Pg CO2. Archer describes the size of the smaller of these pulses thus "For comparison, humankind has already released 300 Pg C and will surpass 1000 Pg C total release under business-as-usual projections before the end of the century." Archer's 300 GtC for human releases is surely low, even for 2009 (which if you tot them up would have been 350 GtC back then according to CDIAC, and now over 400 GtC). It also ignores land-use emissions which tot up to a further 160 GtC according to CDIAC which makes today's total release probably over 560 GtC.

    Thus under BAU, I would put the 1,000 PgC emissions milestone as arriving, not as Archer says "before the end of the century", but by mid-century.

    The larger 5,000 PgC pulse he equates to burning all FF reserves including coal (although tar sands & fracked gas likely don't feature). Fuel reserves are always a nightmare, with the numbers quoted ranging from 'reserves from current holes in the ground using current extraction methods' all the way to 'estimated potential global reserves extractable using theoretical methods.' I do think Archer is at the high end of these different figures when he says the 'entire reservoir of fossil fuel' equates to his 5,000 GtC. A figure of 760 GtC (2,795 GtCO2) is encountered commonly, which I interpret as 'known reserves less tar sands & fracking'. There are as well carbon feedbacks from permafrost, so BAU for 60 years would easily see resultant total cumulative carbon emissions up to 1,500 GtC.

  41. Water vapor is the most powerful greenhouse gas

    gws #154, thanks.

  42. Water vapor is the most powerful greenhouse gas

    Tom Curtis, Jose_X - If I am referring to running the same experiment in steps (one or many), and you are referring to something else entirely (with/without feedback, under different conditions, for example), then my apologies, that's apples and oranges. Different questions entirely, and comparing the two is not particularly relevant. 

    What I was discussing is that a numeric analysis of 400ppm will be the same as another analysis of 400ppm, all other things held constant (including the presence/absence of feedbacks and whether or not sufficient time for equilibrium is allowed), regardless of other calculations, and that interim values for GHG levels will and must fall somewhere between the 0ppm and 400ppm numbers. Not retaining feedback levels for one forcing level over to another, which is invalid, but running each step of the experiment under the same conditions as the final 400ppm evaluation. To be really clear, I'm speaking of total forcings, not deltas, as the deltas will be dependent on the temperature at the time of the delta - if you are looking at varying temporal evolution without equilibrium, all bets are off. 

    But I'll freely admit that I may not be fully following the conversation - I'm still quite unclear on what Jose_X wishes to investigate, what issues he's seeking insight into. 

  43. Water vapor is the most powerful greenhouse gas

    KR @156, I think you aren't sufficiently considering the fact that the radiative forcing between two concentrations is the difference between the TOA radiative flux at equilibrium for the starting concentration and the TOA radiative flux for the new concentration, with all other values (ignoring the stratosphere) held at the equilibrium values for the starting concentration.

    In fact, one relevant experiment for assessing a related issue has been done.  In Schmidt et al 2010 they compared the effect of adding a slug of IR-active compounds (and clouds) to a pristine atmosphere (N2 and O2 only), and the effect of removing the same size slug from an atmosphere with the composition found in the 1980s.  Because CO2 has virtually no overlap with any factor other than water vapour and clouds, that experiment effectively determines the RF of adding 340 ppmv to an atmosphere with no CO2, and the RF of removing 340 ppmv from an atmosphere with 340 ppmv of CO2.  The result for adding the CO2, ie RF(0→340), is 38 W/m^2.  The result for removing the CO2, ie RF(340→0), is 22 W/m^2.  The difference is because, in the very cold climate with 0 ppmv CO2 after equilibration, there is virtually no water vapour and no clouds in the atmosphere (not quite true, but close enough for exposition).  In the case of addition, that means there is no overlap, and the full effect of the addition can be experienced.  In the case of removal, the full load of water vapour and clouds is retained in the atmosphere, because this is a no-feedback situation.  So, not only does the RF of a large slug not equal the sum of the RFs of a series of small slugs of the same total size, but the RF varies depending on whether you are adding or removing the slug.

    This does not mean that the equilibrium temperature will differ for a given concentration of CO2 depending on whether you arrive at that concentration by increasing or decreasing CO2.  To the extent that the difference in RF between the two methods is a consequence of the overlap with water vapour and clouds, as the vapour pressure of water in the atmosphere adjusts to a reduced (or increased) temperature, the extent of overlap will equalize.  So, λ also differs between the two cases such that λ'RF(0→340) = λ"RF(340→0).

    Again, this latter property is not necessarily true, and is not true for some values of CO2 concentration and for Earth System Climate Sensitivity, with a bifurcation between snowball earth and non-snowball earth states resulting in λ'RF(a→b) ≠ λ"RF(b→a) for some CO2 concentrations a and b.

    Finally, and as you point out, the simplified formula does apply within error, and has been shown to apply for a large range of CO2 concentrations close to the present value (ie, from about 150 ppmv to several thousand ppmv at least); in that range, to a close approximation, it does not matter whether you increase or decrease, or change the concentration in a single slug or by increments - the answer will be the same.
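    The arithmetic implied by those two Schmidt et al numbers, as a sketch (the equilibrium temperature difference itself is not given in the comment, so only the ratio of the two λ values can be computed):

```python
# RF values quoted above from Schmidt et al 2010 (no-feedback calculations).
rf_add = 38.0      # W/m^2, RF(0 -> 340 ppmv)
rf_remove = 22.0   # W/m^2, RF(340 -> 0 ppmv)

# If both paths must yield the same equilibrium temperature difference dT,
# then lambda' * rf_add == lambda'' * rf_remove, so the two sensitivities
# must differ by the inverse ratio of the forcings.
ratio = rf_add / rf_remove
print(ratio)  # ~1.73: lambda'' must be ~1.73x larger than lambda'
```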

  44. Eric Grimsrud at 11:05 AM on 14 March 2013
    State Department Downplays the Climate Impact of Keystone XL

    To John Hartz and John Cook, 

    I would be pleased as well as honored to serve as a volunteer on the SkS author team.  I consider SkS and Climate Progress to be the best I have noted to date for updates on climate change issues, both scientific and political.  W.R.T. my own personal efforts, see ericgrimsrud.com and ericgrimsrud.wordpress.com.   

  45. State Department Downplays the Climate Impact of Keystone XL

    john @4 - yes thanks, that should have said 'barrels', not 'gallons'.  Correction made.

  46. State Department Downplays the Climate Impact of Keystone XL

    Given Obama's poor track record, I feel Keystone will be approved later in the summer. Also, knowing Obama, there will be big compensatory gestures to the "green" movement, maybe to do with carbon emissions or coal exports.

    I am not sure if that will be enough. We will need to see the final package.

  47. Water vapor is the most powerful greenhouse gas

    Jose_X @155:  In taking multiple steps in the first experiment, the atmosphere was never allowed to equilibrate.  As a result, the mean global surface temperature, water vapour content of the atmosphere, etc, were constant at 0 ppmv CO2 levels throughout the experiment.  You could, if you want, run multiple experiments, where in each experiment you allow the atmosphere to equilibrate at 0 ppmv CO2, then add slugs of 40 ppmv CO2, 80 ppmv CO2, 120 ppmv CO2, etc, but you would get the same result.  That result is the RF(0->40), RF(0->80), etc, which allows you to see the incremental difference in RF not just for the step from 0 to 400, but for all the intermediate steps as well.

    In fact, thinking about it, it would be best to run 10 experiments.  One in which you set CO2 to 0 ppmv, allow it to equilibrate, then incrementally increase to 400 ppmv without allowing equilibration between each step.  One in which you set CO2 to 40 ppmv, then incrementally increase without allowing equilibration, and so on.  This series of experiments would allow you to calculate RF(0-40), RF(0-80), ..., RF(0-400); RF(40-80), RF(40-120), ..., RF(40-400); ...; RF(360-400).

    Doing so, I suspect you would find the difference between RF(120-400) and the sum of the differences RF(120-160), RF(160-200), etc would be small.  That is, most of the H2O/clouds/CO2 overlap would arise in the first few increments, because the first few increments of CO2 have the largest effect on temperature and hence on water vapour content.  If, however, we pushed the experiments out to 2000 ppmv, the difference introduced by each incremental step would start rising again, as the increase in vapour pressure of water with increase in temperature rises rapidly above 40 C (ie, typical tropical temperatures with very high CO2).

    Finally, your suggested experiment is no different from mine, except that it does not obtain intermediate values for the RF relative to 0 ppmv.  Consequently, by my analysis, it would also show the RF(0-400) to be greater than the sum of the incremental radiative forcings.
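    A minimal sketch of that experiment grid, using a toy logarithmic forcing function in place of a real radiative-transfer run (the function, its 5.35 coefficient, and the +1 ppmv offset that keeps the 0 ppmv case defined are all stand-ins; a real calculation with H2O/cloud overlap is exactly what would break the equality printed at the end):

```python
import math

def rf(base, new, a=5.35, eps=1.0):
    # Toy stand-in for a radiative-transfer run: logarithmic in concentration,
    # with a small offset so the 0 ppmv case is defined. A real calculation
    # would hold the whole atmospheric state (water vapour, clouds) at the
    # base-state equilibrium, which is where the overlap effects enter.
    return a * math.log((new + eps) / (base + eps))

levels = list(range(0, 440, 40))  # 0, 40, ..., 400 ppmv

# RF for every start/end pair, as in the experiments described above.
table = {(lo, hi): rf(lo, hi) for lo in levels for hi in levels if hi > lo}

# Compare one big jump with the sum of its 40 ppmv increments.
increments = sum(table[(c, c + 40)] for c in range(120, 400, 40))
print(table[(120, 400)], increments)  # equal for this toy (the logs telescope);
                                      # with overlap the two would differ
```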

  48. Water vapor is the most powerful greenhouse gas

    KR 156 >> How could the forcing at 400ppm possibly not equal the forcing at 400ppm?

    I'll quote Tom Curtis 152:
    > b) Radiative Forcing is not a concept in a basic physical theory, but rather a concept used in calculating the approximate consequences of the complex interactions of basic physical theories.  Consequently what is required of it is that it be sufficiently useful in its range of operations - which of course, it is.

    One might ask in Calculus, how could the derivative of x^3 + x^2 not possibly be the same as the derivative of x^3 plus the derivative of x^2? Well, the derivative operator was designed, among other things, to be linear. But we can design many algorithms/operators that don't have that feature. In fact, it's not really clear that an algorithm has such a property until it is "proven" in a rigorous analytical sense.

    The odd thing to me about RF is that it disappears after equilibrium is reached. By looking at equilibrium radiation at TOA or on the surface, you can't tell. In fact, there are many independent variables that go into deriving RF, and if any of those are left out of the analysis, you really can't recapture that value. And improvements in our understanding in the future might even lead to different algorithms that would derive different RF. Each time we engage in a new algorithm, arguably, we should try to prove that certain mathematical properties exist. I don't think it is obvious that a complex algorithm dependent on numerous factors would automatically be well-behaved in any particular sense of the word.

    OK, let's assume we are going from a given starting concentration of CO2 to another where the RF value "at" each path point can be modeled by roughly the same logarithmic function (dependent on a reference point). We can take multiple paths there.

    Question: is it obviously true that a*ln(b*(x_1/x_0)) + a*ln(b*(x_2/x_1)) = a*ln(b*(x_2/x_0)) for all x_1 and x_2? At best we should perform the algebra first to be sure (or to show instead that the path does matter). Here I believe the path doesn't matter, at least for the standard form with b = 1; a quick symbolic check is sketched below.
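    That symbolic check (a sketch; it confirms additivity for the standard a*ln(C/C0) form and shows that any extra multiplicative constant inside the logarithm breaks it):

```python
import sympy as sp

a, b, x0, x1, x2 = sp.symbols("a b x0 x1 x2", positive=True)

# Standard simplified form, dF = a*ln(C/C0): the intermediate point drops out.
path = a * sp.log(x1 / x0) + a * sp.log(x2 / x1)
direct = a * sp.log(x2 / x0)
print(sp.simplify(sp.expand_log(path - direct)))  # 0: path-independent

# With an extra constant inside the log, a*ln(b*C/C0), each step contributes
# one factor of b, so the two-step path and the direct jump disagree.
path_b = a * sp.log(b * x1 / x0) + a * sp.log(b * x2 / x1)
direct_b = a * sp.log(b * x2 / x0)
print(sp.simplify(sp.expand_log(path_b - direct_b)))  # a*log(b): zero only if b == 1
```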

    What if the approximating functions used along the partitioning path were entirely different from each other?

    Also, we can even look at forcings by different gases and ask, what if the gases are added in different orders and quantities?

    If the approximation method used to address any of these questions gives a result that the partition chosen does matter, one can't argue that if we had simply used the true and best method (codes) then it all would have worked out, because it would adhere to reality, etc, etc. Every algorithm/calculation is an approximation of reality to some degree. Why should today's current best procedure necessarily be the best we will ever get, so as to allow that logic to work?

    OK, since I am writing before carefully reviewing the logic of this comment sufficiently, I too would certainly appreciate comments, complaints, etc. I heretoforth reserve the right to backtrack through an unlimited number of "undos".

    PS: "KR" and "RF" can get a little confusing. They each have an R and that looks like the other letter, a verticle line with two smaller lines connected each in at least a quasi horizontal position.

    KR, thanks for the modtran link. I'll see if I can make use of it.

  49. john mfrilett at 07:36 AM on 14 March 2013
    State Department Downplays the Climate Impact of Keystone XL

    I think this may be in error by a factor of 100, "Using 600-gallon tank cars".  Tank cars can have capacities of up to 60,000 U.S. gallons of fluid. 

    Moderator Response: [AS] I believe that is an error, it should say "barrels" instead of "gallons". The SEIS uses 600 barrels per tank car, about 19,000 US gallons.
  50. Hans Petter Jacobsen at 07:10 AM on 14 March 2013
    Does Norway lack political commitment to renewables?

    StBarnabas comment #1 inspired me to look more into the possibilities for power exchange between Norway and Europe.

    A report from a seminar arranged by CEDREN gives a good overview of how Norway, Germany and the UK may balance power using the hydro reservoirs in Norway. The report states that "Demand for Norwegian pumped-storage hydropower is rising." The UK has signalled a long-term demand for balancing power in the range of 15–20 GW. Germany has indicated a substantially greater need (20-60 GW). The Norwegian Statnett states that a balancing power regime of up to 20–25 GW is obtainable from a Norwegian technical standpoint. There are more details in the report.

    A report from Zero states that "With hydro reservoirs of 84 TWh, Norway holds about 50 percent of Europe’s hydro power storage capacity." The report focuses on balancing power between Germany and Norway. Today Germany has 30 hydro power pump-storage stations with a total capacity of 6.8 GW. When the reservoirs are fully loaded, they can run for 4-8 hours and produce a total of 0.04 TWh. A 100% renewable electricity system in Germany in 2050 would require 76 TWh of reimport each year, which corresponds to almost the total storage potential of the Norwegian hydro power reservoirs. A maximum capacity of 50 GW in- and output is required. To obtain the approximately 50 GW input and output capacity, the turbine capacity of Norwegian power plants would have to be expanded, in addition to stepping up pumping capacity. Current installed hydropower capacity in Norway is 28 GW (the numbers in the report vary a little). Statkraft cautiously indicates a potential of anything between 30 and 85 GW, but an interim report states that Norway could supply up to 20 GW of balancing hydropower.

    The Zero report states that "The construction of new hydro reservoirs in new areas in Norway for electricity export is highly unlikely. The most discussed solutions in recent reports on balancing options are pump storage and expansion of existing hydro power plants". The report also discusses the opposition among people in Norway against new power lines due to visibility in the landscape.

    I have played with some numbers to set 20 GW balancing power in perspective. The 500 million people in the EU countries consume approximately 2600 TWh each year, which is approximately 300 GW on average. The energy capacity in the Norwegian hydro reservoirs is 84 TWh, which corresponds to 20 GW power for 4200 hours, i.e. for almost half a year.
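    Those closing numbers check out (a quick verification sketch; the 8760 hours-per-year figure is the only value not taken from the comment):

```python
# EU consumption: 2600 TWh/yr expressed as average power.
avg_power_gw = 2600.0 * 1000 / 8760       # ~297 GW, "approximately 300 GW"

# Norwegian reservoirs: 84 TWh drawn down at 20 GW.
hours = 84.0 * 1000 / 20.0                # 4200 hours
print(round(avg_power_gw), hours, round(hours / 8760, 2))  # 297, 4200.0, 0.48
```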
