


Comments 44301 to 44350:

  1. 2013 SkS Weekly News Roundup #25B

    Terra@5: I wonder if the people of New York and New Jersey might have said the same thing....last year.

  2. 2013 SkS Weekly News Roundup #25B

    @Terranova #5:

    Although you did not provide a direct link to it, I presume that you found the 3.15 mm/yr rate of sea level rise for Charleston, SC on NOAA's graph, Mean Sea Level Trend 8665530 Charleston, South Carolina. Am I correct?

  3. Stephen Leahy at 01:09 AM on 23 June 2013
    Peak Water, Peak Oil…Now, Peak Soil?

    Boswarm: Since I was at the conference, attended the sessions, interviewed a dozen people, and wrote the article, let me clarify a couple of things: Iceland has not recovered; it remains Europe's largest desert despite the amazing efforts of the soil conservation service. That is what their scientists told me, and I quoted them. I spent two weeks there.

    You seem to imply I made this stuff up. Were you at the conference?

    FYI, it is a 1,000-word article, not a transcript of three days of talks.

    Phil L: the article does not mention earthworms; it's in the photo cutline, and I have no idea who wrote it. Nor did I write the headline. However, more than one soil scientist has used the term 'peak soil'.

  4. 2013 SkS Weekly News Roundup #25B

    Rugbyguy, you're right. But I just checked NOAA for Charleston's SLR, and it is at 3.15 mm/yr. Not a lot to worry about.

  5. The Consensus Project data visualisation - a history

    Chriskoz@18, yes the original had the clusters of circles in a line.
    We decided to reduce the width of this app to fit the current web site layout, which meant re-arranging the positions of the clusters.

    I'm working on a bar chart option in addition to the circles, although I can't say exactly when it will be available.

  6. The Consensus Project data visualisation - a history

    chriskoz @18, click the "Interactive History of Climate Science" on the left bar (just under and to the right of the button for the "Consensus Project").

    The interactive history starts with Fourier's classic in 1824 and runs through to 2012. In total, it has 266 Skeptic, 2376 Neutral, and 2493 Pro-AGW papers, making the percentages 5.2% Skeptic, 46.3% Neutral, and 48.5% Pro-AGW. Excluding Neutrals, that is 9.6% Skeptic and 90.4% Pro-AGW.

    From 1991-2012 inclusive, there were 252 Skeptic, 2100 Neutral, and 2355 Pro-AGW papers. That is 5.4% Skeptic, 44.6% Neutral, and 50% Pro-AGW, or, excluding Neutrals, 9.7% Skeptic and 90.3% Pro-AGW.
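    Those percentages can be re-derived from the raw counts in a few lines of Python. This is only a sanity check on the arithmetic quoted above; the counts are taken from this comment, and the function name is mine:

```python
# Recompute the Interactive History percentages from the raw paper counts.
def shares(skeptic, neutral, pro):
    """Return (percentages of all papers, percentages excluding Neutrals)."""
    total = skeptic + neutral + pro
    all_papers = [round(100 * n / total, 1) for n in (skeptic, neutral, pro)]
    excl_neutral = [round(100 * n / (skeptic + pro), 1) for n in (skeptic, pro)]
    return all_papers, excl_neutral

print(shares(266, 2376, 2493))   # full 1824-2012 set
print(shares(252, 2100, 2355))   # 1991-2012 subset
```

    Both calls reproduce the figures quoted: 5.2/46.3/48.5 and 9.6/90.4 for the full set, 5.4/44.6/50.0 and 9.7/90.3 for 1991-2012.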

    On categorization, the Interactive History of Climate Science says:

    "Skeptical Science takes a different approach to Naomi Oreskes' Science paper who sorted her papers into "explicit endorsement of the consensus position", "rejection of the consensus position" and everything else (neutral). In this case, the backbone of our site is our list of climate myths. Whenever a climate link is added to our database, it is matched to any relevant climate myths. Therefore, each link is assigned "skeptic", "neutral" or "proAGW" whether it confirms or refutes the climate myth.

    This means a skeptic paper doesn't necessarily "reject the consensus position" that humans are causing global warming. It may address a more narrow issue like ocean acidification or the carbon cycle. For example, say a paper is published examining the impacts of ocean acidification on coral reefs. If the paper finds evidence that ocean acidification is serious, the paper is categorised as pro-AGW and added to the list of papers addressing the "ocean acidification isn't serious" myth.

    There are a large number of neutral papers. Neutral does not mean to say each paper was unable to resolve the climate myth. Sometimes, a paper is relevant to a number of climate myths and the results are mixed as to whether it endorses or rejects all the myths. In many cases, the paper doesn't directly set out to directly resolve the myth or the paper has a regional emphasis rather than global. Papers that met any of these criteria are often categorised as neutral."

    So, it differs from the Consensus Project in that it classified based on evidentiary contribution, whereas the Consensus Project classified based on endorsement.  Further, it categorized based on support of any of 174 climate myths listed at SkS, so that many of the "skeptic" papers in fact are perfectly consistent with AGW.

  7. The Consensus Project data visualisation - a history

    After having fended off MD's trolls, everyone should be pleased with my perfectly on-topic question:

    In 2011, when I was first looking at SkS (well before the Consensus Project), I saw the "bouncing balls" visualisations here. The visualisation back then was also about how many papers were "pro-global warming" vs. how many "against" and "neutral". I remember the visualisation very well (a testimony to how good a teaching tool such a visualisation is): the balls were grouped along a line rather than in a triangle, although I don't remember the precise numbers, nor whether "pro" vs. "against" amounted to 97%. I cannot find that old visualisation anymore; it looks like the Consensus Project visualisation superseded it.

    So, this visualisation is not new. But the data, coming from Cook 2013, certainly is. Finally, the question: what is the relationship between those two? What data was the old visualisation based on, and were its categories defined somewhat differently than those in Cook 2013?

  8. The Consensus Project data visualisation - a history

    Don't feed the troll.

  9. 2013 SkS Weekly News Roundup #25B

    And guess what Terra.....it already is, and will increasingly be, more dangerous as increased sea levels exacerbate those existing reasons for flooding.

  10. The Consensus Project data visualisation - a history

    I hope you do look around, JM, because you'll find that this site has more science-based discussion than any other site on the net -- by a long shot. When I say "science-based discussion" I mean arguments that are based on the published science, and that link directly to that science. The number of linked publications site-wide has to be approaching 10k. Several of the regular posters are published, and the site frequently gets guest posts from working scientists.

    So when you post evidence-free rhetoric full of what you might think are sly insinuations, it really just comes off as sort of juvenile tough talk.

    I am actually professionally interested in how your current understanding of climate science has been developed, so I'd love it if you'd provide the evidence that led you to write the posts you've written so far. Who knows, maybe you know something that everyone here doesn't, at least where climate is concerned. I'd be willing to bet that everyone here will be more than happy to discuss any new evidence or fresh interpretations of existing studies.

  11. James Madison at 13:21 PM on 22 June 2013
    The Consensus Project data visualisation - a history

    Rob, thanks.

    Rob and Tom, although we disagree, I appreciate your patience and civility.

    Refreshing really.

  12. Rob Honeycutt at 13:15 PM on 22 June 2013
    The Consensus Project data visualisation - a history

    James...  If you wish to discuss the empirical evidence related to AGW you should do so on this thread.

  13. The Consensus Project data visualisation - a history

    James Madison persists in discussing anything but the topic of this post. The reason is transparently obvious. Were he to discuss climate sensitivity, for example, at the "climate sensitivity is low" rebuttal, and write "If you look at the empirical data, it's simply not there", readers would just scroll to the top of the page and see climate sensitivity estimates from empirical data (not models) from thirty different studies (see below). They would then know immediately that he had not read the article, or not understood it, and that his points were based on thoughtless mouthings of denier talking points rather than actual knowledge of climate science.

    Not content with showing his ignorance on one topic, James Madison proceeds to show it on several. He brings up the "no warming in x years" meme (@9). In doing so, he ignores the fact that the rate of increase of GMST over the last twenty years (his chosen time period) is 0.134 +/- 0.105 C per decade, i.e., more than 80% greater than the twentieth-century average of 0.072 +/- 0.01 C per decade (GISTEMP, determined with the SkS trend calculator). In denier speak, warming faster than the twentieth-century rate is a "pause" in the warming.
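    The "more than 80% greater" figure follows directly from the two trend estimates quoted in this comment; a short calculation makes the arithmetic explicit (the inputs below are the central estimates quoted above, not a fresh calculation from GISTEMP):

```python
# Compare the two GISTEMP trend estimates quoted above (C per decade).
recent = 0.134    # last twenty years (central estimate, SkS trend calculator)
century = 0.072   # twentieth-century average

excess = 100 * (recent - century) / century
print(f"recent trend exceeds the century average by {excess:.0f}%")
```

    That comes out to about 86%, consistent with "more than 80% greater".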

    Again, by keeping his discussion carefully off topic, Madison avoids the comparison between his talking points and the rebuttals which already show that his blatherings are without substance.

    Finally, he flaunts the fact that he has not read the comments policy by indicating he does not know the meaning of "sloganeering", which is defined therein. And why, after all, should he read the comments policy? Though posting here is a privilege conditional on compliance with that policy, he has now shown repeatedly that he has no intention of complying.

  14. Rob Honeycutt at 12:08 PM on 22 June 2013
    The Consensus Project data visualisation - a history

    James...  When you make statements like this, "3) maybe, maybe not. What is to discuss? If you look at the empirical data, it's simply not there" you are clearly making a claim that is flatly untrue.  You are making a statement from ignorance, and implying that the 30,000+ researchers who have done the hard work to explore the science of climate change don't know what they're doing.

  15. 2013 SkS Weekly News Roundup #25B

    John, now you are using Rolling Stone as a news source? Wow! Please read those articles and not just the headlines. Coastal cities with low elevations have historically experienced these problems. Nothing new here. I've walked out of my downtown house in Charleston to be greeted by two feet of water in the streets. A combination of a high tide, a full moon, and some rain. Oh, by the way, a lot of the Charleston peninsula was built over a landfill. Building cities in low-lying areas is always dangerous.

    Moderator Response:

    [JH] The introduction to this site's Comment Policy reads as follows: 

    The purpose of the discussion threads is to allow notification and correction of errors in the article, and to permit clarification of related points. Though we believe the only genuine debate on the science of global warming is that which occurs in the scientific literature, we welcome genuine discussion as both an aid to understanding and a means of correcting our inadvertent errors.  To facilitate genuine discussion, we have a zero tolerance approach to trolling and sloganeering. [My bold.]

    Please take the time to read the entire Comments Policy and please adhere to it in your future posts.

  16. Rob Honeycutt at 11:09 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    James...  The point here, as is amply shown in the graphic and as is demonstrated by Cook 2013, is that nearly every research paper being produced on the topic of climate change (that expresses a position) agrees that human emissions are causing warming.

    This is shown through models, it's shown through empirical evidence of all kinds, and it's shown through basic physics. It has been demonstrated nine ways to Sunday, and back again.

    I will reiterate my previous point, where I said: please consider that perhaps you do not yet fully comprehend the full body of research on this complicated issue. Those who do understand, and have spent their entire professional careers working in this field, are nearly all in agreement on the broad aspects of climate change.

  17. 2013 SkS Weekly News Roundup #25B

    The dumbest claim I have found about global warming and its effects was "we will be able to adapt to overcome those problems" - a claim made on SkS!

  18. James Madison at 10:25 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    Rob,

    @9, (now 8) well, if you have a large warming showing in the empirical data over, say, the last twenty years, I'll take a look. We can leave out the % caused by CO2 vs amplification vs other for the moment to, you know, keep it simple. We should ask them to put this in the app. (Just trying to stay on topic, boss.)

    @10, looks like you've been snipped, guess it's time to move.

    Moderator Response:

    [DB] Yes, please stay on-topic.  Literally thousands of other threads exist at this venue covering the near-entirety of climate science and the denial of it.  Typo fixed.

  19. Rob Honeycutt at 09:22 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    James...  I was paraphrasing this comment:  "...what is lacking is substantiation of the premise, that warming will be large, amplified through positive feedbacks and mostly caused by CO2."

  20. James Madison at 09:05 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    Paul D,

    whatsa "custom"?

     

    ok, what's the purpose of the app if Post 1 is true?

  21. The Consensus Project data visualisation - a history

    Re: James

    The blog post is about the javascript app.
    'The Project' in this case is the app.
    If you want to discuss the data the app uses, then take your custom to another blog post.

  22. James Madison at 08:36 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    sloganeering, now there's a technical term. Got me lost on that one.

  23. James Madison at 08:32 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    Sorry for your misunderstanding Tom.

    This goes directly to the Consensus Project, not to the other issues you mentioned.

    1) yes, so what, I agree. what is to discuss?

    2) yes, so what, I agree. what is to discuss?

    3) maybe, maybe not. What is to discuss? If you look at the empirical data, it's simply not there.

    As for misrepresentations by skeptics, etc., why would one waste the time? The data tells a much more objective story, now, doesn't it?

    Now to revisit my question -

    (-snip-)

     

    Moderator Response:

    [DB] Actually, Tom Curtis is spot-on with his assessment of you.  You would do well to listen to any suggestion he takes the time to write up.  Further, as others note, take the discussion of the Consensus Project to a more appropriate thread as noted (use the Search function).  This thread is about the app visualization.  More off-topic sloganeering snipped.

  24. Rob Honeycutt at 08:26 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    James Madison...  Your position is internally contradictory (as well as off-topic, as Tom points out). You're ostensibly agreeing that 97% of published research agrees that AGW is real, and yet are saying that it's not proven. So, what is the 97% of research agreeing on?

    Think of it this way:  What are the chances that nearly all the published research has missed some critical element of climate that could explain everything we're ascribing to CO2?  Are you really willing to bet the future on those odds?

    And BTW, we can measure those elements we are ascribing to CO2.  So, not only do these highly detailed measurements have to be wrong, we have to have something else that fully explains everything that CO2 explains.

    Please consider that perhaps you do not yet fully comprehend the full body of research on this complicated issue. Those who do understand, and have spent their entire professional careers working in this field, are nearly all in agreement on the broad aspects of climate change.

  25. The Consensus Project data visualisation - a history

    James Madison @1, it takes a certain sort of perverseness to come to a site with hundreds of posts dealing with the questions you ask, find a post on another topic, and then ask the questions there. I suggest, in answer to your questions, that:

    1)  CO2 is a greenhouse gas;

    2)  It is not saturated; and

    3)  Climate sensitivity is very likely to be in the IPCC range of 2-4.5 C per doubling of CO2.

    Your further questions should be on those posts so that you comply with the comments policy.

    Alternatively, to stay on topic, we can discuss the continuing misrepresentations by leading "skeptics" of the level of scientific agreement on AGW, which have made both the paper and this post necessary.

  26. James Madison at 07:37 AM on 22 June 2013
    The Consensus Project data visualisation - a history

    OK, I'll bite. Nice project, but what really is the point of the exercise?

    As, of course, paper doesn't care what you write on it.

    Consensus or not, what is lacking is substantiation of the premise, that warming will be large, amplified through positive feedbacks and mostly caused by CO2.

    See even here: http://www.climate.gov/. (-snip-)?

    While increasing GHGs may or should, on net, warm the earth that is not the real question. The real question is, how much it will warm the earth. To date, I have not seen any “useful quantitative results” regarding that question.

    Once those quantitative results are in, we can proceed to the next question: what should one do about it.

    thank-you

    Moderator Response:

    [DB] Off-topic sloganeering snipped.  See this thread for an on-topic explanation as to why your snipped statement is wrong.  Lastly, review the Comments Policy of this site for an understanding of the rules of this venue.

  27. 2013 SkS Weekly News Roundup #25B

    I have just been playing around with your graphics.

    Extremely cool!

  28. The True Cost of Coal Power

    A very good article and a good discussion.  Do you know how the coal plant prices were calculated?  Does the calculation include the cost of building the plant and associated transmission or does it assume the plant is in place?  Do the renewable prices include the initial hardware costs?

    There are externalities for renewable energy sources also.

    One can always find things to add or take away from an analysis like this. However, the conclusions usually stand when most of the factors are included, as in this study.

    In my opinion, any utility executive promoting a new coal-fired plant should be removed for obvious incompetence. Does anyone think a new plant will ever be profitable when its expected useful life is probably less than a decade?

  29. New paper on agnotology and scientific consensus

    @12 Mal,

    "arrognoramus"

    I shall, of course, now claim to have thought that one up myself.

  30. Glenn Tamblyn at 18:24 PM on 21 June 2013
    New paper on agnotology and scientific consensus

    WheelsOC

    I just read Kitzmiller v. Dover. Awesome!

    It's not surprising that Climate Agnomaniacs don't go near a courtroom very often. While the Law is very different from Science, in both disciplines you learn a lot about logic and how to make (or fail to make) a case.

  31. New paper on agnotology and scientific consensus

    Like WheelsOC @13, I learnt what I know of biology by first watching, and then participating in, the creation/evolution "debate". From that experience, I have a healthy respect for the teaching power of agnotology, but a clear grasp of its limitations as well. Knowledge gained by refutation of particular arguments will be shaped by the arguments actually made. Thus somebody who learns biology through the creation/evolution debate will learn a great deal about peppered moths and bombardier beetles, but very little about most other insects. They will gain an in-depth knowledge of population genetics, but only a cursory knowledge of ecology. And so on.

    The consequence is a group of people very adept at refuting creationist arguments that have been made, but potentially vulnerable to new arguments that exploit the limits of their knowledge.

    For that reason, while I can see a useful role for agnotology as a supplemental part of a course on climate change, I would not want to see it as the lion's share. Rather, having taught the subject either systematically or historically, I would finish with a section discussing denier arguments as a means of teaching students to review and apply the knowledge they had previously gained.

  32. New paper on agnotology and scientific consensus

    Bedford suggests how examining and refuting misinformation is actually a powerful way to teach climate science, sharpen critical thinking skills and raise awareness of the scientific method.

    My own empirical (read: anecdotal) experience has been exactly this. When I began looking into Creationists' arguments about the validity of this-or-that facet of science which presented a challenge to (or appeared to support!) their beliefs, I had to then read what the scientists themselves were saying about these things. Looking into those "debates" (to use a generous term) gave me a much, much deeper understanding of evolution, biology, science generally, AND the philosophy of science than all of my formal schooling put together.

    A lot of that knowledge carried over to serve me well when evaluating the "two sides" of the climate issue. Even if my knowledge of science hadn't been so greatly expanded through the experience, the similarities between so-called "skeptics of AGW" and anti-evolutionists were overwhelming. They displayed the same failures of critical thinking, the same tendency to misinterpret or misrepresent, and the same inability to back down despite overwhelming facts and evidence to the contrary. All they had to rely on was a wall of anti-knowledge: talking points that were asserted as facts but really had no factual basis. These filled up the spaces in their mental stockpile where real knowledge could have fit and influenced their worldview, and they're wedged in so tightly that they keep contrary facts out in the cold. For example, the Young Earth Creationists are absolutely sure that the Grand Canyon was both laid down and then carved out by the waters of Noah's flood. The climate denialists convince themselves that it's impossible to know what's going on with the climate system if that would mean acknowledging the full extent of anthropogenic warming. This anti-knowledge insulates them from the uncomfortable truths; hence the science must be unsettled and uncertain enough to allow for non-artificial factors, if they even admit that there's any climatic pattern to explain at all.

    Quote-mining, selective citations, misrepresentations, conspiracy theories, appeals to crackpot 'experts,' and nice-sounding but utterly baseless assertions all contribute to the wall of anti-knowledge by providing factual-seeming nuggets that can be used like facts to construct an argument or defend a viewpoint. They're fact substitutes.  Creationists and climate denialist gurus both have huge stockpiles of them from which the average mook could pick and choose to support whichever version of wrongheadedness they favored, which were then regurgitated into public discourse at every level (except the rarified atmospheres of the scientific literature, where the primary audience knows better). At least there is a silver lining to this proliferation of misinformation; resources like SkepticalScience come along and put the myths to rest in plain terms and with scientific references that leave the reader more educated and informed than they were going in. It's a good way to pluck the offending nugget out of someone's gullet before they spew it all over new venues. But that necessary service may not be enough to convince the public not to swallow those anti-knowledge nuggets in the first place.

    Creationism's biggest recent campaign, the Intelligent Design movement, was dealt a lethal blow in Kitzmiller v. Dover, not only by the plaintiffs' scientific superiority and plainspoken rebuttals but also by the defense's own testimony, which showed how terribly anti-science and religiously motivated their actions were. It has not been able to recover its pre-Dover glamor in the popular mind since that stunning court case. I can only hope something similar happens to take all the wind out of climate denialists' sails, and soon (assuming the necessary event isn't some kind of natural disaster). The more deeply entrenched these anti-knowledge campaigns become in the populace, the more we'll all suffer going forward and the less we'll be able to leave behind for the generations that follow.

  33. The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.

    Two corrections to my post 179. I wrote: "Using your exact example and your exact method (with linear trend as a regressor for human), we repeated your experiment 10,000 times, and found that the true human answer lies within the 95% confidence level of the estimate 94% of the time." There are two errors in this sentence of mine: 94% should be 93%, and the parenthetical "(with linear trend as a regressor for human)" should be deleted, because we were using the exact method of Dumb Scientist, who used the exact human regressor. DS also pointed out this second error on my part. Sorry. I wrote that post on a small laptop while traveling, without checking/scrolling the posts carefully.

  34. New paper on agnotology and scientific consensus

    billthefrog:

    I have been trying for some time - funny, my wife just chuckled as she walked past the screen - to ascertain if there is a word in the English language to describe this weird amalgam of arrogance and ignorance.

    How about "arrognoramus" 8^D?

  35. 2013 SkS Weekly News Roundup #25A

    By the way, Joe Bastardi's on the hook over at Rolling Stone (comment stream): http://www.rollingstone.com/politics/news/the-10-dumbest-things-ever-said-about-global-warming-20130619

    Trying desperately to defend the main article's dig at him, and spreading it thick.

  36. Dumb Scientist at 02:52 AM on 21 June 2013
    The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.
    Given your new example, which I think is unrealistic in the shape of the total global mean temperature not having any trend before 1979 and most of the trend occurring after, I would not have chosen to have a linear function as a first guess in the multiple linear regression procedure. I would choose a monotonic function that looks like the total trend as a first guess, such as QCO2 discussed in part 1 of my post. Using your exact example and your exact method (with linear trend as a regressor for human)... [KK Tung]

    Actually, both of my simulations used the (nonlinear) exact human influence as a human regressor, specifically to avoid this objection. You can verify this by examining my code: "regression = lm(global~human_p+amo_p)". Since correcting this misconception might alter some of your claims, I'll wait to respond until you say otherwise.
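    The kind of Monte Carlo check being discussed here (fit a regression many times on synthetic data and count how often the 95% confidence interval brackets the true coefficient) can be sketched in Python. Everything below -- the regressor shapes, noise level, and sample size -- is invented purely for illustration; it mirrors the structure of the R call `lm(global~human_p+amo_p)`, not either author's actual data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(trials=2000, n=120, beta_human=1.0, beta_amo=0.5, sigma=1.0):
    """Fraction of trials in which the 95% confidence interval for the
    'human' coefficient of an OLS fit contains its true value."""
    t = np.linspace(0.0, 1.0, n)
    human = t ** 2            # hypothetical nonlinear "human" regressor
    amo = np.sin(6.0 * t)     # hypothetical oscillatory "AMO-like" regressor
    X = np.column_stack([np.ones(n), human, amo])
    XtX_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(trials):
        y = beta_human * human + beta_amo * amo + rng.normal(0.0, sigma, n)
        b = XtX_inv @ X.T @ y                    # OLS coefficients
        resid = y - X @ b
        s2 = resid @ resid / (n - X.shape[1])    # residual variance
        se = np.sqrt(s2 * XtX_inv[1, 1])         # std. error of 'human' coeff
        if abs(b[1] - beta_human) <= 1.98 * se:  # t(0.975, df=117) ~ 1.98
            hits += 1
    return hits / trials

print(coverage())   # close to 0.95 when the model is correctly specified
```

    When the fitted model matches the data-generating process, coverage sits near the nominal 95%; mis-specifying a regressor (e.g. substituting a linear trend for the nonlinear human signal) is exactly what would pull that number down, which is the point at issue in the exchange above.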

  37. grindupBaker at 02:17 AM on 21 June 2013
    2013 SkS Weekly News Roundup #25A

    "Antarctic melting from underneath". Obviously. I presume it's useful for projection data if they can quantify it, to project the rate once it really gets going. Since ocean temp is 4.05 an increase to 5.55 to 7.05 for CO2x2 (depending on whether, say, 2.0-3.0 Celsius is final CO2x2 radiative balance restored after ocean equilibrium and whether oceans dissolve enough CO2 to slow it) should have an effect considering the huge proportional increase above the freeze/melt point of water (presumbly the -1.9C for sea water). Balmaseda, Trenberth, and Källén (2013) asserts 200+-40 ZettaJoules added to oceans since 1958. Since 11,000 to 17,000 ZettaJoules must be added to oceans before the oceans will permit the surface to restore its radiative balance of CO2x2 for my example +2.0-3.0 Celsius, it would be interesting to know how much ice melt for the trivial 200 ZettaJoules thus far.

    "Global warming appears to have slowed lately" Plumer, Wonkblog, Washington Post states "the “missing heat” may be lurking in the deep layers, 700 meters below the surface" but SKS Posted on 25 March 2013 by dana1981states categorically "A new study of ocean warming has just been published in Geophysical Research Letters by Balmaseda, Trenberth, and Källén (2013)." and "...has been found in the deep oceans...". What is the certainty of this paper and if >90%, say, then why is Plumer, Wonkblog saying "may be lurking". The slope of B,T & K (2013) indicates 0.85 wm**-2 average 2000-2010. This seems crystal clear. I understand that the buoys' data of prior decades likely has suspect accuracy, but typically these random errors cancel well to near-zero if a large enough statistical sample is used and I see no reason why buoys' data of prior decades would affect the slope 2000-2010.

    On the same topic, when are you educated bods going to tell media suits and the public what "global warming" is, so that this nonsense stops? Typically for science, what the subject is would be outlined fairly early in a discussion of the science subject, not 20 years after "the science is settled".

  38. ShaneGreenup at 00:44 AM on 21 June 2013
    New paper on agnotology and scientific consensus

    "Bedford suggests how how examining and refuting misinformation is actually a powerful way to teach climate science, sharpen critical thinking skills and raise awareness of the scientific method."

    Is this a good time to mention http://rbutr.com again?

  39. Eric (skeptic) at 00:32 AM on 21 June 2013
    Citizens Climate Lobby - Pushing for a US Carbon Fee and Dividend

    Sphaerica and Dumb Scientist, thanks for the replies.  I like DS's conclusion as expressed in this sentence from the second link: "If competitiveness provisions were to be used as a sweetener to enable the adoption of domestic climate legislation, the WTO consistency of such provisions is, therefore, crucial."

    If it works, it reduces the need for a rigid global carbon-fee agreement, which probably would not pass; it incorporates the carbon issue into broader trade agreements, which gives it more weight; and it incentivizes every country to raise their own fee. In poorer countries, it seems to me that the workers there would effectively get a wage increase based on the energy intensity of what their country produces.

  40. Dikran Marsupial at 23:59 PM on 20 June 2013
    Human CO2 is a tiny % of CO2 emissions

    Following on from what Daniel wrote, it is worth adding that it would be a good idea to try to limit further deforestation of the tropics, for many reasons, including CO2!

    There have also been attempts at seeding the oceans with nutrients to try and increase uptake by marine biota, but it didn't seem to help much.  The link below discusses a chance experiment following volcanic activity, but I seem to recall this type of seeding being tried deliberately as well.

    http://www.scientificamerican.com/article.cfm?id=seeding-atlantic-ocean-with-volcanic-iron-did-little-to-lower-co2

    At the end of the day, cutting down fossil fuel use is likely to be easier and cheaper for the foreseeable future.

  41. CO2 effect is saturated

    Thanks, Tom.  This material needs to be worked into some sort of category level collection point -- e.g. WUWT Debunkings or WWWT (Watts Wrong With That).  I am most anxious to read Stealth's response, as s/he is a prime candidate for developing an authentic case of DK.  

  42. New paper on agnotology and scientific consensus

    As someone whose level of ignorance is absolute in virtually every aspect of human knowledge, it would be utterly hypocritical of me to even consider castigating people for their ignorance of climate change science.

    At the other end of the spectrum, I do have some sympathy for those who, through painstaking hard work, have become expert in a subject and, as a consequence, find themselves unable to refrain from a modicum of arrogance when speaking to us lesser mortals.

    What sets this "debate" apart is the astonishing arrogance with which some people unwittingly demonstrate their abject ignorance of the subject matter. The adamantine self-confidence which accompanies utter twaddle has to be seen to be believed. I live in a village on Dartmoor (SW England) where the two best-selling newspapers are The Telegraph and The Mail, so readers of SkS can probably well imagine the level of self-opinionated garbage that is spoken about Climate Change in these parts.

    I have been trying for some time - funny, my wife just chuckled as she walked past the screen - to ascertain if there is a word in the English language to describe this weird amalgam of arrogance and ignorance.

    Perhaps the above post might show the way with a suitable neologism.

    "Arragnophobia"  Noun: the abnormal fear of being revealed to know much less than one pretends to know. (Although it does sound as though one is somewhat scared of spiders.)

  43. The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.

    Continuing from my post 178:  Given your new example, which I think is unrealistic in that the total global mean temperature has no trend before 1979 and most of the trend occurs after, I would not have chosen a linear function as a first guess in the multiple linear regression procedure.  I would choose a monotonic function that looks like the total trend as a first guess, such as QCO2 discussed in part 1 of my post.

    Using your exact example and your exact method (with a linear trend as the regressor for human influence), we repeated your experiment 10,000 times, and found that the true human answer lies within the 95% confidence level of the estimate 94% of the time.  This is using the linearly detrended n_atlantic as the AMO index, unsmoothed as in your original example. If this AMO index is smoothed, the success rate drops to 33%.  In our PNAS paper we used a smoothed AMO index and we also looked at the unsmoothed index (though not published), and in that realistic case there is only a small difference between the result obtained using the smoothed index and that using the unsmoothed index.  In your unrealistic case this rather severe sensitivity is a cause for alarm, and this is the time to try a different method, such as the wavelet method, for verification.

  44. The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.

    In reply to Dumb Scientist’s post 153: We applaud Dumb Scientist for grounding your example in aspects of the real observation. By doing so you have come up with the first credible challenge to our methodology. Our criticism of your original example was mainly that the noise in your N. Atlantic data was the same as the noise in the global mean data. In fact, they came from the same realization. This is extremely unrealistic, because the year-to-year wiggles in the N. Atlantic line up with those in the global mean. Much of the year-to-year regional variation comes from redistribution or transport of heat from one region to another in the real case, and these are averaged out in the global mean. We argued in our PNAS paper that it is the low-frequency component of the regional variability that has an effect on the global mean. So although you tried to match the high correlation of the two quantities in the observations, this was accomplished by the wrong frequency part of the variance. In my post 124 I offered two remedies to the problem of the noise being almost the same in your example in post 117: (1) increase the regional noise from 0.1 to 0.3. This created a difference between the N. Atlantic data and the global mean data. Here you said you do not like this modification because it makes the variance too large. (2) Keep the noise amplitudes the same as what you proposed, but draw the noise for the regional data from a different realization of the random number generator than the noise for the global mean. If you agree with the amplitudes of the noise in your previous example, then we can proceed with this example. Your only concern in this case was that the correlation coefficient between the N. Atlantic and global data is 0.64, a bit smaller than the observed case of 0.79: “That looked more realistic but the average correlation coefficient over 10,000 runs was 0.64±0.08, which is too small.” I suggest that we do not worry about this small difference. 
Your attempt to match them using the wrong part of the frequency makes the example even less realistic.  We performed 10,000 Monte Carlo simulations of your example, and found that the true value of the anthropogenic response, 0.17 C per decade, lies within the 95% confidence interval of the MLR estimate 94% of the time. So the MLR is successful in this example. If you do not believe our numbers you can perform the calculation yourself to verify. If you agree with our result please say so, so that we can bring that discussion to a close before we move to a new example. Lack of closure is what confuses our readers.
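    The coverage experiment described above can be sketched in a few lines. This is a minimal illustration only: the synthetic data, the sinusoidal stand-in for the AMO index, and the noise levels are my assumptions, not the actual series or code discussed in this thread.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2012)
t = (years - years[0]) / 10.0                    # decades since 1850
amo = np.sin(2 * np.pi * (years - 1850) / 70.0)  # hypothetical oscillatory regressor

true_trend = 0.17                                # "true" anthropogenic rate, C/decade
n_runs, hits = 10_000, 0
for _ in range(n_runs):
    noise = rng.normal(0.0, 0.1, years.size)     # assumed global-mean noise level
    temp = true_trend * t + 0.1 * amo + noise    # synthetic global-mean series

    # MLR with an intercept, a linear trend, and the AMO-like regressor
    X = np.column_stack([np.ones_like(t), t, amo])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta

    # 2-sigma confidence interval on the trend coefficient
    sigma2 = resid @ resid / (len(t) - X.shape[1])
    se_trend = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] - true_trend) < 2 * se_trend:
        hits += 1

print(f"coverage: {hits / n_runs:.0%}")
```

    When the regression model matches the data-generating process, as in this sketch, the true trend falls inside the 2-sigma interval roughly 95% of the time; the disputed cases in this thread concern what happens when it does not.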

    You casually dismissed the wavelet method as “curve-fit”. Wavelet analysis is a standard method for data analysis. In fact most empirical methods in data analysis can be “criticized” as “curve-fit”. The MLR method that you spent so much of your time on is a least-squares best-fit method, so it is also "curve-fit".  For your examples and all the cases discussed so far, the estimation of the true anthropogenic response by the wavelet method is successful. When in doubt we should always try multiple methods to verify the result.

    In post 153, you created yet another example. This example is even more extreme in that the true anthropogenic warming is a seventh-order polynomial, up from the fifth-order polynomial in your original example in post 117 and the second-order polynomial in Dikran Marsupial’s examples. This is unrealistic since in this example most of the anthropogenic warming since 1850 occurs post-1979; before that it is flat. This cannot be justified even if we take all of the observed increase in temperature as anthropogenically forced. It also increases faster than the known rates of increase of the greenhouse gases. You decreased the standard deviation of the global noise of your original example by half. You took my advice to use a different draw of the random number generator for n_atlantic, but you reduced the variance from your original example.

    From your first sentence, "My Monte Carlo histograms estimated the confidence intervals", we can infer that you must have used a wrong confidence interval (CI). We had not realized until now that you were using a wrong CI. The real observation is one realization, and it is the real observation that Tung and Zhou (2013) applied the multiple linear regression (MLR) to. There is no possibility of having 10,000 such parallel real observations from which to build a histogram and estimate your confidence interval! So the CI that we were talking about must be different, and it must be applicable to a single realization. Our MLR methodology uses a single realization to first come up with the “adjusted data”, obtained by adding the residual back to the regressed anthropogenic response, as discussed in part 1 of my post. The adjusted data can be interpreted as the anthropogenic response plus climate noise. If the procedure is successful, the deduced adjusted data should contain the real anthropogenic response. For the hypothetical case where you know the true anthropogenic response, one needs a metric for comparing the adjusted data, which is wiggly, with the true value, which is smooth. One way to compare them is to fit a linear trend to a segment of the adjusted data and compare that trend with the corresponding trend of the true anthropogenic response. The segment chosen is usually the last 33 years or the last 50 years. In fitting such a linear trend by least squares we obtain a central value (the mean) and deviations from the mean. Two standard deviations from the mean constitute the confidence interval (CI) of that estimate. If the true value lies within the CI of the estimate, we say the estimate is correct at the 95% confidence level. This is done for each realization. Over many realizations, we can then say how often the estimate is correct at the 95% confidence level.

  45. New paper on agnotology and scientific consensus

    So then someone who loves agnophilia would be an agnophiliac, which actually sounds like a disease :-).

  46. CO2 effect is saturated

    I have been looking more carefully at the PDF which is the detailed explanation of the WUWT story which is the basis of Stealth's comments.  The inconsistency and, frankly, the dishonesty of the author, Ed Hoskins, is shown in the fifth chart of the PDF (page 3).  It purports to show the expected temperature response to increases in CO2 according to a group of "skeptics" (Plimer, Carter, Ball, and Archibald), and three "IPCC assessments" by three authors.  It also shows a "IPCC average", but that is not the average value from any IPCC assessment, but rather the average of the three "IPCC assessments" by the three authors.

    The first thing to note about this chart is that it gets the values wrong.  Below are selected values from the chart, with the values as calculated using the standard formula for CO2 forcing, and using their 100-200 value as a benchmark for the temperature response:

    Concentration    Skeptic  Lindzen  Kondratjew  Charnock  “IPCC” Mean  IPCC
    100-200          0.29     0.56     0.89        1.48      0.98         3
    200-300          0.14     0.42     0.44        1.34      0.73
    Calc 200-300     0.17     0.33     0.52        0.87      0.57         1.76
    400-1000         0.15     0.7      1.19        1.78      1.22
    Calc 400-1000    0.38     0.74     1.18        1.96      1.29         3.97

    The "Calc" values are those calculated using the standard formula for radiative forcing, with a climate sensitivity factor determined by the claimed temperature response for a doubling of CO2 from 100-200 ppmv.  The '"IPCC" Mean' column is the mean of the three prior columns.
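    The "Calc" rows follow directly from the logarithmic forcing relation: if a doubling from 100 to 200 ppmv gives a response of ΔT2x, then any interval C1 to C2 gives ΔT2x · ln(C2/C1)/ln(2). A quick check, using each author's claimed 100-200 value from the table as the benchmark:

```python
import math

# Claimed warming for the 100->200 ppmv doubling, per the chart (C)
doubling_response = {"Skeptic": 0.29, "Lindzen": 0.56,
                     "Kondratjew": 0.89, "Charnock": 1.48}

def delta_t(c1, c2, per_doubling):
    """Logarithmic forcing: the response scales with ln(c2/c1)/ln(2) doublings."""
    return per_doubling * math.log(c2 / c1) / math.log(2)

for name, dt2x in doubling_response.items():
    print(name,
          round(delta_t(200, 300, dt2x), 2),   # "Calc 200-300" row
          round(delta_t(400, 1000, dt2x), 2))  # "Calc 400-1000" row
```

    Running this reproduces the "Calc" rows (0.17, 0.33, 0.52, 0.87 for 200-300 ppmv, and 0.38, 0.74, 1.18, 1.96 for 400-1000 ppmv), confirming that the chart's own tabulated values are inconsistent with the logarithmic formula.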

    Clearly the values in the table are not consistent with the standard formula, typically overestimating the response from 200-300 ppmv, and underestimating the response from 400-1000 ppmv.  That pattern, however, is not entirely consistent, being reversed in the case of Kondratjew.  Other than that odd inconsistency, this is just the same misrepresentation of temperature responses shown in my 211 above.

    More bizarre is the representation of the IPCC by Lindzen, Kondratjew and Charnock.  As can be seen, their values, and the mean of their values significantly underrepresent the best estimate of the IPCC AR4 of 3 C per doubling of CO2.  That is a well known result, and the misrepresentation can have no justification.  It especially cannot have any justification given that neither Kondratjew nor Charnock are authors (let alone lead authors) of any relevant chapter in the IPCC AR4.  Nor are they cited in any relevant chapter of the IPCC AR4.  Presenting their work as "IPCC assessments" is, therefore, grossly dishonest.

    Moving on, Hoskins shows another chart on page 2, which helps explain at least one cause of his error.  It is a reproduction of a chart produced by David Archibald, purportedly showing the temperature response for successive 20 ppmv increases in CO2 concentration.  In his article, Archibald claims it is a presentation, in bar graph form, of a chart posted by Willis Eschenbach on Climate Audit:

     

    As a side note, the forcing shown is 2.94 log(CO2)+233.6, and hence the modtran settings used do not correspond to the global mean forcing.  The method used by Eschenbach, therefore, cannot produce a correct value for the global mean forcing of CO2.  As it happens, his values produce a forcing of 2 W/m^2 per doubling of CO2, and hence underestimate the true forcing by 46%.  Note, however, that it does rise linearly for each doubling of CO2, so Hoskins has not even mimicked Eschenbach accurately.
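    The arithmetic behind those figures: a fitted curve of 2.94·ln(CO2) + 233.6 implies 2.94·ln(2) ≈ 2.0 W/m^2 per doubling, against the standard 5.35·ln(2) ≈ 3.7 W/m^2 (the Myhre et al. expression), a shortfall of roughly 45-46% depending on rounding:

```python
import math

eschenbach_slope = 2.94   # W/m^2 per unit ln(CO2), from the fitted curve
standard_slope = 5.35     # W/m^2 per unit ln(CO2), standard formula

esch_per_doubling = eschenbach_slope * math.log(2)    # ~2.0 W/m^2
std_per_doubling = standard_slope * math.log(2)       # ~3.7 W/m^2
shortfall = 1 - esch_per_doubling / std_per_doubling  # ~0.45

print(round(esch_per_doubling, 2), round(std_per_doubling, 2),
      round(shortfall * 100), "%")
```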

    Far more important is that it is a plot of the downward IR flux at ground level with all non-CO2 green house gases (including water vapour) present.  The IPCC, however, defines 'radiative forcing' as "... the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values".  (My emphasis.)

    It does so for two reasons.  First, the theory of radiative forcing is essentially a theory about the energy balance of the planet.  Therefore it is not the downward radiation at the surface that is at issue, but the balance between incoming and outgoing radiation at the top of the atmosphere.  

    Second, the temperature at the tropopause and at the surface are bound together by the lapse rate.  Therefore any temperature increase at the tropopause will be matched by a temperature increase at the surface.  Given reduced outward radiation at the tropopause, the energy imbalance between incoming solar radiation and outgoing IR radiation will result in warming at the surface and intermediate levels of the atmosphere.  Adjustments in the rate of convection driven by temperature differences will reestablish the lapse rate, maintaining the same linear relationship between tropopause and surface temperature (ignoring the lapse rate feedback).  The net effect is that the same effective temperature increase will occur at all levels, resulting in a larger downward radiation at the surface than the initial change at either the tropopause or the surface.

    So, Eschenbach (and Hoskins) derive their values incorrectly because they simply do not understand the theory they are criticizing, a theory which is accepted without dispute by knowledgeable "skeptics" such as Lindzen and Spencer.  They are in the same boat of denying simple physics as the "skydragon slayers" whom Watts excoriates.  Watts, however, publishes pseudo-scientific claptrap on the same level as the "skydragon slayers" on a daily basis, because he also is completely ignorant of the theory he so vehemently rejects.

  47. Daniel Bailey at 10:54 AM on 20 June 2013
    Human CO2 is a tiny % of CO2 emissions

    Juanss, due to albedo changes, replacing the world's agricultural areas with trees will not necessarily be of any aid in stopping global warming. Scientist Ken Caldeira has shown that replanting all available boreal forests and even mid-latitude temperate forests will lead to warming.

    Only replanting all tropical areas with trees produces cooling. And there simply isn't enough such area to be effective (an area greater than the surface area of the United States would need to be replanted, and no such sizable area exists). Only a drastic reduction in CO2 emissions will have any effect.

    First Link

    Second Link

    Third Link

    Fourth Link

  48. Dikran Marsupial at 08:33 AM on 20 June 2013
    The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.

    Prof Tung@175 As it happens I was using the word "unobservable" in its usual everyday meaning, i.e. "not accessible to direct observation". If the meaning were not clear to you, a better approach would be to ask what it meant, rather than make an incorrect assumption leading to yet another misunderstanding.

    The physical process of AMO is not (currently) accessible to direct observation, instead it is deduced from Atlantic SSTs. Therefore in my thought experiment I said that D was unobservable to parallel the fact that we don't observe the true AMO. Trying to get round this restriction by Fourier analysis is clearly just violating the purpose of the thought experiment rather than engaging with it. 

    Prof. Tung@176 writes "Please take a look at his figure in Dikran's post 158, the true A in red is entirely within the estimate, in green."

    The green is not the estimate, as I have already pointed out, the confidence interval on the regression coefficient is, and the true value is not within it.

    I have already pointed out that the offset on the green signal is arbitrary and essentially meaningless.  It is common statistical practice to subtract the means from variables before performing the regression, in which case the red curve is not in the spread of the green signal anyway.  That is what I did the first time.  For the second graph I changed the offset at Prof. Tung's request, to show that it made no difference to whether the true value was in the confidence interval or not. 

    In this case, we are looking at the time variable T.  Should it make a difference to the result if we start measuring time from 0AD or 1969 or 1683 or 42BC?  No, of course not, the point where we start measuring time is arbitrary (unless perhaps we use the date of the big bang).  Thus it is perfectly reasonable to center (subtract the mean from) the time variable, as I did.

  49. The anthropogenic global warming rate: Is it steady for the last 100 years? Part 2.

    In reply to Bob Loblaw in post 174:

    How did you come to the conclusion "iii) Dikran's example shows that Dr. Tung's methodology fails to come up with the correct answer (which was known because Dikran created it)." ?  I thought we just showed in my post 172 that it was incorrect for him to draw that conclusion.  If you have evidence that Dikran's example shows that our methodology fails to come up with the correct answer, please point it out to me.

    Please take a look at the figure in Dikran's post 158: the true A, in red, is entirely within the estimate, in green.  I tried to be even more conservative than Dikran, and say this successful estimate is only one realization.  We went on to look at 10,000 realizations, and found that this success occurs 70% of the time.  This is using his convoluted example unchanged.  When we cleared up some of the convolution, the success rate goes above 90%.  Given this, how did you still come to the conclusion that his example showed that our methodology failed?

  50. New paper on agnotology and scientific consensus

    Agnophilia: 1. The love or promotion of culturally-induced ignorance or doubt, particularly the publication of inaccurate or misleading scientific data.

    Agnophile: 1. A person who consciously indulges in agnophilia.

    Agnomaniac: 1. A person who indulges in agnophilia to insane, irrational or inordinate extents.

    Some new fancy names for the extreme forms of denialism and deniers.

© Copyright 2024 John Cook