
Comments 36101 to 36150:

  1. One Planet Only Forever at 12:49 PM on 6 June 2014
    The Skepticism In Skeptical Science

    This is indeed a thorough and clear presentation clarifying the intent of the "Skeptical" in Skeptical Science.

    However, the challenge remains that many people will be strongly inclined to believe something that suits their interests. And that confirmation bias, the tendency to seek out and accept information suiting a pre-determined belief and to disbelieve anything contradicting that preferred belief, can be very strong in a person who wants to obtain the most possible benefit from the burning of fossil fuels. And if that person also strongly believes that "everyone else has their own equally strong confirmation bias", then they can discount or dismiss this entire presentation as just an extended biased presentation.

    That potential for the development of a strong bias in people who want to benefit from clearly unacceptable activity appears to be the best explanation for the ease with which the deliberate misleaders get "traction" for their rather weak and easily refuted criticisms of the science. It would also explain the persistence of many of the weakest and most thoroughly debunked criticisms of the science, including the criticism that Skeptical Science is not unbiased because equal space is not given to criticism of the best understanding to date. All that happens on Skeptical Science can be seen as criticism of what strongly biased people prefer to believe.

    A person cannot be convinced of anything against their will. But it would be nice if leaders stood up and spoke out to better inform the entire population, even if doing so could be "unpopular" among the voters whose support they hope to rely on.

  2. Models are unreliable

    Also, if you are actually a skeptic, how about trying some of that skepticism on the disinformation sites you seem to be reading.

  3. Models are unreliable

    What makes you think uncertainty is your friend? Suppose the real sensitivity is 4.5, or 8? You seem to be overemphasising uncertainty, seeking minority opinions to rationalize a "do nothing" predisposition. Try citing peer-reviewed science instead of distortions by deniers.

    Conservatively, we know how to live in a world with slow rates of climate change and 300 ppm of CO2. 400 ppm was last seen when we didn't have ice sheets.

    I agree the science might change; the Flying Spaghetti Monster or the Second Coming might happen instead, but that is not the way to do policy. I doubt you would take that attitude to uncertainty in medical science if it came to treating a personal illness.

    Moderator Response:

    [JH] Please specify to whom your comment is directed.  

  4. Models are unreliable

    Winston2014:

    You can propose, suppose, or, as you say, "show" as many "factors that aren't modelled that may very well be highly significant" as you like.

    Unless and until you have some cites showing what they are and why they should be taken seriously, you're going to face some serious... wait for it... skepticism on this thread.

    You can assert you're "playing the proper role of a skeptic" if you like. But as long as you are offering unsupported speculation about "factors" that might be affecting model accuracy, in lieu of (a) verifiable evidence of such factors' existence, (b) verifiable evidence that climatologists and climate modellers haven't already considered them, and (c) verifiable evidence that they are "highly significant", I think you'll find that your protestations of being a skeptic will get short shrift.

    Put another way: there are by now 15 pages of comments on this thread alone, stretching back to 2007, of self-styled "skeptics" trying to cast doubt on or otherwise discredit climate modelling. I'm almost certain some of them have also resorted to appeals to "factors that aren't modelled that may very well be highly significant", without doing the work of demonstrating that these appeals have a basis in reality.

    What have you said so far that sets you apart?

    Moderator Response:

    [JH] Please specify to whom your comment is directed.  

  5. Models are unreliable

    "Can I suggest that we ignore Winstons most recent post. The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gasses in the atmosphere."

    I didn't say they necsessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet. 

  6. Models are unreliable

    "I suggest DNFTT"

    I'm not trolling. It's called playing the proper role of a skeptic, which is asking honest questions. The emphasis below is mine.

    Here's my main point in which I am in agreement with David Victor, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for _policy_. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty."

  7. Models are unreliable

    "BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement repeated after that 12min video that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from mutiple lines of evidence, e.g. paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?""

    Equilibrium Climate Sensitivity

    http://clivebest.com/blog/?p=4923

    Excerpt from comments:

    "The calculation of climate sensitivity assumes only the forcings included in climate models and do not include any significant natural causes of climate change that could affect the warming trends."

    If it's all about ECS and ECS is "determined" via the adjustment of models to track past climate data, how are models and their degree of accuracy irrelevant?

  8. Other planets are warming

    Dear LarianLeQuella, you stated "I suggest that people who think that the sun is responsible, and cite warming on other planets become familiar with the Inverse Square Law. ;)" Nice winky emoticon, but I am wondering how this is applicable, given that Jupiter's orbit has not changed - not to mention that it is much farther from the Sun than Earth is... shouldn't its weather be near constant given your argument? [Indeed it has had very consistent storm patterns in the PAST, rings of weather and the famous red spot.] Obviously the intensity of the sun's output diminishes over distance (the distance squared is still just a relationship to distance; this is much more relevant for gravity)... how would this NOT be the case? Specifically, whether the sun is causing warming or cooling (and I suspect COOLING, due to its decreasing MAGNETIC FIELD, not increasing solar output), it is still THE driving force of weather on every planet in the solar system. It's silly to imply that the amount of water vapor or CO2 on Jupiter has been drastically changing over the past few years while the Sun's output was kept constant.

    Moderator Response:

    [DB] Thank you for your attempts to dialogue with Larian, but Larian has not posted since that comment on this thread, back in 2008. It was a one-off, with no intent to engage anyone.
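
    For reference, the inverse-square relationship under discussion is easy to make quantitative, using Jupiter's mean distance of about 5.2 AU:

```latex
F(d) = \frac{L_\odot}{4\pi d^2}
\quad\Longrightarrow\quad
\frac{F_{\mathrm{Jupiter}}}{F_{\mathrm{Earth}}}
  = \left(\frac{1\,\mathrm{AU}}{5.2\,\mathrm{AU}}\right)^{2}
  \approx \frac{1}{27},
\qquad
\Delta F_{\mathrm{Jupiter}}
  = \frac{\Delta L}{L}\,F_{\mathrm{Jupiter}}
  \approx \frac{\Delta F_{\mathrm{Earth}}}{27}.
```

    The same fractional change in solar output therefore delivers roughly 27 times less absolute forcing at Jupiter than at Earth, which is what citing the law is meant to convey.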

  9. CollinMaessen at 06:42 AM on 6 June 2014
    The Skepticism In Skeptical Science

    Thanks heb0, I'm leaving on a short vacation tomorrow. I'll take a look at it as soon as I'm back.

  10. Richard Tol accidentally confirms the 97% global warming consensus

    Despite the broad agreement between the abstract analysis and the author self-rating survey, and between C13 and previous studies, there remains the possibility that the SkS abstract raters introduced a systematic bias. The only definitive way to test for this is for more "skeptical" raters to repeat all or part of the abstract analysis.

    They can easily do this, using the online tool that is provided here, or by doing their own study. It is a pity that nobody has tried this yet; obviously, it's not something any of us can do.

    In reality, all of the raters in the TCP study were aware of this potential source of bias and made efforts to be as impartial as possible. For that reason, I doubt that different raters would get substantially different results. I suspect that many of our critics know this too.  

    A shortcut for our critics would be to identify the rejection abstracts that we missed; there's no need to look at the rest of the database. There are long lists of rejection papers on the internet, and these could be used to search for them in our sample. If our critics could show that, say, we missed or misapplied the ratings for 300 rejection abstracts, then Professor Tol would be vindicated. It shouldn't be that hard to do. Our paper is easy, in principle, to falsify. The fans of Karl Popper should be pleased.

  11. Richard Tol accidentally confirms the 97% global warming consensus

    If we assume the paper authors accurately categorized their own research, the fact that our abstract ratings arrived at the same result (97%) is another strong indication that we did not introduce any significant bias.

    As I noted, we were very conservative in our approach, tending to err on the side of 'no position' or 'rejection' where possible.  Again, if anything our estimate was probably biased low, but the fact that it matches the author self-ratings adds confidence that our result was pretty darn accurate.  It's certainly not wrong by several percent.

  12. Dikran Marsupial at 02:13 AM on 6 June 2014
    Models are unreliable

    Can I suggest that we ignore Winston's most recent post. The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gases in the atmosphere.

    Winston has not provided this, and I suspect he cannot (which in itself would answer the question of whether they were included in the models and why). Winston's post comes across as searching for some reason, any reason, to criticize the models, and is already clutching at straws. I suggest DNFTT.


  13. Mark Harrigan at 02:08 AM on 6 June 2014
    The Skepticism In Skeptical Science

    Thanks, enjoyed this.  Readers might also be interested in what ethicist Lawrence Torcello has to say on this

    http://philpapers.org/rec/TORTEO-2

    Torcello points out that "actual skepticism is about positive inquiry and critical thinking, as well as proportioning one’s beliefs to the available evidence (not to mention being willing to alter those beliefs if and when the evidence changes significantly). Pseudoskepticism, on the contrary, makes a virtue of doubt per se, regardless of other considerations, and is therefore irrational.
    He also says:
    (1) Ethical obligations of inquiry extend to every voting citizen insofar as citizens are bound together as a political body;
    (2) It is morally condemnable to put forward unwarranted public assertions contrary to scientific consensus when such consensus is decisive for public policy and legislation;
    (3) It is imperative upon educators, journalists, politicians and all those with greater access to the public forum to condemn, factually and ethically, pseudoskeptical assertions without equivocation.
    Thus healthy skepticism includes refusing to condemn something as false unless it can be shown to be false. Someone with healthy skepticism may doubt something if it has not been proven to be true, but he would not condemn it as false unless he obtained verifiable evidence that it is false."


    What does Judith Curry have to say? http://judithcurry.com/2014/06/05/what-is-skepticism-anyway/. Perhaps less inspiring.

  14. Models are unreliable

    Have these important discoveries been included in models? Considering that it is believed that bacteria generated our initial oxygen atmosphere, a bacterium that metabolizes methane should be rather important when considering greenhouse gases. As climate changes, how many more stagnant, low-oxygen water habitats for them will emerge?

    Bacteria Show New Route to Making Oxygen

    http://www.usnews.com/science/articles/2010/03/25/bacteria-show-new-route-to-making-oxygen

    Excerpt:

    Microbiologists have discovered bacteria that can produce oxygen by breaking down nitrite compounds, a novel metabolic trick that allows the bacteria to consume methane found in oxygen-poor sediments.

    Previously, researchers knew of three other biological pathways that could produce oxygen. The newly discovered pathway opens up new possibilities for understanding how and where oxygen can be created, Ettwig and her colleagues report in the March 25 (2010) Nature.

    “This is a seminal discovery,” says Ronald Oremland, a geomicrobiologist with the U.S. Geological Survey in Menlo Park, Calif., who was not involved with the work. The findings, he says, could even have implications for oxygen creation elsewhere in the solar system.

    Ettwig’s team studied bacteria cultured from oxygen-poor sediment taken from canals and drainage ditches near agricultural areas in the Netherlands. The scientists found that in some cases the lab-grown organisms could consume methane — a process that requires oxygen or some other substance that can chemically accept electrons — despite the dearth of free oxygen in their environment. The team has dubbed the bacteria species Methylomirabilis oxyfera, which translates as “strange oxygen-producing methane consumer.”

    --------

    Considering that many plants probably evolved at much higher CO2 levels than found at present, the result of this study isn't particularly surprising, but has it been included in climate models? Have the unique respiration changes with CO2 concentration for every type of plant on Earth been determined, and can the percentage of ground cover of each type be projected as climate changes?

    High CO2 boosts plant respiration, potentially affecting climate and crops

    http://www.eurekalert.org/pub_releases/2009-02/uoia-hcb020609.php

    Excerpt:

    "There's been a great deal of controversy about how plant respiration responds to elevated CO2," said U. of I. plant biology professor Andrew Leakey, who led the study. "Some summary studies suggest it will go down by 18 percent, some suggest it won't change, and some suggest it will increase as much as 11 percent." 

    Understanding how the respiratory pathway responds when plants are grown at elevated CO2 is key to reducing this uncertainty, Leakey said. His team used microarrays, a genomic tool that can detect changes in the activity of thousands of genes at a time, to learn which genes in the high CO2 plants were being switched on at higher or lower levels than those of the soybeans grown at current CO2 levels.

    Rather than assessing plants grown in chambers in a greenhouse, as most studies have done, Leakey's team made use of the Soybean Free Air Concentration Enrichment (Soy FACE) facility at Illinois. This open-air research lab can expose a soybean field to a variety of atmospheric CO2 levels – without isolating the plants from other environmental influences, such as rainfall, sunlight and insects.

    Some of the plants were exposed to atmospheric CO2 levels of 550 parts per million (ppm), the level predicted for the year 2050 if current trends continue. These were compared to plants grown at ambient CO2 levels (380 ppm).

    The results were striking. At least 90 different genes coding the majority of enzymes in the cascade of chemical reactions that govern respiration were switched on (expressed) at higher levels in the soybeans grown at high CO2 levels. This explained how the plants were able to use the increased supply of sugars from stimulated photosynthesis under high CO2 conditions to produce energy, Leakey said. The rate of respiration increased 37 percent at the elevated CO2 levels.

    The enhanced respiration is likely to support greater transport of sugars from leaves to other growing parts of the plant, including the seeds, Leakey said.

    "The expression of over 600 genes was altered by elevated CO2 in total, which will help us to understand how the response is regulated and also hopefully produce crops that will perform better in the future," he said.

    --------

    I could probably spend days coming up with examples of greenhouse gas sinks that are most likely not included in current models. Unless you fully understand a process, you cannot accurately "model" it. If you understand, or think you understand, 1,000 factors about the process but there are another 1,000 factors you only partially know about, don't know about, or have incorrectly deemed unimportant in a phenomenally complex process, there is no possibility whatsoever that your projections from the model will be accurate, and the further out you go in your projections, the less accurate they will probably be.

    The current climate models certainly do not integrate all the forces that create changes in the climate, and there are who knows how many more factors that have not even been recognized yet. I suspect there are a huge number of them, if the newly discovered climate-relevant factors reported almost weekly are anything to judge by. Too little knowledge, and too few data points or proxy data points of uncertain accuracy, lead to a "Garbage in - Garbage models - Garbage out" situation.

  15. Models are unreliable

    "The answer to that question is: models output, even if they fail, is irrelevant here."

    Not in politcs and public opinion, which in the real world is what drives policy when politicians respond to the dual forces of lobbyists and the desire to project to the voting public that they're "doing something to protect us." Model projections of doom drive the public perception side. The claim that policy is primarily driven by science is, I think, terribly niave. If that were the case, the world would be a wonderfully different place.

    "Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry picks the lower "skeptic" half of ECS uncertainty only. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5degrees - up to 8degrees - although with low probability."

    But isn't climate sensitivity uncertainty what it's all about?

    "Incidentally, concentrating on models' possible failure due to warming overestmation (as in your 12min video) while ignoring that models may also fail (more spectacularly) by underestimating over aspects of global warming (e.g. arctic ice melt), indicates cherry picking on a single aspect only that suits your agenda."

    Exactly and skeptics can then use that to point out that the projections themselves are likely garbage. No one has yet commented on the rather damning paper in that respect I posted a link to:

    Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

    Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

    SCIENCE * VOL. 263 * 4 FEBRUARY 1994

    Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.

    http://courses.washington.edu/ess408/OreskesetalModels.pdf

    Also:

    Twenty-three climate models can't all be wrong...or can they?

    http://link.springer.com/article/10.1007/s00382-013-1761-5

    Climate Dynamics
    March 2014, Volume 42, Issue 5-6, pp 1665-1670
    A climate model intercomparison at the dynamics level
    Karsten Steinhaeuser, Anastasios A. Tsonis

    According to Steinhaeuser and Tsonis, today "there are more than two dozen different climate models which are used to make climate simulations and future climate projections." But although it has been said that "there is strength in numbers," most rational people would still like to know how well this specific set of models does at simulating what has already occurred in the way of historical climate change, before they would be ready to accept what the models predict about Earth's future climate. The two researchers thus proceed to do just that. Specifically, they examined 28 pre-industrial control runs, as well as 70 20th-century forced runs, derived from 23 different climate models, by analyzing how well the models did in hind-casting "networks for the 500 hPa, surface air temperature (SAT), sea level pressure (SLP), and precipitation for each run."

    In the words of Steinhaeuser and Tsonis, the results indicate (1) "the models are in significant disagreement when it comes to their SLP, SAT, and precipitation community structure," (2) "none of the models comes close to the community structure of the actual observations," (3) "not only do the models not agree well with each other, they do not agree with reality," (4) "the models are not capable to simulate the spatial structure of the temperature, sea level pressure, and precipitation field in a reliable and consistent way," and (5) "no model or models emerge as superior."

    In light of their several sad findings, the team of two suggests "maybe the time has come to correct this modeling Babel and to seek a consensus climate model by developing methods which will combine ingredients from several models or a supermodel made up of a network of different models." But with all of the models they tested proving to be incapable of replicating any of the tested aspects of past reality, even this approach would not appear to have any promise of success.

  16. Models are unreliable

    On the costs of mitigation, an IEA Special Report "World Energy Investment" is just out that puts the mitigation costs in the context of the $48 trillion investments required to keep the lights on under a BAUesque scenario. They suggest the additional investment required to allow a +2ºC future rather than a +4ºC BAUesque future is an extra $5 trillion on top.

  17. Pierre-Emmanuel Neurohr at 01:21 AM on 6 June 2014
    President Obama gets serious on climate change

    "Pierre, people without the massive anti-American chip on their shoulder might take "world leader" to mean 'leader capable of acting on the world stage' rather than 'leader of the world'."

    Compare with this (at the end of the text):

    "One reason why this is important is it helps set the U.S. as the world leader."

    To point out that the American system is based on overconsumption of raw materials and energy is not exactly controversial to any rational observer. To find that a person who points this out has a "massive anti-American chip" logically confirms the first point.

    When it comes to reducing GHG pollution, the ones most responsible cannot be credible, by definition. One way of changing that is to think a little bit about this overconsumption mania - maybe reduce it ??? - before trying to come up with fixes to fuel it.

  18. Richard Tol accidentally confirms the 97% global warming consensus

    Terrific analysis. This is not denial; rather, it is displacement or distraction. Argumentation by written hand-waving.

  19. Models are unreliable

    Winston2014,

    Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry picks the lower "skeptic" half of ECS uncertainty only. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5 degrees - up to 8 degrees - although with low probability.

    The cost of global warming is highly non-linear - very costly at the high end of the tail - essentially a catastrophe above 4 degC. Therefore, in order to formulate the policy response, you need to convolve the probability function with the potential cost function, integrate it, and compare the result with the cost of mitigation.

    Because we can easily adapt to changes up to, say, 1 degC, the cost of low sensitivity is almost zero - it does not matter. What really matters is the long tail of the potential warming distribution, because its high cost - even at low probability - results in high risk, demanding a serious preventative response.
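
    As an illustration of that convolve-and-integrate step, here is a minimal numerical sketch; the sensitivity distribution and damage function below are assumed purely for illustration, not taken from any study:

```python
import numpy as np

# Illustrative (assumed) probability density over equilibrium climate
# sensitivity: lognormal-shaped, median near 3 degC, long upper tail.
ecs = np.linspace(0.5, 10.0, 2000)
dx = ecs[1] - ecs[0]
pdf = np.exp(-(np.log(ecs) - np.log(3.0)) ** 2 / (2 * 0.35 ** 2)) / ecs
pdf /= pdf.sum() * dx                       # normalise to integrate to 1

# Illustrative damage function: negligible below ~1 degC (easy adaptation),
# steeply non-linear above it.
damage = np.where(ecs > 1.0, (ecs - 1.0) ** 3, 0.0)

# Expected damage: integral of damage(s) * p(s) ds.
expected = (damage * pdf).sum() * dx

# Contribution of the low-probability tail above 4.5 degC:
tail = ecs > 4.5
tail_share = (damage[tail] * pdf[tail]).sum() * dx / expected
print(f"expected damage: {expected:.2f} (arbitrary units)")
print(f"share contributed by the >4.5 degC tail: {tail_share:.1%}")
```

    Even though the tail carries little probability mass, the steep damage term lets it dominate the expectation, which is the point being made above.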

    BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement repeated after that 12min video that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from multiple lines of evidence, e.g. paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?" question. The answer to that question is: models' output, even if they fail, is irrelevant here.

    Incidentally, concentrating on models' possible failure due to warming overestimation (as in your 12min video) while ignoring that models may also fail (more spectacularly) by underestimating other aspects of global warming (e.g. arctic ice melt), indicates cherry picking on a single aspect only that suits your agenda. If you were not biased in your objections, you would have noticed that models' departure from observations is much larger in the case of sea ice melt than in the case of surface temps, and would have concentrated your critique on that aspect.

  20. Dikran Marsupial at 22:35 PM on 5 June 2014
    There's no correlation between CO2 and temperature

    Razo wrote "I cant help but thinking they may not have a common cause."

    that is pretty irrational, given that we know that both anthropogenic and natural forcings have changed over the last century, and that neither can explain both sets of warming.  So for that hypothesis to be correct, virtually everything we know ablut natural and anthropogenic forcings must be wrong.  Personally I'd say the hypothesis was wrong.

    "When I look at the modeling results of natural forcing only, like in the intermediate rebuttal of the 'models are unreliable page', niether period is modelled well."

    It has already been explained to you that this is likely an artefact of the baselining.  The fact that both periods are reasonably well modelled by including both natural and anthropogenic forcings kind of suggests that the two periods do not have a common cause.

    "but has there been any kind of study comparing these two periods?"

    Try the IPCC report; the chapter the figure was taken from is a good start.

  21. There's no correlation between CO2 and temperature

    Razo @39, yes, there have been such studies. Many of them, in fact, and they are summarized in the IPCC.

    In summary the results are:

    1) The early twentieth century warming was of shorter duration, and had a lower trend, than the late twentieth century warming;

    2) During the early twentieth century warming, volcanic forcing, solar forcing and anthropogenic forcings were all positive relative to the preceding decades, and had similar magnitudes;

    3) During the late twentieth century warming, volcanic forcing and solar forcing were both negative relative to preceding decades, while anthropogenic forcings were strongly positive.

  22. Richard Tol accidentally confirms the 97% global warming consensus

    michael sweet @11, Cook et al found 3896 abstracts endorsing the consensus and 78 rejecting it, an endorsement rate of 98% (excluding abstracts indicating no opinion, or uncertainty). To drop that endorsement rate below 95% requires that 121 abstracts rated as endorsing the consensus be rerated as neutral, and 121 rated as neutral be rerated as rejecting the consensus. If more endorsing abstracts are rerated as neutral, fewer neutral abstracts need be rerated as rejecting to achieve so low a consensus rating. If endorsing abstracts are reduced by 40%, no increase in rejecting abstracts is needed to reduce the consensus rate to 95%. (You will notice that pseudoskeptics do spend a lot of time trying to argue that endorsement is overstated ;))
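
    A quick sketch of that rerating arithmetic, using the counts as given above (Python here is just a calculator):

```python
# Abstract counts as given above (endorse / reject; 'no position' excluded).
endorse, reject = 3896, 78

def rate(e, r):
    """Endorsement rate among abstracts taking a position."""
    return e / (e + r)

print(f"as rated:       {rate(endorse, reject):.2%}")              # ~98.0%

# Rerate 121 endorsements down to neutral AND 121 neutrals down to
# rejections, as described above:
print(f"after rerating: {rate(endorse - 121, reject + 121):.2%}")  # ~95.0%
```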

    Anyway, the upshot is that I agree that a large bias by the SkS rating team is extraordinarily unlikely. Nevertheless, even a small bias coupled with other sources of error could lift an actual consensus rate from just below 95% to 97% with relative ease, both by inflating the endorsement papers and simultaneously deflating rejection papers. For instance, taking the prima facie bias shown by comparison with author ratings (discussed above), and correcting for it in the abstract ratings, drops the endorsement ratings to 3834 while lifting the rejection papers to 176, giving an endorsement percentage of 95.6%. Dropping another 1% due to "uncertain" papers, and 0.5% due to other error factors, brings the consensus rate down to 94.1%. As previously stated, I do not think the prima facie bias shown in the author comparisons should be accepted at face value. There are too many confounding factors. But we certainly cannot simply exclude it from the range of possible bias effects.

    As for the range of uncertainty I allow, I work in units of 5% because the uncertainty is too poorly quantified to pretend to greater precision (with the exception of error due to internal rating inconsistency, which has now been quantified by Cook and co-authors). I extend the range of plausible consensus ratings down to 90% because, in calculating the effects of numerous skeptical and pseudo-skeptical objections, I have never come up with a consensus rating lower than 90%; and I think it is unlikely (<33% subjective probability) to be below 95%, because of the size of the biases needed to get a consensus rate less than that. I think it is important to be aware, and to make people aware, of the uncertainty in the upper range (95-99%), so that they do not have an unrealistic idea of the precision of the result, and of the unlikely but possible lower range (90-95%), so that people are aware how little significance attaches to the obsessive attacks on the consensus paper result, very well summarized by CBDunkerson above.

  23. Richard Tol accidentally confirms the 97% global warming consensus

    Chriskoz @9, the author ratings are drawn from a subset of the papers which had an abstract rating, and that subset is not representative in that it is heavily weighted towards more recent papers. Further, authors had access to far more information (the complete paper, plus their known intentions) than did abstract raters. Further, author ratings may have been self-selected towards people with stronger views on AGW, either pro or anti, or towards both extremes. Finally, authors may have been more confused about interpretation of the rating criteria than abstract raters, who had the advantage of more copious explanation through the ability to direct questions to the lead author and discuss the responses. It is also possible that author raters are biased against endorsement due to scientific reticence, or that abstracts are biased with respect to papers in terms of rejections, due to "skeptical" scientists deliberately keeping abstracts innocuous to ensure publication of conclusions portrayed as rejecting AGW, either in brief comments in the conclusion or in press releases. Together these factors make direct comparison of author and abstract ratings difficult, and not susceptible to precise conclusions.

    One of those factors can be eliminated with available data by comparing author ratings with only those abstract ratings on papers that actually received an author rating. If we do so we get the following figures:

                Abstract   Author
    endorse          787     1338
    neutral         1339      759
    reject            10       39

    Reduced to percentage terms, that represents a 98.75% endorsement rate among the subset of abstract ratings also rated by authors, and a 97.17% endorsement rate among the corresponding author ratings. The simplest interpretation would be that the abstract raters demonstrated a 1.6% bias in favour of endorsements, and a 125.7% bias against rejections. I would reject such an interpretation as too simplistic, ignoring as it does the other confounding factors. However, a small abstract-rating-team bias in favour of endorsements is certainly consistent with these results. Thus, the abstract rating team may have been biased in favour of endorsements. (Given available evidence it may also have been biased against endorsements, although with far less probability.)
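
    Those percentages follow directly from the table; a small sketch of the computation (the two "bias" figures reproduce the simplistic interpretation described, and rejected, above):

```python
# Paired ratings from the table above (same papers, two rating sources).
abstract = {"endorse": 787, "neutral": 1339, "reject": 10}
author   = {"endorse": 1338, "neutral": 759, "reject": 39}

def endorsement_rate(c):
    """Endorsement rate among papers taking a position (neutral excluded)."""
    return c["endorse"] / (c["endorse"] + c["reject"])

ra, rs = endorsement_rate(abstract), endorsement_rate(author)
print(f"abstract ratings: {ra:.2%}")   # 98.75%
print(f"author ratings:   {rs:.2%}")   # 97.17%

# The simplistic bias estimates quoted (and then rejected) above:
print(f"endorsement bias: {ra / rs - 1:+.1%}")   # ~+1.6%
reject_a = abstract["reject"] / (abstract["endorse"] + abstract["reject"])
reject_s = author["reject"] / (author["endorse"] + author["reject"])
print(f"rejection bias:   {reject_s / reject_a - 1:+.1%}")  # ~+125.7%
```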

    This in no way invalidates the author ratings as confirming the basic result of the abstract ratings, i.e., an endorsement rate relative to endorsements and rejections of >95%. The results give that value no matter how you slice it. But that should not be interpreted as confirmation of the precise figure from the abstract ratings (98%), or even confirmation of that figure within +/- 1%. Finally, even though the author ratings confirm a >95% figure, that does not preclude uncertainty from allowing realistic probabilities for values below 95%. After all, empirical "confirmations" are falsifiable, and the next study from impeccable sources may turn up a lower percentage.

  24. michael sweet at 21:25 PM on 5 June 2014
    Richard Tol accidentally confirms the 97% global warming consensus

    Tom,

    Additional very strong constraints can be placed on possible bias in the SkS raters. Since the authors of the paper set up a web site (referenced in the OP) that allows individuals to rate abstracts, lots of people have presumably rated papers. Skeptics could easily search for papers from people like Spencer, Lindzen or Poptech's list to find misrated papers. In addition, authors like Spencer who are skeptical could bring attention to their misrated papers. Dana could correct me, but I have only seen reports of fewer than 5 such misrated papers being found. About 30 are needed to lower the consensus to 92%. Tol found zero papers. It seems to me unlikely that enough papers have been misrated to lower the consensus even to 92%, with so few misrated papers found by those skeptics who have searched. I will note that in his congressional testimony Spencer said that he was part of the consensus. Presumably that means his papers had been misrated as skeptical and should be subtracted from the skeptical total.

    I think you are making a maximum estimate of error and underestimating the efforts made by the author team to check their ratings.

  25. President Obama gets serious on climate change

    Pierre, people without the massive anti-American chip on their shoulder might take "world leader" to mean 'leader capable of acting on the world stage' rather than 'leader of the world'. Just sayin'.

  26. Richard Tol accidentally confirms the 97% global warming consensus

    So... even with numerous mathematical errors, blatantly false assumptions, and generally slanting things to the point of absurdity... the 'best' Tol could do was claim that there is 'only' a ninety-one percent consensus?

    You've got to wonder what keeps them going... even when their own best spin proves them wrong they still just keep right on believing the nonsense.

  27. There's no correlation between CO2 and temperature

    Whenever I look at the air temperature data, my eye always falls on the two upward trends, from 1910 to 1943 and from 1970 to 2001. I keep finding myself thinking they have pretty much the same slope and the same duration. I can't help thinking they may not have a common cause.

    When I look at the modeling results of natural forcing only, like in the intermediate rebuttal of the 'models are unreliable' page, neither period is modelled well. The man-made-forcings-only model captures only the second increase. CO2 levels and increases are quite different in these two periods.

    I realize that weather and climate data can be variable and can make one imagine things, but has there been any kind of study comparing these two periods?

  28. Richard Tol accidentally confirms the 97% global warming consensus

    Tom@8,

    How do you reconcile your opinion that "pervasive bias by the SkS-recruited raters" may have inflated the results of Cook 2013 with the fact that the scientists' self-ratings confirmed the Cook 2013 findings at an even higher rate (98%+)? Shouldn't we rather conclude from that confirmation that the Cook 2013 findings are more likely biased low, rather than high as you suggest?

    Moderator Response:

    [DB] Fixed text per request.

  29. Richard Tol accidentally confirms the 97% global warming consensus

    Dana @6, thank you for the clarification.

    One thing detailed analysis of the data shows is a very slight, but statistically significant, trend towards more conservative ratings in both first and second ratings. The trend is larger in first ratings. That can be construed as an improvement in rating skill over time, or a degradation of performance. The former is more likely, given that third and later ratings (i.e., review and tie-break ratings) also tended to be more conservative than initial ratings. It may also be an artifact of the relative number and timing of ratings by individual raters, given that they rated at different rates. So, first, have you tested individual raters' results to see if they show a similar trend? And second, does your error term include allowance for this trend, either as a whole or for individual raters?

    More generally, I think a similar rating exercise to that in the Consensus Project, carried out by raters exclusively recruited from WUWT, would generate a substantially different consensus rate to that found in Cook et al (2013). We only need look at Poptech's contortions to see how willing pseudoskeptics are to distort their estimates. They do not do such rating exercises (or at least have not till now) because even with that massive bias they would find well over 50% endorsement of the consensus, and likely well over 80%, which would demolish their line about no consensus and potentially undermine the confidence of raters too much. Or at least that is what I believe. The crucial point, however, is that such a general bias among raters will not show up in internal consistency tests such as those used by Tol and, as I understand, by you to determine the error rate.

    Being objective, we must allow at least the possibility of equivalent pervasive bias by the SkS-recruited raters used for Cook et al. I think there is overwhelming evidence that we are not as biased as a similar cadre from WUWT would be, but that does not mean we are not biased at all. Such general bias within the raters cannot be tested for by internal estimates of error or bias. It can be partly tested for by external tests such as comparison with the self-ratings, but there are sufficient confounding factors in that test that, while we can say any such bias is not large, we cannot say it does not exist. It is the possibility of this bias (more than anything else) that leads me to reject a tightly constrained error estimate (+/- 1%).

  30. Richard Tol accidentally confirms the 97% global warming consensus

    I'm not sure if this is actually covered in previous responses to Tol's analysis, as I am skim-reading this during my lunch break, but I have an issue with his implicit assumption that the causes of categorisation discrepancies in those papers where the scorers initially disagreed are present at the same statistical distribution within the population of papers where there was initial concordance.

    By the very nature of the scoring process, the more ambiguous categorisations would 'self-select' and manifest as discordances, leaving those initial concordances more likely to be so than if the whole population was retested blind.

    Tol appears to be making assumptions of homogeneity where there is quite probably no basis for them. And inhomogeneity is a significant modifier of many analytical processes.

  31. Richard Tol accidentally confirms the 97% global warming consensus

    Tom, we used several methods to estimate the error. Using Tol's approach, it's ± 0.6 or 0.7%. Individual rater estimates of the consensus varied by about ± 1%, hence that's a conservative estimate. As you know, our approach was quite conservative, so if anything we may have been biased low. However, there's not much room above 97%.
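
    For readers wondering how an error bar like that can be estimated, one standard approach is bootstrap resampling over the rated abstracts. A minimal illustrative sketch, using assumed counts quoted elsewhere in this thread rather than the paper's actual rating records or methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative abstract ratings (assumed counts, similar to those quoted
# in this thread): 1 = endorse, -1 = reject, 0 = no position/uncertain.
ratings = np.array([1] * 3896 + [-1] * 78 + [0] * 7970)

def consensus(sample):
    """Endorsement share among abstracts that take a position."""
    positioned = sample[sample != 0]
    return (positioned == 1).mean()

# Bootstrap: resample abstracts with replacement and recompute the share.
boot = np.array([consensus(rng.choice(ratings, size=ratings.size))
                 for _ in range(2000)])
print(f"consensus {consensus(ratings):.2%} "
      f"+/- {2 * boot.std():.2%} (2-sigma bootstrap)")
```

    A resampling error bar of this kind only captures sampling noise; as the surrounding comments note, it cannot detect a consistent bias shared by all raters.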

  32. Richard Tol accidentally confirms the 97% global warming consensus

    In the OP it is stated:

    "Accounting for the uncertainties involved, we ultimately found the consensus is robust at 97 ± 1%"

    I assume that error margin is based on the uncertainties arising from an analysis of internal error rates (such as that used by Tol, and done correctly in the section with the quote). As such it does not include all sources of error, and cannot do so. It is possible, for example, that the raters displayed a consistent bias which would not be detected by that test. Thus that statement should not be interpreted as saying the consensus rate lies within 96-98% with 95% probability, but that certain tests constrain the 95% probability range to not be less than that. Allowing for all sources of potential error, it is possible that the actual consensus rate may even be in the low 90s percent, although it is likely in the mid to high 90s.

  33. Models are unreliable

    Victor -> Winston — where "Victor" came from, I have no idea.

  34. Models are unreliable

    Victor, when you say it's cheaper to adapt, you're falling into an either-or fallacy. Mitigation and adaptation are the extreme ends of a range of action. Any act you engage in to reduce your carbon footprint is mitigation. Adaptation can mean anything from doing nothing and letting the market work things out, to engaging in government-organized and subsidized re-organization of human life to create the most efficient adaptive situation. If you act only in your immediate individual self-interest, with no concern for how your long-term individual economic and political freedoms are constructed socially in complex and unpredictable ways, then your understanding of adaptation is probably the first of my definitions. If you do understand your long-term freedoms as being socially constructed, you might go for some form of the second, but if you do, you will--as Tom points out--be relying on some sort of model, intuitive or formal.

    Do you think work on improving modeling should continue?  Or should modeling efforts be scrapped? 

  35. Richard Tol accidentally confirms the 97% global warming consensus

    I notice that the list of 24 errors by Tol is not exhaustive.  In section 3.2 "signs of bias", Tol writes:

    "I run consistency tests on the 24,273 abstract ratings; abstracts were rated between 1 and 5 times, with an average of 2.03. I computed the 50-, 100- and 500-abstract rolling standard deviation, first-order autocorrelation – tests for fatigue – and rolling average and skewness – tests for drift."

    In fact, there were not 24,273 abstract ratings (strictly, abstract rating records) released to Tol, but 26,848. They are the records of all first ratings, second ratings, review ratings and tie-break ratings generated for the 11,944 abstracts rated for the paper. That Tol dropped 2,573 rating records from his analysis is neither explained nor acknowledged in the paper. That is clearly an additional (25th) error, and appears to go beyond error into misrepresentation of the data and analysis.

    Parenthetically, Tol is unclear about that number, claiming that "Twelve volunteers rated on average 50 abstracts each, and another 12 volunteers rated an average of 1922 abstracts each", a total of 22,464 abstracts. That is 1,424 fewer than the total of first and second ratings of the 11,944 abstracts, and is too large a discrepancy to be accounted for by rounding errors. He also indicates that abstracts were rated on average 2.03 times, yielding an estimate of 24,246 ratings. That is within rounding error of his erroneous claim of 24,273 ratings, but inconsistent with both his estimate of the number of ratings by volunteers and the actual number of rating records.

    Some clarification of why Tol included only 24,273 ratings is found in his blog, where he describes the same test as used in the paper, saying:

    "The graphs below show the 500-abstract rolling mean, standard deviation, skewness, and first-order autocorrelation for the initial ratings of the Consensus Project."

    The initial ratings are the first and second ratings for each abstract, of which there are 23,888. However, comparison of the 100-point average of the mean value with S6 from the paper shows the test to have been the same. A problem then arises in that his graph of "initial ratings" is not restricted to just first and second ratings. Consider the following figure:

    The middle graph is Tol's figure S6 as displayed at his blog. The top graph is the 100-point mean of all endorsement_final ratings from the rating records (the values actually graphed and analysed by Tol). As can be seen, Tol's graph is clearly truncated early. The third graph is the 100-point mean of endorsement_final ratings from all first and second ratings. Although identical at the start of the graph (of logical necessity), the end of the graph diverges substantially. That is because the first 24,273 ratings in chronological order do not include all first and second ratings (and do include a significant number of third, fourth and fifth ratings, i.e., review and tie-break ratings). So we have here another Tol mistake, though technically a mistake in the blog rather than the paper.

    Far more important is that without strictly dividing first ratings from second ratings, and excluding later ratings, it is not possible for Tol's analysis to support his conclusions. That is because, when selecting an abstract for a rater to rate, the rating mechanism selected randomly from all available abstracts not previously rated by that rater. Initially, for the first person to start rating, that meant all available abstracts had no prior rating. If we assume that that person rated 10 abstracts and then ceased, the next person to start rating would have had their ratings selected randomly from 11,934 unrated abstracts and 10 that had a prior rating. Given that second ratings were on average slightly more conservative (more likely to give a rating of 4) than first ratings, this alone would create a divergence from the bootstrapped values generated by Tol. Given that raters rated papers as and when they had time and inclination, and therefore did not rate at the same pace or time, or even at a consistent pace, the divergence from bootstrap values from this alone could be quite large. Given that raters could diverge slightly in the ratings they gave, there is nothing in Tol's analysis to show his bootstrap analyses are anything other than the product of that divergence and the differences in rating times and paces among raters. His conclusions of rater fatigue do not, and cannot, come from the analysis he performs, given the data he selects to analyse.

    This, then, is error 27 in his paper, or perhaps 27-30, given that he repeats the analysis of that data for standard deviation, skewness and autocorrelation, each of which tests is rendered incapable of supporting his conclusions due to his poor (and misstated) data selection.

  36. Models are unreliable

    Winston @734, the claim that the policies will be costly is itself based on models, specifically economic models.  Economic models perform far worse than do climate models, so if models are not useful "... for costly policies until the accuracy of their projections is confirmed", the model based claim that the policies are costly must be rejected. 

  37. Models are unreliable

    DSL,

    "Models fail. Are they still useful?"

    Not for costly policies until the accuracy of their projections is confirmed. From the 12 minute skeptic video, it doesn't appear that they have been confirmed to be accurate where it counts, quite the opposite. To quote David Victor again, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy."

    "Models will drive policy"

    Until they are proven more accurate than I have seen in my investigations thus far, I don't believe they should.

    The following video leads me to believe that even if model projections are correct, it would actually be far cheaper to adapt to climate change (according to official figures) than it would be to attempt to prevent it, based upon the "success" thus far of the Australian carbon tax:

    The 50 to 1 Project

    https://www.youtube.com/watch?v=Zw5Lda06iK0

  38. Dumb Scientist at 06:53 AM on 5 June 2014
    Richard Tol accidentally confirms the 97% global warming consensus

    It's astonishing that Energy Policy's "review" apparently didn't ask for a single example of the ~300 gremlin-conjured rejection abstracts.

    "If I submit a comment that argues that the Cook data are inconsistent and invalid, even though they are not, my reputation is in tatters." [Dr. Richard Tol, 2013-06-11]

    Not necessarily. Retracting errors is difficult but ultimately helps one's inner peace and reputation because it shows integrity and healthy confidence. When in a hole, stop digging.

  39. Models are unreliable

    Winston2014, two things:

    1. What does Victor's point allow you to claim?  By the way, Victor doesn't address utility in the quote.

    2. Oreskes' point is a no-brainer, yes? No one in the scientific community disagrees, or if they do, they do it highly selectively (hypocritically). Models fail. Are they still useful? Absolutely: you couldn't drive a car without using an intuitive model, and such models fail regularly. The relationship between climate models and policy is complex. Are models so inaccurate they're not useful? Can we wait till we get a degree of usefulness that's satisfactory to even the most "skeptical"? Suppose, for example, that global mean surface temperature rises at 0.28C per decade for the next decade. This would push the bounds of the AR4/5 CMIP3/5 model run ranges. What should the policy response be ("oh crap!")? What if that was followed by a decade of 0.13C per decade warming? What should the policy response be then ("it's a hoax")?

    Models will drive policy; nature will drive belief.  

  40. Models are unreliable

    Have the points in this video ever been addressed here?

    Climate Change in 12 Minutes - The Skeptics Case

    https://www.youtube.com/watch?v=vcQTyje_mpU

    From my readings thus far, I agree with this evaluation of the accuracy and utility of current climate models:

    Part of a speech delivered by David Victor of the University of California, San Diego, at the Scripps Institution of Oceanography as part of a seminar series titled “Global Warming Denialism: What science has to say” (Special Seminar Series, Winter Quarter, 2014):

    "First, we in the scientific community need to acknowledge that the science is softer than we like to portray. The science is not “in” on climate change because we are dealing with a complex system whose full properties are, with current methods, unknowable. The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty. I can understand why a politician says the science is settled—as Barack Obama did…in the State of the Union Address, where he said the “debate is over”—because if your mission is to create a political momentum then it helps to brand the other side as a “Flat Earth Society” (as he did last June). But in the scientific community we can’t pretend that things are more certain than they are."

    Also, any comments on this paper:

    Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

    Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

    SCIENCE * VOL. 263 * 4 FEBRUARY 1994

    Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.

    http://courses.washington.edu/ess408/OreskesetalModels.pdf

  41. The Skepticism In Skeptical Science

    CollinMaessen - Thanks a bunch for the help. I've sent an edited version through the Contact page. Hopefully the raw format isn't too inconvenient.

  42. Dikran Marsupial at 05:30 AM on 5 June 2014
    Models are unreliable

    Razo wrote "Thats a kind of calibration."

    sorry, I have better things to do with my time than to respond to tedious pedantry  used to evade discussion of the substantive points.  You are just trolling now. 

    Just to be clear, calibration or tuning refers to changes made to the model in order to improve its behaviour. Baselining is a method used in the analysis of the output of the model (which does not change the model itself in any way).
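
    A minimal sketch of that distinction, with made-up numbers standing in for model output; the point is that baselining happens entirely after the runs are finished:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 'model runs': same forced trend, different absolute offsets
# (models disagree on absolute temperature more than on the response).
years = np.arange(1900, 2015)
trend = 0.008 * (years - 1900)                       # common forced response
runs = np.stack([trend + offset + rng.normal(0, 0.1, years.size)
                 for offset in (13.2, 14.1, 14.9)])  # degC, arbitrary offsets

# Baselining: subtract each run's own 1961-1990 mean. No model is changed;
# the output is merely re-expressed as anomalies on a common reference.
ref = (years >= 1961) & (years <= 1990)
anomalies = runs - runs[:, ref].mean(axis=1, keepdims=True)

print(runs[:, -1])        # absolute values disagree by ~1.7 degC
print(anomalies[:, -1])   # anomalies agree closely
```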

    "I said, narrower is better if you want to  predict a number, or justify a trend. "

    No, re-read what I wrote. What you are suggesting is lampooned in the famous quote "he uses statistics in the same way a drunk uses a lamp post - more for support than illumination". The variability is what it is, and a good scientist/statistician wants to have as accurate an estimate as possible and then see what conclusions can be drawn from the results.

  43. Models are unreliable

    Dikran Marsupial  wrote "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons."

    That's a kind of calibration. I understand the need. People here were trying to tell me that no such thing was happening, and that it was just pure physics. I didn't know exactly how it was calculated, but I expected it. I know very well that "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute...".

     

    "a constant is subtracted from each model run."

    That's offsetting. Please don't disagree. Its practically the OED definition. Even the c in y=mx+c is sometimes called an offset.

     

    "neither broader nor narrower is "better", what you want is for it to be accurate"

    I said, narrower is better if you want to  predict a number, or justify a trend. I mean this regardless of the issue of the variance in the baseline region.

  44. Richard Tol accidentally confirms the 97% global warming consensus

    He wants to take Cook et al. down, but he has failed miserably. His error couldn't be more obvious - he created about 300 rejection abstracts out of thin air from a claimed 6.7% error rate, when in the entire sample we found only 78 rejection papers. This has been explained to Tol several times, in several different ways, and he still can't seem to grasp it.

  45. Doug Bostrom at 04:49 AM on 5 June 2014
    The Skepticism In Skeptical Science

    An excellent article, in my wholly unbiased opinion. :-)

    "Unless the doubt is removed by your friend showing you a picture of Morgan Freeman standing on his porch."

    And therein lies the point where we discover the difference between Collin's "so-called skeptic" and the "pseudo-skeptic."

    The so-called skeptic will rejoin with something along the lines of "I'll be; Morgan Freeman on your porch! Who'd a thunk it?" 

    The pseudo-skeptic will often follow the general path of first accusing you of having altered the photo, and then when you show it to be unaltered output from your digital camera will hypothesize that the digital camera manufacturer is in cahoots with you.  More generously, they might offer that Morgan Freeman is an astounding artifact of camera malfunction.

    Numerous variations abound on the overall theme of pseudo-skepticism, all having in common that they start as a straight line and then, if necessary, adopt the topography of a Klein bottle to avoid acknowledging the simply obvious.

  46. Richard Tol accidentally confirms the 97% global warming consensus

    Basically, it seems to me that Dr Tol doesn't really dispute the existence of the scientific consensus (either in the form of consilience of evidence or the form of consilience of opinion of practicing scientists).

    It appears, rather, that he wants to take down Cook et al because... well, because reasons. (At least that is the best I can come up with.)

  47. CollinMaessen at 03:36 AM on 5 June 2014
    The Skepticism In Skeptical Science

    You can always submit what you have via the contact page:
    http://skepticalscience.com/contact.php

    If you submit it there, I will eventually receive your feedback. You could also contact me directly via my website and start an email exchange about this:
    http://www.realsceptic.com/contact/

  48. Dikran Marsupial at 03:30 AM on 5 June 2014
    Climate is chaotic and cannot be predicted

    Razo, as I have pointed out, climate (the long term statistical properties of the weather) is not necessarily chaotic, even though the weather is. Climate models do not try to predict the behaviour of a chaotic system, but to simulate it.

    "How chaos could impact climate might be more like this, I think. If one could show that global warmimg is effecting the chaotic indicies that cause ElNino to the degree that it becomes a more frequent and long lasting event, ie, the regular weather, that could impact climate."

    El Niño is a mode of internal climate variability; it is one of the things that give rise to the spread of runs from a particular climate model, but (at least asymptotically) it does not affect the forced response of the climate (estimated by the ensemble mean), which is what we really want to know as a guide for policy.
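
    To make that concrete, here is a toy numerical sketch (Python, with all numbers invented - this is emphatically not a climate model) of how averaging across runs isolates the forced response while internal variability largely cancels out:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(2000, 2100)

        # Shared forced response: an imposed 0.02 C/year warming trend (invented)
        forced = 0.02 * (years - years[0])

        # 20 runs: same forcing, each with its own internal variability
        internal = rng.normal(0.0, 0.15, size=(20, years.size))
        runs = forced + internal

        # The ensemble mean tracks the forced response much more closely
        # than any single run does
        ensemble_mean = runs.mean(axis=0)
        print(np.abs(ensemble_mean - forced).max())   # small residual
        print(np.abs(runs[0] - forced).max())         # much larger for one run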

    You clearly know something about chaotic systems; however, your understanding of climate and of what climate models aim to do is fundamentally misguided. Please take some time to find out more about the nature of the problem, as otherwise you are contributing to the noise here, not the signal.

  49. Dikran Marsupial at 03:21 AM on 5 June 2014
    Models are unreliable

    Razo wrote "But I'm suprised that people use it when comparing models with the actual data."

    Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons. There is also the point that the observations are not necessarily observations of exactly the same thing as the models project (e.g. limitations in coverage), and baselining helps to compensate for that to an extent.

    "its quite another to offset model runs or different models to match the mean of the baseline."

    This is not what is done: a constant is subtracted from each model run and each set of observations independently, such that each has a zero offset during the baseline period. This is a perfectly reasonable thing to do in research on climate change, as it is the anomalies from a baseline in which we are primarily interested.
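
    For what it's worth, the arithmetic is trivial; a minimal sketch (Python; the array names and the 1961-1990 window here are purely illustrative):

        import numpy as np

        def baseline(series, years, start=1961, end=1990):
            """Subtract the series' own mean over the baseline period.
            This changes only how the output is displayed, not the model."""
            mask = (years >= start) & (years <= end)
            return series - series[mask].mean()

        # Hypothetical annual series; each run and each observational
        # dataset is baselined independently in exactly the same way
        years = np.arange(1900, 2015)
        run = np.cumsum(np.random.default_rng(1).normal(0.01, 0.1, years.size))
        anomaly = baseline(run, years)

        # By construction the anomaly has zero mean over the baseline period
        print(anomaly[(years >= 1961) & (years <= 1990)].mean())  # ~0.0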

    "You seem to be saying a higher variance is better. "

    No, I am saying that an accurate estimate of the variance (which is essentially an estimate of the variability due to unforced climate change) is better, and that the baselining process has the unfortunate side effect of artificially reducing the variance, which we ought to be aware of when making model-observation comparisons.
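
    That side effect is easy to demonstrate with synthetic data (again Python, all numbers invented): runs that differ mainly in their absolute temperatures collapse onto one another once each is baselined, so the apparent cross-run spread near the baseline period shrinks:

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1900, 2015)
        mask = (years >= 1961) & (years <= 1990)

        # 20 synthetic runs: shared trend, differing absolute temperature
        # (per-run offset), plus per-run noise
        runs = (0.01 * (years - 1900)
                + rng.normal(14.0, 0.5, size=(20, 1))
                + rng.normal(0.0, 0.1, size=(20, years.size)))

        # Baseline each run on its own 1961-1990 mean
        baselined = runs - runs[:, mask].mean(axis=1, keepdims=True)

        # Cross-run spread within the baseline window, before and after
        print(runs.std(axis=0)[mask].mean())        # dominated by the 0.5 offsets
        print(baselined.std(axis=0)[mask].mean())   # artificially much smaller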

    "Having the hiatus within the 95% confidence interval is a good thing, but a narrower interval is better if you want to more accurately predict a number, or justify a trend."

    No, you are fundamentally missing the point of the credible interval, which is to give an indication of the true uncertainty in the projection. It is what it is; neither broader nor narrower is "better", what you want is for it to be accurate. Artificially making the intervals narrower as a result of baselining does not make the projection more accurate, it just makes the interval a less accurate representation of the uncertainty.

    "Another thing to add, as I understand if the projected values are less than 3 times the variance, one says there is no result."


    No, one might say that the observations are consistent with the model (at some level of significance), however this is not a strong comment on the skill of the model.

    "If it is over 3 times one says there is a trend, and not until the ratio is 10, does one quote a value."

    No, this would not indicate a "trend" simply because an impulse (e.g. the 1998 El-Nino event) could cause such a result.  One would instead say that the observations were inconsistent with the models (at some level of significance).  Practice varies about the way in which significance levels are quoted and I am fairly confident that most of them would have attracted the ire of Ronald Aylmer Fisher.

    "Can one use thse same rules here?"

    In statistics, it is a good idea to clearly state the hypothesis you want to test before conducting the test as the details of the test depend on the nature of the hypothesis.  Explain what it is that you want to determine and we can discuss the nature of the statistical test.
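
    As an illustration only (synthetic numbers, and no substitute for properly specifying the hypothesis first), one such test might ask where the observed trend falls within the distribution of trends across the model runs:

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1998, 2014)

        # Hypothetical ensemble and observations (both invented here)
        model_runs = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, size=(50, years.size))
        obs = 0.005 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

        def trend(y):
            # Ordinary least squares slope, degrees per year
            return np.polyfit(years, y, 1)[0]

        run_trends = np.array([trend(r) for r in model_runs])
        obs_trend = trend(obs)

        # Fraction of runs with a trend at least as low as the observed one
        p = (run_trends <= obs_trend).mean()
        print(obs_trend, p)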

  50. The Skepticism In Skeptical Science

    This is an excellent article and will be a handy resource to link to every time someone gets pedantic about SkS and the meaning of "skepticism."

    This article is concise, engaging and--I think--convincing. However, it has a tremendous number of grammatical errors and awkward wordings; the first part especially could do with more liberal use of commas. It really should be combed over if it is intended to be a long-term reference. I wouldn't mind doing the proofreading myself, but I'm not sure of the best way to submit a proofread version.
