









Comments 36101 to 36150:

  1. Resources and links documenting Tol's 24 errors

    Someone mentioned on the previous thread that Dr Tol considered that admitting an error in his criticism of Cook et al 2013 would leave his reputation in tatters (or equivalent).

    As far as I can see, Dr Tol's perseverance in his criticisms, despite their many flaws, his ethically-dubious (if not outright unethical) use of hacked correspondence, and his apparent confabulation on the subject of rater fatigue, are altogether doing an excellent job of destroying his reputation in a way that admitting error could not.

    In fact, admitting that his compulsive attack on Cook et al 2013 is a great error and retracting it would go some way to restoring his reputation.

  2. Models are unreliable

    John Hartz:

    Further to your inquiry appended to my previous comment, as far as I can see I opened the comment up immediately by addressing it to Winston2014.

    Moderator Response:

    [JH] My bad. My comment was meant for another commenter, not you. I will make appropriate corrections. I apologize for the mistake.  

  3. Richard Tol accidentally confirms the 97% global warming consensus

    I am most appreciative of the link to data for the monumental literature review that has become popularly known as the “97% consensus.” I have no qualms with the analysis as presented by Cook et al., but given the data I cannot resist applying my own analysis to see what other inferences may be drawn from it.


    My method was to first reduce the data to include only abstracts that somehow endorse or reject AGW. Then, in order to statistically summarize individuals instead of articles, I created a database keyed by individual Author and Title, carrying through the Date, Category and Endorsement values. In this way I was able to identify the endorsement of individuals, and could eliminate duplicate counting due to multiple titles by the same author. This approach produced about 13,500 unique Author-Titles in my dataset.

    Thus, I am able to make the following inference:

    A literature survey of peer-reviewed articles published in the 20 year interval of 1991 through 2011 reveals that one-third of about 12,000 abstracts considered made some pronouncement on global warming. Of these, 73 titles generated by 188 authors quantify the human impact on the observed global warming as either being greater or less than 50%.

    This is a highly qualified but strong assertion, and is but one of many that can be gleaned from the data provided.
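    A minimal sketch of the de-duplication step described above, using invented records (the field layout and sample rows are placeholders for illustration, not the actual Cook et al. schema):

```python
# Hypothetical records: (authors, title, year, category, endorsement).
papers = [
    (["Smith", "Jones"], "Paper A", 1995, 2, "endorse"),
    (["Smith"],          "Paper B", 2001, 3, "endorse"),
    (["Lee"],            "Paper C", 2008, 4, "neutral"),
]

# Step 1: keep only abstracts that take a position on AGW.
position = [p for p in papers if p[4] != "neutral"]

# Step 2: expand to one row per (Author, Title) pair; the set removes
# duplicate counting of the same author-title combination.
author_titles = {(author, title)
                 for authors, title, *_ in position
                 for author in authors}

print(len(author_titles))  # 3: Smith-A, Jones-A, Smith-B
```

    The same reduction is a one-liner with a dataframe library, but a set of (author, title) pairs is the whole idea.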

  4. President Obama gets serious on climate change

    Pierre, maybe you could just set aside the bigoted attitude and discuss the facts?

    The sentence you cite from the end of the text disproves your point. It is clearly not saying that the U.S. (or its president) controls the world, but rather that the U.S. is taking a leading position in reducing GHG emissions. Yes, this is new and long overdue... the article explicitly says that too (i.e. "No longer is the U.S. the world laggard").

    Similarly, your claim that Americans would 'continue' emitting up to 20 t of CO2 per person per year even after these reductions is ridiculous given that U.S. emissions are lower than that now. U.S. per capita emissions peaked at 20 t for a single year back in 2000 and have fallen significantly since then. Contrary to your statements about Americans being "the most climate-destroying persons on Earth", there are actually several countries with higher emissions per capita... and countries with higher total emissions... and countries still increasing their emissions while the U.S. is decreasing its.

    Finally, your stated belief that even after this, "The US and its leader would continue to set a bad example in all matters related to the climate of the Earth" aptly demonstrates the absence of reason in your position. The U.S. just set a good example... banning power plants with high emissions. It is impossible to "continue" setting a bad example after doing the opposite. The country could theoretically roll back this change and resume setting a bad example, but you said it was setting a bad example, "even if this plan was actually successful"... and that's just nonsense.

  5. Models are unreliable

    Moderator's Comment:

    Both Razo and Winston2014 are playing a game I call "Trivia Pursuit": they posit trivial observations about climate models and expect other readers to pursue that trivia. They also gloss over or ignore the learned responses to their trivia provided by other readers. All things considered, they are both engaging in a form of concern trolling, and both are on the cusp of relinquishing their respective privilege of posting on the SkS comment threads.

  6. Models are unreliable

    Winston2014@,

    ECS is "determined" via the adjustment of models to track past climate data

    Not only. You are ignoring my previous comment asserting that ECS is determined by multiple lines of evidence, among them various paleo studies. Check for example here. In your reference (some "skeptic" blog) to the method of ECS estimation, we read:

    The new lower result is mainly due to the stalling in observed global temperatures since 1998 despite rising CO2 levels [...] In this post I focus on ECS and simply assume that GCM models are a correct description of climate. I then use HADCRUT4 temperature data to try to pin down ECS. Unlike the Otto et al. paper I will avoid using OHC data and simply assume an e-folding ocean heat capacity delay of 15 years (also based on models) to reach equilibrium

    (emphasis mine)

    I stopped reading after that. If the author acknowledges that ocean heat capacity has a large impact on surface temps but then ignores OHC in his calculation of ECS, he simply contradicts himself and undermines the validity of his calculations. And as we know, the multi-decadal ocean oscillations (ENSO, AMO) can and do influence short-term surface temp records (such as the one since 1998), so the surface temp data (just 4% of total heat content) is largely irrelevant to the total radiative balance.

    It's time for you to ditch such sources and move on to more reliable ones, if you want to discuss your point. Unless, of course, you don't want to be taken seriously.

  7. Richard Tol accidentally confirms the 97% global warming consensus

    Kevin C @18 & 19, first, I have never found it profitable to ignore you.

    Second, simply multiplying abstract ratings by the ratio of author ratings to their corresponding abstract ratings will simply reproduce the author rating percentages.  Doing so  thereby assumes that the rate of endorsement did not increase in time (given the temporal bias in author ratings).  It also assumes that the massive difference in neutral ratings between author ratings and abstract ratings is simply due to conservatism by abstract raters.  If instead we assume it is mostly due to the lack of information available in abstracts (almost certainly the primary reason), then we should require neutral ratings to be almost constant between the abstract ratings and abstract ratings adjusted for bias relative to author ratings.  The prima facie adjustment I used makes that assumption and only multiplies endorsements or rejections for their relative bias ratios, thereby keeping neutral ratings constant.

    I am not arguing that that is the uniquely correct approach.  I am arguing that it is a reasonable approach, and hence that reasonable assessments of the biases allow endorsement percentages below 95% with reasonable though low probability.

    Third, if you are using the values from Richard Tol's spreadsheet (as I am at the moment due to lack of access to my primary computer), you should note that the itemized values (i.e., totals for ratings 1 through 7) do not sum to the same values as the summary values (i.e., binned as endorsing, neutral or rejecting) in his spreadsheet. Further, the itemized values do not have the same totals for abstract ratings and author ratings. Using the summary values and applying your method, the endorsement percentage is 95.6% excluding all neutral papers, and drops to 95.3% if we include 0.5% of neutral papers as "uncertain".
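    For concreteness, a minimal sketch of the prima facie adjustment described above, which scales only the endorsement and rejection counts by their bias ratios while holding neutrals constant (the counts and ratios below are invented placeholders, not Tol's or Cook et al.'s actual figures):

```python
def adjusted_consensus(endorse, reject, endorse_bias, reject_bias):
    """Scale endorsement and rejection counts by their respective
    author/abstract bias ratios, keeping neutral ratings constant,
    and return the endorsement percentage among position-taking papers."""
    e = endorse * endorse_bias
    r = reject * reject_bias
    return 100.0 * e / (e + r)

# Invented placeholder numbers, purely to show the mechanics:
print(round(adjusted_consensus(1000, 30, 1.0, 2.0), 2))  # 94.34
```

    Note that because neutrals are held fixed and excluded from the percentage, only the endorse/reject bias ratios move the result.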

  8. Richard Tol accidentally confirms the 97% global warming consensus

    Oh, it was in the original paper. Ignore me...

  9. Dikran Marsupial at 18:20 PM on 6 June 2014
    Models are unreliable

    Winston wrote "I didn't say they necessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet."

    That is trolling.  Come back when you can think of a factor that is likely to have a non-negligible impact on climate that is not included in the model, and until then stop wasting our time by trying to discuss things that obviously aren't.

    "I'm not trolling. It's called playing the proper role of a skeptic which is asking honest questions."

    It isn't an honest question as you can't propose a factor that has a non-negligible impact on climate, you just posit that they exist with no support.  That is not skepticism.

  10. President Obama gets serious on climate change

    The simple fact of the matter is that reducing the (rate of) emissions will only slow down the speed of increase in the concentration of carbon dioxide in the atmosphere (currently about 2 ppm/year). So this action by the US will only slightly mitigate the future impact of climate change. The latest IPCC report had a section dealing with adaptation to the impacts. Ironically, New York authorities show more understanding of the reality: they are carrying out measures to protect the city from sea level rise.

    At least the US President is proposing some useful mitigation measures. Our Australian Prime Minister says climate change is bunkum and has cancelled mitigation measures instigated by the previous government.

  11. Resources and links documenting Tol's 24 errors

    So when can we expect an Auditor to write up their scathing audit of Tol (2014)? Especially since the method has a systematic bias which produces the same result even if the data is fed in the opposite way, since that allegation seems to be one of the Auditor's favorite bones to pick.

  12. Richard Tol accidentally confirms the 97% global warming consensus

    Tom: I just tried the naive test and set up a matrix of p(self|abstract), multiplied this by n(abstract) to get an estimate for n(self) for the whole dataset. If I haven't made any mistakes (I only spent 10 mins on it), it looks like this:

    1806.6 4435.8 4014.7 1337.4 217.2 60.6 71.7

    That's a consensus score of 96.7%. Most of the neutrals have gone one way or the other, so there are both more endorsements and more rejections.

    I had to drop multiple self ratings with fractional scores. There will be a few more multiple self ratings which lead to integral scores which should also be dropped. Better, all the self ratings should be included weighted according to the number of self ratings for the paper. The TCP team should have the data to do this.

    I also added 1 extra count in each diagonal self=abstract category to address the lack of self ratings for papers starting at 7. This will have no discernable effect on the heavily populated endorsement and neutral categories but tends to ensure that rejections stay as rejections. Increasing this number to 10 or 100 doesn't affect the result.

    There are only 6 rejection papers with integral self-ratings which leads to potentially large uncertainties, but I think that is addressed in a conservative manner by inflating the diagonal elements. An unquantifiable but probably bigger uncertainty will arise from self selection of respondents.

    (Apologies to anyone who has done this before - I have a feeling I may have seen it somewhere.)
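    As a check, the 96.7% consensus score follows directly from the estimated counts above, assuming the standard Cook et al. binning (categories 1-3 endorse, 4 is neutral, 5-7 reject):

```python
# Estimated self-rating counts for categories 1..7, from the comment above.
n_self = [1806.6, 4435.8, 4014.7, 1337.4, 217.2, 60.6, 71.7]

endorse = sum(n_self[:3])   # categories 1-3 (endorsements)
reject = sum(n_self[4:])    # categories 5-7 (rejections)

# Consensus score: endorsements as a share of position-taking papers.
consensus = 100.0 * endorse / (endorse + reject)
print(round(consensus, 1))  # 96.7
```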

  13. Models are unreliable

    Winston: "I didn't say they necessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet."


    This is why you're not being taken seriously.  You're comparing the potential impact of new bacterial growth sites over decades to centuries to the impact of the great oxygenation event, when a brand new type of life was introduced to the globe over a period of millions of years. 

    You fail to recognize the precise nature of the change taking place.  Rising sea level and changing land storage of freshwater will be persistent.  This is not a step change where we go from one type of land-water transitional space to another.  It is a persistently-changing transitional space.  Thus, adaptation in these spaces must be persistent.  How many suitable habitats will be destroyed for every one created?   It's inevitable that some--many--will be destroyed. 

    Further, what impact would additional oxygen mean for the radiative forcing equation?  Note that human burning of fossil carbon has been taking oxygen out of the atmosphere for the last 150 years.

    Your plant argument belongs on another page.

  14. One Planet Only Forever at 13:02 PM on 6 June 2014
    The Skepticism In Skeptical Science

    I wish to clarify that a person wishing to dismiss climate science because of a strong desire to obtain personal benefit can firmly believe that everyone else is similarly motivated. Their perspective could be that everyone else is just as strongly motivated to pursue the most possible benefit any way they can get away with. They would tend to seek out any possible evidence of such motivation, including believing that any action that would increase the cost to them of what they want to enjoy is just a devious unjustified action by those they have to fight against.

  15. One Planet Only Forever at 12:49 PM on 6 June 2014
    The Skepticism In Skeptical Science

    This is indeed a thorough and clear presentation clarifying the intent of skepticism at Skeptical Science.

    However, the challenge remains that many people will be strongly inclined to believe something that suits their interest. And that confirmation bias, the tendency to seek out and accept information suiting a pre-determined belief and disbelieve anything contradicting that preferred belief, can be very strong in a person who wants to obtain the most possible benefit from the burning of fossil fuels. And if that person also strongly believes that "everyone else has their own equally strong confirmation bias", then they can discount or dismiss this entire presentation as just an extended biased presentation.

    That potential for the development of a strong bias in people who want to benefit from clearly unacceptable activity appears to be the best explanation for the ease with which the deliberate misleaders get "traction" for their rather weak and easily refuted criticisms of the science. It would also explain the persistence of many of the weakest and most thoroughly debunked criticisms of the science, including the criticism that Skeptical Science is not unbiased because equal space is not given to criticism of the best understanding to date. All that happens on Skeptical Science can be seen as criticism of what strongly biased people prefer to believe.

    A person cannot be convinced of anything against their will. But it would be nice if leaders stood up and spoke out to better inform the entire population, even if doing so could be "unpopular" among the voters whose support they hope to rely upon.

  16. Models are unreliable

    Also, if you are actually a skeptic, how about trying some of that skepticism on the disinformation sites you seem to be reading.

  17. Models are unreliable

    What makes you think uncertainty is your friend? Suppose the real sensitivity is 4.5, or 8? You seem to be overemphasising uncertainty, seeking out minority opinions to rationalize a "do nothing" predisposition. Try citing peer-reviewed science instead of distortions by deniers.

    Conservatively, we know how to live in a world with slow rates of climate change and 300 ppm of CO2. 400 ppm was last seen when we didn't have ice sheets.

    I agree the science might change, and the Flying Spaghetti Monster or the Second Coming might happen instead, but that is not the way to do policy. I doubt you would take that attitude to uncertainty in medical science if it came to treating a personal illness.

    Moderator Response:

    [JH] Please specify to whom your comment is directed.  

  18. Models are unreliable

    Winston2014:

    You can propose, suppose, or, as you say, "show" as many "factors that aren't modelled that may very well be highly significant" as you like.

    Unless and until you have some cites showing what they are and why they should be taken seriously, you're going to face some serious... wait for it... skepticism on this thread.

    You can assert you're "playing the proper role of a skeptic" if you like. But as long as you are offering unsupported speculation about "factors" that might be affecting model accuracy, in lieu of (a) verifiable evidence of such factors' existence, (b) verifiable evidence that climatologists and climate modellers haven't already considered them, and (c) verifiable evidence that they are "highly significant", I think you'll find that your protestations of being a skeptic will get short shrift.

    Put another way: there are by now 15 pages of comments on this thread alone, stretching back to 2007, of self-styled "skeptics" trying to cast doubt on or otherwise discredit climate modelling. I'm almost certain some of them have also resorted to appeals to "factors that aren't modelled that may very well be highly significant", without doing the work of demonstrating that these appeals have a basis in reality.

    What have you said so far that sets you apart?

    Moderator Response:

    [JH] Please specify to whom your comment is directed.  

  19. Models are unreliable

    "Can I suggest that we ignore Winston's most recent post. The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gasses in the atmosphere."

    I didn't say they necessarily should be, my intent was to show the likely huge number of factors that aren't modelled that may very well be highly significant, just as were the bacteria that once generated most of the oxygen on this planet.

  20. Models are unreliable

    "I suggest DNFTT"

    I'm not trolling. It's called playing the proper role of a skeptic which is asking honest questions. Emphasis is mine.

    Here's my main point in which I am in agreement with David Victor, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for _policy_. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty."

  21. Models are unreliable

    "BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement repeated after that 12min video that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from mutiple lines of evidence, e.g. paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?""

    Equilibrium Climate Sensitivity

    http://clivebest.com/blog/?p=4923

    Excerpt from comments:

    "The calculation of climate sensitivity assumes only the forcings included in climate models and do not include any significant natural causes of climate change that could affect the warming trends."

    If it's all about ECS and ECS is "determined" via the adjustment of models to track past climate data, how are models and their degree of accuracy irrelevant?

  22. Other planets are warming

    Dear LarianLeQuella, you stated "I suggest that people who think that the sun is responsible, and cite warming on other planets become familiar with the Inverse Square Law. ;)" Nice winky emoticon, but I am wondering how this is applicable given that Jupiter's orbit has not changed, not to mention that it is much farther from the Sun than Earth is. Shouldn't its weather be near constant given your argument? (Indeed it has had very consistent storm patterns in the past: rings of weather and the famous red spot.) Obviously the effects of the sun's output diminish over distance (the distance squared is still just a relationship to distance; this is much more relevant for gravity); how would this not be the case? Whether or not the sun is causing warming or cooling (and I suspect cooling, due to its decreasing magnetic field, not increasing solar output), it is still THE driving force of weather on every planet in the solar system. It's silly to imply that the amount of water vapor or CO2 on Jupiter has been drastically changing over the past few years while the Sun's output was kept constant.

    Moderator Response:

    [DB] Thank you for your attempts to dialogue with Larian, but Larian has not posted since that comment on this thread, back in 2008. It was a one-off, with no intent to engage anyone.

  23. CollinMaessen at 06:42 AM on 6 June 2014
    The Skepticism In Skeptical Science

    Thanks heb0, I'm leaving on a short vacation tomorrow. I'll take a look at it as soon as I'm back.

  24. Richard Tol accidentally confirms the 97% global warming consensus

    Despite the broad agreement between the abstract analysis and the author self-rating survey, and between C13 and previous studies, there remains the possibility that the SkS abstract raters introduced a systematic bias. The only definitive way to test for this is for more "skeptical" raters to repeat all or part of the abstract analysis.

    They can easily do this, using the online tool that is provided here, or by doing their own study. It is a pity that nobody has tried this yet; obviously, it's not something any of us can do.

    In reality, all of the raters in the TCP study were aware of this potential source of bias and made efforts to be as impartial as possible. For that reason, I doubt that different raters would get substantially different results. I suspect that many of our critics know this too.

    A shortcut for our critics would be to identify the rejection abstracts that we missed; there's no need to look at the rest of the database. There are long lists of rejection papers on the internet, and these could be used to search for them in our sample. If our critics could show that, say, we missed or misapplied the ratings for 300 rejection abstracts, then Professor Tol would be vindicated. It shouldn't be that hard to do. Our paper is easy, in principle, to falsify. The fans of Karl Popper should be pleased.

  25. Richard Tol accidentally confirms the 97% global warming consensus

    If we assume the paper authors accurately categorized their own research, the fact that our abstract ratings arrived at the same result (97%) is another strong indication that we did not introduce any significant bias.

    As I noted, we were very conservative in our approach, tending to err on the side of 'no position' or 'rejection' where possible.  Again, if anything our estimate was probably biased low, but the fact that it matches the author self-ratings adds confidence that our result was pretty darn accurate.  It's certainly not wrong by several percent.

  26. Dikran Marsupial at 02:13 AM on 6 June 2014
    Models are unreliable

    Can I suggest that we ignore Winston's most recent post.  The discovery that bacteria have a novel pathway for generating oxygen should not be incorporated into climate models unless there is a good reason to suppose that the effects of this pathway are of sufficient magnitude to significantly alter the proportions of gasses in the atmosphere.

    Winston has not provided this, and I suspect he cannot (which in itself would answer the question of whether they were included in the models and why).  Winston's post comes across as searching for some reason, any reason, to criticize the models and is already clutching at straws. I suggest DNFTT.

     

  27. Mark Harrigan at 02:08 AM on 6 June 2014
    The Skepticism In Skeptical Science

    Thanks, enjoyed this.  Readers might also be interested in what ethicist Lawrence Torcello has to say on this

    http://philpapers.org/rec/TORTEO-2

    Torcello points out that "actual skepticism is about positive inquiry and critical thinking, as well as proportioning one’s beliefs to the available evidence (not to mention being willing to alter those beliefs if and when the evidence changes significantly). Pseudoskepticism, on the contrary, makes a virtue of doubt per se, regardless of other considerations, and is therefore irrational."
    He also says:
    (1) Ethical obligations of inquiry extend to every voting citizen insofar as citizens are bound together as a political body;
    (2) It is morally condemnable to put forward unwarranted public assertions contrary to scientific consensus when such consensus is decisive for public policy and legislation;
    (3) It is imperative upon educators, journalists, politicians and all those with greater access to the public forum to condemn, factually and ethically, pseudoskeptical assertions without equivocation.
    "Thus healthy skepticism includes refusing to condemn something as false unless it can be shown to be false. Someone with healthy skepticism may doubt something if it has not been proven to be true, but he would not condemn it as false unless he obtained verifiable evidence that it is false."

     

    And what does Judith Curry have to say? http://judithcurry.com/2014/06/05/what-is-skepticism-anyway/ Perhaps less inspiring.

  28. Models are unreliable

    Have these important discoveries been included in models? Considering that it is believed that bacteria generated our initial oxygen atmosphere, a bacterium that metabolizes methane should be rather important when considering greenhouse gases. As climate changes, how many more stagnant, low-oxygen water habitats for them will emerge?

    Bacteria Show New Route to Making Oxygen

    http://www.usnews.com/science/articles/2010/03/25/bacteria-show-new-route-to-making-oxygen

    Excerpt:

    Microbiologists have discovered bacteria that can produce oxygen by breaking down nitrite compounds, a novel metabolic trick that allows the bacteria to consume methane found in oxygen-poor sediments.

    Previously, researchers knew of three other biological pathways that could produce oxygen. The newly discovered pathway opens up new possibilities for understanding how and where oxygen can be created, Ettwig and her colleagues report in the March 25 (2010) Nature.

    “This is a seminal discovery,” says Ronald Oremland, a geomicrobiologist with the U.S. Geological Survey in Menlo Park, Calif., who was not involved with the work. The findings, he says, could even have implications for oxygen creation elsewhere in the solar system.

    Ettwig’s team studied bacteria cultured from oxygen-poor sediment taken from canals and drainage ditches near agricultural areas in the Netherlands. The scientists found that in some cases the lab-grown organisms could consume methane — a process that requires oxygen or some other substance that can chemically accept electrons — despite the dearth of free oxygen in their environment. The team has dubbed the bacteria species Methylomirabilis oxyfera, which translates as “strange oxygen producing methane consumer.”

    --------

    Considering that many plants probably evolved at much higher CO2 levels than found at present, the result of this study isn't particularly surprising, but has it been included in climate models? Have the unique respiration changes with CO2 concentration for every type of plant on Earth been determined, and can the percentage of ground cover of each type be projected as climate changes?

    High CO2 boosts plant respiration, potentially affecting climate and crops

    http://www.eurekalert.org/pub_releases/2009-02/uoia-hcb020609.php

    Excerpt:

    "There's been a great deal of controversy about how plant respiration responds to elevated CO2," said U. of I. plant biology professor Andrew Leakey, who led the study. "Some summary studies suggest it will go down by 18 percent, some suggest it won't change, and some suggest it will increase as much as 11 percent." 

    Understanding how the respiratory pathway responds when plants are grown at elevated CO2 is key to reducing this uncertainty, Leakey said. His team used microarrays, a genomic tool that can detect changes in the activity of thousands of genes at a time, to learn which genes in the high CO2 plants were being switched on at higher or lower levels than those of the soybeans grown at current CO2 levels.

    Rather than assessing plants grown in chambers in a greenhouse, as most studies have done, Leakey's team made use of the Soybean Free Air Concentration Enrichment (SoyFACE) facility at Illinois. This open-air research lab can expose a soybean field to a variety of atmospheric CO2 levels – without isolating the plants from other environmental influences, such as rainfall, sunlight and insects.

    Some of the plants were exposed to atmospheric CO2 levels of 550 parts per million (ppm), the level predicted for the year 2050 if current trends continue. These were compared to plants grown at ambient CO2 levels (380 ppm).

    The results were striking. At least 90 different genes coding the majority of enzymes in the cascade of chemical reactions that govern respiration were switched on (expressed) at higher levels in the soybeans grown at high CO2 levels. This explained how the plants were able to use the increased supply of sugars from stimulated photosynthesis under high CO2 conditions to produce energy, Leakey said. The rate of respiration increased 37 percent at the elevated CO2 levels.

    The enhanced respiration is likely to support greater transport of sugars from leaves to other growing parts of the plant, including the seeds, Leakey said.

    "The expression of over 600 genes was altered by elevated CO2 in total, which will help us to understand how the response is regulated and also hopefully produce crops that will perform better in the future," he said.

    --------

    I could probably spend days coming up with examples of greenhouse gas sinks that are most likely not included in current models. Unless you fully understand a process, you cannot accurately “model” it. If you understand, or think you understand, 1,000 factors about the process but there are another 1,000 factors you only partially know about, don't know about, or have incorrectly deemed unimportant in a phenomenally complex process, there is no possibility whatsoever that your projections from the model will be accurate, and the further out you go in your projections, the less accurate they will probably be.

    The current climate models certainly do not integrate all the forces that create changes in the climate, and there are who knows how many more factors that have not yet been recognized. I suspect there are a huge number of them, if the climate-relevant factors newly reported almost weekly are anything to judge by. Too little knowledge and too few data points (or proxy data points of uncertain accuracy) lead to a "Garbage in - Garbage models - Garbage out" situation.

  29. Models are unreliable

    "The answer to that question is: models output, even if they fail, is irrelevant here."

    Not in politics and public opinion, which in the real world is what drives policy, as politicians respond to the dual forces of lobbyists and the desire to project to the voting public that they're "doing something to protect us." Model projections of doom drive the public perception side. The claim that policy is primarily driven by science is, I think, terribly naive. If that were the case, the world would be a wonderfully different place.

    "Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry picks the lower "skeptic" half of ECS uncertainty only. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5degrees - up to 8degrees - although with low probability."

    But isn't climate sensitivity uncertainty what it's all about?

    "Incidentally, concentrating on models' possible failure due to warming overestimation (as in your 12min video) while ignoring that models may also fail (more spectacularly) by underestimating other aspects of global warming (e.g. arctic ice melt), indicates cherry picking on a single aspect only that suits your agenda."

    Exactly, and skeptics can then use that to point out that the projections themselves are likely garbage. No one has yet commented on the rather damning paper in that respect I posted a link to:

    Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

    Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

    SCIENCE * VOL. 263 * 4 FEBRUARY 1994

    Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary
    value of models is heuristic.

    http://courses.washington.edu/ess408/OreskesetalModels.pdf

    Also:

    Twenty-three climate models can't all be wrong...or can they?

    http://link.springer.com/article/10.1007/s00382-013-1761-5

    Climate Dynamics
    March 2014, Volume 42, Issue 5-6, pp 1665-1670
    A climate model intercomparison at the dynamics level
    Karsten Steinhaeuser, Anastasios A. Tsonis

    According to Steinhaeuser and Tsonis, today "there are more than two dozen different climate models which are used to make climate simulations and future climate projections." But although it has been said that "there is strength in numbers," most rational people would still like to know how well this specific set of models does at simulating what has already occurred in the way of historical climate change, before they would be ready to accept what the models predict about Earth's future climate. The two researchers thus proceed to do just that. Specifically, they examined 28 pre-industrial control runs, as well as 70 20th-century forced runs, derived from 23 different climate models, by analyzing how well the models did in hind-casting "networks for the 500 hPa, surface air temperature (SAT), sea level pressure (SLP), and precipitation for each run."

    In the words of Steinhaeuser and Tsonis, the results indicate (1) "the models are in significant disagreement when it comes to their SLP, SAT, and precipitation community structure," (2) "none of the models comes close to the community structure of the actual observations," (3) "not only do the models not agree well with each other, they do not agree with reality," (4) "the models are not capable to simulate the spatial structure of the temperature, sea level pressure, and precipitation field in a reliable and consistent way," and (5) "no model or models emerge as superior."

    In light of their several sad findings, the team of two suggests "maybe the time has come to correct this modeling Babel and to seek a consensus climate model by developing methods which will combine ingredients from several models or a supermodel made up of a network of different models." But with all of the models they tested proving to be incapable of replicating any of the tested aspects of past reality, even this approach would not appear to have any promise of success.

  30. Models are unreliable

    On the costs of mitigation, an IEA Special Report "World Energy Investment" is just out that puts the mitigation costs in the context of the $48 trillion investments required to keep the lights on under a BAUesque scenario. They suggest the additional investment required to allow a +2ºC future rather than a +4ºC BAUesque future is an extra $5 trillion on top.

  31. Pierre-Emmanuel Neurohr at 01:21 AM on 6 June 2014
    President Obama gets serious on climate change

    "Pierre, people without the massive anti-American chip on their shoulder might take "world leader" to mean 'leader capable of acting on the world stage' rather than 'leader of the world'."

    To be compared with (in the end of the text) :

    "One reason why this is important is it helps set the U.S. as the world leader."

    To point out that the American system is based on overconsumption of raw materials and energy is not exactly controversial to any rational observer. To find that a person who points this out has a "massive anti-American chip" logically confirms the first point.

    When it comes to reducing GHG pollution, the ones most responsible cannot be credible, by definition. One way of changing that is to think a little bit about this overconsumption mania - maybe reduce it ??? - before trying to come up with fixes to fuel it.

  32. Richard Tol accidentally confirms the 97% global warming consensus

    Terrific analysis. This is not denial; rather, it is displacement or distraction. Argumentation by written hand-waving.

  33. Models are unreliable

    Winston2014,

    Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry picks the lower "skeptic" half of ECS uncertainty only. It is silent about the upper long tail of ECS uncertainty, which goes well beyond 4.5degrees - up to 8degrees - although with low probability.

    The cost of global warming is highly non-linear - very costly at the high end of the tail, essentially a catastrophe above 4degC. Therefore, in order to formulate the policy response you need to convolve the probability function with the potential cost function, integrate it, and compare the result with the cost of mitigation.

    Because we can easily adapt to changes up to, say, 1degC, the cost of low sensitivity is almost zero - it does not matter. What really matters is the long tail of the potential warming distribution, because its high cost - even at low probability - results in high risk, demanding a serious preventative response.
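    As a rough numerical sketch of that risk calculation: the lognormal ECS density and cubic damage curve below are purely illustrative stand-ins (chosen only to be skewed and convex, not published values). The point is that the convolved expected cost is dominated by the low-probability, high-ECS tail.

```python
import math

# Hedged sketch only: ecs_pdf and damage are hypothetical placeholders.

def ecs_pdf(x, mu=1.1, sigma=0.4):
    """Lognormal density with median exp(mu) ~ 3 degC (illustrative)."""
    if x <= 0:
        return 0.0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
        / (x * sigma * math.sqrt(2 * math.pi))

def damage(x):
    """Hypothetical cost curve: negligible below 1 degC, steep above 4 degC."""
    return max(0.0, x - 1.0) ** 3

def expected_cost(lo, hi, n=2000):
    """Trapezoidal integral of pdf(ECS) * cost(ECS) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * ecs_pdf(x) * damage(x)
    return total * h

# Even at low probability, the hot tail (ECS > 4) dominates expected cost,
# which is the point of the risk argument above.
tail_share = expected_cost(4.0, 10.0) / expected_cost(0.0, 10.0)
```

With these placeholder curves, well over half of the expected cost comes from the part of the distribution above 4 degC, despite its modest probability.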

    BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement, repeated after that 12min video, that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of the ECS. ECS is derived from multiple lines of evidence, paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?" question. The answer to that question is: models output, even if they fail, is irrelevant here.

    Incidentally, concentrating on models' possible failure due to warming overestimation (as in your 12min video) while ignoring that models may also fail (more spectacularly) by underestimating other aspects of global warming (e.g. arctic ice melt) indicates cherry picking on a single aspect only that suits your agenda. If you were not biased in your objections, you would have noticed that the models' departure from observations is much larger for sea ice melt than for surface temps, and you would concentrate your critique on that aspect.

  34. Dikran Marsupial at 22:35 PM on 5 June 2014
    There's no correlation between CO2 and temperature

    Razo wrote "I cant help but thinking they may not have a common cause."

    that is pretty irrational, given that we know that both anthropogenic and natural forcings have changed over the last century, and that neither alone can explain both sets of warming.  So for that hypothesis to be correct, virtually everything we know about natural and anthropogenic forcings must be wrong.  Personally I'd say the hypothesis was wrong.

    "When I look at the modeling results of natural forcing only, like in the intermediate rebuttal of the 'models are unreliable page', niether period is modelled well."

    It has already been explained to you that this is likely an artefact of the baselining.  The fact that both periods are reasonably well modelled by including both natural and anthropogenic forcings kind of suggests that the two periods do not have a common cause.

    "but has there been any kind of study comparing these two periods?"

    Try the IPCC report, the chapter where the figure was taken from is a good start.

  35. There's no correlation between CO2 and temperature

    Razo @39, yes, there have been such studies - many of them, summarized in the IPCC reports.

    In summary the results are:

    1)  The early twentieth century warming was of a shorter duration, and with a lower trend, than the late twentieth century warming;

    2)  During the early twentieth century warming, volcanic forcing, solar forcing and anthropogenic forcings were all positive relative to the preceding decades, and had similar magnitudes;

    3)  During the late twentieth century warming, volcanic forcing and solar forcing were both negative relative to preceding decades, while anthropogenic forcings were strongly positive.

  36. Richard Tol accidentally confirms the 97% global warming consensus

    michael sweet @11, Cook et al found 3896 abstracts endorsing the consensus and 78 rejecting it, an endorsement rate of 98% (excluding abstracts indicating no opinion, or uncertainty). To drop that endorsement rate below 95% requires that 121 abstracts rated as endorsing the consensus be rerated as neutral, and 121 rated as neutral be rerated as rejecting the consensus.  If more endorsing abstracts are rerated as neutral, fewer neutral abstracts need be rerated as rejecting to achieve so low a consensus rating.  If endorsing abstracts are reduced by about 60%, no increase in rejecting abstracts is needed to reduce the consensus rate to 95%.  (You will notice that pseudoskeptics do spend a lot of time trying to argue that endorsement is overstated ;))

    Anyway, the upshot is that I agree that a large bias by the SkS rating team is extraordinarily unlikely.  Nevertheless, even a small bias coupled with other sources of error could lift an actual consensus rate from just below 95% to 97% with relative ease, both by inflating the endorsement papers and simultaneously deflating rejection papers.  For instance, taking the prima facie bias shown by comparison with author ratings (discussed above), and correcting for it in the abstract ratings, drops the endorsement count to 3834 while lifting the rejection papers to 176, giving an endorsement percentage of 95.6%.  Dropping another 1% due to "uncertain" papers, and 0.5% due to other error factors, brings the consensus rate down to 94.1%.  As previously stated, I do not think the prima facie bias shown in the author comparisons should be accepted at face value.  There are too many confounding factors.  But we certainly cannot simply exclude it from the range of possible bias effects.
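    For what it is worth, the arithmetic in the two paragraphs above is easy to check (abstract counts as quoted from Cook et al 2013; the "bias-corrected" counts are the ones suggested above):

```python
# Endorsement rate among abstracts stating a position.
def consensus_rate(endorse, reject):
    return 100.0 * endorse / (endorse + reject)

base = consensus_rate(3896, 78)                 # published abstract counts
shifted = consensus_rate(3896 - 121, 78 + 121)  # after rerating 121 + 121 abstracts
corrected = consensus_rate(3834, 176)           # bias-adjusted counts suggested above

print(round(base, 1), round(shifted, 1), round(corrected, 1))
# prints 98.0 95.0 95.6
```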

    As to the range of uncertainty I allow, I work in units of 5% because the uncertainty is too poorly quantified to pretend to greater precision (with the exception of error due to internal rating inconsistency, which has now been quantified by Cook and co-authors).  I extend the range of plausible consensus ratings down to 90% because, in calculating the effects of numerous skeptical and pseudo-skeptical objections, I have never come up with a consensus rating lower than 90%; and I think it is unlikely (<33% subjective probability) to be below 95% because of the size of the biases needed to get a consensus rate less than that.  I think it is important to be aware, and to make people aware, of the uncertainty in the upper range (95-99%) so that they do not have an unrealistic idea of the precision of the result, and of the unlikely but possible lower range (90-95%) so that people are aware how little significance attaches to the obsessive attacks on the consensus paper's result, very well summarized by CBDunkerson above.

  37. Richard Tol accidentally confirms the 97% global warming consensus

    Chriskoz @9, the author ratings are drawn from a subset of the papers which had an abstract rating, and that subset is not representative in that it is heavily weighted towards more recent papers.  Further, authors had access to far more information (the complete paper plus their known intentions) than did abstract raters.  Further, author ratings may have been self-selected towards people with stronger views on AGW, either pro or anti, or towards both extremes.  Finally, authors may have been more confused about interpretation of the rating criteria than abstract raters, who had the advantage of more copious explanation through the ability to direct questions to the lead author and discuss the responses.  It is also possible that author raters are biased against endorsement due to scientific reticence, or that abstracts are biased relative to papers in terms of rejections, due to "skeptical" scientists deliberately keeping abstracts innocuous to ensure publication, with conclusions portrayed as rejecting AGW either in brief comments in the conclusion or in press releases.  Together these factors make direct comparison of author and abstract ratings difficult, and not susceptible to precise conclusions.

    One of those factors can be eliminated with the available data by comparing author ratings with only those abstract ratings on papers that actually received an author rating.  If we do so, we get the following figures:

                Abstract    Author
    endorse          787      1338
    neutral         1339       759
    reject            10        39

    Reduced to percentage terms, that represents a 98.75% endorsement rate among the subset of abstract ratings also rated by authors, and a 97.17% endorsement rate among the corresponding author ratings.  The simplest interpretation would be that the abstract raters demonstrated a 1.6% bias in favour of endorsements, and a 125.7% bias against rejections.  I would reject such an interpretation as too simplistic, ignoring as it does the other confounding factors.  However, a small abstract rating team bias in favour of endorsements is certainly consistent with these results.  Thus, the abstract rating team may have been biased in favour of endorsements.  (Given the available evidence it may also have been biased against endorsements, although with far less probability.)
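    Those percentages follow directly from the table, using only the counts above (endorsement rates are computed among papers stating a position; the quoted "bias against rejections" is the relative difference between the two rejection rates):

```python
# Endorsement rate among papers stating a position, from the table above.
def rate(endorse, reject):
    return 100.0 * endorse / (endorse + reject)

abstract_rate = rate(787, 10)   # matched-subset abstract ratings -> ~98.75%
author_rate = rate(1338, 39)    # author self-ratings             -> ~97.17%

# Relative difference in rejection rates between the two groups.
abstract_reject = 10 / (787 + 10)
author_reject = 39 / (1338 + 39)
reject_bias = 100.0 * (author_reject / abstract_reject - 1)  # ~125.7%
```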

    This in no way invalidates the author ratings as confirming the basic result of the abstract ratings, ie, an endorsement rate relative to endorsements and rejections >95%.  The results give that value no matter how you slice it.  But that should not be interpreted as confirmation of the precise figure from the abstract ratings (98%), or even confirmation of that figure within +/- 1%.  Finally, even though the author ratings confirm a >95% figure, that does not preclude uncertainty from allowing realistic probabilities for values below 95%.  After all, empirical "confirmations" are falsifiable, and the next study from impeccable sources may turn up a lower percentage.

  38. michael sweet at 21:25 PM on 5 June 2014
    Richard Tol accidentally confirms the 97% global warming consensus

    Tom,

    Additional very strong constraints can be placed on possible bias in the SkS raters.  Since the authors of the paper set up a web site (referenced in the OP) that allows individuals to rate abstracts, lots of people have presumably rated papers.  Skeptics could easily search for papers from people like Spencer, Lindzen or Poptech's list to find misrated papers.  In addition, authors like Spencer who are skeptical could draw attention to their misrated papers.  Dana could correct me, but I have only seen reports of fewer than 5 such misrated papers having been found.  About 30 are needed to lower the consensus to 92%.  Tol found zero papers.  It seems to me unlikely that enough papers have been misrated to lower the consensus even to 92%, given so few misrated papers found by those skeptics who have searched.  I will note that in his congressional testimony Spencer said that he was part of the consensus.  Presumably that means his papers had been misrated as skeptical and should be subtracted from the skeptical total.

    I think you are making a maximum estimate of error and underestimating the efforts by the author team to check their ratings.

  39. President Obama gets serious on climate change

    Pierre, people without the massive anti-American chip on their shoulder might take "world leader" to mean 'leader capable of acting on the world stage' rather than 'leader of the world'. Just sayin'.

  40. Richard Tol accidentally confirms the 97% global warming consensus

    So... even with numerous mathematical errors, blatantly false assumptions, and generally slanting things to the point of absurdity... the 'best' Tol could do was claim that there is 'only' a ninety-one percent consensus?

    You've got to wonder what keeps them going... even when their own best spin proves them wrong they still just keep right on believing the nonsense.

  41. There's no correlation between CO2 and temperature

    Whenever I look at the air temperature data, my eye always falls on the two upward trends, from 1910-1943 and from 1970 to 2001.  I keep finding myself thinking they have pretty much the same slope and the same duration.  I can't help thinking they may not have a common cause.

    When I look at the modeling results of natural forcing only, like in the intermediate rebuttal of the 'models are unreliable' page, neither period is modelled well.  The man-made forcing only model captures only the second increase.  CO2 levels and increases are quite different in these two periods.

    I realize that weather and climate data can be variable and can make one imagine things, but has there been any kind of study comparing these two periods?

  42. Richard Tol accidentally confirms the 97% global warming consensus

    Tom@8,

    How do you reconcile your opinion that "pervasive bias by the SkS recruited raters" may have inflated the results of Cook 2013 with the fact that the scientists' self-ratings confirmed the Cook 2013 findings at an even higher (98%+) rate? Shouldn't we rather conclude from that confirmation that the Cook 2013 findings are more likely biased low, rather than high as you suggest?

    Moderator Response:

    [DB] Fixed text per request.

  43. Richard Tol accidentally confirms the 97% global warming consensus

    Dana @6, thank you for the clarification.

    One thing detailed analysis of the data shows is a very slight, but statistically significant, trend towards more conservative ratings in both first and second ratings.  The trend is larger in first ratings.  That can be construed as an improvement in rating skill over time, or as a degradation of performance.  The former is more likely, given that third and later ratings (ie, review and tie break ratings) also tended to be more conservative than initial ratings.  It may also be an artifact of the relative number and timing of ratings by individual raters, given that they rated at different paces.  So, first, have you tested individual raters' results to see if they show a similar trend?  And second, does your error term include allowance for this trend, either as a whole or for individual raters?

    More generally, I think a similar rating exercise to that in the Consensus Project, carried out by raters exclusively recruited from WUWT, would generate a substantially different consensus rate to that found in Cook et al (2013).  We only need look at Poptech's contortions to see how willing pseudoskeptics are to distort their estimates.  They do not do such rating exercises (or at least have not till now) because even with that massive bias they would find well over 50% endorsement of the consensus, and likely well over 80%, which would demolish their line about no consensus and potentially undermine the confidence of raters too much.  Or at least that is what I believe.  The crucial point, however, is that such a general bias among raters will not show up in internal consistency tests such as those used by Tol and, as I understand it, by you to determine the error rate.

    Being objective, we must allow at least the possibility of equivalent pervasive bias by the SkS recruited raters used for Cook et al.  I think there is overwhelming evidence that we are not as biased as a similar cadre from WUWT would be, but that does not mean we are not biased at all.  Such general bias within the raters cannot be tested for by internal estimates of error or bias.  It can be partly tested for by external tests such as comparison with the self ratings, but there are sufficient confounding factors in that test that, while we can say any such bias is not large, we cannot say it does not exist.  It is because of the possibility of this bias (more than anything else) that I reject a tightly constrained error estimate (+/- 1%).

  44. Richard Tol accidentally confirms the 97% global warming consensus

    I'm not sure if this is actually covered in previous responses to Tol's analysis, as I am skim-reading this during my lunch break, but I have an issue with his implicit assumption that the causes of categorisation discrepancies in those papers where the scorers initially disagreed are present at the same statistical distribution within the population of papers where there was initial concordance.

    By the very nature of the scoring process, the more ambiguous categorisations would 'self-select' and manifest as discordances, leaving the initial concordances more likely to remain concordant than if the whole population were retested blind.

    Tol appears to be making assumptions of homogeneity where there is quite probably no basis for them.  And inhomogeneity is a significant modifier of many analytical processes.

  45. Richard Tol accidentally confirms the 97% global warming consensus

    Tom, we used several methods to estimate the error.  Using Tol's approach it's ± 0.6 or 0.7%.  Individual rater estimates of the consensus varied by about ± 1%.  Hence that's a conservative estimate.  As you know, our approach was quite conservative, so if anything we may have been biased low.  However, there's not much room above 97%.

  46. Richard Tol accidentally confirms the 97% global warming consensus

    In the OP it is stated:

    "Accounting for the uncertainties involved, we ultimately found the consensus is robust at 97 ± 1%"

    I assume that error margin is based on the uncertainties arising from an analysis of internal error rates (such as used by Tol, and done correctly in the section with the quote).  As such it does not include all sources of error, and cannot do so.  It is possible, for example, that the raters displayed a consistent bias which would not be detected by that test.  Thus that statement should not be interpreted as saying the consensus rate lies within 96-98% with 95% probability, but that certain tests constrain the 95% probability range to not be less than that.  Allowing for all sources of potential error, it is possible that the actual consensus rate may even be in the low 90s, although it is likely in the mid to high 90s.

  47. Models are unreliable

    Victor -> Winston — where "Victor" came from, I have no idea.

  48. Models are unreliable

    Victor, when you say it's cheaper to adapt, you're falling into an either-or fallacy.  Mitigation and adaptation are the extreme ends of a range of action.  Any act you engage in to reduce your carbon footprint is mitigation.  Adaptation can mean anything from doing nothing and letting the market work things out, to engaging in government-organized and subsidized re-organization of human life to create the most efficient adaptive situation.  If you act only in your immediate individual self-interest, with no concern for how your long-term individual economic and political freedoms are constructed socially in complex and unpredictable ways, then your understanding of adaptation is probably the first of my definitions.  If you do understand your long-term freedoms as being socially constructed, you might go for some form of the second, but if you do, you will--as Tom points out--be relying on some sort of model, intuitive or formal.

    Do you think work on improving modeling should continue?  Or should modeling efforts be scrapped? 

  49. Richard Tol accidentally confirms the 97% global warming consensus

    I notice that the list of 24 errors by Tol is not exhaustive.  In section 3.2 "signs of bias", Tol writes:

    "I run consistency tests on the 24,273 abstract ratings; abstracts were rated between 1 and 5 times, with an average of 2.03. I computed the 50-, 100- and 500-abstract rolling standard deviation, first-order autocorrelation – tests for fatigue – and rolling average and skewness – tests for drift."

    In fact, there were not 24,273 abstract ratings (strictly, abstract rating records) released to Tol, but 26,848.  They are the records of all first ratings, second ratings, review ratings and tie break ratings generated for the 11,944 abstracts rated for the paper.  That Tol dropped 2,575 rating records from his analysis is neither explained nor acknowledged in the paper.  That is clearly an additional (25th) error, and appears to go beyond error into misrepresentation of the data and analysis.

    Parenthetically, Tol is unclear about that number, claiming that "Twelve volunteers rated on average 50 abstracts each, and another 12 volunteers rated an average of 1922 abstracts each", a total of 23,664 abstracts.  That is 224 fewer than the total of first and second ratings of the 11,944 abstracts, and is too large a discrepancy to be accounted for by rounding errors.  He also indicates that abstracts were rated on average 2.03 times, yielding an estimate of 24,246 abstracts.  That is within rounding error of his erroneous claim of 24,273 ratings, but inconsistent with his estimate of the number of ratings by volunteers and with the actual number of rating records.

    Some clarification of why Tol included only 24,273 ratings is found on his blog, where he describes the same test as used in the paper, saying:

    "The graphs below show the 500-abstract rolling mean, standard deviation, skewness, and first-order autocorrelation for the initial ratings of the Consensus Project."

    The initial ratings are the first and second rating for each abstract, of which there are 23,888.  However, comparison of the 100 point average of the mean value with S6 from the paper shows the test to have been the same.  A problem then arises that his graph of "initial ratings" is not restricted to just first and second ratings.  Consider the following figure:

    The middle graph is Tol's figure S6 as displayed at his blog.  The top graph is the 100 point mean of all endorsement_final ratings from the rating records (the values actually graphed and analysed by Tol).  As can be seen, Tol's graph is clearly truncated early.  The third graph is the 100 point mean of endorsement_final ratings from all first and second ratings.  Although identical at the start of the graph (of logical necessity), the end of the graph diverges substantially.  That is because the first 24,273 ratings in chronological order do not include all first and second ratings (and do include a significant number of third, fourth and fifth ratings, ie, review and tie break ratings).  So, we have here another Tol mistake, though technically a mistake in the blog rather than the paper.

    Far more important is that without strictly dividing first ratings from second ratings, and excluding later ratings, it is not possible for Tol's analysis to support his conclusions.  That is because, when selecting an abstract for a rater to rate, the rating mechanism selected randomly from all available abstracts not previously rated by that rater.  Initially, for the first person to start rating, that meant all available abstracts had no prior rating.  If we assume that that person rated 10 abstracts and then ceased, the next person to start rating would have had their ratings selected randomly from 11,934 unrated abstracts and 10 that had a prior rating.  Given that second ratings were on average slightly more conservative (more likely to give a rating of 4) than first ratings, this alone would create a divergence from the bootstrapped values generated by Tol.  Given that raters rated papers as and when they had time and inclination, and therefore did not rate at the same pace or time, or even at a consistent pace, the divergence from bootstrap values from this alone could be quite large.  Given that raters could diverge slightly in the ratings they gave, there is nothing in Tol's analysis to show his bootstrap analyses are anything other than the product of that divergence and the differences in rating times and paces among raters.  His conclusions of rater fatigue do not, and cannot, come from the analysis he performs, given the data he selects to analyse.
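    The selection effect described above can be illustrated with a toy simulation (entirely hypothetical numbers): two simulated raters with fixed but slightly different mean ratings, one of whom does most of the early work. The pooled rolling mean drifts even though neither rater's behaviour ever changes, which is exactly the kind of artefact that can masquerade as "rater fatigue":

```python
import random

# Toy sketch, not the actual Consensus Project data: rater A (mean ~3.0)
# dominates early, rater B (mean ~3.4) dominates late, as happens when
# volunteers rate at different times and paces.

random.seed(42)

def simulate(n=4000):
    ratings = []
    for t in range(n):
        share_a = 1.0 - t / n            # rater A's share of the work declines
        if random.random() < share_a:
            ratings.append(random.gauss(3.0, 0.5))   # rater A
        else:
            ratings.append(random.gauss(3.4, 0.5))   # rater B
    return ratings

def rolling_mean(xs, window=500):
    """Simple O(n) rolling mean over a fixed window."""
    out = []
    s = sum(xs[:window])
    out.append(s / window)
    for i in range(window, len(xs)):
        s += xs[i] - xs[i - window]
        out.append(s / window)
    return out

rm = rolling_mean(simulate())
# rm trends upward purely because the rater mix changes over "time",
# with no change in either rater's behaviour.
```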

    This then is error 27 in his paper, or perhaps 27-30 given that he repeats the analysis of that data for standard deviation, skewness and autocorrelation, each of which tests is rendered incapable of supporting his conclusions by his poor (and misstated) data selection.

  50. Models are unreliable

    Winston @734, the claim that the policies will be costly is itself based on models, specifically economic models.  Economic models perform far worse than do climate models, so if models are not useful "... for costly policies until the accuracy of their projections is confirmed", the model based claim that the policies are costly must be rejected. 

© Copyright 2024 John Cook