Recent Comments
Comments 36201 to 36250:
-
chriskoz at 00:00 AM on 6 June 2014 | Models are unreliable
Winston2014,
Your 12min video talks about models' and climate sensitivity uncertainty. However, it cherry-picks only the lower "skeptic" half of the ECS uncertainty range. It is silent about the upper long tail of ECS uncertainty, which extends well beyond 4.5degrees - up to 8degrees - although with low probability.
The cost of global warming is highly non-linear - very costly at the high end of the tail - essentially a catastrophe above 4degC. Therefore, in order to formulate the policy response, you need to convolve the probability function with the potential cost function, integrate it, and compare the result with the cost of mitigation.
Because we can easily adapt to changes up to, say, 1degC, the cost of low sensitivity is almost zero - it does not matter. What really matters is the long tail of the potential warming distribution, because its high cost - even at low probability - results in high risk, demanding a serious preventative response.
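That convolve-and-integrate step can be sketched numerically. The ECS distribution and damage function below are purely illustrative placeholders (a log-normal centred near 3degC and a cubic cost above 1degC), not values taken from any assessment:

```python
import math

# Illustrative log-normal ECS distribution (median 3 C, long upper tail).
# The parameters are placeholders, not fitted to any assessment.
def ecs_pdf(s, mu=math.log(3.0), sigma=0.4):
    return (math.exp(-(math.log(s) - mu) ** 2 / (2 * sigma ** 2))
            / (s * sigma * math.sqrt(2 * math.pi)))

# Illustrative damage function: negligible below 1 C, steeply non-linear above.
def cost(s):
    return max(0.0, s - 1.0) ** 3

# Expected cost = integral of pdf(s) * cost(s) ds (simple Riemann sum).
ds = 0.01
grid = [0.5 + i * ds for i in range(950)]   # ECS from 0.5 to ~10 C
expected_cost = sum(ecs_pdf(s) * cost(s) * ds for s in grid)

# The tail above 4.5 C: low probability, but a large share of the risk.
tail_prob = sum(ecs_pdf(s) * ds for s in grid if s > 4.5)
tail_share = sum(ecs_pdf(s) * cost(s) * ds for s in grid if s > 4.5) / expected_cost

print(f"expected cost (arbitrary units): {expected_cost:.1f}")
print(f"P(ECS > 4.5): {tail_prob:.2f}, share of expected cost: {tail_share:.0%}")
```

Even with the tail holding only ~15% of the probability, it dominates the expected cost, which is the point of the risk argument.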
BTW, the above risk-reward analysis is the driver of policy response. Climate models have nothing to do with it. Your statement, repeated after that 12min video, that "Models will drive policy" is just nonsense. Policy should be driven by our best understanding of ECS. ECS is derived from multiple lines of evidence, paleo being one of them. The problem has nothing to do with your pathetic "Models fail. Are they still useful?" question. The answer to that question is: models' output, even if they fail, is irrelevant here.
Incidentally, concentrating on models' possible failure due to overestimating warming (as in your 12min video), while ignoring that models may also fail (more spectacularly) by underestimating other aspects of global warming (e.g. Arctic ice melt), indicates cherry-picking a single aspect that suits your agenda. If you were not biased in your objections, you would have noticed that the models' departure from observations is much larger in the case of sea ice melt than in the case of surface temps, and concentrated your critique on that aspect.
-
Dikran Marsupial at 22:35 PM on 5 June 2014 | There's no correlation between CO2 and temperature
Razo wrote "I can't help thinking they may not have a common cause."
That is pretty irrational, given that we know that both anthropogenic and natural forcings have changed over the last century, and that neither can explain both warming periods. So for that hypothesis to be correct, virtually everything we know about natural and anthropogenic forcings would have to be wrong. Personally, I'd say the hypothesis is wrong.
"When I look at the modelling results with natural forcing only, like in the intermediate rebuttal of the 'models are unreliable' page, neither period is modelled well."
It has already been explained to you that this is likely an artefact of the baselining. The fact that both periods are reasonably well modelled by including both natural and anthropogenic forcings kind of suggests that the two periods do not have a common cause.
"but has there been any kind of study comparing these two periods?"
Try the IPCC report, the chapter where the figure was taken from is a good start.
-
Tom Curtis at 22:34 PM on 5 June 2014 | There's no correlation between CO2 and temperature
Razo @39, yes there has been such a study. Many of them, which are summarized in the IPCC.
In summary the results are:
1) The early twentieth-century warming was of shorter duration, and with a lower trend, than the late twentieth-century warming;
2) During the early twentieth-century warming, volcanic forcing, solar forcing and anthropogenic forcings were all positive relative to the preceding decades, and of similar magnitude;
3) During the late twentieth-century warming, volcanic forcing and solar forcing were both negative relative to the preceding decades, while anthropogenic forcings were strongly positive.
-
Tom Curtis at 22:27 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
michael sweet @11, Cook et al found 3896 abstracts endorsing the consensus and 78 rejecting it, an endorsement rate of 98% (excluding abstracts indicating no opinion or uncertainty). To drop that endorsement rate below 95% requires that 121 abstracts rated as endorsing the consensus be rerated as neutral, and 121 rated as neutral be rerated as rejecting the consensus. If more endorsing abstracts are rerated as neutral, fewer neutral abstracts need be rerated as rejecting to reach so low a consensus rate. If endorsing abstracts are reduced by 40%, no increase in rejecting abstracts is needed to reduce the consensus rate to 95%. (You will notice that pseudoskeptics do spend a lot of time trying to argue that endorsement is overstated ;))
Anyway, the upshot is that I agree that a large bias by the SkS rating team is extraordinarily unlikely. Nevertheless, even a small bias coupled with other sources of error could lift an actual consensus rate from just below 95% to 97% with relative ease, both by inflating the endorsement papers and simultaneously deflating the rejection papers. For instance, taking the prima facie bias shown by comparison with author ratings (discussed above) and correcting for it in the abstract ratings drops the endorsement count to 3834, while lifting the rejection count to 176, giving an endorsement percentage of 95.6%. Dropping another 1% due to "uncertain" papers, and 0.5% due to other error factors, brings the consensus rate down to 94.1%. As previously stated, I do not think the prima facie bias shown in the author comparisons should be accepted at face value. There are too many confounding factors. But we certainly cannot simply exclude it from the range of possible bias effects.
As for the range of uncertainty I allow, I work in units of 5%, because the uncertainty is too poorly quantified to pretend to greater precision (with the exception of error due to internal rating inconsistency, which has now been quantified by Cook and co-authors). I extend the range of plausible consensus ratings down to 90% because, in calculating the effects of numerous skeptical and pseudo-skeptical objections, I have never come up with a consensus rating lower than 90%; and I think it is unlikely (<33% subjective probability) to be below 95% because of the size of the biases needed to get a consensus rate less than that. I think it is important to be aware, and to make people aware, of the uncertainty in the upper range (95-99%), so that they do not have an unrealistic idea of the precision of the result, and of the unlikely but possible lower range (90-95%), so that people are aware how little significance attaches to the obsessive attacks on the consensus paper's result, very well summarized by CBDunkerson above.
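The rerating arithmetic in this comment can be checked directly; the sketch below uses only the counts quoted here (3896 endorsing, 78 rejecting, and the bias-corrected 3834/176 figures):

```python
# Abstract counts quoted in the comment: 3896 endorsing, 78 rejecting.
endorse, reject = 3896, 78

baseline = endorse / (endorse + reject)
print(f"baseline endorsement rate: {baseline:.2%}")   # 98.04%

# Rerate 121 endorsing abstracts as neutral and 121 neutral ones as rejecting.
shifted = (endorse - 121) / ((endorse - 121) + (reject + 121))
print(f"after rerating 121 each way: {shifted:.2%}")  # 94.99%, just below 95%

# Bias-corrected figures from the comment: 3834 endorsing, 176 rejecting.
corrected = 3834 / (3834 + 176)
print(f"bias-corrected rate: {corrected:.2%}")        # 95.61%
```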
-
Tom Curtis at 21:43 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
Chriskoz @9, the author ratings are drawn from a subset of the papers which had an abstract rating, and that subset is not representative, in that it is heavily weighted towards more recent papers. Further, authors had access to far more information (the complete paper, plus their own known intentions) than did abstract raters. Further, author ratings may have been self-selected towards people with stronger views on AGW, either pro or anti, or towards both extremes. Finally, authors may have been more confused about interpretation of the rating criteria than abstract raters, who had the advantage of more copious explanation through the ability to direct questions to the lead author and discuss the responses. It is also possible that author raters are biased against endorsement due to scientific reticence, or that abstracts are biased with respect to papers in terms of rejections, due to "skeptical" scientists deliberately keeping abstracts innocuous to ensure publication, with conclusions portrayed as rejecting AGW either in brief comments in the conclusion or in press releases. Together these factors make direct comparison of author and abstract ratings difficult, and not susceptible to precise conclusions.
One of those factors can be eliminated with available data by comparing author ratings with only those abstract ratings on papers that actually received an author rating. If we do so, we get the following figures:
_________Abstract__Author
endorse___787_____1338
neutral___1339______759
reject______10_______39

Reduced to percentage terms, that represents a 98.75% endorsement rate among the subset of abstract ratings also rated by authors, and a 97.17% endorsement rate among the corresponding author ratings. The simplest interpretation would be that the abstract raters demonstrated a 1.6% bias in favour of endorsements, and a 125.7% bias against rejections. I would reject such an interpretation as too simplistic, ignoring as it does the other confounding factors. However, a small abstract rating team bias in favour of endorsements is certainly consistent with these results. Thus, the abstract rating team may have been biased in favour of endorsements. (Given the available evidence, it may also have been biased against endorsements, although with far less probability.)
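Those percentages, and the "bias" figures derived from them, can be reproduced from the table; this sketch uses only the counts quoted in the comment, reading the bias figures as relative differences between the two sets of rates:

```python
# Paired subset quoted in the comment: abstract vs author ratings.
abstract = {"endorse": 787, "neutral": 1339, "reject": 10}
author = {"endorse": 1338, "neutral": 759, "reject": 39}

def endorsement_rate(counts):
    # Rate among papers taking a position; neutral ratings are excluded.
    return counts["endorse"] / (counts["endorse"] + counts["reject"])

r_abs = endorsement_rate(abstract)    # 787 / 797   = 98.75%
r_auth = endorsement_rate(author)     # 1338 / 1377 = 97.17%
print(f"abstract: {r_abs:.2%}, author: {r_auth:.2%}")

# Prima facie 'bias' read as relative differences between the two rates.
endorse_bias = (r_abs - r_auth) / r_auth      # ~1.6% in favour of endorsements
rej_abs = abstract["reject"] / (abstract["endorse"] + abstract["reject"])
rej_auth = author["reject"] / (author["endorse"] + author["reject"])
reject_bias = (rej_auth - rej_abs) / rej_abs  # ~125.7% against rejections
print(f"endorsement bias: {endorse_bias:.1%}, rejection bias: {reject_bias:.1%}")
```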
This in no way invalidates the author ratings as confirming the basic result of the abstract ratings, i.e., an endorsement rate relative to endorsements and rejections of >95%. The results give that value no matter how you slice it. But that should not be interpreted as confirmation of the precise figure from the abstract ratings (98%), or even confirmation of that figure within +/- 1%. Finally, even though the author ratings confirm a >95% figure, that does not preclude uncertainty from allowing realistic probabilities for values below 95%. After all, empirical "confirmations" are falsifiable, and the next study from impeccable sources may turn up a lower percentage.
-
michael sweet at 21:25 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
Tom,
Additional very strong constraints can be placed on possible bias by the SkS raters. Since the authors of the paper set up a web site (referenced in the OP) that allows individuals to rate abstracts, lots of people have presumably rated papers. Skeptics could easily search for papers from people like Spencer or Lindzen, or from Poptech's list, to find misrated papers. In addition, skeptical authors like Spencer could bring attention to their own misrated papers. Dana could correct me, but I have only seen reports of fewer than 5 such misrated papers being found. About 30 are needed to lower the consensus to 92%. Tol found zero papers. It seems to me unlikely that enough papers have been misrated to lower the consensus even to 92%, given how few misrated papers have been found by those skeptics who have searched. I will note that in his congressional testimony Spencer said that he was part of the consensus. Presumably that means his papers had been misrated as skeptical and should be subtracted from the skeptical total.
I think you are making a maximum estimate of error and underestimating the efforts by the author team to check their ratings.
-
CBDunkerson at 21:22 PM on 5 June 2014 | President Obama gets serious on climate change
Pierre, people without the massive anti-American chip on their shoulder might take "world leader" to mean 'leader capable of acting on the world stage' rather than 'leader of the world'. Just sayin'.
-
CBDunkerson at 21:18 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
So... even with numerous mathematical errors, blatantly false assumptions, and generally slanting things to the point of absurdity... the 'best' Tol could do was claim that there is 'only' a ninety-one percent consensus?
You've got to wonder what keeps them going... even when their own best spin proves them wrong they still just keep right on believing the nonsense.
-
Razo at 21:14 PM on 5 June 2014 | There's no correlation between CO2 and temperature
Whenever I look at the air temperature data, my eye always falls on the two upward trends, from 1910-1943 and 1970-2001. I keep finding myself thinking they have pretty much the same slope and the same duration. I can't help thinking they may not have a common cause.
When I look at the modelling results with natural forcing only, like in the intermediate rebuttal of the 'models are unreliable' page, neither period is modelled well. The man-made-forcing-only model captures only the second increase. CO2 levels and increases are quite different in these two periods.
I realize that weather and climate data can be variable and can make one imagine things, but has there been any kind of study comparing these two periods?
-
chriskoz at 20:04 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
Tom@8,
How do you reconcile your opinion that "pervasive bias by the SkS recruited raters" may have inflated the results of Cook 2013 with the fact that scientists' self-ratings confirmed the Cook 2013 findings at an even higher (98%+) rate? Shouldn't we rather conclude from that confirmation that the Cook 2013 findings are more likely biased low, rather than high as you suggest?
Moderator Response:[DB] Fixed text per request.
-
Tom Curtis at 15:13 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
Dana @6, thank you for the clarification.
One thing detailed analysis of the data shows is a very slight, but statistically significant, trend towards more conservative ratings in both first and second ratings. The trend is larger in first ratings. That can be construed as an improvement in rating skill over time, or as a degradation of performance. The former is more likely, given that third and later ratings (i.e., review and tie-break ratings) also tended to be more conservative than initial ratings. It may also be an artifact of the relative number and timing of ratings by individual raters, given that they rated at different rates. So, first, have you tested individual raters' results to see if they show a similar trend? And second, does your error term include an allowance for this trend, either as a whole or for individual raters?
More generally, I think a rating exercise similar to that in the Consensus Project, carried out by raters recruited exclusively from WUWT, would generate a substantially different consensus rate to that found in Cook et al (2013). We need only look at Poptech's contortions to see how willing pseudoskeptics are to distort their estimates. They do not do such rating exercises (or at least have not till now) because even with that massive bias they would find well over 50% endorsement of the consensus, and likely well over 80%, which would demolish their line about no consensus and potentially undermine the confidence of their raters too much. Or at least that is what I believe. The crucial point, however, is that such a general bias among raters will not show up in internal consistency tests such as those used by Tol and, as I understand it, by you to determine the error rate.
Being objective, we must allow at least the possibility of an equivalent pervasive bias by the SkS recruited raters used for Cook et al. I think there is overwhelming evidence that we are not as biased as a similar cadre from WUWT would be, but that does not mean we are not biased at all. Such general bias within the raters cannot be tested for by internal estimates of error or bias. It can be partly tested for by external tests such as comparison with the self-ratings, but there are sufficient confounding factors in that test that while we can say any such bias is not large, we cannot say it does not exist. It is the possibility of this bias (more than anything else) that leads me to reject a tightly constrained error estimate (+/- 1%).
-
Bernard J. at 14:52 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
I'm not sure if this is actually covered in previous treatments of Tol's analysis, as I am skim-reading this during my lunch break, but I have an issue with his implicit assumption that the causes of categorisation discrepancies in those papers where the scorers initially disagreed are present at the same statistical distribution within the population of papers where there was initial concordance.
By the very nature of the scoring process, the more ambiguous categorisations would 'self-select' and manifest as discordances, leaving those initial concordances more likely to be so than if the whole population were retested blind.
Tol appears to be making assumptions of homogeneity where there is quite probably no basis for such. And inhomogeneity is a significant modifier of many analytical processes.
-
dana1981 at 14:47 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
Tom, we used several methods to estimate the error. Using Tol's approach it's ± 0.6 or 0.7%. Individual rater estimates of the consensus varied by about ± 1%. Hence that's a conservative estimate. As you know, our approach was quite conservative, so if anything we may have been biased low. However, there's not much room above 97%.
-
Tom Curtis at 12:33 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
In the OP it is stated:
"Accounting for the uncertainties involved, we ultimately found the consensus is robust at 97 ± 1%"
I assume that error margin is based on the uncertainties arising from an analysis of internal error rates (such as used by Tol, and done correctly in the section with the quote). As such, it does not include all sources of error, and cannot do so. It is possible, for example, that the raters displayed a consistent bias, which would not be detected by that test. Thus that statement should not be interpreted as saying the consensus rate lies within 96-98% with 95% probability, but that certain tests constrain the 95% probability range to be no narrower than that. Allowing for all sources of potential error, it is possible that the actual consensus rate may even be in the low 90 percents, although it is likely in the mid to high 90 percents.
-
DSL at 12:31 PM on 5 June 2014 | Models are unreliable
Victor -> Winston — where "Victor" came from, I have no idea.
-
DSL at 12:30 PM on 5 June 2014 | Models are unreliable
Victor, when you say it's cheaper to adapt, you're falling into an either-or fallacy. Mitigation and adaptation are the extreme ends of a range of action. Any act you engage in to reduce your carbon footprint is mitigation. Adaptation can mean anything from doing nothing and letting the market work things out to engaging in government-organized and subsidized re-organization of human life to create the most efficient adaptive situation. If you act only in your immediate individual self-interest, with no concern for how your long-term individual economic and political freedoms are constructed socially in complex and unpredictable ways, then your understanding of adaptation is probably the first of my definitions. If you do understand your long-term freedoms as being socially constructed, you might go for some form of the second, but if you do, you will--as Tom points out--be relying on some sort of model, intuitive or formal.
Do you think work on improving modeling should continue? Or should modeling efforts be scrapped?
-
Tom Curtis at 12:21 PM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
I notice that the list of 24 errors by Tol is not exhaustive. In section 3.2 "signs of bias", Tol writes:
"I run consistency tests on the 24,273 abstract ratings; abstracts were rated between 1 and 5 times, with an average of 2.03. I computed the 50-, 100- and 500-abstract rolling standard deviation, first-order autocorrelation – tests for fatigue – and rolling average and skewness – tests for drift."
In fact, there were not 24,273 abstract ratings (strictly, abstract rating records) released to Tol, but 26,848. They are the records of all first ratings, second ratings, review ratings and tie-break ratings generated for the 11,944 abstracts rated for the paper. That Tol dropped 2,575 rating records from his analysis is neither explained nor acknowledged in the paper. That is clearly an additional (25th) error, and appears to go beyond error into misrepresentation of the data and analysis.
Parenthetically, Tol is unclear about that number, claiming that "Twelve volunteers rated on average 50 abstracts each, and another 12 volunteers rated an average of 1922 abstracts each", a total of 22,464 abstracts. That is 1,424 less than the total of first and second ratings of the 11,944 abstracts, and is too large a discrepancy to be accounted for by rounding errors. He also indicates that abstracts were rated on average 2.03 times, yielding an estimate of 24,246 ratings. That is within rounding error of his erroneous claim of 24,273 ratings, but inconsistent with his estimate of the number of ratings by volunteers and with the actual number of rating records.
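The counting discrepancies can be laid out explicitly; all figures below are the ones quoted in this comment, none come from the underlying data:

```python
# Figures quoted in the comment above.
total_records = 26848          # rating records released to Tol
tol_analysed = 24273           # ratings Tol says he analysed
dropped = total_records - tol_analysed
print(dropped)                 # 2575 records unaccounted for

abstracts = 11944
initial_ratings = 2 * abstracts            # first + second ratings
volunteer_total = 22464                    # total implied by Tol's volunteer figures
print(initial_ratings - volunteer_total)   # 1424 ratings short

avg_based = round(abstracts * 2.03)        # estimate from "2.03 ratings on average"
print(avg_based)                           # 24246, within rounding of 24,273
```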
Some clarification of why Tol included only 24,273 ratings is found on his blog, where he describes the same test as used in the paper, saying:
"The graphs below show the 500-abstract rolling mean, standard deviation, skewness, and first-order autocorrelation for the initial ratings of the Consensus Project."
The initial ratings are the first and second ratings for each abstract, of which there are 23,888. However, comparison of the 100-point average of the mean value with S6 from the paper shows the test to have been the same. A problem then arises in that his graph of "initial ratings" is not restricted to just first and second ratings. Consider the following figure:
The middle graph is Tol's figure S6 as displayed on his blog. The top graph is the 100-point mean of all endorsement_final ratings from the rating records (the values actually graphed and analysed by Tol). As can be seen, Tol's graph is clearly truncated early. The third graph is the 100-point mean of endorsement_final values from all first and second ratings. Although identical at the start of the graph (of logical necessity), the end of the graph diverges substantially. That is because the first 24,273 ratings in chronological order do not include all first and second ratings (and do include a significant number of third, fourth and fifth ratings, i.e., review and tie-break ratings). So we have here another Tol mistake, though technically a mistake in the blog rather than the paper.
Far more important is that without strictly dividing first ratings from second ratings, and excluding later ratings, it is not possible for Tol's analysis to support his conclusions. That is because, when selecting an abstract for a rater to rate, the rating mechanism selected randomly from all available abstracts not previously rated by that rater. Initially, for the first person to start rating, all available abstracts had no prior rating. If we assume that person rated 10 abstracts and then ceased, the next person to start rating would have had their ratings selected randomly from 11,934 unrated abstracts and 10 that had a prior rating. Given that second ratings were on average slightly more conservative (more likely to give a rating of 4) than first ratings, this alone would create a divergence from the bootstrapped values generated by Tol. Given that raters rated papers as and when they had time and inclination, and therefore did not rate at the same pace or time, or even at a consistent pace, the divergence from bootstrap values from this alone could be quite large. Given that raters could diverge slightly in the ratings they gave, there is nothing in Tol's analysis to show his bootstrap analyses are anything other than the product of that divergence and the differences in rating times and paces among raters. His conclusions of rater fatigue do not, and cannot, come from the analysis he performs, given the data he selects to analyse.
This, then, is error 27 in his paper, or perhaps errors 27-30, given that he repeats the analysis of that data for standard deviation, skewness and autocorrelation, each of which tests is rendered incapable of supporting his conclusions by his poor (and misstated) data selection.
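For what it is worth, the rolling consistency tests described here (windowed mean, standard deviation, skewness and lag-1 autocorrelation) are straightforward to sketch. The ratings stream below is synthetic, standing in for the released rating records, and the probabilities are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a chronological stream of 1-5 endorsement ratings.
ratings = rng.choice([1, 2, 3, 4, 5], size=5000, p=[0.02, 0.08, 0.25, 0.6, 0.05])

def rolling_stats(x, window=500):
    """Rolling mean, standard deviation, skewness and lag-1 autocorrelation."""
    means, stds, skews, acs = [], [], [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window].astype(float)
        m, s = w.mean(), w.std()
        means.append(m)
        stds.append(s)
        skews.append(((w - m) ** 3).mean() / s ** 3)
        acs.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
    return map(np.array, (means, stds, skews, acs))

means, stds, skews, acs = rolling_stats(ratings)
# Drift would show as a trend in the rolling mean; fatigue as autocorrelation.
print(f"rolling mean range: {means.min():.2f} to {means.max():.2f}")
print(f"largest |lag-1 autocorrelation|: {np.abs(acs).max():.2f}")
```

The point of the criticism stands independently of the sketch: run on a stream that interleaves first, second and later ratings from raters working at different times and paces, these statistics confound rater differences and scheduling with any real "fatigue" signal.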
-
Tom Curtis at 10:21 AM on 5 June 2014 | Models are unreliable
Winston @734, the claim that the policies will be costly is itself based on models, specifically economic models. Economic models perform far worse than do climate models, so if models are not useful "... for costly policies until the accuracy of their projections is confirmed", the model based claim that the policies are costly must be rejected.
-
Winston2014 at 09:58 AM on 5 June 2014 | Models are unreliable
DSL,
"Models fail. Are they still useful?"
Not for costly policies, until the accuracy of their projections is confirmed. From the 12 minute skeptic video, it doesn't appear that they have been confirmed to be accurate where it counts; quite the opposite. To quote David Victor again, "The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy."
"Models will drive policy"
Until they are proven more accurate than I have seen in my investigations thus far, I don't believe they should.
The following video leads me to believe that even if model projections are correct, it would actually be far cheaper to adapt to climate change (according to official figures) than to attempt to prevent it, based upon the "success" thus far of the Australian carbon tax:
The 50 to 1 Project
https://www.youtube.com/watch?v=Zw5Lda06iK0
-
Dumb Scientist at 06:53 AM on 5 June 2014 | Richard Tol accidentally confirms the 97% global warming consensus
It's astonishing that Energy Policy's "review" apparently didn't ask for a single example of the ~300 gremlin-conjured rejection abstracts.
"If I submit a comment that argues that the Cook data are inconsistent and invalid, even though they are not, my reputation is in tatters." [Dr. Richard Tol, 2013-06-11]
Not necessarily. Retracting errors is difficult but ultimately helps one's inner peace and reputation because it shows integrity and healthy confidence. When in a hole, stop digging.
-
DSL at 06:33 AM on 5 June 2014 | Models are unreliable
Winston2014, two things:
1. What does Victor's point allow you to claim? By the way, Victor doesn't address utility in the quote.
2. Oreskes' point is a no-brainer, yes? No one in the scientific community disagrees, or if they do, they do it highly selectively (hypocritically). Models fail. Are they still useful? Absolutely: you couldn't drive a car without using an intuitive model, and such models fail regularly. The relationship between climate models and policy is complex. Are models so inaccurate they're not useful? Can we wait till we get a degree of usefulness that's satisfactory to even the most "skeptical"? Suppose, for example, that global mean surface temperature rises at 0.28C per decade for the next decade. This would push the bounds of the AR4/5 CMIP3/5 model-run ranges. What should the policy response be ("oh crap!")? What if that were followed by a decade of 0.13C per decade warming? What should the policy response be then ("it's a hoax")?
Models will drive policy; nature will drive belief.
-
Winston2014 at 05:56 AM on 5 June 2014 | Models are unreliable
Have the points in this video ever been addressed here?
Climate Change in 12 Minutes - The Skeptics Case
https://www.youtube.com/watch?v=vcQTyje_mpU
From my readings thus far, I agree with this evaluation of the accuracy and utility of current climate models:
Part of a speech delivered by David Victor of the University of California, San Diego, at the Scripps Institution of Oceanography as part of a seminar series titled “Global Warming Denialism: What science has to say” (Special Seminar Series, Winter Quarter, 2014):
"First, we in the scientific community need to acknowledge that the science is softer than we like to portray. The science is not “in” on climate change because we are dealing with a complex system whose full properties are, with current methods, unknowable. The science is “in” on the first steps in the analysis—historical emissions, concentrations, and brute force radiative balance—but not for the steps that actually matter for policy. Those include impacts, ease of adaptation, mitigation of emissions and such—are surrounded by error and uncertainty. I can understand why a politician says the science is settled—as Barack Obama did…in the State of the Union Address, where he said the “debate is over”—because if your mission is to create a political momentum then it helps to brand the other side as a “Flat Earth Society” (as he did last June). But in the scientific community we can’t pretend that things are more certain than they are."
Also, any comments on this paper:
Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences
Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz
SCIENCE * VOL. 263 * 4 FEBRUARY 1994
Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.
http://courses.washington.edu/ess408/OreskesetalModels.pdf
-
heb0 at 05:41 AM on 5 June 2014 | The Skepticism In Skeptical Science
CollinMaessen - Thanks a bunch for the help. I've sent an edited version through the Contact page. Hopefully the raw format isn't too inconvenient.
-
Dikran Marsupial at 05:30 AM on 5 June 2014 | Models are unreliable
Razo wrote "Thats a kind of calibration."
sorry, I have better things to do with my time than to respond to tedious pedantry used to evade discussion of the substantive points. You are just trolling now.
Just to be clear, calibration or tuning refers to changes made to the model in order to improve its behaviour. Baselining is a method used in the analysis of the model's output (and does not change the model itself in any way).
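A minimal sketch of the distinction, with made-up numbers: baselining subtracts each run's own mean over a reference period from its output, leaving the runs themselves untouched:

```python
import numpy as np

years = np.arange(1900, 2021)
rng = np.random.default_rng(1)

# Toy "model runs": different absolute temperatures, similar trends.
trend = 0.01 * (years - 1900)
runs = [14.0 + offset + trend + rng.normal(0.0, 0.1, years.size)
        for offset in (-0.6, 0.0, 0.8)]

# Baselining: subtract each run's mean over a common reference period
# (here 1961-1990), so runs are compared as anomalies. The model is not
# changed in any way; only the presentation of its output is.
ref = (years >= 1961) & (years <= 1990)
anomalies = [run - run[ref].mean() for run in runs]

for a in anomalies:
    print(f"reference-period mean after baselining: {a[ref].mean():.6f}")  # ~0
```

After baselining, the runs' differing absolute offsets vanish while their trends and variability are unchanged, which is exactly why it is an analysis step and not a model tuning step.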
"I said, narrower is better if you want to predict a number, or justify a trend. "
No, re-read what I wrote. What you are suggesting is lampooned in the famous quote "he uses statistics in the same way a drunk uses a lamp post - more for support than illumination". The variability is what it is, and a good scientist/statistician wants to have as accurate an estimate as possible and then see what conclusions can be drawn from the results.
-
Razo at 05:16 AM on 5 June 2014 | Models are unreliable
Dikran Marsupial wrote "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons."
That's a kind of calibration. I understand the need. People here were trying to tell me that no such thing was happening, and that it was just pure physics. I didn't know exactly how it was calculated, but I expected it. I know very well that "Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute...".
"a constant is subtracted from each model run."
That's offsetting. Please don't disagree. It's practically the OED definition. Even the c in y=mx+c is sometimes called an offset.
"neither broader nor narrower is "better", what you want is for it to be accurate"
I said, narrower is better if you want to predict a number, or justify a trend. I mean this regardless of the issue of the variance in the baseline region.
-
dana1981 at 05:13 AM on 5 June 2014Richard Tol accidentally confirms the 97% global warming consensus
He wants to take Cook et al. down, but he failed miserably. His error couldn't be more obvious - he created about 300 rejection abstracts out of thin air from a claimed 6.7% error rate, when in the entire sample we only found 78 rejection papers. This has been explained to Tol several times using several different methods, and he still can't seem to grasp it.
-
Doug Bostrom at 04:49 AM on 5 June 2014The Skepticism In Skeptical Science
An excellent article, in my wholly unbiased opinion. :-)
"Unless the doubt is removed by your friend showing you a picture of Morgan Freeman standing on his porch."
And there lies the point where we discover the difference between Collin's "so-called skeptics" and the "pseudo-skeptic."
The so-called skeptic will rejoin with something along the lines of "I'll be; Morgan Freeman on your porch! Who'd a thunk it?"
The pseudo-skeptic will often follow the general path of first accusing you of having altered the photo, and then when you show it to be unaltered output from your digital camera will hypothesize that the digital camera manufacturer is in cahoots with you. More generously, they might offer that Morgan Freeman is an astounding artifact of camera malfunction.
Numerous variations abound on the overall theme of pseudo-skepticism, having in common the feature of starting as a straight line and then, if necessary, adopting the topography of a Klein bottle to avoid acknowledging the simply obvious.
-
Composer99 at 04:49 AM on 5 June 2014Richard Tol accidentally confirms the 97% global warming consensus
Basically, it seems to me that Dr Tol doesn't really dispute the existence of the scientific consensus (either in the form of consilience of evidence or the form of consilience of opinion of practicing scientists).
It appears, rather, that he wants to take down Cook et al because... well, because reasons. (At least that is the best I can come up with.)
-
CollinMaessen at 03:36 AM on 5 June 2014The Skepticism In Skeptical Science
You can always submit what you have via the contact page:
http://skepticalscience.com/contact.php
If you submit it there I will eventually receive your feedback. You could also directly contact me via my website and start an email exchange with me about this:
http://www.realsceptic.com/contact/
-
Dikran Marsupial at 03:30 AM on 5 June 2014Climate is chaotic and cannot be predicted
Razo, as I have pointed out, climate (the long term statistical properties of the weather) is not necessarily chaotic, even though the weather is. Climate models do not try to predict the behaviour of a chaotic system, but to simulate it.
"How chaos could impact climate might be more like this, I think. If one could show that global warming is affecting the chaotic indices that cause El Niño to the degree that it becomes a more frequent and long-lasting event, i.e. the regular weather, that could impact climate."
El Niño is a mode of internal climate variability; it is one of the things that gives rise to the spread of runs from a particular climate model, but (at least asymptotically) it doesn't affect the forced response of the climate (estimated by the ensemble mean), which is what we really want to know as a guide for policy.
You clearly know something about chaotic systems; however, your understanding of climate and what climate models aim to do is fundamentally misguided. Please take some time to find out more about the nature of the problem, as otherwise you are contributing to the noise here, not the signal.
-
Dikran Marsupial at 03:21 AM on 5 June 2014Models are unreliable
Razo wrote "But I'm surprised that people use it when comparing models with the actual data."
Models are able to predict the response to changes in forcings more accurately than they are able to estimate the absolute temperature of the Earth, hence baselining is essential in model-observation comparisons. There is also the point that the observations are not necessarily observations of exactly the same thing projected by the models (e.g. limitations in coverage etc.) and baselining helps to compensate for that to an extent.
"its quite another to offset model runs or different models to match the mean of the baseline."
This is not what is done: a constant is subtracted from each model run and set of observations independently such that it has a zero offset during the baseline period. This is a perfectly reasonable thing to do in research on climate change, as it is the anomalies from a baseline in which we are primarily interested.
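To make the procedure concrete, here is a minimal sketch (synthetic data, not real model output) of this kind of baselining: each series has its own mean over a common reference period subtracted, so a constant offset between model and observations disappears while the trends are untouched.

```python
import numpy as np

def to_anomalies(series, years, base_start, base_end):
    """Express a series as anomalies relative to its own mean
    over the baseline period [base_start, base_end]."""
    mask = (years >= base_start) & (years <= base_end)
    return series - series[mask].mean()

# Synthetic example: a model run and "observations" that differ by a
# constant offset (models estimate absolute temperature poorly) but
# share the same underlying trend.
years = np.arange(1961, 2001)
trend = 0.01 * (years - years[0])   # 0.01 C/yr warming, illustrative
model = 13.5 + trend                # model absolute temperatures
obs   = 14.2 + trend                # observed absolute temperatures

model_anom = to_anomalies(model, years, 1961, 1990)
obs_anom   = to_anomalies(obs, years, 1961, 1990)

# After baselining, the constant offset is gone and the two series agree.
print(np.allclose(model_anom, obs_anom))   # True
```

Each anomaly series averages to zero over the baseline period by construction, which is why the choice of baseline period matters for comparisons.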
"You seem to be saying a higher variance is better. "
No, I am saying that an accurate estimate of the variance (which is essentially an estimate of the variability due to unforced climate change) is better, and that the baselining process has an unfortunate side effect in artificially reducing the variance, which we ought to be aware of in making model-observation comparisons.
"Having the hiatus within the 95% confidence interval is a good thing, but a narrower interval is better if you want to more accurately predict a number, or justify a trend."
No, you are fundamentally missing the point of the credible interval, which is to give an indication of the true uncertainty in the projection. It is what it is; neither broader nor narrower is "better": what you want is for it to be accurate. Artificially making the intervals narrower as a result of baselining does not make the projection more accurate; it just makes the interval a less accurate representation of the uncertainty.
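This variance-reduction side effect can be demonstrated with a toy ensemble of synthetic "runs" (random walks plus a shared trend, purely illustrative, not a climate model): baselining over a recent period pinches the ensemble spread in exactly the years where the model-observation comparison is made.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_years = 50, 100
t = np.arange(n_years)

# Synthetic "runs": a shared trend plus independent random-walk noise,
# standing in for internal variability in an ensemble.
runs = 0.02 * t + np.cumsum(rng.normal(0, 0.05, (n_runs, n_years)), axis=1)

def baseline(runs, start, end):
    """Subtract each run's own mean over years [start, end)."""
    return runs - runs[:, start:end].mean(axis=1, keepdims=True)

early = baseline(runs, 0, 30)    # baseline period far from the end
late  = baseline(runs, 70, 100)  # baseline period over the last 30 years

# Ensemble spread (across-run std) over the final 15 years is
# artificially smaller when the baseline period is recent.
spread_early = early[:, -15:].std(axis=0).mean()
spread_late  = late[:, -15:].std(axis=0).mean()
print(spread_late < spread_early)   # True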
"Another thing to add, as I understand if the projected values are less than 3 times the variance, one says there is no result."
No, one might say that the observations are consistent with the model (at some level of significance); however, this is not a strong comment on the skill of the model.
"If it is over 3 times one says there is a trend, and not until the ratio is 10 does one quote a value."
No, this would not indicate a "trend" simply because an impulse (e.g. the 1998 El-Nino event) could cause such a result. One would instead say that the observations were inconsistent with the models (at some level of significance). Practice varies about the way in which significance levels are quoted and I am fairly confident that most of them would have attracted the ire of Ronald Aylmer Fisher.
"Can one use these same rules here?"
In statistics, it is a good idea to clearly state the hypothesis you want to test before conducting the test as the details of the test depend on the nature of the hypothesis. Explain what it is that you want to determine and we can discuss the nature of the statistical test.
-
heb0 at 02:35 AM on 5 June 2014The Skepticism In Skeptical Science
This is an excellent article and will be a handy resource to link to every time someone gets pedantic about SkS and the meaning of "skepticism."
This article is concise, engaging and--I think--convincing. However, it has a tremendous number of grammatical errors and awkward wordings. The first part especially could do with more liberal use of commas. It really should be combed over if this article is intended to be a long-term reference. I wouldn't mind doing it myself, but I'm not sure the best way of submitting a proofread version.
-
Razo at 02:26 AM on 5 June 2014Models are unreliable
So with regards to baselines.
I can understand the need to do it to compare models, less so with perturbations of the same model. But I'm surprised that people use it when comparing models with the actual data. It's one thing to calculate and present an ensemble mean; it's quite another to offset model runs or different models to match the mean of the baseline. This becomes an arbitrary adjustment, not based on any physics. Could you explain this to me please?
Also, Dikran you say
"Now the IPCC generally use a baseline period ending close to the present day, one of the problems with that is that it reduces the variance of the ensemble runs during the last 15 years, which makes the models appear less able to explain the hiatus than they actually are."
You seem to be saying a higher variance is better. Having the hiatus within the 95% confidence interval is a good thing, but a narrower interval is better if you want to more accurately predict a number, or justify a trend.
Another thing to add: as I understand it, if the projected values are less than 3 times the variance, one says there is no result. If it is over 3 times one says there is a trend, and not until the ratio is 10 does one quote a value. Looking at the variability caused by changing the baseline, as well as the height of the red zones in post 723, the variance appears to be about 2.0C; the range of values is about 6.0C (from 1900 to 2100). Can one use these same rules here?
-
Pierre-Emmanuel Neurohr at 01:38 AM on 5 June 2014President Obama gets serious on climate change
"Most importantly, we finally have a president who is a world leader."
We - the non-American part of the world - have a leader. You are very kind to let us know. Some of us could think that even if this plan were actually successful, Americans and their leader would continue to be the most climate-destroying people on Earth, with something like 15 to 20 t of CO2 per person per year. The US and its leader would continue to set a bad example in all matters related to the climate of the Earth through their ever-increasing obsession with raw materials and energy overconsumption, which cannot but lead to more droughts and more flooding.
It takes quite a bit of extreme nationalism to manage to not see these facts. Orwell would either laugh or cry.
-
CBDunkerson at 01:10 AM on 5 June 2014President Obama gets serious on climate change
actually thoughtful wrote: "The President's policies, while an improvement, are in the category of too little, too late. And likely a smokescreen for approving Keystone XL - all the while lulling the sheeple into a false sense of complacency."
Actually, given how long he has 'delayed deciding' on Keystone XL it seems very likely to me that Obama will wait until after the congressional elections this November and then kill the proposal. If he were going to approve it he should have done so by now. Delaying allows Democrats in fossil fuel states to boost their chances of election by campaigning in favor of it. Then, once the midterms are over, Obama can kill it without impacting the balance of congressional power for the remainder of his presidency.
As to "too little, too late"... I'd say rather that Obama is enacting regulations which would force a rapid phase-out of coal... if that weren't already happening without them. Natural gas, wind, and solar have been shredding the coal power industry in the United States. That said, these new regulations should speed up the process. Thus far, coal plants have mostly been running to 'end of life' and then shutting down in favor of other sorts of power generation. These new regulations will force many existing coal plants to shut down before reaching their end of life.
The regulations don't go far enough to completely resolve U.S. emissions problems, but again... I don't think they need to. 'Market forces' are already taking care of that. The initial fall of coal was due to natural gas, but in the past few years natural gas power development has dwindled and solar has soared... last quarter new electricity generation in the U.S. was 74% solar, 20% wind, 4% natural gas, 1% geothermal, and 1% everything else. That's 95% renewable. Obviously it will take time to replace all the existing fossil fuel based power generation, but when nearly all new electricity production developed is renewable the changeover is inevitable.
-
Razo at 00:46 AM on 5 June 2014Climate is chaotic and cannot be predicted
correction:
The key phrase is *'for certain values of the parameters'*, not 'oscillating unpredictably'.
-
Razo at 00:44 AM on 5 June 2014Climate is chaotic and cannot be predicted
I seem to be criticized for not making any assertion by the moderator, and for making false assertions by others. So I will try to make an assertion here, to at least be worthy of the criticism. ;-)
Firstly, my previous post was adding to the definition of chaos. Although I do not know why it would be rebutted, a rebuttal would be on my definition of chaos. I didn't touch the rest of the argument.
But now I will. It seems to me that people are confusing randomness and mathematical chaos a little.
"For certain values of the parameters, the overall movement of the atmospheric air was oscillating unpredictably"
The key phrase is 'for certain parameters', not 'oscillating unpredictably'.
"Actually proving that these indices are chaotic is exceedingly difficult, but Tziperman et al. (1994) showed in a simple model how El Niño is likely a seasonally induced chaotic resonance between the ocean and the atmosphere."
The key phrases here are 'induced chaotic resonance', which I called 'alternative equilibrium configurations', and 'proving that these indices are chaotic is exceedingly difficult'. But then, I'm not sure if the next sentence is correct:
"Chaotic influences from oceans and volcanoes etc. makes both weather more unpredictable and creates the unpredictable part of the 'wiggles' around the average trend in climate"
That is, although volcanic eruptions are chaotic in the regular sense of the word and can impact the climate greatly, do they involve indices that are chaotic at certain values? Also, I don't think the formation of high-pressure air masses comes about after their chaotic parameters reach a critical value.
How chaos could impact climate might be more like this, I think. If one could show that global warming is affecting the chaotic indices that cause El Niño to the degree that it becomes a more frequent and long-lasting event, i.e. the regular weather, that could impact climate. Or maybe if one could show that the Pacific trade winds, which are presently causing a slowdown of average global surface temperatures, are an induced chaotic resonance caused by global warming itself. (These are just absolute hypothetical ideas by me; I am not saying this is happening. The point is indices reaching critical values.)
Regarding Lorenz's chaotic systems of rising warm air, well, it's on the scale of local weather that's going on all the time. I would guess it is accounted for empirically in the models as required (depending on the purpose of the model: forecasting, climate change, or downscaling etc.)
Moderator Response:[JH] This post, sans the first paragraph, is the type of post we are accustomed to seeing on the SkS comment threads. It has a beginning, a middle, and an end.
FYI, Moderation complaints are also prohibited by the SkS Comments Policy.
Please note that posting comments here at SkS is a privilege, not a right. This privilege can and will be rescinded if the posting individual continues to treat adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.
Moderating this site is a tiresome chore, particularly when commentators repeatedly submit offensive or off-topic posts. We really appreciate people's cooperation in abiding by the Comments Policy, which is largely responsible for the quality of this site.
Finally, please understand that moderation policies are not open for discussion. If you find yourself incapable of abiding by this common set of rules that everyone else observes, then a change of venue is in the offing. Please take the time to review the policy and ensure future comments are in full compliance with it. Thanks for your understanding and compliance in this matter.
-
dhogaza at 00:12 AM on 5 June 2014President Obama gets serious on climate change
"Why didn't he do this the day he entered office in 2008?"
Because the Supreme Court ruling affirming the right of the EPA to regulate cross-state emissions was announced on April 29, 2014, perhaps? Remember that an appeals court had struck down that right a couple of years earlier.
-
dhogaza at 00:10 AM on 5 June 2014President Obama gets serious on climate change
"How about "FUSION", our own earth-bound sun feeding electricity into a revamped National Grid by 2030... there's a goal!!"
If past history is any indication, in 2030 fusion will still be 50 years in the future, just as it is today.
Money is being spent on fusion research. Progress is still unimpressive. Perhaps the nut will crack someday, perhaps it will always be a dream.
But criticizing Obama for not banking on fusion is rather silly.
-
geoffrey brooks at 22:34 PM on 4 June 2014President Obama gets serious on climate change
I can see from the howls of protest from the coal industry and coal states that:
1) Coal becoming only 30% (down from 40%) of the USA's electrical power by 2030 is too little, too late. A target the USA is unlikely to meet with the measly, greedy politicians we have in charge.
2) If US clean coal technology truly exists - it should be given to China and other coal burners - and we should help implement it. A $5 tax on oil to invest in this and other clean energy - such as fusion.
3) We should keep the carbon in the ground - coal is the dirtiest and most expensive to clean up. The 2030 goal should be less than 5% of electricity from coal.
4) We already subsidize the farming industry not to grow crops...
why not the coal industry not to mine it???
A tax on Natural Gas (much cleaner) used to generate electricity could be levied and given to the coal industry to keep their coal in the ground. The monies for not mining should be spent on re-educating the work-force, maintaining their pensions, providing transitional payments, and ensuring that they get excellent health benefits - not to enrich the greedy. If the miners are not mining, the coal will stay where it belongs in the ground - NOT in the air.
5) President Obama - you need imagination, planning and foresight to help save the planet. Let's see some. How about "FUSION", our own earth-bound sun feeding electricity into a revamped National Grid by 2030... there's a goal!!
Geoffrey Brooks
-
Dikran Marsupial at 18:56 PM on 4 June 2014Climate is chaotic and cannot be predicted
Razo, I should point out that just because weather is chaotic, that does not imply that climate (the long term statistical properties of the weather) is similarly chaotic. It is not difficult to think of other physical systems where this is the case, for example a double pendulum in the presence of an electromagnet.
Sadly expertise in one field is often a recipe for the Dunning-Kruger effect when moving into a different field as it can blind you to the important differences between fields and give undue confidence in ones ability that makes you unable to see your mistakes. The climate modellers are experts in their field, best to understand first and make assertions afterwards.
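A toy illustration of this weather/climate distinction, using the chaotic logistic map rather than anything resembling a real climate model: individual trajectories are unpredictable, but their long-run statistics are set by the parameter, not the initial conditions.

```python
def logistic_trajectory(x0, r=3.9, n=100_000):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # tiny initial perturbation

# "Weather": after ~100 steps the two trajectories bear no
# resemblance to each other (sensitive dependence on initial state).
diverged = max(abs(x - y) for x, y in zip(a[100:200], b[100:200]))
print(diverged > 0.5)   # True

# "Climate": the long-term means are nearly identical, because the
# statistics are determined by the parameter r, not the start point.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(abs(mean_a - mean_b) < 0.01)   # True
```

The analogy is loose (the real climate system is forced and far higher-dimensional), but it shows why chaos in the trajectories does not by itself make the long-term statistics unpredictable.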
-
calyptorhynchus at 15:05 PM on 4 June 2014President Obama gets serious on climate change
oops 2009
-
calyptorhynchus at 13:12 PM on 4 June 2014President Obama gets serious on climate change
Why didn't he do this the day he entered office in 2008?
-
Tom Curtis at 09:59 AM on 4 June 2014Climate is chaotic and cannot be predicted
Razo @71, I am going to disagree with TD. You have not shown the relevance of your claims. You are not a researcher into climate models drawing inspiration from another field. Nor do you show how that inspiration from another field should affect our thinking about climate models. At best you have pointed out that in another field there are certain problems and that it is possible that the same problems exist for climate models. As a response to that, pointing out that it is also possible that they do not exist for climate models is an adequate rebuttal.
In this case, however, we can make a stronger rebuttal because we know future climate states are constrained by the requirements of conservation of energy, and hence constrained by the forcing history. As such, it is analogous to a hollow ball containing a 3D triple pendulum running down a U-shaped track. The detailed motion of such a ball will be chaotic, but the mean path and velocity of the ball will be well constrained, and departures from those means will be short-term variations only.
Moderator Response:[TD] Tom, I didn't write that Razo showed the relevance of his/her claims, only that he/she at least tried this time. Better than before. Nonetheless, Razo's followup post was mere continuation of his/her sermonizing without addressing the specific features of climate models that have been explained to him/her; so I deleted it.
-
Razo at 08:33 AM on 4 June 2014Climate is chaotic and cannot be predicted
i will respond still to this question as it is directed to me. You may delete them if you wish.
TC, it's still relevant because the mathematics of physical phenomena can be very similar between many fields in the physical sciences: electromagnetics, fluid dynamics, thermodynamics... (it's been many years now since university). Climate models integrate partial differential equations. One can get inspiration from other fields, and many breakthroughs are made this way.
I don't want to take more room on this post for this, but it's a fundamental point.
Moderator Response:[TD] Good, at least you explained what you claim is relevant. However, you continued to fail to address how the specifics of climate models that have been explained to you are trumped by the "lessons" of models of entirely different domains.
-
Mal Adapted at 08:28 AM on 4 June 2014President Obama gets serious on climate change
"We finally have a president that understands science."
Well, yes. My votes for Obama have been decisions on the margin, like all my votes have become over the years. He's done some things I really don't like, but since the only 2012 GOP presidential primary candidate to explicitly support both the teaching of evolution in public schools and the scientific consensus on AGW dropped out of the race early, and several of the rest actually renounced their previous support for the consensus, my choice for Obama was clear.
-
Tom Curtis at 08:18 AM on 4 June 2014Climate is chaotic and cannot be predicted
Razo @69, so? Did the person in question have experience of climate models, and tell you that the same considerations applied to climate models? Did they explain how the climate was supposed to evade the limits placed by conservation of energy on variability in climate? If not, your analogy has no relevance to the discussion, and your implied argument from authority is irrelevant.
-
Razo at 07:55 AM on 4 June 2014Climate is chaotic and cannot be predicted
I would point out that the person who told me this could read Bell Laboratories research on superconductors and say that the same math can apply to solid mechanics and bifurcation: S. T. Ariaratnam.
Moderator Response:[JH] So what! Name dropping is no substitute for well-reasoned comments that are relevant to the OP, or in response to someone else's on-topic comment.
Per the SkS Comments Policy (which you should read in its entirety):
The purpose of the discussion threads is to allow notification and correction of errors in the article, and to permit clarification of related points.
Very few of your posts have met this standard.
-
Razo at 07:53 AM on 4 June 2014Climate is chaotic and cannot be predicted
Okay then.
I offered the phrase 'alternative equilibrium configuration' as a concept to help people understand chaotic systems changing states. Before, I used to have a kind of mythical understanding of bifurcation problems. When it was told to me, it was a real help in understanding.
Second, I continue the very simple analogy of modelling columns to suggest requirements for a model: the alternative configuration has to be programmed into the model, and the numerical algorithm has to be very robust.
These may seem trivial to some, but I don't think they are. I am not making any statement on the worthiness of existing GCMs.
Moderator Response:[TD] This site is not an appropriate place for you to ruminate on topics marginally or totally irrelevant to climate change. You have responded to moderator requests to specify the relevant point you are trying to make by posting more ruminations and your own admission that you are not addressing the "worthiness of existing GCM." We will begin to simply delete your posts that are irrelevant to the topic.
-
John Hartz at 07:14 AM on 4 June 2014Climate is chaotic and cannot be predicted
All: Please do not post any responses to Razo until he makes a specific point about the OP.