Recent Comments
Comments 41701 to 41750:
-
Bob Lacatena at 23:59 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Leto says:
As soon as we start to thumb based on what we think of as "true", then we are also voting up comments that support our position.
This presumes that accuracy is relative, and that opinions count as accurate facts. Accuracy should not be applied to positions, but it should be applied to facts. If someone says that the globe has been cooling for the past 17 years, that is flat-out inaccurate. If they say that there has been an apparent slow-down in warming for the past 17 years, that is an arguable point and as such not strictly inaccurate.
Facts in this debate are either true, false, uncertain (but that uncertainty is supportable in the peer-reviewed literature), or they aren't facts at all but merely beliefs and opinions.
One of the big problems that false skeptics seem to have is that they are unable to separate fact from belief. They can't see the difference in their own mindset, and then they project that mindset onto others... labeling their understanding of the science as a belief rather than an acceptance of the facts.
There is a distinction, and people should vote according to that distinction.
-
ellisr01 at 23:53 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
I find the section on the geosphere still suffering from a lack of clarity, but thank you for removing the very confusing point about the 5 year residency time. I am of the opinion that a lot of clarity could be gained by focusing heavily on the root cause of elevated atmospheric CO2: the movement of stored reduced carbon out of the geosphere (lithosphere) for the purpose of harvesting the stored chemical energy, and the subsequent release of the gaseous oxidized carbon, which goes where any released gas will go: into the atmosphere. Describing this one-way, man-made process clearly would go a long way towards bringing understanding. It would both explain the fundamental problem and at the same time point out that fossil fuels are non-renewable. This geosphere-to-atmosphere transfer of carbon is placing a great load on the natural processes that remove CO2 from the atmosphere. The CO2 "queued up" in the natural CO2 removal process, mainly chemical weathering, is what is creating all the trouble.
-
John Mason at 23:21 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
OK DM - could you take a look at the last paragraph of the Geosphere section now I've revised it - is that clearer?
Peer-review live - great fun!!! - but if it improves the end product it is well worthwhile....
-
adeptus at 22:44 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
I thought the science was settled on this issue?
Moderator Response:[JH] You are skating on the thin ice of violating the following section of the SkS Comment Policy:
- No link or picture only. Any link or picture should be accompanied by text summarizing both the content of the link or picture, and showing how it is relevant to the topic of discussion. Failure to do both of these things will result in the comment being considered off topic.
-
chriskoz at 22:33 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
OPatrick@13,
Think about this case as "something I didn't know about which sparked the discussion & subsequently the discussion increased my knowledge".
With time, you will learn to distinguish true "skeptical ideas" from long-debunked trolls, and your recommendation skill will improve.
I think your desire to make recommendations "more accurate" would complicate the system with little overall benefit in terms of the data John collects from it. If I'm correct in this supposition, then John agrees, per his comment @12: a second dimension "doesn't add enough value to be worth the complication".
-
OPatrick at 22:11 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Is there any way of unrecommending a comment you've recommended? I occasionally read a comment and think it's reasonable but later in the comments thread someone gives context which shows that it was not.
-
John Cook at 20:35 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Thanks for everyone's comments. Re the danger of amplifying the echo chamber, there is a possibility of that but it's not inevitable - it may depend on the specific community and the specific context (e.g., the fact that only registered users can rate, that banned trolls cannot rate, the instructions provided).
Re fredb's comment about comments floating to the top, our system doesn't change the ordering of comments based on ratings. They're still ordered chronologically.
Re Harry H's comment about 2 dimensions of ratings, it's an interesting idea and I am a big fan of collecting more data. However, I'm also a big fan of keeping things as simple as possible. In this case, I think a second dimension doesn't add enough value to be worth the complication.
Leto, if someone is unknowingly repeating a myth, that should earn a down thumb. Determining a person's motives in an online comment is always problematic so we have to take the information at face value.
Scaddenp, will think about whether to add this feature to the Recent Comments page. On the one hand, it's better to see a comment within the context of the comment thread it belongs to. On the other hand, adding the feature to Recent Comments results in collection of more data. Hmm...
Chris S., I see your point about showing the result possibly skewing results. I believe Heisenberg anticipated this in the early 20th Century (you should formalise this as the Chris S Uncertainty Principle). But I think you lose more than you gain by not providing feedback.
John Brookes, I hope you don't try to earn the maximum number of down thumbs at SkS!
Yves, I see the down thumbs as filling the role of an alert button. Well, not exactly but in that general direction.
-
Dikran Marsupial at 20:14 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
John, as residence (turnover) time and adjustment time already have definitions, I think using the phrase "effective residence time" to mean "adjustment time" is likely to further prolong the confusion between residence time and adjustment time that lies at the heart of this myth. It is a shame that the new report uses the phrase "residence time of a perturbation of CO2", which seems to needlessly confuse the two terms slightly.
Leto, I don't think there is any real controversy regarding residence time. It is defined as the ratio of the mass of the atmospheric reservoir (which is known with good accuracy) to the total annual uptake from the atmosphere. This is less well known, but a figure of 4-5 years for residence time is generally accepted (see e.g. the glossary of the AR4 WG1 report for "lifetime"). I suspect the residence time is slightly different for different carbon isotopes, but not enough for it to make a significant difference to the bulk residence time of the atmosphere.
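As a rough numerical illustration of that ratio (a minimal sketch; the reservoir and flux figures below are illustrative round numbers of about the right order of magnitude, not authoritative values):

```python
# Back-of-the-envelope residence (turnover) time: reservoir mass / gross annual uptake.
# The figures below are illustrative round numbers, not authoritative values.
atmospheric_carbon_gtc = 800.0     # approximate carbon content of the atmosphere, GtC
gross_annual_uptake_gtc = 180.0    # approximate gross uptake by oceans and land, GtC/yr

residence_time_years = atmospheric_carbon_gtc / gross_annual_uptake_gtc
print(f"residence (turnover) time ~ {residence_time_years:.1f} years")
# ~4-5 years, in line with the figure quoted above. Note this is NOT the
# adjustment time, which describes how long an added pulse of CO2 takes to decay.
```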
-
Klapper at 19:57 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
I was going to make a comment about the "echo chamber" effect, but I see Flakmeister has already beat me to it. I'm not a prolific commenter here, but I think the citations "requirement" is nonsense, particularly since there's a fairly apparent double standard about posting non-peer reviewed references deemed "friendly" (like Tamino, or Tom Curtis), vs. "unfriendly" non-peer reviewed analysis.
Moderator Response:(Rob P) - There is little point in SkS being like most of the internet - full of unsubstantiated opinion. Peer-reviewed scientific literature is the 'gold standard' not because it is perfect, but because it is research carried out by experts and subject to the scrutiny of other experts. Rubbish still gets through though - Ole Humlum & co-authors, for instance, think human industrial emissions of carbon dioxide magically disappear.
If commenters make claims, they should be able to back them up with facts. We don't think that's unreasonable.
-
John Mason at 19:42 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
Have added some other useful links regarding carbon sinks in the Geosphere section of the post. Keep the comments coming - the clearer this ends up the better :)
-
John Mason at 19:22 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
Leto,
Perhaps a better term is 'effective residence time', as individual CO2 molecules are not the big picture. For a discussion of this, see:
-
Yves at 19:13 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Having some experience with a French information website, I would suggest introducing recommendation only. Thumbs up without thumbs down. Along with an alert button in case of uncivil behaviour. This would be a little more indicative of the added value of a contribution - not intrinsic added value but in the context of the initial article, including timeliness, on-topic,...- with all the unavoidable caveats such as confirmation bias, groupthink,... since any participative website represents some kind of community.
Besides, I would also suggest setting a limit for contributions' length, in order to avoid lengthy rants. In case of a valuable, pedagogic contribution, this limit could be exceeded with editor permission - though a suggestion for editing an article could be welcome.
-
Dikran Marsupial at 18:48 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
franklefkin wrote "Another thought. Perhaps the route you are suggesting is correct. If that is the case, why would the IPCC use the wrong ranges in AR4?"
The IPCC didn't use the wrong range for the subject of the discussion in the report (and at no point have I suggested otherwise). It just isn't a direct answer to the question that is being discussed here. Sadly you appear to be impervious to attempts to explain this to you, and are just repeating yourself, so I will leave the discussion there.
-
John Brookes at 18:18 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Having been a regular poster at Jo Nova's blog, where the thumbs up/down have been there for a while, I can say I quite like them.
A quick scan for lots of thumbs down leads me to comments that are more interesting. It also gave me a goal for a while - to try and get as many thumbs down as I could.
But all skeptic blogs seem to be, dare I say, less skeptical these days. Too many gullible skeptics...
-
Chris S. at 18:10 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Having the scores next to the thumbs may skew the system. People are more likely to attempt to game the system if they can see the results of their actions. I would advocate just having the thumbs if you wanted to see a truer reflection of voting preferences.
-
jyyh at 18:01 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
thumbs up voting button seems to work also.
-
jyyh at 17:58 PM on 8 October 2013
Arctic sea ice "recovers" to its 6th-lowest extent in millennia
looks like it works. how do I make it go to zero again?
-
jyyh at 17:57 PM on 8 October 2013
Arctic sea ice "recovers" to its 6th-lowest extent in millennia
just testing the thumbs down voting system...
-
JasonB at 17:50 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Following Tom's lead, here is a graph based on my suggestions @ 125:
I have simply taken Tom's graph @ 79 and plotted SASM's minimum and maximum trend lines starting at the same location and using the same slopes that SASM used.
As expected, there are quite a few model realisations that go much further beyond the supposed minimum and maximum than the actual temperature record does.
Given that those minimum and maximum trends were derived from the model realisations, and that applying this technique would lead to the nonsense conclusion that the models do not do a good job of predicting the models, clearly the approach is flawed.
Note also that shortly after the starting date, practically all of the models dramatically drop below the supposed minimum, thanks to Pinatubo! Indeed, at the start date, all of the models lie outside this supposed envelope. Again, this is because trend lines are being compared with actual temperatures.
On a different note:
In each of these cases, model temperatures are being compared to HadCRUT (and sometimes GISS) temperatures.
However, even this is not strictly an apples-to-apples comparison, as has been alluded to before when "masking" was mentioned. AFAIK the model temperatures being plotted are the actual global temperature anomalies for each model run in question, which are easy to calculate for a model. However, HadCRUT et al are attempts to reconstruct the true global temperature from various sources of information. There are differences between each of the global temperature reconstructions for the exact same actual temperature realisation, for known reasons. HadCRUT4, for example, is known to miss out on the dramatic warming of the Arctic because it makes the assumption (effectively) that temperature changes in unobserved areas are the same as the global average temperature change, whereas e.g. GISTEMP makes the assumption that temperature changes in those areas are the same as those in the nearest observed areas.
To really compare the two, the HadCRUT4 (and GISTEMP, and NOAA) algorithms should be applied to the model realisations as if they were the real world.
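A minimal sketch of the coverage-masking idea (the grid, anomaly field and coverage mask below are hypothetical synthetic stand-ins; a real comparison would use the actual HadCRUT4 grid, coverage masks and method):

```python
import numpy as np

# Sketch: compare a model's "true" global mean with the mean you get if you only
# sample the model where an observational network has coverage. Everything here
# is made up for illustration; it is not the HadCRUT4 algorithm itself.
np.random.seed(0)
nlat, nlon = 36, 72
lats = np.linspace(-87.5, 87.5, nlat)
weights = np.cos(np.radians(lats))[:, None] * np.ones((nlat, nlon))  # area weights

model_field = np.random.normal(0.5, 0.3, (nlat, nlon))  # hypothetical anomaly field
model_field[lats > 70, :] += 1.0                         # mimic amplified Arctic warming

coverage = np.ones((nlat, nlon), dtype=bool)
coverage[lats > 70, :] = False                           # pretend the Arctic is unobserved

true_mean = np.average(model_field, weights=weights)
masked_mean = np.average(model_field[coverage], weights=weights[coverage])
print(f"full-coverage mean: {true_mean:.3f}  masked mean: {masked_mean:.3f}")
# Masking out the (warming) Arctic biases the sampled mean low, which is why
# like-with-like comparisons between models and observational products matter.
```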
However, in the current circumstances, this is really nitpicking; even doing an apples-to-oranges comparison the real-world temperature reconstructions do not stand out from the model realisations. If they did then this would be one thing to check before jumping to any conclusions.
-
scaddenp at 17:21 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
No way to vote up or down on the "Comments" stream which is where I would normally read comments. I see the posts on Feedly and I would only go to the thread itself when I want to add a comment.
-
Leto at 17:02 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
John,
Thanks for the post. I am confused on one point - how do you reconcile this statement:
"By contrast, the greenhouse gas carbon dioxide has an 'atmospheric residency' time of many centuries: once up there it takes a long time to get rid of it again."
... with previous discussion of the residence time of CO2 in the recent Essenhigh thread . In the comment section, Dikran Marsupial wrote:
"Essenhigh's 5 year figure for residence time is correct, and indeed agrees with the figure given in the IPCC WG1 report. His error lies in not understanding the distinction between residence time and adjustment time."
I agree that adjustment time is the more important figure, but what is the consensus on residence time for CO2 in the atmosphere? Is there a terminological issue here?
-
Leto at 16:48 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
The post says this: "The point is not to vote up comments that support your position and to vote down comments that support their position."
But then the post also asks us to thumb according to multiple criteria including this one: "Accuracy. Is it true, or does it merely propagate an inaccurate myth?"
I feel that these two instructions are contradictory. As soon as we start to thumb based on what we think of as "true", then we are also voting up comments that support our position. What are we to do with a contrarian who is unknowingly re-stating a previously debunked myth, or a Wattsupian refugee floundering in confusion, but who is civil and respectful and genuinely here to learn? I think they should earn a thumb.
-
Tom Curtis at 16:38 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Here is a graph based on my suggestions @122:
As you can see, HadCRUT4 does not drop below model minimum trends, although the ensemble 2.5th percentile certainly does. Nor does HadCRUT4 drop below the 2.5th percentile line recently (although it dropped to it in 1976).
Skeptics may complain that the trends are obviously dropped down with respect to the data. That is because the HadCRUT4 trendline is the actual trend line. Trendlines run through the center of the data, they do not start at the initial point.
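That property of an ordinary least-squares fit is easy to check numerically (a minimal sketch with synthetic data, not Tom's actual data or method):

```python
import numpy as np

# An OLS trend line passes through (mean(x), mean(y)), not through the first data point.
rng = np.random.default_rng(1)
years = np.arange(1990, 2014)
anoms = 0.02 * (years - 1990) + rng.normal(0, 0.1, years.size)  # synthetic anomalies

slope, intercept = np.polyfit(years, anoms, 1)
print("fit at mean(x):", round(slope * years.mean() + intercept, 4),
      " mean(y):", round(anoms.mean(), 4))   # these two are identical
print("fit at 1990:  ", round(slope * 1990 + intercept, 4),
      " y(1990):", round(anoms[0], 4))       # these generally are not
```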
I will be interested to hear SAM's comments as to why this graph is wrong or misleading, and why we must start the trends on a weighted 5 year average so as to ensure that HadCRUT4 drops below the lower trend line.
-
Harri H at 16:21 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
It might be interesting to note that the leading Finnish newspaper Helsingin Sanomat (www.hs.fi) uses a two-dimensional system with thumbs up/down in two categories: "I agree" and "Well argumented".
These seem highly correlated though...
-
fredb at 15:57 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
It's not so much that this approach produces uniformity, but rather that in its simple implementation highly voted comments float to the top. Thus, in order to see other comments, such as highly down-voted comments, you have to scroll down and down ... most people wouldn't bother.
I encourage a look at the slashdot.org approach to crowd-sourced moderation of comments; it works exceptionally well as a filter while still allowing people to see the broader range of comments.
-
Joshua at 15:49 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
It will be interesting to observe this system. Specifically, it will be interesting to see whether any "skeptical" comments receive high ratings. I agree with Flakmeister - in that whenever I've seen such systems employed on blogs, they seem to have largely functioned to reinforce uniformity, or a predominant viewpoint. Even still, the system may help to reinforce a more constructive engagement within a limited range of opinions.
It would say a lot about this site if civil, well-written, and well-supported "skeptical" arguments got some high ratings. Of course, that would mean that there would have to be a substantial number of contributors who don't think that using those descriptors for "skeptical" arguments is oxymoronic.
-
Flakmeister at 14:45 PM on 8 October 2013
SkS social experiment: using comment ratings to help moderation
Well, I may be proven wrong but the up/down system doesn't do the job of moderation... Witness Zero Hedge, where a high number of "greenies" only demonstrates the resonance within the echo chamber...
Call me skeptical....
Moderator Response:[PW] I'm fairly sure JC and the rest of the SkS team aren't intending the thumbing system to take the place of regular moderators: quite the contrary--and speaking as a moderator--if/when I see multiple thumbs-down, *generally* it can be taken as a sign of a *possible* troll/contrarian, and, for me, just aids in my 'drive-bys,' daily, of the threads.
-
John Mason at 14:42 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
OK I have edited the text accordingly. However, although the term 'strong' was in retrospect incorrect, 2010 did see moderate El Nino conditions for a time:
http://iri.columbia.edu/climate/ENSO/currentinfo/archive//201001/QuickLook.html
Tom - I'd call one of the five biggest El Ninos in a century pretty darned impressive!
-
JasonB at 13:13 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Another way of illustrating the problem with SASM's graph:
Keep the minimum and maximum trend lines the same as they are now but instead of plotting HadCRUT, plot instead the individual model realisations (Tom's spaghetti graph @ 79).
Now the model realisations will be going outside of the "minimum" and "maximum" trend lines, and many of them will be "pushing the limits" — in fact, no doubt some of them go outside those "limits". How is it possible that the actual model realisations that are used to create those "limits" could go beyond them? Because those "limits" are not the limits of the monthly (or five-yearly moving average) temperature realisations, they are the range of the trends of those realisations. And they certainly do not start at that particular point in 1990 — each realisation will have its own particular temperature (and five-year moving average) at that point in time.
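A toy numerical illustration of this effect (entirely synthetic series standing in for model realisations; not the CMIP3 data):

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(0, 25 * 12)            # 25 years of monthly data
runs = []
for _ in range(10):                       # ten synthetic "model realisations"
    trend = rng.uniform(0.010, 0.025) / 12    # deg C per month
    noise = rng.normal(0, 0.12, months.size)  # stand-in for internal variability
    runs.append(trend * months + noise)
runs = np.array(runs)

trends = np.array([np.polyfit(months, r, 1)[0] for r in runs])
lo_line = trends.min() * months           # "minimum trend" line anchored at the origin
hi_line = trends.max() * months           # "maximum trend" line anchored at the origin

outside = np.mean((runs < lo_line) | (runs > hi_line))
print(f"{outside:.0%} of monthly values lie outside the min/max trend envelope")
# The envelope bounds the trends, not the month-to-month values, so excursions
# outside it are expected even for the very runs that define it.
```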
-
Rob Honeycutt at 13:01 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
SAM... Actually, I find your chart interesting in terms of illustrating why there is a problem centering on 1990.
Look at your chart for a moment. Do you notice how 1990 is at the peak of the 5 year trend? Try centering the start point forward or back a few years and see if you get a different picture.
Do you see how you can get a very different interpretation by just adjusting a few years? That indicates to me that that method doesn't provide a robust conclusion. Centering on 1990 certainly provides a conclusion that skeptics prefer, but it's not at all robust.
But again, as JasonB just restated, you're treating this as an initial condition problem when it's a boundary condition problem. You have to look at the full band of model runs, including both hindcast and projections, and compare that (properly centered) to GMST data.
-
Paul Pukite at 12:46 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
My eyes lit up when I noticed how seriously you are treating the various realms of the Earth's climate system. I am starting up an earth sciences semantic web server and blog at http://ContextEarth.com. This is designed to encompass the various realms of the earth and its natural resources, using an organization provided by the SWEET ontology from JPL. It's all open sourced so I intend to open it up to those who have some interest in the topic.
An example of a recent post is on compensating the GISS temperature record with the SOI time series. This analysis was inspired by charts that Kevin C and Icarus posted on SkS recently.
-
JasonB at 12:41 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
SASM @ 117,
Basically, this is the same chart as Tom Curtis has drawn @88, except I continue to show the raw monthly HADCRUT4 temperature data and a 5-year center moving average.
No, it's not. Tom's chart draws trendlines only. As a consequence, there is no initial value problem and all trend lines can safely start at the same point.
You show actual temperature anomalies and compare them with trendlines. This has two problems:
1. You anchor the trendlines to start on the 5-year centred moving average temperature in 1990. If you really want to go down this path and you want to do it "properly", then you should use a much longer average than 5 years. Given that climate is commonly defined to be the average of 30 years, you should use the 30-year centred moving average at 1990. By my calculation that would drop the starting point by 0.04 C, making a big difference to the appearance of your chart. (See the sketch after these two points.)
2. More importantly, comparing monthly data with straight trendlines will naturally show periods where the monthly data (and even smoothed data) goes outside those trendlines, even when those trendlines are minimum and maximum trend lines. That's because they're the minimum and maximum range for the trend, not the minimum and maximum values for the monthly figures at any point in time! Plot the trend line for HadCRUT starting at the same point and ending today. Does it lie well within the minimum and maximum trend lines? Obviously it must, this is what Tom showed. Include the range of uncertainty for the trend itself. What is the likelihood that the true trend does not lie within the minimum and maximum range forecast? As a bonus, if you include the range of uncertainty for the trend itself then you can generate shorter and shorter trends for comparison, and if you do, you'll find that the forecast trends continue to overlap the range of trend values despite the actual trend value swinging wildly because the range will naturally grow wider as the time period becomes shorter.
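A minimal sketch of computing the two candidate anchor points (the `years`/`hadcrut` series below is a synthetic stand-in, not the real HadCRUT4 record):

```python
import numpy as np

def centred_moving_average(series, window):
    """Simple centred moving average; NaN where the window does not fit."""
    out = np.full(series.shape, np.nan)
    half = window // 2
    for i in range(half, series.size - half):
        out[i] = series[i - half:i + half + 1].mean()
    return out

# Placeholder annual anomaly series, e.g. 1960-2013 (synthetic, for illustration only).
years = np.arange(1960, 2014)
hadcrut = np.random.default_rng(3).normal(0, 0.1, years.size) + 0.015 * (years - 1960)

i1990 = np.where(years == 1990)[0][0]
print("5-year anchor :", centred_moving_average(hadcrut, 5)[i1990])
print("30-year anchor:", centred_moving_average(hadcrut, 31)[i1990])  # odd window to stay centred
# With real data the two anchors differ, which shifts where anchored projection
# trend lines appear to start on the chart.
```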
The bottom line that you have to ask yourself when generating these charts is "How is it possible that I can reach a different conclusion by looking at my chart to what I would infer from looking at Tom's or Tamino's chart (second figure in the OP)?" If the answer is not immediately obvious to you then you need to keep working on what the charts mean.
Dana goes to great length to attempt to explain that the left draft chart was a mistake due to baselining issues. My chart of the raw temperature data and Tom’s min/max model looks very similar to the draft IPCC chart. I doubt the people developing the draft chart made a major baseline mistake as claimed by Dana.
You shouldn't need to "doubt", it's obvious that they made a mistake. It makes no sense to anchor your projections on the temperature record for a particular year. The fact that you think Tom's chart "looks very similar to the draft IPCC chart" means you haven't understood the fact that by comparing trends alone Tom has completely sidestepped the issue.
If you really think you're onto something, then to prove it you should be able to do so using either Tom's chart or Tamino's. If you cannot, and you cannot say why Tom's chart or Tamino's chart do not support your claims, then you need to think a little bit more.
-
Tom Curtis at 12:40 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
One Planet Forever, I like graphs because they convey quantities (approximately) very quickly. In that context, the 97/98 El Nino was not a super El Nino. It was only the fifth largest on record. It had the misfortune of not coinciding with a major volcanic eruption (unlike the largest), and of occurring in a time of elevated global temperatures (unlike the other four stronger El Ninos). 2010 was so weak an El Nino that it almost qualified as a La Nina.
-
One Planet Only Forever at 12:28 PM on 8 October 2013
A rough guide to the components of Earth's Climate System
John Mason,
I am not a fan of unquantifiable terms like "Super vs. Strong" to indicate a difference. As an engineer I tend to prefer the more precise presentation of differences. The clarification of El Nino strengths could be made by saying the 1997/98 event was significantly stronger than the 2009/10 event and referring the reader to the NOAA ENSO History. Even saying one was significantly stronger really doesn't "quantify the difference". And I think a reference directly to the NOAA ENSO history helps a person figure it out for themselves. It may also lead a person to explore more of the information that is available. They might decide to open a few of the other links at NOAA.
-
Tom Curtis at 12:20 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
SAM, there are several problems with your graph as is.
The largest problem is that you do not show the observed trend. If you are showing the natural variation of the data, you should also show the variation in the models for a fair comparison, ie, something like the inset of my graph @79. Alternatively, if you want to compare trends, compare trends!
If you also want to show the actual data, that is fine. The way I would do it would be to show the actual 1990-2013 trend for the data, properly baselined (ie, baselined over the 1990-2013 interval). I would then show the range and mean (or median) of the model trends baselined to have a common origin with the observed trend in 1990.
Doing this would offset the origins of the trend comparison from the temperature series. That has the advantage of making it clear that you are comparing trend lines; and that the model trend lines shown are not the expected range of observed temperatures. Ie, it would get rid of most of the misleading features of the graph.
You may also want to plot the 2.5th to 97.5th percentiles (or min to max) of the model realizations set with a common 20-30 year baseline (either 1961-1990, or 1981-2000) to allow comparison with the expected variation of the data on the same graph. That may make the graph a little cluttered, but would allow comparison with both relevant features of the model/observation comparison.
As is, you do not compare both. Rather you compare one relevant feature of observations with the other relevant feature of models; and as a result allow neither relevant feature to actually be compared.
-
Leto at 12:15 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Tom @ 118,
The 2.5th and 97.5th centiles would be of interest, given the traditional (but arbitrary) interest in the central 95% of a spread of values.
-
Leto at 12:11 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Thanks Tom and Jason B.
That sounds perfectly reasonable.
I presume the conclusions do not materially differ if 2013 is used as the endpoint for both models and observations? The comparison might be neater, though, with all the talk on both sides of apples and oranges.
Of course, it is a shame that such an important issue gets bogged down in minutiae in the first place, so it is with regret that I raise such trivia. Thinking defensively, though, there may be advantages in using 2013 in such a table.
-
StealthAircraftSoftwareModeler at 12:05 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Oops. Moderator, the URL for the larger image @117 is:
http://tinypic.com/r/5c105z/5
-
Tom Curtis at 12:04 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Several people have suggested that the first graph in post 79 may be useful for a post. Thanks. Several decisions should be made if it is to be so used. First, I have used the full ensemble rather than just one run per model as is used by AR4. This may lead to accusations of deliberate cluttering as a cheat, so, if the post authors want to use the graph, do they want a new version with just one member per model? Further, in such a new graph, would they also like the 2.5th and 97.5th (or 5th and 95th) percentiles marked, as well as the minimum and maximum, on the inset? Further, currently if you know what you are looking for you can pick out the observed data because:
a) They are the two top most lines, and hence are never overlaid by model runs;
b) They end in 2012; and
c) They track each other closely (unlike all other model runs) making it possible to distinguish them easily if you look closely, and know what you are looking for.
Do you want just one observed temperature series to obviate (c) above?
And finally, do you want a similar graph for AR5?
If the authors can answer the questions fairly promptly, I can get the graphs done up by this weekend. On the other hand, if you are happy with what is currently available you are more than welcome to use it as is (or to not use it, if that is your preference).
-
StealthAircraftSoftwareModeler at 12:02 PM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
I have updated my chart to show min/max model boundary conditions. Basically, this is the same chart as Tom Curtis has drawn @88, except I continue to show the raw monthly HADCRUT4 temperature data and a 5-year center moving average. IMO, there is more meaningful information in all the data as opposed to showing simple trend lines for GISS and HADCRUT4. Depending on how the trends are selected the data can be skewed. Tom’s chart @88 shows HADCRUT4 and GISS trends as being between the CMIP3 mean and min values, whereas my chart shows the actual temperature as pushing the limits of the lower boundary conditions. Please examine the data closely: look at the temperature data, the point of origin of the CMIP3 min/max points, and the slopes of the lines. They all match Tom’s data. But I have more information and it leads to a slightly different conclusion, I think.
Dana goes to great length to attempt to explain that the left draft chart was a mistake due to baselining issues. My chart of the raw temperature data and Tom's min/max model looks very similar to the draft IPCC chart. I doubt the people developing the draft chart made a major baseline mistake as claimed by Dana. If you look hard at the final IPCC chart, it does look similar to the draft chart, except that the scale is zoomed way out making the interesting area very small, and then they splattered spaghetti lines all over it. I can see why the skeptic crowd went nuts over the final IPCC chart.
A larger version of this chart is here.
-
Tom Curtis at 11:51 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
leto @113, I originally downloaded the data to check on AR4 with respect to the second graph in the original post, and on issues relating to the comparison between Fig 1.4 in the second order and final drafts of AR5. As I am manipulating the data on a spreadsheet, I decided to follow the 2nd order draft and limit the data to 2015, that being all that is necessary for the comparison. Consequently, model trends are to 2015 unless otherwise stated. Observed trends are to current using the SkS trend calculator unless otherwise stated.
Jason B's point about uncertainty ranges of observations is quite correct, but unfortunately I have yet to find a convenient means to show uncertainty ranges on Open Office Calc graphs without excessive clutter.
-
JasonB at 11:35 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
I should point out that the second figure in the OP, from Tamino, captures this point perfectly. It includes the uncertainty ranges of both the various model forecasts and the actual records. Anybody arguing that the models have done a bad job is essentially saying that the overlaps between those two groups are so low that we can dismiss the models as unskillful (and therefore ignore what they project future consequences to be and continue BAU).
-
chriskoz at 11:32 AM on 8 October 2013
2013 SkS Weekly Digest #40
Interesting experiment about peer review:
Hundreds of open access journals accept fake science paper
Just to quote the most interesting aspects:
The paper, which described a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule, had "fatal flaws" and used fabricated authors and universities with African affiliated names, Bohannon revealed in Science magazine.
He wrote: "Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless."
So, can you so easily get away with such bogus stuff in bio-technology? Is it the only area where the peer-review process is so broken? I'm sure climate science is not like that because it attracts a hell of a lot of attention...
-
JasonB at 11:27 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Leto,
The table is referring to trends in the CMIP3 models, not the actual temperature record, therefore future dates are not a problem.
Going back to Tom's trend graph @ 79, and the attempts to argue that the actual temperature record is in some way inconsistent with the forecasts, I'd just like to point out that while the range of model trends is plotted, the actual temperature trends plotted do not include their ranges!
A quick check on the SkS trend calculator shows HadCRUT4 to be 0.140 ±0.077 °C/decade (2σ) and GISS to be 0.152 ±0.080 °C/decade (2σ). What that means is that there is a ~95% chance that the actual HadCRUT4 trend for 1990 to the present is somewhere between 0.063 and 0.217 °C/decade. There's a lot of overlap between the range of model projections and the range of possible actual trends.
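For anyone who wants to reproduce that sort of figure, a minimal sketch (ordinary least squares on a hypothetical annual series; note the actual SkS trend calculator also corrects for autocorrelation, which makes its intervals wider than plain OLS):

```python
import numpy as np

# Hypothetical annual anomaly series standing in for HadCRUT4, 1990-2013.
rng = np.random.default_rng(4)
years = np.arange(1990, 2014)
temps = 0.014 * (years - 1990) + rng.normal(0, 0.09, years.size)

# OLS slope and its standard error from the parameter covariance matrix.
coeffs, cov = np.polyfit(years, temps, 1, cov=True)
slope = coeffs[0]
se = np.sqrt(cov[0, 0])
print(f"trend = {slope*10:.3f} ± {2*se*10:.3f} °C/decade (2σ, plain OLS, no autocorrelation correction)")
```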
For someone to argue that the models had failed to predict the actual temperature trend, these two ranges would need to have very little overlap indeed.
-
Bob Loblaw at 11:02 AM on 8 October 2013
CO2 is just a trace gas
Tom:
I have managed to download the full paper you referred to, and I gave it a quick read this evening.
Although I agree with your summary of the contents of the paper, and I agree that it is a very useful way of quantifying the relative importance of various atmospheric constituents, I still contend that "the Greenhouse Effect" writ large must include consideration of the atmospheric transparency wrt solar radiation.
Two interesting aspects of the paper:
1) the dual approach of adding constituents one at a time to the model, versus subtracting them (with others present). Various constituents have overlapping absorption bands, which are accounted for in the radiation code. Adding constituents one at a time and watching the changes tells the maximum effect (as any "overlap" won't be an overlap). Removing them one at a time leaves the overlap active in the remaining constituents, and shows a minimum effect. This puts bounds on the range of values (see the toy sketch after these two points).
2) the use of a 3-d climate model gives a more realistic account for the spatial effects, compared to other estimates that used 1-d models. The exact effect of any constituent depends on local effects of temperature, cloud cover, etc. As a 1-d model can only deal with a single "average" condition, it is more limiting than the 3-d model approach.
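A toy numerical sketch of the add-one / remove-one bounding described in point 1 (the "bands" are made-up bin sets, not real spectroscopy, and the counting stands in for the radiation code):

```python
# Toy example of the add-one / remove-one bounding logic.
# Each constituent "absorbs" a made-up set of spectral bins; overlaps are shared.
bands = {
    "H2O":    set(range(0, 60)),     # hypothetical bin indices, not real spectra
    "CO2":    set(range(40, 80)),    # overlaps H2O in bins 40-59
    "O3":     set(range(75, 90)),
    "clouds": set(range(20, 70)),
}
total = set().union(*bands.values())

for gas, b in bands.items():
    others = set().union(*(v for k, v in bands.items() if k != gas))
    added_alone = len(b)                      # effect when added to an empty atmosphere (maximum)
    removed_last = len(total) - len(others)   # effect when removed with the rest present (minimum)
    print(f"{gas:6s}: {removed_last:3d} (min, remove-one) .. {added_alone:3d} (max, add-alone) bins")
# The true contribution of each constituent lies between the two bounds because
# overlapping bands get attributed to whichever absorbers remain in the model.
```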
-
Leto at 10:52 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Hi Tom,
That is very clear, thank you. Could you please comment on your table, though, which refers to trends that extend to 2015:
            17.00%   83.00%
1975-2015:  0.149    0.256
1990-2015:  0.157    0.339
1992-2006:  0.128    0.470
1990-2005:  0.136    0.421
Is the "2015" a typo? If so, could the mods please edit the post to fix it (no point getting distracted over typos). If not, how were trends derived for years in the future?
-
Tom Curtis at 08:09 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
franklefkin in various comments has presented two quotes from the IPCC. The first is from the technical summary of WG1, while the second was from the Synthesis report.
The first reads (properly formatted):
"A major advance of this assessment of climate change projections compared with the TAR is the large number of simulations available from a broader range of models. Taken together with additional information from observations, these provide a quantitative basis for estimating likelihoods for many aspects of future climate change. Model simulations cover a range of possible futures including idealised emission or concentration assumptions. These include SRES[14] illustrative marker scenarios for the 2000 to 2100 period and model experiments with greenhouse gases and aerosol concentrations held constant after year 2000 or 2100.
For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}
- Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections. {1.2, 3.2}
- Model experiments show that even if all radiative forcing agents were held constant at year 2000 levels, a further warming trend would occur in the next two decades at a rate of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios. Best-estimate projections from models indicate that decadal average warming over each inhabited continent by 2030 is insensitive to the choice among SRES scenarios and is very likely to be at least twice as large as the corresponding model-estimated natural variability during the 20th century. {9.4, 10.3, 10.5, 11.2–11.7, Figure TS.29}"
(Original formatting and emphasis.)
Since originally quoting this text, franklefkin has quoted the third paragraph separately, describing it as "a further quote". He has then gone on to quote the second paragraph separately, saying "This is taken directly from IPCC AR4", and going on to mention the contents of the third paragraph.
Curiously, when franklefkin quoted the third paragraph separately, he describes it as referring to prior assessment reports, saying:
"This is saying that past reports had projected increases of between 0.15 and 0.30 C/ decade, and that observations had increases of around 0.2, which bolsterred their confidence their ability to make these projections."
(My emphasis)
In contrast, on the other two times he quotes or mentions this paragraph, he takes it as referring to the IPCC AR4 projections. Thus he has contradictory interpretations of the same passage.
For what it is worth, I agree with the interpretation that this refers to past assessment reports (first given by Dikran Marsupial in this discussion). That interpretation makes the most sense of the chosen trend period, which starts with the first projected year in all prior reports, and ends in the last full year of data when AR4 was being reported. In contrast, AR4 strictly does not project from 1990 but from 2000 (up to which time they have historical data for forcings).
It is possible, however, to interpret this as further qualifying the AR4 projection. That is an unlikely interpretation given the clear separation into a distinct paragraph within the IPCC report, but it is possible. On that interpretation, however, it probably follows IPCC custom in referring to the "likely" range of temperatures, ie, the 17th to 83rd percentiles. Here then are the likely ranges for the trends I have reported from CMIP3:
            17.00%   83.00%
1975-2015:  0.149    0.256
1990-2015:  0.157    0.339
1992-2006:  0.128    0.470
1990-2005:  0.136    0.421
Note that the likely range runs from 0.136 to 0.421 C/decade for the trend over the same period of time referred to in the quote.
The IPCC used only one run per model in its report, whereas I downloaded the full ensemble. It is possible, therefore, that the upper bound in the restricted ensemble used in AR4 is closer to 0.3. The lower bound is sufficiently close to 0.15 as to create no issue. On this basis, reference to a "likely" range of 0.15-0.3 C for the 1990-2005 trend is consistent. It is also irrelevant. It would be extraordinary if the missing ensemble members shrank the 0 to 100th percentile range (Min to Max) that I showed as much as franklefkin desires, and I have already quoted the more restricted 25th percentile of model trends, showing that continued harping on the 17th percentile is odd.
With respect to his second quote, as already pointed out, AR4 projections to the end of the century are not linear, and hence not simply interpretable as projections over the early decades of the twenty first century:
Finally, the only clear projection for the first decades of the twenty first century by AR4 is 0.2 C per decade. In my graph I show the ensemble mean trend as 0.237 C/decade. So I have reported accurately and, if anything, my graph exaggerates the "failings" of observations rather than the reverse.
I see little further point in responding to franklefkin on this point as he is getting repetitive (to say the least). Further, he is neither consistent, nor willing to concede the most straightforward points (such as that trends cited for temperature rise over a century are not the same as projections for the first few decades of that century).
-
franklefkin at 05:40 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Dikran,
Another thought. Perhaps the route you are suggesting is correct. If that is the case, why would the IPCC use the wrong ranges in AR4?
-
franklefkin at 05:37 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
Dikran Marsupial,
This is taken directly from AR4;
For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}
In the subsequent paragraph (both this one and the next I have already posted here) it goes on to state that this 0.20 C/decade is bounded by 0.10 and 0.30 C. So it is not I who is taking anything out of context. AR4 made the projection. In his post at 79, Tom Curtis compares actual temps with a minimum trend of 0.10 C/decade in an attempt to show how accurate the models' projections have been. To be accurate, I am saying that a minimum value of 0.15 C/decade should be used. When it is, the models' accuracy at projections does not look as good!
The words are not mine, they came from the IPCC in AR4.
So where did the value of 0.10 C/decade come from?
-
Dikran Marsupial at 05:21 AM on 8 October 2013
Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy
franklefkin Says "I am not looking for a specific CMIP3 trend". O.K., but in that case, you are restricted to using the figures in the report exactly according to their stated meaning, in this case average rates of warming over the course of a century. In that case, you can't use them for comparison with the observations until we have observations for the whole century and you can do an "apples versus apples" comparison.
If you want to do a comparison with the observations over some arbitrary interval (as discussed in the main article), an explicit answer is not given in the reports and you need to go back to the CMIP3 model runs to get an "apples versus apples" comparison.
Your question has been answered, the problem is that you don't appear to know exactly what the question is, and you appear unable to accept that you have not appreciated what the figures in the report that you have quoted actually mean.