Agnotology, Climastrology, and Replicability Examined in a New Study
Posted on 28 June 2013 by dana1981
A new paper is currently undergoing open public review in Earth System Dynamics (ESD) titled Agnotology: learning from mistakes by Benestad and Hygen of The Norwegian Meteorological Institute, van Dorland of The Royal Netherlands Meteorological Institute, and Cook and Nuccitelli of Skeptical Science. ESD has a review system in which anybody can review a paper and submit comments to be considered before its final publication. So far we have received many comments, including some from authors whose papers we critique in the study, such as Ross McKitrick, Craig Loehle, and Jan-Erik Solheim. We appreciate and welcome all constructive comments; the discussion period ends on July 4th.
Agnotology is the study of how and why we do not know things, and often deals with the publication of inaccurate or misleading scientific data. From this perspective, we attempted to replicate and analyze the methods and results of a number of climate science publications.
We focused on two papers claiming that factors other than human greenhouse gas emissions are responsible for the global warming observed over the past century – specifically, the orbital cycles of various planetary bodies in the solar system. There is no physical reason to believe that the orbits of other planets should have any significant effect on the Earth's climate, and these papers generally do not propose a physical mechanism behind the supposed influence. For that reason the hypothesis is often referred to as "climastrology": it is essentially an application of astrology to climate change.
In our study, we attempted to replicate the methods and results in these and a number of other papers to evaluate their validity. In the process, we found many different types of errors that can appear in any paper, but that appear to be common to those which purport to overturn mainstream climate science.
One common mistake we highlighted in our paper is known as 'curve fitting'. This is an issue we have previously discussed at Skeptical Science, for example in papers published by Loehle and Scafetta. The idea behind 'curve fitting' is that with enough fully adjustable model parameters, unconstrained by any physical considerations, it's easy to make a model fit almost any data set. As the famous mathematician John von Neumann said,
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
However, a model without any sort of physical limitations is not a useful model. If you create a model that attributes climate changes on Earth to the orbital cycles of Jupiter and Saturn as Scafetta did, but you don't have any physical connection between the two or any way to know if your parameters are at all physically realistic, then you really haven't shown anything useful. All you've got is a curve fitting exercise.
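Von Neumann's elephant is easy to demonstrate. The minimal sketch below is our own illustration (not from the paper): an ordinary polynomial fit with enough free coefficients reproduces pure random noise almost perfectly in-sample, even though there is no signal to find.

```python
import numpy as np

rng = np.random.default_rng(42)

# Twenty data points of pure noise: there is no signal here at all.
x = np.linspace(0, 10, 20)
y = rng.normal(size=20)

# Fit polynomials of increasing degree; every coefficient is a free,
# physically unconstrained parameter.
rms = {}
for degree in (1, 5, 12):
    coeffs = np.polyfit(x, y, degree)
    rms[degree] = float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
    print(f"degree {degree:2d}: in-sample RMS error = {rms[degree]:.3f}")

# With enough free parameters the "model" hugs the noise almost
# perfectly -- which tells us nothing about any underlying mechanism.
```

Swapping the polynomial for a sum of planetary-cycle sinusoids changes nothing: without physical constraints on the parameters, a good fit is virtually guaranteed and therefore demonstrates nothing.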
Omitting Inconvenient Data
Similarly, Humlum et al. (2011) attempted to attribute climate changes on Earth to lunar orbital cycles. In addition to the same curve fitting issues, we also found that they neglected data that did not fit their model. Humlum et al. claimed that their model could be used to predict future climate changes, but in fact we found that it could not even accurately reproduce past climate changes.
They fit their model to temperature data from Greenland ice cores (GISP2) over the past 4,000 years (which is a problem in itself, since Greenland is not an accurate representation of global temperatures). However, the data extend much further back in time. We extended the Humlum model and found that it could not reproduce Greenland temperature changes for the prior 6,000 years. Even when we fit the observed data as best we could by removing the trend in the 4,000-year model, the model still produced a poor fit (Figure 1).
Figure 1: A replication of Humlum et al. (2011a)'s model for the GISP2 record (solid red) and extensions back to the end of the last glacial period (red dashed). The two red dashed lines represent two attempts to extend the curve fit: one keeping the trend over the calibration interval and one setting the trend to zero. The black curve shows the part of the data shown in Humlum et al. (2011a), and the grey part shows the section of the data they discarded.
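This failure mode is easy to reproduce with synthetic data. The sketch below is our own illustration with made-up numbers (an arbitrary trend and two arbitrary 'cycle' periods), not a replication of Humlum et al.'s actual model: a purely cyclic model is calibrated on the most recent part of a record that actually contains a trend, then evaluated against the withheld earlier part.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "temperature" record: a steady trend plus noise.
# (Made-up numbers for illustration only -- this is not the GISP2
# record and not Humlum et al.'s model.)
t = np.linspace(0, 100, 500)
truth = 0.03 * t + rng.normal(scale=0.3, size=t.size)

# Calibrate a purely cyclic model on the most recent 40% of the
# record, analogous to fitting only the last 4,000 years of data.
# The periods (11 and 23) are arbitrary; amplitude and phase are
# free parameters via paired sin/cos columns.
calib = t >= 60
cols = [np.ones_like(t)]
for period in (11.0, 23.0):
    cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
design = np.column_stack(cols)

# Ordinary least squares on the calibration window only.
coef, *_ = np.linalg.lstsq(design[calib], truth[calib], rcond=None)
model = design @ coef

rms_calib = np.sqrt(np.mean((model[calib] - truth[calib]) ** 2))
rms_hindcast = np.sqrt(np.mean((model[~calib] - truth[~calib]) ** 2))
print(f"RMS in calibration window:    {rms_calib:.2f}")
print(f"RMS on withheld earlier data: {rms_hindcast:.2f}")
```

The cyclic model fits the calibration window tolerably well, but its error on the withheld earlier data is substantially larger, just as the extended Humlum model fails on the earlier 6,000 years of the GISP2 record.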
Many Other Examples
We examined many other case studies of methodological errors in the appendix of our paper.
- Case 3: Loehle and Scafetta (2011) - unclear physics and inappropriate curve-fitting
- Case 4: Solheim et al. (2011) - ignoring negative tests
- Case 5: Scafetta and West (various) - wrongly presumed dependencies and no model evaluation
- Case 6: Douglass et al. (2007) - misinterpretation of statistics
- Case 7: McKitrick and Michaels (2004) - failure to account for the actual degrees of freedom
- Case 8: Veizer (2005) - missing similarities
- Case 9: Humlum et al. (2013) - looking at wrong scales
- Case 10: Cohn and Lins (2005) - circular reasoning
- Case 11: Scafetta (2010) - lack of plausible physics
- Case 12: McIntyre and McKitrick (2005) - incorrect interpretation of mathematics
- Case 13: Beck (2008) - contamination by other factors
- Case 14: Miskolczi (2010) - incomplete account of the physics
- Case 15: Svensmark, Friis-Christensen, and Lassen (various) - differences in pre-processing of data
- Case 16: Svensmark (2007), Shaviv (2002), and Courtillot et al. (2007) - selective use of data
- Case 17: Yndestad (2006) - misinterpretation of spectral methods
The results through Case 10 will be available in an R package in which we replicate the results of the studies in question.
The main point of our paper is to show that studies should be replicable and replicated. The peer-review process is a necessary but insufficient step in ensuring that published studies are scientifically valid. Sometimes flawed papers are published. We also note the usefulness of including open source codes and data, as we have done, to allow for replication of a study's results.
Frequently, a paper that purports to overturn our previous scientific understanding is immediately amplified in the media, and if its results are incorrect it can serve to misinform the public. There was recently a good example of this in the field of economics, where an attempt to replicate a paper's results revealed a mistake that entirely undermined its conclusions. Before that replication attempt, however, the paper's conclusions were widely used to justify a number of economic policies that now appear to have been ill-conceived.
This example highlights the importance of replication and replicability before putting too much weight on the results of any single study. It can be tempting to immediately accept the results of a study that confirms our preconceived notions – that's true of everybody, not just climate 'skeptics' – but it's important to resist that urge and first make sure the study's results are replicable and accurate.