
Dikran Marsupial

Dikran Marsupial (A.K.A. Dr Gavin Cawley) is a senior lecturer in the School of Computing Sciences at the University of East Anglia. His research interests focus on machine learning (essentially a branch of statistics), and in particular on dealing with various forms of uncertainty. He is interested in science generally, and in favour of rational decision making. These interests intersect in climate change, as a rational choice of the best course of action requires our best effort at understanding the science of climate, including an appreciation of the uncertainties. SkS makes a positive contribution to this by refuting climate myths and addressing common misconceptions regarding the science of climate change that stifle productive debate of the key issues. In his spare time, he enjoys luthiery, lute playing, cricket and moustache cultivation.


Recent blog posts

What Does Statistically Significant Actually Mean?

Posted on 10 May 2017 by Dikran Marsupial

Used correctly, Null Hypothesis Statistical Testing (NHST) provides a valuable sanity check in science, requiring scientists to question the support their theories receive from the data, such that they only proceed with their research hypothesis if it can overcome this (often minimal) hurdle. This enforces an element of healthy self-skepticism that helps science to be self-correcting in the long term. Unfortunately, however, statistical hypothesis testing is widely misunderstood amongst the general public, working scientists and even professors of statistics [1]. It isn't unduly surprising, then, that misunderstandings of statistical significance have cropped up rather frequently in the climate debate. The aim of this post is to give an idea of what statistical significance actually means, and more importantly, what it doesn't mean and why this should matter.

The Basic Null Hypothesis Statistical Testing Recipe

Flipping a coin is the traditional way of deciding between two arbitrary options, for instance which side should have the option of batting or fielding first in a game of cricket.  A classic example of statistical hypothesis testing is deciding whether a coin is fair (the probability of a head is the same as that of a tail) or whether it is biased.  Say we observe the coin being flipped four times and it comes down heads each time.  The usual recipe for statistical hypothesis testing is to first state your null hypothesis (known as H0), which is generally taken to be the thing you don't want to be true.  Say we think the captain of the opposition is a cheat and he is using a biased coin to gain an unfair advantage.  In that case, our null hypothesis ought to be that we are wrong and that his coin is fair (i.e. q = p(head) = p(tail) = 0.5).

H0: The coin is fair, q = 0.5

We then state our experimental hypothesis, for which we want to provide support

H1: The coin is biased, q ≠ 0.5

We then need a test statistic, z, that we use to measure the outcome of the experiment.  In this case, we record the number of heads in n = 4 trials, so z = 4.
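For this small experiment the p-value can be worked out directly from the binomial distribution. A minimal Python sketch (the function name is mine, not part of the post):

```python
from math import comb

def two_sided_p_value(heads, n, q=0.5):
    """Probability, under H0 (a fair coin), of observing an outcome at
    least as extreme (i.e. no more probable) than the one observed."""
    pmf = [comb(n, k) * q**k * (1 - q)**(n - k) for k in range(n + 1)]
    observed = pmf[heads]
    return sum(p for p in pmf if p <= observed + 1e-12)

# Four heads in four flips: z = 4, n = 4.
p = two_sided_p_value(4, 4)
print(p)  # 0.125
```

So even four heads in a row does not clear the conventional 5% significance threshold, and we would fail to reject the null hypothesis that the coin is fair.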



2014 Arctic Sea Ice Extent Prediction

Posted on 19 August 2014 by Dikran Marsupial

As September is rapidly approaching, I thought I would update my statistical prediction for this year's September mean Arctic sea ice extent. I submitted the prediction in July to the Sea Ice Prediction Network, and it seems to be rather lower than the majority of the other predictions (my prediction is listed as "Cawley"):

Figure 1. September mean Arctic sea ice extent predictions submitted to the Sea Ice Prediction Network in June 2014.

Last Year's Prediction

Before discussing this year's prediction, let's see how we fared last year. The prediction made last year is shown in Figure 2, and predicted a 2013 September Arctic sea ice extent of 4.1 ± 1.1 million square kilometres. The minimum Arctic sea ice extent of 5.10 million square kilometres was reached on September 13, 2013. Obviously this figure is substantially greater than the prediction, but it still lies within the error bars of the projection, and so fits within the range of inter-annual variability considered plausible by the model. The September mean extent was 5.35 million square kilometres, which lies slightly above the credible interval. Note also that the model actually predicts the mean Arctic sea ice extent for the month of September, and so can be expected to somewhat over-estimate the September minimum.



Dodgy Diagrams #1 - Misrepresenting IPCC Residence Time Estimates

Posted on 19 February 2014 by Dikran Marsupial & John Cook

There are a number of diagrams that frequently crop up in discussions of climate change in the blogosphere that are easily demonstrated to be, at best, misleading, if not fundamentally wrong. A classic example is shown below, which suggests that the IPCC's estimate of residence time is at odds with those from a wide range of scientific studies.

dodgy diagram

In this case, the diagram was taken from an article at Watts Up With That, entitled "Apparently, 4 degrees spells climate doom"; Google's "search by image" shows it has also appeared on a range of other blogs.

So What is Dodgy About The Diagram?

The IPCC actually gives a residence time of about 4 years in the 2007 AR4 WG1 report (see page 948), which is completely in accordance with the other papers referenced in the diagram. The confusion arises because there are two definitions of "lifetime" that describe different aspects of the carbon cycle. These definitions are clearly stated on page 8 of the first (1990) IPCC WG1 report:



2013 Arctic Sea Ice Extent Prediction

Posted on 20 February 2013 by dana1981 & Dikran Marsupial

We previously examined various predictions of the annual Arctic sea ice extent September minimum from 2008 through 2012, using a variety of methodologies (statistical, modeling, and/or "other"). Overall the statistical- and model-based predictions have been equally accurate, with an average difference from the observational data of 13%. Skeptical Science's Gavin Cawley (Dikran Marsupial) has been better than average with his purely statistical predictions, averaging a 10.6% difference from the annual minimum. Cawley describes his methodology as follows:

"I obtained data for Arctic sea ice extent from 1979-2009 [2012 for the most recent prediction]...I then fitted a Gaussian process model, using the excellent MATLAB Gaussian Processes for Machine Learning toolbox (the book is jolly good as well). I experimented with some basic covariance functions, and chose the squared exponential, as that gave the lowest negative log marginal likelihood (NLML). The hyper-parameters were tuned by minimising the NLML in the usual way."
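The quoted recipe uses the MATLAB GPML toolbox; the same idea can be sketched in a few lines of Python with NumPy. This is purely an illustration on synthetic data with fixed hyper-parameters, not the actual analysis (which tunes them by minimising the NLML):

```python
import numpy as np

def sqexp(a, b, ell=8.0, sf=1.0):
    """Squared-exponential covariance: sf^2 * exp(-(a-b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_predict(x, y, xstar, noise=0.3, ell=8.0, sf=1.0):
    """GP regression with fixed hyper-parameters (no NLML tuning here)."""
    K = sqexp(x, x, ell, sf) + noise**2 * np.eye(len(x))
    ks = sqexp(x, xstar, ell, sf)
    kss = sqexp(xstar, xstar, ell, sf)
    alpha = np.linalg.solve(K, y - y.mean())
    mean = y.mean() + ks.T @ alpha
    var = np.diag(kss - ks.T @ np.linalg.solve(K, ks)) + noise**2
    return mean, np.sqrt(var)

# Synthetic stand-in for the 1979-2012 September extents (million km^2),
# not the real NSIDC data: a gentle decline plus noise.
rng = np.random.default_rng(0)
x = np.arange(1979, 2013, dtype=float)
y = 7.5 - 0.07 * (x - 1979) + rng.normal(0, 0.4, x.size)

mean, sd = gp_predict(x, y, np.array([2013.0]))
print(f"2013 prediction: {mean[0]:.2f} ± {1.96 * sd[0]:.2f} million km^2")
```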

2013 Predictions

Cawley's results are shown in Figure 1, and predict a 2013 September Arctic sea ice extent of 4.1 ± 1.1 million square kilometers.

Dikran sea ice prediction

Figure 1: Gavin Cawley's Gaussian Process statistical Arctic sea ice extent prediction



Roy's Risky Regression

Posted on 7 July 2012 by Dikran Marsupial

In my previous post Murry Salby's Correlation Conundrum I demonstrated why a correlation with a rate of increase says very little about the cause of the increase itself, because the long term increase is largely due to the mean value of the rate of increase, and correlations are insensitive to the mean. In this post, I will attempt to explain why regression analysis is similarly prone to misinterpretation (which is not greatly surprising as regression is a correlation based method), using an example loosely based on a blog post by Dr Roy Spencer, again questioning whether the observed rise in atmospheric CO2 is of anthropogenic origin.
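The key point, that correlations are insensitive to the mean of the rate, is easy to demonstrate numerically. A toy Python sketch (synthetic data, nothing to do with the real CO2 or SST records):

```python
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.normal(0, 1, 100)   # synthetic "SST anomalies"
wiggle = 0.5 * temperature            # year-to-year variability tied to SST

rate_a = wiggle + 0.0   # zero-mean rate of increase: no long-term rise
rate_b = wiggle + 2.0   # same wiggles plus a constant (e.g. emissions)

corr_a = np.corrcoef(temperature, rate_a)[0, 1]
corr_b = np.corrcoef(temperature, rate_b)[0, 1]
print(corr_a, corr_b)                 # identical correlations
print(rate_a.sum(), rate_b.sum())     # very different total increases
```

Both rate series are perfectly correlated with temperature, yet only the one with a non-zero mean produces a long-term rise when accumulated; the correlation simply cannot see the difference.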

Risky Regressions

The argument that the rise in atmospheric CO2 is due to increasing sea surface temperatures (SSTs), rather than anthropogenic emissions, has previously been suggested by Dr Roy Spencer. Dr Spencer demonstrates that the annual increase in atmospheric CO2 is correlated with sea surface temperatures, with a lag of about six months, which is evident in the observations (Fig. 1).

Figure 1: Normalized net global emissions (inferred from Mauna Loa observations) and sea surface temperatures (HadSST2).  Click on the image for details.



Murry Salby's Correlation Conundrum

Posted on 5 July 2012 by Dikran Marsupial

Prof. Murry Salby of the Department of Environment and Geography at Macquarie University in Sydney gave a talk last year (August 3, 2011) to the Sydney Institute (described at Wikipedia), in which he claimed that the rise in atmospheric CO2 is not driven by anthropogenic emissions. The abstract of the talk is as follows:

Atmospheric Science, Climate Change and Carbon – Some Facts

Carbon dioxide is emitted by human activities as well as a host of natural processes. The satellite record, in concert with instrumental observations, is now long enough to have collected a population of climate perturbations, wherein the Earth-atmosphere system was disturbed from equilibrium. Introduced naturally, those perturbations reveal that net global emission of CO2 (combined from all sources, human and natural) is controlled by properties of the general circulation – properties internal to the climate system that regulate emission from natural sources. The strong dependence on internal properties indicates that emission of CO2 from natural sources, which accounts for 96 per cent of its overall emission, plays a major role in observed changes of CO2. Independent of human emission, this contribution to atmospheric carbon dioxide is only marginally predictable and not controllable.

Naturally the talk stirred considerable interest in the blogosphere (e.g. at Climate Etc.). More recently, a video of this talk was made available, so we can now investigate Prof. Salby's argument in more detail.

Why we can be Confident that Prof. Salby's Conclusions are Incorrect

Ironically, the first 11 minutes of the talk provide all the components required to show beyond reasonable doubt that anthropogenic emissions are responsible for 100% of the observed increase in atmospheric CO2 and that natural sources do not play a major role.



The Independence of Global Warming on Residence Time of CO2

Posted on 1 March 2012 by Dikran Marsupial

A hearty congratulations from the SkS team to our own Dikran Marsupial for getting a response to Essenhigh (2009) published.

A climate myth that crops up far more often than it should is that the rise in atmospheric CO2 since the industrial revolution is not caused by anthropogenic carbon emissions, but is instead a natural phenomenon.  This has been addressed repeatedly on SkS, for example see CO2 increase is natural, not human-caused.  An example of this argument is found in the paper "Potential Dependence of Global Warming on the Residence Time (RT) in the Atmosphere of Anthropogenically Sourced Carbon Dioxide" by Prof. Robert Essenhigh that appeared in the journal Energy and Fuels in 2009.  The argument is easily refuted by the observation that the rate at which atmospheric CO2 levels are rising is less than the rate at which we are releasing CO2 into the atmosphere from fossil fuel use, which implies that the natural environment must be a net carbon sink, taking in more carbon each year than it emits.

More formally, let Ea represent annual carbon emissions from anthropogenic sources (fossil fuel use and land use change), En represent the carbon emissions from all natural sources (the oceans, soil respiration, volcanoes etc.) and Un represent the uptake of carbon by all natural carbon sinks (oceans, photosynthesis, etc.). Ua would be the uptake of carbon due to anthropogenic activities, but this is essentially zero, so we can safely exclude it from the analysis. Then, assuming that the carbon cycle obeys the principle of conservation of mass (any carbon emitted into the atmosphere that is not taken up by natural sinks remains in the atmosphere), the annual change in atmospheric CO2 is given by:

C' = Ea + En - Un

This can be rearranged to give an estimate of the difference between annual emissions from all natural sources and annual natural uptake by all natural sinks.

En - Un = C' - Ea

We have accurate, reliable data for the growth of atmospheric CO2 and for anthropogenic emissions (for details, see Cawley, 2011). Both of these are displayed below, along with an estimate of the net natural carbon flux En - Un. The fact that the net natural flux is negative clearly shows that natural uptake has exceeded natural emissions every year for at least the last fifty years, and hence has been opposing, rather than causing, the observed rise in atmospheric CO2.
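Plugging in round numbers of roughly the right magnitude (illustrative values only, not the observational series used in Cawley, 2011) makes the sign of the net natural flux obvious:

```python
# Illustrative round numbers (GtC per year), not the real data.
Ea = 9.0       # anthropogenic emissions (fossil fuels + land use change)
C_prime = 4.5  # observed annual growth of atmospheric carbon

# Rearranged mass balance: En - Un = C' - Ea
net_natural_flux = C_prime - Ea
print(net_natural_flux)  # negative: the natural environment is a net sink
```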

illustration of CO2 mass balance



Scafetta's Widget Problems

Posted on 24 February 2012 by dana1981 & Dikran Marsupial

We have previously examined the work of Nicola Scafetta, a climate "skeptic" and solar-climate researcher at Duke University.  Scafetta's pet hypothesis is that astronomical cycles are somehow responsible for most of the observed global warming over the past century; a concept we have termed "climastrology," because Scafetta has proposed no plausible physical mechanism through which the orbital cycles of various planets should exert so much influence over the climate on Earth.

In recent papers, Scafetta has put forth predictions as to how the average global surface temperature will change in the future.  He has also now created a widget to compare his prediction to the IPCC projections and the monthly observed global surface temperatures.  However, as we will discuss here, there are problems with both the widget itself, and the research on which it is based.

Extreme Curve Fitting

The widget is based on Scafetta (2011), which is very similar to a paper we previously examined, Loehle and Scafetta 2011 (LS11). The latter created a very simple climate model using two cycles (of 60- and 20-year periods) plus a linear warming trend, and adjusted the parameters in their model to fit the observed temperature data. As we showed, this simple model does not accurately hindcast past temperature changes (Figure 1), and thus there is little reason to expect it to accurately predict future temperature changes. It was merely an exercise in curve fitting, matching up a model with the temperature data without any physical constraints.
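The flavour of fit used in LS11 (fixed 60- and 20-year cycles plus a linear trend) can be reproduced with ordinary least squares. A sketch on synthetic "temperature" data, purely to illustrate the curve-fitting procedure rather than the actual LS11 analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
year = np.arange(1900, 2011, dtype=float)
# Synthetic series: small linear trend, a 60-year wiggle, and noise.
temp = (0.007 * (year - 1900)
        + 0.1 * np.sin(2 * np.pi * year / 60.0)
        + rng.normal(0, 0.05, year.size))

def design(year):
    """Basis functions: 60- and 20-year sinusoids, linear trend, offset."""
    w60, w20 = 2 * np.pi / 60.0, 2 * np.pi / 20.0
    return np.column_stack([np.sin(w60 * year), np.cos(w60 * year),
                            np.sin(w20 * year), np.cos(w20 * year),
                            year - year.mean(), np.ones_like(year)])

coef, *_ = np.linalg.lstsq(design(year), temp, rcond=None)
fit = design(year) @ coef
print("residual std:", (temp - fit).std())
```

The fit matches the calibration period closely, but as the post argues, a model with no physical constraints gives no grounds for trusting its hindcasts or forecasts outside that period.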

L&S failure

Figure 1: The LS11 Case 2 model projected backwards in time (red), compared to the Moberg et al. (2005) millennial northern hemisphere temperature reconstruction (blue) and the Loehle (2008) millennial global temperature reconstruction (green).



© Copyright 2018 John Cook