

Kevin C

Kevin is an interdisciplinary computational scientist with 20 years' experience, based in the UK, although he has also spent two sabbaticals at the San Diego Supercomputer Center. His first degree is in theoretical physics, his doctoral thesis was primarily computational, and he now teaches chemistry undergraduates and biology post-graduates. Most of his research has been focussed on data processing and analysis. He is the author or co-author of a number of highly cited scientific software packages.

His climate investigations are conducted in the limited spare time available to a parent, and are currently focussed on two areas: coverage bias in the instrumental temperature record, and simple response-function climate models. He is also interested in the philosophy of science and science communication.


Recent blog posts

The HadSST4 Sea Surface Temperature dataset

Posted on 5 July 2019 by Kevin C

The oceans cover two thirds of the surface of the earth, and so sea surface temperatures form a vital part of our understanding of the impact of human activity on the temperature of the planet. Sea surface temperatures contribute to estimates of global surface temperature change which are widely used in the evaluation of climate models, the estimation of internal modes of climate variability, and the setting of political targets. The ways in which sea surface temperatures have been measured by ships, buoys and satellites have varied much more significantly over time than the equipment at weather stations, and these changes have to be corrected for when evaluating historical temperature change. Differences between sea surface temperature datasets highlight where some of these corrections are uncertain; however, users of temperature data frequently ignore these uncertainties and the effect they may have on their conclusions.

This issue was highlighted in a paper by Kent and colleagues in 2017, which made a call for action both for temperature record providers to make more progress on these issues, and for users to be aware of what the data can and cannot tell us. A number of authors have responded by investigating aspects of the sea surface temperature record (e.g. Hausfather et al 2017, Cowtan et al 2017, Carella et al 2018, Davis et al 2018). Last week the latest version of the UK Hadley Centre sea surface temperature dataset, HadSST4, was released.

The Hadley Centre has been at the forefront of the development of sea surface temperature data for many years, providing the only sea surface temperature record which attempts to reconcile the measurement types of individual observations. A Japanese dataset, COBE-SST2, also uses the Hadley analysis of data corrections, in combination with an alternative post-processing algorithm, to produce an infilled sea surface temperature reconstruction.

Other datasets apply more coarse grained corrections: NOAA's ERSST (Huang et al, 2017) corrects the gridded ship temperature field on the basis of smoothed differences between water and air temperature measurements, while my own coastal hybrid reconstruction (Cowtan et al, 2017) uses coastal weather stations to estimate a global correction for the combined impact of different types of sea surface temperature observations.

The problem is that these methods lead to different answers (Figure 1), suggesting that the required corrections for the different observations are not yet well understood. More seriously, these differences are often comparable in size to the modes of internal variability which some authors infer from these data! Agreement between the datasets is good between 1980 and 2005; however, in the 1950s and 1960s ERSST5 is cooler than HadSST3, and the coastal hybrid record is cooler still. There are large differences during World War 2. In the early 20th century the coastal hybrid record is warmer than the others. In the 19th century ERSST5 is the warm outlier. HadSST3 also shows a lower trend than the other datasets over the supposed "hiatus" period (Hausfather et al, 2017).

Figure 1: Comparison of HadSST3 (the current version of the UK Met Office dataset), ERSST5 (the current NOAA sea surface temperature dataset), and the coastal hybrid sea surface temperature dataset (from Cowtan et al, 2017). Temperature averages are calculated using common coverage.
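Comparisons like the one in the figure only make sense over common coverage: each dataset is masked to the grid cells where every dataset reports a value before area-averaging. Here is a minimal sketch of that idea (the function name and the NaN-for-missing convention are mine, not taken from any dataset's own code):

```python
import numpy as np

def common_coverage_means(fields, lats):
    """Average each gridded field over only the cells where *all*
    fields have data (NaN marks missing), weighting by cell area
    (roughly the cosine of latitude)."""
    mask = np.all([np.isfinite(f) for f in fields], axis=0)
    w = np.cos(np.radians(lats))[:, None] * mask
    return [np.average(np.where(mask, f, 0.0), weights=w) for f in fields]
```

Masking to common coverage means any remaining differences between the dataset averages reflect the data corrections, not differences in spatial sampling.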



Global warming ‘hiatus’ is the climate change myth that refuses to die

Posted on 26 December 2018 by Kevin C

Kevin Cowtan, Professor of Chemistry, University of York and Stephan Lewandowsky, Chair of Cognitive Psychology, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The record-breaking, El Niño-driven global temperatures of 2016 have given climate change deniers a new trope. Why, they ask, hasn’t it since got even hotter?

In response to a recent US government report on the impact of climate change, a spokesperson for the science-denying American Enterprise Institute think-tank claimed that “we just had […] the biggest drop in global temperatures that we have had since the 1980s, the biggest in the last 100 years.”

These claims are blatantly false: the past two years were two of the three hottest on record, and the drop in temperature from 2016 to 2018 was less than, say, the drop from 1998 (a previous record hot year) to 2000. But, more importantly, these claims use the same kind of misdirection as was used a few years ago about a supposed “pause” in warming lasting from roughly 1998 to 2013.

At the time, the alleged pause was cited by many people sceptical about the science of climate change as a reason not to act to reduce greenhouse pollution. US senator and former presidential candidate Ted Cruz frequently argued that this lack of warming undermined dire predictions by scientists about where we’re heading.

However, drawing conclusions on short-term trends is ill-advised because what matters to climate change is the decade-to-decade increase in temperatures rather than fluctuations in warming rate over a few years. Indeed, if short periods were suitable for drawing strong conclusions, climate scientists should perhaps now be talking about a “surge” in global warming since 2011, as shown in this figure:

Global temperature observations compared to climate models. Climate-disrupting volcanoes are shown at the bottom, and the purported hiatus period is shaded. 2018 values are based on the year to date (YTD). Data: NASA; Berkeley Earth; various climate models. Author provided.



What does ‘mean’ actually mean?

Posted on 14 August 2018 by Kevin C

This is a re-post from Climate Lab Book

We commonly represent temperature data on a grid covering the surface of the earth. To calculate the mean temperature we can calculate the mean of all the grid cell values, with each value weighted according to the cell area – roughly the cosine of the latitude.
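As a concrete illustration, the cosine-weighted mean described above takes only a few lines (a sketch; the function name is illustrative):

```python
import numpy as np

def area_weighted_mean(grid, lats):
    """Mean of a latitude-longitude grid of anomalies, with each cell
    weighted by the cosine of its latitude (proportional to cell area)."""
    # Broadcast the per-latitude weight across every longitude column.
    w = np.cos(np.radians(lats))[:, None] * np.ones(grid.shape[1])
    return np.average(grid, weights=w)
```

Without the cosine weighting, the small polar cells would count as much as the large equatorial ones and bias the global mean.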

Now suppose you are offered two sets of temperature data: (A) covering 83% of the planet, and (B) covering just 18%, with the coverage shown below. If you calculate the cosine-weighted mean of the grid cells with observations, which set of data do you think will give the better estimate of the global mean surface temperature anomaly?

If you can’t give a statistical argument why the answer is probably (B), then now might be a good time to think a bit more about the meaning of the word “mean”. Hopefully the video below will help.
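A toy simulation makes the statistical argument concrete. The anomaly field below is entirely made up (uniform warming with an amplified Arctic, loosely mimicking real patterns), but it shows why a sparse sample can beat a dense one: what matters is whether the coverage is representative, and in particular whether it samples the rapidly warming high latitudes:

```python
import numpy as np

# 5-degree grid of cell-centre latitudes and longitudes.
lats = np.arange(-87.5, 90.0, 5.0)
lons = np.arange(-177.5, 180.0, 5.0)
lat2d = np.repeat(lats[:, None], lons.size, axis=1)
w = np.cos(np.radians(lat2d))            # area weights

# Made-up anomaly field: 0.5 everywhere, ramping up to 2.0 at the pole.
field = 0.5 + 1.5 * np.clip((lat2d - 60.0) / 30.0, 0.0, 1.0)

truth = np.average(field, weights=w)     # true global mean

# (A) dense coverage that omits the high Arctic entirely.
mask_a = lat2d < 70.0
# (B) sparse coverage: every fifth cell, spread over all latitudes.
mask_b = np.zeros_like(mask_a)
mask_b.flat[::5] = True

mean_a = np.average(field[mask_a], weights=w[mask_a])
mean_b = np.average(field[mask_b], weights=w[mask_b])
```

In this toy setup the sparse-but-even sample (B) lands much closer to the true global mean than the dense sample (A), because (A) systematically misses the region that is warming fastest.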

Why is this important?



Evaluating biases in Sea Surface Temperature records using coastal weather stations

Posted on 8 January 2018 by Kevin C

Science is hard. Some easy problems you can solve by hard work, if you are in the right place at the right time and have the right skills. Hard problems take the combined effort of multiple groups looking at the problem, publishing results and finding fault with each other's work, until hopefully no-one can find any more problems. When problems are hard, you may have to publish something that even you don't think is right, but that might advance the discussion.

The calculation of an unbiased sea surface temperature record is a hard problem. Historical sea surface temperature observations come from a variety of sources, with early records being measured using wooden, canvas or rubber buckets (figure 1), later readings being taken from engine room intakes or hull sensors, and the most recent data coming from drifting buoys and from satellites.

Figure 1: Three types of buckets used in early sea surface temperature observations. From Folland (1995).

These different measurement methods give slightly different readings, with the transition from bucket to engine room observations during the second world war being particularly large: this represents the single largest correction to the historical temperature record, and reduces the estimated warming since the mid 19th century by 0.2-0.3°C compared to the uncorrected data (figure 2).

Figure 2: Difference between the historical temperature record using only raw observations, and using observations corrected for the effects of different measurement methods. The corrected data show less warming on a centennial timescale. From Zeke Hausfather.



NOAA was right: we have been underestimating warming

Posted on 5 January 2017 by Zeke Hausfather

Assessing Recent Warming Using Instrumentally Homogeneous Sea Surface Temperature Records

Zeke Hausfather, Kevin Cowtan, David C. Clarke, Peter Jacobs, Mark Richardson, and Robert Rohde


In a paper published in Science Advances, we used data from buoys, satellites, and Argo floats to construct separate instrumentally homogeneous sea surface temperature records of the past two decades. We compared them to the old NOAA ERSSTv3b record, the new ERSSTv4 record, the Hadley Centre’s HadSST3 record, and the Japanese COBE-SST record. We found a strong and significant cool bias in the old NOAA record, and a more modest (but still significant) cool bias in the Hadley and Japanese records compared to buoy, satellite, and Argo float data. The new NOAA record agrees quite well with these instrumentally homogeneous records. This suggests that the new NOAA record is likely the most accurate sea surface temperature record in recent years, and should help resolve some of the criticism that accompanied the original NOAA study.


In the summer of 2015 researchers at NOAA led by Tom Karl published a paper in the journal Science arguing that global warming since 2000 had been underestimated, and that claims of a 'hiatus' in warming were therefore wrong. The paper proved quite controversial, and the chairman of the U.S. House of Representatives Science and Technology Committee responded by launching an investigation into the NOAA scientists, demanding access to their emails and claiming that they had manipulated global temperature data.

The changes in the global temperature record presented by Karl et al resulted almost entirely from updates in ocean temperatures. Specifically, NOAA switched from using their old Extended Reconstruction Sea Surface Temperature Record (ERSST) version 3b to version 4. This new ERSST record had a number of changes, including adjustments for an offset in temperatures between ship engine room and buoy-based measurements, the use of nighttime marine air temperature measurements to detect problems in ship-based records, and an increased weight on buoy records in recent years. The old NOAA record, their new record, and the commonly used U.K. Hadley Centre HadSST3 record are shown in the figure below:



Surface Temperature or Satellite Brightness?

Posted on 11 January 2016 by Kevin C

There are several ways to take the temperature of the earth. We can use direct measurements by thermometers to measure air or sea surface temperatures. We can measure the temperature of the surface itself using infrared cameras, either from the ground or from space. Or we can use satellites to measure the microwave brightness of different layers of the atmosphere.

In a recent senate subcommittee hearing the claim was made that microwave brightness temperatures provide a more reliable measure of temperature change than thermometers. There are two issues with this claim:

  1. Microwaves do not measure the temperature of the surface, where we live. They measure the warm glow from different layers of the atmosphere.
  2. The claim that microwave temperature estimates are more accurate is backed by many arguments but no data.

Scientific arguments should be based in evidence, so the aim of this article is to investigate whether there is evidence for one record being more reliable than the other. If we want to determine which record is more useful for determining the change in temperature over time, we need to look at the uncertainties in the temperatures records.

Trends in surface and satellite data

Let's look at the period 1979-2012, covering the period from the beginning of the satellite record and ending 3 years ago. We will look at two datasets: the satellite data from Remote Sensing Systems (RSS), and a surface temperature dataset from the UK Met Office (HadCRUT4). The satellite data cover several layers of the atmosphere, so we'll use the data for the lowest layer, the 'TLT' or lower troposphere record, which measures temperatures over a region around 4 kilometers above the surface.

As a first step, we will calculate the trend for both the satellite and surface temperature data. The temperature changes and their trends are shown in Figure 1.


Figure 1: Temperature series for the period 1979-2012 for the RSS satellite record (left) and the HadCRUT4 surface temperature record (right). Grey crosses indicate monthly temperatures. Red lines are 12 month moving averages. The blue lines are the linear trends, and the light blue curves indicate the 2σ confidence intervals for the trends. The values of the trends and their standard errors are shown above the graph (method).
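For readers who want to reproduce trend estimates like those in the figure, here is a minimal ordinary-least-squares sketch. Note that the analysis linked from the caption corrects the standard error for autocorrelation in the monthly data; this naive version does not, so it will understate the real uncertainty:

```python
import numpy as np

def trend_with_stderr(years, temps):
    """Ordinary least squares trend (per decade) with its naive
    standard error. Real trend analyses also inflate the error
    to account for autocorrelation in monthly temperature data."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    n = x.size
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    slope = np.sum((x - xm) * (y - ym)) / sxx
    resid = y - (ym + slope * (x - xm))
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
    return 10.0 * slope, 10.0 * se   # degrees C per decade
```

Comparing such trends and their uncertainty intervals, rather than the bare trend values, is what lets us ask whether two records genuinely disagree.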



A Buoy-Only Sea Surface Temperature Record Supports NOAA’s Adjustments

Posted on 27 November 2015 by Kevin C

This is an update of an update of an article which originally appeared at Climate Etc. The authors are grateful for the helpful comments which have informed the updates.

By Zeke Hausfather and Kevin Cowtan

Significant recent media and political attention has been focused on the new NOAA temperature record, which shows considerably more warming than their prior record during the period from 1998 to present. The main factor behind these changes is the correction in ocean temperatures to account for the transition from ship engine room intake measurement to buoy-based measurements and a calibration of differences across ships using nighttime marine air temperatures (NMAT). Here we seek to evaluate the changes to the NOAA ocean temperature record by constructing a new buoy-only sea surface temperature record. We find that a record using only buoys (and requiring no adjustments) is effectively identical in trend to the new NOAA record and significantly higher than the old one.

The changes to the prior NOAA global land/ocean temperature series are shown in Figure 1. There are some large changes in the 1930s that are interesting but have little impact on century-scale trends. The new NOAA record also increases temperatures in recent years, resulting in a record where the period subsequent to 1998 has a trend identical to the period from 1950-1997 (and giving rise to the common claim that the paper was “busting” the recent slowdown in warming).

Figure 1: New and old homogenized global land/ocean records from Karl et al, 2015.

The paper that presented the revised record, Karl et al, didn’t actually do much that was new. Rather, they put together two previously published records: an update to the NOAA sea surface temperature record (called ERSST) from version 3 to version 4, and the incorporation of a new land record from the International Surface Temperature Initiative (ISTI) that makes use of around 32,000 land stations rather than the 7,000 or so GHCN-Monthly stations previously utilized. The new land record is quite similar to that produced by Berkeley Earth, though it has relatively little impact on the temperature trend vis-à-vis the old land record, particularly during the recent 1998-present period.



Homogenization of Temperature Data: An Assessment

Posted on 2 November 2015 by Kevin C

The homogenization of climate data is a process of calibrating old meteorological records, to remove spurious factors which have nothing to do with actual temperature change. It has been suggested that there might be a bias in the homogenization process, so I set out to reproduce the science for myself, from scratch.  The results are presented in a new report: "Homogenization of Temperature Data: An Assessment".

Historical weather station records are a key source of information about temperature change over the last century. However the records were originally collected to track the big changes in weather from day to day, rather than small and gradual changes in climate over decades. Changes to the instruments and measurement practices introduce changes in the records which have nothing to do with climate.

On the whole these changes have only a modest impact on global temperature estimates. However if accurate local records or the best possible global record are required then the non-climate artefacts should be removed from the weather station records. This process is called homogenization.

The validity of this process has been questioned in the public discourse on climate change, on the basis that the adjustments increase the warming trend in the data. This question is surprising in that sea surface temperatures play a larger role in determining global temperature than the weather station records, and are subject to larger adjustments in the opposite direction (Figure 1). Furthermore, the adjustments have the biggest effect prior to 1980, and don't have much impact on recent warming trends.

Figure 1: The global temperature record (smoothed) with different combinations of land and ocean adjustments.

I set out to test the assumptions underlying temperature homogenization from scratch. I have documented the steps in this report and released all of the computer code, so that others with different perspectives can continue the project. I was able to test the underlying assumptions and reproduce many of the results of existing homogenization methods. I was also able to write a rudimentary homogenization package from scratch using just 150 lines of computer code. A few of the tests in the report are described in the following video.
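For a flavour of what such a package involves, here is a toy version of one standard homogenization ingredient: locating a single step change in a station-minus-neighbours difference series. This is an illustrative sketch of the general idea, not code from the report:

```python
import numpy as np

def detect_step(diff):
    """Find the breakpoint that best splits a station-minus-neighbours
    difference series into two constant segments (a single step change),
    by minimising the within-segment sum of squares. Returns the break
    index and the size of the step."""
    n = diff.size
    best_k, best_cost = None, np.inf
    for k in range(2, n - 2):
        a, b = diff[:k], diff[k:]
        cost = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
        if cost < best_cost:
            best_k, best_cost = k, cost
    step = diff[best_k:].mean() - diff[:best_k].mean()
    return best_k, step
```

Working with the difference against neighbouring stations is the key trick: real climate signals are shared between neighbours and cancel, while instrument or siting changes at one station show up as a step.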



Making sense of the slowdown in global surface warming

Posted on 26 May 2015 by Kevin C

The slowdown in global warming is a subject of intense study. Is it a real physical effect, or a few chance cool years, or something more complex? Could it have been predicted? Can we understand it in retrospect? The following lecture and commentary from the Denial101x course attempt to summarize recent work on the subject. However it is a very fast-moving field, so this summary can only cover a small fraction of the material and will quickly become out-of-date (if it is not already so).

View on YouTube

Making Sense of the Slowdown: Commentary

The term 'hiatus' is often applied to describe a slowdown in the rate of global warming since the late 1990s or the early 2000s. However there are two separate questions which are often confused in discussion of the hiatus. The first is whether there has been a change in the rate of warming, while the second concerns whether the rate of warming is in line with model projections.

When looking at the rate of warming, the year-to-year variability makes it hard to draw conclusions from short periods, especially if we are allowed to cherry-pick a start date. Separating a change in the rate of warming from a few chance cool years is hard; however, careful analysis of climate models suggests that recent changes in the rate of warming can occur naturally, but are uncommon (Roberts et al 2015).



The DENIAL101x temperature tool

Posted on 5 May 2015 by Kevin C

As part of DENIAL101x, we've released a new tool which enables anyone to check and debunk misinformation about the historical temperature record for themselves.


Try it for yourself here!

You can look at temperature records at any scale, from a local weather station to national and global records. You can investigate common myths, such as the impact of urban heat islands and adjustments to the data. And you can use simple statistical tests to examine the accuracy of the record for yourself.

Here's an introductory video on using the temperature tool:

There's more information in the course materials, for example explaining those two 'advanced' method buttons at the bottom of the tool. Turn those off and you can produce nonsense - but very interesting nonsense. Why? To find out more, it's not too late to sign up at DENIAL101x.



Telegraph wrong again on temperature adjustments

Posted on 24 February 2015 by Kevin C

There has been a vigorous discussion of weather station calibration adjustments in the media over the past few weeks. While these adjustments don't have a big effect on the global temperature record, they are needed to obtain consistent local records from equipment which has changed over time. Despite this, the Telegraph has produced two highly misleading stories about the station adjustments, the second including the demonstrably false claim that they are responsible for the recent rapid warming of the Arctic.

In the following video I show why this claim is wrong. But more importantly, I demonstrate three tools to allow you to test claims like this for yourself.

The central error in the Telegraph story is the attribution of Arctic warming (and somehow sea ice loss) to weather station adjustments. This conclusion is based on a survey of two dozen weather stations. But you can of course demonstrate anything you want by cherry picking your data, in this case in the selection of stations. The solution to cherry picking is to look at all of the relevant data - in this case all of the station records in the Arctic and surrounding region. I downloaded both the raw and adjusted temperature records from NOAA, and took the difference to determine the adjustments which had been applied. Then I calculated the trend in the adjustment averaged over the stations in each grid cell on the globe, to determine whether the adjustments were increasing or decreasing the temperature trend. The results are shown for the last 50 and 100 years in the following two figures:

Trend in weather station adjustments over the period 1965-2014, averaged by grid cell. Warm colours show upwards adjustments over time, cold colours downwards. For cells with less than 50 years of data, the trend is over the available period.

Trend in weather station adjustments over the period 1915-2014, averaged by grid cell. Warm colours show upwards adjustments over time, cold colours downwards. For cells with less than 100 years of data, the trend is over the available period.
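The per-station step in the procedure described above (difference the adjusted and raw records, then take the trend of that adjustment) can be sketched as follows; gridding then simply averages these trends over the stations falling in each cell. The function is illustrative, not the code actually used:

```python
import numpy as np

def adjustment_trend(raw, adjusted, years):
    """Trend (per decade) of the adjustment applied to one station,
    i.e. of (adjusted - raw), over the years where both records exist."""
    diff = np.asarray(adjusted, float) - np.asarray(raw, float)
    ok = np.isfinite(diff)
    x, y = np.asarray(years, float)[ok], diff[ok]
    slope = np.polyfit(x, y, 1)[0]   # degrees C per year
    return 10.0 * slope              # degrees C per decade
```

A positive value means the adjustments warm the station's trend, a negative value means they cool it; mapping these values cell by cell is what distinguishes a genuine pattern from a cherry-picked sample of stations.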



Missing Arctic warming does contribute to the hiatus, but it is only one piece in the puzzle.

Posted on 17 February 2015 by Kevin C

A new paper from scientists at the Danish Meteorological Institute investigates the geographical distribution of warming over the period of the recent slowdown. Interestingly, they fail to find any significant contribution from the omission of the rapidly warming Arctic from some temperature datasets. This is surprising, given that the DMI's own data, as well as the AVHRR satellite data, the major weather model reanalyses and land based weather stations, all show rapid Arctic warming at a rate which should affect global trends.

We have reproduced their work and established the reasons for their result. Gleisner and colleagues fail to find the impact of Arctic warming for three reasons: where they are looking for it, how they are looking for it, and when they are looking for it. We will consider each of these questions in turn.

First: A sanity check

First let's do a very simple sanity check to see if missing out the Arctic should have a noticeable effect on global temperature trends.

HadCRUT4 had on average 64% coverage for the region north of 60°N for our original study period of 1997-2012. This region corresponds to about 6.7% of the planet's surface. Therefore the missing region corresponds to about 2.5% of the planet. Eighty percent of the missing region is north of 70°N where coverage is very incomplete.

The rate of Arctic warming in the MERRA reanalysis for the region north of 70°N, where most of the missing coverage occurs, is 1.3°C/decade. The ERA-interim reanalysis shows a higher rate of 1.7°C/decade. (Check it yourself)

The trend for the rest of the world is much smaller. Therefore, the missing region in the Arctic alone should increase the global trend by roughly 0.03 to 0.04°C/decade. The trend in HadCRUT4 over that period is about 0.05°C/decade. So inclusion of the Arctic alone might be expected to increase the global trend by 60-80%, as illustrated in Figure 1. Gleisner et al provide no explanation for the apparent contradiction between their results and the weather models.
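The back-of-envelope arithmetic above can be written out explicitly, using only the numbers quoted in the text:

```python
# Sanity check: how much should the missing Arctic boost the global trend?
arctic_fraction = 0.067                    # area north of 60N, fraction of globe
missing_fraction = (1 - 0.64) * arctic_fraction   # ~2.5% of the planet uncovered

global_trend = 0.05                        # HadCRUT4 trend, C/decade, 1997-2012
for arctic_trend in (1.3, 1.7):            # MERRA and ERA-interim, C/decade
    # Extra global trend if the gap warmed at the reanalysis rate rather
    # than the global rate it implicitly gets when simply left out:
    boost = missing_fraction * (arctic_trend - global_trend)
    print(round(boost, 3))                 # roughly 0.03 and 0.04 C/decade
```

Against a global trend of about 0.05°C/decade, a boost of 0.03 to 0.04°C/decade is the 60-80% increase quoted above.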


Figure 1: Global impact of Arctic warming estimated from reanalyses. The JRA-55 analysis (not shown) shows good agreement with ERA-interim.



Cowtan and Way 2014: Hottest or not?

Posted on 2 February 2015 by Kevin C

2014 is over, and all the major temperature record providers have reported their annual temperatures, generating a lot of discussion as to whether the year was the hottest on record. Whether 2014 was hottest or not doesn’t really change our understanding of the science, but the media coverage should make it very clear that it is important for social reasons.

In 2014 we also released our version 2 temperature reconstructions based on separate treatment of land and oceans. The long infilled reconstruction, which covers the same period as the underlying HadCRUT4 data, is the most used. So how does 2014 compare in our infilled temperature reconstruction?

Rankings of the ten warmest years in five temperature datasets (each column is one dataset; the leftmost column of years is our infilled reconstruction):

Rank
  1   2010   2014   2014   2014   2014=
  2   2014   2010   2010   2010   2010=
  3   2005   2005   2005   2005   2005
  4   2007   2007   2007   1998   1998
  5   2009   2006   1998   2013   2003
  6   2006   2013   2013   2003   2006
  7   2013   2009   2002   2002   2009
  8   1998   2002   2009   2006   2002
  9   2003   1998   2006   2009   2013
 10   2002   2003   2003   2007   2007

As had been anticipated, we show 2014 as the second hottest on record, and by a close margin - the temperature anomaly for 2014 was 0.61°C, as compared to 0.63°C for 2010. Given that our work is motivated by the desire to understand why the different versions of the temperature record differ, the difference is interesting.



Kevin Cowtan Debunks Christopher Booker's Temperature Conspiracy Theory

Posted on 27 January 2015 by Kevin C

In The Telegraph, Christopher Booker accused climate scientists of falsifying the global surface temperature data, claiming trends have been "falsified" through a "wholesale corruption of proper science."  Booker's argument focuses on adjustments made to raw data from temperature stations in Paraguay.  In the video below, Kevin Cowtan examines the data and explains why the adjustments in question are clearly justified and necessary, revealing the baselessness of Booker's conspiracy theory.

The video features a prototype tool for investigating the global temperature record. This tool will be made available with the upcoming MOOC, Making Sense of Climate Science Denial, where we will interactively debunk myths regarding surface temperature records.



Uncertainty, sensitivity and policy: Kevin Cowtan's AGU presentation

Posted on 15 January 2015 by Kevin C

The surface thermometer record forms a key part of our knowledge of the climate system. However it is easy to overlook the complexities involved in creating an accurate global temperature record from historical thermometer readings. If the limitations of the thermometer record are not understood, we can easily draw the wrong conclusions. I reevaluated a well known climate sensitivity calculation and found some new sources of uncertainty, one of which surprised me.

This highlights two important issues. Firstly, the thermometer record (while much simpler than the satellite record) requires significant expertise in its use, although further work from the record providers may help to some extent. Secondly, the policy discussion, which has been centered on the so-called 'warming hiatus', has been largely dictated by the misinformation context rather than by the science.

At the AGU fall meeting I gave a talk on some of our work on biases in the instrumental temperature record, with a case study on the implications from a policy context. The first part of the talk was a review of our previous work on biases in the HadCRUT4 and GISTEMP temperature records, which I won't repeat here. I briefly discussed the issues of model-data comparison in the context of the CMIP-5 simulations, and then looked at a simple case study on the application of our results.

The aim of doing a case study using our data was to ascertain whether our work had any implications beyond the problem of obtaining unbiased global temperature estimates. In fact repeating an existing climate sensitivity study revealed a number of surprising issues:

  1. Climate sensitivity is affected by features of the temperature data which were not available to the original authors.
  2. It is also affected by features of the temperature record which we hadn't considered either, such as the impact of 19th century ship design.
  3. The policy implications of our work have little or nothing to do with the hiatus.

The results highlight the fact that significant expertise is currently required to draw valid conclusions from the thermometer record. This represents a challenge to both providers and users of temperature data.


Let's start by looking at the current version of our temperature reconstruction, created by separate infilling of the Hadley/CRU land and ocean data. The notable differences are that our reconstruction is warmer in the 2000s (due to rapid Arctic warming in regions where HadCRUT4 lacks coverage) and around 1940, and cooler in the 19th century due to poor coverage in HadCRUT4 (figure 1).


Figure 1: Comparison of the Cowtan and Way version 2 long reconstruction against HadCRUT4, showing the uncertainty interval from CWv2.



How global warming broke the thermometer record

Posted on 25 April 2014 by Kevin C

Last November we published a paper in Quarterly Journal of the Royal Meteorological Society on the subject of coverage bias in the Met Office HadCRUT4 temperature record. The paper was made available to all through the generous donations of Skeptical Science readers. We found that when the HadCRUT4 data are extended to cover the whole globe, some of the apparent slowdown in global warming over the past 16 years disappears. This video provides a brief recap of that work:

The original motivation for the project came from the fact that different versions of the temperature record were showing substantially different short term trends. We assumed that in addressing the coverage bias in HadCRUT4 we would bring it into agreement with the GISTEMP record from NASA. Having done that, the project would be finished.

But what we actually found was a surprise - our infilled record showed rather faster warming than GISTEMP.

The GISTEMP conundrum

We first thought that the disagreement must be down to the sea surface temperature (SST) data, because the Met Office had recently corrected a bias in their ocean warming estimates of about the right size. However this guess turned out to be wrong. When we looked at a map of differences between the GISTEMP trends and ours (Figure 1) the main differences were not in the oceans at all: they were in the Arctic. And the differences in this one small region were big enough to explain about two thirds of the difference in trend between our results and GISTEMP.

 Figure 1: Difference in temperature trends between GISTEMP and the Cowtan and Way infilled temperature data

Figure 1: Difference in temperature trends between GISTEMP and the Cowtan and Way infilled temperature data on the period 1997-2012 (i.e. GISTEMP minus C&W). Units are °C/decade.



Cowtan and Way: Surface temperature data update

Posted on 27 January 2014 by Kevin C

Following the release of temperature data for December 2013,  we have updated our temperature series to include another year. From now on we hope to provide monthly updates. In addition we have released the first of a new set of 'version 2' temperature reconstructions.

Version 2 temperature reconstructions

One of the main limitations of Cowtan and Way (in press), which we highlighted in the paper and has been echoed by others, is that the global temperature reconstruction was performed on the blended land-ocean data. The problem with this approach is that surface air temperatures behave differently over land and ocean, primarily due to the thermal inertia and mixing of the oceans. The problem is compounded by the fact that sea surface temperatures are used as a proxy for marine air temperatures due to problems in the measurement of marine air temperatures.

We have therefore started releasing temperature series based on separate reconstruction of the land and ocean data. The first of these 'version 2' temperature series is a long reconstruction covering the period from 1850 to the present, infilled by kriging the HadCRUT4 land and ocean ensembles. (We can't produce a hybrid reconstruction back to 1850 because the satellite data only starts in 1979.)
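To give a flavour of how kriging infills a gap, here is a minimal one-dimensional sketch (a toy illustration only, not the actual reconstruction code, which works with latitude-longitude grids on a sphere and the HadCRUT4 ensemble): unobserved points are estimated as a covariance-weighted combination of the observed ones.

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, length_scale=3.0):
    """Infill values at x_new from observations (x_obs, y_obs) by
    simple kriging with an exponential covariance model."""
    cov = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / length_scale)
    K = cov(x_obs, x_obs)            # covariance among observed points
    k = cov(x_obs, x_new)            # covariance between observed and new points
    weights = np.linalg.solve(K, k)  # kriging weights
    return weights.T @ y_obs

# Toy example: a smooth anomaly field sampled at a few "stations"
x_obs = np.array([0.0, 2.0, 5.0, 9.0])
y_obs = np.sin(x_obs / 3.0)
x_new = np.linspace(0.0, 9.0, 10)
infilled = simple_kriging(x_obs, y_obs, x_new)
```

At observed locations the kriging weights collapse to exact interpolation; between them the estimate relaxes smoothly according to the assumed covariance, which is why the choice of covariance model matters.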

For the long reconstruction there is no need to rebaseline the data to match the UAH data, and so all the original observations may be used. The separate land/ocean reconstructions also address a small bias due to changing land and ocean coverage which was mitigated by the rebaselining step in our previous work.

The use of the ensemble data means that we now produce a more comprehensive estimate of the uncertainty in the temperature estimates (following Morice et al 2012). The coverage uncertainty estimate has also been upgraded to capture some of the seasonal cycle in the uncertainty. A comparison of the temperature series to the official HadCRUT4 values is shown in Figure 1.

Figure 1: Comparison of HadCRUT4 to the infilled reconstruction

Figure 1: Comparison of HadCRUT4 to the infilled reconstruction, using a 12 month moving average.



Cowtan and Way (2013) is now open access

Posted on 3 December 2013 by Kevin C

Thanks to the generosity of Skeptical Science contributors, we are happy to report that Cowtan and Way (2013) is now open access and freely available to the public. We hope this will help to advance the discussion on coverage bias in the temperature record. We would like to thank everyone who has so generously contributed.

You can now read the paper here.

As a bonus, here is a poster which I took to the 2013 EarthTemp meeting at the DMI in Copenhagen last June. I was very fortunate to be able to attend this meeting, and talking to the experts there was critical to understanding the behaviour of air temperatures over sea ice - this led to section 5 in the paper and our more recent update.

Coverage of the paper has, predictably, ranged from the outstanding to the abominable. However some of the resulting conversations have already strengthened our results and are opening new lines of investigation. My favourite quote is from William M. Connolley: "... anyone could have done it. Well, not quite, because they did it carefully". One of the things which still surprises me about our paper is that no one else beat us to it. And if it can be said of our work that "they did it carefully", then we have achieved one of our primary goals.



The Other Bias

Posted on 15 November 2013 by Kevin C

Looking at three versions of the global surface temperature record, with their different behaviour over the last decade and a half, it is only natural to wonder 'which is right?'.

To answer this question, we need to know why they differ. And the first place to look is the known sources of bias which impact the different versions of the temperature record.

One source of bias - due to poor observational coverage - has been discussed in our recent paper, although it was reported back in 2009, and it was addressed by NASA as long ago as 1987.

The other source of bias comes from a change in the way sea surface temperatures (SSTs) are measured. In the 1980s most SST observations came from ship engine room water intake sensors; over the last two decades, however, there has been a shift to observations from buoys. The ship-based observations are subject to biases which depend on a number of factors, including the design of the ship, but are on average slightly warm. As a result the raw observations can provide a misleading trend for short periods spanning the changeover, such as the last 15 years. The problem is illustrated in Figure 1.

Figure 1: Illustration of how observation type can bias trends

Figure 1: How a transition from ship to buoy measurements can bias temperature trends. This image is illustrative and does not represent the actual trends in observation types, which are more complex.
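The effect illustrated above is easy to reproduce numerically. The following sketch uses made-up numbers (a +0.1 °C ship bias and a buoy fraction rising from 20% to 80% are assumptions for illustration, not the measured values): even with no change in the true temperature, the shifting observation mix produces a spurious cooling trend in the raw blend.

```python
import numpy as np

# Illustrative only: a flat "true" SST series measured by a mix of
# ships (assumed +0.1 C warm bias) and buoys (assumed unbiased).
# As the buoy fraction rises, the raw blended series trends downward
# even though the true temperature is constant.
years = np.arange(1998, 2014)
true_sst = np.zeros_like(years, dtype=float)     # no real change
ship_bias = 0.1                                  # assumed warm bias, in C
buoy_frac = np.linspace(0.2, 0.8, len(years))    # growing buoy share
raw = true_sst + (1 - buoy_frac) * ship_bias     # blended raw record

trend = np.polyfit(years, raw, 1)[0] * 10        # C per decade
print(f"spurious trend: {trend:+.3f} C/decade")  # prints -0.040
```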



Help make our coverage bias paper free and open-access

Posted on 14 November 2013 by Kevin C

Earlier in the year, Skeptical Science ran an appeal to fund the publication of the Cook et al Consensus Project paper. The required funds were raised in less than a day, a powerful example of citizen-science in action. Our new paper 'Coverage bias in the HadCRUT4 temperature record' is somewhat different from the consensus paper: it is not a Skeptical Science project, and the primary audience are the users and providers of global surface temperature data. Nonetheless the results of the paper are also of significant relevance to the public discourse on climate change.

Our aim has always been to get the best possible information into the hands of the public. To this end we have already made all the data and methods available. However to best enable others to investigate this area, to build on, improve, correct and if necessary refute our work, we would ideally like to provide free, open access to the paper as well.

As a spare time project neither Robert nor myself have academic funds which can be legitimately contributed to making this paper open access. In the light of your generosity last time, we would like to ask you to help crowd-fund making our paper open-access and freely available to the general public.

The publication costs are as follows:

  • Open access: $2400-$3000, depending on RMS membership conditions
  • Printing charges: £0-£700, depending on colour and layout

If we reach the first goal, the paper will be made open access. Donations will continue to be accepted to offset the remaining publication costs, which we will otherwise bear out of our own pockets. If the final goal is reached we will remove the donation button and the excess will be set aside to fund publication costs of future papers.

Thank you again for all the varied contributions that you make to the dissemination of climate information and for providing engaging discussion and feedback on issues related to climate science.

Kevin Cowtan and Robert Way




Global warming since 1997 more than twice as fast as previously estimated, new study shows

Posted on 13 November 2013 by dana1981

A new paper published in The Quarterly Journal of the Royal Meteorological Society fills in the gaps in the UK Met Office HadCRUT4 surface temperature data set, and finds that the global surface warming since 1997 has happened more than twice as fast as the HadCRUT4 estimate. This short video abstract summarizes the study's approach and results.

The study, authored by Kevin Cowtan from the University of York and Robert Way from the University of Ottawa (who both also contribute to Skeptical Science), notes that the Met Office data set only covers about 84 percent of the Earth's surface. There are large gaps in its coverage, mainly in the Arctic, Antarctica, and Africa, where temperature monitoring stations are relatively scarce. These are shown in white in the Met Office figure below. Note the rapid warming trend (red) in the Arctic in the Cowtan & Way version, missing from the Met Office data set.

Met Office vs. Cowtan & Way (2013) surface temperature coverage and trends 
Met Office vs. Cowtan & Way (2013) surface temperature coverage and trends



Has the rate of surface warming changed? 16 years revisited

Posted on 21 May 2013 by Kevin C

Climate scientists have traditionally looked at climate over long periods - 30 years or more. However the media obsession with short term trends has focussed attention on the past 15-16 years. Short term trends are much more complex because they can be affected by many factors which cancel out over longer periods. In a recent interview James Hansen noted "If you look over a 30-40 year period the expected warming is two-tenths of a degree per decade, but that doesn't mean each decade is going to warm two-tenths of a degree: there is too much natural variability".

Over the winter vacation we produced a video which tried to explain the contributions to the recent temperature trend based on the best evidence available at the time; however, the rapid pace of development in this area has thrown significant doubt on its conclusions. While the video has significant educational content, the conclusions do not reflect a scientific consensus, so we will be withdrawing it and will work on an updated version.

The problem

The video was based on an approach pioneered by Lean and Rind (2008) and Foster and Rahmstorf (2011), by determining the contribution of known influences on global temperature to best explain those temperatures. However this approach can give misleading results if significant influences on temperature are missing from the analysis, or if wrong influences are included. Therefore we need a comprehensive list of possible factors which might affect the short term trend. Based on the latest literature, the following should be considered:



The Japan Meteorological Agency temperature record

Posted on 12 February 2013 by Kevin C

The Japan Meteorological Agency (JMA) produces its own version of the instrumental temperature record, which has until recently received little attention. NASA's Climate 365 put together a graphic to illustrate how little difference there is between the four primary global surface temperature datasets (Figure 1), but of course all the climate contrarians took from this was that JMA's data show the least warming in recent years. Sources with a tendency towards motivated reasoning naturally concluded that it must be right.


Figure 1: The four main global surface temperature measurement datasets (Source)

Since it is now being more widely quoted, it is time we took a look at the JMA data and investigated why they show less recent surface warming than the other three datasets. As we have seen before, an understanding of the coverage of a dataset is critical in interpreting the resulting temperature series, so we will start by looking at the maps. The following figure compares the JMA temperature data with three other surface temperature datasets (HadCRUT4, NCDC and GISTEMP), the satellite data from UAH, and the reanalysis (weather model) data from NCEP/NCAR.

Figure 2: Coverage maps for various temperature series

 Figure 2: Coverage maps for various temperature series. Colors represent mean change in temperature between the periods 1996-2000 and 2006-2010, from +2C (dark red) to -2C (dark blue). Note that the cylindrical projection exaggerates the polar regions of the map.



16 years - Update and Frequently Asked Questions

Posted on 10 February 2013 by Kevin C

Update 21/02/2013: Troy Masters is doing some interesting analysis on the methods employed here and by Foster and Rahmstorf. On the basis of his results and my latest analysis I now think that the uncertainties presented here are significantly underestimated, and that the attribution of short term temperature trends is far from settled. There remains a lot of interesting work to be done on this subject.

The ‘16 years’ video has led to a number of good questions from viewers which simply could not be addressed in a basic 2-minute explanation. We will first look at the results from the latest data, and then try to address some of the questions which have arisen. The main issues which will be addressed are:

  • How are the natural influences determined?
  • What happens if you use data over a longer timescale?
  • What about other versions of the temperature record (e.g. HadCRUT4)?

Each question will be addressed at a basic level where possible, however some of the issues are more technical.

Update to December 2012

The GISTEMP temperature data for December 2012 were not available at the time the video was being made, and thus could not be included. The release of this extra month of data brought a couple of surprises: firstly, the additional month was cooler than expected; secondly, GISTEMP switched from using HadISST to ERSST for the sea surface temperature data. Both changes affected the results. In addition I have switched to using the constrained trend method described here for the trends reported in the text and included the latest volcano and solar data.



16  ^  more years of global warming

Posted on 10 January 2013 by Kevin C

Update 21/02/2013: Troy Masters is doing some interesting analysis on the methods employed here and by Foster and Rahmstorf. On the basis of his results and my latest analysis I now think that the uncertainties presented here are significantly underestimated, and that the attribution of short term temperature trends is far from settled. There remains a lot of interesting work to be done on this subject.

Human greenhouse gas emissions have continued to warm the planet over the past 16 years. However, a persistent myth has emerged in the mainstream media challenging this.  Denial of this fact may have been the favorite climate contrarian myth of 2012, first invented by David Rose at The Mail on Sunday with an assist from Georgia Tech's Judith Curry, both of whom later doubled-down on the myth after we debunked it.  Despite these repeated debunkings, the myth spread throughout the media in various opinion editorials and stunts throughout 2012. The latest incarnations include this article at the Daily Mail, and a misleadingly headlined piece at the Telegraph.

As a simple illustration of where the myth goes wrong, the following video clarifies how the interplay of natural and human factors has affected short-term temperature trends, and demonstrates that underneath the short-term noise, the long-term human-caused global warming trend remains as strong as ever.



DIY climate science: The Instrumental Temperature Record

Posted on 6 December 2012 by Kevin C

Key points
  • Anyone can reproduce the instrumental temperature record for themselves, either by using existing software or by writing their own.
  • A new browser-based tool is provided for this purpose.
  • Many recent claims concerning climate change can be tested using this software.

The motto of the UK's Royal Society is "Nullius in verba", often translated as "Take nobody's word for it", with the implication that scientific views must be based on evidence, not authority. In this spirit, one way to test the claims of climate science is to do it yourself (DIY). There are only three scientific questions which need to be answered to determine whether climate change is a real issue:

  1. Is it warming?
  2. Why is it warming?
  3. What will happen in future?

The rest is window dressing. The first two questions can be addressed using the instrumental temperature record. The in situ record (based on thermometer readings) provides observations of air temperature over land and sea surface temperature which can be used to reconstruct a temperature record for much of the Earth's surface back to the 19th century. This gives a direct indication of whether the Earth is warming.
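The core calculation in a DIY temperature record is an area-weighted average of gridded anomalies, with each cell weighted by the cosine of its latitude (cells shrink towards the poles) and unobserved cells excluded. Here is a minimal sketch on a made-up grid (the grid values and coverage gap are invented for illustration), showing how missing Arctic coverage biases the global mean low when the Arctic is warming fastest:

```python
import numpy as np

def global_mean(anom, lats):
    """Area-weighted global mean of a (lat, lon) anomaly grid.
    Cells are weighted by cos(latitude); NaN cells (no coverage)
    are excluded from both numerator and denominator."""
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(anom)
    mask = ~np.isnan(anom)
    return np.nansum(anom * w * mask) / np.sum(w * mask)

# Toy 36x72 grid (5-degree cells) with warming concentrated at high
# northern latitudes, plus a version with an Arctic coverage gap
lats = np.arange(-87.5, 90.0, 5.0)
anom = np.tile(0.2 + 0.01 * np.maximum(lats, 0.0)[:, None], (1, 72))
full = global_mean(anom, lats)

gappy = anom.copy()
gappy[lats > 70, :] = np.nan           # drop Arctic coverage
masked = global_mean(gappy, lats)
print(full, masked)                    # masked < full: coverage bias
```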

The instrumental temperature record



Papers on Hurricanes and Global Warming

Posted on 4 November 2012 by Kevin C

This is a re-post from Ari Jokimäki's AGW Observer. Titles added by Kevin C

A new historical record of Atlantic hurricane threat

From Grinsted et al (2012)

Homogeneous record of Atlantic hurricane surge threat since 1923 – Grinsted et al. (2012) “Detection and attribution of past changes in cyclone activity are hampered by biased cyclone records due to changes in observational capabilities. Here we construct an independent record of Atlantic tropical cyclone activity on the basis of storm surge statistics from tide gauges. We demonstrate that the major events in our surge index record can be attributed to landfalling tropical cyclones; these events also correspond with the most economically damaging Atlantic cyclones. We find that warm years in general were more active in all cyclone size ranges than cold years. The largest cyclones are most affected by warmer conditions and we detect a statistically significant trend in the frequency of large surge events (roughly corresponding to tropical storm size) since 1923. In particular, we estimate that Katrina-magnitude events have been twice as frequent in warm years compared with cold years (P < 0.02).” Aslak Grinsted, John C. Moore, and Svetlana Jevrejeva, PNAS October 15, 2012, doi: 10.1073/pnas.1209542109. [FULL TEXT]



Watts' New Paper - Analysis and Critique

Posted on 2 August 2012 by dana1981

 "An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends"
Paper authors: A. Watts, E. Jones, S. McIntyre and J. R. Christy

In an unpublished paper, Watts et al. raise new questions about the adjustments applied to the U.S. Historical Climatology Network (USHCN) station data (which also form part of the GHCN global dataset).  Ultimately the paper concludes "that reported 1979-2008 U.S. temperature trends are spuriously doubled."  However, this conclusion is not supported by the analysis in the paper itself.  Here we offer preliminary constructive criticism, noting some issues we have identified with the paper in its current form, which we suggest the authors address prior to submittal to a journal.  As it currently stands, the issues we discuss below appear to entirely compromise the conclusions of the paper.

The Underlying Problem

In reaching the conclusion that the adjustments applied to the USHCN data spuriously double the actual trend, the authors rely on the difference between the NCDC homogenised data (adjusted to remove non-climate influences, discussed in detail below) and the raw data as calculated by Watts et al.  The conclusion therefore relies on an assumption that the NCDC adjustments are not physically warranted.  They do not demonstrate this in the paper.  They also do not demonstrate that their own ‘raw’ trends are homogeneous. 

Ultimately Watts et al. fail to account for changes in time of observation, in instrumentation, and in station location, leading them to wrongly conclude that uncorrected data are much better than data that take all of these effects into account.



The GLOBAL global warming signal

Posted on 4 July 2012 by Kevin C


  • New global temperature series confirm the GISTEMP results using only the HadCRUT3, NCDC and/or UAH data.

  • Once El Nino is taken into account there is no evidence for a slowdown in warming over the period 1996-2010.

  • If the HadCRUT4/HadSST3 ocean temperature corrections are also included then the underlying global warming rate is ≳0.2°C/decade.

  • There remain uncorrected cool biases in the temperature trends.


Global warming involves warming of the whole globe (the clue is in the name), but it does not necessarily affect every part of the globe at the same rate.

Different parts of the globe can experience very different changes under greenhouse warming. As a result, if you want to measure global warming, you have to measure the whole globe, not just a part of it. But of the three main in situ temperature records (GISTEMP, NCDC and HadCRUT), only GISTEMP is near-global in coverage over recent decades, and only by means of allowing each weather station to cover a larger region of the map. The incomplete coverage of the Hadley and NCDC datasets may be seen in Figure 1, along with three global temperature reconstructions.

Figure 1: Coverage maps for various temperature series

Figure 1: Coverage maps for various temperature series. Colors represent mean change in temperature between the periods 1996-2000 and 2006-2010, from +2C (dark red) to -2C (dark blue). Note that the cylindrical projection exaggerates the polar regions of the map.



HadCRUT4: Analysis and critique

Posted on 13 June 2012 by Kevin C

In my previous three articles on HadSST3, CRUTEM4 and HadCRUT4, I have given an overview of the literature and data concerning the new datasets which comprise the Hadley/CRU version of the instrumental temperature record. The analysis I have presented so far has been addressed at communicating the work done by Hadley and CRU as clearly as possible.

However in the course of examining the data for these articles I have come across a number of features which are of interest in understanding the data and do not seem to have been widely reported. Some of these features are (at least to me) rather unexpected. Note however that this material is the result of a few months of spare-time effort, and has not been subject to the scrutiny of peer-review, and so should be treated as tentative. It is likely that at least some of it is wrong. Constructive criticism and pointers to any previous similar work I have missed are welcome.

The material is quite dense. Much of it concerns the problem of coverage bias, so reviewing my previous articles ‘HadCRUT3, Cool or Uncool?’ and ‘GISTEMP, Cool or Uncool?’ on this subject may be helpful. I will start by presenting an outline of my conclusions and then explain in detail how I reached them.



HadCRUT4: A detailed look

Posted on 22 May 2012 by Kevin C

The Hadley Centre of the UK Meteorological office and the Climatic Research Unit (CRU) of the University of East Anglia have since 1989 jointly maintained a global surface temperature record, HadCRUT. The current version of this record, HadCRUT3, is very widely cited in the academic literature (currently around 900 citations), and provides a record of combined land and ocean temperatures running back to 1850. The dataset is updated monthly to provide a continuous snapshot of the state of the climate.

Recently a new version of the record, HadCRUT4, has been released, running to December 2010, with monthly updates planned in future. This update is a response to several factors. In the case of the CRUTEM land temperature record, coverage has been declining because of the need for current weather stations to have a sufficient number of readings in the baseline period 1961-1990. As weather stations move, continuity back to this period is lost. The impact of this decline in coverage has been assessed in two reports, by the ECMWF and GISS, and also reproduced here, and has led to an underestimation of recent land temperature trends. The CRUTEM4 update introduces a number of new station records to address this coverage issue.

A second motivation for the update is the discovery of a bias in the sea surface temperature record, HadSST2, leading to a significant cool bias following the second world war. A new version, HadSST3, addresses this bias, but also has some impact on recent temperature trends. HadSST3 also introduces a new approach to estimating uncertainties in temperature records through the use of an ensemble of realisations; this approach is carried through to HadCRUT4.

The method and results are described in Morice et al (2012). In this article we will examine the impact of these changes on the global surface temperature record and consider the implications for other datasets.

What has changed?



CRUTEM4: A detailed look

Posted on 15 May 2012 by Kevin C

CRUTEM is a version of the surface temperature record based on weather station data spanning the last one and a half centuries. It is produced by the Climatic Research Unit (CRU) at the University of East Anglia, and provides the land component of the widely quoted HadCRUT global temperature record. Version 3 of this dataset (CRUTEM3) was the current version from 2006 to 2012; however, in the past few months a new version, CRUTEM4, has been released.

Why produce a new version? Studies from the ECMWF and GISS identified a significant bias in the CRUTEM3 record: The dataset has been under-reporting recent temperatures, owing to poor sampling of high Northern latitudes which have displayed the fastest warming over the last decade. The coverage issue has been examined in previous articles on HadCRUT3 and GISTEMP.

CRUTEM4 (Jones et al, 2012) takes the existing CRUTEM3 dataset and adds a significant number of new records. 344 stations were added in the Russian federation, 223 in other former USSR states, 125 in the Arctic and 7 in Greenland. A number of other records were updated. 312 records which were adjusted by CRU in the 1980s for inhomogeneity were either replaced with other records or had their adjustments reassessed. The total number of temperature series in the dataset is now ~5500, although only about 3000 can be updated on a monthly basis.



HadSST3: A detailed look

Posted on 5 May 2012 by Kevin C

The Hadley Centre of the UK Meteorological Office has for a number of years maintained a dataset of sea surface temperatures (SSTs), HadSST2, which has formed a basis for estimating global surface temperatures. The HadSST2 dataset was used in the widely quoted HadCRUT3 temperature record, as well as providing the in-situ sea surface temperature component of HadISST since 2007. HadISST is used along with Reynolds OISST in NASA's GISTEMP record. The source data are versions of the International Comprehensive Ocean Atmosphere Data Set (ICOADS), which includes historical records from many sources.

The SST data are a little more complex than the weather station data with which most of us are familiar: whereas temperature measurements at weather stations have been performed according to a standard protocol for over a century, measurement methods for SST data have changed significantly over the same period. Early measurements were taken using a canvas bucket trailed in the water, or later a better insulated wooden or rubber bucket. Later measurements were taken from engine room intakes, hull sensors, or buoys. The different methods have different biases, and thus significant corrections are required to produce a stable temperature series. The HadSST2 record included a 'bucket correction' for data collected before 1942, to correct for a known cool bias in the data.

This year, the Hadley Centre released a new version of this dataset, HadSST3, based on additional data and, more importantly, some additional corrections. These are described in Kennedy et al, 2012.



GISTEMP: Cool or Uncool?

Posted on 19 April 2012 by Kevin C

There are three main versions of the instrumental temperature record, HadCRUT3 from the UK meteorological office, GISTEMP from NASA, and the NCDC dataset from NOAA. Of the three, HadCRUT3 shows the least warming over the last 15 years, and GISTEMP shows the most. The difference is quite striking:

Dataset      1997-2012 trend
HadCRUT3v    0.013 ±0.142 °C/decade
NCDC         0.049 ±0.132 °C/decade
GISTEMP      0.103 ±0.143 °C/decade

Given the short-term cooling influences which have been operating over the last decade, the GISTEMP trend is much as expected. But the HadCRUT3 and NCDC trends are much lower.

In the previous article we examined the problem of sampling a stratified data set, and how this impacts the HadCRUT3 temperature record. We saw that the land temperature anomalies are both higher and have been increasing faster than ocean temperature anomalies. However land temperatures are under-represented in the HadCRUT3 data over the last decade, and the proportion of land temperatures has been declining. These effects both contribute to an increasing cool bias in the HadCRUT3 data.

Is there another source of bias which might explain the divergence of the three datasets? Studies from the ECMWF and GISS have identified one such source in the HadCRUT3 data: poor coverage at high latitudes. Can we find any evidence which might confirm this?



HadCRUT3: Cool or Uncool?

Posted on 28 March 2012 by Kevin C

The UK Meteorological Office has for many years published estimates of the global mean surface temperature record from 1850. Over the last decade it has been noted that this record has shown little or no warming. The Skeptical Science trend calculator shows that the difference between the HadCRUT3v trend and the IPCC forecast over the past 15 years is statistically significant at the 95% level. What is going on?

Foster and Rahmstorf (2011) have shown that two natural cycles - the El Nino Southern Oscillation (ENSO), and the solar cycle - have contributed temporarily to this apparent slowdown in global warming. But the slowdown is much more obvious in the HadCRUT3v data. Why?

The clues lie in one basic statistical principle, and two features of the data.

The statistical principle: Sampling a stratified population

Suppose you want to determine some statistic for a large population, say the average height of children of a given age. You could simply measure everyone, but that would be impractical. So normally you would measure the heights of a representative sample group. If the group is large enough, the average height of the sample group will give a good estimate of the average height of the age group as a whole.

Or will it? Suppose three quarters of your sample group are girls. Girls make up approximately half of the population as a whole. But girls in the chosen age group are on average shorter than boys. If girls make up three quarters of your sample group, then the average height of the sample group (the 'sample mean') will be lower than the average for the population as a whole (the true 'population mean'). The sample group is not representative of the population, and as a result produces a biased estimate.
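The height example can be checked numerically. In this sketch (the stratum means of 140 cm and 150 cm are made up for illustration), a sample that is three-quarters girls underestimates the population mean, but reweighting each stratum to its true population share recovers an unbiased estimate from the very same sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strata with different means: "girls" (mean 140 cm) and
# "boys" (mean 150 cm), each half of the population.
girls = rng.normal(140, 5, size=100_000)
boys = rng.normal(150, 5, size=100_000)
population_mean = np.concatenate([girls, boys]).mean()   # about 145 cm

# A sample that is three-quarters girls gives a biased (low) mean...
biased = np.concatenate([girls[:7_500], boys[:2_500]]).mean()

# ...but reweighting each stratum to its true population share
# (50/50) recovers an unbiased estimate from the same sample.
reweighted = 0.5 * girls[:7_500].mean() + 0.5 * boys[:2_500].mean()
```

The same logic applies to the temperature record: if land (fast-warming) and ocean (slow-warming) readings are present in the wrong proportions, the unweighted sample mean is biased, which is why gridding and area weighting matter.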

Sampling bias estimating average height trend



The Skeptical Science temperature trend calculator

Posted on 27 March 2012 by Kevin C

Skeptical Science is pleased to provide a new tool, the Skeptical Science temperature trend uncertainty calculator, to help readers critically evaluate claims about temperature trends.

The trend calculator provides some features which are complementary to those of the excellent Wood For Trees web site and this amazing new tool from Nick Stokes, in particular the calculation of uncertainties and confidence intervals.
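As a rough sketch of the kind of calculation involved (a simplified version only: the calculator itself follows the methods of Foster and Rahmstorf, and the function, constants and synthetic data below are illustrative), a trend confidence interval can be computed by ordinary least squares with the standard error inflated for lag-1 autocorrelation in the residuals:

```python
import numpy as np

def trend_with_ci(t, y, z=1.96):
    """OLS trend and 95% confidence interval, inflating the standard
    error for AR(1) autocorrelation in the residuals via an effective
    sample size (a simplified version of the calculator's correction)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = len(y) * (1 - r1) / (1 + r1)            # effective sample size
    se = np.sqrt(np.sum(resid ** 2) / (n_eff - 2)
                 / np.sum((t - t.mean()) ** 2))
    return slope, z * se

# Synthetic monthly series: a 0.02 C/yr trend plus AR(1) "red" noise
rng = np.random.default_rng(1)
t = np.arange(0, 15, 1 / 12)
noise = np.zeros_like(t)
for i in range(1, len(t)):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.02 * t + noise
slope, ci = trend_with_ci(t, y)
```

Ignoring the autocorrelation correction would shrink the confidence interval and make short-term trends look far more certain than they are, which is precisely the mistake the calculator is designed to expose.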

Start the trend calculator to explore current temperature data sets.
Start the trend calculator to explore the Foster & Rahmstorf adjusted data.

What can you do with it?

That's up to you, but here are some possibilities:



The Consensus Project Website


(free to republish)

© Copyright 2020 John Cook
Home | Links | Translations | About Us | Privacy | Contact Us