Are surface temperature records reliable?
What the science says...
The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.
Climate Myth...
Temp record is unreliable
"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.
In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)
Temperature data is essential for predicting the weather. So, the U.S. National Weather Service, and every other weather service around the world, wants temperatures to be measured as accurately as possible.
To understand climate change we also need to be sure we can trust historical measurements. A group called the International Surface Temperature Initiative is dedicated to making global land temperature data available in a transparent manner.
Surface temperature measurements are collected from about 30,000 stations around the world (Rennie et al. 2014). About 7000 of these have long, consistent monthly records (Fig. 1). As technology improves, stations are updated with newer equipment. When equipment is updated or a station is moved, the new data are compared with the old record to make sure measurements stay consistent over time (a toy sketch of this kind of overlap comparison follows Figure 1).
Figure 1. Station locations with at least 1 month of data in the monthly Global Historical Climatology Network (GHCN-M). This set of 7280 stations is used in the global land surface databank. (Rennie et al. 2014)
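The actual procedures used by national weather services are more elaborate, but the core idea of reconciling an equipment change can be sketched simply: estimate the offset between the two instruments over a period when both report, and use it to align the older segment with the new one. The sketch below is only an illustration of that idea, not NOAA's actual homogenization procedure, and every value in it is invented.

```python
# Toy sketch of reconciling an equipment change: estimate the mean offset
# between old and new instruments over an overlap period, then shift the
# older record by that offset. All values are invented; this is not NOAA's
# actual procedure.

def mean_offset(old, new):
    """Mean of (new - old) over months where both instruments report."""
    pairs = [(o, n) for o, n in zip(old, new) if o is not None and n is not None]
    if not pairs:
        raise ValueError("no overlapping months to compare")
    return sum(n - o for o, n in pairs) / len(pairs)

# Twelve overlapping months of monthly means (°C); None marks missing data.
old_instrument = [14.2, 15.1, 17.0, 19.4, 22.1, None, 27.2, 26.8, 24.0, 20.1, 16.3, 13.9]
new_instrument = [14.5, 15.3, 17.4, 19.7, 22.4, 25.3, 27.5, 27.1, None, 20.4, 16.6, 14.2]

offset = mean_offset(old_instrument, new_instrument)
# Shift the pre-change record so the combined series is consistent.
adjusted_old = [None if t is None else t + offset for t in old_instrument]
print(f"estimated offset: {offset:+.2f} °C")
```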
In 2009 some people worried that weather stations placed in poor locations could make the temperature record unreliable. Scientists at the National Climatic Data Center took the concern seriously and studied the possible problem carefully. Their article, "On the reliability of the U.S. surface temperature record" (Menne et al. 2010), reached a surprising conclusion: the stations critics claimed were "poorly sited" actually showed slightly cooler maximum daily temperatures than the average, not the warm bias the critics expected.
In 2010 Dr. Richard Muller, who had criticized the "hockey stick" graph, decided to do his own temperature analysis and organized a group called Berkeley Earth to study the temperature record independently. They specifically wanted to answer the question: is the temperature rise on land improperly affected by four key biases (station quality, homogenization, urban heat island, and station selection)? Their conclusion was no: none of those factors biases the temperature record. The Berkeley Earth conclusions about the urban heat island effect were nicely explained by Andy Skuce in an SkS post in 2011. Figure 2 shows that the U.S. network does not show differences between rural and urban sites; a toy version of this kind of group comparison follows the figure caption.
Figure 2. Comparison of spatially gridded minimum temperatures for U.S. Historical Climatology Network (USHCN) data adjusted for time-of-observation bias (TOB) only, and data selected for rural or urban neighborhoods after homogenization to remove biases. (Hausfather et al. 2013)
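Conceptually, these siting and urban-heat studies boil down to comparing the long-term trend of one group of stations with another. The toy sketch below (invented anomaly series, ordinary least squares, no spatial gridding) shows the shape of that comparison; the published analyses are far more careful.

```python
# Toy comparison of warming trends in two groups of stations (e.g. rural vs
# urban, or well-sited vs poorly sited). The anomaly series are invented;
# real analyses grid stations spatially and handle inhomogeneities first.

def ols_slope(xs, ys):
    """Ordinary least-squares trend of ys against xs (units per year)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1980, 2010))
# Same underlying warming (0.02 °C/yr) plus different year-to-year noise.
rural = [0.02 * (y - 1980) + 0.05 * ((y % 7) - 3) / 3 for y in years]
urban = [0.02 * (y - 1980) + 0.05 * ((y % 5) - 2) / 2 for y in years]

print(f"rural trend: {ols_slope(years, rural):+.4f} °C/yr")
print(f"urban trend: {ols_slope(years, urban):+.4f} °C/yr")
# Similar slopes in both groups mean siting/urbanization is not what drives
# the long-term warming signal.
```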
Temperatures measured on land are only one part of understanding the climate. We track many indicators of climate change to get the big picture. All indicators point to the same conclusion: the global temperature is increasing.
------
See also
Understanding adjustments to temperature data, Zeke Hausfather
Explainer: How data adjustments affect global temperature records, Zeke Hausfather
Time-of-observation Bias, John Hartz
Check original data
All the Berkeley Earth data and analyses are available online at http://berkeleyearth.org/data/.
Plot your own temperature trends with Kevin's calculator.
Or plot the differences between rural, urban, or selected regions with another calculator by Kevin.
NASA GISS Surface Temperature Analysis (GISTEMP) describes how NASA handles the urban heat effect and links to current data.
NOAA Global Historical Climatology Network (GHCN) Daily. GHCN-Daily contains records from over 100,000 stations in 180 countries and territories.
Last updated on 15 August 2017 by Sarah.
Comments
- unified documentation of the procedure, including scientific justification and specification of the algorithms applied, is not available
- for steps 2, 3, 4 & 6 at least references to papers are provided; for step 5 not even that
- neither executables nor source code nor program documentation is provided for the TOBS, MMTS, SHAP & FILNET programs
- the metadata used by the programs above to do their job is missing and/or unspecified
- a clear statement of whether the same automatic procedure was applied to GHCN v2, as hinted at on the USHCN Version 1 site, is missing (unless the arcane wording "GHCN Global Gridded Data" in the HTML header of that page counts)
Well, I have found something, not referenced on either the USHCN or GHCN pages: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/USHCN_v52d.20100217.tar.gz. There is software there (written in Fortran 77) and some rather messy documentation, including an MS Word DOC file. I do not know how authoritative it is. But I do know much better documentation is needed even on low-budget projects, not to mention one that multi-trillion-dollar policy decisions are supposed to be based on. The "Pairwise Homogeneity Algorithm (PHA)" promoted (but not specified) in this document is not referenced on any other USHCN or GHCN page; a Google search for "Pairwise Homogeneity Algorithm" site:gov returns nothing. It would be a major job to do the usual software audit on this thing. One has to hire & pay people with the right expertise, then publish the report along with the data.

However, any scientist would run away screaming upon seeing a calibration curve like this, wouldn't she? It is V-shaped, with clear trends and multiple step-like changes. One would think that with 6736 stations spread all over the world and 176 years of data, providing 4,864,014 individual data points, errors would be a little more independent, allowing the central limit theorem to kick in. At the very least, a detailed explanation is needed of why there are unmistakable trends in the adjustments commensurate with the effect to be uncovered, and why this trend slopes steeply downward for the first half of the epoch while just the opposite is true for the second half.

BTW, the situation with USHCN is a little bit worse. The adjustment for 1934 is -0.465°C relative to those applied to 2007-2010 (something like 0.6°C/century). I'll post the USHCN graph later.

#66 scaddenp at 12:39 PM on 29 June, 2010

You think that you can explain away the warming in the ocean, satellite, and surface records as "anomalies" of poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sea-level rise by soot-induced melting alone, without thermal expansion? I guess similar strange measurement anomalies will explain upper-stratospheric cooling and the IR spectrum changes at TOA and at the surface. That is drawing a very long bow.

One thing at a time, please. Let's focus on the problem at hand first; the rest can wait.

[Oops. Pressed the wrong button first.]
```sh
# Download the GHCN v2 raw and adjusted monthly mean files
$ wget ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean*

# Country code 425 = United States: extract the US (USHCN) stations
$ grep '^425' v2.mean > ushcn.mean
$ grep '^425' v2.mean_adj > ushcn.adj

# Unpack the fixed-width records (12-char station id, 4-digit year, twelve
# 5-char monthly values in tenths of °C) into one "id_year_month value" line
# per month, dropping the -9999 missing-value codes
$ cat ushcn.mean | perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }' | grep -v ' [-]9999$' > ushcn.mean_monthly
$ cat ushcn.adj | perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }' | grep -v ' [-]9999$' > ushcn.adj_monthly

# Extract the keys (first 20 chars = id_year_month) and check for duplicates
$ cut -c-20 ushcn.mean_monthly | sort > ushcn.mean_monthly_id
$ cut -c-20 ushcn.adj_monthly | sort > ushcn.adj_monthly_id
$ uniq -d ushcn.mean_monthly_id
$ uniq -d ushcn.adj_monthly_id

# Keys present in both the raw and the adjusted data
$ sort ushcn.mean_monthly_id ushcn.adj_monthly_id | uniq -d > ushcn.common_monthly_id

# Interleave raw (0), adjusted (1) and common (2) records by key, then emit
# the adjustment (adjusted minus raw) for every key present in both sets
$ (sed -e 's/^/0 /g' ushcn.mean_monthly; sed -e 's/^/1 /g' ushcn.adj_monthly; sed -e 's/^/2 /g' ushcn.common_monthly_id;) | sort +1 -2 +0 -1 > ushcn.composite_list
$ sed -e 's/  */ /g' ushcn.composite_list | perl -e 'while (<>) {chomp; ($i,$id,$t)=split; if ($i==2 && $id eq $iid && $id eq $iiid) {$d=$tt-$ttt; printf "%s %d\n",$id,$d;} $iiid=$iid; $iid=$id; $ttt=$tt; $tt=$t;}' > ushcn.adjustments_monthly_by_station

# Strip station id and month, leaving "year adjustment" pairs, then average
# per year (dividing by 10 converts tenths of °C to °C; the '#' line is a
# sentinel that flushes the final year)
$ sed -e 's/^............_//g' -e 's/_.. / /g' ushcn.adjustments_monthly_by_station | sort > ushcn.adjustments_annual_list
$ echo '#' >> ushcn.adjustments_annual_list
$ cat ushcn.adjustments_annual_list | perl -e 'while (<>) {chomp; ($d,$t)=split; if ($d ne $dd && $dd ne "") {$x/=$n*10; printf "%s\t%.3f\n",$dd,$x; $n=0; $x=0;} $n++; $x+=$t; $dd=$d;}' > ushcn.adjustments_annual.txt

# Plot the annual mean adjustments in a spreadsheet
$ openoffice -calc ushcn.adjustments_annual.txt
```
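For readers who would rather not decode the Perl one-liners, here is a rough Python rendering of the same computation, under the same assumptions: GHCN v2 fixed-width lines (12-character station id beginning with country code 425 for the US, a 4-digit year, then twelve 5-character monthly values in tenths of °C, with -9999 for missing) and the file names used above. It is a sketch, not verified against the original pipeline's output.

```python
# Rough Python equivalent of the shell/Perl pipeline above: average the
# per-record difference (adjusted - raw) by year, in °C. Assumes the GHCN v2
# fixed-width format implied by the Perl substr() calls.
from collections import defaultdict

def read_monthly(path):
    """Map (station_id, year, month) -> monthly mean in tenths of °C."""
    records = {}
    with open(path) as f:
        for line in f:
            sid, year = line[0:12], int(line[12:16])
            if not sid.startswith("425"):   # country code 425 = United States
                continue
            for m in range(12):
                v = int(line[16 + 5 * m : 21 + 5 * m])
                if v != -9999:              # skip the missing-value code
                    # NB: the original checked key uniqueness with uniq -d;
                    # a duplicate key would silently overwrite here.
                    records[(sid, year, m + 1)] = v
    return records

raw = read_monthly("v2.mean")
adj = read_monthly("v2.mean_adj")

# Sum (adjusted - raw) per year over the records present in both files.
totals, counts = defaultdict(float), defaultdict(int)
for key, v in raw.items():
    if key in adj:
        totals[key[1]] += adj[key] - v
        counts[key[1]] += 1

for year in sorted(totals):
    # Divide by 10 to convert tenths of °C to °C, as in the Perl version.
    print(f"{year}\t{totals[year] / counts[year] / 10:.3f}")
```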