Global warming conspiracy theorist zombies devour Telegraph and Fox News brains
Posted on 25 June 2014 by dana1981
A long-debunked myth is amplified by the conservative media echo chamber from a fringe science-denying blog to The Telegraph and Fox News
Global warming myths can never be permanently killed. Once debunked, a climate myth will go into a state of hibernation, waiting for enough time to pass that people forget the last time a scientific stake was thrust through its heart. The myth will eventually rise from the grave once again, seeking out victims with tasty, underutilized brains to devour – every zombie’s favorite meal.
And so we have the long-debunked conspiracy theorist myth that scientists are falsifying temperature data to conjure global warming and frighten the masses. The story goes that in the raw temperature data from the continental USA, the hottest year on record is 1934. In the data adjusted by scientists, 1998 was the hottest year on record in the USA, until that record was broken in 2006 and then shattered in 2012 (1934 comes in 4th). The raw data are the gold standard, so this proves that climate scientists are falsifying data, right?
Wrong. Really, about as wrong as humanly possible. Scientists make adjustments to the raw data to remove factors that we know introduce biases and false trends. For example, temperature station readings were once taken and recorded in the afternoon, but the observations later switched to the morning. Since mornings are colder than afternoons, that change introduced a cool bias into the raw data. Other factors that require adjustments include changes in temperature monitoring instruments and the movement of temperature stations from one location to another.
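To see how a change in observation time can imprint a false trend, here is a minimal synthetic sketch (all numbers are invented for illustration, not actual USHCN values): a hypothetical station that switches from afternoon to morning readings in 1950 picks up a spurious cooling step, which a simple bias correction removes.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
true_temp = 10.0 + 0.01 * (years - 1900)   # invented "true" warming: +0.01 C/yr

# Afternoon readings run warm relative to the daily mean, morning readings
# run cool; this hypothetical station switches observation time in 1950.
bias = np.where(years < 1950, +0.5, -0.3)
raw = true_temp + bias + rng.normal(0.0, 0.2, years.size)

# The switch imprints a spurious ~0.8 C cooling step on the raw record;
# removing the known bias recovers the underlying trend.
adjusted = raw - bias
print(f"raw trend:      {np.polyfit(years, raw, 1)[0]:+.4f} C/yr")
print(f"adjusted trend: {np.polyfit(years, adjusted, 1)[0]:+.4f} C/yr")
```

In this toy example the raw record shows essentially no warming at all, purely because of the observation-time switch; the adjusted record recovers the trend that was actually put in.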
Scientists have put substantial effort into accounting for all these changes that introduce known biases in the raw data across thousands of temperature monitoring stations. It’s not a vast conspiracy to trick us into buying solar panels; it’s just good science, which even contrarian climate scientists don’t dispute. In fact, when we compare raw and adjusted temperature data across the surface of the whole planet, the difference between the two is barely noticeable.
Yet we suddenly have Christopher Booker at The Telegraph and Fox News' Sean Hannity claiming that climate scientists are ‘fabricating’ and ‘manipulating’ the raw temperature data, despite the fact that even most contrarian bloggers see right through this myth.
So what happened? Shauna Theel at Media Matters documented the path that this zombie myth took, starting from a fringe denialist blog, making its way up the conservative media echo chamber ladder until it reached media outlets like The Telegraph and Fox News that purportedly care about factually accurate reporting.
Tony Heller, a birther who criticizes climate science under the pseudonym "Steven Goddard," wrote a blog post that claimed "NASA cooled 1934 and warmed 1998, to make 1998 the hottest year in US history instead of 1934." After the Drudge Report promoted a report of this allegation by the conservative British newspaper The Telegraph, conservative media from Breitbart to The Washington Times claimed the data was "fabricated" or "faked." On June 24, Fox & Friends picked it up, claiming that "the U.S. has actually been cooling since the 1930s" but scientists had "faked the numbers".
Let’s take a few more shots at this zombie myth to see if we can send it back to its grave, temporarily.
- Scientists make adjustments to remove known biases in the raw data. Average surface temperatures over the continental USA have indeed warmed by more than 1°F since the 1930s.
- Even if you don’t trust those adjustments, raw and adjusted global temperature data are nearly identical. The USA represents less than 2% of the Earth’s surface.
- On top of that, the warming of the atmosphere only accounts for about 2% of the warming of the planet as a whole. The oceans account for more than 90% of global warming, and have been heating up at a rate equivalent to 2 Hiroshima atomic bomb detonations per second since the 1950s, and 4 per second over the past decade.
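The bomb comparison is straightforward to check. Here is a rough back-of-envelope, using reference figures that are assumptions on my part rather than numbers from the post (Hiroshima yield ~6.3×10^13 J, Earth's surface ~5.1×10^14 m², and ocean heat uptake of roughly 0.25 W/m² averaged since the 1950s and roughly 0.5 W/m² over the past decade):

```python
# Back-of-envelope check of the "Hiroshima bombs per second" comparison.
HIROSHIMA_J = 6.3e13     # approximate Hiroshima yield in joules (~15 kt TNT)
EARTH_AREA_M2 = 5.1e14   # total surface area of the Earth, m^2

for label, uptake_w_m2 in [("since the 1950s", 0.25), ("past decade", 0.5)]:
    total_watts = uptake_w_m2 * EARTH_AREA_M2        # global heat uptake, J/s
    bombs_per_second = total_watts / HIROSHIMA_J
    print(f"{label}: {bombs_per_second:.1f} Hiroshima detonations per second")
```

With those inputs the arithmetic comes out at about 2 detonations per second since the 1950s and about 4 per second over the past decade, matching the figures above.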
Thank you for this post. A climate-denying friend of mine recently sent me the Telegraph article as evidence against global warming. With a quick Google search, it was clear that Tony Heller and Christopher Booker were fringe, anti-science bloggers, but I didn't have a good direct argument against the evidence they submitted. This post helped with that. Thumbs up guys!
As I once commented on a contrarian site, in a blog post decrying temperature corrections: when the corrections are removing errors and increasing the accuracy of your data, the contrarian preference for raw data is a choice for inaccuracy. And decrying those corrections says "conspiracy theorist" in large type. The blog authors were, predictably, displeased by that comment.
Nice one, Dana. Added this as a rebuttal to both articles in rbutr. One of the others added is from Monbiot, who wrote about Booker's inability to get anything right.
The reemergence of this meme was to be expected: the argument that "there is a pause" cannot be sustained any more, so they have to attack the record or say "it's natural variability because of El Niño."
Goddard chose the former, because he is deeply entrenched in conspiracy theorizing. Expect other sites to choose the latter (Curry, for example).
Note that the biggest upward adjustment appears to be - you guessed it - 1998, i.e. the adjustments have contributed to the apparent hiatus. That's partly an optical illusion due to 1998 being an extremum, but Zeke's land-only graph of the differences here suggests that it is genuine.
It may be worth a look to see where the differences are coming from geographically, and to see what a comparison with the more complete Berkeley and Hadley data shows.
KR, the 'skeptic' insistence on using faulty data is a widespread phenomenon;
'Mann should have factored in the erroneous tree ring proxy data after the divergence period!'
'There were other tree ring data sets near Yamal which were not selected for sensitivity to temperature changes, and thus show wildly inaccurate results if improperly used as temperature proxies... those should have been factored in to temperature anomaly data series!'
'The XBT network had problems with some probes incorrectly determining depth and thus skewing temperature results... those incorrect values should be included in ocean temperature change analysis!'
'Guy Callendar excluded CO2 readings taken outside sources of major emissions from his analysis of atmospheric CO2 changes over time! Fraud! The massively inflated local readings must be included!'
Et cetera. The same crazy argument comes up over and over again: if we just included enough provably erroneous data, this whole global warming thing would go away.
The figure in the post is somewhat out-of-date. The difference between GHCN v3.2.2 raw and adjusted is shown here: LINK
Changes in version 3.2 significantly increased the rate of breakpoint detection in the PHA, leading to an increase in the number of corrected inhomogeneities.
[RH] Shortened link that was breaking page format.
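For readers wondering what "breakpoint detection" means in practice: the real pairwise homogenization algorithm (PHA) uses many pairwise station comparisons and formal significance tests, but the core idea can be sketched in a few lines. Everything below (step size, noise levels, detection rule) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # months of synthetic data

# Target station shares regional weather with its neighbors but has an
# undocumented -0.6 C step at index 60 (say, a station move).
regional = rng.normal(0.0, 0.5, n)
target = regional + rng.normal(0.0, 0.1, n)
target[60:] -= 0.6
neighbors = regional + rng.normal(0.0, 0.05, n)  # neighbor composite, no step

# Differencing against neighbors cancels the shared weather, leaving the
# inhomogeneity exposed as a step in an otherwise flat series.
diff = target - neighbors

# Crude detector: pick the split that maximizes the jump in means
# (a stand-in for the statistical tests the real PHA applies).
candidates = range(12, n - 12)
k = max(candidates, key=lambda i: abs(diff[:i].mean() - diff[i:].mean()))
step = diff[k:].mean() - diff[:k].mean()
print(f"breakpoint near index {k}, estimated step {step:+.2f} C")
```

The point of the difference series is that shared weather cancels out, so even a step much smaller than year-to-year variability stands out clearly against the neighbors.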
The so-called "pause" that the pseudo-skeptics keep banging on about is the biggest evidence that the temperature data are not faked. If the climate scientists were really just making up the temperature numbers, why on earth would they include so much variability that the pseudo-skeptics could claim global warming stopped in 1998? Of course, pseudo-skeptics aren't really bothered much about mutually exclusive arguments.
For what it's worth, the BEST project finds a genuine warming trend in US data, just as NOAA does.
The claim by Steve Goddard that 40% of the dataset is estimated (denoted by an E on each datapoint) is interesting. The great thing is that all of us can easily experiment with the datasets (current and historical).
I understand scientists need to adjust data for bias, but this post didn't do a lot to educate on:
- Why so many adjustments? Why adjust so often? When will the need for adjustments end?
- How many times does/did a single datapoint get adjusted?
- Is there a change log for each adjusted datapoint?
- Is there a changelog between each published dataset which tells how many datapoints were adjusted?
In general this post is a good read. But I found this a bit hyperbolic: "The USA represents less than 2% of the Earth's surface." This is a quasi-marketing statement designed to contrast 98% vs. 2% and make readers jump to conclusions.
Besides that, Steve Goddard's accusation of data tampering is about a land-based temperature network.
The USA may be less than 2% of the Earth's surface. However, it is 6.26% of total land area and ranks 4th of 256 countries. (Russia, Antarctica and China are bigger.)
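Both percentages are easy to verify. Here is a quick check using round reference areas that are my assumptions, not figures from the comment (counting only US land area, rather than land plus inland water, brings the second figure closer to the 6.26% quoted above):

```python
# Round reference areas in km^2 (assumptions for this check):
USA_KM2 = 9.8e6            # total US area, land plus inland water
EARTH_SURFACE_KM2 = 5.1e8  # total surface area of the Earth
EARTH_LAND_KM2 = 1.49e8    # land area of the Earth

print(f"USA / total surface: {100 * USA_KM2 / EARTH_SURFACE_KM2:.1f}%")  # ~1.9%
print(f"USA / land area:     {100 * USA_KM2 / EARTH_LAND_KM2:.1f}%")     # ~6.6%
```

Both figures check out; which one matters depends on whether you are asking about the US land network or about the global record.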
Did you look at the detailed explanation linked to in the article? At the bottom there is also further reading. Why adjust? Well, to take just one adjustment as an example: would you consider it valid to compare temperatures measured in the afternoon (past practice) with measurements taken in the morning (modern practice)? How about when a Stevenson screen was added to a station? Or a station moved? The science is trying to construct the best possible record of past temperature change from what data is available, with all its flaws. Methods for detecting problems with station records, and methods for correcting those problems, are evolving all the time. You would expect, then, to see them applied to the problem of extracting historical temperatures. The exact methodology is documented in published papers and, as the article I linked to shows, it has been reproduced by many researchers (even ones sure that their superior methods would show reduced warming, like the BEST group).
truthbtold @10:
1) When economists try to compare economic conditions between different years, they try to eliminate the effects of inflation to determine the real changes in economic activity. When they do so, they state the figures in "real dollars" relative to the most recent year under consideration. They do that because those are the terms that make sense for the people making the comparison. Likewise, in temperature series the adjustments are made relative to the most recent temperature record. For that reason, any time adjustments are made, they are made to past years rather than to the most recent record. That means any time an error is found in previous adjustment procedures, past years will be adjusted again; and they will only cease to be adjusted once the temperature record is demonstrably perfect. (A concrete sketch of this convention follows these numbered points.)
2) We do not have a temperature record using the same instruments, under the same conditions, at the same locations, using the same observation times and methods. Rather, all of those things have changed over time to a greater or lesser extent, except for (in the US) a recently installed set of temperature stations (the Climate Reference Network). Our knowledge of the causes and effects of these changes is not perfect, and is revisited by scientists in order to improve the temperature record; whenever that knowledge is improved, a further adjustment is in order.
3) So-called "climate skeptics" have a very one-sided view of climate adjustments, only being worried about adjustments that run counter to their narrative. The most telling example of this is their willingness to accept the UAH temperature series, which derives tropospheric temperatures from microwave emissions from the atmosphere. That series requires far more, and more complicated, adjustments than does the surface temperature record, but so-called "climate skeptics" accept it without batting an eyelid, and in preference to more straightforward measures.
4) A range of other measurements show that the temperature record after adjustment better reflects actual temperatures than the unadjusted record does. In Maine, for example, there is a record of the first ice-free day (the ice-out day) for a number of lakes. Bear in mind that the unadjusted temperature series for the contiguous US shows 1940s temperatures equivalent to those of the last decade; that is inconsistent with the ice-out data, both for individual lakes and for all lakes in Maine (smoothed).
Similar records exist for the Great Lakes and for other natural climate indicators. These records are not, of course, able to tell us the validity of individual adjustments, but they do show that the general tendency of the adjustments is to make the temperature record more accurate. Further, related but distinct instrumental records also show the same patterns as the surface temperature record, as has been shown recently for global data by the UK Met Office.
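To make point 1) concrete, here is a minimal sketch of the convention Tom describes: when a step is corrected, the shift is applied to everything before the breakpoint, so the most recent data stay exactly as recorded. The series, breakpoint, and step size below are all invented.

```python
import numpy as np

# Invented series with a known +0.4 C instrument step at index 80.
series = np.concatenate([np.full(80, 14.0), np.full(40, 14.4)])

def adjust_before_breakpoint(data, breakpoint, step):
    """Shift everything BEFORE the breakpoint onto the current basis,
    leaving the most recent data exactly as recorded."""
    out = data.copy()
    out[:breakpoint] += step
    return out

adjusted = adjust_before_breakpoint(series, breakpoint=80, step=0.4)
assert np.allclose(adjusted[80:], series[80:])  # recent record untouched
print(adjusted[0], adjusted[-1])                # past lifted to the current basis
```

This is also why past years get re-adjusted whenever the method improves: the whole pre-breakpoint segment is restated relative to the present, just as economists restate past prices in current dollars.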
Part of what is causing confusion is, I think, this graph from the NCDC discussion on USHCN adjustments. Can someone please explain why the adjustment is increasing over time and why the increase looks so much like the overall temperature increase? I can't understand the reasons based on the discussion on that page or elsewhere. Thanks.
DB - I would suggest reading Surface Temperature Measurements: Time of observation bias and its correction. This issue is primarily seen in the US, as volunteer temperature measurements have shifted to a different time of day over the years, station by station, hence an increasing TOBS correction. Other countries don't rely as much on volunteers. Other items in the correction list include the progressive station change from mercury to electronic thermometers, which read a bit cooler.
The TOBS correction is discussed in detail by Karl et al. 1986, referenced on the very USHCN page you linked to. See the preceding graph of the effect of individual adjustments - the sum of these is an increasing adjustment over time, quite justified by demonstrable bias changes across the US historical record.
Note that these adjustments are fairly small compared to overall temperature changes in the last century, affect only the US, and are lost in the noise for the global temperature record. Pseudo-skeptics harp on the shape of the adjustments, but fail to point out that they really make no significant difference in our understanding of global climate change.
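As a final illustration of why the net adjustment curve grows over time, here is a toy sketch in which three individually modest corrections, shaped loosely like the components KR mentions (the numbers are invented, not actual USHCN values), sum to a net adjustment that ramps upward across the century:

```python
import numpy as np

years = np.arange(1900, 2015)

# Invented component corrections (degrees C, positive = raw data read too cool):
tobs = 0.3 * np.clip((years - 1940) / 60.0, 0.0, 1.0)  # stations drift to morning readings
instrument = np.where(years >= 1985, 0.1, 0.0)         # mercury-to-electronic step
moves = np.full(years.size, 0.05)                      # small, roughly constant

net = tobs + instrument + moves
print(f"net adjustment in 1900: {net[0]:+.2f} C, in 2014: {net[-1]:+.2f} C")
```

The net curve rises simply because the biases being removed accumulated over time, not because anyone tuned it to match the temperature trend.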