


Are surface temperature records reliable?

What the science says...


The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.

Climate Myth...

Temp record is unreliable

"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)

At a glance

It's important to understand one thing above all: the vast majority of climate change denialism does not occur in the world of science but on the internet, and specifically in the blog-world, where anyone can blog or open a social media account and say whatever they want. And they do. We all saw plenty of that during the Covid-19 pandemic, which seemed to offer an open invitation to step up and proclaim, "I know better than all those scientists!"

A few years ago in the USA, an online project was launched in which participants took photos of American weather stations. The idea was to draw attention to stations thought to be badly sited for the purpose of recording temperature, the logic being that if temperature records from a number of U.S. sites could be discredited, then global warming could be declared a hoax. Never mind that the U.S. is a relatively small portion of the Earth's surface. And what about all the other indicators pointing firmly at warming? Huge reductions in sea ice, poleward migrations of many species, retreating glaciers, rising seas - that sort of thing. None of these things apparently mattered if part of the picture could be shown to be flawed.

But they forgot one thing. Professional climate scientists already knew a great deal about things that can cause outliers in temperature datasets. One example will suffice. When compiling temperature records, NASA's Goddard Institute for Space Studies goes to great pains to remove any possible influence from things like the urban heat island effect. That effect describes the fact that densely built-up parts of cities are likely to be a bit warmer due to all of that human activity.

To do this, they take the urban temperature trends and compare them to the rural trends of the surrounding countryside. They then adjust the urban trend so it matches the rural trend, thereby removing the urban effect. This is not 'tampering' with data: it is a tried and tested method of removing local outliers from regional trends to get more realistic results.
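
To make the method concrete, here is a minimal sketch in Python (illustrative only: the station series below are invented and this is not GISS's actual code) of how an urban record can be adjusted so that its long-term trend matches that of its rural surroundings:

    import numpy as np

    years = np.arange(1950, 2021)
    rng = np.random.default_rng(0)
    # Hypothetical series: the urban station warms faster than its rural neighbours.
    rural = 0.015 * (years - 1950) + rng.normal(0, 0.1, years.size)
    urban = 0.025 * (years - 1950) + rng.normal(0, 0.1, years.size)

    rural_slope = np.polyfit(years, rural, 1)[0]
    urban_slope = np.polyfit(years, urban, 1)[0]

    # Subtract the excess urban trend so the adjusted record matches the rural trend.
    urban_adjusted = urban - (urban_slope - rural_slope) * (years - years[0])

The adjusted urban series keeps its year-to-year variability but no longer carries the extra warming trend, which is exactly the kind of local outlier removal described above.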

As this methodology was being developed, some findings were surprising at first glance. Often, the excess urban warming was small. Even more surprisingly, a significant number of urban trends were cooler than those of their rural surroundings. But that's because weather stations are often sited in relatively cool areas within a city, such as parks.

Finally, independent analyses of global temperature datasets have produced results very similar to NASA's. The 'Berkeley Earth Surface Temperature' (BEST) study, carried out at the University of California starting in 2010, is a well-known example. The physicist who initiated that study was formerly a climate change skeptic. Not so much now!



Further details

Temperature data are essential for predicting the weather and recording climate trends. So organisations like the U.S. National Weather Service, and indeed every national weather service around the world, require temperatures to be measured as accurately as possible. To understand climate change we also need to be sure we can trust historical measurements.

Surface temperature measurements are collected from more than 30,000 stations around the world (Rennie et al. 2014). About 7000 of these have long, consistent monthly records. As technology gets better, stations are updated with newer equipment. When equipment is updated or stations are moved, the new data are compared to the old record to be sure measurements are consistent over time.
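
As a rough illustration of that consistency check (the numbers are invented, and operational homogenisation algorithms are considerably more involved), an overlap period in which both instruments run side by side can be used to estimate and remove a systematic offset:

    import numpy as np

    # Monthly means from an overlap period when old and new equipment ran together.
    old_sensor = np.array([14.2, 15.1, 16.0, 15.6, 14.9])
    new_sensor = np.array([14.6, 15.5, 16.3, 16.0, 15.3])

    # Estimate the systematic difference introduced by the equipment change...
    offset = np.mean(new_sensor - old_sensor)

    # ...and shift later readings from the new instrument back onto the old baseline.
    homogenised_reading = 16.8 - offset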


Figure 1. Station locations with at least 1 month of data in the monthly Global Historical Climatology Network (GHCN-M). This set of 7280 stations is used in the global land surface databank. (Rennie et al. 2014)

In 2009 allegations were made in the blogosphere that weather stations placed in what some thought to be 'poor' locations could make the temperature record unreliable (and therefore, in certain minds, global warming would be shown to be a flawed concept). Scientists at the National Climatic Data Center took those allegations very seriously. They undertook a careful study of the possible problem and published the results in 2010. The paper, "On the reliability of the U.S. surface temperature record" (Menne et al. 2010), had an interesting conclusion. The temperatures from stations that the self-appointed critics claimed were "poorly sited" actually showed slightly cooler maximum daily temperatures compared to the average.

Around the same time, a physicist who was originally hostile to the concept of anthropogenic global warming, Dr. Richard Muller, decided to do his own temperature analysis. This proposal was loudly cheered in certain sections of the blogosphere where it was assumed the work would, wait for it, disprove global warming.

To undertake the work, Muller organized a group called Berkeley Earth to do an independent study (the Berkeley Earth Surface Temperature study, or BEST) of the temperature record. They specifically wanted to answer the question, "Is the temperature rise on land improperly affected by the four key biases (station quality, homogenization, urban heat island, and station selection)?" The BEST project had the goal of merging all of the world's temperature data sets into a common data set. It was a huge challenge.

Their eventual conclusions, after much hard analytical toil, were as follows:

1) The accuracy of the land surface temperature record was confirmed;

2) The BEST study used more data than previous studies but came to essentially the same conclusion;

3) The influence of the urban stations on the global record is very small and, if present at all, is biased on the cool side.

Muller commented: “I was not expecting this, but as a scientist, I feel it is my duty to let the evidence change my mind.” On that, certain parts of the blogosphere went into a state of meltdown. The lesson to be learned from such goings on is, “be careful what you wish for”. Presuming that improving temperature records will remove or significantly lower the global warming signal is not the wisest of things to do.

The BEST conclusions about the urban heat effect were nicely explained by our late colleague, Andy Skuce, in a post here at Skeptical Science in 2011. Figure 2 shows BEST plotted against several other major global temperature datasets. There may be some disagreement between individual datasets, especially towards the start of the record in the 19th Century, but the trends are all unequivocally the same.


Figure 2. Comparison of spatially gridded minimum temperatures for U.S. Historical Climatology Network (USHCN) data adjusted for time-of-day (TOB) only, and selected for rural or urban neighborhoods after homogenization to remove biases. (Hausfather et al. 2013)

Finally, temperatures measured on land are only one part of understanding the climate. We track many indicators of climate change to get the big picture. All indicators point to the same conclusion: the global temperature is increasing.


 

See also

Understanding adjustments to temperature data, Zeke Hausfather

Explainer: How data adjustments affect global temperature records, Zeke Hausfather

Time-of-observation Bias, John Hartz

Berkeley Earth Surface Temperature Study: “The effect of urban heating on the global trends is nearly negligible,” Andy Skuce

Check original data

All the Berkeley Earth data and analyses are available online at http://berkeleyearth.org/data/.

Plot your own temperature trends with Kevin's calculator.

Or plot the differences with rural, urban, or selected regions with another calculator by Kevin.

NASA GISS Surface Temperature Analysis (GISTEMP) describes how NASA handles the urban heat effect and links to current data.

NOAA Global Historical Climatology Network (GHCN) Daily contains records from over 100,000 stations in 180 countries and territories.
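
For anyone downloading GHCN-Daily directly, the records are fixed-width text, one station-month of one element per line. Here is a small Python parsing sketch; the field positions follow NOAA's readme.txt for the ".dly" format, but verify them against the current documentation before relying on this:

    def parse_dly_line(line):
        # ID, year, month and element occupy fixed columns at the start of the line.
        station = line[0:11]
        year = int(line[11:15])
        month = int(line[15:17])
        element = line[17:21]            # e.g. TMAX, TMIN, PRCP
        values = []
        for day in range(31):
            start = 21 + day * 8         # 5-char value followed by 3 flag characters
            v = int(line[start:start + 5])
            values.append(None if v == -9999 else v / 10.0)  # stored in tenths
        return station, year, month, element, values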

Last updated on 27 May 2023 by John Mason.




Further reading

Denial101x video

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Additional video from the MOOC

Kevin Cowtan: Heat in the city

Comments


Comments 51 to 75 out of 529:

  51. Gavin Schmidt has a brief response to the bizarre claims from Smith & Coleman that the "real" temperature stations' data have been replaced by averages of unrepresentative stations' data, and that data have been destroyed. Gavin's response is to Leanan's comment #9 on 17 January 2010 in the comments on the RealClimate post 2009 Temperatures by Jim Hansen.
  52. #53 - have a look at http://www.uoguelph.ca/~rmckitri/research/nvst.html which graphs temp against station numbers; you can also access the University of Delaware mpeg file which animates the global station numbers from 1950 - 1999. Watch China & the Soviet Union. If you really want to spend the time, go to GISS and check out the temp graphs for stations in the SU... you will find 'most' of them stopped sending data after 1990.
  53. Re: #72 Jeff Freymueller at 17:00 PM on 27 February, 2010 (in: Senator Inhofe's attempt to distract us from the scientific realities of global warming) http://skepticalscience.com/news.php?p=2&t=76&&n=147#9477 "Anyone can download the original data and reanalyze it" Jeff, I am trying to do that, but the only source I know of is GHCN-Monthly Version 2 at the NCDC site: http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php Do you know a better source? If so, a pointer is welcome. For this particular dataset is a complete mess. It is dominated by USHCN, poorly documented, metadata are insufficient, the adjustment procedure is arbitrary, coverage AFTER 1990 is deteriorating rapidly. Look into it and you'll see. In a post deleted by John I have provided some details. Suffice it to say the NCDC adjustment algorithm has at least four outstanding break points at 1905, 1920, 1990 & 2006. On average for the 1920-1990 period they have applied a 0.36°C/century warming adjustment to the entire dataset. It's essentially the same for sites flagged "rural" and the rest (urban & suburban), no statistically significant difference in adjustment slopes (based on a counterfactual assumption of no urbanization in this period perhaps). The dataset does not meet any reasonable open source standard. Still, NCDC at its site says it was "employed" in the IPCC AR4 20th century temperature reconstruction. http://ber.parawag.net/images/GHCN_adjustments.jpg Looks like it is high time for a transparent open source community project to recollect worldwide temperature histories along with site assessments and ample metadata.
  54. #57, Berényi Péter The details are outside my specialty so I can offer only very limited help here. I went to the ftp site and poked around for a minute or two and found this readme file: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt The ftp site looks like it is designed to make it simple for people to write automated scripts to grab all the data or updates, which is what I would be doing if I used this data. I do know from reading other blogs that the raw data is also available in addition to adjusted data, so you will have to poke around a bit, or send a question to the email address in the readme file if you can't find what you need after reading the documentation. GISS has the source code for its software online, so you can look into that for examples of reading the files and so on.
  55. Thanks, Jeff. It's GHCN-DAILY Version 2.1, I'll look into it. $ wget -r ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/
  56. We can add to "other lines of evidence for rising temperatures" also indirect evidence you mentioned elsewhere:
      - Greenland and Antarctica show net ice loss
      - Acceleration of glaciers in Greenland and Antarctica, particularly within the last few years
      - Sea-ice loss in the Arctic is dramatically accelerating
      - Accelerating decline of glaciers throughout the world
      - Rapid expansion of thermokarst lakes throughout parts of Siberia, Canada and Alaska
      - Disintegration of permafrost coastlines in the arctic
      - Poleward migration of species
      - Poleward movement of the jet streams (Archer 2008, Seidel 2007, Fu 2006)
      - Widening of the tropical belt
  57. Some recent analysis of USA surface temperatures by Dr. Roy Spencer, 16th March 2010 (http://www.drroyspencer.com/category/blogarticle/), suggests that there is sufficient doubt, and perhaps significant differences to be found when closely examining the published data, to warrant closer examination in order to accurately quantify the UHI effect, and how it impacts both the accepted trends and other data that were calibrated against accepted surface measurements.
  58. johnd, it's always a good thing when other scientists come out with a different analysis. But it needs to be done at the same quality level. The dataset Spencer uses did not go through the same quality control as GHCN; also, the data are not homogenized. Spencer corrected the raw data just for altitude and checked for water coverage. Given that similar and more comprehensive analysis of the link between population and UHI has already been performed and accounted for, I'd be more careful before claiming that "there is sufficient doubt". There are other things that I think need to be clarified. For example, Spencer found large UHI warming-population density differences for different years, which I find hard to explain. Even larger differences are found between the USA and the rest of the world. Also, there's a sharp increase in the warming bias already between population densities of 5 and 20 per km2, which again I find hard to understand. And it's worth noticing that the whole claim is based on the data for population densities below 200 per km2, above which Spencer's results agree with CRUTem3. Spencer should also explain how satellite-based temperatures can be fooled by population density. One last remark: the ISH dataset is released by NOAA, which uses the GHCN dataset for its analysis of temperature trends, I'm sure for good reasons.
  59. Riccardo, Spencer wasn't presenting his analysis as complete, but believed the differences he found are sufficient to justify a more complete independent analysis. Given that there are few stations where one can be confident of the data not being biased by the UHI effect, perhaps it does warrant careful analysis. Aren't satellite-based temperature measurement instruments calibrated against "known" conventional temperature measurements? If not, what are they calibrated against? The accuracy of satellite measurements, despite the sophisticated instrumentation, will only be as accurate as the standard used to calibrate them.
  60. johnd, I know what Spencer and you think. I was giving reasons to think differently.
  61. johnd writes: Aren't satellite-based temperature measurement instruments calibrated against "known" conventional temperature measurements? If not, what are they calibrated against? The accuracy of satellite measurements, despite the sophisticated instrumentation, will only be as accurate as the standard used to calibrate them. The AMSU temperature measurements are calibrated against two targets. There's a "hot" target located on the satellite itself (whose temperature is directly monitored using high-precision platinum resistance thermometers). For a "cold" target the sensor turns to measure the cosmic background radiation in open space (a very cold 3K). Real earth temperatures will fall between these two targets. (A schematic of this two-point calibration is sketched after this comment thread.) The close agreement between satellite and surface temperatures is a bit of a problem for those skeptics who believe that the surface record is hopelessly contaminated by UHI effects. I've seen many commenters on other sites try to reconcile this by assuming that the satellite record is somehow "tuned" to match the surface trend, or surface stations are used to "calibrate" the AMSU satellite temperatures. But no such adjustment is actually used, and the close agreement between satellite and surface temperatures is real.
  62. #61 Ned at 00:20 AM on 5 April, 2010 The close agreement between satellite and surface temperatures is a bit of a problem for those skeptics who believe that the surface record is hopelessly contaminated by UHI effects Ned, the problem with satellite "temperatures" is that satellites do not measure temperature, not even color temperature, but for a specific layer of atmosphere (e.g. lower troposphere) brightness temperature is measured in a single narrow microwave band. This measurement may be accurate and precise, but it is insufficient in itself to recover proper atmospheric temperatures. In order to make that transition, you need an atmospheric model. With the model atmosphere you can calculate the brightness temperature backwards and tune parameters until a match is accomplished with the satellite brightness temperature data. Then you can look at the lower troposphere temperature of the model and call it temperature. However, with no further assumptions, the relation is not reversible, i.e. many different atmospheric states yield the same brightness temperature as seen from above. The very assumptions in the model that make reverse calculations possible are the hidden backlink to surface temperature data. For there is no other way to verify model reliability than to compare it to actual in situ measurements. Therefore if the surface temperature record is unreliable, so are the atmospheric models used to transform satellite measured brightness temperatures to atmospheric temperatures. That makes the whole satellite thing dependent on surface data, in spite of independent sensor calibration methods.
  63. BP #62, if what you say were true then would not the satellite and surface temperature results diverge more and more as time goes by? After all, your argument is essentially that the satellite temperature data was 'set' to conform to surface results (though in fact UAH originally came up with results significantly different from the surface results and only later came to line up after several errors were identified). However, now that those 'assumptions' needed to match the satellite record up are in place they are fixed. If the surface temperature continued to change, per the 'UHI error theory' for instance, then it should diverge from the satellite record which is still based on the assumptions needed to match up to the older temperatures. Yet we aren't seeing this sort of growing divergence. I believe that is because you are simply incorrect about the satellite record being deliberately 'set equal' to the surface record... as also demonstrated by the fact that they originally did not match and the primary adjustments made since then have had to do with correcting for sensor drift rather than baselining to the surface data.
  64. #63 CBDunkerson at 20:40 PM on 27 June, 2010 though in fact UAH originally came up with results significantly different from the surface results and only later came to line up after several errors were identified Yes. And the motivation for debugging was the discrepancy. But the thing about conversion of brightness temperatures to proper temperatures using an atmospheric model was just a guess, I have not looked into the issue deeply enough, yet. However, I am pretty sure the surface database is tampered with. I have downloaded both v2.mean.Z and v2.mean_adj.Z from the GHCN v2 ftp site. According to the readme file, data in the latter one are "adjusted to account for various non-climatic inhomogeneities". I then selected pairs of temperature values where, for a 12 character station ID (includes country code, nearest WMO station number, modifier and duplicate number), a specific year and month, both files contained valid temperatures (4,864,014 pairs for 1835-2010). For each pair I calculated the adjustment as the difference of the value found in v2.mean_adj and v2.mean. Having done that, I took the average of the adjustments for each year. It looks like this: It is really hard to come up with an error model that would justify this particular pattern of adjustments. One is inclined to think it's impossible. Note that for the last ninety years adjustments for various non-climatic inhomogeneities alone add about 0.26°C to the warming trend. If we also take into account the UHI effect, which is not adjusted for properly, not much warming is left. Without soot pollution on ice and snow, we probably would have severe cooling.
  65. BP - a script and post on this: do climatologists falsify data? I believe the papers used for homogenization are listed here: USHCN. Do you have a problem with the methodology used here?
  16. "without soot pollution on ice & snow" - you mean you think that you can explain warming in ocean, satellite, and surface record away as "anomalies" as poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sealevel rise as by soot-induced melting alone without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOS and at surface. That is drawing one very long bow, BP. You could be right but I will stick with the simpler explanation - we ARE warming and our emissions are the major cause of it.
  67. BP, what act of contrition will you offer should your remark of "tampered with" prove faulty? Perhaps a more careful choice of words would be better? Also, this isn't some kind of fad thing you're bringing from elsewhere, is it? I'm not being nasty, just bothered by words smacking of fraud and really bored with impressionist fads. As I've said before, you make an effort, but that makes it -more- disappointing when you succumb to the freshly-revealed-climate-science-conspiracy-of-the-week. Anyway, how about explicitly publishing your (admittedly simple sounding, but I'm a simpleton) arithmetic method you're using to produce your datapoints?
  68. I used to be impressed with the allowances you grant to BP and his wild (and long) meanderings laced with accusations (such as 'tampering') and insinuations, but it is starting to get very boring and frustrating. How many times can such accusations be allowed without proof, even if followed by apologies - although the apologies are never (as far as I can see) related to the accusations made, as can be seen from his 'apology' on the Ocean acidification thread, where he apologised for getting angry but not for the general accusations against 'climate science'.
  69. Not to swerve completely off-topic, JMurphy, but I'm not sure I can think of a single other skeptic I've witnessed actually admitting an error other than BP, here, though I can't remember exactly what it was about or where, just that it was striking in its very novelty. I'm bothered by the fraud thing, very much so, because it's hard to talk with somebody who starts with an assumption that data is cooked, and I have to wonder how virtually all of our instrumental records could be either hopelessly flawed or run by the Mafia; but on the other hand we're also sometimes treated to interesting little essays like this. I've spent (wasted, according to some people) a lot of time in the past 3 years hanging out on climate blogs and Berényi Péter is quite unlike any other doubter I've run across.
  70. doug_bostrom wrote: "...but on the other hand we're also sometimes treated to interesting little essays like this". And to add one last comment on this diversion: that comment just proves my point. Anywhere else you care to research the subject of the Dialogo, you will find Simplicio described as a combination of two contemporary philosophers: Cesare Cremonini, who famously refused to look through the telescope, and Ludovico delle Colombe, one of Galileo's main detractors. You will also find evidence of Galileo's good connections with Maffeo Barberini (later Urban VIII), who had written a poem in praise of Galileo's telescopic discoveries and actually agreed to the publication of the Dialogo. Why, then, would Simplicio be a parody of Urban? It's all part of a pattern: BP finds the evidence he likes and agrees with, and everything else (and everyone else) is wrong, fraudulent or part of the conspiracy.
  71. #65 scaddenp at 12:39 PM on 29 June, 2010 I believe the papers used for homogenization are listed here: USHCN. Do you have a problem with the methodology used here? According to the USHCN page you have linked they do adjustments in 6 steps. The first one "with the development at the NCDC of more sophisticated QC procedures [...] has been found to be unnecessary" Otherwise the procedure goes like this: RAW -> TOBS -> MMTS -> SHAP -> FILNET -> FINAL Proper process audit is impossible, because
    1. unified documentation of the procedure, including scientific justification and specification of the algorithms applied, is not available
    2. for steps 2, 3, 4 & 6 at least references to papers are provided; for step 5, not even that
    3. neither executables nor source code nor program documentation is provided for the programs TOBS, MMTS, SHAP & FILNET
    4. metadata used by the programs above to do their job is missing and/or unspecified
    5. a clear statement of whether the same automatic procedure was applied to GHCN v2, as hinted at on the USHCN Version 1 site, is missing (unless the arcane wording "GHCN Global Gridded Data" in the HTML header of that page counts)
    Well, I have found something not referenced on either the USHCN or GHCN pages. It is ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/USHCN_v52d.20100217.tar.gz. There is software there (written in Fortran 77) and some rather messy documentation, including an MS Word DOC file titled:
    USHCN Version 2.0 Update v1.0 Processing System Documentation (another version number here?) Draft August 46, 2009 Claude Williams Matthew Menne
    I do not know how authoritative it is. But I do know much better documentation is needed even on low budget projects, not to mention one that multi-thousand-billion-dollar policy decisions are supposed to be based on. The "Pairwise Homogeneity Algorithm (PHA)" promoted (but not specified) in this document is not referenced on any other USHCN or GHCN page. A Google search for "Pairwise Homogeneity Algorithm" site:gov returns empty. It would be a major job to do the usual software audit on this thing. One has to hire & pay people with the right expertise for it, then publish the report along with the data. However, any scientist would run away screaming upon seeing a calibration curve like this, wouldn't she? It is V shaped with clear trends and multiple step-like changes. One would think with 6736 stations spread all over the world and 176 years in time providing 4,864,014 individual data points, errors would be a little bit more independent, allowing for the central limit theorem to kick in. At least some very detailed explanation is needed of why there are unmistakable trends in adjustments commensurate with the effect to be uncovered, and why this trend has a steep downward slope for the first half of the epoch while just the opposite is true for the second half. BTW, the situation with USHCN is a little bit worse. The adjustment for 1934 is -0.465°C relative to those applied to 2007-2010 (like 0.6°C/century?). I'll post the USHCN graph later. #66 scaddenp at 12:39 PM on 29 June, 2010 you think that you can explain warming in the ocean, satellite, and surface records away as "anomalies" of poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sea level rise by soot-induced melting alone, without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOA and at the surface. That is drawing one very long bow One thing at a time, please. Let's focus on the problem at hand first; the rest can wait.
  72. BP #64 I suggest you have a look at what the Clear Climate Code project has to say about the GHCN data you've examined. Scientific code is almost never pretty - the goals are very different to what commercial programmers would expect, and technical debt accumulates at far faster rates compared even to poorly managed commercial projects. This is caused by the two camps having distinctly different goals (technical incompetence on the side of the scientists, and scientific incompetence on the side of the programmers, to be uncharitable).
  73. How depressing. In post #62 BP makes a whole series of very specific claims about the satellite temperature record, all of which are stated as factual with no qualifiers or caveats. Then, two posts later in #64, he casually mentions that those earlier statements were actually "just a guess, I have not looked into the issue deeply enough, yet." Then, to compound this, BP proceeds to claim to have discovered evidence of "tampering" (his own word) with the GHCN data set, based on comparing the raw and adjusted GHCN data sets using a naive, non-spatial averaging of all station data. I have pointed out to BP previously that you cannot compare the results of a simple global average of all stations to the gridded global temperature data sets because the stations are not distributed uniformly (see the weighted-averaging sketch after this comment thread). Given that many people have looked into this in vastly more detail than BP, and have done it right instead of doing it wrong, I cannot fathom why BP thinks his analysis adds any value, let alone why it would justify sweeping claims about "tampering". Here's a comparison of the gridded global land temperature trend, showing the negligible difference between GHCN raw and adjusted data: This is based on results from Zeke Hausfather, one of a large and growing number of people who have done independent, gridded analyses of global temperature using open-source data and software. BP claims that the GHCN adjustment added "0.26 C" to the warming trend over the last 90 years (is that 0.26 C per 90 years, or is it 0.26 C per century over the last 90 years? "0.26 C" is not a trend). Using a gridded analysis, the actual difference in the trends is 0.04 C/century over the last 90 years. Over the last 30 years, the difference in trend between the raw and adjusted data is 0.48 C/century ... with the adjusted trend being lower than the raw trend. In other words, the "tampering" that BP has detected is, over the past 30 years, reducing the magnitude of the warming trend. Then, of course, there's the issue that land is only about 30% of the earth's surface. Presumably the effect of any adjustment to the land data needs to be divided by 3.33 to compare its magnitude to the global temperature trend. Once again, BP has drawn extreme and completely unjustified conclusions ("tampering") based on a very weak analysis. Personally, I am getting really tired of seeing this here.
  74. #67 doug_bostrom at 12:44 PM on 29 June, 2010 how about explicitly publishing your (admittedly simple sounding but I'm a simpleton) arithmetic method you're using to produce your datapoints? Listen, I think the description of the procedure followed is clear enough; anyone can replicate it. I am not into "publishing" either, it is not my job. It is not science proper. That would require far more resources and time. I am just trying to show you the gaps where PhD candidates could find their treasure. The trails can be followed, and if anyone is concerned about it, things I write here are published under the GNU Free Documentation License. I have not written proper software either, just used quick-and-dirty one-liners in a terminal window. Anyway, here you go. This is what I did for USHCN, as recovered from the .bash_history file. [Oops. Pressed the wrong button first]
      # Fetch the raw and adjusted GHCN v2 monthly mean files.
      $ wget ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean*
      # Keep only USHCN stations (country code 425).
      $ grep '^425' v2.mean > ushcn.mean
      $ grep '^425' v2.mean_adj > ushcn.adj
      # Explode each station-year row into one line per month, dropping -9999 missing values.
      $ cat ushcn.mean|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.mean_monthly
      $ cat ushcn.adj|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.adj_monthly
      # Build sorted station_year_month ID lists, check them for duplicates, and keep IDs present in both files.
      $ cut -c-20 ushcn.mean_monthly | sort > ushcn.mean_monthly_id
      $ cut -c-20 ushcn.adj_monthly | sort > ushcn.adj_monthly_id
      $ uniq -d ushcn.mean_monthly_id
      $ uniq -d ushcn.adj_monthly_id
      $ sort ushcn.mean_monthly_id ushcn.adj_monthly_id | uniq -d > ushcn.common_monthly_id
      # Interleave raw (tag 0), adjusted (tag 1) and common-ID (tag 2) records, sorted by ID then tag;
      # where an ID appears in all three, emit (adjusted - raw).
      $ (sed -e 's/^/0 /g' ushcn.mean_monthly; sed -e 's/^/1 /g' ushcn.adj_monthly; sed -e 's/^/2 /g' ushcn.common_monthly_id;)|sort +1 -2 +0 -1 > ushcn.composite_list
      $ sed -e 's/ */ /g' ushcn.composite_list|perl -e 'while (<>) {chomp; ($i,$id,$t)=split; if ($i==2 && $id eq $iid && $id eq $iiid) {$d=$tt-$ttt; printf "%s %d\n",$id,$d;} $iiid=$iid; $iid=$id; $ttt=$tt; $tt=$t;}' > ushcn.adjustments_monthly_by_station
      # Strip the station ID and month, leaving (year, adjustment in tenths of a degree).
      $ sed -e 's/^............_//g' -e 's/_.. / /g' ushcn.adjustments_monthly_by_station | sort > ushcn.adjustments_annual_list
      # Sentinel record so the averaging loop below flushes its last year.
      $ echo '#' >> ushcn.adjustments_annual_list
      # Average the adjustments for each year (dividing by 10 converts tenths to °C).
      $ cat ushcn.adjustments_annual_list | perl -e 'while (<>) {chomp; ($d,$t)=split; if ($d ne $dd && $dd ne "") {$x/=$n*10; printf "%s\t%.3f\n",$dd,$x; $n=0; $x=0;} $n++; $x+=$t; $dd=$d;}' > ushcn.adjustments_annual.txt
      # Inspect the result in a spreadsheet.
      $ openoffice -calc ushcn.adjustments_annual.txt
  75. In my comment above, the link associated with the phrase "many people have looked into this in vastly more detail than BP" is sub-optimal. The link there is to a cached page at Google, when it should be to Zeke Hausfather's comparison of GHCN analyses.
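
Following up the two-point calibration described in comment 61, here is a schematic Python sketch (the instrument counts are invented, and operational AMSU processing adds further corrections, e.g. for nonlinearity, beyond this linear picture):

    def calibrate(counts, counts_cold, counts_hot, t_cold=2.73, t_hot=300.0):
        # Linear two-point calibration: instrument counts -> brightness temperature (K).
        gain = (t_hot - t_cold) / (counts_hot - counts_cold)
        return t_cold + gain * (counts - counts_cold)

    # Earth scenes fall between the cold-space view (~2.73 K) and the warm
    # on-board target, so they are interpolated between the two references.
    earth_tb = calibrate(counts=17000, counts_cold=4000, counts_hot=20000)  # ~244 K

Nothing in this procedure references surface station data, which is the point made in comments 61 and 63.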
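
And as referenced in comment 73, here is a compact sketch of why spatial weighting matters when averaging per-year (adjusted - raw) differences. Cosine-latitude weighting is used as a crude stand-in for proper gridding, and the records are placeholders rather than real GHCN values:

    import math
    from collections import defaultdict

    # (year, latitude, raw, adjusted) tuples, e.g. parsed from v2.mean and
    # v2.mean_adj as in comment 74; the values here are invented.
    records = [(1950, 40.0, 12.1, 12.0), (1950, 70.0, -5.0, -4.7),
               (1980, 40.0, 12.6, 12.6), (1980, 70.0, -4.5, -4.4)]

    totals = defaultdict(lambda: [0.0, 0.0])     # year -> [weighted diff, total weight]
    for year, lat, raw, adj in records:
        w = math.cos(math.radians(lat))          # crude area weight by latitude
        totals[year][0] += w * (adj - raw)
        totals[year][1] += w

    for year in sorted(totals):
        diff, weight = totals[year]
        print(year, round(diff / weight, 3))     # weighted mean adjustment per year

An unweighted mean over-counts regions dense with stations (as the contiguous U.S. is in GHCN), which is the core of the objection to the naive all-station average.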








© Copyright 2024 John Cook