Of Averages & Anomalies - Part 2B. More on why Surface Temperature records are more robust than we think

Posted on 5 June 2011 by Glenn Tamblyn

In Part 1A and Part 1B we looked at how surface temperature trends are calculated, the importance of using temperature anomalies as your starting point before doing any averaging, and why this makes our temperature record more robust.

In Part 2A and in this Part 2B we will look at a number of the claims made about ‘problems’ in the record, and how misperceptions about how the record is calculated can lead us to think that it is more fragile than it actually is. This should also be read in conjunction with earlier posts here at SkS on the evidence here, here & here that these ‘problems’ don’t have much impact. In this post I will focus on why they don’t have much impact.

If you hear a statement such as ‘They have dropped stations from cold locations, so the result now gives a false warming bias’ and your first reaction is, yes, that would have that effect, then please, if you haven’t done so already, go and read Part 1A and Part 1B, then come back here and continue.

Part 2A focused on issues of broader station location. Part 2B focuses on issues related to the immediate station locale.

Now to the issues. What are the possible problems?

Problems with ‘bad’ stations

One issue that has received considerable attention is the question of the ‘quality’ of surface observation stations, particularly in the US. How well do the stations in the observation network meet quality standards with respect to location and avoidance of local biasing issues, and how much might this affect the accuracy of the temperature record?

The upshot of investigations into this is that, at least in the US, a substantial proportion of stations have poor location quality ratings. However, analysis of the impact of the site quality problems by a number of independent analysts suggests that these problems have had almost no impact on the accuracy of the long term temperature record. How could this be? Surely that is the whole point of these quality rankings – poor quality sites can give bad results. So why wouldn’t they?

The definition of the best-quality sites, Category 1, is as follows:

“Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19º). Grass/low vegetation ground cover <10 centimeters high. Sensors located at least 100 meters from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots. Far from large bodies of water, except if it is representative of the area, and then located at least 100 meters away. No shading when the sun elevation >3 degrees.”

Down to Category 5: 

“(error ≥ 5ºC) - Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface”

Let's consider a few of these factors. And remember, we are interested in factors that have an impact on long-term changes in the temperature readings at a site. If a factor results in a bias in the reading but this bias does not change over time, it will not affect the analysis, since we are interested in changes – static biases cancel out in the analysis and have no long-term impact. Firstly, let's look at the standard enclosure used for a meteorological measurement station – the Stevenson Screen:
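To make that cancellation concrete, here is a minimal Python sketch with entirely made-up numbers (none of this is from the original post): adding a constant bias to every reading leaves the anomaly series unchanged.

```python
# A constant bias shifts every reading AND the baseline mean equally,
# so it cancels when anomalies are computed. All numbers are hypothetical.

def anomalies(readings, baseline_years):
    """Readings minus their mean over a baseline period."""
    baseline = sum(readings[y] for y in baseline_years) / len(baseline_years)
    return [t - baseline for t in readings]

# A made-up 'true' series warming by 0.02 degC per year for 30 years.
true_temps = [14.0 + 0.02 * year for year in range(30)]

# The same site with a constant +1.5 degC bias (say, a nearby parking lot).
biased_temps = [t + 1.5 for t in true_temps]

baseline = range(10)  # use the first 10 years as the baseline period
same = all(abs(a - b) < 1e-9
           for a, b in zip(anomalies(true_temps, baseline),
                           anomalies(biased_temps, baseline)))
print(same)  # True: the static bias has no effect on the anomalies
```

The absolute readings differ by 1.5 ºC throughout, yet once each series is expressed as anomalies against its own baseline the two are identical; only a bias that changes over time would alter the trend.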

Stevenson Screen


The screen is designed to isolate the instruments inside from outside influences, particularly rain and radiant effects from its surrounds. It is usually made from wood or a similar material that is a fairly good insulator and isn’t going to change temperature too much because of radiant heating/cooling from its surroundings. The double-slatted design suppresses air movement from wind through the enclosure, minimising wind chill effects and restricting rain entry onto the instruments. The double-slatted design also means that any air rising from beneath the enclosure isn’t being preferentially drawn into or out of the box. And the design of the base allows air movement from below while shielding the instruments from radiation from below.

So what are the problems that can drop the category of a temperature monitoring station below 1?

Slope > 19º

A problem may arise if the station isn’t located on sufficiently flat ground. This can produce temperature-driven air movements, possibly moving warmer air towards the station. However, unless there have been really major earthworks around the site, this factor doesn’t change over time and is unlikely to have a long-term changing impact.

Grass/low vegetation ground cover >10 centimeters high

This can impact on air movements around the station. Also, if the vegetation changes substantially – low grass to shrubs and trees - then this could change water evaporation rates around the station and alter air temperatures. Major increases in vegetation might have a cooling effect on the station due to evaporative effects, while declines in vegetation back to Category 1 standards might have a warming impact. However, unless there is a regular and progressive change in the vegetation pattern around the station, this would not produce an ongoing change of any bias. If maintenance of vegetation around the station over its lifetime has been poor or erratic, then the bias may fluctuate up and down. This would create shorter term fluctuations in the bias but this would tend to cancel out in the longer term.

Shading when the sun elevation >3 degrees

If the degree to which the station and its surrounds are shaded over the course of the day changes, this can alter local heating. Primarily this is going to impact as a result of shading causing differing heating/cooling of the ground under/around the enclosure, resulting in changes in the temperature and flow rate of rising air up through the enclosure. Unless the cause of the shading varies over long, multi-year time frames such as trees growing or buildings rising, the shading effect is not a long-term changing biasing factor. Depending on the cause of the shading, this may cause changes in the bias over the course of a day and over the seasons, but as a multi-year bias, this would remain constant.

Not far enough from large bodies of water

This too is a static bias. The body of water would have a cooling effect due to evaporation that would vary with daily weather conditions and the seasons but would not be a multi-year biasing factor.

Static artificial heating sources

Essentially, surfaces such as brick, concrete, bitumen, etc. that can act as local heat stores, storing more heat than normal grass-covered earth would, and then releasing that heat either radiantly or by heating the surrounding air. These can be vertical structures, horizontal surfaces away from the enclosure, or a horizontal surface beneath the enclosure. The enclosures are designed to minimise radiant heat penetration into the enclosure from its surrounds, so the major impact of such static heating sources is going to be from heating surrounding air which may then pass through the enclosure. This will be worst when such a surface is very close to the enclosure, particularly beneath it, generating rising warmer air into the box.

Also an important factor will be the extent to which any such surfaces tend to form a partial ‘room’ around the enclosure, restricting horizontal air movement. Any such surface will tend to heat the air near/above it, causing that air to rise. More air is then drawn in to replace this, potentially flowing over or through the enclosure. If the distances involved and the geometry of the site result in this new air being warmer than the general surroundings, this could provide a warming bias for the site. Conversely if this replacement air is being drawn from a location that isn’t warmer then there may be no bias at all, possibly even a cooling effect. Ambient winds may also blow warmed air towards, or away from enclosure, depending on wind direction. And the effect of any such bias will vary over the course of the day and the seasons.

However, since the main source of any such bias is the amount and layout of such surfaces and sunlight, these biases won’t change over multi-year time frames unless the area of the surfaces is changing. This could be due to construction, or changes in shading of these surfaces such as by trees growing or building construction nearby. And some of these shading changes could actually reduce the bias over time, resulting in a long-term cooling trend. Also to be considered is whether the site is included within a region that is or becomes urban, in which case the UHI adjustments mentioned previously may cancel out any bias completely. And we still have to allow for area weighting of data from such a site when averaged over the Earth's land surface. And this doesn’t affect the oceans at all.

Dynamic artificial heating sources

These are similar to the static surfaces, but they are things that actively pump heated air into the environment: air-conditioner condensers, exhaust fans, heater flues, cooling towers, vehicle exhausts, etc. As with the static sources, a key issue here is geometry. They are generating hot air which will tend to rise unless winds blow it towards the enclosure. Does any such device actively blow warm air towards the enclosure? Or does its operation tend to draw air in from elsewhere and over the enclosure? How distant is the device and what is the geometry?

Also, how long does the device operate: 24/7 or intermittently? A station may be next to a large car park, but unless there is continuous activity, even thousands of cars have no extra impact if they are all parked and empty. Does an air-conditioner run 24/7 or just 9-5 weekdays? Is it a reverse-cycle A/C unit also used for heating in winter or at night, in which case it will pump out colder air, which doesn’t rise? How much do these activities vary with the seasons? And ultimately, do these activities grow in magnitude over multi-year time frames? If not, they again contribute to short-term intra-annual biasing but not multi-year effects. And they may be cancelled out anyway by UHI compensations.

Conclusions about ‘Bad’ Stations

The US network certainly isn’t as good as it should be. There are certainly factors operating there that influence short-term daily and seasonal readings, and these may have important implications for use in daily meteorological forecasting, which relies on absolute temperatures. However, for long-term multi-year climatological uses, it is perhaps easy to overestimate the impact of these problems.

It is easy to understand how our subjective impressions, standing near a poor quality site, seeing an A/C roaring away or feeling the radiant heat from a concrete parking lot nearby, could lead us to think this is a big issue. But the combination of the screening properties of the enclosures, long-term averaging, anomaly-based averaging, and UHI compensation will certainly tend to remove many biases that do not have long-term trend changes. And area averaging over the Earth's land surface, combined with the fact that most of the Earth is water, reduces any impact even further.

So it isn’t surprising that the long term temperature trend data doesn’t seem to be significantly affected by station quality issues. That is not to say that there may not be noticeable impacts on shorter term measures – local and seasonal trends and possibly daily temperature range (DTR) effects for example. But for the headline Global Temperature Anomaly, which is a main indicator of Climate Change, station quality issues appear to be a very minor issue, something that ‘all comes out in the wash’.

Station Homogenisation issues

Finally we come to ‘Station Homogenisation’ – the process of reviewing station data records looking for errors that are a result of how the measurement was taken, rather than what the temperature actually was.

A common misconception is that ‘the thermometer never lies’. That the raw data is the gold standard. As anyone who works in any field involving instrumentation knows, this isn’t true; there are always ‘issues’ that you have to monitor for. Any instrument, even a simple thermometer, will have its own built-in biases.

Sometimes there will be readings that are just plain whacky. And surrounding influences can have an impact. A thermometer out in the sunshine will have a different reading from one shaded by your hand for a few minutes. A caretaker who can work quickly taking the readings when the enclosure door is open will produce a different bias from one who works slowly, or reads the instruments in a different order. Bias and error is everywhere.

If readings at a station weren’t always taken at the same time of day, this can introduce biases. Changes in the instruments used can introduce a bias. Some readings can be just plain wrong. Imagine some scenarios:

  • The caretaker of a station may have had a ‘big night out’ and not read the thermometer very accurately. There is an error there but we probably can’t detect it.
  • The caretaker of a station may have a ‘big night out’ every Friday night. Now there might be a regular error in Saturday’s readings. With a pattern like this, we might be able to detect it with statistical analysis. We might be able to correct it but only if we are certain enough.
  • That caretaker might have had one ‘REALLY big night out’ and next morning broke the thermometer. He replaced it but did he record that fact in the station log? If he did, we know that a change of bias has been introduced between the two thermometers. Then we can compare readings from before and after and try to find the change. But only after we have years' worth of data from both thermometers. And if he didn’t log it, then we only spot a problem if that station seems to have a strange change compared to nearby stations.
  • Over time the Stevenson Screen may have fallen into disrepair, resulting in a slowly changing bias as outside influences start to penetrate. Then the site is updated with a new screen. Biases removed, although the new screen may have its own small bias. If we know about the change, we can try to compensate for it, eventually, once we have enough data from before and after.
  • The caretaker at the station in Ushuaia right at the southern tip of Argentina records data through the early 1900’s. In Spanish, with poor handwriting – really hard to tell 7’s and 9’s apart. The log sheets are sent to Buenos Aires where the data from this and many other stations are collated and typed up onto summary sheets by a clerk with an old battered typewriter. Then they are filed away; 40 years later they are extracted, faded and old, photocopied on a poor quality early copier and mailed to the US for incorporation into climate databases. Where they must be copied into the database again by hand. How many errors have crept in during that process?

So, we can’t simply take the raw data at face value. It has noise in it. We need to analyse this data looking for problems and correcting them when we are confident enough of the correction. But also being careful that we don’t introduce errors through unjustified corrections. This requires care and judgment and it is sometimes a real detective story. And often corrections cannot be made until many, many years later because you need lots of data before you can spot changes in bias.
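One common detection idea, sketched below in Python with hypothetical data and a made-up threshold (real homogenisation algorithms are considerably more sophisticated), is to difference a station against the average of its neighbours: a real climate change shows up at the neighbours too, so a persistent jump in the difference series points to a local change of bias, such as a replaced thermometer.

```python
# Toy breakpoint check on (station - neighbour mean). All data is invented.

def difference_series(station, neighbours):
    """Station readings minus the mean of neighbouring stations, year by year."""
    return [s - sum(n[i] for n in neighbours) / len(neighbours)
            for i, s in enumerate(station)]

def find_step(diffs):
    """Return (year, shift) for the split year that maximises the change in
    the mean difference before vs. after that year."""
    best_year, best_shift = None, 0.0
    for year in range(2, len(diffs) - 1):
        before = sum(diffs[:year]) / year
        after = sum(diffs[year:]) / (len(diffs) - year)
        shift = abs(after - before)
        if shift > best_shift:
            best_year, best_shift = year, shift
    return best_year, best_shift

# Made-up example: a thermometer replaced in year 10 adds a +1.0 degC offset.
neighbours = [[14.0 + 0.02 * y for y in range(20)] for _ in range(3)]
station = [14.0 + 0.02 * y + (1.0 if y >= 10 else 0.0) for y in range(20)]

year, shift = find_step(difference_series(station, neighbours))
print(year)  # 10: the year the artificial offset was introduced
```

The shared warming trend cancels in the difference series, leaving only the local step, which is exactly why neighbour comparison can separate instrument changes from real climate signals.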

So this process of working through the data, trying to make it more accurate is ongoing.

But what of its impact on the temperature record? Again, if the biases at a station don’t change over time, they don’t affect our analysis. Individual errors matter, but they will tend to be random, some higher, some lower, so when we average over large areas and long time periods they tend to cancel out. Again, it is problems that cause changing biases that matter. And analyses of the changes due to homogenisation in the record indicate that there are as many cooling changes as warming ones. Such as this from Brohan et al 2006:

Homogenisation Distribution


So, Part 1A looked at how we should calculate the temperature record and why the method used is very important to the result. And that this doesn’t necessarily match our intuitive idea of how it should be done; in this our intuition is often wrong. Part 1B looked at how we DO calculate the temperature record, using the method outlined in Part 1A, and showed that the area weighting scheme used by one record is based on empirical evidence. In Part 2A we looked at some of the areas where the temperature record has been criticised with respect to its broader locale. And in this post we have explored issues related to the immediate surrounds of the station.

I think we have seen that there are many reasons why we tend to overestimate the effect of these problems.  This conclusion is consistent with the evidence here, here & here from various analyses that show that these possible problems haven’t had any significant effect on the result.

My conclusion is that we can have strong confidence in the results produced for the global temperature trend. Any problems will show up more in short-term patterns such as seasonal, monthly and daily trends. But the headline global numbers look pretty robust.

You will have to make up your own mind but I hope I have been able to give you some food for thought when you are thinking about this.



Comments 1 to 27:

  1. A most informative, eye-opening post that filled in a lot of gaps in my knowledge. I thought I understood why we use anomalies; now I may even be able to explain it to someone else. Many thanks.
  2. Thanks, Glenn. Part 2B was especially interesting and helped fill in some of my gaps.
  3. One thing that I am curious about is the details of the actual measurement protocol for thermometers in Stevenson screens. That is, how current, maximum and minimum temperatures are used in the temperature record. One conceivable error is where a transient event, such as a jet exhaust or an air conditioner turning on, is captured by a maximum temperature measurement.
  4. This is really interesting in that it puts the whole WUWT Surface Station project into a broader perspective. It's interesting from a sociological standpoint how one highly motivated individual (Anthony) can create such a hubbub over what amounts to a small amount of noise in the data.
  5. Rob #4 - yes, I think Glenn did a nice job putting this into perspective. You see a temperature station next to an A/C unit and you think, 'there's no way this data can be reliable!'. But once you look into how the data is analyzed, as Glenn has done in great detail, you see that they do a great job of filtering out extraneous effects. The problem is that the surface station folks didn't bother to examine how the data is actually analyzed to create the average surface temperature data set. Now they're seeing how good a job GISTemp et al. do of it, a few years too late, and there's major egg on their face as a result.
  6. It should be reiterated that denier claims about the surface temperature record have never been backed up by any data analysis work. This in spite of the fact that all the temperature data used by NOAA and NASA are freely available on-line, as are all the necessary software development and data analysis tools. As has been pointed out here in earlier discussions, the software development and data analysis required to test virtually all of the denier claims about the surface temperature record (i.e. claims about "dropped stations", the UHI effect, and "raw vs. adjusted" data discrepancies) can be tackled by a competent programmer/analyst in just a few days. As has been shown in this excellent "Of Averages & Anomalies..." series, the anomaly gridding/averaging procedure is quite straightforward. And thanks to modern (and freely-available) software development tools, the procedure is surprisingly easy to implement (as I found out when I spent some time playing around with the temperature data a few months ago). I posted some of my results some time ago here, but for the benefit of folks new to skepticalscience, I figure that reposting them would be worthwhile. (Note: I used GHCN "raw" temperature data to generate the results.) This first plot shows my unsmoothed land station gridding/averaging results vs. NASA's (the NASA results were copied/pasted directly from the NASA web-site). My gridding/averaging implementation is much cruder than NASA's; I made a couple of "back of the envelope" shortcuts/approximations out of, for lack of a better description, "sheer laziness" ;). But my results still track NASA's quite closely. Another plot that folks (especially those new to this stuff) might be interested in seeing is a plot that shows the effects of the "dropped stations" that Anthony Watts and other deniers have made such a fuss about. Watts and Co. have claimed for a long time that warming trends have been exaggerated by the "dropping" of high-latitude/high-altitude stations from the temperature record. Well, here's a plot that I generated that shows temperature anomaly results for "all stations" vs. "dropped stations excluded" (5-year moving-average smoothing). As you can see, the "dropped stations" effect is minimal. This is something that Watts and Co. could have verified for themselves with just a few days (at most) of "spare time" programming/analysis effort. If Watts didn't have the programming skills to tackle a project like this, he should have hired someone or gotten a volunteer to do the work for him before he started throwing around accusations of incompetence/dishonesty on the part of the climate-science community.
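For readers curious what such a gridding/averaging procedure looks like in outline, here is a deliberately crude Python toy (hypothetical stations, latitude bands only, cosine area weighting; it is not the commenter's actual code, nor NASA's method):

```python
import math
from collections import defaultdict

def gridded_mean(stations, band_width=10.0):
    """Average station anomaly means within latitude bands, then combine
    bands weighted by cos(latitude) as a rough proxy for band area."""
    cells = defaultdict(list)
    for lat, anoms in stations:
        # Each station contributes its own mean anomaly to its band.
        cells[math.floor(lat / band_width)].append(sum(anoms) / len(anoms))
    total, weights = 0.0, 0.0
    for band, station_means in cells.items():
        # Average within the cell first, so densely-sampled regions
        # don't dominate the global figure.
        cell_mean = sum(station_means) / len(station_means)
        w = math.cos(math.radians((band + 0.5) * band_width))
        total += w * cell_mean
        weights += w
    return total / weights

# Three hypothetical stations: two share the 40-50N band, one sits at 65N.
stations = [(42.0, [0.3, 0.5]), (44.0, [0.5, 0.7]), (65.0, [1.0, 1.2])]
print(round(gridded_mean(stations), 3))
```

The two clustered stations are averaged into one cell value before the global average is taken, which is the key step that stops station density from biasing the result.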
  7. GISS is using an urban correction (among others) in their U.S. temperature graphs. There is no mention of the urban correction in the global series. Uncorrected urban stations will yield higher temperatures than corresponding rural stations, this is well documented, and accurate corrections need be applied to remove this effect.
  8. I guess the big problem I see is that nobody seems to know the range of these stations. The fact that many stations have been closed is alarming. This tells me we don't have adequate coverage. I am completely against filling in missing locations with anomaly numbers; I would rather see a question mark than a guess.
  9. Dr. Cadbury--I'm pretty sure that the entire point behind the "teleconnections" discussed in this series of postings is that we do, in fact, know the "range" of the stations and it is therefore possible to remove stations without "guessing".
  10. "Uncorrected urban stations will yield higher temperatures than corresponding rural stations, this is well documented" However various slicing and dicing of the data has shown that there's little effect on *trend*, which is all we care about.
  11. Dr Jay "I am completely against filling in missing locations..." You should be relieved when you have a look at the 'There aren’t enough stations' section in Part 2A of this series The analysis shows that the problem is not much of a problem at all. (This is likely a weather vs. climate issue. We really need local temp, humidity, wind conditions for weather reporting and forecasting. We don't need local specifics for a general picture of global climate.)
  12. @mcClam6 Great. Please give me the range in which they accurate in miles, please. If you simply paste a link in I am not going to read it. I have been given the run around here enough times to know I won't get the answer I am looking for. @Adelady The link you provided is broken. Anyway, I have to disagree with you. I think local temperature is the least important measurement we need. I want specifics in areas where nobody is living to better deduce whether we are having an impact on the area.

    [DB] "Please give me the range in which they accurate in miles, please."

    Pointless, line-in-the-sand dare.

    "If you simply paste a link in I am not going to read it."

    Petulance is the hallmark of a closed mind.

    "I have been given the run around here enough times to know I won't get the answer I am looking for."

    That you refuse to accept information that does not fit your predefined question is telling, and hardly a response worthy of one claiming to possess a "PhD".

    Quit wasting everyone's time: read up on the science and the basics and learn for yourself.  You need solid food, not milk.

    Note to other readers:

    Jay has on numerous occasions questioned the topic of various threads here.  Each time he has been provided with answers with links to published, peer-reviewed supportive and corroborating material.  Answers which apparently are not to his liking.  So be it.

    This Forum is for everyone: Authors, discussion participants and the silent readership majority alike.  The Comments Policy here at Skeptical Science mandates civility and a focus on the science.

    Readers posting genuine questions here always receive genuine, helpful answers. No one here wants to see anyone "not get it". But it is incumbent upon the person asking the question to actually perform the homework given and read the material furnished in answer, including the linked material, if they have questions.

  13. No doubt, the so-called 'skeptics' here will choose to ignore caerbannog's analysis and recommendation about looking at the actual data themselves and undertaking their own analysis. Instead they will most likely continue to spout conspiracy theories, pontificate, make laughable allegations of innumeracy against scientists and the usual flavour of 'skeptic' tactics to fabricate debate, obfuscate and confuse. Either put up (and by that I do not mean linking to some hacked attempt to do some analysis by someone like D'Aleo) or please move on. The planet is warming, and that fact has been independently verified, by the Clear Climate Code, including 'skeptics' such as JeffId and RomanM, not to mention other metrics and observation platforms--deal with it. Ignoring that reality amounts to nothing more than denial.
  14. Cadbury, If I may, I think what you (and Watts) fail to understand is that scientists have carefully looked at this. When trying to generate a temperature map for the day's forecast, having lots and lots of thermometers all over is important. When generating a map of temperature anomalies which span years and decades, and are an average taken from many, many days of readings, we find that five, ten, one hundred, even hundreds of miles are inconsequential. One doesn't need a hundred carefully gridded stations if they all give the exact same answer. And it costs money to take all of those readings, money that could be better spent accumulating valuable rather than redundant information. The argument that we don't have enough temperature stations is a distraction from the truth of the matter. It's like sitting in a hospital worrying that you might die because the snowstorm outside would prevent an ambulance from getting to you in time.
  15. 7, Eric the Red,
    ...accurate corrections need be applied to remove this effect.
    A seemingly thoughtful insight, with the unspoken implication that scientists somehow hadn't thought of this. That in spite of the wealth of details provided to you at the top of this page, in the original post, on the great lengths to which scientists have gone in studying the problem and working to properly and objectively homogenize the data. Really, the unwillingness to read and learn, while also dropping little doubt-grenades, is breathtaking.
  16. Yes, that is why the U.S. record is arguably top notch. Now, if we can just get the rest of the world to follow suit ... Boy, you just cannot resist sticking in little barbs when you jump to conclusions, can you?
  17. @Moderator I'm sorry but I asked a few weeks ago for the total number of global glaciers and somebody gave me a link which did not have the information readily available. All I am looking for is a number.

    [DB] The website I directed you to earlier, the World Glacier Monitoring Service (on its facts and figures page), says about 160,000.

    Part of the problem in getting an exact number is that glaciers are not necessarily discrete, separable entities, like rivers, nor are they all "named". Some parts of a single ice mass have multiple names as well.

    In other words, there may not be a definitive answer to your question.

  18. The number is over 100,000 according to NSIDC. I haven't checked out any of the links at the bottom of this page, but I reckon you'd find what you want at one of them.
  19. DrJ: Read the First Sentence of the Top link. I may be a Certified College Dropout but I still know how to look for answers when I have questions.
  20. 16, Eric the Red, You stick in little (baseless) insinuations and I'll point them out. It's as simple as that. The follow-on statement about the world temperature record being unreliable is similarly foolish. Look, the indicators that the world is warming and the climate is changing are becoming almost too numerous to mention. How can you possibly sit and harp on the observational surface temperature record when multiple different sets of scientists have been working on it full time for decades, and the results are buttressed by satellite observations, melting ice, rising sea levels, shrinking glaciers, migrating species, expanding droughts, etc., etc., etc., etc. To nitpick on obscure details, trying to make the obvious seem controversial, is just denying the undeniable. I don't think you can deny that denying that the planet is warming is an act of serious denial. Do you deny it?
  21. pbjamm, That was awesome! I never knew about Oh, my gosh, I've got a new toy to play with and I can't wait to use it. I swear, I need it 50 times a day, and the implicit sarcasm is just priceless. Awesome!
  22. Dr. PhD: I'm sorry but I asked a few weeks ago for the total number of global glaciers and somebody gave me a link which did not have the information readily available. All I am looking for is a number. If you want that information, there are many ways of finding it on your own (especially if you have a PhD). Demanding that other people do your research for you, and then getting snippy because you don't like the answer, is kind of ungracious, IMO.
  23. Sphaerica, Do you pigeonhole everyone who disagrees with you in the same way? Is that why you think everyone is a denier? I have never denied anything in your previous posts, except for droughts. Whatever gave you the idea that I do not believe that the planet has warmed?

    [DB]  Everyone:

    1. Please focus on the arguments, not the person(s).
    2. If one must discuss climate denialism, the Are you a genuine skeptic or a climate denier? thread is the appropriate venue for it.  Not here.
  24. 23, Eric the Red, From your comment #7
    ...accurate corrections need be applied to remove this effect.
    1st implication: the temperature record is suspect. 2nd implication: if the record is suspect, then logically maybe there is no warming. Many, many people will draw those (invalid) conclusions from your statement. From your comment #16:
    Now, if we can just get the rest of the world to follow suit ...
    Same implications and ultimate effect. That this is done by implication instead of direct statement escapes no one. It also directly contradicts the content of the original post, which is supported by a wide variety of evidence and logic, while your little grenades (to me) serve to subtly and inadequately attempt to refute the original post. Am I mistaken? Would you like to clarify what you actually meant to communicate?
  25. 24, Sphaerica, By GISS's own admission, they make these corrections. Yes, that would imply that the raw data is suspect. This analysis is all about the adjustments and corrections made to U.S. monitoring stations. The rest of the world makes no such claim. Are they performing similar corrections?
  26. 25, Eric the Red,
    GISS's own admission...
    Why do you insist on constantly doing this? "Admission." As in confession, as in, they're doing something wrong by making the corrections, and they need to "admit" to their evil, nefarious actions. "...the raw data is suspect." Did you read the post? Do you understand why the data must be homogenized (or, perhaps to use a better term, normalized)? That doesn't make it "suspect."

    By dropping these little grenades of yours, with no clear explanation and without background, you are sabotaging the nature and intent of the original post, and to my eyes, you are doing it deliberately, because you feign ignorance when it is pointed out to you, and yet compound the error by reiterating your (vacant) points.

    And you completely dodged my question. Your posts imply that the data is suspect and therefore the world may not be warming. Is this what you intend to communicate? Is this what you are saying? Agree that this is what you are saying and openly deny that the globe is warming, or else deny that this is what you are saying, and explicitly agree that the world is warming. State it clearly and without ambiguity. What are you saying?
  27. Here's another set of results that folks here might want to take a look at... Down at the bottom of this post is another interesting little plot I generated. This one shows what happens when you throw out 90 percent of the temperature stations at random (with no attempt to maintain uniform geographic coverage).

    What I did here was to generate a random integer (with a pseudorandom number generator) between 0 and 9 (inclusive) for each temperature station. If the integer value was 9, I included the station in the computations; otherwise, I threw it out. I repeated this 10 times (each time throwing out a different random "9 out of 10" selection of stations) -- this effectively ruled out a "lucky hit" cherry-pick situation.

    The official NASA/GISS land-station results (with all temperature stations) are shown as the dark-red plot line in the foreground of the figure below; all the other plot lines are results from my "throw out 9 out of 10 stations" processing. As you can see, my results are quite consistent, although they do tend to show a bit *more* warming than the official NASA "all stations" results. This is probably a result of NH overweighting; throwing out 90 percent of the stations will create more coverage gaps in the SH (where there has been less warming) than in the NH (where warming has been more pronounced). Just speculating here; I haven't really investigated this.

    These results show two things -- (1) the GHCN temperature record really is robust (and oversampled), and (2) if anything, it appears that NASA goes out of its way to avoid exaggerating the global-warming trend -- if NASA were really "cooking the books" to exaggerate the warming trend, at least some of my runs would have shown a smaller warming trend than NASA's. But that is obviously not the case here. (Ignore the details of the plot legend; the legend labels are cryptic abbreviations of some processing-run parameters that are of no consequence here.)
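    The subsampling experiment described in comment 27 can be sketched with synthetic data. Everything below except the selection rule (keep a station only when a random digit equals 9, repeated over ten runs) is an invented assumption for illustration: the station count, the 30-year span, the 0.02 C/yr trend, and the noise level are all made up, not taken from GHCN.

    ```python
    import random

    random.seed(42)  # reproducible demo

    # Hypothetical synthetic data: 1000 "stations", each a 30-year annual
    # anomaly series sharing a common warming trend plus station noise.
    N_STATIONS, N_YEARS = 1000, 30
    TREND = 0.02  # degrees C per year, assumed for illustration

    stations = [
        [TREND * year + random.gauss(0, 0.5) for year in range(N_YEARS)]
        for _ in range(N_STATIONS)
    ]

    def global_mean_series(subset):
        """Average the anomaly series of the given stations, year by year."""
        return [sum(s[y] for s in subset) / len(subset) for y in range(N_YEARS)]

    def linear_trend(series):
        """Ordinary least-squares slope of the series against year index."""
        n = len(series)
        xbar = (n - 1) / 2
        ybar = sum(series) / n
        num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
        den = sum((x - xbar) ** 2 for x in range(n))
        return num / den

    # Trend computed from every station
    full_trend = linear_trend(global_mean_series(stations))

    # Ten runs, each keeping a station only when its random digit is 9,
    # i.e. throwing out roughly 9 out of every 10 stations at random.
    subsample_trends = []
    for run in range(10):
        kept = [s for s in stations if random.randint(0, 9) == 9]
        subsample_trends.append(linear_trend(global_mean_series(kept)))

    print(f"all-station trend: {full_trend:.4f} C/yr")
    print(f"10%-subset trends: {min(subsample_trends):.4f} "
          f"to {max(subsample_trends):.4f} C/yr")
    ```

    With noise that averages out across stations, the ten 10%-subset trends cluster tightly around the all-station trend, which is the qualitative point the comment makes about oversampling; the real GHCN analysis also has to handle geographic gridding, which this toy version ignores.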


© Copyright 2024 John Cook