Are surface temperature records reliable?

What the science says...


The warming trend is the same in rural and urban areas, measured by thermometers and satellites, and by natural thermometers.

Climate Myth...

Temp record is unreliable

"We found [U.S. weather] stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.

In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source." (Watts 2009)

Temperature data is essential for predicting the weather. So, the U.S. National Weather Service, and every other weather service around the world, wants temperatures to be measured as accurately as possible.

To understand climate change we also need to be sure we can trust historical measurements. A group called the International Surface Temperature Initiative is dedicated to making global land temperature data available in a transparent manner.

Surface temperature measurements are collected from about 30,000 stations around the world (Rennie et al. 2014). About 7000 of these have long, consistent monthly records (Fig. 1). As technology gets better, stations are updated with newer equipment. When equipment is updated or stations are moved, the new data is compared to the old record to be sure measurements are consistent over time.
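The overlap comparison described above can be sketched in a few lines. This is a simplified illustration with hypothetical numbers, not the actual GHCN adjustment procedure: run the old and new instruments side by side, estimate the systematic offset from the overlap period, and remove it from later readings.

```python
# Sketch of keeping a record consistent across an instrument change.
# Numbers are invented for illustration; real homogenization of station
# records (e.g. in GHCN) is considerably more involved.

def mean(xs):
    return sum(xs) / len(xs)

# Monthly mean temperatures (deg C) during a six-month side-by-side overlap
old_sensor = [14.2, 15.1, 17.8, 19.4, 21.0, 20.3]
new_sensor = [14.6, 15.5, 18.1, 19.9, 21.4, 20.8]

# The new sensor reads systematically warmer; estimate the bias...
offset = mean([n - o for n, o in zip(new_sensor, old_sensor)])

# ...and subtract it from later new-sensor readings so the combined
# record stays consistent with the old one
later_readings = [19.8, 18.2, 15.4]
adjusted = [round(t - offset, 2) for t in later_readings]
```

With these invented numbers the estimated offset is about 0.42 degrees, which is then removed from the new record before it is spliced onto the old one.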


Figure 1. Station locations with at least 1 month of data in the monthly Global Historical Climatology Network (GHCN-M). These 7280 stations are used in the global land surface databank. (Rennie et al. 2014)

In 2009 some people worried that weather stations placed in poor locations could make the temperature record unreliable. Scientists at the National Climatic Data Center took those criticisms seriously and did a careful study of the possible problem. Their article "On the reliability of the U.S. surface temperature record" (Menne et al. 2010) had a surprising conclusion: the stations that critics claimed were "poorly sited" actually recorded slightly cooler maximum daily temperatures than the average.

In 2010 Dr. Richard Muller criticized the "hockey stick" graph and decided to do his own temperature analysis. He organized a group called Berkeley Earth to do an independent study of the temperature record. They specifically wanted to answer the question: is the temperature rise on land improperly affected by four key biases (station quality, homogenization, urban heat island, and station selection)? Their answer was no: none of those factors biases the temperature record. The Berkeley conclusions about the urban heat effect were nicely explained by Andy Skuce in an SkS post in 2011. Figure 2 shows that the U.S. network does not show differences between rural and urban sites.


Figure 2. Comparison of spatially gridded minimum temperatures for U.S. Historical Climatology Network (USHCN) data adjusted for time-of-day (TOB) only, and selected for rural or urban neighborhoods after homogenization to remove biases. (Hausfather et al. 2013)

Temperatures measured on land are only one part of understanding the climate. We track many indicators of climate change to get the big picture. All indicators point to the same conclusion: the global temperature is increasing.


See also

Understanding adjustments to temperature data, Zeke Hausfather

Explainer: How data adjustments affect global temperature records, Zeke Hausfather

Time-of-observation Bias, John Hartz

Berkeley Earth Surface Temperature Study: “The effect of urban heating on the global trends is nearly negligible,” Andy Skuce



Check original data

All the Berkeley Earth data and analyses are available online at

Plot your own temperature trends with Kevin's calculator.

Or plot the differences between rural, urban, or selected regions with another calculator by Kevin.

NASA GISS Surface Temperature Analysis (GISTEMP) describes how NASA handles the urban heat effect and links to current data.

NOAA Global Historical Climatology Network (GHCN) Daily contains records from over 100,000 stations in 180 countries and territories.

Last updated on 15 August 2017 by Sarah. View Archives


Comments 101 to 114 out of 114:

  1. Berényi Péter at 23:54 PM on 10 August, 2010 Peter, some of the links seem to be broken? Anyway, does your first chart represent published results or just your own analysis? If you are interested in Arctic surface station records, have a look at Bekryaev 2010, which uses data from 441 high-latitude and Arctic surface stations.
  2. Berényi Péter writes: Unfortunately I do not have too much time for this job, you may have to wait a bit. Like Berényi Péter, I also don't have a lot of time right now, being about to leave for vacation in a few days and having far too much to do. But I thought it would be worth putting up a quick example to illustrate the necessity of using some kind of spatial weighting when analyzing spatially heterogeneous temperature data.

    Since BP uses Canada as his example, I'll do the same. He mentions a useful data source, the National Climate Data and Information Archive of Environment Canada. I'll use the same data source. Since I want to get this out quickly, I'm just using monthly mean temperature data from July, and as another shortcut I'll just look at every 5 years (i.e., 2010, 2005, 2000, 1995, ...). I picked July because it's the most recent complete month, and 5-year intervals for no particular reason. Maybe sometime later I can expand this to look at the complete monthly data set. In any case, using just one month per 5-year interval will make this analysis more "noisy" than it would otherwise be, but that's OK.

    I then identified all stations with data in all years, and whose name and geographic coordinates were exactly the same in all years. There are just over 150 of them. Note, first, that the stations aren't distributed uniformly. Note, second, that the trends differ greatly in different regions. In particular, note that there are a large number of stations showing cooling in inland southwestern Canada. There are also a lot of stations showing warming across eastern and northern Canada. (This is an Albers conical equal-area projection, so the apparent density of stations is proportional to their actual density on the landscape.)

    If you calculate the trend for each station, and then just take the overall non-spatial average, you get a slight cooling of about -0.05C/decade for Julys in the 1975-2010 period. But as the map shows, that's quite unrealistic as an estimate of the trend for the country as a whole! The large number of tightly-clustered stations in certain areas outweighs the smaller number of stations that cover much larger areas elsewhere.

    To estimate the spatially structured temperature trend I used a fairly simple kriging method. This models a continuous surface based on the irregularly distributed station data. There are many other approaches that could be used (e.g., gridding, other interpolation methods, etc.). Anyway, the spatially weighted trend across all of Canada is warming of +0.18C/decade.

    So ... a naive nonspatial analysis of these data gives an erroneous "cooling" of -0.05C/decade. A spatially weighted analysis gives a warming of +0.18C/decade. This is why I keep telling Berényi Péter that his repeated attempts to analyze temperature data using simple, nonspatial averages are more or less worthless. Again, this is based on a small fraction of the overall data set, and a not necessarily optimal methodology. But it's sufficient to show that using real-world data you can end up with seriously misleading results if you don't consider the spatial distribution of your data.
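Ned's point can be sketched numerically. The following toy Python example uses invented numbers (not the actual Canadian data): eight clustered stations in one region cooling slightly, two stations covering two other regions warming, so the naive average and the grid average disagree on the sign of the trend.

```python
# Toy illustration of why nonspatial averaging misleads: many clustered
# stations in one grid cell outvote the few stations that represent
# much larger regions.  Numbers are invented for illustration.
from collections import defaultdict

# (grid_cell, trend in deg C/decade): 8 clustered stations cooling
# slightly in cell A, 2 warming stations in cells B and C.
stations = [("A", -0.1)] * 8 + [("B", 0.3), ("C", 0.25)]

# Naive average: every station counts equally, so cell A dominates
naive = sum(t for _, t in stations) / len(stations)

# Grid average: average within each cell first, then across cells,
# so each region counts once regardless of station density
cells = defaultdict(list)
for cell, trend in stations:
    cells[cell].append(trend)
gridded = sum(sum(v) / len(v) for v in cells.values()) / len(cells)
```

With these made-up numbers the naive average shows cooling (about -0.025/decade) while the grid-weighted average shows warming (0.15/decade), qualitatively the same reversal Ned found with the real data.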
  3. #101 Peter Hogarth at 04:25 AM on 11 August, 2010 some of the links seem to be broken? Yes, two of them, sorry.
    • GHCN data
    • March, 1840 file at Environment Canada - this one only contains a single record for Toronto, but shows the general form of the link and structure of records
    Anyway, does your first chart represent published results or just your own analysis? As I have said, it is my own analysis. But it is a pretty straightforward one using only public datasets. Really nothing fancy; anyone can repeat it. BTW, the result, as you can see, is published (here :) It is not peer reviewed of course. But since the quality of the peer review process itself is questioned in this field, that is a strength, not a deficiency. Any review is welcome. have a look at Bekryaev 2010 which uses data from 441 high latitude and Arctic surface stations You still don't get it. The Bekryaev paper is useless in this context, as it is neither freely available nor has its supporting dataset been published. Therefore it is impossible to repeat their analysis or check the quality of their data here and now. Credibility issues can get burdensome indeed.
  4. That map from my previous comment also nicely illustrates the conceptual flaw in the claim (by Anthony Watts, Joe D'Aleo, etc.) that the observed warming trend is an artifact of a decline in numbers of high-latitude stations. Obviously, stations in northern Canada are mostly warming faster than those further south. So, if you did use a non-spatial averaging method, dropping high-latitude stations would create an artificial cooling trend, not warming. Using gridding or another spatial method, the decline in station numbers is pretty much irrelevant (though more stations is of course preferable to fewer).
  5. Berényi Péter at 07:07 AM on 11 August, 2010 Thanks for fixing the links, though I think Ned has actually answered one question I had quite efficiently. I'm not sure what it is I still don't get? (why so defensive?) Bekryaev lists all sources (some of them available for the first time), the majority with links, though I admit I haven't followed them all through. I am surprised you make comments without even looking at the paper. Anyway, I genuinely thought you might be interested.
  6. #105 Peter Hogarth at 07:58 AM on 11 August, 2010 Bekryaev lists all sources (some of them available for the first time), the majority with links, though I admit I haven't followed them all through. Show us the links, please. I am surprised you make comments without even looking at the paper. Anyway, I genuinely thought you might be interested. I am. However, I would prefer not to pay $60 just to have a peek at what they've done. I am used to the free software development cycle where everything happens in plain public view. #104 Ned at 07:11 AM on 11 August, 2010 Obviously, stations in northern Canada are mostly warming faster than those further south I see that. However, that does not explain the fact that the bulk of the divergence between the three datasets occurred in just a few years around 1997, while the sharp drop in Canadian GHCN station numbers happened in July 1990. Anyway, I have all the station coordinates as well, so a regional analysis (with clusters of stations less than 1200 km apart) can be done as well. But I am afraid that has to wait, as I have some deadlines, then holidays as well.
  7. #102 Ned at 06:50 AM on 11 August, 2010 I thought it would be worth putting up a quick example to illustrate the necessity of using some kind of spatial weighting when analyzing spatially heterogeneous temperature data OK, you have convinced me. This time I have chosen just the Canadian stations north of the Arctic Circle from both GHCN and the Environment Canada dataset. The divergence is still huge. Environment Canada shows no trend whatsoever during this 70-year period, just a cooling event centered on the early 1970s, while the GHCN raw dataset gets gradually warmer than that, by more than 0.5°C at the end, creating a trend this way. No amount of gridding can explain this fact away.
  8. This one is related to the figure above. It's adjustments to GHCN raw data relative to the Environment Canada Arctic dataset (that is, difference between red and blue curves). Adjustment history is particularly interesting. It introduces an additional +0.15°C/decade trend after 1964, none before.
  9. BP #108 Your approach still gives the appearance of cherry picking stations. As I said previously, you need to make a random sample of stations to examine. Individual stations on a global grid are not informative, except as curiosities :)
  10. #109 kdkd at 19:37 PM on 11 August, 2010 Your approach still gives the appearance of cherry picking stations You are kidding. I have cherry picked all Canadian stations north of the Arctic Circle that are reporting, that's what you mean? Should I include stations with no data or what? How would you take a random sample of the seven (7) stations in that region still reporting to GHCN every now and then? 71081 HALL BEACH,N. 68.78  -81.25 71090 CLYDE,N.W.T.  70.48  -68.52 71917 EUREKA,N.W.T. 79.98  -85.93 71924 RESOLUTE,N.W. 74.72  -94.98 71925 CAMBRIDGE BAY 69.10 -105.12 71938 COPPERMINE,N. 67.82 -115.13 71957 INUVIK,N.W.T. 68.30 -133.48 BTW, here is the easy way to cherry pick the Canadian Arctic. Hint: follow the red patch.
  11. One more piece of the puzzle. If DMI (Danish Meteorological Institute) Centre for Ocean and Ice is visited, a very cool melt season can be noticed this year north of the 80° parallel (compared to the 1958-2002 average). It went below freezing two weeks ago (with the sun up in the sky 7×24 hours a week) and stayed there consistently. This is unheard of since measurements started. Melt season is defined here as the period when 1958-2002 average is above freezing. It is 65 days, from 13 June to 16 August. One wonders how exceptional this weather might be. Therefore I have recovered average melt season temperatures for the high Arctic from the DMI graphs for the last 53 years. This is what it looks like: It is pretty stable up to about 1992. Then, after a brief warming (a tipping point?) it dives into a rather scary, accelerating downward trend. So no, this year is not exceptional, just an extension of the last two decades. It may even be consistent with recent ice loss of the Arctic Basin, because lower temperatures mean higher pressure, a predominantly divergent surface wind pattern around the Pole, hence increased export of ice to warmer periphery. Of course with further cooling this trend is expected to turn eventually. However, there is one thing this downward trend is surely inconsistent with. It is the upward trend reported by e.g. GISS (US National Aeronautics and Space Administration - Goddard Institute for Space Studies) and the computational climate models it is calibrated to, of course. This conflict should be resolved.
  12. BP - homogenization adjustments happen at the individual station level and relate to time of day of reading, screen type, thermometer type, altitude, etc. I've said it before and I'll say it again. If you think the homogenization is done wrong, then you need to show us a station where the adjustment procedure has been incorrectly applied, or proof that those procedures have flaws. There is just not enough information here to assess whether your supposed problems are real problems. Pick a station in this high Arctic set. Dig out the data needed for homogenization, follow the GHCN manual, and show us where they went wrong. Just one station.
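The kind of station-level check being described can be sketched. Below is a toy Python example, with invented numbers, of the basic idea behind pairwise homogenization: difference a target station against a nearby neighbor, and look for a step change in the difference series that would flag a non-climatic event (a station move, a new screen, a time-of-observation change). The real GHCN procedure is far more elaborate.

```python
# Toy pairwise-homogenization sketch.  A step in target-minus-neighbor
# flags a non-climatic change at the target station.  Data invented
# for illustration only.

target   = [10.0, 10.1,  9.9, 10.2, 11.2, 11.1, 11.3, 11.2]
neighbor = [10.1, 10.2, 10.0, 10.1, 10.2, 10.1, 10.3, 10.2]

# Regional climate is shared, so differencing removes it and leaves
# any station-specific discontinuity
diff = [t - n for t, n in zip(target, neighbor)]

def step_size(series, k):
    # Jump in mean difference if we split the series at index k
    left, right = series[:k], series[k:]
    return abs(sum(right) / len(right) - sum(left) / len(left))

# Find the split point with the largest jump, and its size
k = max(range(1, len(diff)), key=lambda j: step_size(diff, j))
shift = sum(diff[k:]) / len(diff[k:]) - sum(diff[:k]) / len(diff[:k])
```

Here the difference series jumps at index 4 by about 1.05 degrees, which would be the candidate adjustment to investigate against the station's metadata.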
  13. BP - and I will ask again. What do you think is the probability of the surface temperature record, glacial ice volume, sea level, and satellite temperature trends ALL being wrong so as to give us a false trend? Consilience, anyone?
  14. #112 scaddenp at 10:49 AM on 18 August, 2010 Pick a station in this high arctic set. Dig out the data needed for homogenization, follow the GHCN manual and show us where they went wrong. Just one station. Nah, that would be cherry picking and excessive detail.
  15. BP - you point us to a site saying how interesting it is, but never found out what the homogenisation procedure was. As I pointed out earlier, people have done this for 2 stations in NZ where "they were apparently adjusted to show warming", but when the station siting history etc. was examined, the homogenisation procedure was shown to be correct. It's not enough to show just the readings; you have to have the site history and adjustment procedure. And you guess on probability that the consilience is wrong?
  16. #113 scaddenp at 10:49 AM on 18 August, 2010 BP - and I will ask again. What do you think is the probability of the surface temperature record, glacial ice volume, sea level, and satellite temperature trends ALL being wrong so as to give us a false trend? I can't assign a probability to that event, because the sample space is undefined. We have no idea what might or might not be going on in the background. But I would say it's likely in the ordinary sense of the word. In all these cases people are desperately looking for tiny little effects hidden in huge noise with predetermined expectation. Not the best precondition for objectivity. At least the surface temperature record has serious problems with neglecting the temporal UHI effect due to fractal-like population distribution and the quadrupling of global population density in slightly more than a century. If you subtract this from the trend, not much remains, leaving all the multiple independent lines of evidence inconsistent with each other.
  17. #115 scaddenp at 11:20 AM on 18 August, 2010 never found out what the homogenisation procedure was Listen, I am talking about adjustments done to raw data here. I thought homogenization is supposed to come later. Anyway, it is next to impossible to assess the validity of a procedure if truly raw data are not published. How likely is it that Environment Canada stations needed an increasing upward adjustment starting in 1964 up to 0.9°C toward the end to make their way into GHCN raw dataset?
  18. BP writes: In all these cases people are desperately looking for tiny little effects hidden in huge noise with predetermined expectation. Not the best precondition for objectivity. I don't think that's a reasonable suggestion. Spencer & Christy are "skeptics" but their UAH satellite record is not dramatically different from RSS's version (+0.14C/decade vs. +0.16). Several of the recent "blog-based" replications of the GISTEMP/HADCRUT surface temperature record were done by "skeptics" or "semi-skeptics" ... but they don't show any difference from the mainstream versions. If Greenland were gaining ice, or if the global mean temperature were falling over the 1979-2010 period, or if there were a reasonable way to process satellite altimetry data that showed sea levels declining ... somebody would have published it by now. Do you seriously think Spencer & Christy haven't scrutinized their methods, looking for anything that could get them back to the (erroneous) cooling trend they got so much fame and attention for in the 1990s? Sorry, BP, but that argument just won't fly.
  19. BP - the irony in your post on objectivity is amazing. Signal to noise in MSU and sea level data is easily quantifiable. And your UHI claim doesn't make any sense given the numerous papers on measuring and understanding the effect. As to GHCN: do you think it reasonable that stations going into the GHCN have temperatures corrected so that every station measures temperature on the same basis? THEN you worry about gridding etc. I think you should actually get the station data and the GHCN adjustment data from the station custodian. Why guess?
  20. #119 scaddenp at 14:53 PM on 18 August, 2010 Do you think it reasonable that stations going into the GHCN have temperatures corrected so that every station measures temperature on the same basis? Definitely. That is, it would be reasonable, but unfortunately it is not what happens. In reality, data from GHCN stations inside the US of A go into the raw data file pretty much unchanged; then later on multiple adjustments are applied to them as they make their way to v2.mean_adj. The bulk of the 20th century warming trend for the US is introduced this way. For the rest of the world an entirely different procedure is followed, where adjustments are hidden from the public eye. That is, for these stations the additional upward trend introduced during the transition from v2.mean to v2.mean_adj is next to negligible, but there are huge adjustments to data before they have a chance to get into the raw dataset. Of course it is always possible to re-collect data from the original sources and make a comparison (that's what I was trying to do with Environment Canada and Weather Underground), but it is not a cost-effective way to do the checking, that much you have to admit. Worse, for most of the stations in GHCN there is no genuine raw data online (not to mention metadata) from the original source, so one would need a pretty extensive organization to do an exhaustive validation job on GHCN data integration procedures.
  21. BP - no one doubts for a moment that data in the series have to be adjusted, but you seem to assume that data adjustment is evidence of a global conspiracy to create global warming, yet you haven't investigated the adjustment for any single station so far as I am aware. Take Wellington. Original station close to sea level. Then it was moved to the met office on top of a nearby hill. ("Proof of global cooling. Adjustments aren't required"). Later it was moved to the airport at sea level. ("Conspiracy to create warming by moving station. Must make adjustment"). NONE of this history is apparent in the raw data. In fact none of it is accessible via the internet. Since you are so sure that a station has been incorrectly adjusted, then surely the way to prove this is to get the adjustment procedure from the custodian and check it against the GHCN manual. None of your graphs mean anything until the basis for adjustment has been audited for an individual station. You can claim a coup if you find just ONE piece of fraud, so it is surely worth the effort of writing directly to the custodian, and a lot more cost-effective than analysis that shows that adjustments are made - we know that. Papers have been written on what and how, and how effective these are.
  22. #121 scaddenp at 08:04 AM on 19 August, 2010 no one doubts for a moment that data in the series have to be adjusted Agreed. However, everyone with basic training in science and a bit of common sense would doubt that the right time for adjustments is before data are put into the raw dataset. If it is done to the numerous Canadian sites we can check against Environment Canada, there is no reason to assume it is not a general practice, also done to most stations for which there is no easy way to recover genuine raw data. The straight, simple and honest path would be not to do it ever, not in a single case. Include all the necessary metadata there along with truly raw measurements, and do adjustments later, putting adjusted values into a separate file. From the Tech Terms Dictionary: Raw data Raw data is unprocessed computer data. This information may be stored in a file, or may just be a collection of numbers and characters stored somewhere on the computer's hard disk. For example, information entered into a database is often called raw data. The data can either be entered by a user or generated by the computer itself. Because it has not been processed by the computer in any way, it is considered to be "raw data." To continue the culinary analogy, data that has been processed by the computer is sometimes referred to as "cooked data." Therefore it is a valid statement that the majority of data in GHCN are cooked.
  23. With two stages of adjustment, you have two types of data. If Environment Canada (are they the real custodian or the collection agency?) says this is the data as read from the thermometer, then it is raw. You have to have the metadata about the thermometer and station changes before you can do the adjustment procedures though. This is what is missing from your analysis. I am pretty sure that GHCN "raw" data is the station-adjusted data ready for gridding. GHCN does not have the data for station series adjustment as far as I know. This is done by the custodial agency in NZ and, I guess, the rest of the world. It needs local knowledge.
  24. BP - apologies. I have taken time I don't really have to read the GHCN documentation. The raw file should indeed be the thermometer readings as received from the custodian, corrected only for scale. If the individual data from Environment Canada don't match the individual data from GHCN, then you do have a case for asking why not. However, averaging isn't meaningful without the methodology for the average. Is the difference in the individual stations or in the averaging method? I note that GHCN rejects station data for which the raw data for homogenization correction are not available, so in principle you should be able to find all that. Since you think the adjustments must be wrong, then pick the station with the highest adjustment and get the homogenization data for that. Repeat the procedure in Peterson et al.
  25. The answer might be no. "A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?", McShane and Wyner, submitted to the Annals of Applied Statistics. One of the conclusions:

    ...we conclude unequivocally that the evidence for a "long-handled" hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data.

    In other words, there might have been other sharp run-ups in temperature, but the proxies can't show them. The hockey stick handle may be crooked, but the proxies can't show it one way or the other.
    Response: Not the same topic. Try this thread for a better place to discuss McShane and Wyner:
    Is the hockey stick broken?
  26. Using only station records when there are two versions of satellite data available is a form of cherry picking. Since those are limited in time, I propose that a compromise set be used. This is the set I will use from now on for the instrumental period. It is a merged set that uses CRU, Hadley, UAH and RSS. Details are available on my site, as well as the file. No set is perfect, but I hope that using a set like this is acceptable, more reliable, and reduces the complaints from each side of the debate about which set of data they use.…rement-is-best/ John Kehr The Inconvenient Skeptic
  27. John Kehr - you do realise that they don't measure the same thing? (And your link doesn't work.) Satellite lower-troposphere data are temperatures through a deep section of the atmosphere centered at around 4000 m. Try reading up on how MSU measurements are made, corrected, etc. ALL of them are valuable; all of them show a warming trend. I think your method of combination is bogus - you need to find a way to reflect how lower troposphere temperature relates to surface temperature.
  28. Doh!!... Those darn links... Working link. I did indicate that the satellite measurement is a measurement of wavelength. I am not saying that it is a perfect method, but none of them are perfect. Hadley and CRU also give different results. This is the one place where anomaly is beneficial. I think it is a more useful method than all skeptics using satellite only and the AGW crowd using CRU only. Instead of arguing about interpolation methods and UHI, I am using more sources of anomaly data. If you have a better proposal for incorporating satellite data into a standard record I am all ears. I don't particularly care what method is used, but a single set that attempts to use both the station and satellite data would be helpful for all.
  29. As for me, if I wanted to know what is going on in the surface record, I would use the surface temperature record. If I wanted to know what is going on in the lower troposphere, I would use MSU data. Given the complexities in the relationship, I would certainly not be interested in a combination, least of all one put together with arbitrary weightings. What would you think of someone doing this in your area? Do you think you could get such an approach published? I did indicate how you would combine them properly, but first you must solve a very difficult problem. Also, the idea that "skeptics" use satellite and the AGW crowd use surface is bogus. It is a matter of which you use for what purpose.
  30. Re: The Inconvenient Skeptic As scaddenp rightly points out, you are in error. The atmosphere is layered, like an onion. The different dataset sources measure different things. Attempting to homogenize them into a "blended" dataset is less like comparing apples to oranges than it is comparing apples and breadfruit. Attempting to shift the focus of the debate to "skeptics using satellite only and the AGW crowd using CRU only" is also misleading. Scientists use the theory that best explains the preponderance of the data. Multiple, independent lines of evidence (of which station data and satellite data are but two) show that our world is warming and that we are causing it. That is what science is telling us. Most "skeptics" choose to focus on part of the evidence available rather than all of it. I can appreciate wanting to roll all of the instrumental data (station and satellite) into one neat package, but it isn't necessary. It's rather like combining the four Gospels into one continuous narrative: while interesting, it doesn't tell us anything we don't already know. The Yooper
  31. TIS writes: I think it is a more useful method than all skeptics using satellite only and the AGW crowd using CRU only. I don't know where you get that impression. I tend to use the satellite record if I'm making a point about recent years, and the instrumental record if the topic is longer term. You can find nice examples of comments where I used the satellite temperature record here and here. Note also that if you click on the "Advanced" tab at the top of this page, then scan down to figure 7, you'll see a comparison of temperature reconstructions in which I averaged the various instrumental records to get a "surface" record, and averaged the satellite records to get a "lower troposphere" record.
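Averaging several records into a composite, as described above, only makes sense once each record is expressed as anomalies from a common baseline period, since the datasets differ in absolute calibration. A minimal Python sketch with hypothetical numbers (not the actual GISTEMP or HadCRUT values):

```python
# Put each record on a common anomaly baseline before averaging, so a
# constant calibration offset between datasets drops out.  Numbers are
# hypothetical, for illustration only.

def anomalies(series, base_start, base_end):
    # Subtract the mean over the chosen baseline period
    base = series[base_start:base_end]
    baseline = sum(base) / len(base)
    return [x - baseline for x in series]

record_a = [14.0, 14.1, 14.3, 14.4]   # absolute deg C, hypothetical
record_b = [13.8, 13.9, 14.1, 14.2]   # runs ~0.2 deg C cooler overall

# Use the first two values as the baseline period for both records
a = anomalies(record_a, 0, 2)
b = anomalies(record_b, 0, 2)

# The composite "surface" record is the mean of the anomaly series
surface = [(x + y) / 2 for x, y in zip(a, b)]
```

Because the 0.2-degree offset between the two invented records is constant, their anomaly series agree, and the composite reflects the shared trend rather than the calibration difference.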
  32. ice-core samples are WORTHLESS EVIDENCE, as proven at . And since this Global Warming has only this physical evidence (with all else being ambiguous), then their argument FAILS.
    Response: Please be sure to review the Comments Policy before posting. In particular, we ask that you refrain from posting duplicate comments in multiple threads, and avoid the use of ALL CAPS.
  33. KirkSkywalker - Your referenced web page is mistaken. CO2 is not retained as dry ice in ice cores, but rather as gas bubbles (little icy air tanks). Since that's the only argument presented on the page, I find it lacking content. To include the quote from that page: “A single fact will often spoil a most interesting argument.” –William Feather
  34. KirkSkywalker - Thinking back, I recalled something like this before. Googling a bit, I found that you had posted the same error about ice cores here, on Oct. 23. And had received the same reply from me. Are you reading this website (the point of a discussion is to, in fact, discuss), or just posting and walking away?
    Response: KR, thank you for your vigilance in noting that the same point is being raised in multiple threads. The thread where you responded to KirkSkywalker's comment last month (What does past climate change tell us about global warming?) is probably a better fit than this one for discussion of ice cores. Let's have any further discussion of KirkSkywalker's claims about ice cores take place over there.
35. Kirk: "And since this Global Warming has only this physical evidence (with all else being ambiguous), their argument FAILS." Kirk, if you want physical evidence for global warming, go here and also here and here. You might also truck on over to here if you want to get a grip on the physics of GW.
36. Hi all, I've been hearing a lot about degraded NOAA satellites. Most of what I find on it is from viciously slanted blogs. This MSU webpage was pointed out to me. It confirms some degree of difficulty with one or more NOAA satellites that resulted in some distorted thermal images. I'm having trouble finding information on the temporal duration of the issues, and I also don't know what data have been affected. Does anyone have an answer to this challenge? Thanks.
    Response: [Daniel Bailey] This was addressed by Ned at Great-Lakes-satellite-temperature.
  37. Thanks Yooper, Hey I was at the University of Michigan Biological Station this June and July, and I toured the Upper Peninsula a bit. Do you do research in Michigan?
  38. Re: Rovinpiper (137) Sorry, I no longer work in the Earth Sciences fields. In pharmaceuticals now, living where I want to live instead of doing the work I wanted to do & hating where I was living (Washington, DC). If you want to chat via email, send it to John Cook here at Skeptical Science & he'll forward it to me. The Yooper
39. Here is the link to "Tales from the thermometer", which is available via the Wayback Machine: Which brings up my question on the temp data sets: the HadCRU and GISS records use the same thermometers, with different data-adjustment procedures etc., while the GSOD database has many more stations. My question is: does it also include the GHCN stations (while adding many more), or is it a set of completely distinct stations? I couldn't tell for sure from the links at Ned #90. Also, are there any other worldwide surface station data sets distinct from the GHCN that have been looked at? Many thanks!
40. If anyone cares: I emailed the folks who maintain the GSOD database, and it sounds like the GHCN daily record overlaps with 4131 GSOD stations, but that the GHCN monthly stations (which, per the above, provide the 3 main temperature datasets) do not overlap with the GSOD stations. Still not quite sure I understand that, so if anyone has any other info, feel free to chime in!
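[For anyone wanting to check this sort of overlap themselves: once you have the two station-ID inventories, the count of common stations is a set intersection. A minimal sketch; the IDs below are invented placeholders, not real GHCN or GSOD identifiers.]

```python
# Sketch: count stations common to two networks by intersecting ID sets.
# The IDs are made-up placeholders for illustration only.

ghcn_daily = {"ST001", "ST002", "ST003", "ST004"}
gsod       = {"ST003", "ST004", "ST005"}

overlap = ghcn_daily & gsod          # stations present in both inventories
print(len(overlap), sorted(overlap)) # → 2 ['ST003', 'ST004']
```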
  41. I am looking for information on how NASA GISS fills in the gaps on the Arctic temperature grids. I just want a better understanding. Thanks.
  42. RickG, have a look at GLOBAL SURFACE TEMPERATURE CHANGE, J. Hansen, R. Ruedy, M. Sato, and K. Lo
43. GISS Temp is the obvious place to start; the papers describing the methodology are given there. You might also like to look at Ned's post above (#102) to help you judge whether the HadCRUT method (effectively interpolating unsampled cells with the global average) or the GISS method (inferring from nearby station data) might give the better answer for the Arctic.
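[To give a flavour of the GISS approach: in the published method (Hansen & Lebedeff 1987), station anomalies within 1200 km of a point contribute with a weight that falls off linearly with distance. This toy sketch illustrates only that weighting idea, with invented distances and anomalies; the real analysis involves many more steps.]

```python
# Toy illustration of GISS-style gap filling: weight nearby station
# anomalies linearly down to zero at 1200 km. Input values are invented.

LIMIT_KM = 1200.0

def interpolate(stations):
    """stations: list of (distance_km, anomaly_degC) pairs for one grid point."""
    weighted = [(1 - d / LIMIT_KM, a) for d, a in stations if d < LIMIT_KM]
    if not weighted:
        return None  # no usable station; the cell would be left empty
    total_w = sum(w for w, _ in weighted)
    return sum(w * a for w, a in weighted) / total_w

# Hypothetical Arctic grid point: two stations in range, one too far away.
print(interpolate([(300, 2.0), (900, 1.0), (1500, 0.5)]))  # → 1.75
```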
44. Responding to cloa513 from here: If someone were averaging temperatures in the way you seem to think they are, then you would have a point. However, if you read Hansen 2008, you will see that is NOT how it is done, and the keepers of the temperature record would agree. The links to the actual method have been pointed out to you before, so why are you persisting with this erroneous strawman?
  45. Reply to comment from here. "The Berkeley Earth Surface Temperature project is incorporating criticism of data collection sites" And here is what the BEST project says as of today: We are first analyzing a small subset of data (2%) to check our programs and statistical methods and make sure that they are functioning effectively. We are correcting our programs and methods while still “blind” to the results so that there is less chance of inadvertently introducing a bias. The Berkeley Earth team feels very strongly that no conclusions can yet be drawn from this preliminary analysis. -- emphasis added Best to wait until there's a finding before rushing to judgment. But then, you're reading Watt$.
46. At NASA's GISS Surface Temperature Analysis webpage, the Annual Mean Temperature Change in the United States appears to have peaked and to be dropping at the end of the graph?
47. 146 sjshaker Ask yourself this. If you had seen that graph in the early 1990s - obviously only with the numbers up to then - would you not have said the same thing? Would you have been right? Given which, do you think looking at the ups and downs over a time range of a couple of years is reliable?
48. Keep in mind, Chris, that the Fig. D you refer to is only for the United States and is based on running 5-year means. In English, this translates to: the info shown for the last 5 years on the graph is less certain and more variable (relative to that which preceded it). The Yooper
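[For anyone curious why the end of such a curve is shakier: a centered 5-year window simply cannot be formed for the first and last two years of the series. A minimal sketch with invented numbers:]

```python
# Sketch: centered 5-year running mean. Note the smoothed series is
# 4 points shorter than the input - no complete window exists at the
# ends, which is one reason the end of the curve is less certain.

def running_mean5(values):
    return [sum(values[i - 2:i + 3]) / 5 for i in range(2, len(values) - 2)]

annual = [0.1, 0.3, 0.2, 0.4, 0.5, 0.3, 0.6]  # invented annual values
print([round(m, 2) for m in running_mean5(annual)])  # → [0.3, 0.34, 0.4]
```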
49. Gee, Chris, noticed that it has dropped before? I think I can predict with some confidence that this year will be cooler than last year. Why? La Niña. I can also bet with reasonable confidence that it will be warmer than the last La Niña of similar magnitude. And guess what: temperatures will go up again in the next El Niño. Do you become a climate skeptic in La Niña years, and a warmist in El Niño years?
50. I've been looking for long-term historical data from climate records for specific met sites that I can download and graph on my own. So far, Google has not provided. I did find and read some nice Wikipedia entries on climate records, the controversies about same, and more about the Berkeley Earth Surface Temperature Project. I also found access to "Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850". I would appreciate pointers to raw data that we can download ourselves. Chris Shaker
    Response: Go to Click the Data Sources link in the horizontal bar at the top of that page.
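[Once a station's raw annual series is downloaded, a common first look is a least-squares trend. A self-contained sketch, using ordinary least squares written out by hand and invented numbers rather than real station data:]

```python
# Sketch: ordinary least-squares trend through an annual temperature
# series - a typical first analysis of downloaded station data.
# The years/temps below are invented for illustration.

def ols_slope(years, temps):
    """Least-squares slope (deg C per year) of temps against years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = [2000, 2001, 2002, 2003, 2004]
temps = [10.0, 10.1, 10.2, 10.3, 10.4]
print(round(ols_slope(years, temps), 6))  # → 0.1 (deg C per year)
```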

© Copyright 2022 John Cook