Recent Comments
Comments 125301 to 125350:
-
dhogaza at 04:04 AM on 24 January 2010
On the reliability of the U.S. Surface Temperature Record
Or maybe even restrain myself :) -
dhogaza at 04:04 AM on 24 January 2010
On the reliability of the U.S. Surface Temperature Record
" A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degree C - no matter what the variation occurring due to instrumentation type." We're interested in trends, so a constant bias has no effect, nor does the choice of baseline from which to compute the anomaly. For any bias B, and any two temperature reading at points in time N0 and N1, (N0-B) - (N1-B) = N0 - N1. And you can extend that into any statistical trend analysis taken over a time series N0 ... Nn. "I'm a chemical engineer with U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my phD's came to me with this nonsense, I'd fire him on the spot. " I could make a snarky statement about 9th grade algebra students but I'll withstrain myself. -
nofreewind at 00:12 AM on 24 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
I live in the East USA but have skied annually in California/the West, and talking about glaciers as a water supply seems awfully SILLY. Whatever glaciers there are in California are teeny (most all of the snow melts by September) and their contribution to melt has to be very small compared to the general melting snowpack. Worrying about glaciers, without considering an overall predicted rise in precipitation from AGW "theory", does not seem right to me. And even with the worst case AGW scenarios coming true, say a 4F rise in temp, is that going to stop a snowpack from forming? I don't think so. Here in the Eastern US snowfall is extremely variable - in some years we have very little snowpack - yet our rivers flow all year long, except in periods of a true drought, when they just flow low. I conclude the glacier scare is nothing but that, another scare. I don't have the deep scientific knowledge of AGW "theory" that many of you have, but common sense appears once again to rule the day on the Himalayan glacier/melt issue! -
Ned at 22:44 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
kforestcat writes: "I'm fully aware of how anomaly data is used (having used it in my own research)"

I'm not sure you actually do understand this, because your comments still show the same kinds of errors and confusion.

"NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested)."

NASA doesn't take temperature station readings, and nobody has suggested that the temperature sensors measure anomalies directly.

"Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location."

That is very confused. The temperature anomaly is the current daily (or monthly) temperature minus the mean temperature on the same day (or month) during a given reference period. You don't "divide" any temperatures. And Menne et al. are not measuring "instrument error". They are analyzing measurements of temperature as a function of site quality in order to determine the difference in temperature trends between well-sited and poorly-sited stations.

"Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events."

You still completely fail to understand what's going on here. Menne et al. are taking the temperature data and grouping them into categories based on the site quality. They then determine the difference in long-term trends between well-sited and poorly-sited stations. In the raw, unadjusted data, poorly-sited stations tend to have a slightly lower trend than well-sited stations. The network homogenization and adjustment process brings poorly-sited stations into closer agreement with well-sited stations.

"The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher."

That is just bluster. What the analysis shows quite clearly is that if anything, poorly-sited stations on average underestimate the warming trend, but that the network adjustment process is able to successfully compensate for this effect. And even if you were reluctant to accept that, the close agreement between in-situ surface temperature and satellite microwave temperature retrievals from the lower troposphere suggests that the surface temperature record is realistic.

"Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower;"

Huh? Again, this makes no sense. If a sensor always reads 5C too high, its anomaly will be exactly the same as if it were perfectly sited. If a sensor's environment changes such that the current temperature is biased high relative to the period of record, then it will have a positive anomaly, not a negative one.

"and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect)."
That is both confused and irrelevant to the paper at hand.

"Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different."

(1) Menne et al. work for NOAA, not NASA, and the paper being discussed here is about NOAA's temperature data. (2) You still seem confused about the relationship between measured temperature data and calculated temperature anomaly. (3) The entire point of this paper is to compare poorly-sited and well-sited stations. (4) By doing this comparison using trends in the anomaly rather than using the absolute temperatures, there's no need to compare stations within "a couple of hundred feet" of each other.

"As it stands Menne's data is useless nonsense and not really worth serious discussion."

Again, that is just bluster. It sounds to me like you don't understand the subject but are deeply invested in casting doubt on it. -
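For readers trying to follow the back-and-forth, here is a minimal sketch (entirely fabricated station data and ratings, not the Menne et al. code or data) of the kind of comparison the paper performs: each station is converted to anomalies against its own 1971-2000 baseline, stations are grouped by site rating, and the mean trend of each group is compared.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1971, 2010)

def station_series(bias, trend):
    """Absolute temps (C): constant siting bias + shared trend + weather noise."""
    return 12.0 + bias + trend * (years - years[0]) + rng.normal(0, 0.3, years.size)

# Invented network: poorly sited stations read 5 C too warm,
# but share the same underlying trend as the well-sited ones.
good = [station_series(0.0, 0.02) for _ in range(20)]
poor = [station_series(5.0, 0.02) for _ in range(20)]

def mean_trend(group):
    trends = []
    for series in group:
        baseline = series[(years >= 1971) & (years <= 2000)].mean()
        anomaly = series - baseline                  # per-station anomaly
        trends.append(np.polyfit(years, anomaly, 1)[0])
    return np.mean(trends)

print("well-sited trend:   %.4f C/yr" % mean_trend(good))
print("poorly-sited trend: %.4f C/yr" % mean_trend(poor))
# The constant +5 C siting bias leaves the trend comparison untouched.
```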
chris at 22:13 PM on 23 January 2010
Skeptical Science now an iPhone app
re #19 "The numbers don't seem to add up" The numbers add up pretty well if one considers the system in it's entirety (all the forcings and a realistic assessment of climate response times). So, for example, the 20th century global temperature evolution can be reproduced rather well by incorporating all of the contributions and climate response times [*](see Figure 1): [*] http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf It's possible to illustrate part of the difficulty with your analysis by considering the global temperature from the late 19th century to the mid 20th century [**]. The global warming during this period wasn't more than around 0.2-0.3 oC overall. It's just that the surface temperature was knocked back quite a bit for a while (see post just above) by volcanic activity. So the net warming in response to your net forcing of 0.5 W/m2 1910-1940 likely wasn't more than 0.2-0.3 oC (perhaps even a bit less, if there was a significant contribution from ocean current effects of the sort that Tsonis and Swanson have discussed). But the bottom line is that the nett effect can only be assessed by a realistic incorporation of all of the contributions and the earth's responses to these.... [**] http://www.cru.uea.ac.uk/cru/data/temperature/nhshgl.gif -
chris at 21:44 PM on 23 January 2010
Skeptical Science now an iPhone app
You're confusing "lag" with "time constant"/"response time", HumanityRules (see my post #20). It's pretty straightforward: make a step change in a forcing to a new value. The earth starts to warm essentially immediately (no lag!). The time taken for the earth to come to equilibrium with the new forcing is a function of the time constants/response times of the system (rapid time response of a few years in the atmosphere; slower time constants for penetration of heat into the "deeper" elements of the climate system, with a very slow response time indeed for the vast oceans to come towards equilibrium with the forcing). It's the latter that gives the "heat in the pipeline" that you remarked upon. That's all very straightforward I think. The mistake is to think that the response of the surface temperature can be encapsulated within individual simple hived-off pieces of the whole. For example we could look at the temperature rise during the early 20th century. There was some very dramatic volcanic activity in the late 19th century/early 20th century, and inspection of the global temperature record [*] shows that this knocked back the surface temperature by quite a large amount (0.2-0.3 oC) during a period of 20-odd years. However volcanic forcings are temporary; they have a significant short term effect on the surface temperature, which can be prolonged if there is a period of sustained volcanic activity [as in the period 1883 (Krakatoa) through to Soufriere, Santa Maria and Mt Pelee in 1902], and so their effects don't penetrate "deeply" into the climate system. So much of the earth's surface temperature suppression due to volcanic forcing was recovered relatively quickly through the period 1910-1930s. There was also a small solar contribution and an enhanced greenhouse effect contribution to the early 20th century temperature rise. The earth responds to these again without lag, but the full response to these persistent, long-term forcings will take a long time to saturate the elements of the climate system that have a high inertia to change (i.e. the oceans). The earth still hasn't come fully to equilibrium with the enhanced forcing as it stood in 1940 (say), let alone with the forcing as it stands at this particular point in time. Obviously, though, if we want to attribute the contributions to the 20th century temperature evolution, we have to consider all of these (including the negative forcing contributions like anthropogenic aerosols), and the manner in which the earth responds to these. It's not that complex. However it does require thinking (modelling) of the system in its entirety. One can't insist on cutting everything right back to individual components and simplistic responses and then complain that reality doesn't conform to a grossly oversimplified view - that's essentially to use straw-man argumentation! -
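A toy illustration of the time-constant point (all numbers are illustrative assumptions, not tuned climate values): in a crude two-timescale response to a step forcing, warming begins immediately, but equilibrium is approached on both a fast and a very slow time constant.

```python
import numpy as np

dF = 3.7        # step forcing, W/m2 (illustrative: roughly a CO2 doubling)
S = 0.8         # equilibrium sensitivity, C per W/m2 (assumed)
f_fast, tau_fast = 0.4, 4.0      # fast component: atmosphere/mixed layer (assumed)
f_slow, tau_slow = 0.6, 200.0    # slow component: deep ocean (assumed)

def T(t):
    """Surface temperature response to a step forcing applied at t = 0 (years)."""
    return S * dF * (f_fast * (1 - np.exp(-t / tau_fast))
                     + f_slow * (1 - np.exp(-t / tau_slow)))

for t in [1, 10, 50, 100, 500]:
    print(f"t = {t:>3} yr: {T(t):.2f} C of {S * dF:.2f} C equilibrium")
# No lag in onset - T(1 yr) is already nonzero - but most of the response
# takes centuries because of the slow ocean time constant.
```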
angliss at 16:27 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Kforestcat, I'm sorry, but you're off in the weeds on this one. What you describe with your pavement example is an example of signal + bias + noise. Because the instrument's location is constant, we can eventually come up with a correction mechanism to remove the bias from the data. That leaves us with signal + noise. Removing the noise is simple filtering, of which averaging is one variety. Mathematically, averaging a signal removes noise (increases the signal-to-noise ratio) at the rate of the square root of the number of samples. Averaging daily samples over the course of a week increases the SNR by nearly 3 over any single sample. So if we picked up thermal noise from a car one day, then we merely have to average that data point with others from the same instrument in order to dramatically reduce the impact of that noisy sample on the overall data. I'll grant you that, if you only have a single data point, biases and noise on that data point will be a major problem. But that's not the case with the temperature record. -
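A quick numerical check of the square-root-of-N claim (synthetic data; the noise level is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
true_temp = 20.0
# 10000 simulated weeks of 7 daily readings, each with noise of sigma = 1
readings = true_temp + rng.normal(0.0, 1.0, (10000, 7))

single_err = readings[:, 0].std()
weekly_err = readings.mean(axis=1).std()
print(f"single-sample error: {single_err:.3f}")
print(f"weekly-mean error:   {weekly_err:.3f}")
print(f"improvement factor:  {single_err / weekly_err:.2f}  (sqrt(7) ~ 2.65)")
```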
chris1204 at 16:22 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Regardless of site, we see an obvious rising temperature gradient from 1980 to 2009 based on a line of best fit. However, ‘eyeballing’ the unadjusted data suggests a striking fall based on a line of best fit from an anomalously high 1998 through 2009. Now we certainly don't want to cherry pick. The 1998 data was attributable to a very large El Nino. However, following on from the preceding post (‘The chaos of confusing the concepts’) with its discussion of the Lorenz attractor, I find myself wondering whether we may indeed be seeing evidence of greater inherent unpredictability than we commonly suppose. Eleven years, after all, seems a long period, especially when we consider that the preceding data set covers eighteen years. Should we be considering the two periods as one segment? Alternatively, should we be considering these periods as two distinct segments and asking why 1998 produced such a high El Nino (followed by a relatively warm period) and why 2007 – 2009 are producing a much lower gradient? Moreover, is this gradient likely to continue? I think the question of site location is clearly a furphy given the broad consistency between better and not so well located sites. However, deciding which periods we select to measure trends is of much more fundamental importance given the arbitrary nature of lines of best fit. Otherwise, we risk failing to ask obvious questions. -
Tom Dayton at 16:01 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Kforestcat, in your example you wrote "Say the mean 1971-2000 temperature well away from the parking lot...." But that's not of interest. Instead, the temperature on that given day, from that parking-lot-situated instrument, is differenced from the average temperature across 1971-2000 of that same instrument. -
Tom Dayton at 15:55 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Kforestcat, of course the temperature stations produce absolute temperatures as their "raw" data rather than as anomalies from a baseline. I have never seen anyone claim otherwise. You are misreading quite drastically. The baseline against which the anomalies are computed is the average temperature for that specific locality across whatever time range has been chosen as the baseline. Each station has its own, local, baseline computed. Then each individual temperature reading from that one given station is differenced from that baseline for that one given station. The result is a difference of that one reading, from that tailored baseline. That procedure is done separately for each individual temperature reading, each against its own individual, tailored, baseline. It is a simple mathematical transformation that has nothing to do with instrument error and nothing to do with instrument calibration. It is a simple re-expression of each individual temperature reading that preserves all changes from the baseline temperature. The resulting collection of individually transformed temperatures is the collection of "raw" anomalies. Those are the "raw" data that you see being discussed. -
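Tom Dayton's procedure, restated as code (station values invented for illustration):

```python
import numpy as np

def to_anomalies(temps_by_year, base_start=1971, base_end=2000):
    """Re-express one station's absolute temps as anomalies from that
    station's own baseline mean over base_start..base_end."""
    baseline = np.mean([t for y, t in temps_by_year.items()
                        if base_start <= y <= base_end])
    return {y: t - baseline for y, t in temps_by_year.items()}

# Two hypothetical stations: one reads a constant 5 C warmer than the other.
cool_site = {y: 10.0 + 0.02 * (y - 1971) for y in range(1971, 2010)}
warm_site = {y: t + 5.0 for y, t in cool_site.items()}

for name, s in [("cool", cool_site), ("warm", warm_site)]:
    a = to_anomalies(s)
    print(name, round(a[1971], 3), round(a[2009], 3))
# Identical anomalies: each per-station baseline absorbs its constant offset.
```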
From Peru at 15:37 PM on 23 January 2010
Why is Greenland's ice loss accelerating?
The paper states: "Our results show that both mass balance components, SMB and D (eq. S1), contributed equally to the post-1996 cumulative GrIS mass loss (Fig. 2A)." But then, Fig. 3 shows: Ice Discharge: -94 Gt/yr; Surface Mass Balance: -144 Gt/yr. Isn't this a contradiction? Then comes this statement: "A quadratic decrease (r^2 = 0.97) explains the 2000–2008 cumulative mass anomaly better than a linear fit (r^2 = 0.90). Equation S1 implies that when SMB-D is negative but constant in time, ice sheet mass will decrease linearly in time. If, however, SMB-D decreases linearly in time, as has been approximately the case since 2000 (fig. S3), ice sheet mass is indeed expected to decrease quadratically in time" What is this "r^2 = 0.97" and how is it related to the equations: MB = ∂M/∂t = SMB – D (S1) and δM = ∫dt (SMB–D) = t (SMB0–D0) + ∫dt (δSMB–δD) (S4)? Any idea? -
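On the r^2 question: it is the coefficient of determination of the least-squares fit - the fraction of the variance in the cumulative mass series that the fitted curve explains. A minimal sketch with fabricated numbers (not the paper's data) shows why a linearly decreasing SMB-D yields a roughly quadratic mass curve that a quadratic fit captures better than a linear one:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2009)
# SMB - D declining linearly in time, plus weather noise (all values invented)
smb_minus_d = -150.0 - 30.0 * (years - 2000) + rng.normal(0, 20, years.size)
mass = np.cumsum(smb_minus_d)          # cumulative mass anomaly, Gt

def r_squared(y, fit):
    return 1.0 - ((y - fit) ** 2).sum() / ((y - y.mean()) ** 2).sum()

for deg in (1, 2):
    fit = np.polyval(np.polyfit(years, mass, deg), years)
    print(f"degree {deg}: r^2 = {r_squared(mass, fit):.4f}")
# The quadratic fit wins because d(mass)/dt itself changes linearly with time.
```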
Kforestcat at 15:23 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Gentlemen

I'm fully aware of how anomaly data is used (having used it in my own research) and I know full well what can go awry in field experiments. We are talking about everyday instrument calibration and QA/QC - this is not rocket science. I firmly maintain the Menne 2010 paper is fundamentally flawed and entirely useless.

NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested). The temperature data is reduced to anomaly after the absolute temperature readings for a site are obtained. For example, see the station data for Orland (39.8 N, 122.2 W) obtained directly from NASA's GISS web site. The temperatures are recorded as Annual Mean Temperature in degrees C - not as an anomaly as you have suggested. (Tried to attach a NASA GIF as a visual aid - but did not succeed).

Bottom line: Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location. The Menne paper is absolutely useless for the stated purpose.

Anyone who actually collects field data (I have) knows they are going to immediately run into two fundamental problems when an instrument is improperly located: 1) they are not reading ambient air temperature and 2) neither the temperature readings nor the anomaly can be corrected back to a true ambient because other factors are influencing the readings.

For example: Suppose we have placed our instrument in a parking lot. Say the mean 1971-2000 temperature well away from the parking lot is 85F; but the instrument is improperly reading a mean of 90F. Now on a given day, say the ambient temp is 93F but your instrument is reading 105F (picked up some radiant heat from a car). Ok, our: Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events. The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher.

Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower; and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect).

Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different. As it stands Menne's data is useless nonsense and not really worth serious discussion.

Dave -
Charlie A at 15:15 PM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
nofreewind -- As Tom Dayton points out, the absence of glaciers just affects the timing of the water flow. Assuming constant annual precipitation, the total annual water flow in the rivers will stay the same, but there will be a bigger seasonal variation. Without glaciers, melting snowpack would be the source of summer water flow in most Himalayan/Indian rivers. I've seen some non-peer-reviewed articles that said that the loss of glaciers would cause most rivers in India to go dry during the summer, but have not seen any peer reviewed articles that had any such drastic predictions. The alarmist articles seem to ignore the snowpack, which is the primary storage in many areas, such as California as mentioned above by Tom Dayton. -
Tom Dayton at 14:58 PM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
nofreewind, glaciers and snowpack are natural reservoirs of water not only in the Himalayas, but in California and many places around the world. They hang on to precipitation during the winter and dole it out gradually as meltwater as the weather warms into Spring and Summer, and even into Fall. Huge numbers of people, agriculture, and industry rely on the resulting somewhat steady and predictable supply of water around the year. It is impossible to build enough artificial reservoirs to compensate for the loss of those natural reservoirs. Also, excessive supply of water (because it is not being held long enough and doled out in measured quantities) causes flooding by exceeding the short-term capacities of the human infrastructure. -
nofreewind at 14:36 PM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
I don't understand why the glaciers are so important to water supply. We don't have glaciers here in Pennsylvania USA, yet the rivers flow year round. In the Himalayas, the water comes down the mountains, not because of glaciers, but because the monsoons bring snow to the mountains. If there is precipitation, the rivers will flow, right? Why is it so important that the water is hundreds or thousands of years old glacier water? Let's get rid of the glaciers and get some fresh water down to drink! -
angliss at 14:25 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Kforestcat, I think you may be confusing two things here - bias error and probabilistic noise. The paper makes it clear that the unadjusted curves represent the measurements before known bias errors are removed, while the "adjusted" curves are after the bias errors have been corrected. Conversion to an anomaly is effectively the same as normalization, and the purpose is the same. Both serve to accentuate the part of the data that you care about. I do both regularly in my professional field of electrical engineering, especially when I'm interested in understanding the nature of noise plaguing my circuitry. Finally, what Watts et al are essentially saying is that heat islands, in this case caused by electrical transformers, waste treatment plants, air conditioners, or pavement, have made the global temperature record unusable. This paper points out that Watts is incorrect, but it's not the first paper to do so by any means. The following paper showed that well-established urban areas had the exact same trends as rural areas, but with a removable warm temperature bias: http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml To use an analogy, if a trampoline can get you 10 feet into the air out on a farm, there's every reason to believe that it'll get you just 10 feet into the air if you move it into a city. -
HumanityRules at 13:38 PM on 23 January 2010
Skeptical Science now an iPhone app
Chris I guess my point was made in #19. There is no evidence of lag in the early-mid century. The radiative forcing increase 1910-1940 coincides with a delta T which leaves nothing "in the pipeline". "the earth should somehow miraculously come instantaneously to the new forced surface temperature" - it seems miracles did happen 1910-1940. For the proposed system to work, lag would have to be a late 20th century phenomenon only. -
Charlie A at 13:37 PM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
Apparently, there are other errors in this section. The erroneous 2035 date has been acknowledged, but the IPCC has not acknowledged the error I've pointed out above in table 10.9. Another error is the statement, referring to Himalayan glaciers, that "Its total area will likely shrink from the present 500,000 to 100,000 square kilometers by the year 2035." The statement appears to have its original source in a 1996 article which states that the total worldwide extrapolar glacial area of 500k sq km is expected to go down to 100k sq km by 2350. I don't have a peer reviewed source handy for total Himalayan glacial area, but the UNEP/WGMS report Global Glacier Changes says the total area of Himalayan glaciers is 33,040 sq kilometers, so this appears to be yet another clue that the statements in this section should have been reviewed more thoroughly. Georg Kaser, a lead author of a WG1 chapter, has said that he told others in the IPCC of the errors, but they chose not to correct them. The entire section on Himalayan glaciers is not of that much importance. What is more important is that this is yet another example of problems in the IPCC review process. Nominations for reviewers of AR5 are now being taken, but only from selected organizations. The IPCC would be well served to include some reviewers that don't have a strong confirmation bias in favor of AGW, and for there to be procedures put in place that don't allow the lead authors to rely almost exclusively on their own publications, to the exclusion of other peer reviewed papers that conflict with the lead authors' opinions. -
Doug Bostrom at 13:29 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Swerving off topic, so this possibly may never see the light of day, but further to remarks on skepticism versus denial, etc.: English is a rich language and there's no need to use a single word to describe a plethora of approaches. Doubter, contrarian, skeptic, denier - they're all different in meaning and need to be applied individually. "Faithful" would be a better word for some, for that matter, seemingly detached from the material world. My limited experience w/ participating in discussions on this topic tells me I'm generally far too hasty in categorizing, to the point where I've already had to resort to apology too often, enough to make me more cautious about committing accidental slurs. As is said, discretion is the better part of valor. -
Charlie A at 12:48 PM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
19 Ricardo says "it's a typo, the starting year is 1947 not 1847." Ricardo, what is your data source for this statement? 2840 meters of retreat from 1845 to 1966 is consistent with other reports of 1600 meters of retreat from 1847 to 1906 (27 meters/year) and 1040 meters of retreat from 1906 to 1958 (20 meters per year). What is your source for saying that the starting year is 1947? My figures come from http://iahs.info/redbooks/a058/05828.pdf and several other sources with similar numbers. The current retreat rate of 10 meters/year comes from the 9th volume of Fluctuations of Glaciers, issued by the World Glacier Monitoring Service. If AR4 is incorrect and the others correct, then the snout of the Pindari has slowed from 27 m/yr up to 1906, to 20 meters per year to 1958, to 10 meters per year to 2006. If the AR4 is correct, then there has been an even more dramatic reduction in the retreat rate, from 135.2 meters/year down to 10 meters per year. -
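A quick check of the arithmetic in the figures quoted above (nothing here beyond the numbers already in the comment):

```python
retreat_m = 2840.0

# Long window (1845-1966), as the non-AR4 sources suggest:
print(retreat_m / (1966 - 1845))   # ~23.5 m/yr

# Window length implied by AR4's quoted rate of 135.2 m/yr:
print(retreat_m / 135.2)           # ~21 years, i.e. a start year around 1945
```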
Marcus at 12:29 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Steve Carson, I'm no expert in climatology, but if I read a paper which claimed that surface temperatures had been falling (not rising) for the last 30 years, then I'd seek independent verification from other sources before I accepted or dismissed the claim - that's what makes me a Skeptic (& a scientist). A denialist, by contrast, will automatically dismiss any evidence that doesn't fit their ideology - without independent verification - no matter how strong the evidence is (yet they still demand ever more evidence - even though they'll dismiss that too). If it helps, the other side contains what I call the "True Believers" - they accept the theory of global warming because someone they admire &/or want to believe tells them so - without independent verification. Personally, I have no time for denialists or true believers, but instead seek independent verification of every claim & counter-claim being made. It's always important to think for yourself rather than blindly accept the claims of people who might have a vested interest. Hope that makes more sense. -
Kforestcat at 12:20 PM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Gentlemen

You really ought to read the methods used before you gloat. The individual station anomaly measurements were based on each station's "1971-2000 station mean". See where the document states: "Specifically, the unadjusted and adjusted monthly station values were converted to anomalies relative to the 1971–2000 station mean."

In other words, the only thing this study measures is the difference in instrument error at each station. The absolute error occurring at individual stations because the station had not been properly located is not measured. A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degrees C - no matter what the variation occurring due to instrumentation type.

I'm a chemical engineer with U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my PhDs came to me with this nonsense, I'd fire him on the spot. Sorry boys, you are going to have to do better than this.

Dave

Response: Whenever you look at a graph of global temperature, invariably you're looking at "temperature anomaly" (the change in temperature), not absolute temperature. As NASA puts it, "the reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region". It's the change in temperature (e.g. the trend) that is of interest, and the analysis in Menne 2010 determines if there is any bias in the trend due to poor siting of weather stations. -
From Peru at 11:57 AM on 23 January 2010
Why is Greenland's ice loss accelerating?
As shown in the other Greenland post on this site, the best-fit curve of total ice mass loss from GRACE shows that Greenland ice loss is accelerating at a rate of 30 Gigatonnes/yr^2. But now it turns out that we have the contributions: Ice Discharge: -94 Gt/yr (39.5%); Surface Mass Balance: -144 Gt/yr (60.5%). So most of the ice loss comes from just surface melting! This is surprising because surface melt minus surface precipitation is something that is very weather-sensitive. Now I ask: 1. How could a weather-sensitive melting follow a quadratic function so closely (i.e. how could the acceleration be so close to a constant value of 30 Gigatonnes/yr^2)? 2. Can we expect this trend to persist, or will weather/climate variability "break" the smooth curve shown here at any time? -
Marcel Bökstedt at 11:22 AM on 23 January 2010
The chaos of confusing the concepts
This is a great site, because it takes the discussion to a high level, but still not so technical that you can't follow it. This particular posting is also quite interesting. I'm wondering about the correct definition and nature of the term "climate". I imagine that "today's weather" is characterized by a large number of parameters, varying in a way which might very well be (weakly) chaotic in the technical sense that the weather of next month is deterministically determined by the weather of today, but this dependence is very sensitive to small variations of the initial conditions. Climate on the other hand could be defined not as the weather at a particular point in time, but as a subset of the parameter space. The weather can vary wildly, but only inside the bounds prescribed by the "climate". Or maybe differently, in analogy to the example of the Lorenz attractor, climate is a subset of the parameter space where "weather" spends "most of its time". This would be a slightly different but possibly more flexible definition of "climate". It reminds me of the situation in celestial mechanics. There the laws of motion of the planets in the solar system are quite simple, certainly immensely simpler than the laws governing weather. But you can run into chaos. I believe that at least in some situations you can compute the orbits of the planets at a time in the far future (climate), but because of chaos you cannot calculate where in the orbit the planet will be at this specific time (weather). In this situation, the orbits of the planets change, so that there is a certain "dynamics of orbits". But it is far easier to predict the development of the orbit of a certain planet at a certain time than to predict exactly where the planet will be at this time. One thing which is unclear to me is if the "climate" of meteorological models can vary on its own. Is there some sort of "dynamics of climate" - long term non-forced natural variation - or is the climate supposed to be completely determined by the various types of "forcing"? -
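Marcel's picture can be made concrete with the Lorenz system itself. A minimal sketch (standard textbook parameters): two runs started a tiny distance apart decorrelate quickly - the "weather" is unpredictable - yet both trajectories stay confined to the same bounded attractor, the "climate".

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)
b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval, rtol=1e-9)

# Sensitivity to initial conditions: the separation grows by many orders of magnitude
sep = np.linalg.norm(a.y - b.y, axis=0)
print("separation at t = 5, 20, 40:", sep[500], sep[2000], sep[4000])

# ...yet both runs remain inside the same bounded region of state space
print("x-range, run a:", a.y[0].min(), a.y[0].max())
print("x-range, run b:", b.y[0].min(), b.y[0].max())
```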
stevecarsonr at 11:15 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Marcus, it's not my blog, added to which I'm new here - so apologies if my comment is out of order - but I don't understand the attraction of labeling others. (And I've seen it a lot at other blogs). You might characterize their opinion; that's certainly helpful. But "skeptic"/"denialist" seems like a classification of character - or maybe assassination of character. Now you might say the 2 people you are talking about are a distinguished physics professor and a meteorologist, so take the following as a general comment that doesn't apply to them as to the specifics, but maybe the general concept does.. Some people don't understand the radiative transfer equation (in fact, I've just realised that maybe I don't understand it properly; maybe someone can help with my totally off-topic question) because they don't have a physics background. So they don't understand how CO2 can impact temperature. Does this make them "a denialist"? Or someone who doesn't understand radiative physics? People are free to call them whatever they like, but one comment I would make is that the more personal attacks are thrown, the less likely people with questions are to sit down and try to understand a complex subject. And it is complex. And the scientific method is not natural and instinctive. -
Marcus at 10:45 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
HumanityRules - also my apologies. I wasn't actually suggesting that the denialists *invented* the urban heat island (even if that's how it comes across), but they have exploited it *mercilessly* to try & undermine the credibility of the surface temperature record - even long after study after study had shown that (a) the bias wasn't as strong as suggested, (b) the bias often gave cooler results than for nearby rural areas, (c) researchers always adjusted for the bias & (d) the surface record was closely correlated with the record from satellites. I doubt that this latest paper will silence their misuse of UHI for ideological purposes - even when it's based on the work of one of their own! -
Marcus at 10:35 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
stevecarsonr, my apologies-I misspoke before. I implied that the Urban Heat Island was a myth-that wasn't my intent. What I meant was that Urban Heat Islands being the primary cause of global warming (rather than CO2 emissions) was an Urban Legend. This paper seems to give added weight to the Urban Legend status of the view that poorly sited measuring stations, alone, are capable of producing a +0.16 degree change per decade in average global temperatures! -
Marcus at 10:32 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
CoalGeologist, people who raise doubts about the science, without seeking to prove that those doubts are valid, are not true skeptics - they're denialists - an important distinction. Lindzen, for example, is a skeptic: he doubted that global warming would be as extreme as predicted (due to the Iris Effect) & sought to prove it. On the matter of bias in US temperature records, Watts has behaved as a true skeptic - up to a point - but if he now refuses to publish the results of this study, then he proves himself to be just another denialist. His role as a denialist is already proven, however, in the way he behaves on other matters related to Anthropogenic Global Warming. -
chris at 09:19 AM on 23 January 2010
The chaos of confusing the concepts
I hope you don’t mind me responding to your message stevecarsonr (I happen to be on line, it’s a Friday night, and I’ve drunk a good part of a bottle of wine!). I think it comes down to the meaning of “chaos”, as I suggested in post #12 just above. I'm not sure that the phenomenon you describe (periodic ice-melt-induced cessation of the Atlantic conveyor, which I also used in my post above) is an example of chaotic behaviour. I would say that it is an example of stochastic behaviour. Weather has inherent chaotic elements since (i) its evolution is critically dependent on the starting conditions (that’s why weather models, but not climate models, are continuously “reset” to current atmospheric conditions), (ii) it progresses on, and is influenced by events on, a very small spatial scale, and (iii) there is an almost infinite set of influences that determine the temporal evolution of local atmospheric conditions. That’s not the case with the example of ice-melt-induced cessation of the Atlantic conveyor. There’s no question that this phenomenon (see e.g. http://en.wikipedia.org/wiki/File:Ice-core-isotope.png ) has a stochastic element. But it is a phenomenon that is bounded within a particular climate regime (a glacial period of an ice age with a particular continental arrangement that gives a strong thermohaline heat transport to high Northern latitudes), and is predictable in principle. Presumably if one knew something about the relationship between Arctic ice buildup and its evolution towards instability, then one could make a reasonable prediction (if one was around during the last glacial period, say!) of when the next ice-collapse-induced cessation event would occur. I think we also have to be careful not to assign the label “chaotic” to behaviour that we happen to lack the knowledge-base to understand predictively. Focussing on the thermohaline circulation (THC) and the effects of melt water, it does seem a possibility (see e.g. ftp://rock.geosociety.org/pub/GSAToday/gt9901.pdf, for an interesting read) that the THC could slow down or stop if sufficient freshwater from Arctic ice melt were to flood the Arctic ocean. But I don’t think this would be an example of chaotic behaviour, even if we might not know at the moment what specific conditions would be required (how hot it would have to be; how much ice melt, and how fast…). At some point we might well understand this process well enough that it might cease even to be considered “stochastic”. -
CoalGeologist at 08:59 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Marcus (Post #1) is already aware that he would be ill-advised to hold his breath waiting for an acknowledgment from Anthony Watts that the U.S. surface temperature data set apparently does not show a systematic bias toward warming. In fact, in the overall scheme of things, I’d wager it’s more likely that Watts would attempt to stop publication of the paper than to make such an admission. This issue has much broader implications, however, that extend well beyond the reliability of this particular data set, and bears on the entire debate over anthropogenic climate change (ACC). Skeptics such as Watts, Joseph D’Aleo and others are well within their rights to question the validity of the data, particularly considering the poor condition and non-ideal location of many of the measurement stations. More than that, it’s their duty (duty…duty…duty!) to raise skeptical criticisms, just as it is the duty of climate scientists to address reasonable concerns. The advancement of science depends on it. The success of this approach demands, however, that skeptical hypotheses be: a) testable, and b) potentially refutable. If not, then they fall into the domain of ideology, not science, and can never be considered anything more than unsubstantiated conjecture. Skeptics feel they’ve done their job merely by raising questions and doubts, while forgetting the essential next step of hypothesis testing. Sadly, past experience shows that in the current "debate", most arguments against anthropogenic climate change are effectively irrefutable, no matter how much evidence is brought to bear. Worse, the premise that ACC is wrong provides the touchstone by which all evidence is measured. Evidence that appears to support ACC is inferred to be wrong; evidence that appears to refute ACC is inferred to be valid. At the same time, the new re-assessment of the data by Menne et al. gives all of us a greater level of confidence in its reliability. (Readers may also be interested to read this analysis of the NOAA & NASA data: http://www.yaleclimatemediaforum.org/2010/01/kusi-noaa-nasa/) In fairness, we have to acknowledge the important role that skeptics have played in this process. It’s a shame if Watts and others are unable to derive any satisfaction from their efforts, even if the rest of us can. Most likely they’ll just keep plugging away, trying to prove what they already ardently believe. The surface temperature data set does, indeed, pose a “challenge”, to put it mildly. While the temptation is there to just chuck the entire lot, we can’t afford to do that, as the results are too important. The only other option is to try to make the best use of the available information, by removing as much of the error as possible without introducing bias. This may entail eliminating some stations, and making some adjustments to some data, where warranted. This is what the researchers at the National Climate Data Center have been trying to do in good faith. They deserve our appreciation, and that of Mr. Watts as well! -
stevecarsonr at 08:45 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Philippe, thanks for the clarification - I should have made it clear my comment was directed at some of the comments, not at the post itself. Everyone knows that the UHI exists; the IPCC has it at 0.006'C per decade. Perhaps that's correct, but the Fujibe paper and the Ren paper do question that number. Anyway, I'm off topic, as this post is about the effect of microsite issues on measurement. -
stevecarsonr at 08:35 AM on 23 January 2010
The chaos of confusing the concepts
It's great to see the chaos subject raised, especially from someone with a PhD in complexity studies. Hopefully you will indulge some questions from someone who knows less. First of all, I don't see what you have actually demonstrated. Why is it a demonstration that climate is not chaotic to show a graph of the temperature parameter in climate over 120 years? And how do you know that that particular graph is not actually chaotic? For example, I could plot a graph of the motions (the 2 angles) of the double pendulum for a period and say "see, it's not chaotic". What's the difference between these 2 cases? Second, while of course the climate is bounded in many ways, that doesn't mean it's not chaotic. The fact that there will always be 4 seasons or the poles always colder than the equator really isn't relevant. Or the fact that mid-latitudes might be between 0'C and 30'C on any given day. I know you didn't put forward these points, but I see them a lot with a kind of "QED climate is not chaotic" and I'm scratching my head.. In your opinion is this correct? I.e., the fact that the above few points are true doesn't disprove the possibility of chaotic behavior? Third, an example. The well-known "Atlantic conveyor belt" of ocean heat is driven by the thermohaline currents. Sufficient melting ice from Greenland/Arctic would disrupt the thermohaline circulation and the conveyor belt stops, northern Europe gets very cold and the Arctic re-freezes. But at what point does this occur? Prof FW Taylor in his book "Elementary Climate Physics" (2005 OUP) shows the 2-box model of the oceans, apparently originally proposed by Henry Stommel. It's fairly simple but shows unstable behaviour against perturbations in either direction. He comments that right now (2005) none of the GCMs show the reversal of the circulation; instead they vary between no real change and a 50% drop over the next 100 years. However - my point, finally arrived at - if this model (extended to a more realistic one) is correct, then surely the thermohaline will provide chaotic behavior at some point. (Perhaps strictly speaking it might not be chaotic, perhaps just unknown and complex at this stage). Comments? Fourth, without actually knowing the formulae for many important aspects of climate, how can "the climate community" (or a subsection thereof?) be so confident that climate is not chaotic? E.g. the aerosol effect, a negative feedback, but with error bars stretching between zero and the effect of CO2 at 380ppm (according to IPCC AR4). I could happily theorize about changes in ocean temperature increasing the production of sulphides, forming more clouds and providing more negative feedback.. or higher winds from higher temperature differentials picking up more dust from every drought-ridden desert and therefore seeding more clouds.. or not. I have the full extent of the error bars, and given that we don't really know the formulae they might exhibit strong negative feedback - or they might actually exhibit positive feedback under some circumstances. How to confirm "no chaos" when the equations are somewhere between "cloudy" and "unknown"? Sorry for writing such a lengthy set of questions, but it's a subject that really needs discussion - and thanks for posting on the subject. -
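On the two-box point: the sketch below is a toy illustration only, not Stommel's actual equations. It uses the canonical bistable normal form dq/dt = q - q^3 + f, with q standing in for overturning strength and f for a slowly drifting freshwater forcing; sweeping f up and back down shows the jump-and-hysteresis behaviour the comment gestures at (folds near f = ±0.385).

```python
import numpy as np

def relax(q, f, dt=0.01, steps=5000):
    """Integrate dq/dt = q - q^3 + f (forward Euler) to near-equilibrium."""
    for _ in range(steps):
        q += dt * (q - q**3 + f)
    return q

up = np.linspace(-0.6, 0.6, 13)
down = up[::-1]
q = -1.0  # start on one stable branch (the sign convention is arbitrary)
for label, sweep in [("up", up), ("down", down)]:
    for f in sweep:
        q = relax(q, f)   # track the equilibrium as the forcing drifts slowly
        print(f"{label:4s} f = {f:+.2f} -> q = {q:+.3f}")
# Going up, q stays on the negative branch until f exceeds ~ +0.385, then jumps;
# going back down, it stays on the positive branch until f drops below ~ -0.385.
```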
chris at 08:30 AM on 23 January 2010
The chaos of confusing the concepts
re your comment #13 batsvensson; i.e. "When you say you can predict that the weather in the next 6 months will be 20 to 30 C degrees, then you are NOT basing this on the system itself but on a pulse response to a ramp signal you know exists. However, still there is a very tiny and small possibility your prediction may turn out wrong, but it is so unlikely to happen that you are not regarding it, which you probably do perfectly right in. But nevertheless, it is less to do with weather being non-chaotic and more to do with weather being affected by a ramp signal that you are able to make such a prediction." Surely Ned is basing his prediction on "the system itself". It's got nothing to do with "weather being non-chaotic" (we know weather has significant chaotic elements); it's to do with the likely range of weather events being bounded by a rather well-defined climate regime, and a highly predictable seasonal variation essentially based on Newtonian physics. As you say, there is a tiny and small ("tiny" and "small"?!) possibility that Ned's prediction may turn out to be wrong. There are two main reasons why this might be the case: (i) the variability in the weather encompasses the possibility of rare extreme excursions out of the expected range within a particular climate regime; (ii) a contingent event (volcanic eruption; extraterrestrial impact) might intervene. -
Philippe Chantreau at 08:22 AM on 23 January 2010
The chaos of confusing the concepts
Very interesting stuff, thanks Jacob. -
Philippe Chantreau at 08:15 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Stevecarsonr, you're confusing UHI and microsite issues. Nobody believes that the UHI is a myth, and there is an abundant literature on the subject. GISTEMP corrects for UHI. Many papers about it were mentioned on this very blog. Watts' basic argument, insofar as it remains consistent (which is not always the case), was that siting issues affected readings so that thermometers read too high. Neither Watts nor anyone among his cheerleading crowd ever attempted to do a real data analysis to verify the hypothesis. One of his readers, however, tackled the problem as soon as enough stations were sampled (John V). He evidently found out that the hypothesis was not verified by data analysis, and endured so much malice at Watts's site that he didn't post there any more. Further analysis was done by NOAA once enough stations were sampled so that no regional bias could possibly affect the results, and the results were exactly the same. The very premise for the existence of Watts' blog has been invalidated numerous times. -
blouis79 at 08:13 AM on 23 January 2010
Models are unreliable
We have no idea how reliable climate models are. IPCC AR4 8.6.4, "How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models?": "A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed." Any person on earth knows that clouds can warm and cool. IPCC knows that too. Cloud feedbacks are not well modelled. IPCC AR4 8.6.3.2, "Clouds": "In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity (e.g., Senior and Mitchell, 1993; Le Treut et al., 1994; Yao and Del Genio, 2002; Zhang, 2004; Stainforth et al., 2005; Yokohata et al., 2005). Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks (Colman, 2003a; Soden and Held, 2006; Webb et al., 2006; Section 8.6.2, Figure 8.14). Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates." -
batsvensson at 07:51 AM on 23 January 2010
The chaos of confusing the concepts
Ned, One can roughly say there are two classes of chaotic system, deterministic and non-deterministic. The behavior of the latter is the same as a random system. However a deterministic chaotic system isn't the same as a random system. All deterministic systems can in principle be predicted. Therefore saying weather or climate is chaotic is not the same thing as claiming it cannot be predicted to some degree of certainty; the claim is only that it may be hard to predict - how hard is another matter though. Climate is affected by regular cyclic phenomena and random events; these can be seen as ramps and step pulses to the system. Any system - linear, chaotic or random - will respond to such pulses, and such a response can in principle be measured or filtered out from a time series of measurements. In a linear system this filtering is trivial, but for a chaotic system the process is non-trivial: a detected pulse may very well be a false positive in such a system, and this is very hard to rule out without knowing anything about the history of the system itself. A linear system's response to a pulse is easy to predict, but for a chaotic system one cannot in general do this, except within small time scales. With a time scale chosen short enough, even the behavior of a random system can be predicted, but the error will soon grow too large to get any meaningful prediction out of it. The difference in predicting a linear system vs. a nonlinear one lies in the rate of error growth. A linear system has a much smaller growth rate, and therefore we can make the prediction over longer time series with high confidence, while this is not the case with a non-linear system. That prediction errors grow with time is an inherent property of any simulation. The task in making a good prediction is to try to make a system in which the growth of error is as small as possible. When you say you can predict that the weather in the next 6 months will be 20 to 30 C degrees, then you are NOT basing this on the system itself but on a pulse response to a ramp signal you know exists. However, still there is a very tiny and small possibility your prediction may turn out wrong, but it is so unlikely to happen that you are not regarding it, which you probably do perfectly right in. But nevertheless, it is less to do with weather being non-chaotic and more to do with weather being affected by a ramp signal that you are able to make such a prediction. -
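A minimal numerical illustration of the error-growth point (values invented): the same tiny perturbation shrinks in a stable linear map but grows roughly exponentially in the logistic map's chaotic regime.

```python
import numpy as np

def run(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return np.array(xs)

eps = 1e-10
linear = lambda x: 0.9 * x + 0.05         # a stable linear (affine) map
chaotic = lambda x: 4.0 * x * (1.0 - x)   # logistic map in its chaotic regime

for name, f in [("linear", linear), ("chaotic", chaotic)]:
    a, b = run(f, 0.3, 40), run(f, 0.3 + eps, 40)
    err = np.abs(a - b)
    print(name, ["%.1e" % e for e in err[::10]])   # error at steps 0, 10, 20, 30, 40
# The chaotic map's error roughly doubles per step (Lyapunov exponent ln 2)
# until it saturates at order 1; the linear map's error steadily shrinks.
```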
stevecarsonr at 07:41 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
I don't think the UHI can be regarded as a myth. I can't comment on the US and haven't yet read the Memme/Menne paper. Perhaps the UHI is negligible in the US, as Peterson found. But a 2009 IJC paper, "Detection of urban warming in recent temperature trends in Japan" by Fumiaki Fujibe, showed a 0.1'C/decade UHI effect for the larger cities. This was based on 1979-2006 hourly data from 561 stations, compared with local population density data. You can see more about this paper at http://scienceofdoom.com/2010/01/17/urban-heat-island-in-japan/ There is also the Ren paper from 2008 in Journal of Climate which also found a significant UHI in China. -
Doug Bostrom at 06:56 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
It's one thing to endlessly kvetch and complain, another to actually exert some effort. Watts and crew did some work, hats off to them, misguided though they were. I hope these results will encourage doubters to purchase historical weather records for those locations where they are complaining about interpolation. As NewYorkJ remarks, many doubters will play the "hoax" wildcard yet again in order to explain this latest disaster for their strange cause. However, each time the doubt community must draw that card from their hand the remaining slice of the behavioral bell curve containing them loses area. -
chris at 06:54 AM on 23 January 2010
Skeptical Science now an iPhone app
re #18 Not quite sure what your difficulty is, HumanityRules: the 1880-current global temperature rise is around 0.85-0.9 oC (NASA GISS or HadCRUT3v). The atmospheric CO2 rise from 290 ppm (mid-late 19th century) to 386 ppm (current) should give 1.24 oC at equilibrium within the mid-range of climate sensitivity (3 oC of warming per doubling of [CO2]). So we've had 0.85-0.9 oC of an expected temperature response of ~1.25 oC. It's obvious (I would have thought) that the temperature response to enhanced radiative forcing is the equilibrium response. There shouldn't be a lag in the onset of the response of course, but the earth will take quite a while for the slow elements of the response, especially the accumulation of heat into the oceans, to come to equilibrium with the enhanced forcing. So the only "sleight of hand" would be to pretend that the earth should somehow miraculously come instantaneously to the new forced surface temperature. Even the most blatant efforts[*] to pursue that canard didn't go as far as to insinuate an instant temperature response to enhanced forcing. Perhaps you're having a general difficulty with the fact that the earth's temperature response to forcing, while not that complex, isn't amenable to simplistic interpretations. As several of us have pointed out on another thread, you do need to consider all of the forcings and their contributions (including anthropogenic aerosols, which have significantly countered anthropogenic greenhouse gas-induced warming). The question of the temporal response to enhanced forcing is difficult (there are obviously multiple time constants in the earth's response - the atmosphere responds quite quickly, the surface and especially the oceans much more slowly), and that's the reason that determination of climate sensitivity is difficult based on analysis of (say) the 20th century response to enhanced greenhouse forcing... [*] e.g. (see Schwartz 2007) http://www.skepticalscience.com/climate-sensitivity.htm and his retraction and revision of the notion of a fast earth temperature response to external forcing -
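The 1.24 oC figure follows from the standard logarithmic forcing relation, equilibrium delta-T = S x ln(C/C0) / ln 2. A quick check, with the sensitivity value assumed in the comment:

```python
import math

S = 3.0               # climate sensitivity, C per CO2 doubling (mid-range, as assumed above)
c0, c1 = 290.0, 386.0  # ppm, mid-late 19th century vs current
dT_eq = S * math.log(c1 / c0) / math.log(2.0)
print(f"equilibrium warming: {dT_eq:.2f} C")   # ~1.24 C, as stated
```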
NewYorkJ at 06:36 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Typo in 3 places: "Memme 2010" should be "Menne 2010".

So the Watts project shows there is, if anything, an overall cool bias in the raw data, which is a moot point considering the adjustments remove most of this bias: "Adjustments applied to USHCN Version 2 data largely account for the impact of instrument and siting changes, although a small overall residual negative ("cool") bias appears to remain in the adjusted maximum temperature series."

One criticism of the Menne result is that it relies on the rating classifications put together by the Watts army of volunteers. How reliable is that data? Do they have the expertise and objectivity needed to assign a rating to each station?

The Watts project has served its purpose, which is to spread doubt about the data among laypersons. Peer-reviewed academic studies will just be dismissed as being part of the hoax. How can they be believed, when "photos" prove otherwise?

Response: Thanks for the alert, I've fixed the typo. Re the issue of relying on the surfacestations.org classifications, NOAA also have their own independent ratings. The dotted lines in the figures above represent the good/bad trends according to their own classifications, while the solid lines are according to the surfacestations.org classifications.
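For readers who want to see the mechanics of the good/bad comparison, here is a toy sketch with fabricated placeholder series - not the actual USHCN data; the slopes and noise levels are arbitrary assumptions chosen only to mimic the raw-data result described in the post:

import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1980, 2010)

# Made-up mean anomaly series: the "bad" (poorly sited) group is given
# a slightly smaller warming trend than the "good" group.
good = 0.020 * (years - 1980) + rng.normal(0.0, 0.10, years.size)
bad = 0.015 * (years - 1980) + rng.normal(0.0, 0.10, years.size)

# Compare least-squares linear trends, expressed per decade
trend_good = np.polyfit(years, good, 1)[0] * 10.0
trend_bad = np.polyfit(years, bad, 1)[0] * 10.0
print(trend_good, trend_bad)  # the "bad" group trends slightly cooler

The point of such a comparison is that it tests trends, not absolute readings, which is what matters for the reliability question.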
Albatross at 06:04 AM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
Jesús Rosino @4: Thanks for the link to Karger et al. It is by far the best objective and quantitative discussion of the status and outlook of the Himalayan glaciers that I have come across. I highly recommend it. After reading their "report", it is clear that there is still much reason for concern.
chris at 04:25 AM on 23 January 2010
The chaos of confusing the concepts
Nice article. It's worth restating that any treatment of chaos requires careful consideration of exactly what one means by the term in any particular instance. The notion that climate might be chaotic in the same sense that weather is chaotic is not really supportable.

The example of ice age cycles just referred to is a good one. During the last several hundreds of thousands of years the earth underwent glacial-interglacial-glacial transitions which had profound effects on climate regimes - there was nothing chaotic about the broad properties of these transitions and the climate states they induced. The transitions were paced by earth orbital cycles, and in each glacial/interglacial transition the earth was driven towards a new equilibrium state having characteristic (and rather reproducible through several cycles over hundreds of thousands of years) global temperatures, atmospheric CO2 concentrations, ice sheet coverage, sea levels etc.

And in the general state, climates and their responses to variations in forcings are non-chaotic. Of course there may be stochastic elements to the forces that vary these states. For example, the "transient" temperature rises and falls in the N. hemisphere during Dansgaard-Oeschger and Heinrich events in glacial periods seem to be due to periodic ice discharge from the Arctic ice sheets, and temporary slowing or stopping of the thermohaline circulation. These may be contingent/stochastic events (i.e. essentially non-predictable), but local climates likely responded in a well-defined manner according to the resulting changes in local ocean/air heat transport, atmospheric moisture content etc.

The idea that climate is something like the long-term accumulation of weather is a silly concept that is presumably raised so as to give the impression that climate is hopelessly non-predictable, given that one can't predict weather. As Ned has said, this isn't true. The relationship between weather and climate is, of course, more sensibly defined the other way round: weather is the day-to-day variation in seasonal atmospheric parameters (temperature, wind, precipitation) within a given climate regime...
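To make the weather/climate distinction concrete, here is a toy sketch - the logistic map stands in for any chaotic system; it is an illustration of my own, not anything from the article - showing that individual trajectories diverge rapidly while the long-run statistics stay stable:

import numpy as np

def logistic_trajectory(x0, r=4.0, n=100_000):
    # Iterate the logistic map x -> r*x*(1-x), a standard chaotic system
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a tiny perturbation of the start

print(abs(a[60] - b[60]))   # "weather": the trajectories have fully diverged
print(a.mean(), b.mean())   # "climate": both long-run means are ~0.5

The same point carries over to forced systems: the statistics respond predictably to forcing even though no single trajectory is predictable.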
danielbacon at 04:24 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
http://ams.confex.com/ams/15AppClimate/techprogram/paper_91613.htm

"The National Weather Service MMTS (Maximum-Minimum Temperature System) -- 20 years after. Nolan J. Doesken, Colorado State Univ., Ft. Collins, CO.

During the mid 1980s, the National Weather Service began deploying electronic temperature measurement devices as a part of their Cooperative Network. The introduction of this new measurement system known as the MMTS (Maximum-Minimum Temperature System) represented the single largest change in how temperatures were measured and reported since the Cooperative Network was established in the 1800s. Early comparisons of MMTS readings with temperature measurements from the traditional liquid-in-glass thermometers mounted in Cotton Region shelters showed small but significant differences. During the first decade, several studies were conducted and published results showed that maximum temperatures from the MMTS were typically cooler and minimum temperatures warmer compared to traditional readings. This was a very important finding affecting climate data continuity and the monitoring of local, regional and national temperature trends.

It has now been 20 years since the initial deployment of the MMTS. The Colorado Climate Center at Colorado State University has continued side by side daily measurements with both the MMTS and the traditional liquid-in-glass thermometers. This paper presents a 20-year comparison of temperatures measured 4 meters apart. Results show that little has changed in the relationship between MMTS and liquid-in-glass. Despite a yellowing of the MMTS radiation shield over time, the MMTS continues to read cooler during the daylight hours at all times of year. Minimum temperatures show little difference but with a small seasonal cycle in temperature differences. The largest differences continue, as they were first observed in 1985, to occur with low sun angles, clear skies, light winds and fresh snowcover. In addition to quantitative comparisons, some general comments on the impact of MMTS and other electronic temperature measurements on long-term temperature measurements and observed trends will also be offered."

So the Menne result is consistent with these findings from 2005.
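Here is a toy sketch of the kind of side-by-side comparison the abstract describes, with entirely made-up numbers - the ~0.4 °C daytime cool offset below is an illustrative assumption, not Doesken's measured value:

import numpy as np

rng = np.random.default_rng(0)

# Simulated daily maximum temperatures from two co-located sensors;
# the MMTS is given an artificial cool offset plus instrument noise.
lig_tmax = rng.normal(20.0, 8.0, 365)                   # liquid-in-glass
mmts_tmax = lig_tmax - 0.4 + rng.normal(0.0, 0.2, 365)  # MMTS reads cooler

# The paired mean difference is the quantity that matters for data
# continuity when a network switches instrument types.
diff = mmts_tmax - lig_tmax
print(diff.mean(), diff.std())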
Jacob Bock Axelsen at 03:44 AM on 23 January 2010
The chaos of confusing the concepts
@Berényi Péter I just browsed through your reference, and I have some rather informal comments.

The paper bases its statements solely on power spectral analysis of time series data. This gives you resonance frequencies (non-chaos) and characteristic timescales (indicative of chaos) - no strange attractors, no direct correlations or actual physical processes. For instance, the paper claims that turbulence is present on the millennial scale over less than an order of magnitude in a bounded frequency region. Even for power spectra I would say this is weak evidence. Turbulence in heat transfer over millennia cannot exist within the solar system, which the paper also states. The requirements for turbulent Navier-Stokes dynamics in advection are simply destroyed by viscosity and dissipation. That is also why climate models work well based on average radiation assumptions about the structure of the atmospheric energy budget.

The paper claims that the source could be the turbulent galactic electron density field modulating cosmic ray fluxes. However, research indicates that this mechanism is too weak to cause major climate change: http://www.agu.org/pubs/crossref/2009/2009GL037946.shtml http://www.skepticalscience.com/cosmic-rays-and-global-warming.htm

The final claim is that CO2 power spectra give indications of chaos during the last 500 million years. Keeping the regularity of ice age cycles in mind, I would personally need more proof to accept anything more than quasi-linear responses. I hope this was useful.
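To illustrate what a power spectrum can and cannot tell you, here is a minimal sketch (all signal parameters are arbitrary assumptions): a clean cycle produces a sharp spectral peak, while noise fills a broad continuum - and distinguishing "chaotic" broadband structure from plain stochastic noise takes more than the spectrum alone:

import numpy as np

n, dt = 4096, 1.0
t = np.arange(n) * dt
rng = np.random.default_rng(0)

# A periodic "resonance" buried in broadband noise
signal = np.sin(2.0 * np.pi * 0.05 * t) + rng.normal(0.0, 1.0, n)

# Periodogram: squared magnitude of the real FFT
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, dt)

# Skip the DC bin; the injected 0.05-cycles-per-step peak stands out
print(freqs[1 + np.argmax(power[1:])])  # ~0.05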
Ian Love at 03:28 AM on 23 January 2010
The IPCC's 2035 prediction about Himalayan glaciers
As earlier posters have implied, other science-based sites (e.g. Deltoid, RealClimate) have also put up articles that add to the background and current status of Himalayan glaciers. Of interest is the 2009 article by Xu et al., Black soot and the survival of Tibetan glaciers (open access): "We find evidence that black soot aerosols deposited on Tibetan glaciers have been a significant contributing factor to observed rapid glacier retreat."
HumanityRules at 02:42 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
My guess is that it wasn't deniers who instigated the rating process for weather stations, but good old climatologists way back. What was the initial justification for this? And Marcus, again, it wasn't deniers who invented the Urban Heat Island idea. I just put the term into Google Scholar with many limitations and got over 400,000 hits (900,000 without the limitations). As far as I'm aware, denier websites don't show up in Scholar searches.
JasonW at 02:38 AM on 23 January 2010
The chaos of confusing the concepts
Excellent post, concise and very readable. Thanks! -
Dennis at 02:30 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
Watts writes that "the GISS data isn’t much to be trusted," but he doesn't say why. -
Philip64 at 01:53 AM on 23 January 2010
On the reliability of the U.S. Surface Temperature Record
A clear and unambiguous analysis. However, I think it is optimistic to expect the outcome of this survey to be presented by Watts or other skeptic bloggers in the way it has been here. If they mention it at all, it will be in a way that shows (if not proves!) somehow or other that the results they were hoping for have indeed been found. Several studies published in Science a few years ago pretty much demolished the notion of the urban heat island, but since when have such things ever troubled the denialist/skeptic/contrarian campaign? Admittedly, since the field research was organised by their own sympathisers on this occasion, it will be harder to conclude that corrupted scientists and the socialist/liberal/big government conspiracy have rigged the data. But never underestimate the inventiveness of the paranoid mind.