Recent Comments
Comments 116501 to 116550:
-
Ken Lambert at 00:32 AM on 30 June 2010Astronomical cycles
Chris #106, kdkd #108: Well no, Chris. The core of CO2 greenhouse theory is a simple IPCC equation: F.CO2 = 5.35 ln(CO2b/CO2a) W/sq.m, where CO2a is the pre-industrial CO2 concentration (280 ppmv) and CO2b is the current concentration. This is logarithmic and monotonic. Every year the radiative energy flux imbalance F.CO2 increases with CO2 concentration. The argument that global 'noise' seriously distorts this heat accumulation assumes some sort of storage mechanism (where else but the oceans?) which can store heat and release it in noisy bursts globally. OHC in the top 700m has been flat for 6 years, which seems inconsistent with anything but a flattening sea level. Peter Hogarth #107: I had a good look at "Visual Depictions of Sea Level Rise" and links from March 10. Please explain the trend lines for the deployment of Jason 2 and Envisat. I am seeing Jason 1 with a linear trend for the last 8 years of about 2.1 mm/year and an offset of 4-6 mm at the start of the TOPEX-Jason transition in 2002, from here: http://sealevel.colorado.edu/current/sl_ib_ns_global.jpg -
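The simplified forcing expression quoted in the comment above is easy to evaluate; here is a minimal sketch, assuming only the 5.35 coefficient and 280 ppmv baseline the comment itself cites:

```python
import math

def co2_forcing(c_now, c_pre=280.0):
    """Radiative forcing in W/m^2 from the simplified logarithmic
    expression F = 5.35 * ln(C/C0), with C0 the pre-industrial
    concentration in ppmv (values as quoted in the comment)."""
    return 5.35 * math.log(c_now / c_pre)

# Doubling pre-industrial CO2 gives the familiar ~3.7 W/m^2:
print(round(co2_forcing(560.0), 2))  # → 3.71
```

Because the function is logarithmic, each successive increment of CO2 adds less forcing than the last, which is the monotonic-but-decelerating behaviour the comment describes.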
JMurphy at 00:12 AM on 30 June 2010September 2010 Arctic Ice Extent Handicapping Via ARCUS
Arkadiusz Semczyszak wrote : "I recommend the latest post from Steven Goddard 28/06/2010 (WUWT)..." That's funny, but I recommend his post of 26 June (Latest Barrow Ice Breakup On Record?). There are many people on there showing him the error of his ways. The day after Steven Goddard wrote that post, hoping for the latest coastal ice break-up on record at Barrow, the ice vanished from the beach. Have a look at it now. Perhaps he was confused because the picture he shows has Barrow shrouded in mist: he couldn't see the ice but just assumed (hoped/prayed) it was there. It actually looks as if the ice broke off the day before Goddard wrote his post. Join in the laughter at Deltoid -
HumanityRules at 00:01 AM on 30 June 2010Return to the Himalayas
"It's easy to dismiss the loss of ten percent of a region's water supply as insignificant in the grand scheme of things, but imagine proposing to an engineer responsible for the operation of a municipal water district that ten percent of his reservoir capacity was to be removed for no reason but an anticipated accident that might be avoided." I'm not accusing you of anything here, Doug, but there seems to be buried in this comment one of the bigger problems of developing nations. They are more susceptible to the variations that nature can throw at them, no matter what the underlying reason. One way to mitigate any future water shortages might be for these nations to produce more water engineers and take control of their water supplies. It's ironic that dam projects in China and India, which might help these nations gain better control of this resource, have often been criticized by the environmental lobby. On a more general point, it does seem the problem with this science now is that it is expected to give black and white answers to problems thrown at it by policy and politics, where in fact the situation is more grey. Maybe New Scientist is pandering too much to this rather than remaining firmly grounded in the science. -
Berényi Péter at 23:54 PM on 29 June 2010Temp record is unreliable
#73 Ned at 21:52 PM on 29 June, 2010: "Here's a comparison of the gridded global land temperature trend, showing the negligible difference between GHCN raw and adjusted data" What you call negligible is in fact a 0.35°C difference between the adjustments for 1934 and 1994 in your graph. If it is based on Zeke Hausfather, then it's his assessment. Now, 0.35°C in sixty years makes a 0.58°C/century boost for that period. Hardly negligible. It is actually twice as much as the adjustment trend I have calculated above (0.26°C in ninety years, 0.29°C/century). An effect of about the same order of magnitude is seen for USHCN. It is a 0.56°F (0.31°C) difference between 1934 and 1994, which makes a 0.52°C/century increase in trend for this period. Therefore, if anything, my calculation was rather conservative relative to more careful calculations. It should also be clear it has nothing to do with the grid, so stop repeating that, please. What is not shown by more complicated approaches is the curious temporal pattern of adjustments to primary data (because they tend to blur it). Finally, dear Ned, would you be so kind as to first understand what is said; then you may post a reply. -
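The per-century conversions traded back and forth in this exchange are simple arithmetic; a short sketch, using only the figures quoted in the thread, reproduces them:

```python
def per_century(delta_c, years):
    """Convert a temperature difference (°C) accumulated over a
    period (years) into a °C/century rate."""
    return delta_c / years * 100.0

# Figures as quoted in the comments above:
print(round(per_century(0.35, 60), 2))  # 1934-1994 adjustment difference → 0.58
print(round(per_century(0.26, 90), 2))  # earlier GHCN estimate → 0.29
print(round(per_century(0.31, 60), 2))  # USHCN 0.56°F ≈ 0.31°C → 0.52
```

This confirms the rates each side cites are internally consistent with their quoted temperature differences; the disagreement is over which differences are the right ones to compare.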
Baa Humbug at 23:52 PM on 29 June 2010What causes the tropospheric hot spot?
Some commenters here are posing the question "would we see a hot spot if the warming was caused by solar output?" I'm happy to let the IPCC AR4 answer that. From the FAQ document, pages 120-121: "Models and observations also both show warming in the lower part of the atmosphere (the troposphere) and cooling higher up in the stratosphere. This is another ‘fingerprint’ of change that reveals the effect of human influence on the climate. If, for example, an increase in solar output had been responsible for the recent climate warming, both the troposphere and the stratosphere would have warmed." And from page 674 of Chapter 9, WG1: "The simulated responses to natural forcing are distinct from those due to the anthropogenic forcings described above. Solar forcing results in a general warming of the atmosphere (Figure 9.1a) with a pattern of surface warming that is similar to that expected from greenhouse gas warming, but in contrast to the response to greenhouse warming." The charts regarding this hot spot are on the same page in the report; that is, the charts back up what the report says. And it says that due to anthropogenic warming, there SHOULD be a hot spot that is about 2 to 2.5 times greater than the surface warming. If the surface has warmed by 0.7°C then the hot spot should have warmed by 1.4 to 1.75°C. Surely large enough to detect. If however the hot spot hasn't warmed enough to detect, then the surface couldn't have warmed by 0.7°C. One or the other, can't be both. Discuss -
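The amplification arithmetic in the comment above can be checked in one line; a trivial sketch using the 0.7°C surface warming and the 2 to 2.5 amplification factors the commenter cites from AR4:

```python
surface_warming = 0.7  # °C, the value quoted in the comment
low, high = 2.0, 2.5   # amplification factors cited from AR4

# Expected tropospheric hot spot warming range:
print(round(surface_warming * low, 2), round(surface_warming * high, 2))  # → 1.4 1.75
```

Note the upper bound is 1.75°C, not 1.74°C; the comment's figure appears to be a small rounding slip.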
Peter Hogarth at 23:50 PM on 29 June 2010September 2010 Arctic Ice Extent Handicapping Via ARCUS
Arkadiusz Semczyszak at 22:26 PM on 29 June, 2010 Goddard states: "You will also note that most of the world’s sea ice is located in the Antarctic" Antarctic: around 19 million square km max extent, at an average thickness of approx 0.87m. Arctic: around 15 million square km max extent (limited by the surrounding land masses), at an average thickness of approx 2m. Which has more sea ice (at least for now)? I'll dig out references, as this is from memory, but it's about right. I'll read the next paragraph now... -
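The extent-times-thickness comparison above is easy to check; a rough sketch using the commenter's from-memory figures (extent in km², mean thickness in m; these are approximations, not measured values):

```python
def ice_volume_km3(extent_km2, thickness_m):
    """Approximate sea-ice volume (km^3) from areal extent and mean
    thickness, converting thickness from metres to kilometres."""
    return extent_km2 * thickness_m / 1000.0

antarctic = ice_volume_km3(19e6, 0.87)  # larger extent, thinner ice
arctic = ice_volume_km3(15e6, 2.0)      # smaller extent, thicker ice
print(round(antarctic), round(arctic))  # → 16530 30000
```

So on these figures the Arctic holds nearly twice the ice volume despite its smaller maximum extent, which is the point the comment is making against Goddard's extent-only claim.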
johnd at 23:44 PM on 29 June 2010Abraham reply to Monckton
scaddenp at 08:14 AM, whilst the climate might be described as "stable", agriculture has continually had to deal with rapid climate change, forcing farmers and all manner of farmed animals and plants to adapt quite rapidly. This is not by virtue of extreme changes in the weather, but by migration. Animals that had developed over generations under one set of climatic conditions either gradually or suddenly find themselves in areas where conditions are totally different. Dairy cattle bred for the cold, wet conditions of England or Europe find themselves being milked in a somewhat drier Australia, for example, or in a decidedly hotter tropical Asia. The Australian fine wool industry was largely built on sheep with origins in the dry and arid areas of northern Africa, then from the plains of Spain; within a few decades they adapted to the colder, wetter conditions of southern Australia. These days, with modern transportation for both domestic and international movement, animals often have to adapt from one extreme to another very quickly. The limitation seems to be not so much animals from colder, wetter areas adapting to warmer, drier conditions, but the reverse. Even grain crops have migrated, but here technology in the form of genetic modification and breakthrough cultivation practices has allowed production to occur and increase in areas previously considered unviable. The changes that have occurred within one's lifetime would only have been imagined by those of the previous generation who had both imagination and vision, and chose to try to overcome what others saw as impossible obstacles. That is one thing that remains unchanged today. -
David Grocott at 23:01 PM on 29 June 2010What causes the tropospheric hot spot?
fydijkstra, You commend John for stating that "The hot spot is not a unique greenhouse signature", and then assert that not finding it must mean "that the greenhouse signature is weaker than most AGW advocates think". Actually, if we accept the large caveat that the hot spot hasn't yet been found (see Santer 2008 amongst others), we can conclude one of three things: 1) Our measurements have not been accurate enough; 2) The models that predict the tropospheric hot spot in response to warming are wrong; 3) There is in fact much less warming than our measurements tell us there is. Number one is not popular with the 'sceptics', which leaves numbers two and three. If we go with number two we have to accept that the laws of physics are wrong (at least that's my understanding - I'm not a modeller), and if we go with number three we have to accept that all of our surface and satellite measurements are wrong. Both of these arguments are difficult to make. As I pointed out at #17, the fact that the hot spot is not a unique greenhouse signature means that its perceived absence would have far wider ramifications than simply saying humans aren't causing global warming. In light of that, I find it far more conceivable to conclude that the hot spot has in fact been found (see Santer 2008), than that all of our surface and satellite temperature measurements are wrong, or that the laws of physics need revising. If the hot spot was unique to the greenhouse signature the 'sceptics' would have a much easier job. -
Arkadiusz Semczyszak at 22:26 PM on 29 June 2010September 2010 Arctic Ice Extent Handicapping Via ARCUS
I recommend the latest post from Steven Goddard, 28/06/2010 (WUWT) - for me it is very comprehensive and shows that the Arctic ice extent is not as remarkable as it is made out to be. For me, the most telling part is this quote from The New York Times, 1969: "From the 9th century to the 13th century almost no ice was reported there. This was the period of Norse colonization of Iceland and Greenland. Then, conditions worsened and the Norse colonies declined. After the Little Ice Age of 1650 to 1840 the ice began to vanish near Iceland and had almost disappeared when the trend reversed, disastrously crippling Icelandic fisheries last year." In the SH (http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.anomaly.antarctic.png) the record will probably fall (as the forecasts say), but otherwise ... -
Ned at 22:24 PM on 29 June 2010Temp record is unreliable
BP writes: I have not written proper software either just used quick-and-dirty oneliners in a terminal window. Maybe you should stop making allegations of fraud based on "quick and dirty oneliners"? Especially on a topic where many people have invested huge amounts of their own time on far more sophisticated analyses? -
Ned at 22:18 PM on 29 June 2010Temp record is unreliable
In my comment above, the link associated with the phrase "many people have looked into this in vastly more detail than BP" is sub-optimal. The link there is to a cached page at google, when it should be to Zeke Hausfather's comparison of GHCN analyses. -
Berényi Péter at 22:11 PM on 29 June 2010Temp record is unreliable
#67 doug_bostrom at 12:44 PM on 29 June, 2010 how about explicitly publishing your (admittedly simple sounding but I'm a simpleton) arithmetic method you're using to produce your datapoints? Listen, I think the description of the procedure followed is clear enough, anyone can replicate it. I am not into "publishing" either, it is not my job. It is not science proper. That would require far more resources and time. I am just trying to show you the gaps where PhD candidates could find their treasure. The trails can be followed, and if anyone is concerned about it, things I write here are published under the GNU Free Documentation License. I have not written proper software either, just used quick-and-dirty one-liners in a terminal window. Anyway, here you go. This is what I did for USHCN, as recovered from the .bash_history file. [Oops. Pressed the wrong button first]
$ wget ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean*
$ grep '^425' v2.mean > ushcn.mean
$ grep '^425' v2.mean_adj > ushcn.adj
$ cat ushcn.mean|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.mean_monthly
$ cat ushcn.adj|perl -e 'while (<>) {chomp; $id=substr($_,0,12); $y=substr($_,12,4); for ($m=1;$m<=12;$m++) {$t=substr($_,11+5*$m,5); printf "%s_%s_%02u %5d\n",$id,$y,$m,$t;} }'|grep -v ' [-]9999$' > ushcn.adj_monthly
$ cut -c-20 ushcn.mean_monthly | sort > ushcn.mean_monthly_id
$ cut -c-20 ushcn.adj_monthly | sort > ushcn.adj_monthly_id
$ uniq -d ushcn.mean_monthly_id
$ uniq -d ushcn.adj_monthly_id
$ sort ushcn.mean_monthly_id ushcn.adj_monthly_id | uniq -d > ushcn.common_monthly_id
$ (sed -e 's/^/0 /g' ushcn.mean_monthly; sed -e 's/^/1 /g' ushcn.adj_monthly; sed -e 's/^/2 /g' ushcn.common_monthly_id;)|sort +1 -2 +0 -1 > ushcn.composite_list
$ sed -e 's/ */ /g' ushcn.composite_list|perl -e 'while (<>) {chomp; ($i,$id,$t)=split; if ($i==2 && $id eq $iid && $id eq $iiid) {$d=$tt-$ttt; printf "%s %d\n",$id,$d;} $iiid=$iid; $iid=$id; $ttt=$tt; $tt=$t;}' > ushcn.adjustments_monthly_by_station
$ sed -e 's/^............_//g' -e 's/_.. / /g' ushcn.adjustments_monthly_by_station | sort > ushcn.adjustments_annual_list
$ echo '#' >> ushcn.adjustments_annual_list
$ cat ushcn.adjustments_annual_list | perl -e 'while (<>) {chomp; ($d,$t)=split; if ($d ne $dd && $dd ne "") {$x/=$n*10; printf "%s\t%.3f\n",$dd,$x; $n=0; $x=0;} $n++; $x+=$t; $dd=$d;}' > ushcn.adjustments_annual.txt
$ openoffice -calc ushcn.adjustments_annual.txt
-
fydijkstra at 22:00 PM on 29 June 2010What causes the tropospheric hot spot?
The hot spot is not a unique greenhouse signature, and finding the hot spot doesn't prove that humans are causing global warming. That's true, John. The hot spot may be caused by more than one phenomenon. But what does not finding the hot spot mean? It means, at least, that the greenhouse signature is weaker than most AGW advocates think. And that is exactly what Jo Nova is telling us. Climate sceptics do not deny the fundamental physics underlying the AGW theory. Their message is: (1) global warming is less than we are told, (2) the warming is less caused by the greenhouse effect than we are told and (3) the catastrophic effects of the warming are less than we are told. The problems that John Cook and others have with identifying the hot spot confirm the second point of the sceptic views. -
Ned at 21:52 PM on 29 June 2010Temp record is unreliable
How depressing. In post #62 BP makes a whole series of very specific claims about the satellite temperature record, all of which are stated as factual with no qualifiers or caveats. Then, two posts later in #64, he casually mentions that those earlier statements were actually "just a guess, I have not looked into the issue deeply enough, yet." Then, to compound this, BP proceeds to claim to have discovered evidence of "tampering" (his own word) with the GHCN data set, based on comparing the raw and adjusted GHCN data sets using a naive, non-spatial averaging of all station data. I have pointed out to BP previously that you cannot compare the results of a simple global average of all stations to the gridded global temperature data sets, because the stations are not distributed uniformly. Given that many people have looked into this in vastly more detail than BP, and have done it right instead of doing it wrong, I cannot fathom why BP thinks his analysis adds any value, let alone why it would justify sweeping claims about "tampering". Here's a comparison of the gridded global land temperature trend, showing the negligible difference between GHCN raw and adjusted data: This is based on results from Zeke Hausfather, one of a large and growing number of people who have done independent, gridded analyses of global temperature using open-source data and software. BP claims that the GHCN adjustment added "0.26 C" to the warming trend over the last 90 years (is that 0.26 C per 90 years, or is it 0.26 C per century over the last 90 years? "0.26 C" is not a trend). Using a gridded analysis, the actual difference in the trends is 0.04 C/century over the last 90 years. Over the last 30 years, the difference in trend between the raw and adjusted data is 0.48 C/century ... with the adjusted trend being lower than the raw trend. In other words, the "tampering" that BP has detected is, over the past 30 years, reducing the magnitude of the warming trend. 
Then, of course, there's the issue that land is only about 30% of the earth's surface. Presumably the effect of any adjustment to the land data needs to be divided by 3.33 to compare its magnitude to the global temperature trend. Once again, BP has drawn extreme and completely unjustified conclusions ("tampering") based on a very weak analysis. Personally, I am getting really tired of seeing this here. -
kdkd at 21:44 PM on 29 June 2010Temp record is unreliable
BP #64 I suggest you have a look at what the Clear Climate Code project has to say about the GHCN data you've examined. Scientific code is almost never pretty - the goals are very different from what commercial programmers would expect, and technical debt accumulates at far faster rates than even in poorly managed commercial projects. This is caused by the two camps having distinctly different goals (technical incompetence on the side of the scientists, and scientific incompetence on the side of the programmers, to be uncharitable). -
HumanityRules at 21:16 PM on 29 June 2010Return to the Himalayas
Thanks Doug, I'll just offer this new paper for now which raises the idea that the Himalayan catchment areas may be less susceptible to glacier mass balance changes than believed. Suggesting precipitation dominates the hydrological system of the region. -
Berényi Péter at 21:14 PM on 29 June 2010Temp record is unreliable
#65 scaddenp at 12:39 PM on 29 June, 2010: "I believe the papers used for homogenization are listed here: USHCN. Do you have a problem with the methodology used here?" According to the USHCN page you have linked, they do adjustments in 6 steps. The first one, "with the development at the NCDC of more sophisticated QC procedures [...] has been found to be unnecessary". Otherwise the procedure goes like this: RAW -> TOBS -> MMTS -> SHAP -> FILNET -> FINAL. Proper process audit is impossible, because:
- unified documentation of the procedure, including scientific justification and specification of the algorithms applied, is not available
- for steps 2, 3, 4 & 6 at least references to papers are provided; for step 5 not even that
- neither executables nor source code nor program documentation is provided for the TOBS, MMTS, SHAP & FILNET programs
- the metadata used by the programs above to do their job is missing and/or unspecified
- a clear statement of whether the same automatic procedure was applied to GHCN v2, as hinted at on the USHCN Version 1 site, is missing (unless the arcane wording "GHCN Global Gridded Data" in the HTML header of that page counts)
I do not know how authoritative it is. But I do know much better documentation is needed even on low-budget projects, not to mention one that multi-trillion-dollar policy decisions are supposed to be based on. The "Pairwise Homogeneity Algorithm (PHA)" promoted (but not specified) in this document is not referenced on any other USHCN or GHCN page. A Google search for "Pairwise Homogeneity Algorithm" site:gov returns nothing. It would be a major job to do the usual software audit on this thing. One would have to hire & pay people with the right expertise for it, then publish the report along with the data. However, any scientist would run away screaming upon seeing a calibration curve like this, wouldn't she? It is V-shaped, with clear trends and multiple step-like changes. One would think that with 6736 stations spread all over the world and 176 years in time, providing 4,864,014 individual data points, errors would be a little more independent, allowing the central limit theorem to kick in. At the very least, a detailed explanation is needed of why there are unmistakable trends in adjustments commensurate with the effect to be uncovered, and why this trend has a steep downward slope for the first half of the epoch while just the opposite is true for the second half. BTW, the situation with USHCN is a little bit worse. The adjustment for 1934 is -0.465°C relative to those applied to 2007-2010 (like 0.6°C/century?). I'll post the USHCN graph later. #66 scaddenp at 12:39 PM on 29 June, 2010 you think that you can explain warming in ocean, satellite, and surface record away as "anomalies" as poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sealevel rise as by soot-induced melting alone without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOS and at surface. That is drawing one very long bow One thing at a time, please. 
Let's focus on the problem at hand first; the rest can wait. USHCN Version 2.0 Update v1.0 Processing System Documentation (another version number here?), Draft August 46, 2009, Claude Williams, Matthew Menne -
Peter Hogarth at 20:17 PM on 29 June 2010What causes the tropospheric hot spot?
SNRatio at 16:30 PM on 29 June, 2010 (or related anyway) Bengtsson 2009 suggests Lower Troposphere Temperature minus Mid Troposphere Temperature (TLT-TMT, or T2LT-T2) is a serviceable approximation to the lapse rate. Sorry for not pointing this out explicitly above after my table, but this has increased unambiguously over the record, even between the analyses in the papers referenced. Santer (data up to 2000) gets 0.024 and 0.023 for RSS and UAH data respectively. Bengtsson (data up to 2008) gets 0.035 and 0.036, and the May 2010 values give 0.037 and 0.035 (RSS and UAH respectively). On this the satellite records agree very well. It should also be appreciated that TLT and TMT are not measurements within distinct layers, but represent weighted sums as we ascend, so that for example 90% of T2 data is from the surface to 18km, whilst 90% of T2LT is from the surface to 8km. Radiosondes can of course give data at specific heights; the four most recent primary sonde data sets show clear tropospheric warming higher than at the surface, see figure above. -
JMurphy at 19:47 PM on 29 June 2010Temp record is unreliable
doug_bostrom wrote : "...but on the other hand we're also sometimes treated to interesting little essays like this". And to add one last comment on this diversion: that comment just proves my point. Anywhere else you care to research the subject of the Dialogo, you will find Simplicio described as a combination of two contemporary philosophers: Cesare Cremonini, who famously refused to look through the telescope; and Ludovico delle Colombe, one of Galileo's main detractors. You will also find evidence of Galileo's good connections with Maffeo Barberini (later Urban VIII), who had written a poem in praise of Galileo's telescopic discoveries and actually agreed to the publication of the Dialogo. Why, then, would Simplicio be a parody of Urban? It's all part of a pattern: BP finds the evidence he likes and agrees with, and everything else (and everyone else) is wrong, fraudulent or part of the conspiracy. -
Doug Bostrom at 18:35 PM on 29 June 2010Temp record is unreliable
Not to swerve completely off-topic, JMurphy, but I'm not sure I can think of a single other skeptic I've witnessed actually admitting an error other than BP, here, though I can't remember exactly what it was about or where, just that it was striking in its very novelty. I'm bothered by the fraud thing, very much so, because it's hard to talk with somebody who starts from the assumption that the data are cooked, and I have to wonder how virtually all of our instrumental records could be either hopelessly flawed or run by the Mafia; but on the other hand we're also sometimes treated to interesting little essays like this. I've spent (wasted, according to some people) a lot of time in the past 3 years hanging out on climate blogs, and Berényi Péter is quite unlike any other doubter I've run across. -
JMurphy at 18:11 PM on 29 June 2010Temp record is unreliable
I used to be impressed by the allowances you grant to BP and his wild (and long) meanderings laced with accusations (such as 'tampering') and insinuations, but it is starting to get very boring and frustrating. How many times can such accusations be allowed without proof, even if followed by apologies - although the apologies are never (as far as I can see) related to the accusations made, as can be seen from his 'apology' on the Ocean acidification thread, where he apologised for getting angry but not for the general accusations against 'climate science'. -
andrew adams at 17:48 PM on 29 June 2010What causes the tropospheric hot spot?
robhon, They did point to the particular passage in the IPCC report - I don't think they misquoted it, it just doesn't support the point they were making. -
SNRatio at 16:30 PM on 29 June 2010What causes the tropospheric hot spot?
Isn't the GHG fingerprint the _divergence_ of stratospheric and tropospheric temps? I.e. the stratosphere constant or cooling while troposphere is warming, or the stratosphere cooling with the trop. constant or warming? And that fingerprint does not have to be very strong. -
Doug Bostrom at 12:44 PM on 29 June 2010Temp record is unreliable
BP, what act of contrition will you offer should your remark of "tampered with" prove faulty? Perhaps a more careful choice of words would be better? Also, this isn't some kind of fad thing you're bringing from elsewhere, is it? I'm not being nasty, just am bothered with words smacking of fraud and am really bored with impressionist fads. As I've said before, you make an effort but that makes it -more- disappointing when you succumb to the freshly-revealed-climate-science-conspiracy-of-the-week. Anyway how about explicitly publishing your (admittedly simple sounding but I'm a simpleton) arithmetic method you're using to produce your datapoints? -
scaddenp at 12:39 PM on 29 June 2010Temp record is unreliable
"without soot pollution on ice & snow" - you mean you think that you can explain warming in ocean, satellite, and surface record away as "anomalies" as poor instrumental records, and then explain the loss of ice/snow around the world purely by black soot? And the sealevel rise as by soot-induced melting alone without thermal expansion? I guess similar strange measurement anomalies will explain upper stratospheric cooling and the IR spectrum changes at TOS and at surface. That is drawing one very long bow, BP. You could be right but I will stick with the simpler explanation - we ARE warming and our emissions are the major cause of it. -
Doug Bostrom at 12:32 PM on 29 June 2010How many climate scientists are climate skeptics?
The complete Revkin comment omnologos believes dramatic: For starters, one aspect of such efforts that I find troubling is the definition of categories. Convinced/unconvinced begs a question: Convinced of what? That human-driven warming is real, is dangerous, requires a response focused on emissions reductions (or adaptation), or…? There is such a continuum of reasoning on the part of those lumped as “unconvinced” that the entire effort threatens to lose meaning. Not exactly an excoriation, I'd say. It would indeed be nice if some folks w/core activity in social sciences took up this topic, especially as there's been such a concerted effort to deceive the public on what's what in mainstream climate research. Anderegg et al are filling a vacuum but it's beginning to leak. See my post immediately above this one. -
scaddenp at 12:27 PM on 29 June 2010Temp record is unreliable
BP - a script and post on this: do climatologists falsify data? I believe the papers used for homogenization are listed here: USHCN. Do you have a problem with the methodology used here? -
Doug Bostrom at 12:24 PM on 29 June 2010How many climate scientists are climate skeptics?
Folks interested in a formal social sciences approach to expert thinking about climate change might want to check out this paper: Expert judgments about transient climate response to alternative future trajectories of radiative forcing -
Berényi Péter at 12:16 PM on 29 June 2010Temp record is unreliable
#63 CBDunkerson at 20:40 PM on 27 June, 2010: "though in fact UAH originally came up with results significantly different from the surface results and only later came to line up after several errors were identified" Yes. And the motivation for debugging was the discrepancy. But the thing about conversion of brightness temperatures to proper temperatures using an atmospheric model was just a guess, I have not looked into the issue deeply enough, yet. However, I am pretty sure the surface database is tampered with. I have downloaded both v2.mean.Z and v2.mean_adj.Z from the GHCN v2 ftp site. According to the readme file, data in the latter one are "adjusted to account for various non-climatic inhomogeneities". Then I selected pairs of temperature values where, for a 12-character station ID (which includes country code, nearest WMO station number, modifier and duplicate number) and a specific year and month, both files contained valid temperatures (4,864,014 pairs for 1835-2010). For each pair I calculated the adjustment as the difference between the value found in v2.mean_adj and that in v2.mean. Having done that, I took the average of the adjustments for each year. It looks like this: It is really hard to come up with an error model that would justify this particular pattern of adjustments. One is inclined to think it's impossible. Note that for the last ninety years, adjustments for various non-climatic inhomogeneities alone add about 0.26°C to the warming trend. If we also take into account the UHI effect, which is not adjusted for properly, not much warming is left. Without soot pollution on ice and snow, we probably would have severe cooling. -
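The pairing-and-averaging procedure described in the comment above can be sketched in a few lines of Python. This is a simplified illustration only: the station ID, years and temperature values below are made up for demonstration and are not real GHCN data.

```python
from collections import defaultdict

def mean_annual_adjustment(raw, adjusted):
    """Given two dicts keyed by (station_id, year, month) -> temperature,
    compute (adjusted - raw) for every key present in both, then average
    those differences per year, mirroring the procedure in the comment."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for key, t_raw in raw.items():
        if key in adjusted:
            _, year, _ = key
            sums[year] += adjusted[key] - t_raw
            counts[year] += 1
    return {year: sums[year] / counts[year] for year in sums}

# Tiny hypothetical input: one station, one year, two months.
raw = {("42500000000", 1990, 1): 5.0, ("42500000000", 1990, 2): 6.0}
adj = {("42500000000", 1990, 1): 5.2, ("42500000000", 1990, 2): 6.4}
result = mean_annual_adjustment(raw, adj)
print(round(result[1990], 2))  # → 0.3
```

Whether the resulting per-year averages mean anything about the global trend is exactly what the replies in this thread dispute, since a flat average over all stations ignores their uneven spatial distribution.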
Doug Bostrom at 11:49 AM on 29 June 2010Sea level rise is exaggerated
I've been inspired by Daniel to scrutinize Donnelly et al more closely. One thing to note right away: I'm struck by Daniel's powerful rhetoric (Donnelly's paper is an "utter joke") compared to Donnelly's measured language in his conclusions: The likely increase in the rate of SLR in the late 19th century A.D. is roughly coincident in time with climate warming observed in both instrumental and proxy records [e.g., Mann et al., 1998; Pollack et al., 1998]. The results indicate that this recent increase in the rate of SLR may be associated with recent warming of the global climate system. Daniel might have a point about uncertainties, if Donnelly's stratigraphic conclusions were taken in isolation and were naively dated, and if Donnelly's main objective was to derive a reasonable absolute measure of sea level for each stratigraphic sample. However, Donnelly's dating interpretation of the sequence is consistent with independent markers, and in any case Donnelly's objective is not to obtain a series of accurate historical sea levels attributed to particular years over a 700 year period but rather to form an estimate of the rate of rise over that entire span. That means that if the samples can be boxed into periods of a few years they're suitable for Donnelly's requirements. It turns out that markers constrain the uncertainties of individual measurements sufficiently to eliminate extended large excursions of the type Daniel hypothesizes, making it quite unlikely that the bulk of the rise indicated by the sequence was concentrated in short bursts. The various tools deployed by Donnelly to constrain dates are exactly what I'm talking about when I say that neither Daniel nor I is equipped to offer criticism of this work with an eye to disproving it. Here's a typical example: To further refine our C-14 chronology, we used fossil pollen evidence of European clearance/agriculture and industrial revolution-related heavy metal pollution horizons (Figure 3). 
Peat samples for pollen and metals analysis were also taken from just above the contact with the erratic. The initial rise in Rumex spp. pollen (a native weed) (-46.5 to -50.5 cm) coincides with land clearance for agriculture between 1650 and 1700 A.D. [Clark and Patterson, 1985; Donnelly et al., 2001]. An uncertainty box has been plotted (light gray) based on the presence of Jg and Sp remains at this interval (indicative meaning of 6.7 ± 10.4 cm above MHW) and the time interval of initial land clearance (box with diagonal line fill; Figure 2). The combination of the indicative meaning of the sample (including 2σ uncertainty) with its accepted age range yields boxes representing the most likely elevation of MHW in the past (Figure 2). The appearance of Plantago lanceolata (an introduced species) between -32.5 and -35.5 cm (Figure 3) suggests deposition in the early 19th century [Clark and Patterson, 1985]. Based on the presence of Jg and Sp remains at this interval we plotted a box representing the indicative meaning of this interval (vertical line fill) and the associated uncertainty box (light gray) (Figure 2). Other methods were used to constrain other samples, methods of which Daniel and I know nothing. This is what I mean when I refer to unearned hubris; dismissing a paper as "junk" from a position of ignorance of the specialized tools used to produce it is foolish. Daniel complains "There is more than enough slack in this data to periodically reproduce the apparently rapid sea level rise of 2.8mm/year in the NYC tide gauge data of the last ~150 years" but that's speculation. The conservative way to interpret the data is to take it for what it can say. Donnelly: A linear rate of rise of 1.0 ± 0.2 mm/year intersects all the 2 [sigma] uncertainty boxes of the record from the 14th to the mid-19th century. That's it, and in any case we've already seen that wild slews in rate changes don't seem to fit the C14-independent constraints of the samples. 
Donnelly himself is carefully circumspect about his conclusions: "Coupling the Barn Island record and regional tide-gauge data indicates that the rate of SLR increased to modern levels in the 19th century (Figure 2). However, given that the center of each uncertainty box has the highest probability, the most conservative interpretation of the data is that the SLR increase to modern values occurred in the late 19th century (Figure 2)." Daniel needs to do better than Donnelly at performing this same work in order to dismiss Donnelly's paper but he can't because he's not trained in Donnelly's area of specialization. Because Daniel cannot address the paper at this level of detail but can only use general purpose adjectives to support his case, I'm with Donnelly on this matter. What choice do I have? Donnelly makes a reasonable case using tools he describes adequately but which I'm unqualified to judge, as is Daniel. I suggested an attack via uncertainty to Daniel because that's the only technique I could use in this case, lacking as I do the disciplinary tools to address Donnelly's methods, and in all probability that's true of Daniel as well. -
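Donnelly's key claim — that a single linear rate threads every 2σ uncertainty box — is a simple geometric test, and it's worth seeing how falsifiable it is in principle. The sketch below uses invented placeholder boxes, not Donnelly's actual data:

```python
# Hedged sketch: does a linear sea-level trend pass through every
# (time, elevation) uncertainty box?  Box values below are invented
# placeholders for illustration, NOT Donnelly's data.
boxes = [
    # (year_lo, year_hi, mm_lo, mm_hi) relative to an arbitrary datum
    (1350, 1400, -650, -500),
    (1675, 1700, -350, -200),
    (1800, 1830, -220, -90),
    (1865, 1880, -130, -40),
]

def trend_intersects(rate_mm_per_yr, offset_mm, ref_year=1350):
    """True if the line offset + rate*(t - ref_year) crosses every box."""
    for y0, y1, z0, z1 in boxes:
        # Elevation range the line sweeps across this box's time span
        lo = offset_mm + rate_mm_per_yr * (y0 - ref_year)
        hi = offset_mm + rate_mm_per_yr * (y1 - ref_year)
        if max(lo, hi) < z0 or min(lo, hi) > z1:
            return False
    return True

print(trend_intersects(1.0, -600))   # True: a ~1 mm/yr line threads these boxes
print(trend_intersects(10.0, -600))  # False: a much steeper line misses some
```

The point is that the interpretation is testable: a single box the line cannot thread would break the constant-rate reading of the record.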
carrot eater at 11:47 AM on 29 June 2010What causes the tropospheric hot spot?
Andrew, Your instincts are correct. If the surface is warming, then the tropical troposphere should warm faster than the surface, no matter what is causing the warming. So there is no fingerprint there. But above that, if the stratosphere is at the same time cooling, that is indeed a fingerprint of enhanced greenhouse effect. Caveat being that ozone loss also causes strat cooling, but that effect is more limited to a certain altitude band. -
Rob Honeycutt at 08:52 AM on 29 June 2010What causes the tropospheric hot spot?
Ah, yes Andrew. I made the same mistake myself. Did they point to where in the IPCC report that it says that? I would be curious. -
David Grocott at 08:47 AM on 29 June 2010What causes the tropospheric hot spot?
Peter, Really interesting post, thanks. Particularly appreciate the (newer) Santer graph. Re: Paltridge, here are a couple of quotes from the study itself: "Radiosonde humidity measurements are notoriously unreliable and are usually dismissed out-of-hand as being unsuitable for detecting trends of water vapor in the upper troposphere.
It is of course possible that the observed humidity trends from the NCEP data are simply the result of problems with the instrumentation and operation of the global radiosonde network from which the data are derived. The potential for such problems needs to be examined in detail."
Despite these caveats, Paltridge does, quite rightly I feel, argue that “the NCEP data for the middle and upper troposphere should not be “written off”…Since balloon data is the only alternative source of information [as opposed to that taken from satellite measurements] on the past behavior of the middle and upper tropospheric humidity and since that behavior is the dominant control on water vapor feedback, it is important that as much information as possible be retrieved from within the “noise” of the potential errors.” However, on the recommendation of the Elliott and Gaffen study (1991), Paltridge’s study only covers the reanalysis data from 1973 to 2007 and limits its examination to particular latitudes between 50° S and 50° N, and atmospheric pressures up to 500 hPa everywhere, together with the summer season data from 400 hPa, and the data up to 300 hPa in the tropics. This is because the radiosonde measuring system isn’t accurate enough to measure changes in humidity in locations where humidity is already at comparatively low levels and because any radiosonde humidity measurements prior to 1973 are unusable as a result of instrumental changes and deficiencies. Paltridge also bases his findings on a combination of observations and models (you know, the things Nova hates). This report – http://www.atmos.umd.edu/~ekalnay/Kistleretal.pdf - notes that “gridded variables, the most widely used product of the reanalysis, have been classified into three classes”; moisture variables, upon which Paltridge would have relied, fall into the category ‘Type B Variables’, which the report describes as being “influenced both by the observations and by the model, and are therefore less reliable [than Type A Variables which "are generally strongly influenced by the available observations"]“. In addition, on both the NCEP reanalysis website and the NCAR reanalysis website a ‘problem report’ is given, discussing the issues associated with the data. 
One such issue is titled ‘Spurious Moisture Source/Sink’. In brief, it states that “a poor approximation was used for the humidity diffusion which created spurious moisture sources and sinks”; amongst other things this “can be expected to increase/decrease humidity”. And of course the radiosonde measurements contradict the satellite measurements - http://www.gfy.ku.dk/~kaas/forc&feedb2008/Articles/Soden.pdf -
scaddenp at 08:14 AM on 29 June 2010Abraham reply to Monckton
Awol - "as warm as possible". Why not even more potent GHGs then and get us to Venus-like temperatures? Well obviously because we want the planet to be around the temperatures we evolved to live in. However, this debate isn't really about what would be an optimal temperature but about how fast we are changing it. Think of your farm animals, and of how easily farmers are able to cope with rapid climate change. We have huge urban centers and complex food production systems that have developed in a stable climate. Rapid change is not good for them. Ask how farmers on the great deltas are going to cope with coastal erosion and salt incursion as sea level rises as well. Over 1000 years (ice cycle type change) - possible. Over 100 years - hmm. -
Peter Hogarth at 08:12 AM on 29 June 2010What causes the tropospheric hot spot?
One of the issues at the heart of the matter is the trend differences between UAH and RSS satellite temperature estimates for the tropical lower and mid troposphere, both based on the MSU (microwave soundings) raw data from various satellites. Santer 2008 uses data from 1979 to 2000, Bengtsson 2009 uses a similar methodology with (Ocean only) satellite data updated to 2008, and I have updated the satellite trend values to current, with the latest UAH LT5.3 data. All trend values are in degrees C per decade. Santer's analysis of the satellite trends and surface temperature and Radiosonde trends to 2000 pointed to a much reduced discrepancy between observations and model outputs than found previously (see figure below). He also suggests where models may be lacking. Bengtsson 2009 follows on from this but uses the later lower trend values. Based on these and a modeling/statistical approach, the probability of the satellite temperature trends being due to natural causes is given as 27% for the UAH measurements and 2.5% for the RSS measurements. Maybe reasonable odds, but not “robust” yet except perhaps in the case of the RSS trend. Bengtsson also argues that the UAH values are closer to observed SST, but conversely Santer 2008 suggests the RSS values are closer to other global temperature series, and other interpretations of the MSU raw data. This relatively small RSS/UAH difference weighs heavily. Bengtsson concludes “Observed and re-analyzed lapse rate trends are all positive and for the period 1979-2008 well outside the range of natural variability”, but in terms of temperature trends, “The present 30-years of tropospheric temperature observations are still insufficient to identify robust trends as the internal variability of realistic climate models is larger than the observed trends” Has the situation changed since 2008/2009? A little. 
The UAH/RSS divergence has reduced with the latest revisions and data; the RSS trend values are slightly lower but the revised UAH tropical trend values have increased (I should mention UAH global trends did not change with the update) so that they are higher than in Santer's original analysis. As the trends have continued (ie tropospheric temperatures have continued to rise) we would expect that the updated 2010 data will push further towards (rather than away from) any statistically robust result. The following image is from Santer 2008 and summarises the story of the models and measurements as at 2008 quite nicely. If you view the JoNova post linked in the article you will see a related but older chart from Santer 2005, which supports the idea of significant divergence despite Jo citing the later 2008 paper in which Santer argues otherwise. Likewise her second figure should be updated in line with more recent work, as science has moved on. I also have serious concerns about the one later reference which Jo uses (Paltridge 2009) to support a view that tropospheric relative humidity is falling. This is at odds with the conclusions of the bulk of recent papers I have read, Paltridge himself states “It is accepted that radiosonde-derived humidity data must be treated with great caution”, and the data is single-source (NCEP) re-analysis. As well as Sherwood 2010a listed earlier, and evidence from Sherwood 2010b, he has also completed a recent review of independent work examining tropospheric water vapour using multiple data sets from several types of sensor Sherwood 2010c which adds a more thoroughly referenced and wide perspective on Tropospheric water vapour: “Thus, all primary data sets support the conclusion that water vapor mixing ratios in the troposphere are increasing at roughly the rate expected from the Clausius-Clapeyron equation. Although a few analyses have found otherwise, these relied on secondary data sets that are less suitable for quantifying trends”. -
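For reference, the trend values quoted above (in degrees C per decade) are just ordinary least-squares slopes fitted to the monthly anomaly series and scaled from months to decades. A minimal sketch with a synthetic series, not actual UAH or RSS data:

```python
# Hedged sketch: compute a temperature trend in deg C per decade via
# ordinary least squares.  The anomaly series below is synthetic,
# NOT real UAH or RSS data.
def trend_per_decade(monthly_anomalies):
    n = len(monthly_anomalies)
    t = list(range(n))  # time axis in months
    mean_t = sum(t) / n
    mean_y = sum(monthly_anomalies) / n
    # OLS slope = covariance(t, y) / variance(t)
    cov = sum((ti - mean_t) * (yi - mean_y)
              for ti, yi in zip(t, monthly_anomalies))
    var = sum((ti - mean_t) ** 2 for ti in t)
    return (cov / var) * 120  # deg C/month -> deg C/decade

# 30 years of a perfectly linear 0.0015 C/month warming
series = [0.0015 * m for m in range(360)]
print(round(trend_per_decade(series), 3))  # -> 0.18
```

Real comparisons of UAH vs RSS differ not in this arithmetic but in how the underlying MSU data are merged and corrected before the fit.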
Doug Bostrom at 07:58 AM on 29 June 2010Astronomical cycles
Further to kdkd's comment to Ken, here's an extended quote from a blog post by Michael Tobis that I think speaks to the whole matter behind this thread, that is to say whether some external influence is going to nullify unrelated processes here on Earth. There is some implication that there is an "AGW theory" and that there is an argument in its support, and that said argument is a cohesive thread starting with Fourier and ending at the dreaded-extremist-boogeyman-Gore, and that failure of any chain in said argument necessarily implies "see, so no carbon policy is necessary". (I'm missing a few steps in their reasoning here, too, but that's another topic still.) I claim there is no "AGW theory" in the sense that there is an argument that four colors suffice, or more fairly, that stars follow an evolutionary path based on their mass. AGW is not an organizing principle of climate theory at all. Hypotheses, organizing principles, of this sort emerge from the fabric of a science as a consequence of a search for unifying principles. The organizing principles of climatology come from various threads, but I'd mention the oceanographic syntheses of Sverdrup and Stommel, the atmospheric syntheses of Charney and Lorenz, paleoclimatological studies from ice and mud core field work, and computational work starting with no less than Johnny von Neumann. The expectation of AGW does not organize this work. It emerges from this work. It's not a theory, it's a consequence of the theory. Admittedly it's a pretty important consequence, and that's why the governments of the world have tried to sort out what the science says with the IPCC and its predecessors. That tends to color which work gets done and which doesn't, and I think it should. As Andy Revkin pointed out, it may be time to move toward a service-oriented climatology, or what I have called applied climatology. 
The point is that this amounts to application of a theory that emerged and reached mathematical and conceptual maturity entirely independent of worry about climate change. So attacks on climate change as if it were a "theory" make very little sense. Greenhouse gas accumulation is a fact. Radiative properties of greenhouse gases are factual. The climate is not going to stay the same. It can't stay the same. Staying the same would violate physics; specifically it would violate the law of energy conservation. Something has to change. For a little more on what must change, how much, etc. see the rest of Tobis' post. My point in quoting Tobis is to make a helpful reminder that "falsifying" the notion of anthropogenic global warming would require an upheaval of research none of us are going to witness. So don't look to external matters such as the moon and stars or things that make graphs wiggle to put a neat "done" on the matter. -
andrew adams at 07:43 AM on 29 June 2010What causes the tropospheric hot spot?
I was prompted by this post to pay a visit to Jo Nova's site. I don't know much about Ms Nova herself but the comments were a real snake pit - not pleasant. One guy got great satisfaction (and approval from others) by countering the argument that the THS is not specifically a fingerprint of AGW by pointing to the IPCC's statements that tropospheric warming/stratospheric cooling is a fingerprint of AGW. But to my layman's eye they are completely different phenomena and irrelevant to this argument. -
Doug Bostrom at 07:41 AM on 29 June 2010Return to the Himalayas
Thank you for your remarks, Kooiti. You make a fair point about the distinction between the Himalayas and the region covered by Immerzeel. In fact, the title of the article I feature is "Climate Change Will Affect the Asian Water Towers" and I now see that not only does my title not reflect that but I actually managed to -not- mention the article title once in my little writeup. The latter I'll somehow fix but I think I'll leave the title as-is because I think it's helpful to give folks an explicit pointer to updated information behind the Himalaya reference fiasco. Thanks also for your remarks about population details as well as your taking time to supply some pointers for people wanting to go further with that. In a way your points are complementary to mine in that we can see how resolution of this sort of information improves over time. W/regard to Barnett's own issues w/cites as well as "data bruising" as information is passed from one publication to another I have (is it any surprise) an opinion about that but not I think one that is very controversial in terms of intent, though it could become significant when viewed from the perspective of relying on reviews if not handled properly. The least ambiguous and most accurate description of any research paper's content is harbored in the original paper itself. By necessity not all information from a paper is conveyed when another researcher or reviewer dips into a given paper for supporting information and so real diligence must be practiced in this crucial hand-off. Unless the dependency in question is unusually atomic there is room for ambiguity and even error to creep in; ambiguity accompanies insufficient characterization as does error. Authors of reviews and synthesis reports are at the end of a longer foodchain, the information they draw from has passed through more hands and thus is more susceptible to damage. 
I think it's safe to say this problem of conclusion creep or divergence is one reason why it's such a good thing that IPCC has built what appears to laypersons as a pathologically obsessive review process, drawing on the awareness of publishing scientists of how easy it is to mess up information as it is passed along. Cases of error in spite of fanatical attention to details are a cue to how demanding this process really is. Apparently IPCC is going to amp-up their reviewer scrutiny still further, a sign of the relative urgency attached to the task of keeping the IPCC synthesis up to date even while drawing on active avenues of inquiry. Time appears of the essence in this case; we don't have 100 years to wait for dust to collect on researchers' work before being supplied with information to assist with mitigation and adaptation. -
kdkd at 07:33 AM on 29 June 2010Astronomical cycles
KL #105This is a huge burst of heat equivalent; entirely incompatible with the steadily increasing imbalance proposed by CO2GHG theory.
The myth of a monotonic increase is a frequent climate contrarian talking point. The theory has no such requirement. Perhaps this is one for the common arguments page, although the idea that noise exceeds signal over short time periods is so obvious to everyone except the so called sceptics, that it's an argument that's trivial to rebut. -
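kdkd's point — that over short windows noise can swamp a steady underlying signal — is easy to demonstrate with a toy series. The trend and noise magnitudes below are purely illustrative, not a climate model:

```python
# Hedged sketch: a fixed underlying trend plus random year-to-year
# noise.  Short-window trend estimates scatter widely (some even go
# negative) while the long-window estimate recovers the true trend.
# TREND and NOISE values are illustrative only.
import random

random.seed(0)
TREND = 0.02   # warming per year (illustrative)
NOISE = 0.2    # year-to-year variability (illustrative)
years = 100
series = [TREND * y + random.gauss(0, NOISE) for y in range(years)]

def window_trend(data, start, length):
    """OLS slope over data[start:start+length]."""
    seg = data[start:start + length]
    n = len(seg)
    mt = (n - 1) / 2
    my = sum(seg) / n
    cov = sum((i - mt) * (v - my) for i, v in enumerate(seg))
    var = sum((i - mt) ** 2 for i in range(n))
    return cov / var

short = [window_trend(series, s, 6) for s in range(years - 6)]
long_trend = window_trend(series, 0, years)

print(max(short) - min(short))  # 6-yr slopes scatter far beyond the true trend
print(long_trend)               # century-scale fit sits close to TREND
```

So a flat 6-year stretch in a noisy record is entirely compatible with a steadily rising forcing; the signal-to-noise ratio only becomes favourable over longer windows.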
AWoL at 07:28 AM on 29 June 2010Abraham reply to Monckton
I'm just a vet, though believe it or not, I can remember Boltzmann's constant from our old Physics lectures, so I like to believe that I inhabit the ranks of the scientific semi-literate. My question is, if the Earth has an arbitrary average temperature of circa 15degC and the temperature of space is circa 3degK, ie -270degC, then what's the problem? Anything that stems the ferocious heat loss to the exterior surely has to be a good thing? Surely the correct thing to do is to pump CO2 (or more potent greenhouse gases) into the atmosphere in order to keep the planet as warm as possible? What a nutty idea, I hear you say, but in reply I say....-270degC, out there. Not much chance of too much warming when you're up against that. It's bloody cold out there! -
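[Editor's aside: the "-270degC out there" intuition can be checked with the Stefan-Boltzmann law the commenter remembers. Earth's temperature is set by balancing absorbed sunlight against emitted infrared, not by the temperature of the sink, so the cold of space puts no brake on warming. A back-of-envelope sketch using standard textbook values:]

```python
# Back-of-envelope radiative equilibrium: absorbed solar flux equals
# emitted thermal flux (Stefan-Boltzmann).  Standard textbook values.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant at Earth, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected

# Absorbed flux averaged over the sphere (factor 4 = disc/sphere area ratio)
absorbed = S0 * (1 - ALBEDO) / 4

# Equilibrium: SIGMA * T^4 = absorbed  =>  T = (absorbed / SIGMA)^(1/4)
T_eff = (absorbed / SIGMA) ** 0.25
print(round(T_eff))  # -> 255 (K), i.e. about -18 C
```

The ~33 K gap between this effective temperature (~255 K) and the observed surface average (~288 K) is the greenhouse effect; adding greenhouse gases widens that gap regardless of how cold space is.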
Kooiti Masuda at 06:51 AM on 29 June 2010Return to the Himalayas
You have not quite come back to the Himalayas, but to the greater mountainous region of the Central Asia. Let us tentatively call it the greater Himalayas. There were actually two problems in the Asian chapter (Chapter 10) of IPCC AR4 WG2. One was the outlook of diminishing glaciers in the (proper) Himalayas in the Section 10.6.2. It was really erroneous. Another thing was the estimate of population who depend on glacier meltwater of the "greater Himalayas". In Section 10.4.2.1, it was written that "Climate change -related melting of glaciers could seriously affect half a billion people in the Himalaya-Hindu-Kush region and a quarter of a billion people in China who depend on glacial melt for their water supplies (Stern, 2007)". Stern Review may be a good reference for economic matters, but not an original reference suitable here. In Stern Review, it appears in Chapter 3 "How climate change will affect people around the world", section "3.2 water", page 63. It refers to the following paper in the Note 23 and it is surely the source of the information. Barnett, T.P., J.C. Adam and D.P. Lettenmaier, 2005: Potential impacts of a warming climate on water availability in snow-dominated regions. Nature, 438, 303 - 309. IPCC AR4 WG2 directly refers to Barnett et al. (2005) in Chapter 3 (Freshwater resources and their management), Sections 3.4.1 (Surface waters) and 3.4.3 (Floods and droughts). The paper by Barnett et al. is a peer-reviewed article. And its estimate of population who depend on meltwater from snow seems to be reasonable, though it is different from the estimate of population who are likely to be affected by decrease of snowpack associated with global warming. But it did not distinguish snowmelt and glacier-melt. But Stern Review and consequently IPCC AR4 WG2 Chapter 10 mis-interpreted it as mainly glacier-melt. Also there is a problem in the paper of Barnett et al. 
that their numbers about glacial meltwater and population are not fully substantiated by their references 40, 41, 42 and 43. It seems to be an issue of sloppiness in the editorial process of the "Nature" magazine rather than of IPCC. The study by Immerzeel et al. (2010) is very welcome and it will supersede Barnett et al. (2005) in terms of estimates of the population who depend on snow and glacier meltwater. But we should note that Immerzeel studied five large river basins. We should also look at inland river basins to the north of the Tibetan Plateau, where glacier meltwater has the largest relative role, though the human population density is relatively low. -
carrot eater at 05:24 AM on 29 June 2010What causes the tropospheric hot spot?
HumanityRules: Read 'short term trend' as referring to the observed high frequency behaviour, all the short term wiggles due to ENSO and whatever else. The jump upwards during an El Nino can be described as a short term trend. I did not find John's phrasing at all confusing, but that's what is meant. We already have known that satellite records have been subject to long-term biases and calculation errors in correcting for the same. They've been continually corrected in UAH, one by one. The remaining differences between UAH and RSS also beckon. Any time you've got satellite drift, or you've got to sew together records from different and non-overlapping satellites (an issue with TSI, isn't it?), you'll have to be careful with long term biases. -
VoxRat at 05:08 AM on 29 June 2010Ocean acidification
BP #68 Specifically with respect to pH, the subject of this discussion: your graph doesn't have anything to say about the extent or rapidity of pH changes. I guess when these guys http://www.sciencedaily.com/releases/2009/05/090519111031.htm publish their findings we'll have something to compare current trends with. -
John Russell at 04:56 AM on 29 June 2010Return to the Himalayas
Try feeding yourself WITHOUT using water, Paul Daniel Ash. In a rice field it takes around 2,500 litres of water to grow a single kilogram of rough rice. -
michael sweet at 04:03 AM on 29 June 2010September 2010 Arctic Ice Extent Handicapping Via ARCUS
HumanityRules: One way that scientists show what they know is by making predictions of what they think will happen. If those predictions consistently show skill then they understand the material. If they have poor predictions that means they have a ways to go to understand the material. The predictions on arctic ice have a big spread and low skill. This shows the rest of us that they have more to learn. We can compare to past predictions (like the 2007 IPCC report) and see that the date of an ice free arctic keeps getting moved forward. Even WUWT now predicts an ice free arctic before the 2007 IPCC report!! That shows the scientists have been much too conservative in the past. What does that suggest about their predictions of ice melt in Greenland? Are they more likely conservative or alarmist? As the years go by the predictions will converge. When they do we will have more confidence in those predictions. I note that none of the estimates predict a return to the ice levels of the 1990's. This tells us something and is a firm prediction. -
Doug Bostrom at 03:53 AM on 29 June 2010Return to the Himalayas
Ding-dong-ding! Paul gets the prize for being first to winnow a New Scientist error. In fairness I think it was down to some ambiguous language in Immerzeel, who refers to "total population" without a reminder that the term refers to the region under scrutiny. I made the same mistake when reading the paper but had the luxury of having time for my intuition to illuminate a caution lamp and consequently check the figure. -
Paul Daniel Ash at 03:38 AM on 29 June 2010Return to the Himalayas
Well, 4.5% of the current world population would be 301,376,432; in forty years it'll be something like 420,000,000. So, math FAIL. Also you can't "feed" yourself using water.. not for long, anyway. -
Doug Bostrom at 03:24 AM on 29 June 2010Tuesday 29 June talk on science blogging at University of W.A.
Thanks for the update John and now go to bed. -
HumanityRules at 02:51 AM on 29 June 2010What causes the tropospheric hot spot?
Why would the satellite temperature record be subject to "spurious long-term biases"? We seem to be happy with the data when it confirms the surface instrument record or when it confirms stratospheric cooling but this mid section of the data is all wrong. It seems horribly convenient. In fact it appears akin to Anthony Watts' hunt for those badly placed weather stations. -
Doug Bostrom at 02:49 AM on 29 June 2010Return to the Himalayas
Thank you, Johnny. Indeed you're right, I don't know how I missed that; I actually read the Wikipedia article while selecting my analogy. Scurrying to correct it...