







Recent Comments


Comments 86351 to 86400:

  1. Ken Lambert at 23:37 PM on 6 May 2011
    Brisbane book launch of 'Climate Change Denial'
    Why did you not invite me John? A good chance for us to catch up and see if I could get an autographed copy.
    Response: Sorry Ken, there were a few Brisbanites I forgot to invite (I've gotten some cross emails).
  2. Ken Lambert at 23:33 PM on 6 May 2011
    Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Marcus #84, #85 "It took fossil fuels around 60 years, & massive State support, to reach the relatively cheap prices they are today" That type of comment betrays a lack of understanding of the history of electricity generation in Australia and most of the first world. What technologies were available 100 years ago for large scale central generation? Answer - fossil fuel and hydro. In the absence of hydro resources, and with the availability of relatively cheap coal, the choice was one of necessity - coal. Such large investments by State Utilities (ie owned by the taxpayer) with 25-40 year lives were bound to remain a mainstay of our generation until nuclear arrived after WW2. You all know what has happened to nuclear. Oil, gas, coal, hydro and nuclear are there because of economics. If there were better, cheaper technologies, they would take over. When you talk of PV Solar being economical in boutique applications now - that is not new. PV Solar has been the best choice for powering remote area small scale applications for many years. I was playing around with Solar brine ponds in 1984 - the technology looked simple and effective but did not fly for cost reasons, chiefly maintenance in a very corrosive environment.
  3. Jesús Rosino at 17:15 PM on 6 May 2011
    Why 450 ppm is not a safe target
    When I said IPCC projections exclude "glacier and ice sheet mechanics" I was referring to the IPCC caveat "Model-based range excluding future rapid dynamical changes in ice flow". As I understand it (feel free to correct me), this is the main cause of the IPCC underestimate. I agree that, at least at this point, the 5m SLR projection for 2100 is a big outlier. OTOH I don't know how far a consensus goes here, but I thought the 1+ m was already considered to be quite plausible under the BAU scenario.
  4. Bob Lacatena at 21:54 PM on 6 May 2011
    Why 450 ppm is not a safe target
    33, Jesús, I was expecting a link to a RealClimate blog post. You did not provide a link, and a Google search shows no hits anywhere for the text that you have posted. A search for "sea level" rise at realclimate.org, however, provides any number of reasoned, scientific posts at RC explaining upper and lower boundaries for sea level rise, and the reasons behind each. In particular, reasonable estimates recognized by RC fall between 0.5 and 2.0 meters. From RC 9/4/2008:
    We stress that no-one (and we mean no-one) has published an informed estimate of more than 2 meters of sea level rise by 2100.
    From RC 11/15/2010:
    ...and Gillis shows that most of the experts now assume a considerably higher rise until 2100 than IPCC: about one meter, potentially even more.
    I see nothing remotely close to a comment at RC suggesting that a 5m rise by 2100 is at all likely, and must insist that if you cannot produce a link to such a statement, you must openly and loudly withdraw the statement that "RealClimate silence about this ... is somewhat telling". From the Hansen and Sato 2011 paper you posted:
    Alley (2010) reviewed projections of sea level rise by 2100, showing several clustered around 1 m and one outlier at 5 m, all of which he approximated as linear. The 5 m estimate is what Hansen (2007) suggested was possible, given the assumption of a typical IPCC's BAU climate forcing scenario.
    Also:
    However, the fundamental issue is linearity versus non-linearity. Hansen (2005, 2007) argues that amplifying feedbacks make ice sheet disintegration necessarily highly non-linear. In a non-linear problem, the most relevant number for projecting sea level rise is the doubling time for the rate of mass loss. Hansen (2007) suggested that a 10-year doubling time was plausible, pointing out that such a doubling time from a base of 1 mm per year ice sheet contribution to sea level in the decade 2005-2015 would lead to a cumulative 5 m sea level rise by 2095. Non-linear ice sheet disintegration can be slowed by negative feedbacks. Pfeffer et al. (2008) argue that kinematic constraints make sea level rise of more than 2 m this century physically untenable, and they contend that such a magnitude could occur only if all variables quickly accelerate to extremely high limits. They conclude that more plausible but still accelerated conditions could lead to sea level rise of 80 cm by 2100.
    You will note that Hansen is not, with this, projecting a 5m rise by 2095. He's doing math. He's saying if ice sheet disintegration is non-linear, and if the doubling time for the rate of mass loss is the best predictive factor, and if that doubling time is 10 years (which is plausible, but in no way predicted), then the math says that a doubling of 1 mm/year from 2005-2015 would arrive at a cumulative 5 m sea level rise by 2095. He never says this is going to happen. He never suggests that it will. The paper in many places is actually a rather dispassionate discussion of all of the various estimates of others (as well as Hansen 2007) on sea level rise. From Hansen's 2007 paper, where the proposal was presented (given that Hansen and Sato 2011 is merely summarizing the current state of the literature):
    Of course I cannot prove that my choice of a ten-year doubling time for nonlinear response is accurate, but I am confident that it provides a far better estimate than a linear response for the ice sheet component of sea level rise under BAU forcing.
    His point is clearly that linear estimates are overly simplistic, and that a non-linear mechanism could produce a dangerously higher value. The point here is not the value. It's not a prediction. It's a demonstration of the importance of not taking a purely linear approach to a non-linear problem, much in the way that exponential growth is taught to school children by pointing out that if they start with a penny, and double it every day, by the end of a month they have over 10 million dollars. From the same paper, a few sentences later:
    The nonlinearity of the ice sheet problem makes it impossible to accurately predict the sea level change on a specific date. However, as a physicist, I find it almost inconceivable that BAU climate change would not yield a sea level change of the order of meters on the century timescale.
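    The decadal-doubling arithmetic behind the 5 m figure is easy to check. A minimal sketch (the function name and framing here are illustrative, not taken from Hansen's papers): the ice-sheet contribution starts at 1 mm/yr in the decade 2005-2015 and the rate doubles every 10 years.

```python
# Cumulative sea level rise under Hansen's illustrative assumption:
# a 1 mm/yr ice-sheet contribution in the decade 2005-2015, with the
# rate of mass loss doubling every 10 years thereafter.
def cumulative_rise_mm(start=2005, end=2095, base_mm_per_yr=1.0, doubling_yr=10):
    total, rate, year = 0.0, base_mm_per_yr, start
    while year < end:
        total += rate * doubling_yr  # this decade's contribution
        rate *= 2                    # rate doubles each decade
        year += doubling_yr
    return total

print(cumulative_rise_mm() / 1000.0)  # 5.11 m by 2095 -- the "5 m" figure
```

    Nine decades at rates 1, 2, 4, ... 256 mm/yr sum to 5110 mm, which is where the 5 m number comes from; change the doubling time and the answer moves dramatically, which is exactly the non-linearity point.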
    Suggestion: If something bothers you, take the time to actually read the source material, in the proper context and perspective, and pay attention to the words, not your emotional reaction to the words. And never, ever base anything on a "blog post."
    Moderator Response: [Dikran Marsupial] s/2010/2100/
  5. CBDunkerson at 21:40 PM on 6 May 2011
    10 Indicators of a Human Fingerprint on Climate Change
    neil, night time temperatures increasing faster than day time temperatures is indicative of enhanced greenhouse warming because of the way that greenhouse warming operates. That is, increasing the concentration of greenhouse gases in the atmosphere (which is being caused by human industry) decreases the rate at which the planet cools. Consider a 100 degree day in Miami vs a 100 degree day in the Arizona desert. Once the Sun goes down the temperature starts dropping... but Miami is very humid (lots of atmospheric water vapor, a greenhouse gas) and can actually stay warm all night. The desert on the other hand gets very cold very fast because it has almost no water vapor and thus the daytime heat escapes quickly. Thus, if we increase the level of atmospheric carbon dioxide and other greenhouse gases which have much lower geographic variance than water vapor we are decreasing the rate of night-time cooling for the entire planet... nights stay warmer longer and thus the average night time temperature increases faster than the day time temperature. As to ocean heat content... a strong indication of warming, but not of anthropogenic causes. Any warming forcing would result in most of the energy going into the oceans. For instance, if the observed global warming were being caused by increased solar radiation we would expect to see days warming faster than nights (because there is no sunlight at night) and ocean heat content rising as most of the solar forcing went into heating the upper ocean. Ergo, we'd have the ocean warming either way, but the day vs night warming speed would be different.
    Response:

    [DB] Additionally, what was predicted by models and then subsequently confirmed later by observational studies is summarized here.  For example:

    Nights warm more than days: predicted by Arrhenius (1896); confirmed by Dai et al. (1999) and Sherwood et al. (2005)
  6. alan_marshall at 19:24 PM on 6 May 2011
    Brisbane book launch of 'Climate Change Denial'
    I would like to see copies of your book on the shelves of the Parliament Shop in Parliament House, Canberra. Maybe you will find an MP to suggest it. They already stock "Requiem for a Species" by Clive Hamilton. Located in the main Foyer, the Shop is open 7 days a week 9:30 am - 5:00 pm (on sitting days extended to 5:45 pm). Ph: (02) 6277 5050 Fax: (02) 6277 5068
    Response: Haydn and I will actually be heading down to Canberra on May 16 and will be delivering a copy of the book to every federal MP in Australia (more news on this shortly). I'll check whether NewSouth Books are distributing to Parliament shop.
  7. CO2 is plant food? If only it were so simple
    It seems the evidence indicates that climate change is already negatively impacting global cereal crop yields in the period 1980 to the present. The paper published in Science yesterday is behind a paywall, but Scientific American has a summary here.
  8. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Angusmac, I have no problem acknowledging Hansen's 1988 predictions were out because the primitive model had a sensitivity that was too high. However, given the uncertainties in determining climate sensitivity, I think it was a pretty good effort for its time. Both history and better models show a convergence on a sensitivity of 3°C.
  9. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Scaddenp @68, you misrepresent me by stating that, "You [I] want real airplanes to be designed from a high school flight model, because it's simple?"

    Real engineers and scientists use simple models every day to check the validity of their more complex models. They usually use simple physics to check if small changes in their assumptions lead to large changes in outcomes.

    Dana in this blog used a simple spreadsheet to adjust Hansen's Scenario B without the need to re-run the whole computer model.

  10. Jesús Rosino at 17:15 PM on 6 May 2011
    Why 450 ppm is not a safe target
    Alexandre, Rahmstorf & Vermeer base their projections on the empirical correlation between 20th Century SLR and temperature. However, 20th Century SLR doesn't have a big contribution from ice sheets (thermal expansion plays a big role), so I don't think it can reflect future contribution from ice sheets. I think that a warmer deep ocean would be a more likely explanation of the IPCC underestimation (and this would hopefully help to close the global heat budget). On the other hand, the IPCC projections do include a contribution from increased ice flow from Greenland and Antarctica at the rates observed for 1993-2003. The point by Rahmstorf and Vermeer being that it may accelerate. My comment was motivated by Martin #13:
    "[...] Until now, I was under the impression that predictions for a sea level rise by the end of this century were between 0.5 and 2m. That Hansen should predict 5m and so soon (by 2095!! not 2195 or 3095) surprises me. Do you know what the reaction to Hansen is among climate experts (the kind that publish peer reviewed papers on climate change)? Is this a consensus view?"
    I'm trying to answer that by saying that this projection by Hansen & Sato cannot be considered anything close to a scientific consensus (besides, it's not peer reviewed and it's quite different to what peer reviewed papers say), and highlight that even those latest figures we are recently more used to (+1 meter) must be taken with some caution, as we always look for physical explanations, rather than empirical correlations. Yes, I do know that current data show that IPCC projections are underestimating SLR, but between 19-59 cm (IPCC) and more than 1 meter there's a big gap. SLR is, of course, a serious threat with any of those numbers. Sphaerica, regarding the citation, I'm trusting this blog post:
    "Hansen and Sato (2011) using paleoclimate data rather than models of recent and expected climate change warn that “goals of limiting human made warming to 2°C and CO2 to 450 ppm are prescriptions for disaster”. They predict that pursuit of those goals will result in an average global temperature exceeding those of the Eemian, producing decadal doubling of the rate polar ice loss, resulting in sea level rise of up to 5m by the end of this century."
  11. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Dana@67, I am not arguing for higher sensitivity.

    I just find it ironic that SkS can praise the accuracy of Hansen's 1988 projections without mentioning the dramatic drop in temperature projections. Here is a timeline derived from SkS blogs for Hansen's 2019 temperature anomaly:

    • Hansen (1988): Scenario A is "business as usual." Anomaly = 1.57°C.
    • Hansen (2006): Scenario B is ''most plausible.'' Anomaly = 1.10°C.
    • Dana (2011) in this blog: Adjusted Scenario B "matches up very well with the observed temperature increase." Dana is silent on the anomaly but the blog's chart indicates that Anomaly = 0.69°C.

    In summary, SkS have presented many blogs praising the accuracy of Hansen (1988) but have neglected to mention that Hansen's original projection for the 2019 temperature anomaly has plummeted from 1.57°C in 1988 to 0.69°C in Dana (2011). Why is this not mentioned?

    Dana, I find it difficult to believe that these plummeting temperatures have escaped your attention. Do you have a problem with declaring that your adjusted Scenario B is only slightly above Hansen's zero-emissions Scenario C?

  12. Dikran Marsupial at 16:58 PM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    steven mosher@152 If you want papers published with turn-key code and data, then I would agree that would be great, but who is going to provide the (quite considerable) additional funding to support that? Are you going to fund that just for climatology, or for all science? Sadly it is just unrealistic. Also, the idea that this would allow the reviewer to see if additional data would change the result would only apply to a limited range of studies. I am working on a project at the moment that has several processor-centuries of computation behind it!
  13. Lindzen Illusion #2: Lindzen vs. Hansen - the Sleek Veneer of the 1980s
    Moderator@66, your Eyecrometer sounds like a very useful device. Can I buy one on Amazon?

    You already have the post-2000 figures. Scenarios B and C both show a warming trend of 0.26 °C/dec for the 1958-2000 period, however, they diverge post-2000. I show the trends for 2000-2011 here and I summarise them below:

    • Scenario B = 0.38 °C/decade
    • Scenario C = 0.12 °C/decade
    • Real-world (GISS LOTI) = 0.15 °C/decade

    There is a definite dogleg in real world temperatures post-2000.

    I request that you publish the figures for the adjusted Scenario B so that I can incorporate them in my models.

    Response:

    [DB] "There is a definite dogleg in real world temperatures post-2000."

    The warmest decade in the instrumental record is a "dogleg"?  Where's the dogleg:

    [Graph: the global temperature records over the last ten years. Source]

    Well, it's not evident in any of the temperature records.  How about since 1975 (removing the cyclical noise & filtering out volcanic effects):

    [Graph: temperature records since 1975, with cyclical noise removed and volcanic effects filtered out. Source]

    No dogleg there.  How about the warming rate estimated from each starting year (2 sigma error):

    [Graph: CRU warming rate estimated from each starting year, with 2 sigma errors. Source]

    The warming since 2000 is significant.  Still no dogleg.

    Your focus on statistically insignificant timescales is misplaced.  While a world-class time-series analyst (like Tamino) can sometimes narrow down a window of significance to decadal scales, typically in climate science 30 years or more of data is used.
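    The window-length point is easy to see with ordinary least squares on a synthetic series. The sketch below is illustrative only: the trend, noise level, and years are invented, not GISS or CRU values. It fits a linear trend to a 36-year window and to the last 11 years, and compares the 2-sigma uncertainty on each slope.

```python
import numpy as np

def trend_2sigma(years, temps):
    """OLS slope (degC/yr) of temps vs years, and its 2-sigma uncertainty."""
    x = years - years.mean()                 # center the predictor
    slope = (x @ temps) / (x @ x)            # least-squares slope
    resid = temps - (temps.mean() + slope * x)
    n = len(years)
    se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))  # standard error of slope
    return slope, 2 * se

# synthetic record: 0.017 degC/yr trend plus weather-like noise (made up)
rng = np.random.default_rng(0)
years = np.arange(1975, 2011, dtype=float)
temps = 0.017 * (years - 1975) + rng.normal(0.0, 0.1, size=years.size)

slope_36, err_36 = trend_2sigma(years, temps)              # full 1975-2010 window
slope_11, err_11 = trend_2sigma(years[-11:], temps[-11:])  # 2000-2010 window only
# the short window's 2-sigma band is several times wider, so a
# post-2000 "dogleg" read off 11 points is well within the noise
```

    The slope uncertainty scales roughly as the window length to the power -3/2, which is why a decade of data says so little about the underlying trend.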

  14. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "You mean they used to *burn* this stuff. Burn? Seriously? Talk about primitive!" Yep, they'll say that the way people today say "they used to think the Earth was *flat*? Seriously?"
  15. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Tom " ...actually I think it will be a very easy sell to our grandchildren simply because they will be able to see the consequences of our inaction." I think it will be more than that. Our grandchildren, and their descendants, will have quite a lot of "What the ..!" responses. In a society which values carbon fibre products as a necessity (along with other products we've not yet developed), there'll be a lot of head scratching along the lines of "You mean they used to *burn* this stuff. Burn? Seriously? Talk about primitive!" (Or brainless, or silly, or stupid, or choose your own word.)
  16. steven mosher at 13:18 PM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Tom, I have no issue with your characterization. My point is rather simple. As you probably know I've advocated for open science from day 1. I think papers should be published with turn-key code and data. If that were the case any reviewer or subsequent reader could see for themselves if newer data changed the answer. And that would be that. Whatever the answer is. Instead we have a situation where on all sides people make inferences about the motives of authors, reviewers, editors. I'm suggesting that one way past some of this is to push for reproducible results. I don't have any issue calling out Michaels. When he and Santer testified together, I thought that Santer cleaned his clock. And I said so.
  17. 10 Indicators of a Human Fingerprint on Climate Change
    Also, a suggestion: I would suggest that you add in the increase in Ocean Heat Content Anomaly (OHCA). We expect about 90% of the Anthropogenic forcing to go into heating the upper ocean - and this is indeed what is observed. This close link between TOA radiative imbalance and OHCA increase is, I believe, some of the strongest evidence of anthropogenic warming. There are good recent updates on ocean heat content: Lyman et al. (2010). And there are other papers connecting the TOA radiation with the oceanic warming: you have some Trenberth ones, but Levitus principally talks about this: Levitus et al. (2005), Levitus et al. (2009). Apologies if this was already obvious and listed elsewhere.
  18. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "Their democratically elected governments back to 1900 who started the electricity generation industries with (except for hydro), the only feasible, abundant and cheap fuel available - fossil fuel - chiefly coal." What's your point here Ken? I never argued that fossil fuels should *never* have received government support-in their infancy. I was merely pointing out how fossil fuels *continue* to enjoy State subsidies, in spite of their apparent maturity. Yet mention *any* kind of government funding for renewable energy-especially ones which cut into the profits of the big energy suppliers, like rooftop solar-& the usual suspects cry foul. I call that *hypocrisy*!
  19. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    "Marcus, I will be the first to buy your PV Solar panel when it can re-produce itself without the help of relatively cheap fossil fuels." What a completely bogus argument. It took fossil fuels around 60 years, & massive State support, to reach the relatively cheap prices they are today-& even then only with ongoing, often well hidden, government "subsidies". Most renewable energy technologies are already approaching parity with fossil fuels, in about half the time & with far less State support than what fossil fuels received-& continue to receive. Also, as scaddenp rightly points out, future PV's probably *will* increasingly be made without the help of fossil fuels, as relatively cheap *alternative* energy sources-like bio-diesel, solar-thermal, geo-thermal, bio-gas, tidal & wind power-become the mainstay of the manufacturing industry (as is already the case in California, for instance). So again, Ken, your argument really is quite weak, & getting weaker with each new posting.
  20. 10 Indicators of a Human Fingerprint on Climate Change
    Hi John, Thanks for your useful site. About #7: The decreasing diurnal temperature range (DTR) is listed as a fingerprint of Anthropogenic warming, with references. However, you do not say what the basis is for this claim - why would the DTR be expected to decrease under greenhouse warming rather than solar warming or some other forcing? Initially it seems intuitive because obviously the sun shines in the day, and not at night - so increasing night-time temperatures "should" be due to something else. However, upon more thought I started doubting that this is the case. Indeed, going back over the references you gave, plus further back into for example the Stone and Weaver (2002, 2003) papers, and even the Easterling papers - I don't see anywhere a definitive statement that a reduced DTR is indeed a fingerprint of additional greenhouse forcing. There is some mention of this in the intros but never anything further. It seems to come down to a matter of clouds and soil moisture in the Stone and Weaver papers, and in Braganza (2004). All of them say it is a good index of recent climate change, while I don't see any of them actually saying it provides evidence of anthropogenic warming.
  21. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @81, actually I think it will be a very easy sell to our grandchildren simply because they will be able to see the consequences of our inaction. There will undoubtedly be climate change deniers in fifty years time, but I do not expect them to be more numerous, nor more influential, than modern day geocentrists. The problem is, if we don't act effectively now, the cost to our grandchildren of acting then will be much steeper. They will also have lost much that is irreplaceable, including major ecosystems such as the Amazon and Great Barrier Reef.
  22. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    KL - "I will be the first to buy your PV Solar panel when it can re-produce itself without the help of relatively cheap fossil fuels." Yeah! Good to know we have a customer for a project under consideration...
  23. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    I can't find anything more recent published on commitments that really adds to Hare and Meinshausen, though results from the AR5 full carbon cycle will be interesting. However, I think a zero-emissions scenario is interesting only from a theoretical point of view. The best we can hope is to see what commitment we have with a cap, though progress towards that isn't really happening either. If getting reductions is tough now as a policy sell, imagine how difficult it could be for our grandchildren selling really tough emission control and seeing NO results (in terms of slower climate change) from those efforts for decades. I don't think humans work well on those time scales.
  24. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @79, thanks for the link. From it I find the following graph by Hare and Meinshausen whose zero emissions trajectory is much more like what I would expect. Most of the initial rise in temperatures is, of course, due to the loss of aerosols, but some rise would still be expected without that. However, given the example of the Eemian, it is plausible that both of these constant forcing scenarios are wildly optimistic and that constant forcing would result in a two plus degrees C increase.
  25. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    FK&M 2011 base their reconstruction on a multiple regression on the Greenland summer temperatures taken from four stations, and the Winter North Atlantic Oscillation. Based on this, they show melt extents in the 1930's to be significantly more extensive than those in the 1990s. However, as shown below, Greenland summer temperatures (red line, first graph) in the 1990's were equivalent to those in the 1930s, while the winter NAO index (second graph) was only a little stronger (more positive) in the 1930s. Given that, we would expect only a slightly stronger Ice Melt Extent Index for the 1930's than for the 1990's. This is particularly the case given that the Summer Temperature has 2.6 times the effect of the Winter NAO on Ice Melt Extent (FK&M 2011, 4th page of 7). This suggests at a minimum that the FK&M reconstruction is sensitive to the choice of surface stations in constructing their index. Given that, there is a significant probability that the choice of a larger number of surface stations would show a significantly stronger end-20th-century melt relative to the 30's.
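    To make the station-sensitivity argument concrete, here is a toy version of the kind of two-predictor regression described above. Every number in it is invented except the 2.6:1 relative weighting of summer temperature versus winter NAO, which comes from the comment; the signs, noise level, and series lengths are not from FK&M 2011.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80  # hypothetical years of overlap between the predictors and the index

summer_temp = rng.normal(size=n)  # standardized Greenland summer temperature (toy)
winter_nao = rng.normal(size=n)   # standardized winter NAO index (toy)

# Toy melt-extent index: summer temperature weighted 2.6x the NAO term
# (the relative effect cited above); coefficients and noise are invented.
melt = 2.6 * summer_temp + 1.0 * winter_nao + rng.normal(0.0, 0.3, size=n)

# Multiple regression of the index on the two predictors plus an intercept.
X = np.column_stack([summer_temp, winter_nao, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, melt, rcond=None)
# coef[:2] recovers roughly (2.6, 1.0). Swapping in a different
# summer_temp series (i.e., a different choice of stations) changes the
# fitted coefficients and hence the whole reconstruction, which is the
# sensitivity at issue.
```

    Because the temperature term dominates by a factor of 2.6, whatever biases the chosen four stations carry are amplified in the reconstructed melt index.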
  26. muoncounter at 11:12 AM on 6 May 2011
    Hurricanes aren't linked to global warming
    Interesting new blog post by Michael Tobis: Is there a trend in Atlantic hurricanes? Some people are still stuck in hypothesis testing mode, which I think is starting to get a little bit crazy. We can only test hypotheses that way. We cannot use that approach to establish that changing the radiative properties of the atmosphere is safe. My claim is that we need to get our thinking out of nudge-world. This is not a nudge. ... I don't understand why people don't anticipate some ringing in a system that gets kicked this hard.
  27. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Lucia @140 and Mosher @147 suggest that using 2010 data would simply require substituting 2010 for 2007 in the sentence,
    "We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s. The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961."
    Close scrutiny of the graph of the Tedesco figures (Figure 2 above) shows that the difference between 2007 and 2010 mass loss is only 1/6th of the difference between 2005 and 2007. Assuming a similar magnitude difference in the Ice Melt Extent Index, one difference the inclusion of 2010 would make is that the number of years prior to satellite observations in which 2010 lies within the 95% confidence intervals reduces to approximately eleven. The number of reconstructed temperatures lying within one RMSEv of the 2010 value would probably also fall from two to either one or zero. The trailing 10 year moving average for 2010 would also rise by about 0.3 points to almost match that in the 1930's. Given these changes:
    1) Ice melt extent in the 2000's would be statistically indistinguishable from that for the highest period in the reconstruction. (This is true as it stands, but apparently was not worthy of comment in the paper.)
    2) The two years with the greatest ice melt extent would have occurred in the last ten years, and five of the most extensive melts would have occurred in the last ten years. In no other ten year period would more than two of the ten most extensive melts have occurred.
    3) The year with the highest melt extent would be the most recent year, with just eleven reconstructed values having 95% confidence intervals encompassing that value.
    4) The relatively low ice melt extents in the early satellite period are due in large part to major tropical volcanic eruptions, eruptions which were absent in the 1930s. In the absence of these eruptions, the longest period of extensive melting would be that straddling the end of the 20th century, not that in the middle. Clearly natural forcings have favoured extensive ice melt in the mid 20th century, while acting against it towards the end. (True also in 2009, and surely worth a mention in the paper.)
    A paper drawing these conclusions, IMO, would be substantially different from the paper actually produced. Certainly it would have been very hard for Michaels to place the spin on it that he has been doing. Of course, there are certain caveats to this analysis. Firstly (and most importantly), Tedesco shows mass loss, while FK&M show melt extent. The two are not equivalent and it is possible to have (as in 2005) very extensive melt areas with restricted mass loss due to greater precipitation. Given comments by Box, it is quite possible that that has indeed happened in 2010. If that were the case it would require an even more extensive revision of FK&M 2011 to bring it up to 2010, datawise. Second, this analysis has been done graphically, and has all the consequent uncertainties (ie, the numbers might be out by one or two in either direction). This is particularly the case in that FK&M's graph as reproduced by Lucia has an uneven scale.
  28. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    Its a letter to editor, but discussed (along with Hare and Meinshausen) at RC. Go here. Includes pointer to discussion of Matthews and Weaver.
  29. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Chris, (the "dense" remark was not about you, it was about everyone's ability to ignore the CATO article elephant in the room) "However an incisive review, particularly in late Dec/Jan, could easily have addressed the several problematic sentences in the paper, and ensured that the data was properly interpreted in relation to independent data." Well, he certainly could have, but Box's initial review tried to make the paper better by suggesting that FKM consider the warming and cooling causal factors that Box himself used to predict the 09-10 melt, accurately. But as a shortcut, he suggested they include the 2010 data. Now, we are all worked up because the data wasn't available, or timed right, even though, somehow, every other expert knew the implications, but whatever, who cares? Box's point is that publishing "as is" was insignificant and not up to JGR's usual standard. So his suggestion was ignored. This paper was too important to wait, apparently. Not my call or Box's. This, coupled with the way the paper has been used post-publication, is the real story here. Unless these are addressed, getting on to Box's handling of the review process seems like a misdirected priority. Although, it is certainly something to discuss in improving the review process in climate papers.
  30. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    scaddenp @78, I would be very dubious of Matthews' and Weaver's result. It is widely believed, and with good reason, that thermal inertia alone would result in an additional 0.5 degrees C of warming if CO2 levels were held constant. M&W show only half of that, and show no additional warming for zero emissions. Even allowing that CO2 levels would decline with zero emissions, they would do so on a decadal time scale, which would ensure significant early warming even with zero emissions. M&W is behind a paywall, so I cannot discuss their methods, and they may be correct. However, their paper does require close scrutiny before accepting its results.
  31. The Climate Show Episode 12: twisters, ozone and Google in the sun
    Hey, it's 15 minutes shorter than last episode! :-) I like to watch the videos myself - the figures, charts, and images that are interspersed add a lot to the story being told. And it's easy enough to have it open in a browser window, while doing something else on the PC (as much as my wife might be amused by me watching a vidcast and playing a game or working on a presentation for work at the same time...) Either way, I guess I know what I'm doing with my Friday night, now (yes, I am that boring! ;-)
  32. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    BP, Seriously, please do email and tell Lindzen and Christy and Spencer to stick to the science, and to stop talking through their hats about economics, politics and socioeconomics--then there will be no need for others to make the observation I did above. Until then, it is not political to call them on their politicization of science.....
  33. Arctic Ice March 2011
    Oops! I see I said it would take the water longer to freeze in the autumn! That is a relic of a revision! Sorry!
  34. Arctic Ice March 2011
    Interesting comments! My understanding is rather incomplete, but since the Arctic basin and surrounding seas are adding more meltwater to the surface waters each melt season, I suppose this means that the intensity (is that the correct word?) of the halocline during the autumn and winter is increased. In other words, the increased summer melting and reduction in the overall ice volume must be making the surface waters of the Arctic fresher which may well cause the surface waters to take just a bit longer to freeze in the autumn. It seems to me that this is a variable that is potentially in flux for a number of different reasons, and this must have consequences if it is so. For example, I understand that the fresher water being added to the Arctic waters during the melt season results in more surface ice forming earlier in the fall/winter for the simple reason that it freezes at a higher temperature than saltier water. And I gather that the Arctic still has a well-established halocline despite the significant and continuing reduction in overall ice volume, but I also suppose that eventually the halocline will begin to break down, if the trend continues. Right now, my best guess is that the Arctic still has plenty of relatively fresh water at the surface. Also, given that the sea ice that forms is fresher if it forms at higher temperatures because it is then able to exclude more salt, I suspect that the ice that exists in the Arctic is fresher than it used to be. Has anyone studied this? If the ice is fresher than it was a few decades ago, it makes sense that it melts more rapidly once the temperatures warm in the spring. This is part of the reason I made my comment about the potential for a record melt, since not only are the air temperatures higher than they were a few decades ago, but the ice that is present may have a higher melting point. 
That said, it seems to me that eventually, as the Arctic is open to wave and wind action for longer periods of time, it will lose a significant portion of its fresher surface waters to the Atlantic in particular. Is this happening? Or is it predicted to happen? If it does, given the rising temperature trend, I would think that future sea ice formation in the Arctic will be delayed and reduced, since the surface water will be saltier and the air temperature will be warmer.
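The freezing-point effect mentioned in the comment above can be made concrete. As a hedged illustration, here is the UNESCO (1983) freezing-point formula for seawater in Python; the two salinity values are made-up examples, not Arctic measurements:

```python
def freezing_point(S, p=0.0):
    """Freezing point of seawater in degrees C, given practical
    salinity S and pressure p in decibars (UNESCO 1983 formula)."""
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p)

# Fresher surface water freezes at a higher temperature:
print(freezing_point(30.0))   # about -1.64 C (fresher surface layer)
print(freezing_point(34.5))   # about -1.89 C (typical open-ocean salinity)
```

This is why, all else being equal, a freshening surface layer can begin to freeze earlier in the season.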
  35. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Well done Daniel Bailey, you have drawn the "lukewarmers" (but I think that excludes poster Chris) out of the woodwork to defend their champion Michaels. Yes, please KFM release the code and perhaps CA will do a thorough audit of KFM too, and then perhaps, just perhaps, pigs might fly.
  36. steven mosher at 08:37 AM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Oddly Dan thinks: "No, the data would not have been in the proper format the authors were accustomed to dealing with. But that is merely a technical limitation and could have been dealt with. After all, the Muir Russell Commission was able to replicate the entire "hockey stick" from original data in a mere two days (something the auditors still have not yet completed themselves), pronouncing it something easily accomplished (cf page 48 of the report) by a competent researcher." Muir Russell did no such thing. They constructed their own 'replication' of the land temperature series (chapter 6), not the hockey stick. Strange, the simple mistakes people make when grasping at straws. On the issue of failing to use up-to-date data, I'm fairly confident that no one here will throw out all papers that are published with truncated data series. If that were the case, some of Box's own work would qualify as bad science. And of course we have the famous example of Santer's paper. I would hope that KFM would publish their code so that an update could be made. That seems a simple matter, and I suppose someone can go hound them for the code. The issue seems to be this: "Had the authors considered all available data, their conclusion that ‘Greenland climate has not changed significantly’ would have been simply insupportable." That is a TESTABLE claim, but no one has taken the time to test it. It seems rather brazen to claim that their conclusions would have been unsupportable. Well, test the claim of this post: if they had considered all the data, would their conclusions be unsupportable? Don't wave your skeptical arms. Ask them for their code and do the work. And what specifically do you think was wrong? You wrote: "They write: “We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s. The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961.”" If you add 2010 data, which sentence is wrong? A: "We find that the recent period of high-melt extent is similar in magnitude but, thus far, shorter in duration, than a period of high melt lasting from the early 1920s through the early 1960s." Adding 2010 data won't change a word of that claim. B: "The greatest melt extent over the last 2 1/4 centuries occurred in 2007; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961.” Adding 2010 data will change one word here. So: "The greatest melt extent over the last 2 1/4 centuries occurred in 2010; however, this value is not statistically significantly different from the reconstructed melt extent during 20 other melt seasons, primarily during 1923–1961." Better?
  37. Lindzen Illusion #3 and Christy Crock #5: Opposing Climate Solutions
    A worldwide zero emissions isn't required. The Matthews and Weaver 2010 diagram was this. Holding emissions at the current level doesn't look too dangerous from this diagram. Of course, this doesn't address the equity issue, which would propose that the West reduce emissions heavily so developing countries can increase theirs, but that is a different issue.
    Moderator Response: [muoncounter] Please restrict image width to 500.
  38. David Horton at 07:43 AM on 6 May 2011
    Lindzen Illusion #4: Climate Sensitivity
    So, just pure Milankovitch with no CO2 contribution?
  39. Chip Knappenberger at 07:25 AM on 6 May 2011
    Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    And for a comparison to the Wake et al. conclusions provided by chris@144... Here is the first paragraph of conclusions from our paper:
    We have created a record of total annual ice melt extent across Greenland extending back approximately 226 years by combining satellite‐derived observations with melt extent values reconstructed with historical observations of summer temperatures and winter circulation patterns. This record of ice melt indicates that the melt extent observed since the late 1990s is among the highest likely to have occurred since the late 18th century, although recent values are not statistically different from those common during the period 1923–1961, a period when summer temperatures along the southern coast of Greenland were similarly high as those experienced in recent years. Our reconstruction indicates that if the current trend toward increasing melt extent continues, total melt across the Greenland ice sheet will exceed historic values of the past two and a quarter centuries.
    Not really very different from Wake et al. This conclusion follows directly from our work and will be little impacted by what occurred in 2010. In the last two paragraphs of our Conclusions, we go beyond our direct results, and speculate on sea level rise. In writing a paper, it is not particularly unusual to try to put the results in the bigger picture. -Chip
  40. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Albatross, I'm not being naive (or I should say I don't consider that I am). In this particular instance I consider many of the individuals complaining about this mess to be naive: There are two elements that ensure decent quality papers. The first is the inherent scientific integrity of the vast majority of publishing scientists (the most important part of peer review; i.e. "self-peer-review"). The second is the peer-review process itself (reviewers giving robust and unambiguous guidance to editors). It's very rare indeed that each of these fails at the same time. Of course KFM are "playing games". We all know that. So in this particular case, we lose a main element of peer review, and are left with the other to hold the fort. Unfortunately the reviewer most qualified to do this seems to have deserted his post. He wrote a dismal review (have you read it?), classified the paper as "good", and declined to re-review the paper at a time when he could have made some very strong recommendations indeed. Why complain now? My comments about Wake et al don't constitute a strawman, and I recommend that people try to look at this from an objective viewpoint. Wake et al demonstrate that so long as one is clear about the period of analysis, it's acceptable to publish a paper in 2009 that only includes data through 2005. It doesn't matter whether or not a reviewer suggested that they should have included more up-to-date data. The editor can overrule that if he considers the suggestion unreasonable. The obvious point that several people seem preternaturally unable to absorb is that one cannot include data in a paper that doesn't yet exist. The editor seems to have cottoned on to that reality. Of course that doesn't absolve KFM from interpreting their data properly in the light of the 2010 data as it stood, especially in December 2010. Unfortunately the expert reviewer declined to take part in the process that would have ensured they did. 
(not sure what the second reviewer was doing - I wonder whether he might be hiding under his bed until this blows over, and hoping that no one decides to identify him/her!). Sadly, we can't conjure up good outcomes by a combination of indignation and anger. It really helps if the systems of peer review work well enough at source.
  41. Bob Lacatena at 06:48 AM on 6 May 2011
    Why 450 ppm is not a safe target
    30, Jesús,
    ...their suggestion that 450 ppm will result in 5 meters sea level rise at the end of this century.
    Citation (direct link), please.
  42. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Not really grypo. However an incisive review, particularly in late Dec/Jan, could easily have addressed the several problematic sentences in the paper, and ensured that the data was properly interpreted in relation to independent data. (Not sure what you mean by O'Donnell and Steig.) Yes, I've seen a little of what the authors have done with it. What do you expect - that's how a couple of the authors earn a living, I believe. Not sure about Box's "ideas" - however he has considerable expertise in the subject of the paper, and it would have been nice if he'd chosen to review the paper robustly in Jan. At the very least some incorrect statements in the abstract should have been removed/corrected. Not sure what you mean by "acting dense to this issue to appear 'nice'"? I hope I'm not acting dense, and I certainly don't think I'm being nice, although I am trying to be scientific, objective and accurate! Here are the conclusions of Wake et al 2009 that you suggest should be a model for how KFM should have written theirs. They don't actually seem so different in general outline from FKM's. I would conclude from the conclusions of Wake et al that things are not very different now in Greenland than then, just like FKM (my bold in the conclusions below). I don't actually think that's going to be true going forward into the future, but a dispassionate read of Wake et al's (2009) conclusions wouldn't give you that idea, apart perhaps from the sentence about disappearance of some ice sheet points... CONCLUSION We have presented a modelling study tracking SMB changes of the Greenland ice sheet since 1866, reflecting how the ice sheet has behaved under the climatic conditions of the 19th–21st centuries. Over the time window of our study, we find that the Greenland ice sheet has reacted to, and endured, a temperature increase similar to that experienced at present. 
Higher surface runoff rates similar to those of the last decade were also present in an earlier warm period in the 1920s and 1930s and apparently did not lead to a strong feedback cycle through surface lowering and increased ice discharge. Judging by the volume loss in these periods, we can interpret that the current climate of Greenland is not causing any exceptional changes in the ice sheet. Mass loss through glacier discharge is currently believed to dominate mass loss through SMB, and both processes are likely to be correlated. Forman and others (2007) report that the ice sheet retreated 1–2 km inland at Kangerlussuaq, West Greenland, over the past 100 years. Although our model resolution is 55 km, we predict complete disappearance of some ice-sheet points in this area, in line with these observations. We are not able to shed light on the relative contributions of ice dynamics vs SMB to the current mass loss, but our study puts the modern-day changes into the context of longer-term century timescale changes."
  43. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Chris @139 and @141, I agree with some of your comments; in particular I agree that Dr. Box could have handled the review better, and so could the editor. I do, however, take issue with other claims and arguments that you made. Unfortunately, I do not have time to properly address them all today. At the end of the day, and despite all the attention, KFM have to this day still not bothered to include the 2010 data-- surely data format issues have not held them back for eight months! Note what the Sir Muir Russell inquiry accomplished in a matter of days. Instead, it is unfortunate that we now have "lukewarmer" bloggers trying to defend KFM, as of course they would. It would be much better for all concerned if KFM did the analysis themselves. While KFM and Wake et al. are in broad agreement, there are striking differences in how the results are interpreted, and how the findings are being used (or is that abused, in the case of Pat Michaels?). Your comment about Wake et al.'s "void" is a strawman and very much avoids the issue here and now, unless of course you can demonstrate that reviewers of the Wake et al. paper specifically requested that they include the 2006-2008 data. What is (intentionally) lost in all this is that the current warming is in all likelihood not transient, whereas the 1925-1955 event was, and its causes are in all likelihood very different from those driving the warming now and into the future-- I'll post some more on this tomorrow when I have more time. Citing the KFM paper as evidence that what is happening now (and where we are heading) is not exceptional, and possibly even natural or within the realm of natural variability, is thus misleading, disingenuous and wrong. The duplicity being shown by Knappenberger and Michaels is also deeply disturbing. I could be more direct and forthright, but that might violate the comments policy. 
Chris, let us not be naive about the game that is being played by KFM-- 'skeptics' have quite a long history of "playing" peer-reviewed journals.
  44. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Chris, seriously? Are you blaming a bad review for this paper? Is this O'Donnell/Steig all over again? You have seen what the authors have done with it since, right? Do you really think Box's ideas could have saved it from the publishing afterglow? Do we really need to act dense to this issue to appear *nice*? I'd look at Wake 09 to see how this paper's conclusions should have been written, as stated earlier. This way we can congratulate the authors on the statistical methods and then turn back to analyzing reality.
  45. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    Eli, I certainly agree with your first statement. You reproduce something Martin Vermeer stated, viz:
    ""This is clearly what Box means by "causal factors"... an analysis of ice melt as relating to temperature is what he would have liked to see, to provide the "depth" that he is missing. ......"
    Two things about that: 1. First, that may very well be what Box meant, but if he did mean it, why didn't he say so? Several commentators have suggested interpretations of what Dr. Box meant in his review. In my opinion it's farcical. If it's not possible to give clear and unambiguous guidance to an editor, then I don't see how one can subsequently complain when a deficient paper results. 2. Secondly, if Martin thinks that "an analysis of ice melt as relating to temperature is what he would have liked to see...", then he could look at page 3 of FKM2011 and read the section that begins: "Our Greenland melt reconstruction therefore focuses on the relationship between summer average temperatures from the available four coastal locations in southern Greenland and our standardized melt index." where an analysis of ice melt as relating to temperature is described; i.e. it uses the empirical temperature and melt data for the period 1979-2009 in order to optimize a model for the relationship between temperature and melt that can be used to determine historical melt. Dr. Box classified the paper methodologically as "good". I'm not playing Devil's Advocate here. In my opinion a truly dismal review has caused a huge amount of confusion over this paper. We (and the editor!) shouldn't be having to guess what the reviewer meant, and if he felt so strongly about defects in the paper he should have taken the opportunity to re-review it in early Jan and do something about it, rather than get mad.
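For readers not familiar with this kind of calibration-and-reconstruction exercise, a minimal sketch of the general approach (regress a melt index on summer temperature over an observational period, then hindcast earlier years) might look like the following. All numbers here are synthetic and purely illustrative; they are not FKM's data, coefficients, or exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observational" period, standing in for 1979-2009:
t_obs = rng.normal(0.0, 1.0, 31)                     # summer temperature anomalies
melt_obs = 2.0 * t_obs + rng.normal(0.0, 0.5, 31)    # observed melt index

# Calibrate a linear temperature-melt model on the overlap period.
slope, intercept = np.polyfit(t_obs, melt_obs, 1)

# Hindcast the melt index from a longer historical temperature record.
t_hist = rng.normal(-0.2, 1.0, 140)
melt_hindcast = slope * t_hist + intercept
```

A real reconstruction would of course also propagate the regression uncertainty, which is where the ±2σ intervals discussed in this thread come from.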
  46. Why 450 ppm is not a safe target
    Jesús Rosino at 02:02 AM on 6 May, 2011 IPCC physical models exclude glacier and ice sheet mechanics - I'm sure you are aware of this. So they're not directly comparable with Rahmstorf & Vermeer. Although I agree that these semi-empirical quantifications are not a consensus yet (too new, too few), I'd say that it is already a consensus that those IPCC AR4 figures are an underestimate. I'm just trying to understand your point.
  47. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    barry
    I realize there are issues with the methodology amongst some, but getting a fix on inclusion of 2010 data with the methods used in the paper would be a good next step, no? Can this be done by the good people here questioning the paper? Quantitative analysis beats speculation every time
    I asked Chip the same thing by private email, explaining that I'd like to do this myself. The satellite melt index data KFM2011 used to compute the average in their figure 2 come from the indices in Abdalati and Steffen, 1997; Fettweis et al., 2007; and Mote, 2007. His answer is that, as far as he is aware, those data aren't available yet. When reading here, I saw the link to the Tedesco paper, which contains melt index data for 2010. (This paper was published after the final version of KFM2011 was submitted to JGR but before KFM2011 was published. But as readers are aware, one isn't permitted to completely recompute one's results after a paper has been accepted.) The Tedesco paper would seem to have data that we might use to make a decent guess as to whether the results of KFM2011 would be obsolete when 2010 melt data were published. Specifically, had Tedesco been used as the melt index, the relevant 2010 melt data would be the value for 2010 in figure 1C in Tedesco. I examined figure 1C and drew a trace to see how the melt index for 2010 compares to melt data for 2007: It seems that, according to Tedesco, the melt index for 2010 exceeded that for 2007 by a tiny amount. Incorporating this value into KFM would involve adding a blue dot for 2010 to figure 2 in KFM. Here's my marked-up version with lines illustrating various features of the graph: Assuming the 2010 melt indices from the two other groups exceed their 2007 values by a similarly tiny amount, the final blue dot representing 2010 melt data: * would likely still fall inside the ±2σ uncertainty intervals for roughly 20 years of the reconstructed melt data. That is: given what we know, we would not state that 2010 had a record melt index. (I think this is the relevance of explaining that the measured melt index for 2007 falls inside the uncertainty of the reconstruction. Based on what we know, we can't call 2007 a record relative to reconstructions with start years of 1840 or even earlier.) * the 10-year lagging average would likely still fall below the levels achieved during the previous rapid melt period. * the duration of the recent rapid melt period would remain shorter than the previous one. I suspect that the first two of these things may change during the next El Nino. But as far as I can see, incorporation of the 2010 melt data would require KFM to only slightly tweak their text. Adding the data did not make the paper "obsolete". What mystifies me is why Jason Box, who is a co-author and who claims the 2010 data would and does make the paper obsolete, didn't just slap the 2010 data point from Tedesco onto KFM2011. I'd do it myself, but I don't know if re-baselining would be required. But it seems to me that if we added the Tedesco data to KFM2011's figure, the appropriate tweak to KFM2011's abstract is to strike out "2007" and replace it with "2010". This may change when the melt data from the other groups are available. I'll try to remember this issue and comment when it does become available. My fuller discussion is here
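The inclusion test lucia describes is simple to state explicitly. A minimal sketch, with all numbers hypothetical (they are not values from FKM 2011 or Tedesco 2011):

```python
def within_2sigma(observed, reconstructed, sigma):
    """True if an observed melt index falls inside the +/-2-sigma
    uncertainty band around a reconstructed value."""
    return abs(observed - reconstructed) <= 2.0 * sigma

melt_2010 = 1.95     # hypothetical standardized melt index for 2010
recon_1930s = 1.60   # hypothetical reconstructed value for a 1930s season
sigma = 0.25         # hypothetical reconstruction uncertainty

print(within_2sigma(melt_2010, recon_1930s, sigma))  # True
```

If the observed value sits inside the band for some reconstructed season, one cannot call it statistically distinct from that season, which is the sense in which 2007 (or 2010) may not be a statistically significant record.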
  48. Frauenfeld, Knappenberger, and Michaels 2011: Obsolescence by Design?
    I’m going to be a party-pooper again, and state why I consider your narrative to be deficient Daniel. You state that you’re planning a future comment on: ”The editorial and decision making process at the heart of the publication of an obsolete paper.” I hope that when you do so, you can be a little more objective in your analysis than here. Your aim should be to make convincing arguments to reasonable people who might be confused about some of the issues related to the science and politics of climate studies. So far it doesn’t seem like it. Here’s some of what I consider to be problematic: 1. Your paragraph (it helps with long posts to number your paragraphs) starting and ending with:
    ”This lack of context……. Because to them, FKM 2011 adds nothing to the science and is thus obsolete.”
    ... isn't really correct. FKM2011 does add something to the science, and its benefits might have outweighed the negatives if it had been properly reviewed. One of the important elements of science is that interpretations are broadly independent of methodological approaches (in other words the essential features of the study object – the stark and uninterpreted historical and contemporary Greenland melt in this case- are revealed by independent analyses using different methods). FKM2011 is surely a useful confirmation of the conclusions of Dr. Box who addressed this subject in a paper in 2009: [L.M. Wake et al (2009) Surface mass-balance of the Greenland ice sheet since 1866, Annals of Glaciology 50, 178]. The abstract concludes: (concerning Surface Mass Balance = “SMB”):
    ”All SMB estimates are made relative to the 1961–90 average SMB and we compare annual SMB estimates from the period 1995–2005 to a similar period in the past (1923–33) where SMB was comparable, and conclude that the present-day changes are not exceptional within the last 140 years.”
    FKM2011 can't be too far wrong if the conclusion from the perspective of 2010 is similar to the perspective of an expert in 2009. I hope we're not going to go down the route of insisting that the essential elements of a particular local climate state are defined by 1 year's data! That sounds like "cherry-picking" to me. 2. You suggest (in a somewhat Mills and Boon style if I may say so!) that: ”a 2010-shaped void left its mark by its absence.” But Dr. Box's 2009 paper included data through only 2005. Are we going to complain that in Dr. Box's study ”a 2009-shaped void left its mark by its absence”? …not to mention the 2008-shaped void, and the 2007-shaped void, and the 2006-shaped void? Or are we a little bit reasonable and recognise that one can only include data in a paper that actually exists? And how do we decide when to cause a fuss about where to extend a study? 2008 and 2009 melt years were the lowest for some time (Tedesco and Fettweis data in Fig 2 of your top article). If FKM2011 was actually FKM2010, would this kerfuffle have arisen? Nope. 3. You state omnisciently:
    ”No, the data would not have been in the proper format the authors were accustomed to dealing with. But that is merely a technical limitation and could have been dealt with. After all, the Muir Russell Commission was able to replicate the entire "hockey stick" from original data in a mere two days (something the auditors still have not yet completed themselves), pronouncing it something easily accomplished by a competent researcher.”
    That is entirely unreasonable. It’s not that the data weren’t “in the proper format”. The data from independent groups used by FKM for their model simply weren’t available! That’s not “merely a technical limitation”. It means the analysis for the model simply couldn’t be done. Your analogy of the Muir Russell Commission (MRC) is completely inappropriate, and I fear it makes you appear as if you don’t consider this a subject that should be addressed objectively. The data for that reproduction existed, and in fact had been already “reproduced” several times (e.g. by Wahl and Amman). The independent data used by FKM for their reconstruction didn’t exist. This seems astonishingly easy to understand…. 4. One of the things that stands out from the comments on this thread is that the real problem with FKM2011 is easy to glean and would have been straightforward to address if a reviewer had chosen to do so. Right through the thread (e.g. see my comments, those of Dikran Marsupial, Tom Curtis and Dr. Pelto) we have highlighted statements, sentences or sub-paragraphs that were inappropriate interpretations given the scope of the data and independent knowledge. A reviewer should have ensured that these statements were removed from the paper and the data properly described in relation to the marked 2010 temperatures and melt, and especially in terms of the sort of information that Dr. Pelto has described about the nature of glacier recession in the early 20th century melt period and now. That was what was required of this paper - a thorough and if necessary suitably blunt review in late Dec/early Jan. At least one reviewer was offered the opportunity to do this on Dec 27th last. Unfortunately he declined. 5. We would have been left with a rather dull but confirmatory study of Greenland melt, that would have added a little (a few more years) to Dr. Box’s own paper published in 2009 (see [1.] above). That would be fine. 
You’re allowed to publish dull and confirmatory studies. The “house journals” of particular scientific fields are full of them. At least one journal I know of even has categories for the reviewer’s general assessment of a paper that includes the category “dull but sound”. That’s an indication to the editor that the manuscript is pretty borderline. In this particular case at least one reviewer thought FKM2011 was better than that. He categorised it as “good”.
  49. Jesús Rosino at 04:42 AM on 6 May 2011
    Why 450 ppm is not a safe target
    Sphaerica, I was thinking of their suggestion that 450 ppm will result in 5 meters sea level rise at the end of this century.
  50. Pete Dunkelberg at 04:32 AM on 6 May 2011
    Medieval project gone wrong
    Mangini et al. 2007: Persistent influence of the North Atlantic hydrography on central European winter temperature during the last 9000 years (more thorough than their 2005 paper) finds solar variation to be the cause of European winter warm periods. More interestingly, they say that their isotope methods record mainly winter variation, while tree rings record mostly spring and summer, when warming is less. Thus the best comparison with their results is with recent Northern Hemisphere winter temperature trends (where is the graph of that?). Arrhenius already predicted that winters would warm faster than summers, and this has generally been the case in the recent warm-up. However, NH winters are also sensitive to everything from the Madden-Julian Oscillation to tropical volcanoes, and we have seen in the last two years that some NH areas can have cold winters while Arctic amplification is concentrated farther north.




© Copyright 2024 John Cook