
Recent Comments


Comments 46451 to 46500:

  1. Real Skepticism About the New Marcott 'Hockey Stick'

    @49 chriskoz

    Many thanks for the excellent advice.  I had not yet taken the time to explore how to format the graphic for the net, so thanks for saving me the time.

  2. Further Comments on The Economist's Take on Climate Sensitivity

    Paul Magnus and Authors

    As a long-time subscriber to The Economist, I have found their information on a range of subjects to be well researched and first class.  It is a 'newspaper' widely read amongst decision makers all over the world.  Their science and technology section is written for the layman but is well detailed and cutting edge in many fields.

    I would not dismiss their subject article on climate change as a sort of denialist rant.  

    The Discovery News rebuttal story you quote above is unimpressive in detail and would have much less authority than the Economist story.  

    The Economist is not denying that CO2 contributes to global warming, but the authors make a good case that the extent of warming has probably been overstated, and the uncertainties probably understated.

  3. Land Surface Warming Confirmed Independently Without Land Station Data

    This reanalysis is known to have large temperature biases in some parts of the world. This is expected, since there is no land temperature input and sea temperatures are also reconstructed. It is expected that the next generation of reanalyses will integrate temperature and wind measurements to improve the results. Nevertheless, and counterintuitively, most of the information is carried by pressure measurements. It is also expected that the addition of recently recovered historical weather data (like the Old Weather project) will further improve the results.

  4. Dikran Marsupial at 22:10 PM on 12 April 2013
    Models are unreliable

    Bouke wrote: "The issue I have is that I see scientists making predictions on the basis of their models. Those predictions will be used as the basis for policy,"

    Exactly what are the policy decisions that are being made based on predictions of SSIE?

  5. Models are unreliable

    bouke: Maslowski, et al 2012

  6. The anthropogenic global warming rate: Is it steady for the last 100 years?

    KK Tung @6.

    While you are correct to say (as you do) that it is not wrong if net forcing is found to be linear over the period, this is not the same as basing your study on an assumption of net forcing being linear over the period. And given that, I would suggest that it is wrong to not emphasise your reliance on such a linearity and also wrong not to make clear the levels of non-linearity/linearity developed by other people. You acknowledge (but leave us to 'eyeball' your fig 2a rather than properly illustrate it) that GISS conclude there was accelerated heating after 1978 and link to two other graphs in the post. Those two graphs have very significant non-linearity - the SKSci graph of IPCC forcing shows an 8-fold increase in the rate of forcing after the 1960s and it is even more pronounced on the Skeie et al 2011 graph. As for findings of linearity - forgive my ignorance but do you reference any findings of linearity?

    Your writing comes close enough to accusing others of creating/exaggerating the non-linearity in net forcing solely to support an otherwise unfounded theory (eg "...allowing the GISS model to produce..." "...models to adopt ..." and @6 "...were trying to simulate the observed warming using forced response alone..."), which I consider close enough to be worthy of comment. Are you in any way saying the non-linearity is being exaggerated for non-scientific reasons?

    You comment that warming following CO2 with such little time lag is rather tricky and "will not be discussed here."  I like 'tricky'.  Is it to be discussed in a later post?  Or is such discussion available elsewhere?

  7. The anthropogenic global warming rate: Is it steady for the last 100 years?

    Thanks for your integrity, Dumb Scientist.  Looking forward to more insights.

  8. Real Skepticism About the New Marcott 'Hockey Stick'

    Paul@48,

    While you work on your poster, consider this bit of technical advice:

    Don't save it as a JPG file. The JPEG format (natural contone image compression) does a very bad job with text (it does not compress text well, and the lossy result means the text gets distorted); save it in PDF format instead.

    Your embedded graphic (being a simple, non-natural image) will be shown best in PNG format (both smallest and lossless).

    You'll achieve both small file size (easily downloadable) and nice resolution independent zoom-in.
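
    As a concrete illustration of that advice (a minimal matplotlib sketch; the chart contents and filenames are placeholders, not Paul's actual poster):

    ```python
    import matplotlib.pyplot as plt

    # A stand-in for the poster graphic: a simple line chart with text labels.
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot([0, 100], [0.0, 0.9], label="temperature anomaly")
    ax.set_xlabel("years")
    ax.set_ylabel("deg C")
    ax.legend()

    fig.savefig("poster.pdf")             # vector PDF: text stays sharp at any zoom
    fig.savefig("poster.png", dpi=200)    # PNG: lossless and small for simple flat graphics
    fig.savefig("poster.jpg")             # JPEG: lossy, smears text edges (the case argued against above)
    ```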

  9. Real Skepticism About the New Marcott 'Hockey Stick'

    @46/47 Tom Curtis

    Quite right, I would indeed prefer the @20 graphic to be both effective and honest to the science, and I'm happy to spend the time to improve it.  I had a problem discerning your tone in @21; you will understand if I am wary on climate comment threads, so it is good of you to have another go.

    By taking the time to spell them out, your suggestions now make a lot more sense to me and I will definitely look at them again as soon as I can.  I am happy to spend the time to get it 'right' in the kinds of ways you suggest.  And, of course, the  broad message of the graphic remains sadly the same.

    Thanks, Paul

  10. Further Comments on The Economist's Take on Climate Sensitivity

    So you would just wish The Economist would chip in with comment on how we could achieve greenhouse gas reductions within the economy, rather than question the science.

    I don't think they have contributed to the economics of meeting the 2 C target. Rather they want to rant on about the science. It's the sort of denial we find in the general populace. Avoid, avoid, avoid.... Put off.

  11. Dumb Scientist at 15:26 PM on 12 April 2013
    The anthropogenic global warming rate: Is it steady for the last 100 years?
    So yes, the paper uses the linear detrended AMO index - which has been indicated by several publications to be in part misidentified global warming.

    Dr. Tung promised to discuss "the choice of the AMO Index." Hopefully his discussion will address the fact that the linearly detrended AMO likely contains an anthropogenic trend after 1950. The alternative AMO index you mentioned which is relative to the global mean SST seems like it would be much less likely to mistakenly subtract signal.

    KR is right, and I was wrong to manufacture unwarranted doubt by implying that Tung and Zhou 2013 might have used a different AMO index.

    Exploring the frontiers of knowledge inevitably results in mistakes. The true test of a scientist is admitting these mistakes and moving on. Especially when the mistake affects the future of our civilization.

    So: I was wrong. KR: thank you for correcting me, and for all your other informative comments.

  12. Dumb Scientist at 14:10 PM on 12 April 2013
    Tung and Zhou circularly blame ~40% of global warming on regional warming

    Dr. Tung responds.

  13. The anthropogenic global warming rate: Is it steady for the last 100 years?

    Dumb Scientist - From Tung and Zhou 2013:

    We will use the standard AMO Index of Enfield et al., which is defined as the North Atlantic mean sea-surface temperature (SST), linearly detrended.

    So yes, the paper uses the linear detrended AMO index - which has been indicated by several publications to be in part misidentified global warming.

  14. Real Skepticism About the New Marcott 'Hockey Stick'

    Paul, just a small additional note.

    Were you to make the corrections I recommend (or equivalent), I think your graphic would be an excellent tool that should be more widely spread; to which point I would heartily recommend that SkS add it as an update to both this post and to the Y-axis of evil quote.

  15. Real Skepticism About the New Marcott 'Hockey Stick'

    Paul R Price @42, the question is whether or not we will accurately report the science.  As it stands, your diagram shows as temperatures never experienced by human civilization a range where, at one point, Marcott et al show there was a 50% chance that the 300-year average of temperatures experienced by human civilization was higher than that, with a near certain chance that for individual decades they were higher than that.  You also say:

    "In the past 100 years temperature has increased from coldest in 10,000 years to the warmest in 125,000 years.  Warming is set to continue rapidly."

    That, again, contradicts what Marcott et al actually found.

    You may think it too much trouble to correct the graph by raising the "never experienced by human civilization" limit by 0.4 C to accurately reflect Marcott et al; or to modify the second claim to read:

    "In the past 100 years temperature has increased from close to the coldest in 10,000 years to close to the warmest in 125,000 years, and are still rising. That wWarming is set to continue rapidly."

    (Additions in italics)

    If you do so, you are choosing to be "effective" over being scientifically "honest", in Schneider's terms.  Why do so when you could so easily do both?

  16. Real Skepticism About the New Marcott 'Hockey Stick'

    Glenn Tamblyn @43, Chris overstates the brevity of the 8.2 K event to the point of misrepresentation IMO.  He understates the geographical area of coverage; and overstates the SH warming by using model data where proxy data shows a more heterogeneous situation.  Finally, he fails entirely to allow for the effects of errors in synchronization between different proxies.  See my post @44 for details.

  17. Real Skepticism About the New Marcott 'Hockey Stick'

    Chris @39:

    1) In Marcott's Agassiz-Renland data, the 8.2 K event first shows at 8.21 Kya and last shows at 8.09 Kya, giving the event a total distinguishable duration of 140 years, with a peak duration of 100 years. That compares to a Tamino spike with a total distinguishable duration of 180 years and a peak duration of 20 years. (The total distinguishable duration is 180 rather than 200 as, for the first and last 10 years, the spike would not be distinguishable from the background in any way.) Thus the 8.2 K event is comparable to a Tamino spike, and if anything should be easier to detect due to its longer duration at low values.

    This is in close agreement with your first link on the timing of the 8.2 K event, which shows a graph in which it can clearly be seen that the event starts at just under 8.25 Kya, has a period of lowest values from 8.21 Kya to 8.14 Kya and finishes around 8.07 Kya, giving a total distinguishable duration of 180 years and a distinguishable peak duration of 70 years. Or, as claimed in the abstract:

    "Using a composite of four records, the cold event is observed as a 160.5 yr period during which decadal-mean isotopic values were below average, within which there is a central event of 69 yr during which values were consistently more than one standard deviation below the average for the preceding period."

    Finally, your second link on duration says:

    "Greenland temperature cooled by 3.3±1.1 °C (decadal average) in less than ∼20 years, and atmospheric methane concentration decreased by ∼80±25 ppb over ∼40 years, corresponding to a 15±5% emission reduction. Hemispheric scale cooling and drying, inferred from many paleoclimate proxies, likely contributed to this emission reduction. In central Greenland, the coldest period lasted for ∼60 years, interrupted by a milder interval of a few decades, and temperature subsequently warmed in several steps over ∼70 years. The total duration of the 8.2 ka event was roughly 150 years."
    (My emphasis)

    If you are going to quote only the coldest period of the 8.2 K event, as you appear intent on doing, you must compare it with only the coldest period of a Tamino style spike. As that spike has a triangular, not a square, wave form, that coldest period is just 20 years.

    All in all, the comparison of durations suggests that an 8.2 K event is more likely, rather than less likely, to be picked up in the proxy record than a Tamino style spike.

    2) From the Kilimanjaro paper I linked to, the Kilimanjaro oxygen isotope data show a negative excursion at 8.2 Kya, although it is not large relative to other excursions. The Soreq Cave oxygen isotope data show a positive oxygen isotope excursion at that time, indicating an enhanced isotope signature in groundwater. The isotope signature of d18O in caves depends on two functions, one of which always gets heavier (more 18O) with decreasing temperature, and one which can get heavier or lighter with decreasing temperature. Thompson et al 2006 invert the d18O data for Soreq Cave, presumably because the former function dominates at Soreq Cave, hence indicating a significant temperature excursion in Palestine.

    Morrill et al 2013 show an up-to-date picture of the cooling pattern of the 8.2 K event. It is characterized by marked cooling in the North Atlantic and surrounding regions, including the Tropical Atlantic, but by an ambiguous pattern of regional warming or cooling in the south.

    Overall it would have been a net cooling event. GMST would have shown a negative excursion even if not all points globally experienced such an excursion. In fact, had it not, we would have to assume that climate sensitivity is very low, because there was definitely an increase in albedo from extended glaciers, sea ice extent and snowfall in the Northern Hemisphere.

    The question is not whether there was an excursion, but how large (which is indeterminate) and why does it not show up in either the Marcott et al reconstruction or Tamino's partial replication of that reconstruction?

    3) Dome C data shows four large negative excursions within two standard deviations of the error in dating of 8.2 K. I doubt that they are statistically significant, but it provides no evidence of warming at 8.2 K, and is consistent with (that weakest of evidentiary measures) cooling at that time. They do not show up at low resolution as there are intervening spikes in temperature. Vostok does show a statistically significant spike, but depending on the age error that may or may not align with the 8.2 K event. If either of the two troughs on either side of that spike were in actuality the period of the 8.2 K event, then the 8.2 K event was global and about the size of a Tamino spike. If instead the spike was aligned, the 8.2 K event, while still reducing the GMST, would only do so by a small amount. Which of these is the case we do not know.

    We too often pay only lip service to the error in dating or temperature estimation. We simply align proxies across the globe as though their mean date represented the actual date - but it does not. It is only the best estimate out of a range of dates, and may have a probability of being the actual date as low as 5% or less.

    Consequently, if I could suggest any single improvement to Tamino's attempted demonstration that a spike would show, it would be that he introduce the spike, then vary the time of that spike in each proxy based on the temporal error at the control points, and then, from the pseudoproxies so created, attempt to reconstruct the spike using the full Marcott procedure. His method assumes that all the temperature spikes are in fact aligned across all proxies, with the temporal jittering only smoothing the data. The temporal jittering, however, is an attempt to allow for the fact that, with very high probability, the temporal alignment of the various proxies has been lost. That is, events dated at 8 Kya in all proxies probably occurred at different times, by a varying amount depending on the temporal error of the proxy.
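
    A minimal sketch of the kind of test being suggested (all numbers are invented for illustration; this shows the jitter-the-spike idea only, not the full Marcott procedure):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.arange(0, 11000, 10)                   # common 10-year time grid (years BP)
    center, half_width, amplitude = 8200, 100, 0.9   # triangular 200-yr, 0.9 C spike

    def spike(t, c):
        """Triangular temperature spike centred at c (deg C)."""
        return amplitude * np.clip(1 - np.abs(t - c) / half_width, 0, None)

    n_proxies = 73
    stack = np.zeros_like(grid, dtype=float)
    for _ in range(n_proxies):
        dating_error = rng.normal(0, rng.uniform(100, 300))  # proxy-specific age error (yr)
        resolution = rng.uniform(100, 300)                   # proxy sampling step (yr)
        sample_times = np.arange(0, 11000, resolution)
        stack += np.interp(grid, sample_times,               # back onto the common grid
                           spike(sample_times, center + dating_error))

    stack /= n_proxies
    print("peak of the stacked spike: %.2f C (0.90 C injected)" % stack.max())
    # The jitter and coarse sampling smear the spike; whether the attenuated peak would
    # still stand out above the reconstruction noise is what such a test would measure.
    ```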

    It is a great advance by Marcott et al, IMO, to find a way to allow for that and to produce a statistical description of the constraints on global temperature given that error. Unfortunately many of us have insisted on interpreting Marcott et al as though they had not made that advance, or as if it was of no consequence.

  18. Dumb Scientist at 12:02 PM on 12 April 2013
    The anthropogenic global warming rate: Is it steady for the last 100 years?
    How can the anthropogenic warming be approximately linear in time when we know that atmospheric CO2 has been measured to increase almost exponentially? Implicit in that statement is the expectation that the warming (i.e. the rate of surface temperature increase) should follow the rate of increase of greenhouse gas concentration in the atmosphere. This rather common expectation is incorrect. An accessible reference is that from Britannica.com: "Radiative forcing caused by carbon dioxide varies in an approximately logarithmic fashion with the concentration of that gas in the atmosphere. ...

    No, I'm not ignoring the last century of physics. It's exasperating to be lectured about the ancient fact that CO2's radiative forcing in Earth's current atmosphere depends approximately on the logarithm of its concentration. My article linked to a graph of CO2's radiative forcing, which accounts for this logarithmic dependence. Notice that CO2's radiative forcing increases faster after 1950, because increasing CO2 faster also increases its logarithm faster. That's what makes the forcing "slightly more curvy than linear".
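
    For a rough numerical illustration of this point (a sketch using the simplified Myhre et al. 1998 expression with approximate CO2 concentrations; these are not the exact values behind the linked graph):

    ```python
    import numpy as np

    # Simplified CO2 forcing expression (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2
    C0 = 280.0                                      # approximate pre-industrial CO2 (ppm)
    co2 = {1900: 296.0, 1950: 311.0, 2000: 369.0}   # rough historical concentrations (ppm)

    forcing = {year: 5.35 * np.log(c / C0) for year, c in co2.items()}
    print("forcing added 1900-1950: %.2f W/m^2" % (forcing[1950] - forcing[1900]))
    print("forcing added 1950-2000: %.2f W/m^2" % (forcing[2000] - forcing[1950]))
    # Despite the logarithm, the second half-century adds several times more forcing,
    # because CO2 itself rose much faster after 1950.
    ```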

    As shown in Figure 2a, the green line is quite nonlinear and shows the acceleration of greenhouse gas forcing after 1950 referred to by DS, but the aerosol cooling also increased after 1950. The net anthropogenic forcing is the small difference of the two large terms.

    That same radiative forcings graph also accounts for aerosols. Notice that the black line includes aerosols and also increases faster after 1950.

    Because the aerosol cooling part is uncertain, we actually do not know what the net anthropogenic forcing looks like. There is no obvious argument that one can appeal to on what the expected warming should be. There is nothing obviously wrong if the anthropogenic warming is found to be almost linear in time.

    Perhaps the IPCC's estimates are wrong, but subtracting the standard NOAA AMO index to determine anthropogenic warming is equivalent to assuming that anthropogenic warming is steady before and after 1950. If it isn't, you'll never know because subtracting the AMO will just subtract signal after 1950.

    That was probably the source of the circular argument criticism from DS: "Tung and Zhou implicitly assumed that the anthropogenic warming rate is constant before and after 1950, and (surprise!) that's what they found. This led them to circularly blame about half of global warming on regional warming." It is important to note that the trend we were talking about is the trend of the Adjusted data, and not the presumed anthropogenic predictor.

    No, that wasn't the source of my criticism. Dana1981, KR and bouke correctly pointed out that your circular argument results from adding the AMO(t) regressor, which is correlated with surface temperatures after 1950 if you used the standard NOAA AMO index.

    Concerning Dana181's statement that most models use radiative forcing that show acceleration after 1970s, I just want to make the following observation. The models that adopted the kind of net radiative forcing that varies in time in approximately the same manner as the observed global mean temperature---with cooling in the 1970s and accelerated warming in the 1980s to 2000--were trying to simulate the observed warming using forced response alone (under ensemble average). So the net heating used has to have that time behavior otherwise the model simulation would not have been considered successful. Here we are questioning the assumption that the observed warming, including the accelerated warming in the later part of the 20th century, is mainly due to forced response to radiative heating.

    That's only true for inverse models of aerosol forcings. It's important to note that they're compared to independent forward calculations which are based on estimates of emissions and models of aerosol physics and chemistry.

    Dumb Scientist's claim of circular argument on our part consists of two parts. The first part deals with the linear regressor used, which is discussed here, and the second part deals with the AMO index used, which will be discussed in my second post. ... the choice of the AMO Index (whether the detrending should be point by point or by the global mean)...

    If you used the AMO index with global SST removed that KR mentioned, then your result is really interesting. I assumed that you used NOAA's linearly detrended N. Atlantic sea surface temperatures, in which case the anthropogenic warming would be hiding in your AMO(t) function. Again, that's because warming the globe also warms the N. Atlantic, and anthropogenic warming was faster after 1950.

    We have tried many other predictors with similar results. Using a nonlinear anthropogenic regressor would still yield an almost linear trend for the past 100 years, if the Residual is added back. And so this procedure is not circular.

    It's only circular if you used NOAA's standard detrended AMO index. If so, you added a regressor that's correlated with surface temperatures since 1950. Again, in that case the warming would be hiding in your AMO(t) function.

  19. Glenn Tamblyn at 11:48 AM on 12 April 2013
    Real Skepticism About the New Marcott 'Hockey Stick'

    chris


    If the 8.2 kYr event is that sharp and localised to the NA then you would not expect it to show up under Tamino's analysis. In fact it might be hard to detect at all in Marcott's data once you have looked at the global average. Although more of the proxies are in the NH, Marcott uses area weighting when calculating it, so any localised event is going to be filtered out in the average.

    Really what Tamino is showing is how likely a global spike is to be detected. Regional spikes, or regional spikes compensated for by opposite changes elsewhere will be even harder to detect if they can be detected at all.

    The point of the test is to evaluate whether a global change comparable to the current rise would be detected.

    And as an aside, how does a spike that intense and that short occur in the NA? If there was a significant collapse of the MOC, would it recover that quickly?

  20. The anthropogenic global warming rate: Is it steady for the last 100 years?

    In reply to Dana181: As I said in the last paragraph of the post, I will address the AMO issue in Part 2, which is to come.  Dumb Scientist's claim of circular argument on our part consists of two parts. The first part deals with the linear regressor used, which is discussed here,  and the second part deals with the AMO index used, which will be discussed in my second post.

    Concerning Dana181's statement that most models use radiative forcing that show acceleration after 1970s, I just want to make the following observation.  The models that adopted the kind of net radiative forcing that varies in time in approximately the same manner as the observed global mean temperature---with cooling in the 1970s and accelerated warming in the 1980s to 2000--were trying to simulate the observed warming using forced response alone (under ensemble average).  So the net heating used has to have that time behavior otherwise the model simulation would not have been considered successful.  Here we are questioning the assumption that the observed warming, including the accelerated warming in the later part of the 20th century, is mainly due to forced response to radiative heating.

  21. Real Skepticism About the New Marcott 'Hockey Stick'

    @21 Tom Curtis 
    I am not sure what to make of your response, on the one hand you seem to admit that the graphic in @21 does show the "broad ramifications" of climate change well, though you do not seem to engage with the ramifications and instead concentrate on details.

    All of your points do reduce to quibbles given that the graphic's intent is to give the broad ramifications from the graphic and from the comments regarding Marcott et al.'s findings by Mann, and others here at SkS.  Any real skeptics will go to Marcott et al. and read it and discussions directly, as referenced in the graphic, to get the detail.  

    By concentrating on 'uncertainty', and anything rather than on the ever increasing certainty of our predicament given the extreme rate of warming now under way, you seem to be criticising by saying we should worry about exactly the type of false skeptics that Dana's post above concerns.  They can go elsewhere; why bother with time wasters?  We need to move on to mitigation action; that should be clear from the graphic, especially when combined with the analysis by Stocker.

    We need to get policy-makers to understand where the science is at, and that does mean we need 'un-sciencey' broad brush identification of dangerous global risks.  The vital message we need to convey and that comes through so clearly, thanks to the scientific work, is that the current rate of warming is outside any seen in the Holocene.  It does not take a climate scientist to see through the quibbling nonsense of the critiques from Pielke Jr and co.

    The aim of Hagelaars' graphic, and of my attempt to present it in another form, is to convey the recent science to the public and decision-makers, to convey the enormous existential choice humanity faces regarding carbon emissions.  Somehow all of this weight of science has to get through more quickly and more fully to policy-makers, economists and most especially the public, so that they too can recognise the dangers to their own children's future.

  22. Models are unreliable

    @Sphearica, Dikran, CBDunkerson: You misunderstand me. I don't have a problem with climate models, I just thought this was the most appropriate place to ask my question. It is narrowly about SSIE, and I understand that it does not generalize to other output parameters of models. I guess I should have qualified my "it would be unwise to attribute too much predictive power to them" with "regarding SSIE".

    The issue I have is that I see scientists making predictions on the basis of their models. Those predictions will be used as the basis for policy, so they should be as accurate as possible. At the moment, a case can be made that simple statistics (or very simple models, if you like) are more accurate than these models regarding SSIE. I am missing that fact in the scientific discourse.

    I just read the leaked AR5 on the matter (paragraph 12.4.6.1), and it does not touch linear trends at all. The probable reason for that is that there are no papers on it, which in turn is because it is hard to present a new scientific result on just a simple linear trend. The result is that the AR5 is missing a piece of analysis that should have been present, IMHO.

    Btw, I read Neven's blog, so I know what these trends look like: ice free around 2020. A little earlier if you take PIOMAS volume, or an exponential fit, a little later if you take sea ice extent, or a linear fit. According to AR5, the best estimate for RCP4.5 is 2035–2065. There's an unexplained discrepancy there, IMHO.

    @Sphearica specifically: I am a bit surprised at your tone. I understand that you thought that I claimed that model problems regarding SSIE disqualified models altogether, which would indeed be a grandiose claim. The logical fallacy in this case would be Hasty Generalization. However, the things you mention, "false assumptions", "false dichotomy", "strawman", I really don't see in my post. I don't really think that using such strong words is helpful.

  23. Real Skepticism About the New Marcott 'Hockey Stick'

    hmmm...my HTML hasn't worked...oh well

  24. Real Skepticism About the New Marcott 'Hockey Stick'

    Hmm, I wonder if Tamino simply confused you with Tim Curtin (definitely persona not grata there)?

  25. Real Skepticism About the New Marcott 'Hockey Stick'

    Regarding the 8.2 kYr event: This isn't necessarily a fair test of the sort of analysis that Tamino has done. That's not to say that the issue is settled, and I personally think it's a shame that this interesting study has become factionalised.

    The astonishing thing about the 8.2 kYr event as captured in Greenland cores is how fast it was. Temperatures dropped and rose so fast that there was only a 70 year period in which isotope ratios were below the Holocene average.

    The event is captured in high resolution Greenland cores and is also seen in N. Atlantic proxies. However a detailed analysis of these indicates that the amplitude of the event in proxies drops quite rapidly away from the main event. There isn't much evidence that the 8.2 kYr event had much of a temperature impact further afield. Modelling of the event suggests that the cooling is expected to be localised mostly in the N. Atlantic with a warming in the Southern Oceans (as less heat is transported by the AMOC). The Kilimanjaro paper Tom linked to has an indirect proxy for the climate event around that time (fluoride levels in dust arising from partial lake drying). This doesn't necessarily arise from a local temperature change but might involve changes in hydrology arising from the event further afield. However if one wanted a more direct local temperature measure (e.g. delta 18O) the Kilimanjaro cores actually have a "warming" spike (or series of spikes) around the 8.2 kYr period.

    The Antarctic cores don't help to pin things down very much. The Dome C core shows a very long slow cooling that starts around 10,000 years ago and continues through the 8.2 kYr event with no particular change in rate (the temperature then starts to rise slowly soon after). I don't see how this can be taken as evidence of Antarctic cooling associated specifically with the 8.2 kYr event in Greenland. Interestingly, the Vostok core has a very marked positive temperature spike right at the time of the 8.2 kYr event. Not sure if this is considered to be an artefact, but it is represented by 3 or 4 points in the core time series that rise and then fall (I was going to plot this but someone has already done so).

    I would conclude:

    i) If the Marcott proxy set is a representative selection of global temperatures captured in proxies then (a) the large N. Atlantic signal is likely to be diluted (or negated) by the relatively smaller cooling elsewhere (or warming contributions from the S. hemisphere), so that the net global signal is small (e.g. relative to the globally averaged warming that has accrued over the last ~century). The fact that it hardly exists in the Marcott composite may simply reflect the fact that, globally averaged, there wasn't much of a temperature change (need to inspect each of the Marcott proxies to assess this).

    (ii) The extremely rapid signal in the ice cores approximates to around 70 years in total up and down. Such rapid temperature excursions (much faster than current warming) might be poorly captured in the proxies and/or smeared by the Marcott/Tamino methods (again would be helpful to inspect the individual proxies).

    What this boils down to is that the 8.2 kYr event isn’t a particularly fair test of the ability of the Marcott/Tamino methodologies to preserve a contemporary-style temperature excursion since (a) the 8.2 kYr event may not have involved much of a temperature change globally averaged and (b) it was much faster up and down compared to our (pretty fast) temperature rise.

  26. The anthropogenic global warming rate: Is it steady for the last 100 years?

    Two points regarding Tung and Zhou:

    1) If you add a factor to your multiple linear regression, even if it's just random noise, that factor is going to explain some of the trend. If you then put that factor into the non-anthropogenic group, you reduce the anthropogenic part. If the factor correlates with the trend, this effect becomes much stronger.  A look at the AMO graph shows that the AMO was at a minimum in 1910, and at a maximum in  2010, which means it is correlating with the trend.

    2) Let's take the following model:

    T(t) = a * distractor1(t) + b * distractor2(t) + c * distractor3(t) + d * Tdetrended(t) + e * trend(t) + residual(t)

    Tdetrended is equal to T(t) with the linear trend removed. trend(t) is the linear trend of T(t). I can already spell out what the best model is: a, b and c are 0, d and e are 1. Taken together, they perfectly reproduce T(t).

    This is, in essence, the model of Tung and Zhou.  Tdetrended is the AMO, which is defined as the detrended North Atlantic SST. Not quite the detrended world temperature record, but almost.  trend(t) is their anthropogenic factor, which they make a linear trend. Because these two factors can explain the temperatures so well, all the other factors become mere distractors.

    Note that it does not matter in linear regression which actual trend you use, as long as it is regular. If you would halve trend(t), the factor e would double, and you end up with the same model.

    To the degree that NA SST and world temperature correlates, the Tung and Zhou approach can only find linear anthropogenic forcings.
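
    A minimal numerical sketch of this point, with entirely synthetic data (the names mirror the toy model above; this is not Tung and Zhou's actual code or data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1900, 2011).astype(float)

    # Synthetic "observed" temperature: an accelerating anthropogenic signal,
    # a 65-year oscillation, and noise.
    T = 1e-4 * (t - 1900) ** 2 + 0.1 * np.sin(2 * np.pi * (t - 1900) / 65) \
        + rng.normal(0, 0.05, t.size)

    trend = np.polyval(np.polyfit(t, T, 1), t)   # linear trend of T
    Tdetrended = T - trend                       # "AMO-like" regressor: T minus its trend
    distractor = rng.normal(0, 1, t.size)        # an unrelated regressor

    X = np.column_stack([distractor, Tdetrended, trend, np.ones_like(t)])
    coefs, *_ = np.linalg.lstsq(X, T, rcond=None)
    print("coefficients (distractor, Tdetrended, trend, intercept):", np.round(coefs, 3))
    # Tdetrended and trend come out near 1 and the distractor near 0: the pair reproduces
    # T almost exactly, so the "anthropogenic" part identified this way is forced to be linear.
    ```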

     

  27. The anthropogenic global warming rate: Is it steady for the last 100 years?

    Also of note in regards to the AMO definition is Emanuel and Mann 2006, along with a discussion by M. Mann and G. Schmidt at RealClimate, concluding in part that:

    The linear detrending is intended to remove any potential forced signal, under the assumption that it is linear in time. However, if the forced signal is not linear, then this procedure can produce a false apparent ‘oscillation’ purely as an artifact of the aliasing of the non-linear secular trend (Trenberth and Shea, 2006). In fact, we have very strong indications for the 20th Century that the forcings over that period have not varied in a smooth, linear fashion.

    Because of the procedural difficulties in isolating the AMO signal in the instrumental record, the estimated attributes of the signal are quite sensitive to how it is defined. [...] It is therefore likely that the non-linear temporal history of anthropogenic tropical Atlantic warming has masqueraded as the 'AMO' in some studies.

    [Emphasis added]

    Again, subtracting signal from signal is going to give incorrect regression results for other components, and linear detrending is not a good match to the actual forcing history. 

  28. Antarctica is gaining ice

    "Is he correct in his statement that models predicted this increase in ice?" What he says is that models predict an increase in snowfall, and yes they do (going back to TAR I think). Warming in the southern ocean inevitably means more humid air moving into the interior of the Antarctica where it will fall as snow. The prediction was that this would increase the ice thickness in the interior (GRACE shows this happening). However, ice loss from the margins is so far outpacing that gain. These predictions were not about sea ice.

  29. The anthropogenic global warming rate: Is it steady for the last 100 years?

    I would agree with dana1981 that the standard AMO index, being a linearly detrended set of sea surface temperatures, is quite tied to global warming and incorporates some of that warming signal - subtracting signal from signal, and reducing the identified anthropogenic component. 

    Any regression against the AMO requires, of course, defining which AMO index you are discussing - the linearly detrended version, or one as suggested by Trenberth and Shea 2006 Atlantic hurricanes and natural variability in 2005, who recognized the incorporation of a global warming signal into the traditional definition:

    In particular, the recent warming of North Atlantic SSTs is known to be part of a global (taken here to be 60N to 60S) mean SST increase. While detrending the AMO series helps remove part of this signal, the SST changes are not simply linear and a linear trend has no physical meaning. To deal with purely Atlantic variability, it is highly desirable to remove the larger-scale global signal that is associated with global processes, and is thus related to global warming in recent decades. Accordingly, the global mean SST has been subtracted to derive a revised AMO index.
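
    For concreteness, a minimal sketch of the two index definitions being contrasted (the inputs are placeholders; real indices are computed from gridded SST products):

    ```python
    import numpy as np

    def amo_indices(natl_sst, global_sst, years):
        """Return (detrended AMO, revised AMO) from annual-mean SST series."""
        # Enfield et al.: linearly detrend the North Atlantic mean SST.
        linear_fit = np.polyval(np.polyfit(years, natl_sst, 1), years)
        amo_detrended = natl_sst - linear_fit
        # Trenberth & Shea 2006: subtract the global (60N-60S) mean SST instead, so a
        # non-linear global warming signal is not aliased into the "oscillation".
        amo_revised = natl_sst - global_sst
        return amo_detrended, amo_revised
    ```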

    Also of interest is Anderson et al 2012, Testing for the Possible Influence of Unknown Climate Forcings upon Global Temperature Increases from 1950 to 2000, which analyzes ocean heat content (OHC), sea surface temperatures, and forcings, and indicates from an energy conservation point of view that:

    ...less than 10% of the long-term historical increase in global-mean near-surface temperatures over the last half of the twentieth century could have been the result of internal climate variability.

    According to that work, there is insufficient energy available within the constraints of OHC to cause observed warming via natural variability and maintain observed OHC. 

  30. The anthropogenic global warming rate: Is it steady for the last 100 years?

    Some people are just blinded by science....

    Just love the dynamic definition function....

  31. Models are unreliable

    bouke,

    First, for reference, readers can look at the two figures that you reference. In each figure, the black line/square represents the NSIDC observed value, while the colored lines/squares represent the values resulting from various model ensembles. The first graph is of 5-year running mean September sea ice extent, while the second cross-references the 1979-2010 mean and trend (the observed value is in the center of the 2-sigma black box, and the average of all of the models is shown as an orange cross).

    Note that these figures focus on the September mean. They show nothing concerning the September minimum or the overall flux of ice over the course of the year.  They also show extent, not area or volume.  As such, the main question one must ask is "how does this short-lived model discrepancy affect overall global mean temperature within the model?"  In particular, it should be noted that most models overestimate the ice extent, especially in relation to the recent plummet, and so most models are most likely underestimating the expected feedback (with respect to Arctic ice extent) in recent years.

    [Figures 1 and 2 from Massonnet et al. were embedded here in the original comment.]

    Your question conflates any number of false assumptions.

    First, let me point out that the existence of this flaw in the models is first and foremost excellent evidence against the silly denier's misunderstanding that models are "tweaked" and "parameterized" to produce a particular result. This is not the case. The models are based on physics, and the only valid way (in most cases) to adjust the model is to refine the physics to bring the final result more in line with observations.

    With that said, one of the first issues with comparing models to observations is the quality of the observations. In the case of sea ice extent, there is a lot of wiggle room for how one computes the extent... how much of a "pixel" of sea area do you count as ice or open water? You can use this page to view some of the various methods all at once. There is a fair spread in the values of the various methods within observations. Why then would you not expect a spread in the model estimates?
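
    As an illustration of where that wiggle room comes from, here is a toy sketch of the usual extent-versus-area calculation (the 15% cutoff is a commonly used threshold; the grid values are invented):

    ```python
    import numpy as np

    def extent_and_area(concentration, cell_area_km2, threshold=0.15):
        """Sea-ice extent and area from a gridded concentration field (illustrative).

        Extent counts every cell at or above the threshold as fully ice covered;
        area weights each cell by its actual concentration. Different thresholds,
        sensors and algorithms produce the spread between products mentioned above.
        """
        ice = concentration >= threshold
        extent = np.sum(cell_area_km2 * ice)
        area = np.sum(cell_area_km2 * concentration * ice)
        return extent, area

    # Toy 3x3 grid of concentrations with equal 625 km^2 cells.
    conc = np.array([[0.90, 0.50, 0.10],
                     [0.80, 0.20, 0.00],
                     [1.00, 0.14, 0.60]])
    print(extent_and_area(conc, 625.0))
    ```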

    Secondly, and this is the more important point, the number of factors in the models is huge. You've picked one and said it doesn't work, so throw out the models and substitute the most unphysical thing you can imagine, a linear trend -- which in turn is the ultimate in parameterization. Such a false dichotomy is absurd. This is rather like complaining that some people die in car crashes, so until no one dies in a car crash, cars are too unsafe to drive and everyone should walk everywhere.

    Thirdly, you are missing the point of the climate models, and you present a strawman before your question ("it may be wiser to look at the linear trend"). Look at the linear trend for what? Sea ice prediction? Global mean temperature? For what reason? To evaluate the climate system, or to address policy issues?

    The climate models exist first and foremost to study the climate. Scientists will know they have Arctic sea ice modeled very well when they start to fall within that 10% range you discuss, but honestly, I don't think that will ever happen, because at the current rate the ice will be completely gone before they've figured out what is inaccurate within the models. Then they can just cut sea ice out of the picture.

    Climate models have another value in being one predictor of climate sensitivity. The linear trend that you suggest would need to be based on something. Observations? The flaw there is that we only have a very short time span to use as a basis. Paleo-studies? Very valuable, but also very constrained by the inference of "observations" through indirect means.

    In the end, however, what we do see is that the estimates for climate sensitivity from a very wide variety of models, observations, paleostudies and other methods all converge in the same 2˚C to 4.5˚C range. This tells us that, despite individual flaws within the models, the overall answers are pretty darn good.

    Lastly, I would point out that people have long been pointing to the plunge in Arctic ice, and the discrepancy in the models, in that the Arctic ice is failing far faster than anyone ever imagined possible. I do wish that the modelers would redirect their attentions to that sphere, and figure out why the models are so far off in that respect.

    But, as I've already said, there may be little value in that. It might be that heat transport under the ice is far greater than expected. It may be that additional factors such as black soot and altered weather patterns have a greater influence than expected. It may be that summer water runoff from North America and Siberia has a far greater influence on sea surface temperatures and salinity. It may be that the polar amplification of warming is even worse than expected (something that is difficult to perfectly measure, since most global mean temperature analyses are sparse in the polar regions).

    But whatever the reason... I'm pretty sure that the Arctic is going to be ice free for large parts of the summer and fall long before the modelers get around to understanding all of the physics behind the particular details of Arctic ocean currents and ice, and the Arctic will have become such an alien environment that that entire part of the models will more easily be replaced with a very, very simple linear trend... a flat line centered on zero.

     

  32. Real Skepticism About the New Marcott 'Hockey Stick'

    Ray - While Clive Best has some interesting work there, it isn't (IMO) complete.

    He presents data binned into 50-year blocks, rather than with 20-year linear interpolations (meaning a larger smoothing at 1/4 the spike length, reducing/spreading the hypothetical spikes). That is not directly comparable to either Marcott or Tamino - the difference in processing may make a considerable difference in results, and I don't see any evaluation of its effect.

    In addition he has (as far as I can see) only presented two realizations of the data; one with nominal dates and one with a randomization of 20% of proxy resolution (not the age uncertainties described in the supplemental data). Given the high variability of individual realizations, a fairly large number of runs will have to be evaluated - which is why Marcott et al did 1000 perturbations - to see the signal common to all realizations. Running a 5-point smoothing (as Best did on his first realization) is not equivalent to a Monte Carlo permutation of the uncertainties. 

    There are other differences in approach that I suspect don't matter much, such as date-shifting an entire proxy rather than random-walking the radiocarbon age control points, and the Marcott modeling of time uncertainty as a first-order autoregressive process. But those first two (additional smoothing and only two realizations) may account for much of the difference in results between Best and Tamino. 

    ---

    Again, though, since current warming will not be a 200-year spike, but rather take thousands of years to reverse, well covered by the Marcott resolution, such hypothetical arguments about Marcott et al processing are irrelevant to current conditions - there is zero evidence for, and considerable evidence against, warming akin to current conditions during the pre-Industrial Holocene, including in the Marcott data.

  33. The anthropogenic global warming rate: Is it steady for the last 100 years?

    I have two major issues with this response.

    My issue with point #1 is that although not all radiative forcing estimates are equal, they do all show an accelerated anthropogenic forcing sometime after 1950, so we should expect to see accelerated anthropogenic warming since, say, the 1970s.  If you don't, you have to explain why not, and saying 'the forcing is uncertain' isn't sufficient if all anthropogenic forcing estimates include an acceleration.

    My other issue is that the main point of Dumb Scientist's post doesn't seem to be addressed.  That's the criticism that AMO is associated with Atlantic sea surface temps, which themselves are warmed by the anthropogenic forcing.  So if you remove the AMO influence in your linear regression, you're removing some of the anthropogenic warming, and that may explain why it's apparently underestimated.  As far as I can tell, that main point isn't addressed in this response, unless I'm missing it.

  34. Antarctica is gaining ice

    Alternate interpretations of the mass changes driven by accumulation variations are given using results from atmospheric-model re-analysis and a parameterization based on 5% change in accumulation per degree of observed surface temperature change. A slow increase in snowfall with climate warming, consistent with model predictions, may be offsetting increased dynamic losses.

    The above was the tail end of the conclusion of Zwally's presentation (I believe it was his presentation and not his paper), but anyway, my question is this: Is he correct in his statement that models predicted this increase in ice?  I believe "Barry" even suggested that AR4 had similar predictions.

    I'm not asking if his paper or his observations are correct, just the above statement.  If it is correct, does this debunking need to be re-done?

  35. Models are unreliable

    The 'intermediate' article for this topic notes that Arctic Sea ice decline and global sea level rise models have proven to be very conservative in comparison to actual results.

    The reasons for this are also generally known... scientists have only recently begun to get a handle on the rate of ice loss from Greenland and Antarctica and thus these were excluded from sea level rise models. Now that we know that there is significant ice loss going on that will be factored in to the models and they will move closer to the observed values. Similarly, recent findings have shown that mechanical effects (i.e. ice breaking up under wind), bottom melt due to unexpected circulation patterns, ice export, and other significant factors were not included in the Arctic ice loss models.

    This does not mean that all climate models are similarly lacking important factors. The atmospheric temperature models have been largely on target. The PIOMAS Arctic ice volume model has been confirmed by recent satellite measurements. Et cetera.

    No model includes the wing flaps of every butterfly on the planet, but we have a pretty good idea of which models include major uncertainties and which do not. Also, contrary to what you say, it is very common for linear (and other) trends to be used when looking at various climate factors. Indeed, a linear trend is a model... just a very simple one. Search on Maslowski or Neven for examples of this pertaining specifically to Arctic sea ice. Their trend projections show sea ice extent dropping to near zero in the next few years.
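
    For example, the sort of simple trend extrapolation being referred to looks roughly like this (the volume numbers are illustrative stand-ins, not actual PIOMAS data):

    ```python
    import numpy as np

    # Illustrative September sea-ice volumes, in thousands of km^3 (not real data).
    years = np.arange(1995, 2013)
    volume = np.array([12.0, 11.5, 11.8, 11.0, 10.6, 10.2, 10.4, 9.8, 9.5, 9.0,
                       8.8, 8.2, 6.5, 7.0, 6.8, 4.6, 4.3, 3.7])

    slope, intercept = np.polyfit(years, volume, 1)
    print("linear fit reaches zero volume around %.0f" % (-intercept / slope))
    # An exponential fit, or extent rather than volume, shifts the crossing date; either
    # way this is a purely statistical extrapolation, not a physical projection.
    ```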

  36. Dikran Marsupial at 22:12 PM on 11 April 2013
    Models are unreliable

    @bouke The fact that the models are obviously missing some physics that is important for regional (i.e. Arctic) climate does not imply that they are not useful for global climate projection, for the simple reason that the missing physics is only relevant to a particular region, and hence this doesn't substantially affect the global climate.

    Scientists are perfectly happy to discuss the flaws in the models, however the reason they don't often explicitly say that "the models aren't good enough" is because it isn't an all-or-nothing issue.  There are some things the models predict well, and others that they don't.

    The main reason they use models rather than linear trends on the other hand has nothing to do with predictive power.  A physical model allows you to test the consequences of a set of assumptions of how the physics of climate works.  A linear trend is just a statistical model based on correlations and does not allow you to draw any causal conclusions.

    I doubt there is a fundamental discussion on the relative merits of the two approaches for the simple reason that they are two tools with different jobs.  Secondly a linear model is only appropriate for a situation where the (rate of change) of the forcings are approximately constant, which is unlikely to be true of centennial scale projections for which physical models are more appropriate.

  37. Models are unreliable

    I have a question on model reliability in predicting Arctic sea ice. I just read (parts of) "Constraining projections of summer Arctic sea ice" by Massonnet et al. Looking at their figure 1, it is clear that most CMIP5 models do not accurately model past September sea ice extent (SSIE). This is even more visible in their figure 2, which shows that 90% of the models either simulate an average SSIE in 1979-2010 outside of 2 sigma of the actual average, or a trend in SSIE outside of 2 sigma of the actual trend. If the physics of the models were perfect, only 10% of model runs would be expected to fall outside this window.

    To my mind, this shows that the models still miss essential physics, and it would be unwise to attribute too much predictive power to them. A linear trend may be a better predictor. Yet, I never hear a scientist say 'The models aren't good enough at the moment, so it may be wiser to look at the linear trend.'

    Why is that?

    Is there a fundamental discussion somewhere on the relative merits of a linear trend versus a model as a predictor of the future?

  38. Real Skepticism About the New Marcott 'Hockey Stick'

    michael sweet @36, when I first raised issues regarding Tamino's argument, I raised them at Open Mind.  They never got past the "awaiting moderation" stage.  I reposted the comment to be sure, with the same result.  I have drawn the conclusion that Tamino does not want my comments, for whatever reason, and given that, would not deign to comment at Open Mind again in future.

    Since you asked ...

  39. michael sweet at 20:07 PM on 11 April 2013
    Real Skepticism About the New Marcott 'Hockey Stick'

    Tom,

    Instead of debating yourself here, have you tried going to Tamino's blog and asking him what he thinks of the 8.2 ka event?  What did he say?   Tamino frequently replies to informed questions.

    As I read Tamino, he does not claim the analysis proves that no sudden spikes could occur.  He claims Marcott is strong evidence that such spikes did not occur.  This seems reasonable to me.

    How could all the proxies have missed such a spike?  As you point out, there is evidence of a drop in proxies at 8.2 ka.  Others challenge that as a global event.  Marcott may be correct that it was mostly a North Atlantic event, like the medieval warm period.  Mann's data shows that if you cherry pick your proxies you can find a lot of spurious, local trends.  KR lists several known local events. Where are the proxies for a sudden, global spike?  I do not see skeptics here providing a list of proxies showing a spike. They must have searched for such proxies without being able to find them.

    In addition to showing that Marcott might have missed a sudden spike, a physical mechanism for such a spike is required.  

  40. Real Skepticism About the New Marcott 'Hockey Stick'

    KR - if you applied the Marcott method to only the Atlantic proxies and failed to find an 8.2ka cooling spike, then this would suggest problems with the method in resolving spikes. The magnitude of the 8.2ka event's effect on global temperature is definitely debatable. The mechanism for sudden cooling in that region is there.

    However, I have little time for the argument that there could be 0.9 C warming spikes in the absence of any physical mechanism for creating one. I just share Tom's doubts that Marcott's (and Tamino's) work conclusively provides observational evidence to show that they didn't occur. I am also of the opinion that it is a side-show from the significance of the paper.

  41. Lars Karlsson at 17:07 PM on 11 April 2013
    Real Skepticism About the New Marcott 'Hockey Stick'

    Steve, KR and others,

    To the convinced "skeptic", all those "might" would turn into "must" (http://www.skepticalscience.com/marcott-hockey-stick-real-skepticism.html#93376), and the alternatives hiding behind them would be invisible.

     

     

  42. Real Skepticism About the New Marcott 'Hockey Stick'

    scaddenp  There are some calculations and comments by Clive Best on the Tamino-generated peaks that reach somewhat (but not entirely) different conclusions, which you might find interesting.

  43. Real Skepticism About the New Marcott 'Hockey Stick'

    scaddenp - There's a good set of information on the 8.2 Ka event from NOAA, which lists various paleotemperature proxies from Minnesota, Germany, Costa Rica, Greenland, and the North Atlantic as supporting evidence. 

    They also list paleo evidence for the end of the African Humid Period, drought during the Akkadian Empire, and the drought leading to the collapse of the Mayan Empire. Those were all regional events of only a few hundred years, but multiple sets of relevant paleo evidence is available for each. 

    I would therefore consider a larger scale global warming event to be something for which we would see paleo evidence - and we do not.

    Just to clarify - Is anyone actually claiming both that the Marcott data might miss such a spike (which I'm willing to postulate for the sake of the discussion), and that we wouldn't see such a global event in the rest of the data? There really is no evidence for such a spike, let alone a physical mechanism. 

    WRT Tamino's work, I have yet to see anyone put forth a convincing counter to his work regarding a hypothetical Holocene 0.9 C global warming event, and his conclusion that it would be visible in the Marcott data. There has been a lot of discussion and/or complaints with regard to sampling effects and averaging, but - IMO - there needs to be math, or it didn't happen.

  44. It's not bad

    Mark @344 - first off, almost all of the estimates in the Tol paper you reference are from the most conservative economists doing climate research (Nordhaus, Tol, Mendelsohn, etc.), so the paper almost certainly underestimates the economic damage from climate change (probably by a very large amount, in my opinion).  It's really interesting that it references Chris Hope, who now says that the social cost of carbon is in the ballpark of $150 per tonne of CO2, which is 1-2 orders of magnitude higher than Tol believes.

    Despite these underestimates, the paper still concludes that the net impact on GDP at 2.5°C will be negative, and we're already committed to about 1.5°C warming and still rising fast.  So I'm not really sure what your point is.

  45. Climate's changed before

    Mark @345 - there's nothing directly wrong with 'the skeptic argument' as articulated by Lindzen here.  It's the implication of the statement where the problem lies.  Saying 'climate has changed naturally in the past' is like saying 'humans breathe oxygen'.  No duh.  Everybody knows that.  So what's the point in saying it?  The answer to that question is pretty clear.

  46. It's not bad

    Oops. I forgot to include the link:

    The Economic Effects of Climate Change

  47. It's not bad

    The economic impacts of climate change may be catastrophic, while there have been very few benefits projected at all.

    What about Figure 1 of this paper?

  48. Climate's changed before

    What was wrong in what Richard Lindzen wrote?

  49. Making Sense of Sensitivity … and Keeping It in Perspective

    "Notice that it is for all intents and purposes, in that small range, linear." You're right, but the global climate is so complex. It's still not sitting well with me so I decided to try and derive it myself. Take the following with a grain of salt. I might have done something wrong or interpreted something wrong.

    Anyway, using the energy balance...

    Power in = (1-α)πR²F₀

    Power out = 4πR²ϵσT⁴, where ϵ is the emissivity

    Thus, F = ϵσT⁴, where F = (1-α)F₀/4

    Thus, T = [F/(ϵσ)]^(1/4)

    The total derivative is: dT/dF = ∂T/∂F + (∂T/∂ϵ)*(dϵ/dF)

    substituting and simplifying:

    dT/dF = T/(4F) - (T/(4ϵ))*(dϵ/dF), where the term (T/(4ϵ))*(dϵ/dF) is from feedbacks.

    Thus, the linear approximation for climate sensitivity is k ≈ 4*dT/dF ≈ T/F.

    With this linear approximation we're assuming that for small changes in temp the term (T/(4ϵ))*(dϵ/dF) is almost constant or negligible. Feedbacks aren't negligible, so we're arguing it's almost constant. Thus, we're assuming

    dϵ/dF ≈ C*ϵ/T, where C is a constant. I still don't like this approximation. I guess I have to do more reading. Again I might have done something wrong or interpreted something wrong, so take the stuff above with a grain of salt. Thanks
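
    A quick symbolic check of the no-feedback step in the derivation above (a sketch; it verifies only that dT/dF = T/(4F) when the emissivity is held fixed):

    ```python
    import sympy as sp

    F, eps, sigma = sp.symbols("F epsilon sigma", positive=True)

    # Energy balance with emissivity held fixed: F = eps * sigma * T**4
    T = (F / (eps * sigma)) ** sp.Rational(1, 4)

    dT_dF = sp.diff(T, F)
    print(sp.simplify(dT_dF - T / (4 * F)))   # prints 0, i.e. dT/dF = T/(4F) with no feedbacks
    ```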

  50. Antarctic Octopus Living Testament To Global Warming

    When I was reading Twenty Thousand Leagues Under the Sea as a child, I thought "Interesting, but completely out-of-reality nonsense", because Jules Verne did not know that Antarctica was a continent rather than an ice shelf like the Arctic. BTW, Verne could have been ignorant, because even back then (mid-to-late 19th century) the adventurers could have known about Antarctic mountains.
    Anyway, that has now changed: the Nautilus could have swum under the Antarctic ice just like the octopuses did, if the action had taken place in the Eemian...
