Recent Comments

Comments 125051 to 125100:

  1. On the reliability of the U.S. Surface Temperature Record
    jpark, "if those temps/trends are slightly higher than they should be and so, in reality, only slightly higher than older temp station data, or even older historic data then, yes, we have global warming but not very much" Exactly, but it ain't so. In principle it might be a reasonable concern; in practice it does not stand up to an in-depth analysis. Remember, people working on it check the readings for possible biases/errors; something may slip through the check but, well, just some. And unless you believe in the bad intentions of the researchers, errors and biases (plural) tend to average out. Don't confuse absolute temperature and anomaly. The former is more intuitive, given that it's what we feel. The latter has the advantage of being more stable and correlated over long distances and times, and so more easily shows an underlying trend, which is what we are interested in.
  2. The chaos of confusing the concepts
    re #22 hmmm...this is why I suggested in post #12 that we have to be careful what we mean when we use the term "chaos" in any particular instance. We end up arguing about the meaning of a word or concept rather than about the phenomenon itself. Meltwater-induced suppression of the Thermohaline Circulation happened many times in the past. So one can hardly say it can't happen! Of course the boundary conditions are different now (interglacial rather than the many instances identified during glacial periods). I (and everyone here, I think) was using it as an example though. It's not really chaotic behaviour (it has its chaotic elements on a microscale); it's really a stochastic phenomenon that is essentially predictable, if not in relation to the precise timing of events, at least as a phenomenon that is a definite and predictable consequence of particular conditions. So, for example, it would likely be possible to model N. hemisphere ice sheet dynamics and ocean circulation during the last glacial period to reproduce the Dansgaard-Oeschger (D-O) events, within an understanding of the conditions under which these events occurred (not sure if this is yet understood very well). Where this differs from chaotic phenomena (as I understand it) is largely the independence with respect to initial conditions. We wouldn't know exactly when a D-O event might occur, but we would be able to predict that, independent of initial conditions, once the important factors tended towards threshold values, a D-O event would have a high probability of occurring. Two examples: (i) Knocking down a wall with one of those splendid balls on chains swung by a crane. We don't know exactly when the wall will tumble, or exactly the pattern of its disintegration (one might consider the latter to be chaotic). However the event (the wall falling down) is predictable (if not precisely defined, temporally speaking), given that we understand the forcings that act in this situation, and is independent of initial conditions. (ii) In a warming world we expect coastal flooding events that might have 100-year probability (say) to occur more frequently, as a result of rising sea levels combined with more extreme weather events as sea surface temperatures rise etc. Now, however chaotic the weather is (chaos), the likelihood of an increased frequency of coastal flooding events is predictable. We don't know when any of these events will occur (stochastic), but our prediction of an increase of events in a warming world is likely to be robust, and increasingly so as our knowledge of the climate system increases.
  3. On the reliability of the U.S. Surface Temperature Record
    Albatross - thanks for the links - I will read. Carefully.
  4. On the reliability of the U.S. Surface Temperature Record
    Albatross - I think an illustration on temp anomaly might be a good idea. Ok, I will give this one more go because I think you guys might be able to give me an answer and you haven't yet (apart from Kforestkat). Here is the problem: the world is getting hotter - I think we all agree - but last year we found out that CRU scientists made a hash of handling the data. However kindly you read the leaked emails, you realise this was not good science. Then Copenhagen fails. Then this week Pachauri gets it in the neck for getting the Himalayan glacier date wrong and putting pure speculation in the IPCC report (apparently it was not the only error), and an error that had significant financial consequences. So when Watts puts out a report showing images of severely compromised temp stations and Menne replies with 'trends', people like me say "er, so what? What does a trend mean? I want to know whether the temp stations work or whether they are being lovingly heated by a/c units." Because if those temps/trends are slightly higher than they should be and so, in reality, only slightly higher than older temp station data, or even older historic data then, yes, we have global warming but not very much - which is what the report at Science Daily says. But of course I may be missing something...
  5. On the reliability of the U.S. Surface Temperature Record
    Jpark "This means actual temps do matter and trends in this particular instance don't," The actual temperatures form part of a long-term trend. You can't have a trend in a time series without either an increasing or a decreasing time series of temperatures. Moreover, those temperatures do not have to increase monotonically to get a positive trend, as illustrated by the surface air temperature records. The long-term temperature trend (globally) is about 1.7C warming per century, and yes, that is actually something to worry about. Regarding "why we have not warmed as much as we should have": you are probably referring to the work of Schwartz that is about to be published in J. Climate. Perhaps John can again refute the work of Schwartz et al. (Schwartz has done this before). Jpark, be wary of sites like WUWT; their goal is to confuse. Really, it is just that simple, and it is cleverly done under the guise of "science" and the pursuit of "truth". That is what makes the misinformation there seem so compelling. The long-term observed warming trend is consistent with the projections made by the IPCC. Look here: http://tamino.wordpress.com/2010/01/13/models-2/ and here http://tamino.wordpress.com/2009/12/07/riddle-me-this/ and here http://tamino.wordpress.com/2008/01/31/you-bet/ I really encourage you to actually read the above articles carefully.
  6. Berényi Péter at 09:36 AM on 24 January 2010
    Skeptical Science now an iPhone app
    Back to the original claim. It is getting pretty cool in the Arctic (-35°C, -31°F) http://ocean.dmi.dk/arctic/meant80n.uk.php Still, it is not terribly hot elsewhere around it. I've just walked my dog in the park (lat=47.4717672, lon=19.0426755) and he was anxious to get back which is rather unusual. It is -10°C (14°F) here right now. US http://www.wunderground.com/US/Region/US/2xpxTemperature.html Alaska http://www.wunderground.com/US/Region/Alaska/2xpxTemperature.html Canada http://www.wunderground.com/global/Region/CN/2xpxTemperature.html Europe http://www.wunderground.com/global/Region/EU/2xpxTemperature.html
  7. On the reliability of the U.S. Surface Temperature Record
    Reading some of the posts here is incredibly frustrating, because it clearly demonstrates the stunning success Watts et al. have had in confusing and brainwashing people (even well educated professionals, it seems) to the point where it is impossible to explain a simple concept like a temperature anomaly to them. I was going to chime in and try to dispel some of the confusion, but others have repeatedly and clearly explained the facts only for those facts to repeatedly fall upon deaf ears. What I will add is that the Menne et al. study needed to be done and their results are incredibly important. Their results also represent the final nail in the coffin for the complaints from Watts et al. as to the validity of the US SAT record. There is simply no doubting the validity of the SAT record anymore, but I doubt this study will discourage the contrarians and denialists from perpetuating and rehashing old myths. Prof Mandia, re #35, I too once tried to explain the science with the folks at WUWT, and it was a waste of time. Watts knows his audience and plays to that; he is very good at telling them what they want to hear. He is also guilty of confirmation bias and of ignoring the inconvenient facts regarding AGW. Anyhow, I do hope that some of the misguided posters here represent the views of people who are in the minority, because if they represent a much larger segment of the populace then we have a serious problem on our hands in terms of communicating the science. Why is it so much easier to disseminate misinformation than the basic facts? Maybe someone with some time can show some schematics illustrating how one obtains anomaly values from a temperature record, and why systematic bias does not affect the trend? A picture is oftentimes far more convincing and informative than even the most carefully chosen words. PS: Actually, those in denial are having a bad decade -- 2009 second warmest year on record globally, first decade of the noughts warmest on record globally, warmest year on record in the S. Hemisphere (lots of heat stored in the vast southern oceans), continuing acceleration of the rate of loss of summer Arctic sea ice and glaciers, the PIG glacier in the WAIS found to have exceeded its tipping point, and, for what it is worth, January 2010 warmest lower-troposphere temps in the satellite record despite extremely cold weather in Eurasia and portions of N. America. The list goes on.....
  8. Why does CO2 lag temperature?
    re #39: thingadonta, the evidence tends not to support the interpretation of the inception of polar ice sheets in Antarctica and Greenland (see my post #38) that may have been the dominant theory in your uni days! In the intervening 20 or so years, that theory has been tested both for the N. hemisphere polar ice cap (see post #38) and the Antarctic ice cap (see following). In each case the evidence indicates that glaciations only occurred when CO2 levels dropped below thresholds that forced sufficient global cooling (these are thought to be of the order of ~700 ppm for Antarctic glaciations and ~300 ppm for Greenland glaciations). It's possible that ocean circulation changes made some contribution (as likely did earth orbital properties). But greenhouse gas concentration seems to be the major player: (i) CO2 changes and temperature changes during the Phanerozoic (last 500-ish million years). The problem with the idea that changes in atmospheric CO2 concentrations in deep time are the response to earth temperature change is that these CO2 variations are simply too large. We can determine, for example, that during ice age glacial-interglacial-glacial transitions, atmospheric CO2 levels cycle rather faithfully between ~270 ppm (interglacial) and ~190 ppm (glacial). These are slow (~5000 year) transitions (so CO2 re-partitioning between ocean/land and atmosphere will have come close to equilibrium), involving global temperature changes of around 5-6 °C. Therefore temperature-induced CO2 rises/falls are of the order of 13-15 ppm per °C of warming/cooling. Since the entire Phanerozoic temperature variation was likely no more than 10 °C overall, we don't expect to see temperature-induced variation in CO2 levels of more than 150 ppm. However the CO2 changes observed in the record are much larger than this. The slow fall of atmospheric CO2 from 1000-1500 ppm during the mid to late Eocene, to around 700 ppm at the Eocene-Oligocene boundary around 33.5 MYA, and further to ~300 ppm and below by around 24 MYA (and ever since until now), is simply incompatible with temperature-induced changes in atmospheric CO2. (ii) Timing. The steady long-term cooling from the Eocene maximum global temperature at around 50 MYA began far in advance of any ocean circulation change resulting from isolation of Antarctica and possible effects on ocean currents. And the opening up of the Tasmanian gateway preceded the Eocene-Oligocene transition that heralded major Antarctic polar ice sheet growth by ~2 million years [*]. The steady cooling right through the middle-late Eocene to the onset of Antarctic glaciations ~33.5 MYA is associated with a long slow drawdown of atmospheric CO2 from 1500 ppm or greater to ~700 ppm [**]. As indicated in (i), the extremely large drops in atmospheric CO2 concentrations are incompatible with the idea of temperature-induced repartitioning of CO2 between oceans and atmosphere. Most likely the slow drop in atmospheric CO2 was due to enhanced weathering (possibly a result of the drifting of the highly weatherable volcanic Deccan Traps into the equatorial humid belt as the Indian subcontinent shuddered remorselessly northwards for its eventual intimate rendezvous with Asia [***]). (iii) Attribution. There are a number of studies that indicate that the ocean circulation effects associated with the isolation of the Antarctic continent are minor contributions compared to the effects of reduced-greenhouse-induced global cooling. Some of these are: a. The temperature changes associated with the cooling during the Eocene-Oligocene transition ~33.5 MYA and the onset of build-up of a permanent ice cap in Antarctica were global, and poorly compatible with the regional effects associated with changes in ocean gateways [****]. b. As well as the timing mismatch in (ii), a number of studies have reconstructed and/or modelled the effects of ocean circulation changes involving isolation of the Antarctic continent, and concluded that the ocean circulation changes are simply not able to produce the localized cooling required for the onset of Antarctic glaciations. This can only have occurred when atmospheric greenhouse gas levels dropped below thresholds that maintained the earth in a state without a significant permanent Antarctic ice cap [*****].
    [*] Stickley, C. E. et al. (2004) Timing and nature of the deepening of the Tasmanian Gateway, Paleoceanography 19, PA4027. http://web.ics.purdue.edu/~huberm/STICKLEY.HUBER.PDF
    [**] P. N. Pearson et al. (2009) Atmospheric carbon dioxide through the Eocene-Oligocene transition, Nature 461, 1110-1113. http://www.ncdc.noaa.gov/paleo/pubs/pearson2009/pearson2009.html http://www.nature.com/nature/journal/v461/n7267/abs/nature08447.html ; M. Pagani et al. (2005) Marked decline in atmospheric CO2 concentrations during the Paleogene, Science 309, 600-603. http://earth.geology.yale.edu/~mp364/data/Pagani.Science.2005.pdf
    [***] D. V. Kent and G. Muttoni (2008) Equatorial convergence of India and early Cenozoic climate trends, Proc. Natl. Acad. Sci. USA 105, 16065-16070. http://www.pnas.org/content/105/42/16065.abstract
    [****] Z. Liu et al. (2009) Global cooling during the Eocene-Oligocene climate transition, Science 323, 1187-1190. http://www.sciencemag.org/cgi/content/abstract/323/5918/1187 ; E. Thomas (2008) Descent into the Icehouse, Geology 36, 191-192.
    [*****] R. M. DeConto et al. (2003) Rapid Cenozoic glaciations of Antarctica induced by declining atmospheric CO2, Nature 421, 245-249. http://www.geo.umass.edu/faculty/deconto/deconto_nature.pdf ; Huber, M. et al. (2004) Eocene circulation of the Southern Ocean: Was Antarctica kept warm by subtropical waters? Paleoceanography 19, PA4026. http://doos.misu.su.se/pap/paleo2004.pdf ; M. Huber and D. Nof (2006) The ocean circulation in the southern hemisphere and its climatic impacts in the Eocene, Palaeogeog., Palaeoclim., Palaeoecol. 231, 9-28. http://web.ics.purdue.edu/~huberm/huber+nof.pdf
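A quick arithmetic check of the glacial-interglacial figure used in point (i) of the comment above, using only the numbers quoted there:

$$\frac{\Delta \mathrm{CO_2}}{\Delta T} \approx \frac{270\ \mathrm{ppm} - 190\ \mathrm{ppm}}{5\ \mathrm{to}\ 6\ ^{\circ}\mathrm{C}} \approx 13\ \mathrm{to}\ 16\ \mathrm{ppm\ per\ {}^{\circ}C},$$

so a Phanerozoic temperature range of no more than about 10 °C would imply temperature-driven CO2 variations of at most roughly 130-160 ppm, consistent with the "no more than 150 ppm" bound stated in the comment.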
  9. On the reliability of the U.S. Surface Temperature Record
    Obsessively repeating the same concept is the standard tool people in the media use to make it seem true. Anthony Watts is pretty good at it, and the fact that, even if true, it does not have any practical impact has no importance for him and his fellows. They'll stubbornly keep repeating "Poor siting! Poor siting!" ad infinitum.
  10. On the reliability of the U.S. Surface Temperature Record
    Thanks Prof! Again very helpful, but to me a trend is not at issue. I am a newbie here and, like many, this area of science has only become a really hot (pardon the pun) topic for me since climategate. Before then I accepted the general consensus. Most in the blogosphere seem to believe in global warming - it is the extent and the 'unprecedented' nature of the warming that I think (from what I have read on blogs so far) is the issue, which is of course linked to the anthropogenic part. This means actual temps do matter and trends in this particular instance don't, to me at any rate. If it is getting a bit hotter then, well, that is not too bad; climate does tend to do that. But if it is getting amazingly hotter then, of course, we are all going to be in big trouble. So are the temps showing something dangerous or something not so dangerous? I read "Why Hasn't Earth Warmed as Much as Expected? New Report on Climate Change Explores the Reasons" from Science Daily. I think you can understand my layman's puzzlement. http://www.sciencedaily.com/releases/2010/01/100119112050.htm Apologies if that is off topic.
  11. On the reliability of the U.S. Surface Temperature Record
    As others have said the bias does not affect the trend. "A rising tide lifts all boats." Increased GHGs are the rising tide. Watts refuses to admit the obvious. In fact, even today he has the following post: http://wattsupwiththat.com/2010/01/23/sanity-check-2008-2009-were-the-coolest-years-since-1998-in-the-usa/ He is using US data to try to cast doubt on global data. He knows better but loves the attention from his misguided followers. I used to post there as the loyal opposition but it ended up being a huge waste of time.
  12. The chaos of confusing the concepts
    Jacob Bock Axelsen: First of all, a comment on the thermohaline circulation. You said: "However, the engine of THC is surface cooling in the Arctic which global warming might turn off." But there are two factors driving the ocean currents. One is temperature and the other is salinity. Both affect density in a non-linear way. The reason why it is potentially "a switch" is that as the oceans warm, as they are doing at the moment, there is increased melt from Arctic ice and the Greenland ice sheet. If this melt rate increases to a certain point - no one is exactly sure what that point is - then the low salinity will outweigh the cold, this water will stop sinking, and the "conveyor belt" direction will change. Therefore it is a very complex situation, and one where increased temperatures will eventually (possibly) lead to a colder northern hemisphere and a refreeze of the Arctic, with the consequence (a positive feedback) of increasing albedo. More on your other points later.
  13. The chaos of confusing the concepts
    Chris, you wrote in #17, "the THC could slow down or stop if sufficient freshwater from Arctic ice melt were to flood the Arctic ocean." Mostly anything can happen if sufficient conditions are present, but the interesting question is whether this is likely to happen or not. Afaik, current understanding says this is not likely to happen at all - if ever. "At some point we might well understand this process well enough that it might cease even to be considered “stochastic”." I don't think so. "Stochastic" is just another way to label a non-linear or "random" system, a system we have previously not been able to model or control properly. Chaos theory is in principle a theory about non-linear systems and how to treat them as non-linear. Before we had the mathematical tools to model and understand such systems (which wasn't all that long ago), engineers were busy making sure any non-linear system was modelled within a certain local region that could be approximated as linear; outside this approximated region the behaviour cannot be guaranteed. The mathematical tool for doing these exercises is known as 'differential equations', and one requirement for this tool to be "trivial" to use is linearity - non-linear differential equations are extremely hard to solve. Stochastic processes lack the linear properties, thus they are always hard problems to solve with differential equations. Only a few non-linear processes of special interest are understood this way, like fluid dynamics, and even these are solved with numerical methods. However, new mathematical tools and theories have helped us to better understand systems outside the approximated linear boundaries, but such understanding won't make them less stochastic; rather, it will help us to appreciate even more the very special behaviour of these systems.
  14. On the reliability of the U.S. Surface Temperature Record
    Gordon, many thanks. I think what you say is wise. But do look at the Watts report - lots of stations, lots of pictures (worked for Al, it might just work for Ant). http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf
  15. On the reliability of the U.S. Surface Temperature Record
    jpark, I doubt that Watts et al photographed all of the stations, including well sited ones. This study shows that both well sited and poorly sited stations show the same basic trend, and that the bias in poorly sited stations is to cooler temperatures. What could be clearer than that? You haven't seen a bad week in AGW yet. You may well see many in your lifetime. I hope we don't.
  16. The chaos of confusing the concepts
    Errata for comment #13 (and #20): "... less to weather being non-chaotic and more to weather being affected by a ..." The above is an editing confusion of mine. I usually edit text a lot before I make a post. In this case I was considering whether to use the word "chaotic" OR "non-linear", and apparently it all got mixed up in the final edit. :( There is also some editing confusion in post #20. For instance: "because of the presence of a forcing from linearity in that make Ned able to do the predict as he did." was intended to be: "because of the presence of a forcing from linearity in the system that one is able to do a prediction as Ned did".
  17. On the reliability of the U.S. Surface Temperature Record
    I'm not a scientist, but this topic fascinates me. I do have 30 years of professional experience to help guide me. When a colleague speaks in abruptly dismissive terms, claiming something is "useless," "trash," or "not really worth serious discussion" I pay attention, but my guard goes up. My years of experience have taught me to listen, but be skeptical. I have rarely found that such a tone is warranted. Here again, I appreciate the careful explanations by people who have responded. I am not a blind believer, but I do have confidence that serious professionals are sincere and careful in their effort, and are correct more often than not. I think that the argument that temperatures are rising is well backed by the loss of sea ice extent, and especially the rapid loss of multi-year ice in the past couple years. The next few years may be telling.
  18. On the reliability of the U.S. Surface Temperature Record
    Hi Doug! Many thanks, nice explanation. - I do understand the paper but still feel it does not, like a lot of posts here, answer the quite basic Watts question of how accurate the stations are. http://wattsupwiththat.com/2010/01/23/quote-if-the-week-27/#more-15561 And a picture tells a thousand words - how do you convince people of global, or even just US, warming when you get to see a weather station next to an a/c unit? It has been a bad week for AGW.
  19. On the reliability of the U.S. Surface Temperature Record
    jpark at 06:37 AM on 24 January, 2010 It's not actually a debate, rather repeated attempts at explanation. Try this one, at home if you like, but it's so simple you'll probably find words do the job: Set up a thermometer in a large room where the temperature is steady. Let the thermometer stabilize at ambient temperature. Now, turn on a small lamp next to the thermometer, close enough to warm it a bit. You'll see an immediate bias in the reading given by the thermometer; the reading will be higher than the ambient temperature in the room. Let the thermometer stabilize again. Now slowly raise the temperature of the room. The thermometer will still register the increase in the temperature of the room. We've learned that bias does not make it impossible to extract a trend in temperature. It's really -that- simple. Not so hard, really, but easy to lose in a detailed technical explanation. To me it seems what we have here, after all the heat and light is stripped away, is the famous "failure to communicate".
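A minimal numerical version of the thermometer-and-lamp thought experiment above (the temperatures, the bias value, and the trend are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2010)

# Synthetic "true" series: 0.02 C/yr warming plus weather noise.
true_temps = 14.0 + 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

bias = 5.0                          # a constant siting bias, e.g. a nearby heat source
biased_temps = true_temps + bias    # what the poorly sited instrument reports

# The least-squares slope in C/yr is unchanged by the constant offset.
true_slope = np.polyfit(years, true_temps, 1)[0]
biased_slope = np.polyfit(years, biased_temps, 1)[0]
print(true_slope, biased_slope)     # the two slopes agree to floating-point precision
```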
  20. michaelkourlas at 06:38 AM on 24 January 2010
    It's the sun
    @Tom Dayton Thanks, I hadn't seen that.
  21. On the reliability of the U.S. Surface Temperature Record
    Interesting debate. So we are talking about measuring trends vs actual data, yes? But this does not really answer the Watts paper http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf To me Kforestcat, along with the Watts paper, makes sense. I can't see the point of the Menne exercise - why bother with a trend when you could measure how good the station was at measuring temperature? Why not put the army of volunteers to good use - how long would it take?
  22. It's the sun
    michaelkourlas, that 2007 paper by Friis-Christensen and Svensmark is old news. See Svensmark and Friis-Christensen rebut Lockwood’s solar paper.
  23. michaelkourlas at 06:31 AM on 24 January 2010
    Climategate CRU emails suggest conspiracy
    Regarding "hide the decline": If it is true that tree rings are definitely inaccurate after 1960 (having compared them with the instrumental temperature record), shouldn't we question the entire data set, as that might be flawed too?
    Response: This is a good question and is explored in Tree-ring proxies and the divergence problem. In short, tree-ring proxies show good agreement with other proxies before 1960 and also show good agreement with tree-ring proxies that don't show divergence (eg - at lower latitudes). This indicates divergence is a purely recent phenomenon (and hints that there's a good chance it's anthropogenic in cause).
  24. michaelkourlas at 06:28 AM on 24 January 2010
    It's the sun
    This paper (http://www.spacecenter.dk/publications/scientific-report-series/Scient_No._3.pdf/view), published in 2007 by Eigil Friis-Christensen and Henrik Svensmark at the Danish National Space Center, is a response to Lockwood and Fröhlich's paper disputing the correlation between solar activity and land surface temperature. This new paper discusses the correlation between cosmic rays (solar activity) and sea surface temperature/atmospheric temperature. In both cases there is a clear correlation. While Lockwood and Fröhlich are correct in saying there is a divergence between solar activity and land surface temperature, the correlation remains true for two other temperature data sets (sea surface temp. and tropospheric temp.) Thus, one must question the validity of the land surface measurements, and admit the possibility that the sun may be playing a major role in current global warming.
  25. The chaos of confusing the concepts
    @chris, #15. "Surely Ned is basing his prediction on "the system itself"." Sure, I agree with that, and I see I was a bit unclear with my point, my apologies for that. My point was to try to make a distinction, and to separate, linear and non-linear elements in a system, and to clarify that it is because of the presence of a forcing from linearity in the system that one is able to make a prediction as Ned did. I didn't mean to say this is not part of the system, but to say it can be seen as separate from the non-linearity of the system. I sometimes notice that some people seem to believe that there is a proportional, linear relation between CO2 levels and global mean temperature. This relation is, though, not so trivial, as the greenhouse effect from CO2 is said to be a non-linear function of the concentration. In other words, the contributing effect from a linear increase of CO2 will not change as rapidly as temperature; therefore, unless we are working with a system that locally can be said to be linear, if both CO2 and temperature increase linearly with respect to each other then I would suspect there to be yet another factor in the equation.
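For reference only (this is a standard textbook relation, not something cited in the comment above): the non-linearity mentioned here is commonly expressed with the simplified logarithmic forcing formula of Myhre et al. (1998),

$$\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},$$

where $C$ is the CO2 concentration and $C_0$ a reference concentration, so equal additions of CO2 produce successively smaller additions of radiative forcing.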
  26. Jacob Bock Axelsen at 06:01 AM on 24 January 2010
    The chaos of confusing the concepts
    @stevecarsonr and Marcel Bökstedt. Thanks for your questions, which I will try to comment on. Please bear with me for trying to answer by arguing from the variables of the Rayleigh number. Consider two plates (hot and cold) enclosing a convecting fluid: http://en.wikipedia.org/wiki/File:Convection-snapshot.gif http://en.wikipedia.org/wiki/B%C3%A9nard_cells This is the Rayleigh number: Ra = gravity * expansion coefficient * system size * temperature gradient / (viscosity * conductivity * diffusivity) = g*b*D^3*dT/(v*a*k). In the Lorenz attractor Ra must be above the threshold Ra = 13.926 to exhibit any chaos, and below that the dynamics are predictable. For instance, my plots are for Lorenz's own choice of Ra = 28. The idea that chaos is prevented by boundedness can then be understood: just decrease D or dT sufficiently to end up below the threshold. I was using the 'leash' analogy differently: the mean global temperature is determined as a steady state of huge energy fluxes. It is suspended by the Sun pulling up and the heat loss to space pulling down. To exhibit chaos you need to be able to delay heat transport (advection) through fluid dynamics, and with El Niño being the largest phenomenon of relevance we are still far away from fully developed climate chaos. Notice that sea levels increase on the order of centimeters during an El Niño - this is the small expansion coefficient of water. Make b small and you move away from chaos. The thermohaline circulation (THC) is a true convection roll resulting from density change. However, the engine of THC is surface cooling in the Arctic which global warming might turn off. If dT cannot drive even laminar currents, then we have a smaller dT and a lesser probability of chaos. I mentioned aerosols in the post, but they are much more transient than CO2. Much like airborne water, aerosols are argued to be fighting negative feedbacks: cloud seeding, gravity and precipitation. My understanding is that clouds more or less cancel out in climate models. If aerosols cool, they lessen dT for possible oceanic chaos. Interestingly, dust deposition on glaciers is hypothesized to be part of the ice age trigger: http://forecast.uoa.gr/conferences/iamas/10july/4b/69_smn_dst_dam_iamas_200707.pdf I hope you find these comments useful.
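As a toy illustration of the threshold behaviour discussed above, here is a minimal sketch (my own choice of step size, initial conditions, and variable names) that integrates the classic Lorenz equations at the value 28 quoted in the comment, written as the standard Lorenz parameter r, and shows two nearly identical starting states diverging:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Advance the Lorenz system one Euler step; r plays the role of the scaled Rayleigh number."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (r - z) - y
    dzdt = x * y - b * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Two trajectories starting a hair's breadth apart.
s1 = np.array([1.0, 1.0, 1.0])
s2 = s1 + np.array([1e-8, 0.0, 0.0])

for _ in range(10_000):          # 50 time units at dt = 0.005
    s1, s2 = lorenz_step(s1), lorenz_step(s2)

# At r = 28 the separation has grown by many orders of magnitude (chaos).
# Rerunning with r well below ~13.9 keeps the two trajectories together.
print(np.linalg.norm(s1 - s2))
```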
  27. The IPCC's 2035 prediction about Himalayan glaciers
    Charlie A, I didn't claim it was right or wrong; I just pointed out that, plugging in the correct starting time, the calculation of the rate is correct.
  28. On the reliability of the U.S. Surface Temperature Record
    Or maybe even restrain myself :)
  29. On the reliability of the U.S. Surface Temperature Record
    " A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degree C - no matter what the variation occurring due to instrumentation type." We're interested in trends, so a constant bias has no effect, nor does the choice of baseline from which to compute the anomaly. For any bias B, and any two temperature reading at points in time N0 and N1, (N0-B) - (N1-B) = N0 - N1. And you can extend that into any statistical trend analysis taken over a time series N0 ... Nn. "I'm a chemical engineer with U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my phD's came to me with this nonsense, I'd fire him on the spot. " I could make a snarky statement about 9th grade algebra students but I'll withstrain myself.
  30. The IPCC's 2035 prediction about Himalayan glaciers
    I live in the East USA but have skied annually in California/the West, and talking about glaciers as a water supply seems awful SILLY. Whatever glaciers there are in California are teeny (most all of the snow melts by September) and their contribution to melt has to be very small compared to the general melting snowpack. Worrying about glaciers, without considering an overall predicted rise in precipitation from AGW "theory", does not seem to be right to me. And even with the worst case AGW scenarios coming true, say a 4F rise in temp, is that going to stop a snowpack from forming? I don't think so. Here in the Eastern US snowfall is extremely variable, with some years having very little snowpack, yet our rivers flow all year long, except in periods of a true drought, when they just flow low. I conclude the glacier scare is nothing but that, another scare. I don't have the deep scientific knowledge of AGW "theory" that many of you have, but common sense appears once again to rule the day on the Himalayan glacier/melt issue!
  31. On the reliability of the U.S. Surface Temperature Record
    kforestcat writes: "I'm fully aware of how anomaly data is used ( having used it in my own research)" I'm not sure you actually do understand this, because your comments still show the same kinds of errors and confusion. "NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested)." NASA doesn't take temperature station readings, and nobody has suggested that the temperature sensors measure anomalies directly. "Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location." That is very confused. The temperature anomaly is the current daily (or monthly) temperature minus the mean temperature on the same day (or month) during a given reference period. You don't "divide" any temperatures. And Menne et al. are not measuring "instrument error". They are analyzing measurements of temperature as a function of site quality in order to determine the difference in temperature trends between well-sited and poorly-sited stations. "Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events." You still completely fail to understand what's going on here. Menne et al. are taking the temperature data and grouping them into categories based on the site quality. They then determine the difference in long-term trends between well-sited and poorly-sited stations. In the raw, unadjusted data, poorly-sited stations tend to have a slightly lower trend than well-sited stations. The network homogenization and adjustment process brings poorly-sited stations into closer agreement with well-sited stations. "The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher." That is just bluster. What the analysis shows quite clearly is that if anything, poorly-sited stations on average underestimate the warming trend, but that the network adjustment process is able to successfully compensate for this effect. And even if you were reluctant to accept that, the close agreement between in-situ surface temperature and satellite microwave temperature retrievals from the lower troposphere suggests that the surface temperature record is realistic. "Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower; " Huh? Again, this makes no sense. If a sensor always reads 5C too high, its anomaly will be exactly the same as if it were perfectly sited. If a sensor's environment changes such that the current temperature is biased high relative to the period of record, then it will have a positive anomaly, not a negative one. "and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect)." 
That is both confused and irrelevant to the paper at hand. "Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different." (1) Menne et al. work for NOAA, not NASA, and the paper being discussed here is about NOAA's temperature data. (2) You still seem confused about the relationship between measured temperature data and calculated temperature anomaly. (3) The entire point of this paper is to compare poorly-sited and well-sited stations. (4) By doing this comparison using trends in the anomaly rather than using the absolute temperatures, there's no need to compare stations within "a couple of hundred of feet" of each other. "As it stands Menne's data is useless nonsense and not really worth serious discussion." Again, that is just bluster. It sounds to me like you don't understand the subject but are deeply invested in casting doubt on it.
  32. Skeptical Science now an iPhone app
    re #19 "The numbers don't seem to add up" The numbers add up pretty well if one considers the system in its entirety (all the forcings and a realistic assessment of climate response times). So, for example, the 20th century global temperature evolution can be reproduced rather well by incorporating all of the contributions and climate response times [*] (see Figure 1): [*] http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf It's possible to illustrate part of the difficulty with your analysis by considering the global temperature from the late 19th century to the mid 20th century [**]. The global warming during this period wasn't more than around 0.2-0.3 °C overall. It's just that the surface temperature was knocked back quite a bit for a while (see post just above) by volcanic activity. So the net warming in response to your net forcing of 0.5 W/m2 over 1910-1940 likely wasn't more than 0.2-0.3 °C (perhaps even a bit less, if there was a significant contribution from ocean current effects of the sort that Tsonis and Swanson have discussed). But the bottom line is that the net effect can only be assessed by a realistic incorporation of all of the contributions and the earth's responses to these.... [**] http://www.cru.uea.ac.uk/cru/data/temperature/nhshgl.gif
  33. Skeptical Science now an iPhone app
    You're mistaking "lag" for "time constant"/"response time", HumanityRules (see my post #20). It's pretty straightforward: make a step change in a forcing to a new value. The earth starts to warm essentially immediately (no lag!). The time taken for the earth to come to equilibrium with the new forcing is a function of the time constants/response times of the system (a rapid response of a few years in the atmosphere; slower time constants for penetration of heat into the "deeper" elements of the climate system, with a very slow response time indeed for the vast oceans to come towards equilibrium with the forcing). It's the latter that gives the "heat in the pipeline" that you remarked upon. That's all very straightforward, I think. The mistake is to think that the response of the surface temperature can be encapsulated within individual simple hived-off pieces of the whole. For example, we could look at the temperature rise during the early 20th century. There was some very dramatic volcanic activity in the late 19th century/early 20th century, and inspection of the global temperature record [*] shows that this knocked back the surface temperature by quite a large amount (0.2-0.3 °C) during a period of 20-odd years. However volcanic forcings are temporary; they have a significant short term effect on the surface temperature, which can be prolonged if there is a period of sustained volcanic activity [as in the period 1883 (Krakatoa) through to Soufrière, Santa María and Mt Pelée in 1902], and so their effects don't penetrate "deeply" into the climate system. So much of the earth's surface temperature suppression due to volcanic forcing was recovered relatively quickly through the period 1910-1930s. There was also a small solar contribution and an enhanced greenhouse effect contribution to the early 20th century temperature rise. The earth responds to these again without lag, but the full response to these persistent, long-term forcings will take a long time to saturate the elements of the climate system that have a high inertia to change (i.e. the oceans). The earth still hasn't come fully to equilibrium with the enhanced forcing as it stood in 1940 (say), let alone with the forcing as it stands at this particular point in time. Obviously, though, if we want to attribute the contributions to the 20th century temperature evolution, we have to consider all of these (including the negative forcing contributions like anthropogenic aerosols), and the manner in which the earth responds to them. It's not that complex. However it does require thinking about (modelling) the system in its entirety. One can't insist on cutting everything right back to individual components and simplistic responses and then complain that reality doesn't conform to a grossly oversimplified view - that's essentially to use straw-man argumentation!
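The lag vs response-time distinction drawn above can be made concrete with the simplest possible energy-balance picture (a single heat-capacity "box"; this is an illustrative simplification, not anything quoted from the comment). For a step forcing ΔF applied at t = 0,

$$C\,\frac{dT}{dt} = \Delta F - \lambda T
\;\;\Rightarrow\;\;
T(t) = \frac{\Delta F}{\lambda}\left(1 - e^{-t/\tau}\right), \qquad \tau = \frac{C}{\lambda},$$

so the warming rate is largest immediately after the step (no lag), while the approach to the equilibrium value ΔF/λ is governed by the response time τ, which is long when the effective heat capacity C (dominated by the oceans) is large.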
  34. On the reliability of the U.S. Surface Temperature Record
    Kforestcat, I'm sorry, but you're off in the weeds on this one. What you describe with your pavement example is an example of signal + bias + noise. Because the instrument's location is constant, we can eventually come up with a correction mechanism to remove the bias from the data. That leaves us with signal + noise. Removing the noise is simple filtering, of which averaging is one variety. Mathematically, averaging a signal removes noise (increases the signal-to-noise ratio) at the rate of the square root of the number of samples. Averaging daily samples over the course of a week increases the SNR by nearly 3 over any single sample. So if we picked up thermal noise from a car one day, then we merely have to average that data point with others from the same instrument in order to dramatically reduce the impact of that noisy sample on the overall data. I'll grant you that, if you only have a single data point, biases and noise on that data point will be a major problem. But that's not the case with the temperature record.
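A quick numerical sketch of the square-root-of-N point made above (the measurement value and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 20.0    # the quantity being measured
noise_sd = 1.0       # standard deviation of the noise on a single sample

# Spread of single samples vs. 7-sample (weekly) averages.
singles = true_value + rng.normal(0.0, noise_sd, size=100_000)
weekly = (true_value + rng.normal(0.0, noise_sd, size=(100_000, 7))).mean(axis=1)

print(singles.std())   # ~1.0
print(weekly.std())    # ~1.0 / sqrt(7) = 0.38, i.e. SNR improved by ~2.6
```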
  35. On the reliability of the U.S. Surface Temperature Record
    Regardless of site, we see an obvious rising temperature gradient from 1980 to 2009 based on a line of best fit. However, 'eyeballing' the unadjusted data suggests a striking fall based on a line of best fit running from an anomalously high 1998 to 2009. Now we certainly don't want to cherry pick. The 1998 data was attributable to a very large El Niño. However, following on from the preceding post ('The chaos of confusing the concepts') with its discussion of the Lorenz attractor, I find myself wondering whether we may indeed be seeing evidence of greater inherent unpredictability than we commonly suppose. Eleven years, after all, seems a long period, especially when we consider the preceding data set covers eighteen years. Should we be considering the two periods as one segment? Alternatively, should we be considering these periods as two distinct segments and asking why 1998 produced such a high El Niño (followed by a relatively warm period) and why 2007-2009 are producing a much lower gradient? Moreover, is this gradient likely to continue? I think the question of site location is clearly a furphy given the broad consistency between better and not so well located sites. However, deciding which periods we select to measure trends over is of much more fundamental importance, given the arbitrary nature of lines of best fit. Otherwise, we risk failing to ask obvious questions.
  36. On the reliability of the U.S. Surface Temperature Record
    Kforestcat, in your example you wrote "Say the mean 1971-2000 temperature well away from the parking lot...." But that's not of interest. Instead, the temperature on that given day, from that parking-lot-situated instrument, is differenced from the average temperature across 1971-2000 of that same instrument.
  37. On the reliability of the U.S. Surface Temperature Record
    Kforestcat, of course the temperature stations produce absolute temperatures as their "raw" data rather than as anomalies from a baseline. I have never seen anyone claim otherwise. You are misreading quite drastically. The baseline against which the anomalies are computed, is the average temperature for that specific locality across whatever time range has been chosen as the baseline. Each station has its own, local, baseline computed. Then each individual temperature reading from that one given station is differenced from that baseline for that one given station. The result is a difference of that one reading, from that tailored baseline. That procedure is done separately for each individual temperature reading, each against its own individual, tailored, baseline. It is a simple mathematical transformation that has nothing to do with instrument error and nothing to do with instrument calibration. It is a simple re-expression of each individual temperature reading that preserves all changes from the baseline temperature. The resulting collection of individually transformed temperatures is the collection of "raw" anomalies. Those are the "raw" data that you see being discussed.
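A minimal sketch of the per-station transformation described in the comment above (the station, month, and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# One hypothetical station's July mean temperatures (deg C) over the 1971-2000 baseline period.
baseline_years = np.arange(1971, 2001)
july_temps = 24.0 + rng.normal(0.0, 0.6, baseline_years.size)

# The baseline is computed from this station's own record...
station_baseline = july_temps.mean()

# ...and each new reading from the same station is differenced against it.
july_2009_reading = 25.1
july_2009_anomaly = july_2009_reading - station_baseline
print(july_2009_anomaly)   # the "raw" anomaly for July 2009 at this one station
```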
  38. Why is Greenland's ice loss accelerating?
    The paper states: "Our results show that both mass balance components, SMB and D (eq. S1), contributed equally to the post-1996 cumulative GrIS mass loss (Fig. 2A)." But then, Fig. 3 shows: Ice Discharge: -94 Gt/yr; Surface Mass Balance: -144 Gt/yr. Isn't this a contradiction? Then comes this statement: "A quadratic decrease (r^2 = 0.97) explains the 2000-2008 cumulative mass anomaly better than a linear fit (r^2 = 0.90). Equation S1 implies that when SMB-D is negative but constant in time, ice sheet mass will decrease linearly in time. If, however, SMB-D decreases linearly in time, as has been approximately the case since 2000 (fig. S3), ice sheet mass is indeed expected to decrease quadratically in time." What is this "r^2 = 0.97" and how is it related to the equations MB = ∂M/∂t = SMB − D (S1) and δM = ∫dt (SMB − D) = t·(SMB₀ − D₀) + ∫dt (δSMB − δD) (S4)? Any idea?
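For what it's worth, the quadratic behaviour quoted in the comment above follows directly from (S1) by integration. If the net balance falls off linearly, say $\mathrm{SMB} - D = (\mathrm{SMB}_0 - D_0) + kt$ with $k < 0$ (the symbol $k$ is introduced here, not in the paper), then

$$\delta M(t) = \int_0^t \left(\mathrm{SMB} - D\right) dt' = (\mathrm{SMB}_0 - D_0)\,t + \tfrac{1}{2}\,k\,t^2,$$

which is quadratic in time; the r^2 values are simply the goodness-of-fit of a quadratic (0.97) versus a straight line (0.90) to the observed cumulative mass anomaly, so the better quadratic fit is consistent with a roughly linearly declining SMB − D.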
  39. On the reliability of the U.S. Surface Temperature Record
    Gentlemen I'm fully aware of how anomaly data is used ( having used it in my own research) and I know full well what can go awry in the field experiments. We are talking about every day instrument calibration and QA/QC - this is not rocket science. I firmly maintain the Menne 2010 paper is fundamentally flawed and entirely useless. NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested). The temperature data is reduced to anomaly after the absolute temperature readings for a site are obtained. For example, see the station data for Orland (39.8 N, 122.2 W) obtained directly from NASA's GISS web site. The temperatures are recorded as Annual Mean Temperature in degrees C - not as an anomaly as you have suggested. (Tried to attach a NASA GIF as a visual aid - but did not succeed). Bottom line. Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location. The Menne paper is absolutely useless for the stated purpose. Anyone who actually collects field data (I have) knows they are going to immediately run into two fundamental problems when an instrument is improperly located: 1) they are not reading ambient air temperature and 2) neither temperature readings nor the anomaly can be corrected back to a true ambient because other factors are influencing the readings. For example: Suppose we have placed our instrument in a parking lot. Say the mean 1971-2000 temperature well away from the parking lot is 85F; but the instrument is improperly reading a mean of 90F. Now on a given day, say the ambient temp is 93F but your instrument is reading 105F (picked up some radiant heat from a car). Ok our: Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events. The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher. Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower; and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect). Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different. As it stands Menne's data is useless nonsense and not really worth serious discussion. Dave
  40. The IPCC's 2035 prediction about Himalayan glaciers
    nofreewind -- As Tom Dayton points out, the absence of glaciers just affects the timing of the water flow. Assuming constant annual precipitation, then the total annual water flow in the rivers will stay the same, but there will be a bigger seasonal variation. Without glaciers, melting snowpack would be the source of summer water flow in most Himalayan/Indian rivers. I've seen some non peer reviewed articles that said that the loss of glaciers would cause most rivers in India to go dry during the summer, but have not seen any peer reviewed articles that had any such drastic predictions. The alarmist articles seem to ignore the snowpack, which is the primary storage in many areas, such as California as mentioned above by Tom Dayton.
  41. The IPCC's 2035 prediction about Himalayan glaciers
    nofreewind, glaciers and snowpack are natural reservoirs of water not only in the Himalayas, but in California and many places around the world. They hang on to precipitation during the winter and dole it out gradually as meltwater as the weather warms into Spring and Summer, and even into Fall. Huge numbers of people, agriculture, and industry rely on the resulting somewhat steady and predictable supply of water around the year. It is impossible to build enough artificial reservoirs to compensate for the loss of those natural reservoirs. Also, excessive supply of water (because it is not being held long enough and doled out in measured quantities) causes flooding by exceeding the short-term capacities of the human infrastructure.
  42. The IPCC's 2035 prediction about Himalayan glaciers
    I don't understand why the glaciers are so important to water supply. We don't have glaciers here in Pennsylvania USA, yet the rivers flow year round. In the Himalayas, the water comes down the mountains not because of glaciers, but because the monsoons bring snow to the mountains. If there is precipitation, the rivers will flow, right? Why is it so important that the water is hundreds- or thousands-of-years-old glacier water? Let's get rid of the glaciers and get some fresh water down to drink!
  43. On the reliability of the U.S. Surface Temperature Record
    Kforestcat, I think you may be confusing two things here - bias error and probabilistic noise. The paper makes it clear that the unadjusted curves represent the measurements before known bias errors are removed, while the "adjusted" curves are after the bias errors have been corrected. Conversion to an anomaly is effectively the same as normalization, and the purpose is the same. Both serve to accentuate the part of the data that you care about. I do both regularly in my professional field of electrical engineering, especially when I'm interested in understanding the nature of noise plaguing my circuitry. Finally, what Watts et al. are essentially saying is that heat islands, in this case caused by electrical transformers, waste treatment plants, air conditioners, or pavement, have made the global temperature record unusable. This paper points out that Watts is incorrect, but it's not the first paper to do so by any means. The following paper showed that well established urban areas had the exact same trends as rural areas, but with a removable warm temperature bias: http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml To use an analogy, if a trampoline can get you 10 feet into the air out on a farm, there's every reason to believe that it'll get you just 10 feet into the air if you move it into a city.
  44. Skeptical Science now an iPhone app
    Chris, I guess my point was made in #19. There is no evidence of lag in the early-mid century. The radiative forcing increase over 1910-1940 coincides with a delta T which leaves nothing "in the pipeline". "the earth should somehow miraculously come instantaneously to the new forced surface temperature" - it seems miracles did happen 1910-1940. For the proposed system to work, lag would have to be a late 20th century phenomenon only.
  45. The IPCC's 2035 prediction about Himalayan glaciers
    Apparently, there are other errors in this section. The erroneous 2035 date has been acknowledged, but the IPCC has not acknowledged the error I've pointed out above in table 10.9. Another error is the statement, referring to Himalayan glaciers, that "Its total area will likely shrink from the present 500,000 to 100,000 square kilometers by the year 2035." The statement appears to have its original source in a 1996 article which states that the total worldwide extrapolar glacial area of 500k sq km is expected to go down to 100k sq km by 2350. I don't have a peer reviewed source handy for total Himalayan glacial area, but the UNEP/WGMS report Global Glacier Changes says the total area of Himalayan glaciers is 33,040 sq kilometers, so this appears to be yet another clue that the statements in this section should have been reviewed more thoroughly. Georg Kaser, a lead author of a WG1 chapter, has said that he told others in the IPCC of the errors, but they chose not to correct them. The entire section on Himalayan glaciers is not of that much importance. What is more important is that this is yet another example of problems in the IPCC review process. Nominations for reviewers of AR5 are now being taken, but only from selected organizations. The IPCC would be well served to include some reviewers that don't have a strong confirmation bias in favor of AGW, and to put procedures in place that don't allow the lead authors to rely almost exclusively on their own publications, to the exclusion of other peer reviewed papers that conflict with the lead authors' opinions.
  46. On the reliability of the U.S. Surface Temperature Record
    Swerving off topic so possibly may never see the light of day, but further to remarks on skepticism versus denial, etc., English is a rich language and there's no need to use a single word to describe a plethora of approaches. Doubter, contrarian, skeptic, denier, they're all different in meaning and need to be applied individually. "Faithful" would be a better word for some, for that matter, seemingly detached from the material world. My limited experience w/participating in discussions on this topic tells me I'm generally far too hasty in categorizing, to the point where I've already had to resort to apology too often, enough to make me more cautious about committing accidental slurs. As is said, discretion is the better part of valor.
  47. The IPCC's 2035 prediction about Himalayan glaciers
    #19, Ricardo says "it's a typo, the starting year is 1947 not 1847." Ricardo, what is your data source for this statement? 2840 meters of retreat from 1845 to 1966 is consistent with other reports of 1600 meters of retreat from 1847 to 1906 (27 meters/year) and 1040 meters of retreat from 1906 to 1958 (20 meters per year). What is your source for saying that the starting year is 1947? My figures come from http://iahs.info/redbooks/a058/05828.pdf and several other sources of similar numbers. The current retreat rate of 10 meters/year comes from the 9th volume of Fluctuations of Glaciers, issued by the World Glacier Monitoring Service. If AR4 is incorrect and the others are correct, then the snout of the Pindari has slowed from 27 m/yr up to 1906, to 20 meters per year to 1958, to 10 meters per year to 2006. If the AR4 is correct, then there has been an even more dramatic reduction in the retreat rate, from 135.2 meters/year down to 10 meters per year.
  48. On the reliability of the U.S. Surface Temperature Record
    Steve Carson. I'm no expert in climatology, but if I read a paper which claimed that surface temperatures had been falling (not rising) for the last 30 years, then I'd seek independent verification from other sources before I accepted or dismissed the claim - that's what makes me a Skeptic (& a scientist). A denialist, by contrast, will automatically dismiss any evidence that doesn't fit their ideology - without independent verification - no matter how strong the evidence is (yet they still demand ever more evidence, even though they'll dismiss that too). If it helps, the other side contains what I call the "True Believers" - they accept the theory of global warming because someone they admire &/or want to believe tells them so, without independent verification. Personally, I have no time for denialists or true believers, but instead seek independent verification of every claim & counter-claim being made. It's always important to think for yourself rather than blindly accept the claims of people who might have a vested interest. Hope that makes more sense.
  49. On the reliability of the U.S. Surface Temperature Record
    Gentlemen You really ought to read the methods used before you gloat. The individual station anomaly measurements were based on each station's "1971-2000 station mean". See where the document states: "Specifically, the unadjusted and adjusted monthly station values were converted to anomalies relative to the 1971–2000 station mean." In other words, the only thing this study measures is the difference in instrument error at each station. The absolute error occurring at individual stations because the station had not been properly located is not measured. A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degree C - no matter what the variation occurring due to instrumentation type. I'm a chemical engineer with U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my phD's came to me with this nonsense, I'd fire him on the spot. Sorry boys, you are going to have to do better than this. Dave
    Response: Whenever you look at a graph of global temperature, invariably you're looking at "temperature anomaly" (the change in temperature), not absolute temperature. As NASA puts it, "the reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region". It's the change in temperature (eg - the trend) that is of interest and the analysis in Menne 2010 determines if there is any bias in the trend due to poor siting of weather stations.
  50. Why is Greenland's ice loss accelerating?
    As shown in the other Greenland post on this site, the best-fit curve of total ice mass loss from GRACE shows that Greenland ice loss is accelerating at a rate of 30 Gigatonnes/yr^2. But from these new results we have the contributions: Ice Discharge: -94 Gt/yr (39.5%); Surface Mass Balance: -144 Gt/yr (60.5%). So most of the ice loss comes from surface melting alone! This is surprising because surface melt minus surface precipitation is something that is very weather-sensitive. Now I ask: 1. How could a weather-sensitive melting follow a quadratic function so closely (i.e. how could the acceleration be so close to a constant value of 30 Gigatonnes/yr^2)? 2. Can we expect this trend to persist, or will weather/climate variability "break" the smooth curve shown here at some point?
