Comments 112901 to 112950:

  1. Waste heat vs greenhouse warming
    @ doug_bostrom at 12:16 PM on 10 August, 2010 Doug, I'd love to, but a) the topic is off-topic here, b) I did address that question in another post, but that post was deleted by the moderators, and c) it is not only pointless but also a waste of my time and energy to try to establish any fruitful communication whatsoever here, as about half of the comments I post get deleted by the moderation team. The only recommendation I can give is to check the references I gave in an earlier post, though, conveniently, that post was deleted as well.
  2. It's cooling
    Voila, Charles.
  3. Charles Higley at 07:43 AM on 12 August 2010
    It's cooling
    It is surely curious that the chart of ocean temperature stops at 2005. The oceans have been clearly cooling since 2006. How about an update and a new discussion?
    Response: It's a good question - I actually put the same question to Dan Murphy, author of the paper the total heat content data comes from (Figure 1 above). That data actually ends in 2003, because the ocean heat content it takes from Domingues et al. 2008 ends in 2003. This is why I also post the von Schuckmann data in Figure 2, which continues on to 2008.
  4. Grappling With Change: London and the River Thames
    Pete, I don't see any points of disagreement to resolve with you, only some repetitions of things I mentioned in my article and from which some folks may derive comfort, and some speculations on your part. As to your question about the contribution of isostatic adjustment to local sea level rise, that question came up earlier in this thread (in the exchange with dorlomin). Answer: indeed, the whole southern coast of England is sinking. The rate varies by location; near London it's dropping at a pretty good clip of 0.5 mm/yr.
  5. Grappling With Change: London and the River Thames
    Doug, getting back to your article, in which you address a long-standing battle between nature and humans - flooding - let's try to resolve some points of disagreement (which may arise from the interpretation of words rather than from differences of opinion). You say in your article "It's notable that of the thousands of pages of assessment and planning documents associated with future London flood management there is essentially no mention of anthropogenic causes for climate change, naturally so because cause has nothing to do with response when cause is outside of the control of planners". Of course there is another way to explain that lack of any mention of humans causing climate change: perhaps the authors saw no convincing evidence that any such changes were at all significant.
    One thing that puzzles me is that London was established as a major urban area around AD 70 and has grown rapidly since (Note 1), with the population peaking in the 1940s. During those 2000 years we are led to believe that global mean temperature rose to a peak about AD 1300, fell to a low about AD 1600 and has been rising since. In 1990 the IPCC was suggesting that the medieval period was hotter than 1975 (Note 2) - but there is considerable debate about the extent and rate of the increase occurring presently. Despite those higher medieval temperatures, it was not until the damaging floods of 1928 and 1953 that consideration was given to the need for a "Thames Barrier", and that was not for protection from unusual downpours but ".. to prevent London from being flooded by exceptionally high tides and storm surges moving up from the sea" (Note 3). The Institute of Historical Research (Note 4) says "The lands bordering the tidal river Thames and the Thames Estuary have historically been highly vulnerable to marine flooding. The most severe of these floods derive from North Sea storm surges, when wind and tide combine to drive huge quantities of water against the coast, .. ". This is substantiated by the Environment Agency (Note 5).
    The London Regional Flood Risk Appraisal - October 2009 (Note 6) talks about " .. responding to potential increases in flood risk from urban development, land use change, and climate change .. ". They put climate change at the end of the list, perhaps because of the enormous uncertainty about what changes will occur. The extent to which global mean temperatures will rise, fall or remain the same is unknown, and climate change can only be speculated about.
    There are apparently two causes of rising water levels in that area: 1) post-glacial tilting of the UK, up in the N & W and down in the S & E, and 2) a rise in the high-water level (2 mm/year). Although the last ice age is long gone, the effects of the weight of all that extra ice are still with us, including the restoration of equilibrium to the land. It is suggested (Note 7) that "Today, typical uplift rates are of the order of 1 cm/year or less", which suggests that sink rates are of the same order. I wonder how much of the mean sea level change (3 mm/yr approx?) estimated using tide gauge measurements results from this sinking rather than from any global warming. Can anyone link to a study covering this?
    NOTES:
    1) http://en.wikipedia.org/wiki/History_of_London
    2) http://www.uoguelph.ca/~rmckitri/research/NRCreport.pdf
    3) http://en.wikipedia.org/wiki/Thames_Barrier
    4) http://www.history.ac.uk/projects/tidal-thames
    5) http://www.environment-agency.gov.uk/homeandleisure/floods/117047.aspx
    6) http://static.london.gov.uk/mayor/strategies/sds/docs/regional-flood-risk09.pdf
    7) http://en.wikipedia.org/wiki/Post-glacial_rebound
    Best regards, Pete Ridley
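A rough back-of-the-envelope way to frame Pete's closing question, using only figures already quoted in this thread (the ~0.5 mm/yr subsidence near London mentioned above and the ~2 mm/yr high-water rise quoted in this comment); this is illustrative arithmetic only, not the study he asks for:

```python
# Illustrative arithmetic only, using numbers quoted in this thread.
subsidence = 0.5   # mm/yr, land sinking near London (figure cited above)
water_rise = 2.0   # mm/yr, rise in high-water level (figure cited in this comment)

relative_rise = water_rise + subsidence      # roughly what a local tide gauge sees
share_from_sinking = subsidence / relative_rise

print(f"local relative rise: {relative_rise:.1f} mm/yr")
print(f"fraction attributable to land sinking: {share_from_sinking:.0%}")  # ~20%
```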
  6. Models are unreliable
    Yeah, Pete: circumspect, conservative. Hargreaves notes that Hansen's 1988 model passes the "null hypothesis" test but does not leap to any conclusions about "all the models are really great."
  7. Models are unreliable
    Doug, thanks for that link to Julia Hargreaves’s paper. I wholeheartedly agree with her conclusion that “Uncertainty analysis is a powerful, and under utilized, tool which can place bounds on the state of current knowledge and point the way for future research, but it is only by better understanding the processes and inclusion of these processes in the models that the best models can provide predictions that are both more credible and closer to the truth”. There’s a lot more research to be done into obtaining a proper understanding of those horrendously complicated and poorly understood global climate processes and drivers before any reliable models can be constructed and used for predictions. Best regards, Pete Ridley
  8. On Statistical Significance and Confidence
    #12 Ken Lambert, It is well known that CO2 is not the only influence on the earth's energy content. As temperature has a reasonably good relationship with energy content (leaving out chemical or phase changes), it is reasonable to use air temperatures to some extent. (Ocean temps should be weighted far more heavily than air temps, but regardless...) If you pull up any reputable temperature graph, you will see that there have been about 4 to 6 times in the past 60 years where the temperature has actually dipped. So, according to your logic, GW has stopped 4 to 6 times already in the last 60 years. However, it continues to be the case that every decade is warmer than the last. What I find slightly alarming is that, despite the sun being in an unusually long period of low output, temperatures have not dipped.
    Moderator Response: Rather than delve once more into specific topics handled elsewhere on Skeptical Science and which may be found using the "Search" tool at upper left, please be considerate of Alden's effort by trying to stay on the topic of statistics. Examples of statistical treatments employing climate change data are perfectly fine, divorcing discussion from the thread topic is not considerate. Thanks!
  9. On Statistical Significance and Confidence
    #14 Arkadiusz Semczyszak, "Why exactly 15 years?" Good question. The answer is that the person asking the question of Phil Jones used the range 1995-2009, knowing that if he used the range 1994-2009, Dr. Jones would have been able to answer 'yes' instead of 'no'.
  10. Models are unreliable
    Pete, regarding validation you ought to take a look at Hargreaves' remarks here. Concerning that item, be sure also to read Annan's remarks here where as you can see he leads us to the conclusion that making broad condemnatory statements about purported lack of model utility is not circumspect.
  11. Models are unreliable
    Well that comment of mine on 11th August @ 07:12 did elicit some interesting responses but, as Doug acknowledged @ 08:03 “you won't find a refutation to M&M 2010 coming from here”. I think that Doug’s contribution @ 14:10 offered the best read, at friend James’s blog (Note 1). There are lots of interesting comments there, the one that I found most appropriate being from Ron Cram on 12th August @ 01:10 QUOTE: Gavin writes "It is also perhaps time for people to stop trying to reject 'models' in general, and instead try and be specific." People are not trying to reject models in general. It has already been done. Generally speaking commenters are bringing up points already published in Orrin Pilkey's book "Useless Arithmetic: Why Environmental Scientists Can't Predict the Future." Nature is simply too chaotic to be predicted by mathematical formulas, no matter how sophisticated the software or powerful the hardware. None of the models relied on by the IPCC have been validated. It is fair to say the models are non-validated, non-physical and non-sensical. Perhaps it is time to quit pretending otherwise UNQUOTE. NOTE: 1) see http://julesandjames.blogspot.com/2010/08/how-not-to-compare-models-to-data-part.html#comments Best regards, Pete Ridley
  12. On Statistical Significance and Confidence
    ABG at 01:29 AM on 12 August, 2010 Thanks, Alden. I actually understood exactly what you're getting at. Whether I can remember and apply it in future is another matter!
  13. On Statistical Significance and Confidence
    BP @17: Nice. That level of disingenuousness must be applauded. Using a plot of localized ENSO-related temperature anomaly to suggest that the oceans are losing heat is pure genius. Anyone interested in the source and significance of BP's plot is directed here. See, in particular, the "Weekly ENSO Evolution, Status, and Prediction Presentation."
  14. On Statistical Significance and Confidence
    Ken Lambert @12: No scientist who studies climate would use 10 or 12 years, or the 15 in the OP, to identify a long-term temperature trend. For reasons that have been discussed at length many times, here and elsewhere, there is quite a bit of variance in annualized global temperature anomalies, and it takes a longer period for reliable (i.e., statistically significant) trends to emerge. Phil Jones was asked a specific question about the 15-year trend, and he gave a specific answer. Alden Griffith was explaining what he meant. Neither, I believe, would endorse using any 15-year period as a baseline for understanding climate, nor would most climate scientists.
    The facts of AGW are simple and irrefutable:
    1. There are multiple lines of direct evidence that human activity is increasing the CO2 in the atmosphere.
    2. There is well-established theory, supported by multiple lines of direct evidence, that increasing atmospheric CO2 creates a radiative imbalance that will warm the planet.
    3. There are multiple lines of direct evidence that the planet is warming, and that that warming is consistent with the measured CO2 increase.
    One cannot rationally reject AGW simply because the surface temperature record produced by one organization does not show a constant increase over whatever period of years, months, or days one chooses. The global circulation of thermal energy is far too complex for such a simplistic approach. The surface temperature record is but one indicator of global warming; it is not the warming itself. When viewed over a period long enough to provide statistical significance, all of the various surface temperature records indicate global warming.
  15. On Statistical Significance and Confidence
    I bet you can get low-ish significance trends in any short interval in the last half century. There's nothing special in the "lack of significance" of this recent period. One could claim forever that "the last x years did not reach 95% significance".
  16. Alden Griffith at 01:29 AM on 12 August 2010
    On Statistical Significance and Confidence
    John Russell: You're not alone! Statistics is a notoriously nonintuitive field. Instead of getting bogged down in the details, here's perhaps a simpler take-home message: IF temperatures were completely random and not actually increasing, it would still be rather unlikely that we would see a perfectly flat line. So I've taken the temperature data and completely shuffled it around so that each temperature value is randomly assigned to a year. Here we have completely random temperatures, but we still sometimes see a positive trend. If we did this 1000 times, like John Brookes did, the average random slope would be zero, but there would be plenty of positive and negative slopes as well. So the statistical test is getting at: is the trend line that we actually saw unusual compared to all of the randomized slopes? In this case it's fairly unusual, but not extremely so. To get at your specific question - the red line definitely fits the data better (it's the best fit, really). But that still doesn't mean that it couldn't be a product of chance and that the TRUE relationship is flat. [wow - talking about stats really involves a lot of double negatives... no wonder it's confusing!!!] -Alden
  17. Berényi Péter at 01:27 AM on 12 August 2010
    On Statistical Significance and Confidence
    #13 CBDunkerson at 00:09 AM on 12 August, 2010 We must see rising temperatures SOMEWHERE within the climate system. In the oceans for instance. Nah. It's coming out, not going in recently.
  18. On Statistical Significance and Confidence
    Discussing trends and statistical significance is something that I attempt to do - with no training in statistics. All I have learned from various websites over the last few years is conceptual, not mathematical. I would appreciate anyone with sufficient qualifications straightening out any misconceptions regarding the following:
    1) Generally speaking, the greater the variance in the data, the more data you need (in a time series) to achieve statistical significance for any trend.
    2) With too-short samples, the resulting trend may be more an expression of the variability than of any underlying trend.
    3) The number of years required to achieve statistical significance in temperature data will vary slightly depending on how 'noisy' the data is in different periods.
    4) If I wanted to assess the climate trend of the last ten years, a good way of doing it would be to calculate the trend from 1980 - 1999, and then the trend from 1980 - 2009, and compare the results. In this analysis, I am using a minimum of 20 years of data for the first trend (statistically significant), and then 30 years of data for the second, which includes the data from the first. (With Hadley data, the 30-year trend is slightly higher than the 20-year trend.)
    Aside from asking these questions for my own satisfaction, I'm hoping they might give some insight into how a complete novice interprets statistics from blogs, and provide some calibration for future posts by people who know what they're talking about. :-) If it's not too bothersome, I'd be grateful if anyone could point me to the figure in the Excel regression analysis output that tells you what the statistical significance is - and how to interpret it, if it's not described in the post above. I've included a snapshot of what I see - no amount of googling helps me know which box(es) to look at and how to interpret them.
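A minimal sketch of one way to get at that last question without Excel (the numbers below are synthetic placeholders, not real HadCRUT anomalies): the statistical significance of a fitted trend is the p-value attached to the slope, which scipy reports directly. In Excel's Data Analysis regression output the corresponding figure should be the "P-value" shown on the X Variable (slope) row of the coefficients table.

```python
# Minimal sketch: significance of a linear trend via ordinary least squares.
# The anomalies below are synthetic placeholders; substitute real annual data.
import numpy as np
from scipy import stats

years = np.arange(1995, 2010)
rng = np.random.default_rng(7)
temps = 0.01 * (years - 1995) + rng.normal(0.0, 0.09, years.size)  # placeholder

res = stats.linregress(years, temps)
print(f"slope: {res.slope:+.4f} per year")
print(f"two-sided p-value for the slope: {res.pvalue:.3f}")
# Common convention: p < 0.05 is called "statistically significant at the 95%
# level". With short, noisy series the p-value is often above that threshold,
# which is points 1) and 2) above in action.
```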
  19. Alden Griffith at 00:59 AM on 12 August 2010
    On Statistical Significance and Confidence
    Stephan Lewandowsky: I used the Bayesian regression script in Systat using a diffuse prior. In this case I did not specifically deal with autocorrelation. We might expect that over such a short time period, there would be little autocorrelation through time which does appear to be the case. You are right that this certainly can be an issue with time-series data though. If you look at longer temperature periods there is strong autocorrelation. apeescape: I'm definitely not a Bayesian authority, but I'm assuming you're asking whether I examined this in more of a hypothesis testing framework? No - in this case I just examined the credibility interval of the slope. Ken Lambert: please read my previous post -Alden
  20. Arkadiusz Semczyszak at 00:12 AM on 12 August 2010
    On Statistical Significance and Confidence
    Since we are on the basics of statistics: I studied statistics for a "long three years" in ecology and agriculture. Why exactly 15 years? I have written repeatedly that the period used to compute a trend should not simply be a round decimal number, because noise-type variability - ENSO (El Niño/La Niña) and so on - does not run on that schedule. For example, 100- and 150-year AMO trends that combine a negative AMO phase with a positive one "improve" the results. The period over which we compute a trend must have a sound reason behind it. While in the above-mentioned cases (100, 150 years) the error is small, in this particular case (a "flat" phase of the AMO after a period of growth, with 1998 an extreme El Niño) the trend should be calculated from within the same ENSO phase, after the rebound from the extreme El Niño, i.e. after 2001, or with the "noise" removed: the extreme El Niño and the "leap" from the cold to the warm AMO phase. Even so, it may not matter much whether it is currently getting warmer or not; once again the tropical fingerprint of CO2 is (very much) in question (McKitrick et al. - unfortunately published in Atmos. Sci. Lett. - where the statistics, including the selection of data, also came into play).
  21. On Statistical Significance and Confidence
    Ken Lambert #12 wrote: "The answer is that the temperatures look like they have flattened over the last 10-12 years and this does not fit the AGW script!" This is fiction. Temperatures have not "flattened out"... they have continued to rise. Can you cherry pick years over a short time frame to find flat (or declining!) temperatures? Sure. But that's just nonsense. When you look at any significant span of time, even just the 10-12 years you cite, what you've got is an increasing temperature trend. Not flat. "With an increasing energy imbalance applied to a finite Earth system (land, atmosphere and oceans) we must see rising temperatures." We must see rising temperatures SOMEWHERE within the climate system. In the oceans for instance. The atmospheric temperature on the other hand can and does vary significantly from year to year.
  22. On Statistical Significance and Confidence
    Alden # Original Post We can massage all sorts of linear curve fits and play with confidence limits to the temperature data - and then we can ask why we are doing this. The answer is that the temperatures look like they have flattened over the last 10-12 years and this does not fit the AGW script! AGW believers must keep explaining the temperature record in terms of a linear rise of some kind - or the theory starts looking more uncertain and explanations more difficult. It is highly likely that the temperature curves will be non-linear in any case - because the forcings which produce these temperature curves are non-linear - some are logarithmic, some are exponential, some are sinusoidal and some we do not know. The AGW theory prescribes that a warming imbalance is there all the time and that it is increasing with CO2GHG concentration. With an increasing energy imbalance applied to a finite Earth system (land, atmosphere and oceans) we must see rising temperatures. If not, the energy imbalance must be falling - which would mean that radiative cooling and other cooling forcings (aerosols and clouds) are offsetting the CO2GHG warming effects faster than they can grow, and faster than AGW theory predicts.
  23. Dikran Marsupial at 23:36 PM on 11 August 2010
    On Statistical Significance and Confidence
    I'm going to have a go at explaining why 1 - the p-value is not the confidence that the alternative hypothesis is true, in (only) slightly more mathematical terms. The basic idea of a frequentist test is to see how likely it is that we should observe a result assuming the null hypothesis is true (in this case, that there is no positive trend and the upward tilt is just due to random variation). The less likely the data under the null hypothesis, the more likely it is that the alternative hypothesis is true. Sound reasonable? I certainly think so.
    However, imagine a function that transforms the likelihood under the null hypothesis into the "probability" that the alternative hypothesis is true. It is reasonable to assume that this function is strictly decreasing (the more likely the null hypothesis, the less likely the alternative hypothesis) and gives a value between 0 and 1 (which are traditionally used to mean "impossible" and "certain"). The problem is that other than the fact that it is decreasing and bounded by 0 and 1, we don't know what that function actually is. As a result there is no direct calibration between the probability of the data under the null hypothesis and the "probability" that the alternative hypothesis is true. This is why scientists like Phil Jones say things like "at the 95% level of significance" rather than "with 95% confidence". He can't make the latter statement (although that is what we actually want to know) simply because we don't know this function.
    As a minor caveat, I have used lots of scare quotes in this post because under the frequentist definition of probability (long-run frequency) it is meaningless to talk about the probability that a hypothesis is true. That means in the above I have been mixing Bayesian and frequentist definitions, but I have used the quotes to show where the dodginess lies.
    As to simplifications: we should make things as simple as possible, but not more so (as noted earlier). But we should only make a simplification if the statement remains correct after the simplification, and in the specific case of "we have 92% confidence that the HadCRU temperature trend from 1995 to 2009 is positive" that simply was not correct (at least for the traditional frequentist test).
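To make that point concrete, here is a small simulation sketch (the prior probability of a trend and the trend size are invented for illustration): among simulated series whose p-value lands near 0.08, the fraction that actually contain a trend depends entirely on the assumed prior and effect size, so there is no fixed function mapping a p-value to the probability that the alternative hypothesis is true.

```python
# Illustrative simulation: 1 - p is not P(alternative hypothesis is true).
# The prior probability of a trend and the trend size are made-up inputs;
# changing them changes the answer, which is exactly the point above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, n_sims = 15, 50_000
years = np.arange(n_years)

prior_h1 = 0.5      # assumed prior probability that a real trend exists
true_slope = 0.02   # assumed trend size when it exists (arbitrary units/yr)
noise_sd = 0.10

has_trend = np.empty(n_sims, dtype=bool)
pvals = np.empty(n_sims)
for i in range(n_sims):
    has_trend[i] = rng.random() < prior_h1
    slope = true_slope if has_trend[i] else 0.0
    y = slope * years + rng.normal(0.0, noise_sd, n_years)
    res = stats.linregress(years, y)
    # one-sided p-value for a positive slope
    pvals[i] = res.pvalue / 2 if res.slope > 0 else 1 - res.pvalue / 2

band = (pvals > 0.06) & (pvals < 0.10)
print("fraction with a real trend, given p ~ 0.08:", has_trend[band].mean())
# This is generally not 0.92, and it moves if prior_h1 or true_slope changes.
```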
  24. On Statistical Significance and Confidence
    "While this whole discussion comes from one specific issue involving one specific dataset, I believe that it really stems from the larger issue of how to effectively communicate science to the public. Can we get around our jargon? Should we embrace it? Should we avoid it when it doesn’t matter? All thoughts are welcome…" More research projects should have metanalysis as a goal. The outcomes of which should be distilled ala Johns one line responses to denialist arguments and these simplifications should be subject to peer review. Firtsly by scientists but also sociologists, advertising executives, politicians, school teachers, etc etc. As messages become condensed the scope for rhetoricical interpretation increases. Science should limit its responsability to science but should structure itself in a way that facilitates simplification. I think this is why we have political parties, or any comitee. I hope the blogsphere can keep these mechanics in check. The story of the tower of babylon is perhaps worth remembering. It talks about situation where we reach for the stars and we end up not being able to communicate with one another.
  25. Dikran Marsupial at 23:20 PM on 11 August 2010
    On Statistical Significance and Confidence
    John Russell If it is any consolation, I don't think it is overly controversial to suggest that there are many (I almost wrote "a majority of" ;o) active scientists who use tests of statistical significance every day yet don't fully grasp the subtleties of the underlying statistical framework. I know from my experience of reviewing papers that it is not unknown for a statistician to make errors of this nature. It is a much more subtle concept than it sounds. chriscanaris I would suggest that the definition of an outlier is another difficult area. IMHO there is no such thing as an outlier independent of the assumptions made regarding the process generating the data (in this case, the "outliers" are perfectly consistent with climate physics, so they are "unusual" but not strictly speaking outliers). The best definition of an outlier is an observation that cannot be reconciled with a model that otherwise provides satisfactory generalisation. ABG Randomisation/permutation tests are a really good place to start in learning about statistical testing, especially for anyone with a computing background. I can recommend "Understanding Probability" by Henk Tijms for anyone wanting to learn about probability and stats, as it uses a lot of simulations to reinforce the key ideas rather than just maths.
  26. Alden Griffith at 23:05 PM on 11 August 2010
    On Statistical Significance and Confidence
    John Brookes: yes, this is definitely one way to test significance. It's called a "randomization test" and really makes a whole lot of sense. Also, there are fewer assumptions that need to be made about the data. However, the reason that you are getting lower probabilities is that you are conducting the test in a "one-tailed" manner, that is, you are asking whether the slope is greater, instead of whether it is simply different (i.e. it could be negative too). Most tests should be two-tailed unless you had your specific alternative hypothesis (positive slope) in mind before you collected the data. -Alden p.s. I'll respond to others soon, I just don't have time right now.
  27. Models are unreliable
    rcglinski. Not precipitation and not a century, but this item gives a really neat alignment of humidity over the last 40 years. I've not followed the references through, but you might find some leads to what you're after if you do. http://tamino.wordpress.com/2010/08/08/urban-wet-island/#comments
  28. On Statistical Significance and Confidence
    As has been mentioned elsewhere by others, given that the data prior to this period showed a statistically significant temperature increase, with a calculated slope, then surely the null hypothesis should be that the trend continues, rather than there is no increase? I guess it depends on whether you take any given interval as independent of all other data points... stats was never my strong point - we had the most uninspiring lecturer when I did it at uni, it was a genuine struggle to stay awake!
  29. On Statistical Significance and Confidence
    The data set contains two points which are major 'outliers' - 1996 (low) and 1998 (high). I appreciate 1998 is attributable to a very strong El Nino. Very likely, the effect of the two outliers is to cancel one another out. Nevertheless, it would be an interesting exercise to know the probability of a positive slope if either or both outliers were removed (a single and double cherry pick if you like) given the 'anomalous' nature of the gap between two temperatures in such a short space of time.
  30. Dikran Marsupial at 22:28 PM on 11 August 2010
    Has Global Warming Stopped?
    fydijkstra A few points:
    (i) just because a flattening curve gives a better fit to the calibration data than a linear function does not imply that it is a better model. If it did, then there would be no such thing as over-fitting.
    (ii) it is irrelevant that most real-world functions saturate at some point if the current operating point is nowhere near saturation.
    (iii) there is indeed no physical basis to the flattening model; however, the models used to produce the IPCC projections are based on our understanding of physical processes. They are not just models fit to the training data. That is one very good reason to have more confidence in their projections as predictions of future climate (although they are called "projections" to make it clear that they shouldn't be treated as predictions without making the appropriate caveats).
    (iv) while low-order polynomials are indeed useful, just because it is a low-order polynomial does not mean that there is no over-fitting. A model can be over-fit without exactly interpolating the calibration data, and you have given no real evidence that your model is not over-fit.
    (v) your plot of the MDO is interesting, as not only is there an oscillation, but it is superimposed on a linear function of time, so it too goes off to infinity.
    (vi) as there are only 2 cycles of data shown in the graph, there isn't really enough evidence that it really is an oscillation; if nothing else, it (implicitly) assumes that the warming from the last part of the 20th century is not caused by anthropogenic GHG emissions. If you take that slope away, then there is very little evidence to support the existence of an oscillation.
    (vii) it would be interesting to see the error bars on your flattening model. I suspect there are not enough observations to greatly constrain the behaviour of the model beyond the calibration period, in which case the model is not giving useful predictions.
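A minimal sketch of point (i), with made-up numbers rather than anything from fydijkstra's analysis: fit a straight line and a fourth-order polynomial to the same "calibration" years, then score both on held-out later years.

```python
# Illustrative over-fitting demo: a flexible curve can beat a straight line on
# the calibration data while doing worse outside it. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(30)
temps = 0.015 * years + rng.normal(0.0, 0.1, years.size)  # linear truth + noise

calib, holdout = slice(0, 20), slice(20, 30)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

for degree in (1, 4):
    coeffs = np.polyfit(years[calib], temps[calib], degree)
    in_err = rmse(np.polyval(coeffs, years[calib]), temps[calib])
    out_err = rmse(np.polyval(coeffs, years[holdout]), temps[holdout])
    print(f"degree {degree}: calibration RMSE {in_err:.3f}, holdout RMSE {out_err:.3f}")

# Typically the degree-4 fit has the smaller calibration error but the larger
# holdout error: a better in-sample fit is not evidence of a better model.
```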
  31. On Statistical Significance and Confidence
    I hate to admit this -- I'm very aware some will snort in derision -- but as a reasonably intelligent member of the public, I don't really understand this post and some of the comments that follow. My knowledge of trends in graphs is limited to roughly (visually) estimating the area contained below the trend line and that above the trend line; if they are equal over any particular period, then the slope of that line appears to me to be a correct interpretation of the trend. That's why, to me, the red line seems more accurate than the blue line on the graph above. And this brings me to the problem we're up against in explaining climate science to the general public: only a tiny percentage (and yes, it's probably no more than 1 or 2 percent of the population) will manage to wade through the jargon and the base knowledge that scientists presume their readers can follow. Some of the principles of climate science I've managed to work out by reading between the lines and googling -- turning my back immediately on anything that smacks merely of opinion and lacks links to the science. But it still leaves huge areas that I just have to take on trust, because I can't find anyone who can explain them in words I can understand. This probably should make me prime Monckton-fodder, except that even I can see that he and his ilk are politically motivated to twist the facts to suit their agenda. Unfortunately, the way real climate science is put across provides massive opportunities for the obfuscation that we so often complain about. Please don't take this personally, Alden; I'm sure you're doing your best to simplify -- it's just that even your simplest is not simple enough for those without the necessary background.
  32. Berényi Péter at 20:55 PM on 11 August 2010
    Temp record is unreliable
    #109 kdkd at 19:37 PM on 11 August, 2010 "Your approach still gives the appearance of cherry picking stations" You are kidding. I have cherry picked all Canadian stations north of the Arctic Circle that are reporting, that's what you mean? Should I include stations with no data or what? How would you take a random sample of the seven (7) stations in that region still reporting to GHCN every now and then?
    71081 HALL BEACH,N.   68.78  -81.25
    71090 CLYDE,N.W.T.    70.48  -68.52
    71917 EUREKA,N.W.T.   79.98  -85.93
    71924 RESOLUTE,N.W.   74.72  -94.98
    71925 CAMBRIDGE BAY   69.10 -105.12
    71938 COPPERMINE,N.   67.82 -115.13
    71957 INUVIK,N.W.T.   68.30 -133.48
    BTW, here is the easy way to cherry pick the Canadian Arctic. Hint: follow the red patch.
  33. Has Global Warming Stopped?
    In my comment #20 I showed that the data fit a flattening curve better than a straight line. This is true for the last 15 years, but also for the last 50 years. I also suggested a reason why a flattening curve could be more appropriate than a straight line: most processes in nature follow saturation patterns instead of continuing ad infinitum. Several comments criticized the polynomial function that I used. 'There is no physical basis for that!' could be the shortest and most friendly summary of these comments. Well, that's true! There is no physical basis for using a polynomial function to describe climatic processes, regardless of which order the function is: first (linear), second (quadratic) or higher. Such functions cannot be used for predictions, as Alden also states: we are only speaking about the trend 'to the present'. Alden did not use any physical argument in his trend analysis, and neither did I, apart from the suggestion about 'saturation.' A polynomial function of low order can be very convenient to reduce the noise and show a smoothed development. Nothing more than that. It has nothing to do with 'manipulating [as a] substitute for knowing what one is doing' (GeorgeSP, #61). A polynomial function should not be extrapolated. So much for the statistical arguments.
    Is there really no physical argument why global warming could slow down or stop? Yes, there are such arguments. As Akasofu has shown, the development of the global temperature after 1800 can be explained as a combination of the multi-decadal oscillation and a recovery from the Little Ice Age. See the following figure. The MDO has been discussed in several peer-reviewed papers, and they tend to the conclusion that we could expect a cooling phase of this oscillation in the coming decades. So the phrase 'global warming has stopped' could be true for the time being. The facts do not contradict this. What causes this recovery from the Little Ice Age, and how long will this recovery proceed? That could be a multi-century oscillation. When we look at Roy Spencer's '2000 years of global temperatures' we see an oscillation with a wavelength of about 1400 years: minima in 200 and 1600, a maximum in 800. The next maximum could be in 2200.
  34. On Statistical Significance and Confidence
    Another interesting way to look at it is to look at the actual slope of the line of best fit, which I get to be 0.01086. Now take the actual yearly temperatures and randomly assign them to years. Do this (say) a thousand times. Then fit a line to each of the shuffled data sets and look at what fraction of the time the shuffled data produces a slope of greater than 0.01086 (the slope the actual data produced). So for my first trial of 1000 I get 3.5% as the percentage of times random re-arrangement of the temperature data produces a greater slope than the actual data. The next trial of 1000 gives 3.5% again, and the next gave 4.9%. I don't know exactly how to phrase this as a statistical conclusion, but you get the idea. If the data were purely random with no trend, you'd be expecting ~50%.
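Here is a minimal sketch of the randomisation test described in this comment (and in Alden's explanation above). The temperature series below is a synthetic stand-in; plug in the actual 1995-2009 annual anomalies to reproduce figures like the ~3-5% quoted here.

```python
# Randomisation (permutation) test for the slope, as described in the comment.
# 'temps' is a synthetic stand-in; replace it with the real annual anomalies.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1995, 2010)
temps = 0.01 * (years - 1995) + rng.normal(0.0, 0.09, years.size)  # stand-in data

def ols_slope(x, y):
    """Slope of the least-squares line through (x, y)."""
    xc = x - x.mean()
    return float(np.sum(xc * (y - y.mean())) / np.sum(xc * xc))

observed = ols_slope(years, temps)

n_shuffles = 10_000
greater = sum(
    ols_slope(years, rng.permutation(temps)) > observed
    for _ in range(n_shuffles)
)
print(f"observed slope: {observed:+.4f}")
print(f"fraction of shuffles with a steeper slope: {greater / n_shuffles:.3f}")
# Per Alden's note above, a two-tailed version would instead count shuffles
# whose slope exceeds the observed slope in absolute value.
```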
  35. Temp record is unreliable
    BP #108 Your approach still gives the appearance of cherry picking stations. As I said previously, you need to make a random sample of stations to examine. Individual stations on a global grid are not informative, except as curiosities :)
  36. Berényi Péter at 18:59 PM on 11 August 2010
    Temp record is unreliable
    This one is related to the figure above. It's adjustments to GHCN raw data relative to the Environment Canada Arctic dataset (that is, difference between red and blue curves). Adjustment history is particularly interesting. It introduces an additional +0.15°C/decade trend after 1964, none before.
  37. gallopingcamel at 16:30 PM on 11 August 2010
    Why I care about climate change
    Some great posts! Here are a few comments: macoles (#123 & #124), The irony was unintended. For me the establishment/consensus is often wrong, whether it be based on religion or science. It is in my nature to question authority, whether it is based on church, ideology or science. muoncounter (#125), Like you, I care about the teaching of science in K-12 as well as at college level. In my state, there are 370 high schools but fewer than 40 teachers with physics degrees teaching science. The quality of science textbooks is critical when so few teachers have an adequate background in the subject. I hope you will want to support John Hubisz in his efforts to improve science textbooks: http://www.science-house.org/middleschool/ doug_bostrom (#126) In Newton's day they used to talk about "Laws", but modern physicists understand that these are, strictly speaking, always wrong, even though the theories often have great predictive power. The perihelion of Mercury does precess as Einstein predicted, GPS systems need relativistic corrections, and the energy released from nuclear reactions appears to follow the E=mc^2 relationship. In spite of all this success, Einstein understood the limitations of his theories better than the folks at Conservapedia. muoncounter (#127), Loved the cartoon (how did I miss it?). At least one more panel is needed for evolution vs. creationism.
  38. Models are unreliable
    Do any climate models have substantial agreement with the last century of precipitation data?
  39. CO2 was higher in the past
    Thanks Doug.
  40. Models are unreliable
    Fun! Schmidt and Knappenberger are found at Annan's blog, discussing M&M 2010. Minor celebrities! For extra credit in "Climate Science Arcana" coursework, follow the "old dark smear" links at the top of Annan's post. Those have a bit of useful background material on the M&M 2010 treatment of Santer 2008, to do with RPjr. If you have a clue what that's all about, you spend too much time on climate blogs.
  41. Grappling With Change: London and the River Thames
    Further to HR's remarks, I see that it's actually quite easy to find publications indicating some changes in storm behavior and frequency in the North Atlantic. I should not so easily conclude that I can't contribute a little further information here.
    - Increasing destructiveness of tropical cyclones over the past 30 years
    - A shift of the NAO and increasing storm track activity over Europe due to anthropogenic greenhouse gas forcing
    - Heightened tropical cyclone activity in the North Atlantic: natural variability or climate trend?
    - Trends in Northern Hemisphere Surface Cyclone Frequency and Intensity
    As the London folks noted, this information is in keeping with predictions. As they also noted, while no particular storm can be linked to climate forcing, it would not be prudent to ignore an emerging pattern of observed evidence of a predicted trend. HR, this exercise leads me to suggest you ask yourself, "Why did I talk about sea level change over the past 30 years when our topic is about sea level rise over the next 100+ years? Why am I trying so hard to ignore what's in front of me?"
  42. Don Gisselbeck at 11:58 AM on 11 August 2010
    More evidence than you can shake a hockey stick at
    The record low in Guinea is interesting. I was a Peace Corps volunteer in neighboring Sierra Leone in the late 70s. Several of my students were from Guinea and had seen ice form on open water during the Harmattan.
  43. CO2 was higher in the past
    Robert, I don't see anything unusual there. WUWT folks are angry because some poor scientist found out something boxing them in a little bit more.
  44. On Statistical Significance and Confidence
    Thanks for this, you have a great website. btw, did you check out the Bayes factor relative to the "null"?
  45. CO2 was higher in the past
    Watts has just posted a new article http://wattsupwiththat.com/2010/08/10/study-climate-460-mya-was-like-today-but-thought-to-have-co2-levels-20-times-as-high/ It refers to a new study in PNAS http://www.pnas.org/content/early/2010/08/02/1003220107.abstract?sid=08063fb7-c9e9-48d7-a515-b3db8907505c Hope you can comment on this soon.
  46. Grappling With Change: London and the River Thames
    I'm not the person to deal with your points, HR; I'm the wrong person to challenge. You're in disagreement with experts who have more knowledge of this topic than either of us. What I can surmise, based on what I've read about our processes of cognition, is that your disagreement with people who know more about operating the Thames Barrier than both of us do suggests you're unwilling to confront information that makes you uncomfortable. I can't think of any other explanation. By the way, you're by no means unique, or even at fault, for having a hard time dealing with risk. As far as researchers can tell so far, it's a universal trait of humans.
  47. Grappling With Change: London and the River Thames
    30. doug_bostrom at 11:45 AM on 5 August, 2010 Doug, it's a little weak to suggest the data I presented is just an aspect of a psychological problem I have. There is no significant trend for storminess in the North Sea that I can find published. The list of Thames Barrier closures shows a downward trend in the surge and high-water-level readings during closure events, suggesting the barrier is being closed for less severe events. Ocean levels have risen how much in the last 3 decades? 10 cm? There is no justification for suggesting the very large increase in barrier closures has anything to do with real changes in climate. Deal with the points rather than my mental state.
  48. Berényi Péter at 10:38 AM on 11 August 2010
    Temp record is unreliable
    #102 Ned at 06:50 AM on 11 August, 2010 "I thought it would be worth putting up a quick example to illustrate the necessity of using some kind of spatial weighting when analyzing spatially heterogeneous temperature data" OK, you have convinced me. This time I have chosen just the Canadian stations north of the Arctic Circle from both GHCN and the Environment Canada dataset. The divergence is still huge. Environment Canada shows no trend whatsoever during this 70-year period, just a cooling event centered on the early 1970s, while the GHCN raw dataset gets gradually warmer than that, by more than 0.5°C at the end, creating a trend this way. No amount of gridding can explain this fact away.
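As a minimal sketch of the kind of spatial weighting Ned was pointing to (the station values below are invented, not GHCN or Environment Canada data): average stations into grid cells first, then weight each cell by the cosine of its latitude, so a cluster of nearby stations counts roughly once and high-latitude cells don't get more than their share of area.

```python
# Illustrative gridding sketch with made-up station anomalies.
import numpy as np

# (latitude, longitude, temperature anomaly) - invented values
stations = [
    (68.8,  -81.2, 1.2),
    (68.9,  -81.0, 1.4),   # near-neighbour of the first; same grid cell
    (70.5,  -68.5, 1.0),
    (45.0,  -75.7, 0.3),   # a lone mid-latitude station
]

cell_size = 5.0  # degrees
cells = {}
for lat, lon, anom in stations:
    key = (int(np.floor(lat / cell_size)), int(np.floor(lon / cell_size)))
    cells.setdefault(key, []).append((lat, anom))

weighted_sum = weight_total = 0.0
for members in cells.values():
    lats, anoms = zip(*members)
    cell_mean = float(np.mean(anoms))                    # station mean within the cell
    weight = float(np.cos(np.radians(np.mean(lats))))    # area weight ~ cos(latitude)
    weighted_sum += weight * cell_mean
    weight_total += weight

naive = np.mean([s[2] for s in stations])
print(f"naive station mean:      {naive:.2f}")
print(f"gridded, area-weighted:  {weighted_sum / weight_total:.2f}")
# The two clustered Arctic stations count as one cell and the Arctic cells get
# smaller area weights, so the gridded figure sits well below the naive mean.
```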
  49. CO2 was higher in the past
    Here's an excellent writeup on main sequence stars, rcglinski.
  50. CO2 was higher in the past
    How is solar heat output determined for periods before direct measurement? I ask because the article says solar output was 4% lower during the Ordovician but I can't tell how the number was arrived at.
