






Still Going Down the Up Escalator

Posted on 3 February 2012 by dana1981

The Escalator, originally created as a simple debunking of the myth "Global warming stopped in [insert date]", turned out to be a very popular graphic.  Going Down the Up Escalator, Part 1 recently surpassed 20,000 pageviews, Part 2 has an additional 4,000+ views, and the graphic itself has been used countless times in other blogs and media articles.  Due to its popularity, we have added a link to The Escalator in the right margin of the page, and it also has its own short URL, sks.to/escalator.

The popularity of the graphic is probably due to the fact that (1) it's a simple, stand-alone debunking of the "global warming stopped" myth, and (2) that particular myth has become so popular amongst climate denialists.  As The Escalator clearly illustrates, it's easy to cherry pick convenient start and end points to obtain whatever short-term trend one desires, but the long-term human-caused global warming trend is quite clear underneath the short-term noise.

The original Escalator was based on the Berkeley Earth Surface Temperature (BEST) data, which incorporates more temperature station data than any other data set, but is limited to land-only data; additionally the record terminates in early 2010.  We originally created the graphic in response to the specific myth that the BEST data showed that global warming had stopped.

It is interesting to apply the same analysis to a current global (land-ocean) temperature record to determine whether short-term trends in the global data can be equally misleading. A global version of the Escalator graphic has therefore been prepared using the NOAA NCDC global (land and ocean combined) data through December 2011 (Figure 1).


Figure 1: Short-term cooling trends from Jan '70 to Nov '77, Nov '77 to Nov '86, Sep '87 to Nov '96, Mar '97 to Oct '02, and Oct '02 to Dec '11 (blue) vs. the 42-year warming trend (Jan '70 to Dec '11; red) using NOAA NCDC land-ocean data.
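For readers who want to see the cherry-picking effect for themselves, here is a minimal Python sketch of an "escalator" of its own. It uses synthetic data (a fixed warming trend plus random monthly noise), not the actual NCDC record, so all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a steady warming trend of 0.17 C/decade
# plus short-term noise, sampled monthly over 42 years (1970-2011).
months = np.arange(42 * 12)
trend_per_month = 0.17 / 120.0
temps = trend_per_month * months + rng.normal(0.0, 0.15, months.size)

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    x = x - x.mean()
    return (x * (y - y.mean())).sum() / (x * x).sum()

# The full-record trend recovers roughly the true warming rate...
full = ols_slope(months, temps) * 120.0  # C/decade

# ...while short cherry-picked windows can show almost anything,
# including "cooling", purely from the noise.
window = 8 * 12  # eight-year windows
short = [ols_slope(months[i:i + window], temps[i:i + window]) * 120.0
         for i in range(0, months.size - window, window)]
print(f"full-record trend: {full:.2f} C/decade")
print(f"8-yr window trends range: {min(short):.2f} to {max(short):.2f} C/decade")
```

The short windows scatter widely around the long-term trend, which is exactly the opening the "global warming stopped in [date]" myth exploits.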

The Predictable Attacks

On 31 January 2012, John Cook emailed me about several recent uses of The Escalator, including an inquiry from Andrew Dessler, requesting to use it in one of his lectures.  In the email, John suggested that the graphic had gained so much popularity, it would likely soon be the target of attacks from fake skeptics.

As if eavesdropping on our conversation, the first such attack on The Escalator came the very next day, on 01 February 2012.  The graphic had been published nearly 3 months earlier, and John predicted the fake skeptic response within a day's margin. 

The Escalator was recently used by a number of sources in response to the denialist plea for climate inaction published in the Wall Street Journal, including Media Matters, Climate Crocks, Huffington Post, and Phil Plait at Discover Magazine's Bad Astronomy.  Statistician William Briggs took issue with Phil Plait's use of the graphic.  Specifically, he criticized the lack of error bars on the data used in The Escalator, making some rather wild claims about the uncertainty in the data.

"...the models that gave these dots tried to predict what the global temperature was. When we do see error bars, researchers often make the mistake of showing us the uncertainty of the model parameters, about which we do not care, we cannot see, and are not verifiable. Since the models were supposed to predict temperature, show us the error of the predictions.

I’ve done this (on different but similar data) and I find that the parameter uncertainty is plus or minus a tenth of degree or less. But the prediction uncertainty is (in data like this) anywhere from 0.1 to 0.5 degrees, plus or minus."

As tamino has pointed out, calculating an area-weighted average global temperature can hardly be considered a "prediction" and as he and Greg Laden both pointed out, BEST has provided the uncertainty range for their data, and it is quite small (see it graphically here and here).  Plait has also responded to Briggs here.

The Escalating Global Warming Trend

Briggs takes his uncertainty inflation to the extreme, claiming that we can't even be certain the planet has warmed over the past 70 years.

"I don’t know what the prediction uncertainty is for Plait’s picture. Neither does he. I’d be willing to bet it’s large enough so that we can’t tell with certainty greater than 90% whether temperatures in the 1940s were cooler than in the 2000s."

It's difficult to ascertain what Briggs is talking about here.  We're not using the current trend to predict (hindcast) the global temperature in 1940.  We have temperature station measurements in 1940 to estimate the 1940 temperature, and data since then to estimate the warming trend.  Once again, we're producing estimates, not predictions here. 

Moreover, the further back in time we go and the more data we use, the smaller the uncertainty in the trend.  For example, see this post by tamino, which shows that the global warming trend since 1975 is roughly 0.17 +/- 0.04°C per decade in data from NASA GISS (Figure 2).  The shorter the timeframe, the larger the uncertainty in the trend.  This is why it's unwise to focus on short timeframes, as the fake skeptics do in their "global warming stopped in [date]" assertions.  As tamino's post linked above shows, when we limit ourselves to a decade's worth of data, the uncertainty in the trend grows to nearly +/- 0.2°C per decade (Figure 2).
GISS trend uncertainty

Figure 2: The estimated global temperature trends through July 2011 (black dots-and-lines), upper and lower limits of the 95% confidence interval (black dashed lines), and the estimated trend since 1975 (red dashed line) using GISS land and ocean temperature data (created by tamino)
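As a rough illustration of why short windows are so uncertain, here is a small Python sketch of the standard error of an OLS trend as a function of window length. It assumes uncorrelated monthly noise with an illustrative sigma of 0.1°C; real temperature data is autocorrelated, which widens these uncertainties further (and is why tamino's decade-scale figure is closer to ±0.2°C per decade):

```python
import numpy as np

# For monthly data with independent noise of standard deviation sigma,
# the standard error of an OLS slope over n points is
#   se = sigma / sqrt(sum((t - mean(t))**2))
# which shrinks roughly as n**-1.5, so short windows have large
# trend uncertainty.  sigma = 0.1 C is an illustrative value only.
sigma = 0.1

def trend_se_per_decade(n_months, sigma):
    t = np.arange(n_months, dtype=float)
    se_per_month = sigma / np.sqrt(((t - t.mean()) ** 2).sum())
    return se_per_month * 120.0  # convert C/month to C/decade

for years in (10, 20, 37):
    se = trend_se_per_decade(years * 12, sigma)
    print(f"{years:2d} yr window: +/- {2 * se:.3f} C/decade (95%)")
```

Even under this generous white-noise assumption, a 10-year trend is several times more uncertain than a 37-year one.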

Foster and Rahmstorf (2011) also showed that when the influences of solar and volcanic activity and the El Niño Southern Oscillation are removed from the temperature data, the warming trend in the NCDC data shown in the updated Escalator is 0.175 +/- 0.012°C per decade.  Quite simply, contrary to Briggs' claims, the warming trend is much larger than the uncertainty in the data.  In fact, when applying the Foster and Rahmstorf methodology, the global warming trend in each of the major data sets is statistically significant since 2000, let alone 1940.
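The Foster and Rahmstorf approach is essentially a multiple regression: temperature against time plus the exogenous factors, with the coefficient on time read off as the adjusted trend. A toy Python sketch with entirely synthetic stand-ins for the ENSO and solar series (not the real indices, lags, or coefficients) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32 * 12  # monthly data, roughly three decades

# Synthetic stand-ins for the exogenous influences (not real indices):
t = np.arange(n) / 120.0                        # time in decades
enso = rng.normal(0, 1, n)                      # ENSO-like index
solar = np.sin(2 * np.pi * np.arange(n) / 132)  # ~11-yr cycle
true_trend = 0.17                               # C/decade, illustrative
temp = true_trend * t + 0.08 * enso + 0.05 * solar + rng.normal(0, 0.08, n)

# Multiple regression: temperature against intercept, time, covariates.
# The coefficient on t is the trend with the covariate influences removed.
X = np.column_stack([np.ones(n), t, enso, solar])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"adjusted trend: {beta[1]:.3f} C/decade")
```

Removing the covariates reduces the residual noise, which is why the adjusted trend comes with a much tighter uncertainty than the raw one.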

Ultimately Briggs completely misses the point of The Escalator.

"...just as the WSJ‘s scientists claim, we can’t say with any certainty that the temperatures have been increasing this past decade."

This is a strawman argument.  The claim was not that we can say with certainty that surface temperatures have increased over the past decade (although global heat content has).  The point is that focusing on temperatures over the past decade (as the fake skeptics constantly do) is pointless to begin with, and that we should be examining longer, statistically significant trends.

Briggs' post was of course hailed by the usual climate denial enablers (i.e. here and here), despite the rather obvious inflation of the data uncertainty, and utter lack of support for that inflation.  Despite the fake skeptic struggles to go the wrong way down, The Escalator unfortunately continues ever-upward.


Comments


Comments 101 to 131 out of 131:

  1. "There's no real truth, as even picking the entire global temperature record is unreliable, since the reliability of weather stations can be disputed going back in time." If that were the case, then picking different subsets of GHCN temperature stations would produce significantly different global-average temperature results. Anyone who spends a serious amount of time "slicing and dicing" the global temperature data will find otherwise. I have processed the GHCN data in umpteen different ways and have gotten consistent warming results every time. If there were serious disruptions to temperature station data that actually impacted global-average temperature results, those disruptions would have had to occur simultaneously to data from most or all of the stations. Otherwise, processing of various random station subsets would reveal the presence of such disruptions.
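The "slicing and dicing" check described above can be sketched in a few lines of Python. The station data here is synthetic (a shared warming signal plus independent station noise), so the numbers are illustrative only, but the point survives: random subsets recover essentially the same trend:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for GHCN: 500 stations, 40 years of annual
# anomalies sharing a common 0.17 C/decade signal plus independent
# station-level noise (real stations also share regional noise).
n_stations, n_years = 500, 40
years = np.arange(n_years)
common = 0.017 * years + rng.normal(0, 0.1, n_years)   # shared signal
station_noise = rng.normal(0, 0.5, (n_stations, n_years))
stations = common + station_noise

def subset_trend(k):
    """Trend (C/decade) of the mean of k randomly chosen stations."""
    idx = rng.choice(n_stations, size=k, replace=False)
    mean = stations[idx].mean(axis=0)
    return np.polyfit(years, mean, 1)[0] * 10.0

trends = [subset_trend(50) for _ in range(20)]
print(f"20 random 50-station subsets: {min(trends):.3f} to {max(trends):.3f} C/decade")
```

Only a disruption hitting most or all stations at once could move the subset averages together, which is the commenter's point.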
  2. ... His is just one of several amateur reconstructions of the temperature indices using a limited number of spatially well spread data points. Using just 45 stations may set a record... OK, I know that I'm continuing to stray a bit off topic with my post here, but the "45 stations" bit isn't the best of it. It turns out that although I chose a total of 45 stations to process, in most years far fewer stations actually reported data. Below is a "diagnostic dump" from my program where I counted up the number of stations that reported in any given year. Actually, I counted up the total *months* that a station reported in each year and divided by 12, so a station that reported data for 6 months in a given year was counted as "half a station". So here is the dump, showing the number of "station equivalents" that reported for each year: Year=1883 #Stations=12.4167 Year=1884 #Stations=12.3333 Year=1885 #Stations=11.6667 Year=1886 #Stations=12.8333 Year=1887 #Stations=13.0833 Year=1888 #Stations=13.9166 Year=1889 #Stations=14.0833 Year=1890 #Stations=14.8333 Year=1891 #Stations=15.25 Year=1892 #Stations=15.8333 Year=1893 #Stations=16.6666 Year=1894 #Stations=18.8333 Year=1895 #Stations=18.75 Year=1896 #Stations=19 Year=1897 #Stations=21.4167 Year=1898 #Stations=21.0833 Year=1899 #Stations=22.5 Year=1900 #Stations=23.5 Year=1901 #Stations=23.4167 Year=1902 #Stations=23.3334 Year=1903 #Stations=24.6667 Year=1904 #Stations=25.75 Year=1905 #Stations=27.5834 Year=1906 #Stations=27.6667 Year=1907 #Stations=28.5834 Year=1908 #Stations=27.7501 Year=1909 #Stations=27.7501 Year=1910 #Stations=27.8334 Year=1911 #Stations=29.0834 Year=1912 #Stations=28.7501 Year=1913 #Stations=28.7501 Year=1914 #Stations=28.7501 Year=1915 #Stations=28.1667 Year=1916 #Stations=29.7501 Year=1917 #Stations=29.8334 Year=1918 #Stations=29.4167 Year=1919 #Stations=28.7501 Year=1920 #Stations=28.6667 Year=1921 #Stations=29.2501 Year=1922 #Stations=29.5001 Year=1923 #Stations=29.5001 Year=1924 
#Stations=28.7501 Year=1925 #Stations=29.0834 Year=1926 #Stations=29.6667 Year=1927 #Stations=29.9167 Year=1928 #Stations=31.0001 Year=1929 #Stations=31.0001 Year=1930 #Stations=30.9168 Year=1931 #Stations=30.8334 Year=1932 #Stations=30.5834 Year=1933 #Stations=30.5001 Year=1934 #Stations=30.3334 Year=1935 #Stations=30.6668 Year=1936 #Stations=30.9168 Year=1937 #Stations=31.9168 Year=1938 #Stations=31.8334 Year=1939 #Stations=33.8334 Year=1940 #Stations=32.1668 Year=1941 #Stations=31.4168 Year=1942 #Stations=31.4168 Year=1943 #Stations=32.1668 Year=1944 #Stations=32.0834 Year=1945 #Stations=32.0834 Year=1946 #Stations=35.75 Year=1947 #Stations=36 Year=1948 #Stations=36.0834 Year=1949 #Stations=37.9167 Year=1950 #Stations=39.1667 Year=1951 #Stations=40.1666 Year=1952 #Stations=41.0833 Year=1953 #Stations=40.6666 Year=1954 #Stations=40.6666 Year=1955 #Stations=41.5 Year=1956 #Stations=42.4166 Year=1957 #Stations=42.8333 Year=1958 #Stations=42.9166 Year=1959 #Stations=43.5833 Year=1960 #Stations=43.5833 Year=1961 #Stations=43.9166 Year=1962 #Stations=43.9999 Year=1963 #Stations=43.9999 Year=1964 #Stations=42.6666 Year=1965 #Stations=41.5833 Year=1966 #Stations=41.6666 Year=1967 #Stations=41.6666 Year=1968 #Stations=42.2499 Year=1969 #Stations=43.9166 Year=1970 #Stations=43.9166 Year=1971 #Stations=43.8332 Year=1972 #Stations=43.7499 Year=1973 #Stations=43.8332 Year=1974 #Stations=43.3333 Year=1975 #Stations=42.0833 Year=1976 #Stations=41.7499 Year=1977 #Stations=42.3333 Year=1978 #Stations=42.7499 Year=1979 #Stations=42.4166 Year=1980 #Stations=42.8333 Year=1981 #Stations=40.0833 Year=1982 #Stations=40.1666 Year=1983 #Stations=39.25 Year=1984 #Stations=39.1667 Year=1985 #Stations=38.9167 Year=1986 #Stations=38.25 Year=1987 #Stations=36.75 Year=1988 #Stations=33.9167 Year=1989 #Stations=36.25 Year=1990 #Stations=32.6668 Year=1991 #Stations=28.5834 Year=1992 #Stations=25.1667 Year=1993 #Stations=23.8334 Year=1994 #Stations=23.0834 Year=1995 #Stations=21.25 Year=1996 
#Stations=24 Year=1997 #Stations=26.6667 Year=1998 #Stations=26.5834 Year=1999 #Stations=27.2501 Year=2000 #Stations=26.0834 Year=2001 #Stations=27.7501 Year=2002 #Stations=28.7501 Year=2003 #Stations=30.0001 Year=2004 #Stations=29.0834 Year=2005 #Stations=27.4167 Year=2006 #Stations=28.7501 Year=2007 #Stations=29.6667 Year=2008 #Stations=27.3334 Year=2009 #Stations=34.1667 Year=2010 #Stations=33.4167 As you can see, most of the time, I didn't have anywhere near 45 reporting stations. This further demonstrates the robustness of the global-average temperature results. Just saying that I used 45 stations understates this very important point. Shout it from the rooftops, folks -- this really needs to be pounded home in *any* argument about the quality of the global-temperature data. When folks argue that, because we have only X thousand stations for 2011 when we have data for Y thousand stations for 1990, there is a problem with the current global temperature estimates, you know that they haven't taken a serious look at the data!
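For anyone curious, the "station equivalents" bookkeeping in the dump above is just reported months divided by 12. A minimal sketch, assuming a hypothetical list of (station, year, months-reported) records rather than the commenter's actual program:

```python
from collections import defaultdict

# Hypothetical records: (station_id, year, months_reported).
records = [
    ("st_a", 1883, 12),
    ("st_b", 1883, 6),   # half a station for 1883
    ("st_a", 1884, 11),
    ("st_c", 1884, 12),
]

# "Station equivalents" per year = total reported months / 12,
# so a station reporting 6 months counts as half a station.
equivalents = defaultdict(float)
for _station, year, months in records:
    equivalents[year] += months / 12.0

for year in sorted(equivalents):
    print(f"Year={year} #Stations={equivalents[year]:.4f}")
```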
  3. As a general announcement, a known spammer (jdey123/jdey/cdey/mace) has perpetrated fraud by masquerading as scientist Judith Curry on this thread. His comments and those replies to him were deleted from the thread. Skeptical Science apologizes to Doctor Curry for this travesty.
  4. I have to say, his comments captured the zeitgeist of her blog and public statements perfectly. Had me going.
  5. I assumed the poster wasn't Dr. Curry - it was fun to pretend though. I think travesty overstates the case by a fair bit.
  6. Stephen Baines @74 "To imply it is a common mistake among those doing analyses of temperature patterns is downright puzzling." Is it? You make this very mistake in your comment @81: "If we have confidence that the slope parameter is positive, we can say confidently that temperature has increased." But it's not just you. It is incredibly common when looking for a change in some dataset, not just in climate science, to plot a linear regression and calculate a confidence interval with the hope of finding a small p-value so statistical significance can be claimed. This method, however, doesn't do what many think it does - all it does is reject some (often silly) null hypothesis for some unobservable, unverifiable parameter like "slope" under the assumption that your straight line model is correct. And because you're uninterested (at least explicitly) in seeing if your model can forecast skillfully, we don't know if it is even very good. Briggs argues that most people aren't actually interested in the uncertainty in unobservable model parameters, but in the uncertainty in the unobserved observables - the temperature at places where averaging techniques (statistical models) attempt to predict, and where we don't have measurements (though theoretically we could). That's how I interpret what he's saying, anyway.
  7. RobertS#106: "some unobservable, unverifiable parameter like "slope" under the assumption that your straight line model is correct." In this case there are physical models and they predict a slope that is verified by the observables. -- source So Briggs' argument hardly applies here. But 'unobserved observables'? Are they like known unknowns?
  8. RobertS @62, in that case his argument is nothing more than bait and switch. If he wanted to show poor interpretation of the (classical statistics) prediction interval of the regression, the proper Bayesian comparison was the credible interval of the regression as calculated by Dikran Marsupial. Remember that his conclusion was that "Users of classical statistical methods are too sure of themselves." He did not conclude that classical statistical results are sometimes misinterpreted. That is because he intended an assault on classical statistical methods per se, rather than occasional particular misinterpretations. What is more, if he is not simply misinterpreting the confidence interval of the regression for the confidence interval of the data, then his comparison is bizarre. The "confidence interval" of the regression using Bayesian methods would have been similarly narrow; and the confidence interval of the data using classical methods would have been almost as wide as that which he calculated using Bayesian methods, and indeed would have included 95% of the data used to calculate it. What is worse, if this is the basis on which he asserts classical confidence limits are too narrow, he has no basis for that assertion. That goes directly to the issue of the main post here.
  9. First the Hockeystick, now a very robust Escalator - enough to send the WUWT commentariat into a paroxysm of doubt, if not anguish. Well done Dana!
  10. Cherry-pickers of course hate "The Escalator." It strips their ploy naked, exposing its fallacy! (phallacy?) :)
  11. BTW, SkSer John Russell had a guest post today at ClimateBites titled 'Escalator' Critics Miss the Point. John notes that the "skeptics'" quibbling about the data simply misses the main point of the graphic, which is to expose the deceptive technique known as "cherry picking." And it achieves that goal admirably.
  12. Tamino (Grant Foster) has an excellent new post explaining how you can make a reasonable estimate of the uncertainty of a trend without knowing the uncertainty of the data, i.e., why Briggs is wrong. Interesting read.
  13. In case anyone is interested, I tried recomputing the credible interval on the regression taking into account the stated uncertainties in the estimates of GMST provided by BEST (I didn't use the last two values due to the lack of coverage of the stations used). Here again is the Bayesian regression analysis using the estimates themselves: Here is the Bayesian regression analysis taking into account the uncertainty in the BEST estimates. As I suspected, the 90% credible interval is a little wider, but the difference is barely detectable, which suggests Briggs' criticism of the handling of uncertainty is not much of a cause for concern. The expected trend for the BEST estimates is 0.0257 (95% credible interval 0.0048 to 0.0464) and when the uncertainties in the estimates are accounted for it becomes 0.0255 (95% credible interval 0.0026 to 0.0486). Note that the credible interval does not include zero or negative values whether the uncertainty is accounted for or not. Technical note: The Bayesian regression is performed by sampling from the posterior distribution of regression parameters (including the variance of the noise process). To incorporate the uncertainty in the estimates, I just sampled from the distribution of the responses assuming that the BEST estimate is the mean and the stated uncertainty is the standard deviation (RATS, reading the documentation it is the 95% confidence interval, so it is actually twice the standard deviation, so my analysis overstates the uncertainty by a factor of two - and it still doesn't make much difference!).
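The trick of propagating the stated measurement uncertainties can be approximated with a simple Monte Carlo. This is not Dikran's actual sampler, but it captures the idea: perturb each estimate by its stated uncertainty, refit, and look at the spread of slopes. All numbers below are illustrative, not the BEST data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy annual series with stated one-sigma measurement uncertainties
# (all values illustrative, not the BEST data).
years = np.arange(1975, 2010, dtype=float)
true = 0.017 * (years - years[0])
est = true + rng.normal(0, 0.05, years.size)     # central estimates
sigma = np.full(years.size, 0.05)                # stated uncertainties

def slope(y):
    return np.polyfit(years, y, 1)[0] * 10.0     # C/decade

# Monte Carlo: perturb each point by its stated uncertainty and refit,
# giving a distribution of slopes that folds in measurement error.
base = slope(est)
perturbed = [slope(est + rng.normal(0, sigma)) for _ in range(2000)]
lo, hi = np.percentile(perturbed, [2.5, 97.5])
print(f"central slope {base:.3f}, with measurement noise: {lo:.3f} to {hi:.3f} C/decade")
```

As in the comment above, folding in per-point measurement uncertainty widens the slope interval only slightly, and the interval stays well clear of zero.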
  14. Interesting! BTW DM, is your identity 'outed', or was it something you never bothered to hide?
    0 0
    Response:

    [DB] As a general note, when an individual uses a "nom de plume" rather than their actual name, that decision is to be respected.  Period.  This is regardless of whether or not other individuals behave less respectfully and then "out" that individual.  I'm sure that you'd agree that sharing of personal, privileged data without the express consent of the source of that information is wrong.

    Please treat it no differently than the acquisition of stolen (intellectual) property.

  15. Personally, I don't see what all the fuss is about. There are obviously periods of cooling trends (depending on the date range chosen), but overall the trend is rising (since the start date chosen). Skeptic or believer, it's as plain as the nose on your face.
  16. piratelooksat50 From a statistical and scientific perspective there is no fuss, it is all really quite straightforward. The fuss arises when those who don't understand the science and/or statistics argue that climate change has stopped [or other such claims] on the basis of the lack of statistically significant warming over a timescale too short to expect a statistically significant trend whether the climate was warming at the expected rate or not. It is very much the purpose of SkS to point out such canards, and explain why they are specious. The escalator diagram is a very good example of this.
  17. DB Absolutely, that's what I thought and why I checked. I wanted to alert him to a connection to his moniker in case he didn't know about it.
  18. If Briggs is arguing that (as Robert S said)
    Briggs argues that most people aren't actually interested in the uncertainty in unobservable model parameters, but the uncertainty in the unobserved observables - the temperature at places where averaging techniques (statistical models) attempt to predict, and where we don't have measurements (though theoretically we could).
    Then he simply is ignorant (and Eli uses that word advisedly) of the research underpinning all global climate records. Remember that the prequel to GISSTemp was a study by Hansen and Lebedeff showing that there was significant correlation in temperature trends out to 1000 km. That conclusion has been confirmed by a large number of more recent publications and never falsified. So indeed, the global surface temperature anomaly records DO provide significant, statistically useful information about the anomalies at places where there are no thermometers and about the variability in those records. (you can, of course, verify this further by holding out stations from the calculation and then comparing, in essence BEST does this as a lumped study with the 36K stations)
  19. Eli takes no position on whether the temperature is rising in jerks or ramps, but even if you hold for jerks, why are all of the jerks positive? And jerks are bigger trouble because the damage comes all at once.
  20. Hi Dikran Marsupial -- does your Bayesian analysis correct for autocorrelation? What you've done is really cool, and is something I should learn how to do! But I wonder if autocorrelation in the series would widen your credible interval.
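On the autocorrelation question above: a common first-order correction inflates the trend's standard error by sqrt((1+r)/(1-r)), where r is the lag-1 autocorrelation of the residuals. A sketch on synthetic AR(1) data (illustrative parameters only, and only a first-order correction; fuller treatments exist):

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) noise with lag-1 autocorrelation rho around a fixed trend.
n, rho = 480, 0.6
noise = np.empty(n)
noise[0] = rng.normal(0, 0.1)
for i in range(1, n):
    noise[i] = rho * noise[i - 1] + rng.normal(0, 0.1)
t = np.arange(n, dtype=float)
y = 0.0015 * t + noise

# The naive OLS standard error treats every month as independent;
# a common AR(1) correction inflates it by sqrt((1 + r1) / (1 - r1)),
# where r1 is the lag-1 autocorrelation of the residuals.
resid = y - np.polyval(np.polyfit(t, y, 1), t)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sxx = ((t - t.mean()) ** 2).sum()
se_naive = resid.std(ddof=2) / np.sqrt(sxx)
se_corrected = se_naive * np.sqrt((1 + r1) / (1 - r1))
print(f"naive se {se_naive:.5f}, AR(1)-corrected se {se_corrected:.5f}")
```

With autocorrelation this strong, the corrected uncertainty is roughly double the naive one, so a credible interval that ignored it would indeed be too narrow.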
  21. Muoncounter @107 "In this case there are physical models and they predict a slope that is verified by the observables." When I say "slope", I mean the b quantity in a simple linear regression model y=a+b*x+e. If you want to argue that climate models skillfully predict actual temperature - something your source doesn't attempt to show - or that the slope in a linear regression of temperatures is insignificantly different from that of climate models, that is one thing (exactly what the latter means I can't say for certain). Slope in a linear regression of either temperatures or GCM output, however, is still an unobservable parameter - it's not a measurable, identifiable feature of the data itself, but of your particular model - and cannot be verified. Unobserved observables are quantities which can be measured, but haven't been. So that could be the temperature measured at a particular station some time in the future, or simply a place on the Earth where we haven't sampled. Tom Curtis @108 I agree that a "classical predictive interval" would be similarly wide as a Bayesian interval, and Dikran has indeed shown the credible interval of the regression to be comparable to the classical interval, but I believe Briggs' overall point is that the frequentist interpretation of confidence intervals is not intuitive, which begets confusion. And that confidence/credible intervals of observables are preferable to confidence intervals of model parameters. Briggs is part-way through a new series of posts about time series analysis, model selection, and how to treat uncertainty. Maybe it will help clear up some confusion about his position.
  22. RobertS - "Slope in a linear regression of either temperatures or GCM output, however, is still an unobservable parameter" What, in the vast world, are you talking about? That's complete nonsense. Slopes are a completely observable quantifiable (including uncertainties) value (see Tamino on this very topic). You're sounding as bad as Rumsfeld with his "unknown unknowns"... Spatial correlations of temperatures ("a place on the Earth where we haven't sampled") are extremely well established (Hansen et al 1987), and Briggs is simply arguing semantics, not numbers. You have most certainly not presented evidence to the contrary. I look forward to Briggs' further posts. Although, based upon what I've read so far, I don't expect anything beyond a confirmation of his biases, poorly supported excuses, and misinterpretations...
  23. In case you haven't seen, Briggs had previously attempted to quantify the credible interval on the observables for the BEST record here, with the result being greatly increased uncertainty in temperature estimates.
  24. KR @122 Model parameters like slope are, by definition, unobservable. That is, they cannot be measured, observed, detected, identified, and thus, verified in the real world.
  25. RobertS @121:
    "I believe Briggs' overall point is that the frequentist interpretation of confidence intervals is not intuitive, which begets confusion."
    I don't know if that is Briggs main point, but if it is, well yes (obviously), but they do not introduce anywhere near the confusion Briggs has with his comments. And speaking of which:
    "And that confidence/credible intervals of observables are preferable to confidence intervals of model parameters."
    The distinction being made here is arbitrary, and without any justification in epistemology. As Briggs (and you) are using the distinction, the temperature at a specific time and location is an observable, but the GMST (Briggs) and the linear regression of the GMST over a period (you) are not. However, respectively, the GMST is determinable by an (in principle) simple calculation. It is rendered difficult not by any fundamental issue, but by limitations in the available observational data set. And once you have a time series of the GMST, determining the linear trend is an even simpler calculation with no in principle difficulties.

    Your distinction appears to be, therefore, a distinction between data obtained by "direct" observation, and data derived by mathematical manipulation of data obtained by direct observation. But as has been noted previously, there are no direct observations of temperature. Rather, we directly observe the length of a column of mercury or alcohol. Or we directly observe the degree of bending of a bi-metal strip. Or we directly observe the current in a circuit (by observing the position of a needle). Converting any of these "direct" observations into temperature data involves calculations just as much as determining the linear regression of a time series.

    At the most fundamental level, all that is actually observed (visually) is the progression of patterns in colours on a two dimensional field. If you are going to make a distinction between observing temperatures, and observing slopes, there is no in principle distinction that will keep you from limiting "direct observation" to simple descriptions of that visual field (and good luck developing any physics on that basis). In practice we do not make the distinction between what we observe, and what we can know from what we observe, except pragmatically (and, because pragmatic, based on the needs of particular situations). Briggs appears not to recognize that, and wishes to reify a pragmatic distinction. To which the only appropriate response is, more fool him.
  26. RobertS @124, tell me the last time you saw a temperature. Indeed, we cannot even detect temperatures with our sense of touch. What we detect is the rate of heat transfer through the skin, and that in non-quantifiable terms.
  27. RobertS - Fair enough, I have perhaps not been sufficiently clear on the terminology. However: Are you asserting that a trend line cannot be determined (as a statistical evaluation, within stated and computed limits of uncertainty) from the data? I ask because that is the apparent direction of your recent comments. And if this is not what you are asserting - then what is your issue with such statistical analyses? Quite frankly, I'm finding difficult to ascertain your point...
  28. Tom Curtis @125,126 You're being silly now. Practically, and perhaps physically, we could never measure temperature perfectly, so we can never truly observe "temperature." You're right. From a statistics standpoint, however, the distinction is different. Let's say we have these devices which we'll call "thermometers" that measure, for the sake of simplicity, some quantity called "temperature" (though we both agreed that they don't actually measure temperature). Say we want to measure the temperature of the entire planet. It would be simply unfeasible - or even impossible in practice - to measure every single point on the entire planet, so we place a few of these devices at choice points around the planet, and with the magic of statistics, from these measurements we construct an "average" and an "uncertainty" using some method or other. This "average temperature" isn't an actual temperature which we've measured, and neither is the uncertainty; they arise from the method in which we combined our sample. Is our method the true and correct method? Probably not, but we can't say that with absolute certainty. Whether it's a reasonable method is another question. Say we then compute these average temperatures in regular time intervals to find an "average monthly temperature", and we want to see what these average monthly temperatures are doing over some specified time period. So we look up some kind of statistical model, compute it for our average monthly temperatures, and out pops some parameter of that model which we'll call "slope". Is slope a feature of the data itself, or of the way in which we manipulated the data to create our model? Is our model the true and correct model? Probably not, but we can't say that with absolute certainty.

    KR @127 "However: Are you asserting that a trend line cannot be determined (as a statistical evaluation, within stated and computed limits of uncertainty) from the data?" No, of course a trend line can be determined from the data. I might question the value or interpretation of such a metric, but not that one can be calculated. For what it's worth, it's clear that Tamino knows his stats, and he has that rare quality of being able to explain esoteric statistical methods easily to laymen, but his latest post again misses Briggs' point.
    0 0
  29. Robert S is simply Essex and McKitrick dressed in fancy statistical pants. http://www.realclimate.org/wiki/index.php?title=C._Essex_and_R._McKitrick Been there, done that
    0 0
    Moderator Response: [Dikran Marsupial] Link activated
  30. Tom @125 "I don't know if that is Briggs main point..." It's probably not his main point - I misspoke. He's primarily arguing for the use of predictive statistics, which is not standard in most fields. And because frequentist interpretations are counterintuitive and often unrealistic, he prefers Bayesian predictive techniques. Eli @129, I don't have a problem with a global mean surface temperature. The issue comes with how uncertainty in this value is calculated and viewed, and how a change in GMST is determined.
    0 0
  31. RobertS, the data of the temperature series is a set of pairs of numbers. The linear regression of such a set has a unique solution. Therefore it is a property of that set of numbers. So, to answer the question, it is a property of the data set, not of our mathematical manipulation. We could employ the same form of verbal tricks you do in making your case with regard to measurements. Consider a simple mercury thermometer placed in a pot of water. The length of the mercury column in the evacuated tube depends critically on the diameter of that tube. Does that make the temperature a property of the evacuated tube, or of the manipulation of glass in creating the tube? By your logic we must conclude it is a property of the glass blower's manipulation. Perhaps that is too simple for you. Suppose instead of a mercury thermometer we measure temperature with an IR thermometer. The IR thermometer records the intensity of IR radiation across a range of frequencies. Using the laws of black body radiation, a computer chip then calculates the temperature of the body emitting the radiation. So, is the temperature a property of the pot of water, or of the mathematical manipulation that derived the temperature from the IR radiation? For consistency, you need to say the latter. But then you are committed to the claim that Planck's law has a temperature of x degrees, where x is the result of the measurement. Going back to your example, the formula for the linear regression of a time series does not have a slope of 0.175°C/decade (+/- 0.012°C/decade). Neither does the computer, or the pages of paper on which the calculation was performed. That slope is a property of the NCDC temperature data. In other words, it is simply incoherent to say the linear regression is a property of the mathematical manipulation rather than the data. It is absurd on the same level as saying "The green dreams slept furiously together".
And the reason you are generating such incoherent notions is that you are trying to reify a purely pragmatic distinction. The question you need to be asking is not whether the linear regression is a property of the data set or of the mathematical manipulation (trivially, it is a property of the data set). What you need to ask is whether it is a useful property to know. And, as with any question of usefulness, that depends critically on the use intended.
    0 0
  32. Steve L, the Bayesian analysis doesn't take autocorrelation into account (unlike Tamino, I am no expert in time-series analysis - yet ;o). However, the main aim was to determine whether the uncertainty of the estimates of GMST had much of an effect on the width of the credible interval. It doesn't, which suggests that Dr Briggs is making a bit of a mountain out of a molehill (if not a worm cast), in my opinion. BTW, if you are interested in Bayesian statistics then a good place to start is Jim Albert's book "Bayesian Computation with R", which has a package for the R programming environment called "LearnBayes" that implements Bayesian linear regression (which I used to generate the plots).
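For readers without R, the gist of this check can be sketched with a crude Monte Carlo in Python: perturb each monthly value by a stated measurement uncertainty, refit the slope, and see how little the spread of slope estimates changes. This is a toy stand-in for the LearnBayes analysis, not a reproduction of it; all numbers are invented.

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

xs = list(range(120))                       # 120 "months"
trend = [0.0017 * x for x in xs]            # underlying warming signal
noise = [random.gauss(0, 0.1) for _ in xs]  # unforced variability
ys = [t + e for t, e in zip(trend, noise)]

meas_sigma = 0.01  # stated per-month measurement uncertainty (small)

# Refit the slope many times with fresh measurement-error perturbations
slopes = []
for _ in range(2000):
    perturbed = [y + random.gauss(0, meas_sigma) for y in ys]
    slopes.append(ols_slope(xs, perturbed))

spread = max(slopes) - min(slopes)
print(spread)  # tiny: the 0.1 residual noise dominates the slope uncertainty
```

Because the measurement uncertainty is small relative to the month-to-month variability, folding it in barely moves the slope - which is the mountain-out-of-a-molehill point.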
    0 0
  33. RobertS (various posts): "Slope in a linear regression of either temperatures or GCM output, however, is still an unobservable parameter - it's not a measurable, identifiable feature of the data itself, but of your particular model - and cannot be verified." This is not true. As others have pointed out, the data has a unique linear least-squares fit, just as the difference between the temperature at the start point and end point is uniquely defined by the data (as I pointed out on Dr Briggs' blog). You can view linear regression as a generative model of the data, but it also has a perfectly reasonable interpretation as a descriptive statistic. "but I believe Briggs overall point is that the frequentist interpretation of confidence intervals is not intuitive, which begets confusion." This is true, confidence intervals are counter-intuitive; however, if that is Briggs' overall point he is making it rather indirectly, to say the least! "And that confidence/credible intervals of observables is preferable to confidence intervals of model parameters." This is nonsense; which is preferable depends on the purpose of the analysis. "I don't have a problem with a global mean surface temperature. The issue comes with how uncertainty in this value is calculated and viewed, and how a change in GMST is determined." Are your concerns answered, then, by my analysis, which shows that using Bayesian methods the uncertainty in the estimates of GMST has almost no effect on either the expected regression or the credible interval? You might want to ask yourself why Briggs hasn't already performed this analysis before making a fuss about it on his blog. BTW, not all statistics should be "predictive"; they should be chosen to suit the purpose of the analysis. If you are aiming to predict something, then obviously predictive statistics are likely to be most appropriate.
If the purpose is to describe the data, then descriptive statistics are appropriate; if you are exploring the data, then exploratory statistics. Ideally a statistician should have all of these tools in his/her toolbox and be able to choose the appropriate tool for the job at hand.
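The parameter-versus-observable distinction being argued here can be made concrete with the standard regression formulas: at the same x, the interval for a new observation is always wider than the interval for the fitted mean, because it adds the noise variance back in. A toy sketch with invented data:

```python
import math
import random

random.seed(1)

# Invented series: linear trend plus noise
xs = list(range(30))
ys = [0.1 * x + random.gauss(0, 0.5) for x in xs]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
a = my - b * mx
s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)

x0 = 15.0
leverage = 1 / n + (x0 - mx) ** 2 / sxx
se_mean = math.sqrt(s2 * leverage)        # uncertainty of the fitted line at x0
se_pred = math.sqrt(s2 * (1 + leverage))  # uncertainty of a new observation at x0

print(se_mean, se_pred)  # se_pred is always the larger of the two
```

Which of the two is "preferable" is exactly the point at issue: it depends on whether the question concerns the underlying trend or a future observation.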
    0 0
  34. After this, I'll let you guys have the last word - I've spent far too much of my precious little free time here as it is. Tom Curtis @131: "Going back to your example, the formula for the linear regression of a time series does not have a slope of 0.175 C/decade (+/- 0.012°C/decade). Neither does the computer, or the pages of paper on which the calculation was performed. That slope is the property of the NCDC temperature data. In other words, it is simply incoherent to say the linear regression is a property of the mathematical manipulation rather than the data." This is a backwards interpretation of what I've attempted to argue, but perhaps I could have been clearer. The data we are attempting to model isn't a simple straight line, else we wouldn't attempt to model it with one - we could just look at it and calculate the slope and intercept. But we do plot some straight line onto this data, and attempt to estimate the parameters of that model based on the data. In a linear model y = a + bx + e, only x and y are variables which we "observe" (even if we "observe" them with error which must be treated probabilistically; i.e. a model). The parameters a, b, and e must be "estimated", with their value and uncertainty dependent on your chosen method of estimation (OLS, MLE, LAD, etc.) and on how you treat the underlying process. Simply put, it's the linear model of temperatures which has a slope of 0.175 C/decade, not the temperature record itself (as you allege in the above quote). Not NCDC, or GISS, or HadCRU, or BEST. The temperature record helped to determine the slope of the linear model, but the temperature record and the model are not the same thing. As far as the "pragmatic distinction" goes, Tom seems to be hung up on the words "temperature" and "observation" - he's using a highly specific physical definition, rather than a useful statistical designation (as I attempted to explain above @128).
Both inferential and descriptive statistics assume that something, somewhere in the sample we are hoping to analyze, is measured without error, and hope that this oversight doesn't greatly affect our conclusions. If we can, however, minimize, or quantify and include, as many sources of uncertainty as possible, that would be a good thing. Dikran @133: "You might want to ask yourself why Briggs hasn't already performed this analysis before making a fuss about it on his blog." Because he is not interested in the credible interval of the linear regression. He's not interested in the linear regression at all. He has calculated the predictive interval of the "observables" and found a much larger uncertainty in estimates of global temperatures than stated by BEST, with increasing uncertainty the further back we go. A linear regression is not the only (or necessarily the ideal) way to answer a question about a change in some dataset. And no, not all of statistics is "predictive", but "prediction" is often implicitly assumed in some way in an analysis or interpretation, so perhaps it should be. In practice I too use descriptive parametric statistics, because that is the way the world works, but it's fun to opine on the way we should view things from an epistemological standpoint.
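To make the y = a + bx + e notation concrete: ordinary least squares gives closed-form estimates b = cov(x, y)/var(x) and a = mean(y) - b*mean(x). A minimal sketch (data invented) showing that the estimated (a, b) are outputs of an estimation rule applied to the data:

```python
def fit_line(xs, ys):
    """OLS estimates of intercept a and slope b for y = a + b*x + e."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Invented observations
xs = [0, 1, 2, 3, 4, 5]
ys = [0.01, 0.05, 0.03, 0.09, 0.08, 0.12]

a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]

# Under OLS the residuals sum to (numerically) zero by construction
print(a, b, sum(residuals))
```

Swapping OLS for a different estimation rule (LAD, say) would generally give different values of a and b from the same (x, y) pairs, which is the sense in which the estimates depend on the chosen method as well as the data.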
    0 0
  35. I'm with Eli: this whole thing reminds me of the McKitrick "there is no such thing as a Global Temperature" claim. Deltoid has done at least one discussion of this McKitrick nonsense. I also remember Bob Grumbine doing a takedown of McKitrick's argument against the concept of a mean temperature, pointing out that McKitrick's version basically becomes a useless definition with no practical application. It was buried on Usenet, so finding stuff is a bit hard, but I did come up with this somewhat-related discussion: Google search of newsgroups Look for Grumbine's comments about half way down (although the link as presented puts you near the bottom of the thread). This Briggs stuff seems to be a new variant on the "if you don't know everything, you don't know anything" meme.
    0 0
  36. RobertS wrote: "The data we are attempting to model isn't a simple straight line, else we wouldn't attempt to model it with one." That seems rather a non-sequitur. It is the underlying data-generating process that we are interested in, rather than the data themselves. We do not expect this to be a linear function; we are merely trying to approximate the local gradient of the function by fitting a straight line. "We could just look at it and calculate the slope and intercept. But we do plot some straight line onto this data, and attempt to estimate the parameters of that model based on the data." There is no difference whatsoever between these two activities; the parameters of the model are the slope and intercept (other than that fitting by eye is subjective and likely to be biased and highly variable). "Both inferential and descriptive statistics assume that something, somewhere in the sample we are hoping to analyze is measured without error" This is simply incorrect. There are methods for inferential and descriptive statistics that assume measurement without error, and there are other methods that don't make that assumption. Just because you are unaware of them doesn't mean they don't exist (many are based on EM approaches). "Because he is not interested in the credible interval of the linear regression. He's not interested in the linear regression at all." This is Dr Briggs' main failing: he is not interested in the climatology and is ignoring the purpose of the analysis (to estimate the local gradient of the data-generating process). The points he is making would be valid if his assumptions about the purpose of the analysis were correct, but unfortunately they are not. "He has calculated the predictive interval of the 'observables' and found a much larger uncertainty in estimates of global temperatures than stated by BEST" This is because he is using a definition of "warming" that is highly sensitive to the noise (i.e.
unforced variability), which a proper analysis should aim to ignore. This is the reason that climatologists don't use his definition of warming: it is not a reliable indicator of underlying climate trends. "And no, not all of statistics is 'predictive,' but 'prediction' is often implicitly assumed in some way in an analysis or interpretation" No, that is not correct; descriptive and inferential statistics do not implicitly aim to predict, they are about explanation and description. In this case, the aim is to estimate the local gradient of the (unobserved) data-generating process from the data. This does not involve any attempt at prediction, as we are using only a local approximation to the function of interest, so it would be unwise to use it for prediction outside the data we already have.
    0 0
  37. As a purely hypothetical example, if we had a time series with some underlying physical properties that suggested a sinusoid, it would be possible to do a linear regression on a small portion of the sinusoid (e.g. from below the origin to above). But the slope from that section would only "make sense" in the context of the sinusoid and the underlying physical causes. In the same sense a linear trend of some portion of the global average temperature should be considered as part of a bigger picture. A shorter interval (e.g. the 80's and 90's or the 2000's) would involve some natural GW and GC, AGW and possibly some counteracting aerosols. The single number for the trend is inadequate and oversimplified. Looking at longer intervals is even more problematic since then there is a mixture of AGW and recovery from the LIA. This does not seem like a valuable exercise in analysis to me.
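The sinusoid hypothetical is easy to check numerically: fit a straight line to a short arc of sin(x) around the origin, and the fitted slope recovers only the local derivative (cos 0 = 1), saying nothing about the cycle as a whole. A sketch:

```python
import math

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sample a narrow window of sin(x) centred on x = 0
xs = [i / 100 for i in range(-20, 21)]  # x in [-0.2, 0.2]
ys = [math.sin(x) for x in xs]

slope = ols_slope(xs, ys)
print(slope)  # close to 1, the derivative of sin at 0
```

The fit is a good local approximation precisely because the window is short relative to the period; whether that local gradient "makes sense" on its own is the interpretive question raised above.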
    0 0
  38. 137, Eric, I'm not exactly certain what you're saying... are you saying that there is no trend that is reliable? But you are right in that simply looking at the numbers and trying to tease things out without an underlying understanding is climatology climastrology [oops -- Sph]. The solution is to study and tease out the mechanics of the system, like the physics (e.g. radiative emission and absorption) and the component systems (e.g. ENSO and aerosol sources) and then to put it all together, rather than looking at any one piece of information in isolation. The wrong thing to do is to attribute great power to unknown acronyms, like a savage giving homage to the Sun God and the Wind Spirit, and then to just throw up your hands and say that no one could possibly know anything, and anyone who says they do is a nutter.
    0 0
  39. Eric#137: "The single number for the trend is inadequate and oversimplified." All curves (including sine curves) can be linearly approximated on a local basis; that's one of the things that makes calculus work. Fitting a linear trend is an essential first step because it forces you to justify the cause of the linear trend and it alerts you to look for departures from linearity. If those are merely residuals, you look for sources of 'noise,' as in FR2011. If those are changes in slope, you suspect acceleration (+ or -) and again go looking for causes. You seem to be suggesting a variant of the 'we don't know nuthin' myth' because we can see a linear trend and use it as a descriptor. Rather than address what is underlying the trend, imposing these arbitrary jumps doesn't seem like a valuable exercise to me.
    0 0
  40. Eric (skeptic) - "...since then there is a mixture of AGW and recovery from the LIA." (emphasis added) This is a meme I've been seeing more often recently. I will simply note that this is discussed in some detail on the We're coming out of an ice age threads. The LIA was (as far as can be determined at the moment) a combination of solar minimum and possibly (recent evidence) a triggering tipping event of high volcanic activity. While the climate warmed from that low point, that does not explain recent warming. Only AGW can account for the magnitude of climate change over the last 50 years or so.
    0 0
  41. And when will we know when we have finally recovered from that LIA...?
    0 0
  42. 140, KR, A brief digression... another theory for an influence (far from a sole cause) on the LIA is the recovery of forest land, which drew down CO2 levels by about 4 ppm, from the combination of population first lost in Europe due to the Black Death and, 150 years later, in Native American populations due to the introduction of European diseases like smallpox. In effect, we had two major population losses which allowed land that had been cleared to be reclaimed by forests, taking CO2 out of the atmosphere. Forest re-growth on medieval farmland after the Black Death pandemic—Implications for atmospheric CO2 levels (Thomas B. van Hoof et al., 2006) Effects of syn-pandemic fire reduction and reforestation in the tropical Americas on atmospheric CO2 during European conquest (Nevle and Bird, 2008) I have no idea how viable the theories are... but it's another example of trying to understand the system and the "natural variations," rather than just saying "Sky God angry... find virgin sacrifice, make Sky God happy... ugh!"
    0 0
  43. Sphaerica - Very interesting references. While the exact causes of the LIA are not entirely fleshed out, it's clear that anthropogenic greenhouse gases are the overwhelming cause of current warming - not natural forcings, which we have a fairly good identification of. If, on the other hand, we have to placate some great Sky God, I definitely want to be on the committee selecting who gets tossed into the volcanoes...
    0 0
  44. "when will we know when we have finally recovered from that LIA...?" Here's a chronology of sorts, putting the end of the LIA in the 1850s. That looks similar to the pattern of this graph, which shows flattened temperatures in the mid 19th century (source). And then the modern warming began. So the LIA is over; we're not just 'recovering,' whatever that actually means.
    0 0
  45. I am sorry about the LIA digression; I was obviously not clear on what I meant by bringing it up in this thread. For an example, see Figure 4 here. The linear trend shown there encompasses some natural warming and cooling transitioning to manmade warming with natural variations. IMO, it is not a suitable use for a linear model. According to Dikran above, a model describes and explains. The use in that link explicitly predicts, but is not explicit about what it is predicting (natural? anthro plus natural? anthro exceeding natural?)
    0 0
  46. Eric (skeptic), models can describe, explain or predict; how well they perform any of these tasks depends on the appropriateness of the model. If you are not sure what a model represents, why not ask on the appropriate thread?
    0 0
  47. Eric (skeptic) - The use of a linear model in the figure you reference is entirely appropriate for the question asked, which is in determining that recent warming does not fit a linear trend to the last 150 years of data. Without the linear fit in that figure, you wouldn't be able to evaluate the question. Linear fits are a minimal model, perhaps the easiest to evaluate based upon limited data - and almost always an excellent first pass. Fitting a 2nd or higher order polynomial requires fitting additional parameters, meaning more information is required to fit such a model with statistical significance. As Dikran has pointed out in several conversations, the model used when looking at any data set is dependent on the question asked, and whether the data statistically supports such a model over alternatives.
    0 0
  48. Addendum to previous post - The model used also depends on whether that model makes physical sense. One of the primary arguments against 'step' models, even though they can fit the data with considerable statistical significance, is that physically they just don't make sense in terms of thermal inertia and the mechanics of heat transport.
    0 0
  49. It might become even a teensy bit clearer if one considers a series of measurements of something unknown made in a black box with only a digital readout on the cover. Eric doesn’t know what the instrument is. Eric doesn’t know how accurate or precise the instrument is. Eric doesn’t know what the instrument is measuring. Eric doesn’t know if what the instrument is measuring is changing in time. All Eric knows is the numerical (digital) representation of what he reads on the indicator. Eric diligently records the numbers and then arranges them in a table as a function of time. We know that Eric never gives up, so he gives us a long series - or maybe not. Eric can now use statistical analysis to estimate the probability of there being a trend or not, and to estimate the trend and the uncertainty in the trend. The residuals and trends in the residuals can be used to estimate the probability of the trend being linear or higher order (since everything reduces to a power series, we don’t have to deal directly with anything but polynomials; from that point of view, accepting or rejecting the hypothesis that the behavior of whatever is being measured is unchanging is simply checking whether the zeroth-order term in the series is the only significant one). From this POV, the residuals tell us the summed variability of what is being measured and of the measurement device. We would, of course, have to break into the box and calibrate the instrument against a well-characterized source to separate accuracy and precision.
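The black-box exercise can be run end-to-end on synthetic readings: fit a trend and attach a standard error using only the recorded numbers, with no knowledge of what the instrument actually measures. All values below are invented.

```python
import math
import random

random.seed(42)

# Synthetic "readout": an unknown process that is drift plus noise
ts = list(range(100))
readings = [0.05 * t + random.gauss(0, 1.0) for t in ts]

n = len(ts)
mt = sum(ts) / n
mr = sum(readings) / n
sxx = sum((t - mt) ** 2 for t in ts)
slope = sum((t - mt) * (r - mr) for t, r in zip(ts, readings)) / sxx
intercept = mr - slope * mt

residuals = [r - (intercept + slope * t) for t, r in zip(ts, readings)]
s2 = sum(e * e for e in residuals) / (n - 2)  # residual variance
stderr = math.sqrt(s2 / sxx)                  # standard error of the slope

# A |slope| much larger than ~2 standard errors suggests a real trend
print(slope, stderr)
```

As the comment notes, the residuals conflate the variability of the process with that of the instrument; separating accuracy from precision would indeed require opening the box and calibrating.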
    0 0
  50. EliRabett, thanks for the example. Are the observations uniformly spaced? I assume that daily or seasonal cycles are already averaged out in some way? I don't think those are difficult problems, but the methods can sometimes be controversial. Once the cycles are removed, is there any other role for spectral analysis? Are there any a priori statistical tests for the residuals or trends in the residuals? By what method are nonlinear trends in the residuals measured? How do we know that we have sufficient data for that method?
    0 0
