Recent Comments
Comments 25851 to 25900:
-
Tom Curtis at 11:30 AM on 14 January 2016 | Tracking the 2°C Limit - November 2015
I have responded to Angusmac @37 on a more appropriate thread.
Moderator Response:[PS] Thank you for your cooperation. Would all other commenters do likewise, please.
-
Tom Curtis at 11:28 AM on 14 January 2016 | Medieval Warm Period was warmer
This is a response to Angusmac elsewhere.
Angusmac, I distinguished between three different meanings of the claim "the MWP was global":
"Was the GMST durring the MWP warm relative to periods before and after? ... Were there significant climate perturbances across the globe durring the MWP? ... Were temperatures elevated in the MWP across most individual regions across the globe?"
In response you have not stated a preference for any of the three, and so have not clarified your usage at all. In particular, while your citing of the AR5 graphs suggests you accept the first meaning, you then go on to cite the Lüning and Vahrenholt Google map app, which suggests you also accept the third, false meaning. Your question as to whether or not I believe the MWP was global remains ambiguous as a result, suggesting you are attempting to pass off agreement on the first definition as tacit acceptance of the third, false position.
With regard to the Lüning and Vahrenholt Google map app, KR's response is excellent as it stands and covers most of what I would have said. In particular, as even a brief perusal of Lüning and Vahrenholt's sources shows, the warm periods shown in their sources are not aligned over a set period, and include colder spells within those warm periods which may also not align. Because of this possible failure of alignment, any timeseries constructed from their source proxies will probably regress to a lower mean - and may not be evidence of a warm MWP at all. (The continuing failure of 'skeptic' sources such as Soon and Baliunas, CO2Science and now Lüning and Vahrenholt to produce composite reconstructions from their sources in fact suggests that they are aware that doing so would defeat their case - and so have taken a rhetorically safer approach.)
In addition to this fundamental problem, two further issues arise. The first is that Lüning and Vahrenholt do not clarify what they mean by "warm" in their map legend. One of their examples helps clarify, however. This is the graph of a temperature reconstruction from Tasmanian tree rings by Cook et al (2000) that appears on Lüning and Vahrenholt's map:
They claim that it shows a "Warm phase 950-1500 AD, followed by Little Ice Age cold phase." Looking at that warm phase, it is evident that only two peaks within it rise to approximately match the 1961-1990 mean, with most of the warm phase being significantly cooler. It follows that, if they are not being dishonest, by "warm phase" they do not mean as warm as the mid to late twentieth century, but only warmer than the Little Ice Age. That is, they have set a very low bar for something to be considered warm - so much so that their Google map app is useless for determining whether or not the MWP had widespread warmth relative to late 20th century values.
As an aside, Cook et al stated "There is little indication for a ''Little Ice Age'' period of unusual cold in the post-1500 period. Rather, the AD 1500-1900 period is mainly characterized by reduced multi-decadal variability." Evidently they would not agree with Luning and Varenholt's summary of the temperature history shown in that graph over the last one thousand years.
The second point is that it amounts to special pleading for you to accept the IPCC global temperature reconstruction that you showed, which is actually that of Mann et al (2008), but not then also accept the reconstruction of the spatial variation of MWP warmth from Mann (2009), which uses the same data as Mann (2008):
Apparently the data counts as good when it appears to support a position you agree with, but as bad when it does not. If you wish to reject Mann (2009), you need also to reject the reconstruction in Mann (2008) and conclude that we have no reliable global temperature reconstruction for the MWP (unless you want to use the PAGES 2k data). If, on the other hand, you accept Mann (2008), end your special pleading and accept Mann (2009) as our best current indication of the spatial variation of MWP warmth.
-
Glenn Tamblyn at 11:25 AM on 14 January 2016 | The Quest for CCS
Ryland
In principle no, since you grow trees, sequester their carbon, and grow more trees. However, there are still lots of issues with BECCS: the land area required, which competes with agriculture; the nutrient requirements to maintain the growth potential of that land; then the need to transport the biomass to the power stations; then transporting the captured CO2 to another site for sequestration.
In one sense BECCS is a misnomer. It should actually be BECCRRS - Bioenergy Carbon Capture, Release, Recapture and Sequestration.
When we harvest the plant crop we have already captured the carbon! Then we take it to a power station, release it through combustion, recapture it (but not all of it) from the smoke stack, then sequester it.
Maybe a simpler approach is to take the organic matter and directly sequester that! The tonnage required would be lower - by mass, organic molecules such as cellulose have a higher proportion of carbon than CO2 does.
-
Andy Skuce at 11:23 AM on 14 January 2016 | The Quest for CCS
ryland:
Yes.
See John Upton's excellent series Pulp Fiction and the UK DECC report on biomass life-cycle impacts.
-
Andy Skuce at 11:17 AM on 14 January 2016 | The Quest for CCS
wili:
The potential for catastrophic leakage from CCS wells that fail is certainly a serious concern. There are, though, some differences in scale and rate between what is happening at Porter Ranch and the tragedy at Lake Nyos. I understand that the rate of gas release in California is about 1200 tons per day (please forgive the Wiki references), whereas the Lake Nyos release was a very sudden eruption of 100,000-300,000 tons of CO2 - basically three months' to a year's worth of the California gas leak in less than a day - as the entire lake catastrophically degassed like a shaken Champagne bottle.
I'm not exactly sure what would happen in the case of a CCS well blowout and I suspect nobody else is either, since it has never happened. There have been CO2 blowouts from mines and wellbores (and some have caused fatalities), but a CCS blowout might be different because the CO2 is likely stored in the form of a super-critical fluid. My understanding is that when such a fluid is subjected to depressurization and changes to the gas phase, it causes a refrigeration effect (the Joule–Thomson effect), which slows the degassing process and forms ice, dry ice and hydrates, which may also block or slow the flow. See Bachu (2008). I believe that the expectation is that a failed CCS well will sputter out gas, seal itself, and then sputter out more gas in a cycle, as the rock and wellbore cool and warm up again.
In other words, a failed CCS well might not be as bad as Porter Ranch and is very unlikely to be a catastrophe as bad as Lake Nyos. Having said that, I'm probably not alone in not wanting to live in a valley below a big CCS operation, because "might not be as bad" and "very unlikely to be a catastrophe" is not reassuring enough. If CCS is ever to be deployed at the scale that some of the modelers envisage, then among the required tens of thousands of projects, involving who-knows-how-many injection wells, unexpected disasters are certain.
-
ryland at 10:45 AM on 14 January 2016 | The Quest for CCS
Wouldn't the use of biomass be somewhat self-defeating, as the use of living trees not only has an environmental impact but also reduces the global carbon sink capacity?
-
wili at 10:09 AM on 14 January 2016 | The Quest for CCS
As current events in California indicate, gas 'stored' in underground wells does not necessarily stay there. If the gas escaping from Porter Ranch had in fact been CO2, and if it had been a quiet night, there might have been no need for an evacuation - everyone in the valley below would have been suffocated in their sleep. (As happened at Cameroon's Lake Nyos in 1986.)
-
Kevin C at 09:26 AM on 14 January 2016 | Surface Temperature or Satellite Brightness?
John Kennedy of the UK Met Office raised an interesting issue with my use of the HadCRUT4 ensemble. The ensemble doesn't include all the sources of uncertainty. In particular, coverage and uncorrelated/partially correlated uncertainties are not included.
Neither RSS nor HadCRUT4 has global coverage, and the largest gaps in both cases are the Antarctic, then the Arctic. Neither includes coverage uncertainty in the ensemble, so at first glance the ensembles are comparable in this respect.
However there is one wrinkle: The HadCRUT4 coverage changes over time, whereas the RSS coverage is fixed. To estimate the effect of changing coverage I started from the NCEP reanalysis (used for coverage uncertainty in HadCRUT4), and masked every month to match the coverage of HadCRUT4 for one year of the 36 years in the satellite record. This gives 36 temperature series. The standard deviation of the trends is about 0.002C/decade.
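A toy sketch of that masking step - a synthetic random field stands in for the NCEP reanalysis, and the hypothetical 80%-coverage masks stand in for the real HadCRUT4 coverage, so only the procedure (not the numbers) is being illustrated:
```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a reanalysis: 432 monthly fields on a 36x72 grid.
n_months, nlat, nlon = 432, 36, 72
field = rng.normal(0, 1, (n_months, nlat, nlon))
lats = np.linspace(-87.5, 87.5, nlat)
w = np.cos(np.radians(lats))[:, None] * np.ones((nlat, nlon))  # area weights

t = np.arange(n_months) / 120.0      # time in decades
trends = []
for yr in range(36):
    # One hypothetical coverage mask per year (80% of cells observed),
    # applied to every month, as in the procedure described above.
    mask = rng.random((nlat, nlon)) < 0.8
    series = (field * (w * mask)).sum(axis=(1, 2)) / (w * mask).sum()
    trends.append(np.polyfit(t, series, 1)[0])

print("trend spread across coverage masks:", np.std(trends))
```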
Next I looked at the uncorrelated and partially correlated errors. Hadley provide these for both the monthly and annual data. I took the 95% confidence interval and assumed that it corresponds to the 4-sigma width of a normal distribution, and then generated 1000 series of normal values for either the months or years. I then calculated the trends for each of the 1000 series and looked at the standard deviation of each sample of trends. The standard deviation for the monthly data was about 0.001C/decade, and for the annual data about 0.002C/decade.
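A minimal version of that Monte Carlo - the CI width here is a made-up placeholder rather than Hadley's published numbers:
```python
import numpy as np

rng = np.random.default_rng(5)

# 36 years of annual values; treat the quoted 95% CI as the 4-sigma width.
n_years = 36
ci95_width = 0.04                  # hypothetical 0.04 C full width
sigma = ci95_width / 4.0
t = np.arange(n_years) / 10.0      # time in decades

# Trends fitted to pure noise estimate the error-induced trend spread.
noise = rng.normal(0.0, sigma, (1000, n_years))
trends = np.polyfit(t, noise.T, 1)[0]
print("trend spread:", trends.std())   # ~0.0016 C/decade for these inputs
```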
I then created an AR1 model to determine what level of autocorrelation would produce a doubling of trend uncertainty on going from monthly to annual data - the autocorrelation parameter is about 0.7. Then I grouped the data into 24-month blocks and recalculated the standard deviation of the trends - it was essentially unchanged from the annual data. From this I infer that the partially correlated errors become essentially uncorrelated when you go to the annual scale. Which means the spread due to partially correlated errors is about 0.002C/decade.
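A quick numpy sketch of why annual averaging largely decorrelates AR1 noise with an autocorrelation parameter of 0.7 (arbitrary noise scale, not the actual analysis code):
```python
import numpy as np

rng = np.random.default_rng(6)

# AR(1) monthly noise with the phi ~ 0.7 inferred above (arbitrary scale).
phi, n = 0.7, 120_000
e = rng.normal(0, 1, n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + e[i]

def lag1(v):
    # lag-1 autocorrelation
    return np.corrcoef(v[:-1], v[1:])[0, 1]

annual = x.reshape(-1, 12).mean(axis=1)   # 12-month block means
print("monthly lag-1 autocorrelation:", lag1(x))       # ~0.7
print("annual  lag-1 autocorrelation:", lag1(annual))  # ~0.15, near zero
```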
The original spread in the trends was about 0.007C/decade (1σ). Combining these gives a total spread of √(0.007² + 0.002² + 0.002²), or about 0.0075C/decade. That's about a 7% increase in the ensemble spread due to the inclusion of changing coverage and uncorrelated/partially correlated uncertainties. That's insufficient to change the conclusions.
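The quadrature combination itself, using the numbers quoted above:
```python
import numpy as np

# Trend spreads quoted above, all in C/decade, combined in quadrature:
ensemble, coverage, partial = 0.007, 0.002, 0.002
total = np.sqrt(ensemble**2 + coverage**2 + partial**2)
print(total)                 # ~0.0075
print(total / ensemble - 1)  # ~0.08, the "about 7%" increase quoted above
```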
However I did notice that the ensemble spread is not very normal. The ratio of the standard deviations of the trends between the ensembles is a little less than the ratio of the 95% range. So it would be defensible to say that the RSS ensemble spread is only four times the HadCRUT4 ensemble spread.
-
JWRebel at 06:01 AM on 14 January 2016 | The Quest for CCS
Approaches using natural processes (accelerated olivine weathering, etc.) seem to be a lot more promising. The first link below is a brief (2010) claiming it is possible to capture global annual carbon emissions for $250 billion/year. The second describes using weathering to produce energy and materials with carbon dioxide as a major input.
Moderator Response:[RH] Shortened links.
-
MA Rodger at 05:20 AM on 14 January 2016 | Surface Temperature or Satellite Brightness?
rocketeer @11.
I've posted a graph here (usually two clicks to 'download your attachment') which plots an average for monthly surface temperatures, an average for TLTs, & MEI. As TonyW @13 points out (& the graph shows), there is a few months' delay between the ENSO wobbling itself & the resulting global temperature wobble. The relative size of these surface temp & TLT wobbles back in 1997-98 is shown to be 3-to-1. So far there is no reason not to expect the same size of temperature wobble we had back in 1997/8, which would mean the major part of the TLT wobble has not started yet.
-
mitch at 05:12 AM on 14 January 2016 | The Quest for CCS
My understanding is that Statoil has been injecting CO2 for sequestration from the North Sea Sleipner Gas field into a saline aquifer since 1996, roughly a million tons/yr. They found that it was economic because Norway was charging $100/ton for CO2.
Another problem with CCS is that the CO2 has more mass than the original hydrocarbon or coal: for each ton of coal, one produces about 2.7 tons of CO2. Nevertheless, it is worth continuing to investigate how much we can bury and at what price.
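A rough check of that factor, assuming a typical hard coal of about 75% carbon by mass (the exact fraction varies by coal rank):
```python
# Rough check of the ~2.7 tons CO2 per ton of coal figure.
C, O = 12.011, 15.999               # atomic masses
co2_per_c = (C + 2 * O) / C         # ~3.67 t CO2 per t of carbon burned
carbon_fraction = 0.75              # assumed typical hard coal, by mass
print(co2_per_c * carbon_fraction)  # ~2.75 t CO2 per t of coal
```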
-
John Hartz at 04:10 AM on 14 January 2016 | NASA study fixes error in low contrarian climate sensitivity estimates
Suggested supplementary reading:
How Sensitive Is Global Warming to Carbon Dioxide? by Phil Plait, Bad Astronomy, Slate, Jan 13, 2016
-
PhilippeChantreau at 03:39 AM on 14 January 2016 | The Quest for CCS
Interesting post, Andy. From the big-picture point of view, the thermodynamics of CCS seems to be quite a problem. The sheer size of the undertaking is another.
I think it is worth mentioning the CCS potential offered by Hot Dry Rock systems. The MIT report on HDR indicates there is definite potential there, in addition to all the other advantages of HDR:
https://mitei.mit.edu/system/files/geothermal-energy-full.pdf
-
Rob Honeycutt at 03:22 AM on 14 January 2016 | Tracking the 2°C Limit - November 2015
angusmac... The conversation has also veered well off course for this comment thread. You should try to move any MWP discussion over to the proper threads and keep this one restricted to the baselining of preindustrial temperatures.
-
Tracking the 2°C Limit - November 2015
angusmac - While that map (generated by Drs. Lüning and Vahrenholt, fossil fuel people who appear to have issues understanding fairly basic climate science) is an interesting look at the spatial distribution of selected proxies, there is no timeline involved in that map, no indication of what period was used in the selection. No demonstration of synchronicity whatsoever. Unsurprising, because (as in the very recent PAGES 2k reconstruction of global temperature):
There were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age...
As to IPCC AR5 Chapter 5:
Continental-scale surface temperature reconstructions show, with high confidence, multi-decadal periods during the Medieval Climate Anomaly (950 to 1250) that were in some regions as warm as in the mid-20th century and in others as warm as in the late 20th century. With high confidence, these regional warm periods were not as synchronous across regions as the warming since the mid-20th century. (Emphasis added)
You are again presenting evidence out of context, and your arguments are unsupported.
---
But this entire discussion is nothing but a red herring - again, from IPCC AR5 Ch. 5, we have a fair bit of knowledge regarding the MCA and LIA:
Based on the comparison between reconstructions and simulations, there is high confidence that not only external orbital, solar and volcanic forcing, but also internal variability, contributed substantially to the spatial pattern and timing of surface temperature changes between the Medieval Climate Anomaly and the Little Ice Age (1450 to 1850).
Whereas now we have both external forcings (generally cooling) and anthropogenic forcings, with the latter driving current temperature rise. In the context of the present, a globally very warm MCA and cold LIA would be bad news, as it would indicate quite high climate sensitivity to the forcings of the time, and hence worse news for the ongoing climate response to our emissions. I see no reason to celebrate that possibility, let alone to cherry-pick the evidence in that regard as you appear to have done.
-
angusmac at 02:14 AM on 14 January 2016 | Tracking the 2°C Limit - November 2015
Tom Curtis@27 & KR@28
Referring to your request that I clarify the sense in which I mean that the MWP was global: I thought that I had already done this in angusmac@12 & 20, but I will repeat it here for ease of reference.
My definition of the global extent of the MWP was summarised in Section 5.3.5.1 of AR5 which states that, “The timing and spatial structure of the MCA [MWP] and LIA are complex…with different reconstructions exhibiting warm and cold conditions at different times for different regions and seasons.” However, Figure 5.7(c) of AR5 shows that the MWP was global and (a) and (b) show overlapping periods of warmth during the MWP and cold during LIA for the NH and SH.
Additional information on the global extent of the MWP is shown graphically by the paleoclimatic temperature studies highlighted in Figure 1 below.
Figure 1: Map showing Paleoclimatic Temperature for the MWP (Source: Google Maps MWP)
The following colour codes are used for the studies highlighted in Figure 1: red – MWP warming; blue – MWP cooling (very rare); yellow – MWP more arid; green – MWP more humid; and grey – no trend or data ambiguous.
The map in Figure 1 was downloaded from this Google Maps website. The website contains links to more than 200 studies that describe the MWP in greater detail. Globally, 99% of the paleoclimatic temperature studies compiled in the map show a prominent warming during the MWP.
Moderator Response:[JH] You have been skating on the thin ice of excessive repetition for quite some time now. Please cease and desist. If you do not, your future posts may be summarily deleted.
-
shoyemore at 00:07 AM on 14 January 2016 | Surface Temperature or Satellite Brightness?
Kevin C #10,
Many thanks, pictures not required! :)
-
Rob Honeycutt at 23:29 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Absolutely, Tom!
All of these combined also become a big multiplier effect on socio-political stresses.
-
Tom Curtis at 23:25 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Rob Honeycutt @24, there are three "huge differences" between the current warming and the HTM.
First, as you mention, the rate of temperature change is much faster, with temperatures expected to increase in a century or two by the same amount it took 8000 years to increase leading into the HTM (and hence the time frames in which species must migrate or evolve to adapt are much smaller).
Second, humans have a much more static, industrialized society, making it difficult or impossible for populations to pick up and move to more friendly conditions. The extensive agricultural, road and rail networks place similar restrictions on adaptation by migration for land animals and plants.
Third, global warming is just one of three or four major stressors of nature by human populations. Because of the additional stresses from overpopulation, overfishing, co-option of net primary productivity, and industrial and chemical waste, the population reserves that are the motor of adaptation for nature just do not exist now as they did in the HTM. AGW may well be the 'straw' (more like tree trunk) that breaks the camel's back.
-
Rob Honeycutt at 23:15 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
angusmac @29... Your quote from Ljungqvist is not in disagreement with anything we're saying here. At ~1°C over preindustrial we have brought global mean surface temperature back to about where it was at the peak of the Holocene. That statement in Ljungqvist does not in any way suggest that 2°C is unlikely to be a serious problem.
Look back at your PAGES2K chart @20. There's one huge difference between the peak of the Holocene and today, and that's the rate at which the changes are occurring. That is the essence of the problem we face. It's less about relative temperature and more about the incredible rate of change and the ability of species to adapt to that change.
Human adaptability is one of the keys to our success as a species. Physiologically, we would have the capacity to survive whatever environment results from our activities. But the species we rely on for our sustenance, not so much.
A change in global mean temperature of >2° is very likely to produce some pretty dramatic climatic changes on this planet right about the time human population is peaking at 9-10 billion people. Feeding that population level with frequent crop failures and any substantive decrease in ocean fish harvests is likely to cause very serious human suffering.
-
Tom Curtis at 23:13 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Further to my preceding post, here are Ljungqvist 2011's land and ocean proxies, annotated to show latitude bands. First land:
Then Ocean:
I have also realized that proxies showing temperatures between -1 and +1 C of the preindustrial average are not shown in Ljungqvist 2011 Fig 3, and that they are never less than about 20% of the proxies. As they are not shown, their impact cannot be quantified, even intuitively, from that figure - suggesting that inferences to global temperatures from it would be fraught with peril, even if the proxies were geographically representative.
-
Tom Curtis at 23:06 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Angusmac @29 (2), I am disappointed that you drew my attention to Ljungqvist 2011, for I had come to expect higher standards from that scientist. Instead of the standards I expected, however, I found a shoddy paper reminiscent of Soon and Baliunas (2003) (S&B03). Specifically, like S&B03, Ljungqvist 2011 gathers data from a significant number (60) of proxies, but does not generate a temperature reconstruction from them. Rather, each is categorized, for different time periods, as to whether it was more than 1 C below the preindustrial average, within 1 C of that average, more than 1 C but less than 2 C above it, or more than 2 C above the preindustrial average. The primary reasoning is then presented as a simple head count of proxies in each category over different periods, shown in Figure 3, with Figure 3a showing land-based proxies and Figure 3b showing marine proxies:
(As an aside, C3 Headlines found the above graph too confronting. They found it necessary to modify the graph by removing Fig 3b, suggesting that the thus truncated graph was "terrestial and marine temperature proxies".)
If the proxies were spatially representative, the above crude method might be suitable for drawing interesting conclusions. But they are not spatially representative. Starting at the simplest level, the 70% of the Earth's surface covered by oceans is represented by just 38% (23/60) of the proxy series. As the ocean proxy series, particularly in the tropics, are cooler than the land series, this is a major distortion. Worse, the 6.7% of the Earth's surface north of 60 latitude is represented by 25% of the data (15/60 proxies). The 18.3% of the Earth's surface between 30 and 60 degrees north is represented by another 43% of the data (26/60 proxies). Meanwhile the 50% of the Earth's surface between 30 north and 30 south is represented by just 23% of the data (14/60 proxies), and the 50% of the Earth's surface south of the equator is represented by just 15% of the data (9/60 proxies).
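Those area percentages are easy to verify: the fraction of the Earth's surface in a latitude band is (sin φ₂ − sin φ₁)/2. A quick sketch, with the proxy counts taken from the text above:
```python
import numpy as np

def band_fraction(lat1, lat2):
    # Fraction of a sphere's surface between two latitudes (degrees).
    return (np.sin(np.radians(lat2)) - np.sin(np.radians(lat1))) / 2

bands = {
    "north of 60N":     band_fraction(60, 90),    # ~6.7% of surface
    "30N to 60N":       band_fraction(30, 60),    # ~18.3%
    "30S to 30N":       band_fraction(-30, 30),   # 50%
    "south of equator": band_fraction(-90, 0),    # 50%
}
proxies = {"north of 60N": 15, "30N to 60N": 26,
           "30S to 30N": 14, "south of equator": 9}

for band, frac in bands.items():
    print(f"{band}: {frac:.1%} of surface, {proxies[band]/60:.0%} of proxies")
```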
This extreme mismatch between surface area and number of proxies means no simple eyeballing of Fig 3 will give you any idea as to Global Mean Surface Temperatures in the Holocene Thermal Maximum. Further, there are substantial temperature variations between proxies in similar latitude bands, at least in the NH where that can be checked. That means that in the SH, where it cannot be checked due to the extremely small number of proxies, it cannot be assumed that the 2 to 4 proxies in each latitude band are in fact representative of that latitude band at all. Put simply, knowing it was warm in NZ tells us nothing about temperatures in Australia, let alone South America or Africa. This problem is exacerbated because (as Ljungqvist notes with regard to Southern Europe) data is absent from some areas known to have been cool during the HTM.
The upshot is that the only reliable claim that can be made from this data is that it was very warm north of 60 North, and on land north of 30 North, in the HTM. The data is too sparse and too poorly presented to draw any conclusions about other latitude bands, about ocean temperatures, or about land/ocean temperatures from 30-60 North.
Given the problems with Ljungqvist 2011 outlined above, I see no reason to prefer it to Marcott et al (2013):
More illustrative is their Figure 3:
Note that the statistical distribution of potential Holocene temperatures tails out at 1.5 C above the 1961-1990 baseline, or 1.86 C above an 1880-1909 baseline. Unlike the reconstruction, the statistical distribution of realizations does not have a low resolution. Ergo, we can be confident from Marcott et al that it is extremely unlikely that the Earth has faced temperatures exceeding 2 C above the preindustrial average in the last 100 thousand years.
-
Richard Lawson at 23:02 PM on 13 January 2016 | NASA study fixes error in low contrarian climate sensitivity estimates
This is a critical point in the debate. Science works by refuting false hypotheses. The contrarians' hypothesis is that "Human additions to atmospheric CO2 will not adversely affect the climate". Low climate sensitivity is absolutely crucial to their case. They were relying on Lewis and the 20th century TCR studies to sustain low climate sensitivity, and since the studies have been shown to be flawed, it is surely time for us to say loud and clear that the contrarian hypothesis has no merit.
-
Rob Honeycutt at 22:58 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
angusmac @21/22... "I conclude from the above that many parts of the world exceeded the 2 °C limit without any dangerous consequences and that these temperatures occurred when CO2 was at ≈ 280 ppm."
That's not the question at hand, though, is it? Parts of the world today exceed 5°C over preindustrial. The question is whether global mean surface temperature will exceed 2°C, which would be inclusive of Arctic amplification pushing northern regions past 8°-10°C.
-
Tom Curtis at 22:05 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Angusmac @29 (1), Renssen et al (2012) label their Figure 3a as "Simulated maximum positive surface temperature anomaly in OGMELTICE relative to the preindustrial mean, based on monthly mean results." You reduced that to "Thermal Maximum anomalies", thereby falsely indicating that they were the mean anomaly during the Holocene Thermal Maximum. That mislabelling materially helped your position, and materially misrepresented the data shown. In defense you introduce the red herring that I was suggesting you indicated they were "global average temperatures", when "global average temperatures" would of necessity be shown by a timeseries, not a map.
Further, you argued that the map showed that "...many parts of the world exceeded the 2 °C limit". However, as the map showed mean monthly temperatures (from widely disparate seasons and millennia), it does not show mean annual temperatures for any location, and therefore cannot show that the 2 C limit for annual averages was exceeded at any location, let alone for the Global Mean Surface Temperature. That your conclusion did not follow from your data was significantly disguised by your mislabelling of the data.
Given that you are persisting with the idea that you did not misrepresent the data, now that I have spelled it out I will expect an apology for the misrepresentation. Failing that, the correct conclusion is that the misrepresentation was deliberate.
-
tmbtx at 20:10 PM on 13 January 2016 | Latest data shows cooling Sun, warming Earth
I would argue this scale is proportional. The dip from 2000, for example, is oft cited as a contributor to temperatures coming in on the lower side of the modeling. The scale of the Bloomberg plot makes a good point about long-term stability, but the differentials within that noise are on a scale that does affect the climate. This isn't really a major thing to argue about, though, I guess.
-
JohnMashey at 18:20 PM on 13 January 2016 | Surface Temperature or Satellite Brightness?
Just so people know ...
Jastrow, Nierenberg and Seitz (the Marshall Institute folks portrayed in Merchants of Doubt) published Scientific Perspectives on the Greenhouse Problem (1990), one of the earliest climate doubt-creation books. One can find a few of the SkS arguments there.
pp. 95-104 are Spencer and Christy (UAH) on satellites.
That starts:
"Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth’s surface. … monthly precision of 0.01C … etc, etc.”So, with just the first 10 years of data (19790=-1988), they *knew* they were more precise, 0.01C (!) for monthly, and this claim has been marketed relentlessly for 25 years .. despite changes in insturments and serious bugs often found by others.
Contrast that with the folks at RSS, who actually deal with uncertainties, have often found UAH bugs, and consider ground temperatures more reliable. Carl Mears' discussion is well worth reading in its entirety.
"A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!).”"
-
angusmac at 18:12 PM on 13 January 2016 | Tracking the 2°C Limit - November 2015
Tom Curtis@24 Regarding your assertion of my "abuse of data" and being "fraudulent" in the use of the Renssen et al (2012) HTM temperature anomalies: I can only assume that you are stating that I portrayed Renssen et al as global average temperatures. You are incorrect. I did not state that they were global average temperatures; I only stated that "...many parts of the world exceeded the 2 °C limit" in my comment on Renssen et al. I fail to see anything fraudulent in this statement.
Referring to global average temperatures, I do not know why Renssen et al did not present global averages, because they obviously have the data to do so. However, if you wished to obtain an early Holocene global average from Renssen et al, it is a simple matter to inspect one of their references; e.g., Ljungqvist (2011) offers the following conclusions on global temperatures:
Figure 1: Extract from Conclusions by Ljungqvist (2011) [my emphasis]
I agree with you that, regarding temperatures during earlier warm periods, it could be "…plausibly argued that in some areas of the world those conditions were very beneficial", but I will stick to what you call my "faith" that they were beneficial to humanity overall. I will leave it to the proponents of the 2°C-is-dangerous scenario to prove that temperatures of 1 °C or "possibly even more" were harmful to humanity as a whole.
Finally, you state that I frequently cite Marcott et al but, once again, you are incorrect. I only cited Kaufman et al (2013), which shows Marcott et al as one of the temperature simulations in their diagram. The Marcott et al Climate Optimum was only mentioned once by me, in angusmac@17.
-
TonyW at 17:30 PM on 13 January 2016 | Surface Temperature or Satellite Brightness?
rocketeer,
I've seen comments about the satellite data having a delay (I don't know why). Just as the 1997/1998 El Nino shows up clearly in the satellite data only in 1998, we might expect that the 2015/2016 El Nino will spike the satellite data in 2016. So watch out for this year's data. The British Met Office expects 2016 to be warmer than 2015 at the surface, so that tends to support the idea of the satellite data spiking this year.
Tamino analysed various data sets in this post. It did seem to me that the RSS data appeared to deviate markedly from the RATPAC data around 2000, which leads me to guess that something started to go wrong with the data collection or estimation around then. I don't know yet, but this article did mention a study into the UAH data which suggests it is wrong.
-
davidsanger at 16:20 PM on 13 January 2016 | Latest data shows cooling Sun, warming Earth
@tmbtx it makes aesthetic sense in that it fills the vertical space, but it also visually implies a proportionality of cause and effect that doesn't exist.
-
sidd at 15:55 PM on 13 January 2016 | NASA study fixes error in low contrarian climate sensitivity estimates
doi: 10.1038/ncomms10266 is a nice paper on the effects of clouds in Greenland preventing refreeze and ratcheting up melt, especially since the firn is full. The paper illustrates that clouds have spatial, temporal, oceanic and many other fingerprints, and that averaging these effects would mislead.
-
Tom Curtis at 10:07 AM on 13 January 2016 | Why is the largest Earth science conference still sponsored by Exxon?
ryland @26, Yes or No: Government funding has not distorted the results of climate science, merely enabled its existence.
If no, in what way did it distort the science, and where is your evidence of that? If yes, what was your point in posting - and why have you not made it clear that you think there is no distortion from the funding before now?
Moderator Response:[PS] Ryland appears to be attempting offtopic commentary. DNFTT.
-
Rik Myslewski at 09:28 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
Thank you for the excellent, clear, and concise article. I've been trying to wrap my head around the uncertainties of satellite versus surface temperature readings, and thanks to you that wrapping is approaching, oh, let's estimate about 270º ± 5º ...
-
rocketeer at 06:42 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
I see that Spencer and Christy at UAH have a new (and, I believe, as yet unpublished) version of their satellite temperature record that makes the warming trend go away altogether, at least for 2000-present. They are pushing this as an improvement on the earlier version, which is used in the Skeptical Science trend calculator (as well as the very similar calculator at Kevin Cowtan's university page). Does anyone have further information on this new version of the data? I had a denier throw this at me with no sense of irony about all the noise being made about the Karl corrections constituting data manipulation. Also, any speculation about why the satellite data shows such a huge peak for the 1998 El Nino, much higher than the 2015 El Nino, despite other evidence that their magnitudes are similar?
-
Kevin C at 06:06 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
OK, I'll try, but I don't have time to draw the pictures.
Suppose we just have temperature measurements for 2 months. Then the trend is easy: (T₂ - T₁)/t, where t is the time difference between the measurements.
But what is the uncertainty? Well, if the errors in the two observations were uncorrelated, then it would be σ_diff/t, where σ_diff = √(σ₁² + σ₂²).
But suppose the errors are correlated. Suppose for example that both readings have an unknown bias which is the same for both - they are both high or both low. That contributes to the error of the two readings, but it doesn't contribute to the trend.
In terms of covariance, the errors have a positive covariance. The equation for the uncertainty in the difference with non-zero covariance is σ_diff = √(σ₁² + σ₂² - 2σ₁₂), where σ₁₂ is the covariance. And as expected, the uncertainty in the difference, and therefore the trend, is now reduced.
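A tiny Monte Carlo illustration of that effect, with made-up sigmas: the shared bias inflates the naive uncorrelated estimate but cancels in the difference.
```python
import numpy as np

rng = np.random.default_rng(7)

# Two readings sharing an unknown common bias, plus independent noise.
# Sigmas are made up for illustration.
n = 100_000
bias = rng.normal(0.0, 0.10, n)            # shared error
t1 = 10.0 + bias + rng.normal(0, 0.05, n)
t2 = 10.5 + bias + rng.normal(0, 0.05, n)

sigma_reading = np.hypot(0.10, 0.05)       # total sigma of each reading
naive = np.sqrt(2) * sigma_reading         # assumes zero covariance
print("naive sigma_diff:", naive)             # ~0.158
print("actual sigma_diff:", (t2 - t1).std())  # ~0.071: the bias cancels
```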
Now suppose we have a long temperature series. We can again store the covariance of every month with respect to every other month, and we can take those covariances into account when we calculate the uncertainty in an average over a period, or the uncertainty in a trend. However the covariance matrix starts to get large when we have a thousand months.
Now imagine you are doing the same not with a global temperature series, but with a map series. Now we have say 5000 series each of 1000 months, which means 5000 matrices each of a million elements. That's getting very inconvenient.
So the alternative is to provide a random sampling of series from the distribution represented by the covariance matrix. Let's go back to one series. Suppose there is an uncertain adjustment in the middle of the series. We can represent this by a covariance matrix, which will look a bit like this:
1 1 1 0 0 0
1 1 1 0 0 0
1 1 1 0 0 0
0 0 0 1 1 1
0 0 0 1 1 1
0 0 0 1 1 1
But an alternative way of representing it is by providing an ensemble of possible temperature series, with different values for the adjustment drawn from the probability distribution for that adjustment. You give a set of series: for some the adjustment is up, for some it is down, for some it is near zero.
Now repeat that for every uncertainty in your data. You can pick how many series you want to produce - 100 say - and for each series you pick a random value for each uncertain parameter from the probability distribution for that parameter. You now have an ensemble of temperature series, which obey the covariance matrix to an accuracy limited by the size of the ensemble. Calculating the uncertainty in an average or a trend is now trivial - we just calculate the average or the trend for each member of the ensemble, and then look at the distribution of results.
If you want maps, then that's 100 map series. Which is still quite big, but tiny compared to the covariance matrices.
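A minimal sketch of that ensemble idea with toy numbers - one uncertain step adjustment and 100 members, not the actual HadCRUT4 or RSS machinery:
```python
import numpy as np

rng = np.random.default_rng(8)

# Toy series: 1000 months, trend 0.15 C/decade, plus one uncertain
# step adjustment (sigma 0.05 C) at the midpoint.
n_months, n_ens = 1000, 100
t = np.arange(n_months) / 120.0            # time in decades
base = 0.15 * t

step = np.zeros(n_months)
step[n_months // 2:] = 1.0
draws = rng.normal(0.0, 0.05, n_ens)       # one adjustment per member
ensemble = base + np.outer(draws, step)    # shape (n_ens, n_months)

# Uncertainty from the adjustment = spread of the member trends.
trends = np.polyfit(t, ensemble.T, 1)[0]
print("mean trend:  %.3f C/decade" % trends.mean())
print("trend sigma: %.3f C/decade" % trends.std())
```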
-
shoyemore at 05:16 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
Kevin,
Thanks again for a great post. I understand a covariance matrix; can you explain what you mean by an "ensemble of temperature realisations"? Why is it that only RSS and HadCRUT4 have done this? Thanks.
-
Kevin C at 01:22 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
Bart: I'm not very interested in behaviour I'm afraid. You are as qualified as me (probably more) to discuss ethics and motivation.
With respect to the UAH data, as far as I am aware they have not presented an analysis capable of quantifying the structural uncertainties in the trend in their reconstruction. However they are working from the same source data as RSS, with the same issues of diurnal drift, intersatellite corrections and so on. While their method might be better or worse than the RSS method in constraining these corrections, in the absence of any analysis I think all we can do is assume that the uncertainties are similar.
Unfortunately uncertainty analysis is hard, and it is very easy to do it badly. Given multiple sources of uncertainty analysis for a particular set of source data, in the absence of any other information I would generally trust the one with the greatest uncertainties. It is disappointing that we only have one analysis each of the surface and satellite records for which a comprehensive analysis of uncertainty has been attempted. Hopefully that will change.
-
One Planet Only Forever at 01:17 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
No matter how solid the science showing the unacceptability of a potentially popular and profitable activity may be, the ability to deceptively create temporary perceptions contrary to the developing better understanding can be very powerful. Especially if 'fancier more advanced technology' can be used as justification for the desired temporary perceptions.
Many people are clearly easily impressed by claims that something newer, shinier and 'technologically more advanced' is superior to and more desirable than an older thing.
The socioeconomic system has unfortunately been proven to waste a lot of human thought and effort. The creative effort and marketing effort focuses on popularity and profitability. Neither of those things is actually real. They are both just made-up (perceptions that may not survive thoughtful conscientious consideration of their actual contribution to the advancement of humanity).
The entire system of finance and economics is actually 'made-up'. The results simply depend on the rules created and their monitoring and enforcement. And conscientious, thoughtful evaluation of whether something actually advances humanity to a lasting better future is 'not a required economic or financial consideration'. (Economic games are played by a subset of a generation of humanity who, in their minds, can show that they would have to personally give up more in their time than the costs they believe they are imposing on others in the future. That twisted way of thinking - that one part of humanity can justify a better time for themselves at the expense of others - taken to the extremes that personally desired perception can take such thinking, is all too prevalent among the 'perceived winners' of the current made-up game.)
That can be understood, especially by any leader in politics or industry. And yet the popularity, profitability and perceptions of prosperity that can be understood to be merely made-up can easily trump the development of better understanding of what is actually going on.
That push for perceptions of popularity, profit and prosperity has led many people to perceive 'technological advancements' as desired proofs of prosperity and superiority. These unjustified thoughts develop even though those things have not actually advanced human thoughts or actions toward a lasting better future for all life. Those thoughts even persist when it can be understood that the developments are contrary to the advancement of humanity (the collective thoughts and actions) toward a lasting better future for all life. And in many cases the focus on perceptions of success can distract a mind, or even cause it to degenerate away from thoughts of how to advance to a lasting better future for all of humanity.
That clear understanding that everyone is capable of developing is actually easily trumped by the steady flood of creative appeals attempting to justify and defend the development of new 'more advanced and desired technological things' that do not advance the thinking and action of humanity.
So, the most despicable people are the ones who understand things like that and abuse their understanding to delay the development of better understanding in global humanity of how to actually advance global humanity.
The Climate Science marketing battle is just another in a voluminous history of clear evidence regarding how harmful the pursuit of popularity and profit can be. Desired perceptions can trump rational conscientious thought.
Technological advancement does not mean better (the advancement of humanity being better), except in the minds of people who prefer to believe things like that. And such people will want to believe that more ability to consume and have newer, more technologically advanced things (or simply more unnecessarily powerful or faster things, like overpowered personal transport machines that can go faster than they are allowed to safely go in public) is better. They will willingly believe anything that means they don't have to develop and accept a better understanding of how unjustified their desired perceptions actually are.
-
Bart at 01:00 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
But what about UAH?
Version 6 is a strongly adapted version that shows less warming than before, especially in the past 20 years. The LT trend, according to Spencer, is only 0.11 degrees per decade since 1979, which results in a large gap with 2m observations and also with RSS.
I found a comment here, that says, among others, "They [Spencer and Christy] have continued this lack of transparency with the latest TLT (version 6), which Spencer briefly described on his blog, but which has not been published after peer review." and also with respect to TMT: "I think it's rather damning that Christy used the TMT in his committee presentation on 13 May this year. He appears to be completely ignoring the contamination due to stratospheric cooling."
I am interested in your view on this.
Bart S
-
Alpinist at 00:29 AM on 13 January 2016 | Surface Temperature or Satellite Brightness?
"but it is not our best source of data concerning temperature change at the surface." Which, coincidentally, is where most of us live...
Thanks Kevin for an excellent post.
-
Kevin C at 23:21 PM on 12 January 2016 | Surface Temperature or Satellite Brightness?
No, I didn't contact any other record providers (although I've talked to Hadley people about the ensemble in the past). As far as I am aware only Hadley and RSS have published the relevant analysis to allow the estimation of structural uncertainty.
Doing so for trends requires either a covariance matrix of every month with every other month (which is unwieldy), or an ensemble of temperature realisations. Fortunately Hadley and RSS have done this - it's a killer feature, and sadly underused - but the other providers haven't.
-
barry1487 at 22:15 PM on 12 January 2016 | Surface Temperature or Satellite Brightness?
Very useful. TLT and surface records have been much compared and contrasted, with predilections in Climateball fostering tribal affiliations with either. I can't remember seeing a post that laid out the differences in much detail until now.
Were Spencer or Christy approached for input?
-
ryland at 20:47 PM on 12 January 2016 | Why is the largest Earth science conference still sponsored by Exxon?
No, I don't think governments "affect the way science is conducted" but I do think they affect what science is conducted. For example see here and here
Moderator Response:[PS] Your references show governments fund climate science (as they should). So? The problem? From where I sit, I don't see you making relevant comment on this article, but rather attempting to use this forum for political commentary. Plenty of other blogs for this.
-
ryland at 19:04 PM on 12 January 2016 | Why is the largest Earth science conference still sponsored by Exxon?
In answer to the question from PS@23: the first part of the deleted comment referred to Exxon knowing that many are easily seduced by money. The second part added that governments are aware of this too. My comment @3, made several days ago, mentions Exxon funding both renewables research and campaigns against climate change; I believe the deleted comment added to that.
KR@24, IMO you have no evidence whatsoever for your statements.
Moderator Response:[PS] You are still not linking "pork-barrelling by governments" to anything about climate change science. Your personal opinions on the behaviour of governments are not relevant to this topic unless you think they affect the way science is conducted.
-
Why is the largest Earth science conference still sponsored by Exxon?
ryland - You're claiming your previous (deleted) comment equating Exxon funding to government funding actually applies to a completely different (not to mention off-topic and completely unmentioned) context? Quite an inventive post-hoc justification, IMO.
Pull the other one, it's got bells on.
-
ryland at 15:00 PM on 12 January 2016 | Why is the largest Earth science conference still sponsored by Exxon?
I had no hidden agenda at all, and certainly not about climate science or scientists. The line "The hardest thing to refuse is money" immediately made me think of pork-barrelling and the way in which Australian governments, at both State and Federal level, hand out taxpayers' money to further their own electoral chances. If my comment was taken as "insinuations of fraud or agenda", I think that reflects more on the mindset of the reader than of the writer.
Moderator Response:[PS] And the relevance to any part of climate science is?
-
Tracking the 2°C Limit - November 2015
angusmac - "I only stated (and cited references) that showed that temperatures in the MWP were similar to 1961-1990 mean tempratures."
During the Medieval Climate Anomaly, a period of several hundred years, various regions reached temperatures similar to the latter half of the 20th century. But very importantly, not simultaneously - as per the recent PAGES 2k reconstruction:
There were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age...
[...] Our regional temperature reconstructions (Fig. 3) also show little evidence for globally synchronized multi-decadal shifts that would mark well-defined worldwide MWP and LIA intervals. Instead, the specific timing of peak warm and cold intervals varies regionally, with multi-decadal variability resulting in regionally specific temperature departures from an underlying global cooling trend".
There was no MCA shift in global temperature anomaly comparable to recent changes. This has been pointed out to you repeatedly, with copious documentation by Tom Curtis in particular - your continued insistence on a MCA similar to recent temperatures seems to indicate that you aren't listening to the evidence presented.
-
scaddenp at 14:14 PM on 12 January 2016 | Why is the largest Earth science conference still sponsored by Exxon?
Ryland - as moderator I deleted a comment which was redolent with trolling and sloganeering. If you actually have something germane to say that is relevant to climate science, then try again - with examples, evidence, and why you think what you say is relevant to climate science. Don't bother trolling with subtle insinuations of fraud or agenda.
-
Why is the largest Earth science conference still sponsored by Exxon?
ryland - Ah, implying the old "scientists are willing to lie for the money" canard. I haven't heard that in at least, oh, a week or so. I wonder if that qualifies as "sloganeering"?
Lobbyists who charge by the hour to write in support of CO2, coal, and unbridled fossil fuel use, writings they admit probably won't pass peer review, have nothing in common with academics who submit grant requests in open competition to produce meaningful science.
Professor Frank Clemente, a sociologist from Penn State university, was asked if he could produce a report “to counter damaging research linking coal to premature deaths (in particular the World Health Organization’s figure that 3.7 million people die per year from fossil fuel pollution)”.
He said that this was within his skill set; that he could be quoted using his university job title; and that it would cost around $15,000 for an 8–10 page paper. He also explained that he charged $6,000 for writing a newspaper op-ed.
When asked whether he would need to declare where the money came from, Professor Clemente said: “There is no requirement to declare source funding in the US.”
Yeesh. It's like you're not even trying anymore, ryland.
-
Tom Curtis at 13:27 PM on 12 January 2016 | Climate denial linked to conspiratorial thinking in new study
chriskoz @54, Sheehan writes:
"The paper was entitled "NASA faked the moon landing – therefore (climate) science is a hoax". The abstract of the study states: "Endorsement of a cluster of conspiracy theories [...] predicts rejection of climate science … This provides confirmation of previous suggestions that conspiracist ideation contributes to the rejection of science."
Note the term "conspiracist ideation". The English language is being brutalised in the social sciences to create a false sense of rigour.
When Jussim checked the data, he found that of the 1145 participants in the study, only 10 thought the moon landing was a hoax. Of those who thought climate science was a hoax, almost all of them, 97.8 per cent, did NOT think the moon landing was a hoax."
(Emphasis mine, ellipsis in square brackets mine)
If you look at the underlined sentence, what is claimed by the Lewandowski paper is that:
1) If you are a conspiracy theorist, you are more likely to be a climate change denier.
It does not claim that:
2) If you are a climate change denier, you are more likely to be a conspiracy theorist.
The two claims are quite distinct. One is a particular claim about the population of conspiracy theorists, and makes no particular claim about the population of climate change deniers. The other is a particular claim about the population of climate change deniers and makes no claim about the population of conspiracy theorists.
However, when we look at the evidence presented by Sheehan, it is a statistic about the population of climate change deniers, not about the population of conspiracy theorists. That is, it shows that the data for the Lewandowski "moon landing" paper does not support proposition (2) above. (Actually, it only shows it for a restricted version of proposition (2), as there were a total of 10 conspiracy theories considered by Lewandowski et al.)
For some strange reason, the logician in me wants to insist that refuting 'if B then A' does not refute 'if A then B'. It really does not. Only those who do not understand the meaning of the word "if" could think otherwise.
So the best that can be said of Sheehan's critique (which he copied from McIntyre, JoNova and a host of other 'skeptical' luminaries) is that he is incompetent either at logic, or at reading comprehension, or both.