A brief history of climate targets and technological promises

This is a re-post from Carbon Brief by Prof Duncan McLaren

Over the past three decades, the received wisdom on how to approach climate targets has changed several times.

From initial ideas of climate stabilisation, suggested approaches have focused on percentage CO2 emissions cuts, atmospheric CO2 concentrations, carbon budgets and today’s dominant framing of temperature rise limits.  

It might seem that this successive reframing reflects an improving scientific representation of what it means to avoid dangerous human-caused climate change, interpreted through enhanced modelling power and capacities, and in the light of better scientific knowledge regarding climate impacts. 

However, my research into this history, published in Nature Climate Change with my coauthor Dr Nils Markusson and part of a project examining the cultural political economy of carbon removal, suggests that the process has been much less rational – and more problematic – than this explanation might imply.

In particular, our analysis highlights that each shift in target framing has opened the door to new hopes of future technological solutions, such as widespread nuclear power or carbon capture and storage. Yet, while these technologies have promised much, in practice the promises themselves have delayed the immediate acceleration of action to change behaviours or transform economies.

Early stabilisation targets

In our paper, we identify five “phases” through the history of climate targets, detailing how those targets were formed and framed. 

We begin in the late 1980s and early 1990s, when the United Nations Framework Convention on Climate Change (UNFCCC) was negotiated. At the Rio Earth Summit in 1992, the UN settled on a goal of “stabilising atmospheric concentrations of greenhouse gases (GHGs) at a level commensurate with avoiding dangerous anthropogenic climate change”. 

UK Prime Minister John Major addresses the plenary at the United Nations Conference on Environment and Development, Rio de Janeiro, Brazil. June 1992. Credit: Sue Cunningham Photographic / Alamy Stock Photo

Around this time, general circulation climate models (GCMs) and the first simple integrated assessment models (IAMs) allowed modellers to begin to explore both the impacts of emissions and the economic costs of emissions reduction techniques. However, assessing specific policy interventions with these early models was difficult, and responses were often discussed in very broad-brush terms.

In this period technological proposals for tackling climate change included suggestions for ocean iron fertilisation and for large-scale deployment of nuclear power.

The UK, for example, created a “non-fossil-fuel obligation” on electricity utilities, intended to support both nuclear power and renewables. Yet a potential nuclear revival stalled in the face of high costs and public concern.

While the Rio Earth Summit aimed for stabilising GHG concentrations, it left the job of turning this aim into actionable targets to subsequent negotiations. This was picked up by the Kyoto Protocol in 1997 – the second phase in our tour through history.

Target framing | Key event(s) | Models, scenarios and promises
Stabilisation | Rio 1992 | Early GCMs. Nuclear power.
Percentage emissions cuts | Kyoto 1997 | Early IAMs, SRES. Fuel switching and CCS.
Atmospheric concentrations | Copenhagen 2009 | IAMs & RCPs. BECCS.
Cumulative budgets | Durban 2011, Doha 2012 | C-budget models and inverted IAMs. GGR to balance residual emissions.
Outcome temperatures | Paris 2015 | Linked Earth system models, SSPs. GGR to reverse overshoots, SRM.

Percentage emissions reductions

Around and after the Kyoto Summit in 1997, percentage emissions reductions were the dominant target framing. However, as with subsequent framings, it was not without debate over the most appropriate levels and timings.

Promises of energy efficiency and “fuel switching” to cleaner alternatives formed part of many countries’ efforts and played a big role in the “SRES” scenarios, which looked at four different possible future trajectories of population, economic growth and greenhouse gas emissions. The UK launched its Climate Change Levy in 2001, for example, while the EU Emissions Trading Scheme began in 2005.

However, this period also saw the emergence of promises of “carbon capture and storage” (CCS) on fossil energy. (This included, for example, a report from the Intergovernmental Panel on Climate Change (IPCC) in 2005.) CCS is, effectively, a “tailpipe” measure intended to reduce emissions, yet it enabled promises of sustained use of coal (and later gas) for power generation, through the idea of “capture-ready” power stations. But, like the previous generation of nuclear promises, the reality of delivery of CCS was – and remains – very limited.

When modellers introduced the emerging technologies of CCS into IAMs this enabled the models to project cheaper pathways to the same climate outcomes. CCS was selected preferentially by the model algorithms because the simulated costs of continued expansion and use of fossil-fuel power – linked to retrofitting with CCS – were lower than those associated with phasing out electricity generation using coal and gas.

With the heavy lifting on emissions cuts delivered by future CCS, such projections also indicated a slower rate of decline in emissions in the immediate future, which also reduced the simulated cost. But as global emissions continued to grow, CCS moved from being an economic preference to being a central component in the vast majority of emissions reduction scenarios. 

The United Nations Framework Convention on Climate Change in Kyoto (COP3). December 1997. Credit: Aflo Co. Ltd. / Alamy Stock Photo

In other respects, modelling continued to become more sophisticated. It moved on to establish direct links between economic activity and the concentration of CO2 in the atmosphere. This effort produced the “Representative Concentration Pathways” (RCPs), which describe different levels of greenhouse gases and other radiative forcings that might occur in the future.

Atmospheric concentrations

In the third phase of international climate targets – the run-up to the Copenhagen climate summit in 2009 – there was widespread debate over an appropriate goal for atmospheric CO2 concentrations. This debate saw a shift from earlier recommendations of a 550ppm safe level to 450ppm, with some calling for a 350ppm target.

Alongside modelling that targeted particular concentrations, rather than particular emissions cuts, another novel technological promise – bioenergy with CCS (BECCS) – gained significant traction. BECCS was proposed as a way to achieve net negative emissions, by capturing and sequestering the CO2 released when biomass is burned for power.

At this point, BECCS was almost a purely conceptual technology, but the models already featured both bioenergy and CCS: combining them was a relatively simple task. Like CCS before it, BECCS promised ways to cut the costs of meeting a particular target, allowing the justification of a slower transition by its promise to effectively reverse emissions at a future date.

In the following years, BECCS remained a central technology in the next phase of climate targets.

Carbon budgets

Even before the idea of CO2 concentrations was fully embedded in the UNFCCC or IPCC reporting, the fourth phase of climate targets – carbon budgets – entered the debate. The RCPs were only formally launched in 2014, for example, yet the UK began setting periodic five-year carbon budgets under its Climate Change Act in 2008.

The concept of a cumulative carbon budget is to set a total limit of CO2 that can be emitted while still keeping global temperature rise below a certain level, such as 1.5C or 2C. 
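The arithmetic behind a cumulative budget is simple, and can be sketched in a few lines. The figures below are hypothetical round numbers for illustration only, not values from the paper:

```python
# Illustrative carbon-budget arithmetic (hypothetical round numbers,
# not figures from the paper or from the IPCC).
total_budget_gt = 1000.0    # assumed total CO2 budget for a warming limit, GtCO2
already_emitted_gt = 600.0  # assumed cumulative historical emissions, GtCO2
annual_emissions_gt = 40.0  # assumed current annual emissions, GtCO2/yr

# Whatever has already been emitted is subtracted from the budget...
remaining_gt = total_budget_gt - already_emitted_gt

# ...and dividing by the current emissions rate gives the time left
# before the budget is exhausted if emissions do not fall.
years_left_at_current_rate = remaining_gt / annual_emissions_gt

print(remaining_gt)                # 400.0 GtCO2 remaining
print(years_left_at_current_rate)  # 10.0 years at current rates
```

This framing is what makes "overshoot" a meaningful concept: once the remaining budget reaches zero, any further emissions must notionally be paid back later by removals.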

COP President consults with delegates from the Russian Federation, Ukraine and Belarus before the closing plenary at COP18 in Doha. November 2012. Credit: IISD/ENB

The agreement of a limited second commitment period for the Kyoto Protocol (covering 2013-20) at the Doha climate summit in 2012 signalled the beginning of the end for percentage-based targets. There were also calls for a “scientifically calculated carbon budget” from some negotiators.

As with atmospheric CO2 targets, carbon budgets allowed the use of carbon removal technologies as a means to tackle residual emissions in future pathways. In addition, they enabled promises of future carbon removal as a means to reverse any “overshoot” of the budget.

And there is a fine line between inadvertent and planned overshoot. Once again in such budget-based analyses and modelling, we saw promises of future technologies – variously described as carbon dioxide removal (CDR), negative emissions techniques (NETs) and greenhouse gas removal (GGR) – effectively keeping alive hopes that climate goals could be met, while simultaneously enabling continued prevarication over emissions reductions.

Temperature targets

Despite widespread publicity around the idea of a “trillion-tonne” carbon budget – about half of which had already been emitted – policy began to consolidate around a focus on temperature outcomes. This is the fifth, and current, phase of our climate-target history. 

With more powerful models, researchers were able to better quantify the probabilities of achieving certain warming limits. Temperatures became more closely tied to carbon budgets and concentrations. 

The idea of 2C as a rough guardrail, although first suggested in the mid-1970s, only became a focal point for policy after Copenhagen in 2009. The shift to a focus on 1.5C came in the lead up to, and at, the Paris talks in 2015. This reflected a changing view on the boundary to “dangerous climate change” – especially in the Global South, where higher resolution modelling had made expected impacts more noticeable.

However, in incorporating the higher ambition, IAMs became more reliant on negative emissions, with the majority of NETs simply substituting for other mitigation, rather than adding to it. In this period other NETs began to feature more heavily as BECCS promises were constrained by concerns over land availability and competition with food production.

US Secretary of State Hillary Clinton speaks at a press conference at COP15, Copenhagen. 17 December 2009. Credit: Kristian Buus/Greenpeace / Alamy Stock Photo

This same pattern persists in the post-Paris era. Many national and business targets are now framed as “net-zero” carbon, explicitly – or implicitly – achieved through substantial future deployment of carbon removal. Globally, however, the dominant target framing is now temperature outcomes.

Cost optimisation

Our analysis highlights a persistent co-evolution of climate politics and climate science, which still continues. Rather than stimulating the development and practical deployment of new technologies which help mitigate climate change, the climate policy system tends to conjure promises of future technologies. These promises both respond to, and enable, continued delays in mitigation, yet rarely deliver in practice. We call them “technologies of prevarication”.

In our paper, we argue that unless this tendency is recognised and addressed, it is likely to continue, with the most obvious candidate for a new technology being solar geoengineering.

Our analysis suggests that the emergence of BECCS – associated with a shift away from targets being framed in terms of “emissions cuts” and towards “atmospheric concentrations” or “carbon budgets” – was not a one-off problem solved by better information.

The inclusion of BECCS in pathways models made carbon budgets appear achievable, despite continuing international delays in delivering near-term cuts in emissions. And, despite subsequent critical analysis, very few models have backed away from including negative emissions.

One contributor to this problem, which remains unresolved, is that IAMs focus on “cost optimisation” with time discounting. This means they favour future promises of action over plausible, but potentially costly, near-term interventions.
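The mechanism can be made concrete with a stylised present-value calculation. The discount rate, costs and timings below are purely illustrative assumptions, not taken from any actual IAM:

```python
# Stylised sketch of time discounting in cost-optimising models.
# All numbers are illustrative assumptions, not from any real IAM.

def present_value(cost, year, discount_rate=0.05):
    """Discount a cost incurred `year` years from now back to today."""
    return cost / (1 + discount_rate) ** year

# The same nominal mitigation cost, incurred now versus in 30 years.
cost_now = present_value(100.0, year=0)     # 100.0
cost_later = present_value(100.0, year=30)  # roughly 23.1

print(cost_now, cost_later)
```

Under such a comparison, the model "prefers" the deferred option even though the real-world cost is the same, which is one route by which promised future technologies come to displace near-term cuts in cost-optimised pathways.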

A similar mechanism boosted early promises of nuclear power and then fossil CCS. In each case, the delays in mitigation made the overall outcome appear cheaper to deliver, but as time passed, neither significant emissions reductions nor the promised technological developments emerged. 

Technological promises that had been adopted in models for cost-optimisation reasons became unavoidable essentials in delivering climate targets, even when practical or political shortcomings were revealed.

Public opposition and high costs slowed or prevented the development of nuclear power and CCS, while the potential for conflict with biodiversity and food production raised warning flags about BECCS. None of these technologies has delivered anything like the levels initially projected and, arguably, even today none has the foundations in place for rapid carbon impact.

In the real world, for example, BECCS exists mainly as a cheap niche application that captures CO2 from ethanol biofuel fermentation and sells it for use in enhanced oil recovery. Trials at Drax notwithstanding, commercial large-scale BECCS on biomass combustion remains unproven.


Our analysis shows how prevarication can emerge from the coevolution of technological promises, modelling techniques and political aspirations, especially around the framing of targets. 

This does not rely on deliberate efforts to slow action, although technological solutions are often favoured by industries involved in producing fossil fuels. Oil companies, for example, are enthusiastic investors in direct air capture technologies to recover carbon from the atmosphere.

We also recognise that this is a challenging problem for modellers and engineers, particularly when faced with the possibility of a very useful new technology and the restrictions of tight carbon budgets. There are good reasons why we might overlook, or postpone consideration of, such complex interactions and simply advocate for new technologies as a way to broaden our climate arsenal.

But we believe it is essential to acknowledge this problem and seek to break the pattern, for two key reasons. 

First, merely adding new technologies is unlikely to bring the climate challenge under control, unless we also deliver behavioural, cultural and economic transformations. 

Second, technological promises allow those benefiting from the continued exploitation of fossil fuels and the comfortable lifestyles it enables to justify those practices to themselves. This allows their activities to impose ever greater burdens and risks on those most vulnerable to climate change – today’s poor and future generations.

McLaren, D. & Markusson, N. (2020). The co-evolution of technological promises, modelling, policies and climate change targets. Nature Climate Change, doi:10.1038/s41558-020-0740-1

Posted by Guest Author on Tuesday, 16 June, 2020

The Skeptical Science website by Skeptical Science is licensed under a Creative Commons Attribution 3.0 Unported License.