# Sun & climate: moving in opposite directions

## What the science says...


The sun's energy has decreased since the 1980s, but the Earth keeps warming faster than before.

## Climate Myth...

It's the sun

"Over the past few hundred years, there has been a steady increase in the numbers of sunspots, at the time when the Earth has been getting warmer. The data suggests solar activity is influencing the global climate causing the world to get warmer." (BBC)

Over the last 35 years the sun has shown a cooling trend. However, global temperatures continue to increase. If the sun's energy is decreasing while the Earth is warming, then the sun can't be the main driver of temperature.

Figure 1 shows the trend in global temperature compared to changes in the amount of solar energy that hits the Earth. The sun's energy fluctuates on a cycle that's about 11 years long. The energy changes by about 0.1% on each cycle. If the Earth's temperature were controlled mainly by the sun, it should have cooled between 2000 and 2008.

*Figure 1: Annual global temperature change (thin light red) with 11 year moving average of temperature (thick dark red). Temperature from NASA GISS. Annual Total Solar Irradiance (thin light blue) with 11 year moving average of TSI (thick dark blue). TSI from 1880 to 1978 from Krivova et al. 2007. TSI from 1979 to 2015 from the World Radiation Center (see their PMOD index page for data updates). Plots of the most recent solar irradiance can be found at the Laboratory for Atmospheric and Space Physics LISIRD site.*

The solar fluctuations since 1870 have contributed a maximum of 0.1 °C to temperature changes. In recent times the biggest solar fluctuation happened around 1960. But the fastest global warming started in 1980.
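The size of that solar contribution can be sanity-checked with a back-of-the-envelope Stefan-Boltzmann calculation. This is a no-feedback blackbody sketch; the TSI and albedo values are standard round numbers, not taken from this article:

```python
# Rough estimate of the temperature swing from a ~0.1% change in TSI.
# Assumes a simple blackbody Earth with albedo 0.3 and no feedbacks.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1361.0          # total solar irradiance, W m^-2 (round number)
ALBEDO = 0.3          # fraction of sunlight reflected

def equilibrium_temp(tsi):
    """Effective radiating temperature where absorbed = emitted."""
    absorbed = tsi * (1 - ALBEDO) / 4.0   # spherical averaging
    return (absorbed / SIGMA) ** 0.25

dT = equilibrium_temp(TSI * 1.001) - equilibrium_temp(TSI)
print(f"Temperature change for a 0.1% TSI increase: {dT:.3f} K")
```

A 0.1% irradiance change moves the effective temperature by only a few hundredths of a degree, consistent with the small solar contribution described above.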

Figure 2 shows how much different factors have contributed to recent warming. It compares the contributions from the sun, volcanoes, El Niño and greenhouse gases. The sun adds 0.02 to 0.1 °C. Volcanoes cool the Earth by 0.1-0.2 °C. Natural variability (like El Niño) heats or cools by about 0.1-0.2 °C. Greenhouse gases have heated the climate by over 0.8 °C.

*Figure 2 Global surface temperature anomalies from 1870 to 2010, and the natural (solar, volcanic, and internal) and anthropogenic factors that influence them. (a) Global surface temperature record (1870–2010) relative to the average global surface temperature for 1961–1990 (black line). A model of global surface temperature change (a: red line) produced using the sum of the impacts on temperature of natural (b, c, d) and anthropogenic factors (e). (b) Estimated temperature response to solar forcing. (c) Estimated temperature response to volcanic eruptions. (d) Estimated temperature variability due to internal variability, here related to the El Niño-Southern Oscillation. (e) Estimated temperature response to anthropogenic forcing, consisting of a warming component from greenhouse gases, and a cooling component from most aerosols. (IPCC AR5, Chap 5)*

Some people try to blame the sun for the current rise in temperatures by cherry picking the data. They only show data from periods when sun and climate data track together. They draw a false conclusion by ignoring the last few decades when the data shows the opposite result.

**Basic rebuttal written by Larry M, updated by Sarah**

**Update July 2015**:

Here is a related lecture-video from Denial101x - Making Sense of Climate Science Denial

Last updated on 2 April 2017 by Sarah.

Tom Dayton at 12:30 PM on 25 January, 2015

Will somebody please add to the "It's the Sun" rebuttal a section explicitly focused on addressing the sub-myth that the Earth's temperature still is catching up to the TSI increase that peaked around 1960?

At the least, that section should show that TOA energy imbalance has continued to grow since then, in contrast to its shrinkage that would be required if insulation was constant, since input has been constant (or even decreasing) and increasing temperature requires increasing output. For example, a good graph of imbalance was pointed to by jja in a comment.

It would be nice if that new section also explained that temperature response lag to increased TSI was taken into account by the many regression analyses.

Possibly relevant existing posts: "Has Earth Warmed As Much As Expected?" and "How We Know Global Warming Is Happening: Part 2." I recall that John wrote another relevant post that included energy imbalance, but I can't find it now.

Tom Dayton at 01:44 AM on 26 January, 2015

HK also commented on energy imbalance, with material that might be used in the new section I requested be added to "It's the Sun."

Tom Dayton at 02:57 AM on 19 February, 2015

Climate Dialogue has a good and recent overview of the Sun's potential role in Earth's temperature increase--not just with regard to the effect of a new Maunder Minimum--in its "Introduction" to its New Maunder Minimum topic.

Dan Pangburn at 06:53 AM on 5 March, 2015

If TSI is a forcing, shouldn't the comparison on the graph be between the temperature change and the time-integral of the TSI which exceeds break-even?

KR at 09:46 AM on 5 March, 2015

Dan Pangburn - When total forcing changes, so does the climate in response. That changes the break-even point, and the imbalance goes away.

Now if the total forcing continues to change, as we see with our GHG emissions, the climate will follow along (albeit with a lag due to thermal inertia and slower feedbacks, primarily ocean heat content on the decadal level), but if the forcing ceases to change any imbalance will decay accordingly. There is no 'fixed offset' from a TSI change in the presence of a dynamic climate response.

Short answer - a step change in forcing will cause a climate change, after which there won't be an imbalance to integrate.
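KR's "step change then decay" description can be illustrated with a minimal one-box energy balance model. This is only a sketch; the heat capacity and feedback values below are illustrative round numbers, not fitted to any data:

```python
# One-box energy balance: HEAT_CAP * dT/dt = F - ALPHA * T.
# A step change in forcing creates an imbalance that decays away
# as temperature approaches the new equilibrium F / ALPHA.
HEAT_CAP = 8.0   # effective heat capacity, W yr m^-2 K^-1 (illustrative)
ALPHA = 1.2      # feedback parameter, W m^-2 K^-1 (illustrative)
F = 1.0          # step forcing applied at t = 0, W m^-2

dt = 0.1         # time step in years
T = 0.0
for _ in range(int(100 / dt)):      # integrate for 100 years
    imbalance = F - ALPHA * T       # net top-of-atmosphere flux
    T += imbalance / HEAT_CAP * dt

print(f"T after 100 yr: {T:.3f} K (equilibrium {F / ALPHA:.3f} K)")
print(f"remaining imbalance: {F - ALPHA * T:.5f} W m^-2")
```

After a few time constants (here HEAT_CAP/ALPHA, a few years) the imbalance has decayed to essentially zero, leaving nothing further to integrate.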

Dan Pangburn at 00:23 AM on 10 March, 2015

KR - To have an effect, forcings must exist for a duration. The time-integral of the forcing accounts for both a variation in magnitude and the duration.

Break-even is defined (by me) as the constant net forcing that would result in the same average global temperature (AGT) at the end of a duration as existed at the beginning. For example, the duration could start at some time during the MWP and end at a more recent time when the AGT was the same.

If net forcing exceeds break-even, AGT will rise (AGT at the end of the duration will be higher than it was at the beginning) or if it is less than break-even, AGT will decrease. (Break-even is not the static (steady-state) solution to a dynamic heat transfer problem)

The net (or total) forcing is the algebraic sum of all constituent forcings. The constituent forcings could each vary but the algebraic sum of the time-integrals of the individual constituents is the same as the time-integral of the algebraic sum of the constituents (for the same time period).

The time-integral of the algebraic sum of the constituent forcings is the energy change of the planet (for the duration of the forcings) and the energy change divided by the effective thermal capacitance (sometimes called thermal inertia) of the planet is the AGT change (during the time period).

If TSI is considered to be one of the constituent forcings, then its effect on AGT (as one of the constituents, its contribution to the total AGT change) is determined by its time-integral. To be conceptually correct, when on the same graph, the time-integral of TSI is the correct metric for comparison with AGT change.

Tom Curtis at 06:48 AM on 10 March, 2015

Dan Pangburn @1106:

1) Your formulation ignores Outgoing Longwave Radiation (OLR), which is to a close approximation a linear function of Global Mean Surface Temperature (GMST). You also ignore change in heat content. Specifically:

αΔT = ΔF - ΔQ,

where ΔT is the change in temperature, ΔQ is the change in heat content, and ΔF is the change in forcing, and α is the climate feedback parameter, ie, the change in OLR in Watts per meter squared per unit change in temperature in degrees Kelvin. (The climate feedback parameter should not be confused with the climate sensitivity parameter, λ, which is 1/α.)

Because OLR is a function of temperature, you can in theory have identical forcing histories with different temperature histories and end up with a different final temperature as a result. Ergo, your concept of "break-even" is undefined. There is no unique integral of forcing history such that given that forcing history the temperature will always be the same at the initial and final points of the period of integration.

2) Ignoring point (1), if your analysis is correct, then if we have a period, t, over which we have an integral of forcing, then we also have two non-overlapping periods of length t/2 in which the same reasoning applies.

Now consider three possible forcing histories, each with the same integral of forcing. In history A, forcing is constant over the full period at "break even". In history B, forcing starts at half of the level in history A, and increases linearly to 1.5 times the forcing in history A. In history C, forcing is the mirror image of history B, starting high and ending low. In each case, the integral of forcing over the full history is identical, and at break even.

Given your reasoning, however, the integrals of forcing for the first half of t are 0.5, 0.375 and 0.625 for A, B and C respectively, treating the forcing integral over the full period as being 1. Conversely, the integrals over the second period are 0.5, 0.625 and 0.375 respectively. Ergo, according to your reasoning, temperatures will stay constant in history A, initially fall and then rise in history B, and initially rise and then fall in history C. Ergo, according to your theory, we can distinguish whether a given forcing is an adequate account of a temperature change by tracking not just the integral, but the integral over subunits of the total time.

Indeed, according to your theory, if we make the subunits the smallest value for which we have clear resolution of the data, temperatures over those subunits should track the integral of forcing over those subunits. In fact, ignoring noise, if a given factor is the dominant forcing, temperature should track the actual forcing with high correlation.

But, of course, temperature does not track TSI with high correlation. That is why you introduced your theory to begin with. Ergo, your theory actually disproves your contention unless you deliberately avoid applying it critically. That is, your argument only looks good by avoiding detailed analysis.

Dan Pangburn at 00:38 AM on 11 March, 2015

Tom - It appears that your equation has forcing, in Joules per sec (aka watts), subtracted from energy, in Joules. That would be like subtracting your speed in mph from your distance traveled, in miles.

Perhaps it is unclear that the beginning and ending temperatures are the same in the definition of break-even. Given that requirement, the time-integral of the forcing from beginning to end must be zero. Then each of the periods A, B, and C must (by definition) begin and end at the same temperature and the time-integral of the forcings for each of them must all also be zero.

This has only to do with the meaning of the word 'forcing' as used in discussing climate change.

Tom Curtis at 01:16 AM on 11 March, 2015

Dan Pangburn @1108:

1) ΔQ is actually average rate of change of heat content per unit area. That means all units on the right hand side are in terms of Watts per meter squared, and on the left hand side the units are degrees Kelvin times Watts per meter squared per degree Kelvin = Watts per meter squared. I apologize for the misstatement. My misstatement in no way, however, justifies your failure to account for either OLR or ΔQ in your formulation.

2) Forcing is by definition "...is the change in the net, downward minus upward, radiative flux (expressed in W m–2) at the tropopause or top of atmosphere due to a change in an external driver of climate change, such as, for example, a change in the concentration of carbon dioxide or the output of the Sun" (AR5). As the forcing is a change, it must be specified relative to a particular index time. By convention, and by specification in AR5, that time is 1750. It can, however, be any time. There is no need for it to be the start time of any given period. Ergo, your definition of "break-even" is satisfied by my examples on condition that ΔQ = 0 at the initial point.

That, however, is entirely a distraction. My example can be easily reworked so that the forcing in Scenario A is 0, that in Scenario B starts at - N and ends at + N, with a linear trend throughout, and that in Scenario C starts at + N and ends at - N, with a linear trend throughout. Once N and the duration are specified, the logical consequences are the same. That, I believe, is self evident, so I wonder why you are distracting with irrelevant (and fallacious) trivia rather than actually trying to deal with the argument.

Dan Pangburn at 01:50 AM on 12 March, 2015

Until you get the 'trivia' right, it's not a distraction.

A forcing must act for a period of time to have an effect on average global temperature (AGT). The forcing is not the difference between what it was at one time (1750) compared to what it is at another time (now). To determine the effect that a forcing has on AGT requires the time-integral of the difference between the forcing and the break-even forcing. If the forcing goes from 0.5 below break-even linearly to 0.5 above break-even during the time period, the time-integral for that time period is zero.

KR at 03:29 AM on 12 March, 2015

Dan Pangburn - The integrated imbalance is of great interest, and is perhaps best seen in ocean heat content changes, which in fact tell us what the long term imbalances have been. But the direction of change is driven by the sign (and magnitude) of that forcing imbalance against the thermal inertia of the climate, hence the graph in the (Basic) opening post showing changes in solar forcing is indeed quite relevant.

However, I have to say that it's very unclear to me what your actual point(s) might be in this exchange. Are you arguing for a larger influence from solar changes than is generally accepted? Do you have an alternate graph to, in your opinion, better display the information already presented?

Tom Curtis at 07:25 AM on 12 March, 2015

Dan Pangburn @1110:

You say:

Yet I have now corrected for the trivial points you raised, and you are still not responding to the thrust of the argument. Ergo, your intent was not to correct the trivia but to distract from the thrust of the argument, which you are unable to answer.

You go on to say:

Except that is plainly false. I quoted from the IPCC AR5 WG1 glossary as to the definition of forcing. You can trace that definition back through the reports, and through the scientific literature if you want, but the definition is as I have given it. If you want to introduce a different concept into climate science, introduce a new term and define it explicitly. Stop using ambiguity to conceal the weakness of your argument. Alternatively, if you want to use the currently accepted term in climate science, "forcing", use it as currently defined, and stop trying to give it an idiosyncratic definition.

I will note that there are very good reasons for the standard definition. Explicitly, your definition only works if there is no change in temperature due to other reasons (ie, no other forcings, and no internal variability). It also only works if the time integral of (OLR minus initial OLR) is zero over the "break even" period. Further, it depends on there being intervals of zero net change in OLR, temperature and heat flux to benchmark the 0 value of the forcings. When you show me that period over which we have reasonably accurate measurements of all relevant values, I might consider using your definition.

Finally, you comment that:

Well, yes. But the time integral over the first half of the period is negative, and the time integral over the second half is positive. Ergo, you are compelled (if you wish to be reasonable) to accept that even with your aberrant and idiosyncratic definition of forcing, the temperature histories of scenarios A, B and C will be different. That being the case, only looking at the initial temperature and final temperature to determine whether a particular forcing could be the main driver of change in GMST is to simply avoid the majority of the evidence. It is to argue by hiding data, not by examining it.
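Tom Curtis's point that scenarios with identical forcing integrals yield different temperature histories can be checked numerically with a minimal one-box energy balance model (C dT/dt = F - αT). The heat capacity and feedback constants below are illustrative, not fitted to any observations:

```python
# Three forcing histories with the same (zero) time-integral:
# A: constant 0; B: ramps from -1 to +1 W/m^2; C: the mirror image.
# A one-box model HEAT_CAP * dT/dt = F - ALPHA * T produces three
# different temperature paths and endpoints, so the integral alone
# cannot determine the temperature response.
HEAT_CAP = 8.0   # W yr m^-2 K^-1, illustrative
ALPHA = 1.2      # W m^-2 K^-1, illustrative feedback parameter
YEARS, dt = 50, 0.1
n = int(YEARS / dt)

def run(forcing):
    """Euler-integrate the one-box model from T = 0."""
    T, path = 0.0, []
    for i in range(n):
        T += (forcing(i * dt) - ALPHA * T) / HEAT_CAP * dt
        path.append(T)
    return path

A = run(lambda t: 0.0)                      # scenario A: break-even
B = run(lambda t: -1.0 + 2.0 * t / YEARS)   # scenario B: cold-to-hot ramp
C = run(lambda t: 1.0 - 2.0 * t / YEARS)    # scenario C: hot-to-cold ramp

print(f"final T: A={A[-1]:.3f}, B={B[-1]:.3f}, C={C[-1]:.3f} K")
```

Scenario A stays at break-even throughout, while B ends warm and C ends cool, despite all three forcing integrals being zero.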

Dan Pangburn at 01:30 AM on 13 March, 2015

KR - I don't believe that atmospheric CO2 increasing from 3 parts in 10,000 to 4 parts in 10,000 has significantly changed the way that the oceans absorb sunlight.

My only point in this discussion is, to be a meaningful comparison, the temperature change should be compared to the time-integral of the forcing instead of the forcing itself.

Tom - It is puzzling why you declare that my definition of forcing is bogus when I have not even defined forcing. I assumed that everyone knew what constituted a forcing. My understanding is no different from AR5 (except my analysis has found that CO2 has no significant effect on climate).

I HAVE defined 'break-even'.

If you cannot see that the energy change (which, when divided by effective thermal capacitance, is temperature change) is the time-integral of the energy change rate (AKA net forcing) this isn't going anywhere and you are destined to wonder why the average global temperature isn't increasing.

Response: [PS] And your analysis is published where? Time to show us some data, I think. It is pretty hard to accept the word of someone who cannot calculate the radiative effect from an increase in CO2 without some pretty convincing mathematical analysis including all definitions used.

Rob Honeycutt at 01:58 AM on 13 March, 2015

Dan @1113... It's not a matter of "belief." You have to understand the physics involved. For one, atmospheric concentrations of CO2 don't affect incoming radiation that warms the ocean.

KR at 02:40 AM on 13 March, 2015

Dan Pangburn - Reality doesn't care about beliefs.

With respect to the significance of CO2 concentration changes, I suggest reading the CO2 is just a trace gas thread. Your statement sounds like an argument from incredulity. Atmospheric GHGs (active in the IR) have very little effect on how the oceans absorb sunlight. But by warming the surface atmosphere, they have a significant effect on how fast the oceans lose energy to the atmosphere, and hence create a forcing imbalance on the oceans themselves. See the discussion here.

"...you are destined to wonder why the average global temperature isn't increasing." What? How can you possibly claim this? There are short term variations in atmospheric temperatures, but if you look at the global temperatures including the oceans, or even just examine a sufficiently long period for statistical significance in atmospheric temperatures, they are indeed increasing. That statement of yours is nonsense.

KR at 02:42 AM on 13 March, 2015

Dan Pangburn - "...my analysis has found that CO2 has no significant effect on climate..." Then, with all due respect, your analysis is simply wrong.

Dan Pangburn at 00:35 AM on 14 March, 2015

Rob - I agree and restate: Atmospheric CO2 increasing from 3 parts in 10,000 to 4 parts in 10,000 cannot significantly change the rate at which the oceans absorb sunlight.

KR - The effect of CO2 is not the point of discussion here.

Temperature change, in degrees K, multiplied by the effective thermal capacitance (thermal inertia?), in Joule sec/m/m/K, results in units Joule sec/m/m.

Forcing is in Joule/m/m.

My only point here is that it is misleading to compare these on the same graph. The correct comparison is between the temperature change and the time-integral of the net (you can call it total) forcing.

KR at 01:27 AM on 14 March, 2015

Dan Pangburn - If CO2 isn't the point of discussion (or rather, the relative influences of anthropogenic GHGs and the myth that 'it's the sun' is responsible for all recent climate changes), then why did you bring it up? Particularly when your claim is so unsupported?

In the meantime, since we are concerned with changes in temperature, graphing those against changes in TSI is entirely appropriate to investigate correlations.

Regarding the oceans, both Rob and I have agreed that GHGs have little effect on how the oceans absorb SW radiation - but you seem to be missing the physics where GHG changes greatly affect how the oceans lose that energy, causing a forcing imbalance and therefore warming the oceans.

Climate temperatures are a balance between incoming energy gain and outgoing loss scaled by the Stefan-Boltzmann relationship, and changes in a balance can come from a finger on either side of the scales.

Dan Pangburn at 03:53 AM on 14 March, 2015

KR - To see the effect that TSI has on temperature requires the time-integral of TSI. Without even that trivial science skill, further discussion is useless.

Response:[PS] By all means feel free to link to or post what you mean by a "time integral of TSI". Be sure to do the same for the CO2 forcing.

KR at 04:48 AM on 14 March, 2015

Dan Pangburn - "...further discussion is useless." I'm afraid I would have to agree.

Tom Curtis at 10:23 AM on 14 March, 2015

As Dan Pangburn does not appear interested in following reason, I thought I would short cut the argument. His claim is that the integral of TSI explains the temperature history since 1880. Therefore, I took the record of TSI forcing used in Kevin C's simple response function climate model (default setting). I tested the regression of offsets of that forcing to 1960 to determine which best correlated with the GISS LOTI. As it happened, 0 offset was best. I then regressed the resulting integral of TSI against the GISS LOTI up to 1960, and projected the regression on to 2010:

For the record, the correlation over the full interval is 0.917 and the r squared is 0.841. I did not calculate the Root Mean Squared Error, but as you can see it is lousy. Sufficiently so as to falsify the model.

For comparison, here are the full forcings with a simple, two box response function as shown with default settings minus ENSO from Kevin's model:

R squared is given as 0.877. Better than the integral of TSI, but not stunningly so. The overall fit, however, smashes the Pangburn model. If you hold to the quaint notion that scientific results should be determined by empirical evidence, then there is no question as to which model is superior.

Dan Pangburn may not be happy with my regression. If not, however, it is incumbent on him to do better - and to tell us how he did it. Absent such an attempt, his counter theory is not science. It is merely a thought bubble. And until he does better, showing us the graph of the regression and explaining his methods, we are quite right to ignore that thought bubble.
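The regression-and-projection test Tom Curtis describes can be sketched in code. The series below are synthetic stand-ins (the actual TSI integral and GISS LOTI data are not reproduced here, and the 0.57 cycle frequency is arbitrary); the point is the method of calibrating on an early window and projecting forward:

```python
# Sketch of the method: regress a temperature-like series on the
# running integral of a cyclic TSI-like series over an early
# calibration window, then project over the full record.
# Both series are synthetic stand-ins for illustration only.
import math

n = 130                                            # "years" 1880-2010
tsi_anom = [math.sin(0.57 * j) for j in range(n)]  # cyclic TSI anomaly
x = [sum(tsi_anom[:i]) for i in range(n)]          # its running integral
y = [0.008 * i + 0.1 * math.sin(0.57 * i) for i in range(n)]  # trend + cycle

def ols(xs, ys):
    """Least-squares slope and intercept."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    a = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))
    return a, my - a * mx

a, b = ols(x[:80], y[:80])                # calibrate on the early window
ss_res = sum((y[i] - (a * x[i] + b)) ** 2 for i in range(n))
mean_y = sum(y) / n
ss_tot = sum((v - mean_y) ** 2 for v in y)
r2 = 1 - ss_res / ss_tot
print(f"R^2 of the projection over the full record: {r2:.2f}")
```

Because the integrated cyclic series is bounded while the temperature-like series trends upward, the early-window fit projects poorly over the full record, which is the shape of the failure Tom Curtis reports for the Pangburn model.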

Response:[PS] As you pointed out earlier, there is rather large gap in Dan's physical understanding. Are you going to put up the CO2 time integral as well?

Tom Curtisat 12:09 PM on 14 March, 2015PS inline @1121, no, there is no point in putting up the time integral of CO2 forcing. The correct relationship is Heat Content (not temperature) to the time integral of (Incoming energy - outgoing energy). CO2 changes the time integral of outgoing energy by reducing OLR. Increasing temperature changes the time integral of outgoing energy by increasing OLR. Because Pangburn persistently ignores OLR, his formulation is nonsense. However, it is his formulation I wanted to test, hence the first graph.

Response:[PS] frankly not much point to time integral of TSI either but I thought that might help see the issue.

KR at 14:28 PM on 14 March, 2015

Dan Pangburn's argument appears to be one I've seen before - where a 'break-even' point is defined in some fashion (TSI, or a particular sunspot number as in an earlier Pangburn post here, etc.) - it's assumed that any energy above that breakpoint will integrate and accumulate positively, and any below that breakpoint will integrate negatively. This utterly neglects the other side of the equation, the outgoing LWR which scales with temperature and effective Earth emissivity, and the fact that climate energy is driven by the difference between incoming and outgoing energies. There is no fixed break-even since temperatures change in response to forcing; the difference is between two moving values, and hence there is no fixed threshold.

In fact, since the sign of the speculative integration against a particular 'break-even' is solely and rather arbitrarily set by where that breakpoint is defined, different breakpoints can suggest either ridiculously large warming or cooling depending on how they relate to the time series as a whole. It's a hypothesis focused entirely on the climate energy input, wholly ignoring energy output - and therefore it's meaningless.

Pangburn has been pushing this hypothesis for several years, in the face of multiple replies pointing out these issues - it's unlikely he's going to change his mind now. But readers should be aware of the difference between a fixed integrative threshold and an imbalance (the case in reality) between two moving values. And judge such simplistic hypotheses accordingly.

Response: [PS] Thank you for bringing up Dan's previous posting history. This shows excessive repetition and amounts now to just sloganeering without supporting evidence.

KR at 00:01 AM on 15 March, 2015

PS - In all fairness, Pangburn hasn't been arguing on SkS for very long, and has yet to make a clear causal claim (something I've been trying to extract). But given his history on other sites and his own blog posts, I'm not sanguine about better results here.

Witkh13 at 06:10 AM on 16 March, 2015

I find it ironic how this group is only skeptical towards proof that violates what they believe. Climate Change/Global Warming believers are eager to believe others are cherry picking data because their fellow acolytes have been proven to cherry pick data and promote biased readings since the beginning.

http://wattsupwiththat.com/2015/03/10/study-climate-change-is-nothing-new-in-fact-it-was-happening-the-same-way-1-4-billion-years-ago/

http://www.livetradingnews.com/orbital-variations-key-cause-earths-climate-change-98741.htm

http://www.aip.org/history/climate/solar.htm

http://science.nasa.gov/science-news/science-at-nasa/2003/17jan_solcon/

http://www.newsmax.com/Newsfront/scientists-Milankovitch-cycles-orbit-variations/2015/03/11/id/629605/

https://www.heartland.org/sites/all/modules/custom/heartland_migration/files/pdfs/24807.pdf

All the Climate Change/Global Warming acolytes have to answer to this is to cherry pick one major study and then imply impropriety based on who funded the research. They use degrading, false, slanderous insults instead of actual proof that any impropriety actually occurred.

As I said, the acolytes are merely reflecting their own lack of morals or ethics on everyone else. Apparently it is inconceivable to them that someone may actually have a backbone and tell a sponsor to go pound sand.

However, please continue with this elementary sandbox mentality. Those who are without the mental illness of statism are the opposite of impressed.

Response: [PS] Please note that posting comments here at SkS is a privilege, not a right. This privilege can be rescinded if the posting individual treats adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.

Please take the time to review the policy and ensure future comments are in full compliance with it. Thanks for your understanding and compliance in this matter.

Pick one topic where you think science has it wrong and your fellows believe they have the truth. Comment on that topic and that topic only, support your statements with references rather than repeating grossly misinformed slogans from misinformation sites, and then be prepared to discuss the topic in keeping with the comments policy. Take note in particular of inflammatory tone, sloganeering, and staying on topic. This is a site to discuss the science. If you find the requirements of the comments policy too burdensome, then there are plenty of other sites which would welcome your kind of contribution.

Dan Pangburn at 21:54 PM on 16 March, 2015

KR – There have been some refinements in the 3+ years since the paper you linked to. The current version of the equation has R2 = 0.9049 (95% correlation) when compared to a normalized average of reported averages of average global temperatures. Everything not explicitly considered (such as the 0.09 K s.d. random uncertainty in reported annual measured temperature anomalies, aerosols, CO2, other non-condensing ghg, volcanoes, ice change, etc.) must find room in the unexplained 9.51%. If the effect of CO2 is included, R2 = 0.9061, an insignificant increase.

The analysis includes an approximation of ocean cycles that oscillate, with a period of 64 years, above and below a long-term trend calculated using the time-integral of sunspot number anomalies as a forcing proxy. The ‘break-even’ sunspot number is 34. Above 34 the planet warms, below 34 the planet cools.

Graphs of results, the drivers, method, equation, data sources, history (hind cast to 1610), predictions (to 2037) and a possible explanation of why CO2 change (fossil fuel burning) is NOT a driver are at http://agwunveiled.blogspot.com.

Response: [JH] The use of "all-caps" is akin to shouting and is prohibited by the SkS Comments Policy.

KR at 00:01 AM on 17 March, 2015

Dan Pangburn - "Everything not explicitly considered..." - I suggest you read up on omitted-variable bias, which leads to over- or underestimating the effect of the factor(s) you regress upon when you leave out other important causal factors. You've only regressed upon sunspot numbers, but it's impossible to get correct results by sequential regression when there are multiple factors in play. You need to regress against all of them at once (hence the use of multiple linear regression). The physics indicate that insolation is a factor. But the physics also indicate that GHGs, natural and volcanic aerosols, albedo, land use, black carbon, etc., are also causal factors. Physics informs any regression analysis - ignore causal factors, and your analysis will be in error.

I will also note that your equation appears to have roughly 4 free variables (your constants) to relate sunspots and a cyclic pattern to a single temperature value - that appears to be more a curve-fitting exercise than a causal analysis. As John von Neumann said,

A 'break-even' point of 34 sunspots (darn, I was hoping the number would be 42) might fit the data and your equation over a particular period, but you are again utterly ignoring the output side of the equation. Under a doubling of CO2, radiative physics indicates a direct forcing of 3.7 W/m^2 and a direct warming of 1.1 C (ignoring feedbacks for now). Under those conditions your 'break-even' of 34 sunspots will still lead to a radiative imbalance, a warming; the actual balance point would be where the TSI was 3.7 W/m^2 lower, to match the decreased energy leaving the climate. There is no fixed breakpoint; what matters is the balance between climate energy input and climate energy output, conservation of energy, and ignoring the output makes your analysis simply a curve-fitting exercise on one aspect of energy input.

And as such, your equation(s) have no predictive power. There is no physical basis for your prediction of a 0.3 C temperature drop by 2030 - you've simply ignored multiple causal factors and the energy relationships involved.
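KR's omitted-variable point can be demonstrated with synthetic data (every series and coefficient below is invented for illustration): regressing on a single "solar-like" driver that shares a trend with an omitted "GHG-like" driver inflates the solar coefficient, while regressing on both at once recovers the true values.

```python
# Omitted-variable bias demo with synthetic data: y depends on a
# cyclic-plus-trend driver x1 ("solar-like") and a trending driver
# x2 ("GHG-like"). Regressing y on x1 alone inflates x1's apparent
# effect; multiple regression on both recovers the true coefficients.
import math
import random

random.seed(1)
n = 120
x1 = [0.005 * i + math.sin(2 * math.pi * i / 11) for i in range(n)]
x2 = [0.01 * i for i in range(n)]
y = [0.1 * x1[i] + 1.0 * x2[i] + random.gauss(0, 0.05) for i in range(n)]

def demean(v):
    m = sum(v) / len(v)
    return [u - m for u in v]

a, b, t = demean(x1), demean(x2), demean(y)
S11 = sum(u * u for u in a)
S22 = sum(u * u for u in b)
S12 = sum(u * v for u, v in zip(a, b))
S1y = sum(u * v for u, v in zip(a, t))
S2y = sum(u * v for u, v in zip(b, t))

b1_alone = S1y / S11                       # single-predictor slope
det = S11 * S22 - S12 * S12                # normal equations, 2 predictors
b1_both = (S1y * S22 - S2y * S12) / det
b2_both = (S2y * S11 - S1y * S12) / det

print(f"x1 alone: {b1_alone:.3f}  (true coefficient is 0.10)")
print(f"x1, x2 together: {b1_both:.3f}, {b2_both:.3f}  (true: 0.10, 1.00)")
```

The single-predictor slope absorbs part of the omitted driver's trend and comes out roughly double the true coefficient, which is exactly the failure mode KR describes for a sunspots-only regression.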

Dan Pangburn at 03:19 AM on 17 March, 2015

KR - The correlation equation initially included CO2 and T^4 considerations but they made no significant improvement in the coefficient of determination (R^2). The correlation with measurements is obviously not linear. Multiple linear regression on the period since 1700 is misleading.

Effectively there are only two free variables in the equation that gives R^2 = 0.9049. C is set to 0 so it has no influence and D simply compensates for the arbitrary reference temperature for the measured temperature anomalies.

The equation was derived using the first law of thermodynamics as described in Ref. 2 in the linked paper.

As shown in Table 1 of the linked paper, R^2 is quite insensitive to the 'break-even' number. 34 gives the highest R^2 1895-2012 and credible estimate back to the depths of the LIA.

The equation allows prediction of temperature trends using data up to any date. The predicted temperature anomaly trend in 2013 calculated using data to 1990 and actual sunspot numbers through 2013 is within 0.012 K of the trend calculated using data through 2013. The predictions depend on sunspot predictions which are not available past 2020

I have made public exactly what I did and the results of doing it including prediction. It will be interesting to see how it plays out.

Response: [JH] You are now skating on the thin ice of excessive repetition, which is prohibited by the SkS Comments Policy.

KR at 05:36 AM on 17 March, 2015

Dan Pangburn - I would suggest reading Lean and Rind 2008, who performed multiple regression on temperature data since ~1889, and who conclude:

They certainly found multiple linear regression both possible and useful, as did Foster and Rahmstorf 2010. If your regression neglects multiple factors that physics indicates are significant, your model doesn't describe reality. If you're not including the outgoing energy to space, which scales linearly with effective IR emissivity (which changes with GHG concentrations) and by T^4, then you aren't accounting for energy conservation. And if your results indicate that CO2 has little or no effect, in complete defiance of radiative physics, that should be a huge red flag regarding your analysis.

Quite frankly, I don't see much of use in your analysis. You might try some hold-out tests (derive your model from perhaps the first half or the second half of the temperature data, and using those computed coefficients see how well you can follow the other half) to see just how dependent your fit is on the initial data presented. I suspect you won't be happy with the results.

Dan Pangburn at 08:01 AM on 17 March, 2015

OK, apparently you don't grasp or at least don't believe what I have done.

Paraphrasing Richard Feynman: Regardless of how many experts believe it or how many organizations concur, if it doesn’t agree with observation, it’s wrong.

The Intergovernmental Panel on Climate Change (IPCC), some politicians and many others mislead the gullible public by stubbornly continuing to proclaim that increased atmospheric carbon dioxide is a primary cause of global warming.

Measurements demonstrate that they are wrong.

CO2 increase from 1800 to 2001 was 89.5 ppmv (parts per million by volume). The atmospheric carbon dioxide level has now (through December, 2014) increased since 2001 by 28.47 ppmv (an amount equal to 31.8% of the increase that took place from 1800 to 2001) (1800, 281.6 ppmv; 2001, 371.13 ppmv; December, 2014, 399.60 ppmv).

The average global temperature trend since 2001 is flat (average of the 5 reporting agencies http://endofgw.blogspot.com/). Graphs through 2014 have been added. Current measurements are well within the range of random uncertainty with respect to the trend.

That is the observation. No amount of spin can rationalize that the temperature increase to 2001 was caused by a CO2 increase of 89.5 ppmv but that 28.47 ppmv additional CO2 increase did not cause an increase in the average global temperature trend after 2001.

What do you predict for 2020?

Response: [PS] Please carefully read the Comments Policy. Compliance is not optional. Note in particular accusations of fraud, and sloganeering. Repeating long-debunked myths without offering evidence, while demonstrating that you have not even read the science, let alone understood it, does not progress any argument. You would do well to read the IPCC report before making strawman claims about what is and is not predicted.

Leto at 08:32 AM on 17 March, 2015

Dan,

You greatly underestimate the complexity of the issues.

If you want to take the flattish trend in global surface temperatures since 2001 as proof that the IPCC are mistaken, first you have to demonstrate that you understand what the experts in the field say about fluctuations in those surface temperatures. No-one (except you and other deniers) is claiming that there should be a tight one-to-one correlation between CO2 and global surface temperature over the scale of a few years, because of all the various processes that shuffle heat around. Many of those processes have been discussed extensively on this site, and before making pronouncements that you know better than others you show evidence of having at least done the basic reading that would let you enter the conversation at anything but newbie level.

You are basically attacking a straw man - and not even an interesting or novel straw man, as this is an issue on which hundreds of articles have already been written, and to which you have added no new understanding.

BTW, I had a look at your blog site, and found it full of similar simplistic musings. The most blatant was a graph in which CO2 and temperature were plotted on the same graph, but with the scales adjusted to make the CO2 curve steep and the temperature curve flat. This is the so-called "World Climate Widget", the use of which is a clear marker of someone who is not interested in the truth, but in mathturbation. This graph has been discussed in several places online, including here:

http://www.realclimate.org/index.php/archives/2014/12/the-most-popular-deceptive-climate-graph/

Any claims you had of knowing better than the world experts on this topic are completely undermined by your use of such cheap parlour tricks.

Leto.

Leto at 08:37 AM on 17 March, 2015

edit:

Many of those processes have been discussed extensively on this site, and before making pronouncements that you know better than others you *should* show evidence of having at least done the basic reading that would let you enter the conversation at anything but newbie level.

rkrolph at 08:51 AM on 17 March, 2015

Dan,

"mislead the gullible public"Because someone believes what the vast majority of climate experts believe makes them gullible? If the scientific understanding changed and some other mechanism (non-human) is determined by science to be the cause of global warming then I would believe that. Would that still be gullible? But I don't see how you can call the public gullible for believing what the experts are saying.

Tom Curtis at 09:28 AM on 17 March, 2015

Here is the default Cowtan model including ENSO:

It has an R squared of 0.932, superior to that obtained by Pangburn. It also uses just three parameters, compared to the five used by Pangburn to obtain his fit. In other words, it is a superior model by every measure. Yet Pangburn says of the theory underlying this model that it does not fit the observations.

For comparison, here is Pangburn's own presentation of his model matched against HadCRUT4 and the 95% confidence intervals of Loehle and McCulloch 2008 (a paper fraught with its own problems, but Pangburn's chosen empirical measure):

You will notice that in 1625, the retrodicted temperature by his method is 0.5 C above the upper confidence bound of his chosen paleo-reconstruction. Granted, he has another graph later chosen for its lower sunspot numbers in the 17th century in which his retrodicted temperatures only exceed the 95% value by a small amount (and drop below the lower value later on). Use of that graph, however, constitutes a cherry pick. It follows that Pangburn's model (unlike the IPCC models) has been falsified - and he knows it. You know that he knows it because he truncates the graph so that you cannot see just how far his model falls below the lower bound.

Even with the cherry picked sunspot data, the 17th century trend in Pangburn's model is of opposite sign to the data for a century. Contrast Pangburn's evidentiary standard for his own model, which accepts this discrepancy without qualm, to his standard for the IPCC models - which he claims are falsified by a reduced but same sign trend for 15 years.

And this just glances at the evidentiary contradictions in the empirical results of Pangburn's model. (If you want more, and a laugh, check out his predicted temperature for 2014.) It pays no attention to his assumption of constant outgoing energy over time, his ignoring of the relative strengths of forcings, his insistence that CO2 has no effective greenhouse effect contrary to very direct data - all of which fall into the category of simply unphysical mistakes.

Why is Pangburn trying to insult our intelligence so with his hypocrisy?

John Hartz at 09:31 AM on 17 March, 2015

Moderation Comment

All: Please do not respond to any future posts by Dan Pangburn until a moderator has had a chance to review them for compliance with the SkS Comments Policy. Thank you.

Leto at 18:13 PM on 17 March, 2015

Tom @1134 (or others), do you have any idea why the otherwise excellent model-data match for the Cowtan model comes a little unstuck around 1940?

Tom Curtis at 09:07 AM on 18 March, 2015

Leto @1136, 1944 (-3.27 SD), 1938 (-2.81 SD), 1943 (-2.45 SD) and 1963 (-2.02 SD) are the only years with greater than two standard deviations below the mean error between model and observed temperatures. Assuming a normal distribution, a value more than 3.29 SD from the mean has a probability of just 0.1%. Ergo, with 131 observations, we expect to see at least one such value 12.3% of the time. So, while the observation is unusual, it is far from clear that the model has come "unstuck" in 1943.
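The 12.3% figure quoted above is a simple independence calculation (a sketch of the arithmetic, assuming independent, normally distributed annual errors):

```python
# Probability that a normal value falls more than 3.29 standard deviations
# from the mean (two-tailed) is about 0.1%.
p_single = 0.001

# Probability of seeing at least one such value among 131 independent
# annual model-minus-observation errors.
n = 131
p_at_least_one = 1 - (1 - p_single) ** n
print(f"{p_at_least_one:.1%}")  # roughly 12.3%
```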

There is, however, a better than even chance that there is a problem with the 1944 values, and given the closeness in time, possibly also those of 1938 and particularly 1943. Curiously, two of those years are at the height of WW2, and one immediately precedes it. This raises several issues.

First, there were large, and unevenly distributed, changes in shipborne traffic in WW2. Specifically, there was a large reduction in shipborne traffic outside of military convoys in the Pacific. In the Atlantic, traffic from the US to Britain and back diverted substantially north or south of normal routes to sail near airbases that provided air cover against submarines. There is a very real possibility that these factors have distorted WW2 SST records. There are also likely to have been disruptions of land records at the same time.

Second, there was a very rapid change in the proportion of SST records taken from engine manifolds rather than by buckets in WW2, with an abrupt change back immediately after. It is not certain the correction for these factors is entirely accurate, with again the possibility of WW2 SSTs being too hot.

Third, one area that certainly saw a marked loss of traffic was the NINO3 to 4 region of the Pacific. That means ENSO records of the period are likely to be unreliable, resulting in a potentially erroneous ENSO correction.

Fourth, WW2 saw extensive production of black carbon and oil slicks, both of which may have markedly reduced albedo. It is not clear that this has been picked up in the forcing records. If it has not been, it may be the case that the WW2 records underplay the forcing in that era.

I suspect the larger errors in the model in and near WW2 are due to some combination of these five factors (chance plus the four potential sources of error). Of the four potential sources of error, two represent potential errors in the temperature record, and two potential errors in the model. Given all of this, it is not clear that there is a problem, and if there is it is not clear that the problem is in the model. It is also possible that some other factor in what was an unusual period (to say the least) was involved.

Given all of this, my inclination is to not give too much weight to errors in the WW2 period. Were I a scientist looking at the temperature record, or the forcing or ENSO history, I would be looking at that period in detail to try and resolve the issue, but the error is not so large that it would trouble me if I could not.

Glenn Tamblyn at 11:35 AM on 18 March, 2015

Leto @1136

Further to Tom's comment, this paper is interesting

LINK

Particularly fig 11b.

Significant step changes in the percentage of SST measurements from US ships, with a significant rise during the war and a sharp drop in Aug 1945. The paper is using the older HadSST2 dataset for SSTs. The more recent version has some correction for this, but perhaps not completely.

Response:[RH] Shortened link.

Leto at 09:23 AM on 19 March, 2015

Thanks Tom and Glenn... Tom's list of "error years" (1938, 1943, 1944, 1963) does not appear to be randomly distributed - if we plotted a rolling 2-year or 5-year average of absolute (or squared) model-data mismatch, I suspect there would be a peak in the 1938-1944 period that stuck out well above the rest of the plot (more than 2 SD), so I was hoping there would be better explanations than "it's chance".

Clearly, there are several potential explanations and it seems more than likely that the data around that time is itself suspect (particularly given the association with WW2 and the change in coverage). That makes the performance of the model even more impressive.

Tom Curtis at 09:57 AM on 19 March, 2015

Leto @1139, temperature shows a level of autocorrelation across years. Because of that, clustering of high SD years is not unexpected. It follows that "just chance" cannot be excluded as an explanation for the cluster of high SD years. And even though it is more probable than not that it is not just chance, I certainly cannot claim that just chance is less probable than any or all of the other alternative explanations.

Leto at 08:28 AM on 20 March, 2015

Hi Tom,

If you know of a mathematical tool that could resolve whether autocorrelation is sufficient to explain the clustering of error years, I would be interested, though it is hardly an important point. (I confess I don't know the correct approach, myself, but eyeballing the graph did not at first suggest to me that simple autocorrelation was enough; looking at it again I am not so sure.)

The bigger problem I have with the "It's chance" line of argument is that it seems to be largely devoid of explanatory power. It is a truism that, within normally distributed sets of data, a certain proportion will fall below a certain number of standard deviations, but it is a truism that applies as well to good models as to bad. It would remain true even if we added noise to the model to the point that it ceased to be useful. Even Pangburn could raise it in defence of the worst patches of his own model. The 2-SD yardstick is itself modified as the model deteriorates.

If the Minister for Education says: "We have to lift our game, 1% of schools are performing below the 1st centile", then it is appropriate to point out to the Minister that 1% are always expected to perform below the 1st centile. Conversely, if the principal of a school says: "We have to lift our game, our school is performing below the 1st centile," or even just asks, in the boardroom: "Why are we performing below the 1st centile?", he would be rightly frustrated if his teachers said, "Don't worry, there'll always be 1% of schools below the 1st centile."

Asking why a particular patch of data-model matching is much worse than the rest is more analogous to the second situation, I believe. And while it may have been the case that there was no explanation other than chance, and I agree that this cannot be dismissed entirely, I am not surprised there are better explanations.

On the other hand, we have wandered off-topic and I greatly respect the work you do here so I will leave it at that.

Regards, Leto.

Tom Curtis at 12:21 PM on 20 March, 2015

Leto @1141, for comparison, I took HadCRUT4 from 1880-2010 and used it as a model to predict GISS LOTI. To do so, I used the full period as the anomaly period. Having done so, I compared statistics with the Cowtan model as a predictor of temperatures. The summary statistics are (HadCRUT4 first, Cowtan model second):

Correl: 0.986, 0.965

R^2: 0.972, 0.932

RMSE: 0.047, 0.067

St Dev: 0.047, 0.067

Clearly HadCRUT4 is the better model, but given that both it and GISS LOTI purport to be direct estimates of the same thing, that is hardly surprising. What is important is that the differences in RMSE and St Deviations between the HadCRUT4 model and the Cowtan model are small. The Cowtan model, in other words, is not much inferior to an alternative approach at direct measurement in its accuracy. Using HadCRUT4 as a predictive model of GISS, we also have a high standard deviation "error" (-2.5 StDev in 1948) with other high errors clustering around it.
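Summary statistics of the kind tabulated above (correlation, R^2, RMSE, and the standard deviation of the residuals) can be computed for any model/observation pair along these lines. This is a generic sketch: `fit_stats` is a hypothetical helper, and the toy series below stand in for the actual temperature indices, which are not reproduced here.

```python
import numpy as np

def fit_stats(model, obs):
    """Correlation, R^2, RMSE and residual standard deviation of a
    'model' series evaluated against an 'observed' series."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    resid = obs - model
    correl = np.corrcoef(model, obs)[0, 1]
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    return correl, r2, rmse, resid.std()

# Toy example: a noisy copy of a warming trend used as a "model" of it.
rng = np.random.default_rng(1)
obs = 0.01 * np.arange(131)                    # synthetic observations
model = obs + rng.normal(0.0, 0.05, obs.size)  # synthetic model series
correl, r2, rmse, sd = fit_stats(model, obs)
print(correl, r2, rmse, sd)
```

Note that RMSE and the residual standard deviation coincide only when the mean residual is near zero, which is why the pairs of values quoted above match so closely.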

This comparison informs my attitude to the Cowtan model. If you have three temperature indices, and can only with difficulty pick out the one that was based on a forcing model from those that were based on compilations of temperature records, we are ill advised to assume that any "error" in the model when compared with a particular temperature index represents an actual problem with the model rather than a chance divergence. (On which point, it should be noted that the RMSE between the Cowtan model and observations would have been reduced by about 0.03 if I had adjusted them to have a common mean, as I did with the two temperature indices.) Especially given that divergences between temperature indices show similar patterns of persistence.

Now, turning to your specific points:

In fact, saying "it's chance" amounts to saying that there is no explanation, so of course it is devoid of explanatory power. In this particular context, it amounts to saying that the explanation is not to be found in either error in the measurements (of temperatures, forcings, ENSO, etc) nor in the model. That leaves open that some other minor influence or group of influences on GMST (of which there are no doubt several) was responsible. "Was", not "may be" because it is a deterministic system. However, the factor responsible may be chaotic so that absent isolating it (very difficult among the many candidates with so small an effect) and providing an actual index of it over time, we cannot improve the model.

Of course it is more analogous to the second situation. But the point is that the "it's chance" 'explanation' has a better than 5% (but less than 50%) chance of being right. That is, there is a significant chance that the model cannot be improved, or can only be improved by including some as yet unknown forcing or regional climate variation. The alternative to the "it's chance" 'explanation' is that the model can be improved by improving temperature, ENSO or forcing records to the point where it eliminates such discrepancies as found in the 1940s. On current evidence, it is odds-on that this is the case - but it is not an open and shut case that it is so.

Letoat 17:52 PM on 20 March, 2015Hi Tom,

Points taken. My rhetorical example was admittedly unfair, as it would obviously be facile and unhelpful to suggest that a model was okay because only 1% of its errors were worse than the 99th centile of its errors. And although I would see it as almost as facile and circular to defend a model because "only" the expected number of its worst errors were beyond some number of SDs of its own error distribution, that is not quite the same as pointing out, as you did, that the most extreme outlier was only ~3.3 SDs worse than the mean error. If the outliers were several SDs out, we both agree that would be an entirely different situation.

Thanks, and best wishes,

Leto.

ancient_nerd at 16:17 PM on 29 June, 2015

I tried a Fourier analysis of the solar incidence and temperature data. The idea was that there would be big peaks in the spectra at the frequency of the sunspot cycle. I used a 121 year period where the SATIRE-T2 and NOAA anomaly sets overlap. A nice big peak showed up at just the right spot with the solar data. However, with the temperature data, the spectral components were almost missing entirely. They were actually low points in the noise floor.

Any idea what I could be missing?
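The spectral check described above can be reproduced on synthetic annual data (the series here are made-up stand-ins for SATIRE-T2 and the NOAA anomalies): a trend-dominated temperature record shows no usable peak at the solar-cycle frequency even when a tiny cycle imprint is present.

```python
import numpy as np

years = 121  # overlap period described above
t = np.arange(years)
rng = np.random.default_rng(2)

# Synthetic "solar" series with a clear 11-year cycle, and a synthetic
# "temperature" series dominated by trend and noise, with only a tiny
# solar-cycle imprint.
solar = 70 + 40 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 5, years)
temp = 0.008 * t + 0.005 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 0.1, years)

def peak_period(series):
    """Period (years) of the largest spectral peak, ignoring the mean."""
    series = series - series.mean()
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(len(series), d=1.0)
    return 1.0 / freqs[1:][np.argmax(power[1:])]

print("solar peak period:", peak_period(solar), "years")
print("temperature peak period:", peak_period(temp), "years")
```

The solar series peaks at the 11-year cycle, while the temperature spectrum is dominated by the lowest-frequency (trend) bins, mirroring the "missing peak" observation.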

Glenn Tamblyn at 23:34 PM on 29 June, 2015

As a first approximation, that you would get a nice peak in the sunspot power series at the solar cycle frequency is a bit of a no-brainer - like duh man!

Expecting that the temperature data would show a similar correlation is based on assuming a raft of physical relationships that are actually unphysical. Starting with the fact that most energy exchange in the climate system is into and out of the oceans, which have huge thermal mass and massively damp down any frequency response to something like solar variations.

So the question is not what you are missing. It is: what are you expecting, and are your expectations reasonable - thermodynamically reasonable?

APT at 23:08 PM on 25 July, 2015

Hello,

I'm curious about the graphs shown here: http://hockeyschtick.blogspot.com.es/2014/08/its-sun_9.html

and here:

http://hockeyschtick.blogspot.com.es/2013/11/the-sun-explains-95-of-climate-change.html

Clearly this isn't published, peer-reviewed science, but I'd like to know if there's any sense to it, and if not, to understand what the problems with it are. I know a little about climate change, particularly regarding reconstructions of past environments, but I'm out of my depth trying to understand these sunspot calculations.

Many thanks.

Response:[TD] Hotlinked the URLs. In future please do that yourself with the link button in the comment editing controls.

Tom Dayton at 00:46 AM on 26 July, 2015

APT: Dan Pangburn, the author of those claims, commented here on SkS several years ago. Please read the responses.

Also, the cooling stratosphere is incompatible with increased energy from the Sun.

Tom Dayton at 00:48 AM on 26 July, 2015

APT: Dan Pangburn re-appeared in a recent comment. Read the responses there.

MA Rodger at 04:19 AM on 26 July, 2015

Tom Dayton @1147/1148.

I think the two previous excursions of Dan Pangburn here @SkS do not provide a clear explanation of Pangburn's proposition, possibly even less clear than Pangburn's explanation linked to by APT @1146.

APT @1146.

The graphs you link to are simple nonsensical curve-fitting with zero basis in physics. The guts of Pangburn's sunspot equation can be much simplified and still produce the same-shaped resulting graph. That simple equation is:

T_(i+1) = T_(i) + 0.00002(S_(i) - 34)

where T is temperature and S is the sunspot number for year i.

For the last 75 years, the average sunspot number has been about 75, way above the 34 used in the equation, which is why the graphed temperature soars despite the heavily lagging terms employed. Indeed, it is only during the Maunder Minimum and the Dalton Minimum that the average sunspot number drops below 34, allowing Pangburn's graph to dip downward. Including SSN data to 2014 shows that even weak Sunspot Cycle 24 is averaging above 34 and showing a further increase in temperature.
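That behaviour can be checked by iterating the simplified recurrence directly. This is a sketch with made-up inputs; the coefficient is written as 0.0002 per the decimal-point correction in the following comment.

```python
# Iterate T_(i+1) = T_(i) + k * (S_(i) - 34) for a constant sunspot number.
# Hypothetical illustration of the simplified equation above.
k = 0.0002       # coefficient, per the decimal-point correction downthread
breakeven = 34   # Pangburn's 'break-even' sunspot number

def run(sunspots, years, t0=0.0):
    """Temperature after iterating the recurrence at a fixed sunspot number."""
    t = t0
    for _ in range(years):
        t += k * (sunspots - breakeven)
    return t

# 75 years at the recent average sunspot number (~75): temperature climbs.
print(run(75, 75))  # 75 * 0.0002 * 41 = 0.615
# At the 'break-even' number the series is flat.
print(run(34, 75))  # 0.0
```

Because the recurrence has no restoring term of any significance, any sustained sunspot number above 34 makes the temperature rise without limit, which is the point being made about the graph.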

Heavy lagging is used by Pangburn because the T^4 term is far too weak to define an equilibrium temperature. If the ~75 average sunspot number of recent decades persisted, the equation tells us global temperatures would rise by over 60 °C before equilibrium appears. Given the forcing involved will be less than 1 W/m^2, this means this equation of Pangburn's is suggesting an Equilibrium Climate Sensitivity ECS > 240 °C, an entirely lunatic value.

Tom Curtis at 05:58 AM on 26 July, 2015

MA Rodger @1149, 0.003503/17 = (approx) 0.0002. You have misplaced a decimal point. Further, the temperature term takes the fourth power of the ratio between T_(i) and T_(o), not the ratio between T_(i) and T_(i-1). Consequently it is not always negligible, and is certainly not negligible at T_(i) = T_(o) + 60. Of course, you did not neglect that in calculating the equilibrium temperature. Neglecting the temperature ratio changes the time to equilibrium but not the equilibrium temperature. That, as you know, is determined solely by the requirement that at equilibrium (T_(i)/T_(o))^4 = S_(i)/34, resulting in the integrated term in Pangburn's formula equalling zero. I estimate the increase in temperature at equilibrium to be +62.59 C, or, given the baseline temperature, a final value of 75.6 C. Ignoring the misplaced decimal point, a neat analysis, and "lunatic value" is exactly correct.
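The equilibrium figure follows directly from the stated condition (T_eq/T_o)^4 = S/34. A quick numerical check, assuming a baseline absolute temperature of about 286 K (an assumed value, chosen here to be consistent with the ~13 °C global mean implied by the numbers above):

```python
t0 = 286.2        # assumed baseline global mean temperature in kelvin (~13 C)
s_recent = 75     # recent average sunspot number, per the comment above
breakeven = 34    # Pangburn's 'break-even' sunspot number

# Equilibrium condition from the comment above: (T_eq / t0)^4 = S / 34.
t_eq = t0 * (s_recent / breakeven) ** 0.25
delta = t_eq - t0
print(round(delta, 2))  # about 62.6 C of warming at equilibrium
```

The ~62.6 °C result matches the +62.59 C estimate quoted in the thread, confirming the arithmetic behind the "lunatic value" conclusion.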