## Just Put the Model Down, Roy

#### Posted on 2 August 2011 by bbickmore

This is a re-post of an entry on Dr. Barry Bickmore's blog, *Anti-Climate Change Extremism in Utah*

For the past few years, Roy Spencer has had a love affair, of sorts, with “simple climate models”. After all, who needs some fancy-schmancy general circulation model (GCM) when you can boil down the main features (energy in and energy out) to a simple “1-box” or “zero-dimensional” model that you can run on a spreadsheet?

Spencer wasn’t the first one to use such a model, and every modeler knows that it is usually a good idea to use the simplest model you can get away with to represent complex physical processes. The key here is to recognize that the simpler the model, the more phenomena are glossed over, so simpler models are only going to be good for particular, specialized purposes.

In this case, Spencer wants to use simple climate models to estimate equilibrium climate sensitivity for a doubling of CO2. Let’s look back and see how he’s done with that so far.

**The Model**

The basic model is shown in Eqn. 1 below.

Equation 1: d(*∆T*)/d*t* = (*Forcing – Feedback*)/*Cp*

Here, *∆T* is the difference between the temperature at time *t* and the temperature at equilibrium. (That is, *∆T* is the “temperature anomaly” with respect to equilibrium.) *Cp* is the total heat capacity of a column of ocean water 1 m² in cross-section and *h* meters deep. The *Forcing* term tells us the rate at which extra energy is coming in, while the *Feedback* term tells us how the climate system responds to the push, by either enhancing the forcing or hitting the brakes. Another way of putting it is that (*Forcing – Feedback*) gives the net rate at which energy accumulates in the ocean, and *Cp* controls how quickly the ocean temperature can change in response.

The *Feedback* term in Eqn. 1 can be broken down as in Eqn. 2 below. This means that the hotter the ocean becomes, the more energy it radiates back into space, and the *alpha* term determines the degree to which this is the case. The *alpha* term also determines the equilibrium climate sensitivity: at equilibrium the feedback balances the forcing, so the equilibrium warming is the forcing divided by *alpha*, and a larger *alpha* means a lower sensitivity.

Equation 2: *Feedback* = *alpha* * *∆T*
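In discrete form, Eqns. 1 and 2 amount to a one-line forward-Euler update. Here is a minimal sketch in Python, using a constant forcing and illustrative parameter values of my own choosing (they are assumptions for demonstration, not Spencer's fitted values):

```python
import numpy as np

# Illustrative values (assumed for this sketch, not Spencer's):
# a constant forcing of 3.7 W/m^2 (roughly 2x CO2), alpha = 3.2 W/m^2/K,
# and a 100 m mixed layer.
rho, c_w = 1000.0, 4186.0        # water density (kg/m^3), specific heat (J/kg/K)
h = 100.0                        # mixed-layer depth (m)
Cp = rho * c_w * h               # heat capacity per m^2 of surface (J/m^2/K)
alpha = 3.2                      # feedback parameter (W/m^2/K)
F = 3.7                          # constant forcing (W/m^2)

dt = 86400.0                     # one-day time step (s)
T = 0.0                          # temperature anomaly (K)
for _ in range(200 * 365):       # integrate 200 years
    T += dt * (F - alpha * T) / Cp   # Eqns. 1-2, forward Euler

print(T, F / alpha)              # T relaxes toward the equilibrium F/alpha
```

The ratio *Cp*/*alpha* sets the relaxation time (a few years for these values), which is why the mixed-layer depth *h* matters so much in the curve fitting discussed below.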

The *Forcing* term can be divided up into contributions from different sources (changes in solar output, greenhouse gases, aerosols, and so on) or lumped into one. In some versions of his model, Spencer uses the GISS forcing history, where they are all lumped together. In others, he multiplies the index for some natural mode of climate variability (like ENSO or PDO) by a scaling factor to obtain a hypothetical forcing, as in Eqn. 3, where *beta* is a scaling factor and *Vi* is the natural variability index being used. In still other incarnations, he combines the GISS forcing with the “internal” forcing provided by Eqn. 3.

Equation 3: *Forcing = beta * Vi*

Finally, in the latest versions of his model, Spencer has begun adding more layers to the ocean. Whereas the original version only had one homogeneous ocean layer of depth *h*, the latest ones have 30-40 ocean layers, each 50 m deep. Eqn. 1 governs the net energy input into the top layer, but Spencer also adds a “diffusion” term so heat can escape into the next layer down. The second through 30th layers all have diffusion terms for heat coming in the top and heat going out the bottom. Eqn. 4 shows what one of these diffusion terms looks like for heat going out the bottom of a layer, where ∆*T* is the temperature anomaly of the layer in question, ∆*Tnl* is the temperature anomaly of the next layer down, *D* is a diffusion coefficient, and *Cp* is the heat capacity of a 1 x 1 x 50 m column of water.

Equation 4: -d(*∆T*)/d*t* = *D* * (*∆T – ∆Tnl*)/*Cp*
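The layered scheme is easy to sketch: Eqns. 1-2 set the energy input at the top, and Eqn. 4 moves heat between adjacent layers. The parameter values below are assumptions of mine for illustration, not Spencer's fitted ones:

```python
import numpy as np

n_layers = 30
Cp = 1000.0 * 4186.0 * 50.0      # heat capacity of one 50 m layer (J/m^2/K)
alpha = 3.2                      # feedback parameter (W/m^2/K), assumed
D = 2.0                          # "diffusion" coefficient (W/m^2/K), assumed
F = 3.7                          # constant forcing (W/m^2), assumed
dt = 86400.0                     # one-day time step (s)

T = np.zeros(n_layers)           # all anomalies start at zero
for _ in range(100 * 365):       # integrate 100 years, forward Euler
    flux = D * (T[:-1] - T[1:])  # Eqn. 4: flux out the bottom of each layer
    dT = np.empty(n_layers)
    dT[0] = (F - alpha * T[0] - flux[0]) / Cp   # top layer: Eqns. 1-2 and 4
    dT[1:-1] = (flux[:-1] - flux[1:]) / Cp      # interior: flux in minus flux out
    dT[-1] = flux[-1] / Cp                      # bottom layer: flux in only
    T += dt * dT

print(T[0], T[-1])               # warming decreases monotonically with depth
```

With constant heating at the top and zero initial anomalies, the profile stays warmest at the surface and decays with depth, which is the shape Spencer fits to the IPCC depth profile discussed below.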

**It’s the PDO!**

In his first attempt, Spencer asked: what if it isn’t human greenhouse emissions that have been driving climate change, lately, but rather natural, chaotic oscillations? Spencer thinks that one such oscillation, the Pacific Decadal Oscillation (PDO), may have been the culprit. He fit his simple climate model (Eqns. 1-3 with only one ocean layer) to temperature data for the 20th century, and found that he could explain most of the observed warming! He tried to publish these results in a climate journal, but (he says) biased reviewers and editors maliciously quashed the manuscript. So instead, he decided to take his message directly to the people by publishing this work in his book, *The Great Global Warming Blunder*.

I wrote about this modeling effort in Part 3 of my recent review of Spencer’s book. I even went to the trouble of programming his model into MATLAB and fitting the parameters using least-squares regression. I found that some of the parameters in the model were perfectly covariant, so that there were an infinite number of “best-fit” solutions with climate sensitivities ranging from really low to really high. You see, if you fool around with *alpha* and *beta*, you can control how fast energy builds up in the ocean surface layer, and if you fool around with the depth of the surface layer (*h*), you can control how much water has to be heated up, which affects how quickly the temperature can approach a new equilibrium. He also had a couple other fitting parameters (for a total of five) that I discussed in the review.
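The covariance is easy to demonstrate: in Eqns. 1-3, multiplying *alpha*, *beta*, and *h* by the same factor leaves the model output exactly unchanged while changing the implied climate sensitivity. Here is a quick sketch using a made-up forcing index (all numbers are illustrative assumptions, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal(1200)    # a made-up monthly "PDO-like" index

def run(alpha, beta, h, V, dt=2.6e6):   # dt ~ one month, in seconds
    """One-box model, Eqns. 1-3: d(dT)/dt = (beta*V - alpha*dT)/Cp."""
    Cp = 1000.0 * 4186.0 * h
    T = np.zeros(len(V))
    for i in range(1, len(V)):
        T[i] = T[i - 1] + dt * (beta * V[i - 1] - alpha * T[i - 1]) / Cp
    return T

# Scale alpha, beta, and h together by 1/3: same output, but the implied
# 2x CO2 sensitivity (3.7 W/m^2 divided by alpha) jumps from ~1.2 K to ~3.7 K.
lo_sens = run(alpha=3.0, beta=2.0,       h=100.0,       V=V)
hi_sens = run(alpha=1.0, beta=2.0 / 3.0, h=100.0 / 3.0, V=V)
print(np.max(np.abs(lo_sens - hi_sens)))   # the trajectories are identical
```

Any fit you can get with one sensitivity, you can get with any other, so the fit by itself tells you nothing about which sensitivity is right.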

But even though he could have gotten exactly as good a fit to the data with low or high climate sensitivity, Roy Spencer claimed his modeling provided striking evidence for low sensitivity. The secret was that he didn’t use a normal optimization routine to get his “best fit” parameter values. Instead, he made up a bogus statistical technique that automagically allowed him to obtain a low sensitivity (about 1.3 °C for 2x CO2). Furthermore, by manipulating the starting value of his model temperature series, Spencer was able to make his model fit the first half of the data without much influence from the PDO. Finally, he used wildly unphysical model parameters, e.g., a 700 m ocean mixed layer. When you are fitting a model with 4-5 completely unconstrained parameters, after all, it’s hardly surprising if you can explain some data, but it would be foolhardy to take the fitted parameter values too seriously.

To put it bluntly, I found that this work deserved to be rejected–with prejudice–from the scientific literature.

It turned out that my book review became rather popular, and many of Roy’s blog readers were asking him for a response. He did respond…that he wasn’t going to respond. Why? Because he was working on a paper for the peer-reviewed literature, so he couldn’t be bothered to respond to a mere blog critique. I thought that was kind of funny, given that my review was about work Spencer had published in a book because he claimed the peer-review process had been corrupted, but hey, people have to prioritize. It’s been months, however, and Spencer still hasn’t gotten around to answering my initial criticisms. He has, ironically, had time to publish four more blog posts in which he used variants of the same simple climate model to support his claim that climate sensitivity is low.

**Ocean Heat Content**

In the first of these posts, Spencer drove his model with GISS forcing estimates and fit it to ocean heat content (OHC) data since 1955. Once again, I reviewed his methods and found he had made several elementary mistakes, ALL of which drove his model climate sensitivity lower. When I corrected these mistakes, I got a higher climate sensitivity, within the IPCC’s most probable range of 2-4.5 °C (2x CO2). I also mentioned that it isn’t clear such a simple climate model is really suited for estimating climate sensitivity, especially when only constrained by about 50 years of data. Isaac Held, for instance, had fit a simple climate model just like Roy’s to the 20th century output of a GCM with a climate sensitivity of 3.4 °C, but the sensitivity of the simple model (which was tuned to give the same output!!!) was only about 1.5 °C. No response from Roy so far.

**Ocean Temperature Change With Depth**

Now we come to Spencer’s second and third blog posts of this type (with a follow-up on the third post here), in which he used somewhat more complicated versions of the model. As I mentioned above, these new versions of the model are different in that they represent the ocean with 30-40 layers, each 50 m deep. The heat flux into the top layer was determined by the same old simple climate model (Eqns. 1-2), but then for every ocean layer Spencer added another term to represent “diffusion” of heat from that layer into the next one down (Eqn. 4). In his second blog post, he drove his model with the GISS forcings and fit the output to match a profile of temperature change with ocean depth for the last 40 years, published by the IPCC (2007, WG1, Fig. 9.15). In his third post, he drove the model using both the GISS forcings and “internal” forcing caused by ENSO (Eqn. 3), and fit his model to the temperature evolution over 1955-present or 1880-present. He also compared the temperature change from 1955-present in the different ocean layers to the IPCC curve.

The curve fits look pretty impressive, especially when the ENSO forcing is added in. Even many of the little squiggles in the surface temperature data are matched quite well by the model! What’s more, the model climate sensitivities were only around 1 °C, much lower than the IPCC estimate of 2-4.5 °C!!! Indeed, Spencer began one of these blog posts with the following bold proclamation.

> The evidence for anthropogenic global warming being a false alarm does not get much more convincing than this, folks.

But wait–remember how I criticized Spencer for the wild and crazy curve-fitting adventures he chronicled in his book? And how he decided he wasn’t going to respond? Well, the fact is that he’s making the same kinds of errors again, and patting himself on the back for it. Here are several reasons why nobody should take Roy’s pronouncements of victory seriously.

In Spencer’s original model, he could tune the *beta* and *h* parameters to get ANY CLIMATE SENSITIVITY HE WANTED, with exactly the same quality of curve fit. That was because manipulating *beta* and *alpha* (which determines climate sensitivity) changes the net rate of energy input into the top of the surface layer, while manipulating *h* changes how fast the ocean heats up in response. In his latest posts, it’s about the same story. He can still manipulate *beta* and *alpha* to change the net input into the top of the ocean, but now to change the temperature response, he just has to change the thermal diffusion rate out the bottom! I dinked around with a spreadsheet he provided in one of his posts, and sure enough, I could fit the data just about as well with higher climate sensitivity (within the IPCC range).

That’s a big problem, because Spencer’s entire argument is *statistical* in nature, but he has made no attempt to find out how sensitive his model fits are to the different parameter values. If the model fit is about equally as good with low or high climate sensitivity, after all, then the modeling exercise has given us NO INFORMATION about the relative plausibility of either scenario. It does not count as evidence for ANYTHING, in other words.

Supposing Spencer does try to go back and quantify parameter sensitivity, then good luck with that, because his newer models all have MORE THAN 30 FULLY ADJUSTABLE PARAMETERS (*alpha*, *beta*, and diffusion coefficients for heat transfer between layers). After Tim Lambert over at Deltoid read my review of Spencer’s book, he posted a quotation from the famous mathematician John von Neumann.

> With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

Well, give me more than 30 parameters, and I can fit a trans-dimensional lizard-goat and make rainbow monkeys shoot out its rear end.

Some Spencer-boosters might complain that the GCMs used by IPCC scientists have many more parameters than that. That’s a good point, except that those are typically not “fully adjustable.” When a modeler is using a complicated model, instead of letting all the parameters ride in some kind of statistical free-for-all, he or she typically would constrain most of the parameters to physically reasonable values. For instance, if in his original attempt Spencer had constrained the mixed-layer depth of the ocean (*h*) to a physically reasonable value (say about 100 m), he would have come up with much higher climate sensitivity. Instead, he allowed a 700 m mixed layer depth (!!!!!) to get the answer he wanted.
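The difference constraints make is easy to illustrate with the one-box model: fix the physically constrained parameter and fit only the rest. In this sketch (all values made up for illustration), I generate synthetic “observations” with a known *alpha*, hold the mixed-layer depth at a plausible 100 m, and recover *alpha* with a simple grid search:

```python
import numpy as np

def run(alpha, F, h=100.0, dt=2.6e6):
    """One-box model (Eqns. 1-2) with the mixed-layer depth h held fixed
    at a physically plausible value; F is a forcing time series."""
    Cp = 1000.0 * 4186.0 * h
    T = np.zeros(len(F))
    for i in range(1, len(F)):
        T[i] = T[i - 1] + dt * (F[i - 1] - alpha * T[i - 1]) / Cp
    return T

rng = np.random.default_rng(1)
F = np.linspace(0.0, 3.7, 1200)                       # made-up ramp forcing
obs = run(2.5, F) + 0.01 * rng.standard_normal(1200)  # "data", true alpha = 2.5

# With h constrained, alpha is identifiable: grid-search the misfit.
grid = np.linspace(0.5, 6.0, 111)
misfit = [np.sum((run(a, F) - obs) ** 2) for a in grid]
best = grid[int(np.argmin(misfit))]
print(best)   # recovers a value near the true alpha of 2.5
```

With *h* pinned down, the misfit has a single well-defined minimum; let *h* float as well and the minimum smears out along a whole ridge of (*alpha*, *h*) pairs, which is exactly the degeneracy Spencer exploited.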

In the later versions of his model, Spencer would have to constrain his diffusion coefficients to physically reasonable values, but there’s a problem with that. In the real world, “diffusion” is governed by random molecular motions, and can be described by expressions like Eqn. 4, but it’s typically very, very slow. Since heat transfer in the ocean doesn’t happen so slowly, it’s apparent that much of the heat transfer is due to “advection,” rather than diffusion. Thermal advection is essentially movement of heat with the medium, i.e., in currents, and it isn’t necessarily linearly proportional to the temperature difference between layers of the ocean, as in Eqn. 4. That being the case, I don’t have a clue what “physically reasonable” values for the model’s diffusion coefficients would be.

Let’s ignore that last objection about the form of the model for a moment, and bring up a nitpick about how Spencer set up his model. That is, he set the initial temperature anomalies to zero for ALL the ocean layers. Since heat diffusion is dependent on the temperature difference between layers (Eqn. 4), that means Spencer set his model up so there would be NO HEAT TRANSFER among layers at the beginning of the simulation, and it gradually builds up over time.

**The Challenge**

I could go on with more nitpicks, but I’m going to stop here, because it should be clear that, once again, Spencer has made a big deal out of something that doesn’t have any evidentiary value. So if, as Spencer claims, “[t]he evidence for anthropogenic global warming being a false alarm does not get much more convincing than this,” then can we please move on? Can Roy PLEASE put his toy model down?

I doubt he will, but maybe he will accept this challenge. Instead of complaining about how biased and awful the peer review system has gotten, he should (at the very least) get a statistician to work with him and do the modeling right, and then submit it for publication. Personally, I don’t think the work can be saved, even then. However, I think the exercise of working with someone who knows how to properly make statistical inferences would be enlightening for Roy Spencer.
