Roy Spencer’s Great Blunder, Part 3
Posted on 3 March 2011 by bbickmore
The following is reposted from Barry Bickmore's blog - it's PART 3 of my extended critique of Roy Spencer’s The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists (New York: Encounter Books, 2010). In this part I refer constantly to Spencer's simple climate model, which I explained in Part 1, so make sure to read that first. See also Part 2.
Summary of Part 3: Roy Spencer posits that the Pacific Decadal Oscillation (PDO) is linked to chaotic variations in global cloud cover over multi-decadal timescales, and thus has been the major driver of climate change over the 20th century. To test this hypothesis, he fit the output of a simple climate model, driven by the PDO, to temperature anomaly data for the 20th century. He found he could obtain a reasonable fit, but to do so he had to use five (he says four) adjustable parameters. The values he obtained for these parameters fit well with his overall hypothesis, but in fact, other values that are both more physically plausible and go against his hypothesis would give equally good results. Spencer only reported the values that agreed with his hypothesis, however. Roy Spencer has established a clear track record of throwing out acutely insufficient evidence for his ideas, and then complaining that his colleagues are intellectually lazy and biased when they are not immediately convinced.
It Must Be the PDO!
As I mentioned in Part 1, Roy Spencer believes that climate change is largely controlled by chaotic, natural variations in cloud cover, rather than by external forcing. The idea that there are chaotic, natural climate variations over short timescales of up to a decade or so is non-controversial, but Spencer wants to take it a step further.
So, what might yearly, 10-year, or 30-year chaotic fluctuations in cloudiness do? Maybe the Medieval Warm Period and the Little Ice Age are examples of chaos generated by the climate system itself. (p. 107)
As we saw in Part 2, Spencer tries to scuttle the standard explanation for the ice ages of the past million years because 1) the standard explanation is consistent with the models the IPCC uses to project future temperature trends, and 2) even he doesn't hypothesize how "chaos generated by the climate system itself" can cause trends spanning tens of thousands of years. In other words, the fact that mainstream models of climate change can explain more data is threatening to him.
I believe that the ice core record is largely irrelevant to what is happening today. (p. 30)
Therefore, it is reasonable to suspect that the ice ages and the interglacial periods of warmth were caused by some as yet undiscovered forcing mechanism. (p. 69)
To be fair, I should mention that if Spencer's hypothesis were correct, it would be impossible to apply to the distant past because we have no methods for estimating past cloud cover. So for the moment, let's give him a pass on this issue and see how he accounts for the climate change in the more recent past, when we have had decent meteorological records.
Spencer's hypothesis is that the Pacific Decadal Oscillation (PDO) has been controlling most of the global temperature change over the last century. The PDO is a mode of natural climate variation in the Northern Pacific that oscillates over timescales of a few decades. The Pacific Decadal Oscillation Index (PDOI) is a unitless quantity climatologists have created to describe how strongly the PDO is favoring warming (positive PDOI) or cooling (negative PDOI). The PDOI over the 20th century (subjected to a 5-year running average) is plotted as the green line in Figure 1. For comparison, the global average temperature anomaly (HadCRUT3v, subjected to a 5-year running average) is plotted as the blue line.
You probably immediately noticed that the temperature and PDOI records don't look exactly the same, although there are some positive and negative humps in similar places. However, Spencer actually posits that the PDO constitutes a "forcing" in the system, and there can be some time lag before the system responds to a forcing. "If you understand this distinction, you are doing better than some climate experts" (p. 111).
If the PDO is forcing the system, Spencer reasoned, maybe he could take the PDOI, multiply it by some scaling factor to convert it into W/m^2, and then run that through his simple climate model (see Part 1) to see what comes out. The problem, of course, is that there are several parameters included in the simple climate model, so if Roy wanted to make his scaled PDOI produce the observed temperature data, he had to provide values for those parameters by "adjusting" them to get the best fit. Here is a list of the adjustable parameters he used; a sketch of the model they feed into follows the list.
- alpha = the feedback parameter (see Part 1 for explanation)
- beta = the scaling factor to convert the PDOI into W/m^2
- h = the depth of the ocean mixed layer (see Part 1)
- ΔTo = the temperature deviation from equilibrium at the start of the simulation
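To make the setup concrete, here is a minimal sketch of the kind of model involved: the forward-Euler form of the simple model described in Part 1, driven by a scaled PDOI, with an annual time step. The function and variable names are mine, and the details are illustrative rather than a copy of Spencer's Fortran.

```matlab
% Sketch of Spencer's simple climate model driven by a scaled PDO index.
% Assumes the Part 1 form:  rho*c_w*h * dT/dt = beta*PDOI(t) - alpha*T(t),
% stepped forward with annual Euler steps. Save as pdo_model.m.
function T = pdo_model(pdoi, alpha, beta, h, dT0)
  rho = 1000;             % density of water, kg/m^3 (approximate)
  c_w = 4180;             % specific heat of water, J/kg/K (approximate)
  dt  = 365.25 * 86400;   % one-year time step, in seconds
  Cp  = rho * c_w * h;    % heat capacity of the mixed layer, J/m^2/K
  T    = zeros(numel(pdoi), 1);
  T(1) = dT0;             % temperature anomaly at the start of the run
  for i = 1:numel(pdoi)-1
    % radiative forcing (beta*PDOI) minus the feedback response (alpha*T)
    T(i+1) = T(i) + dt * (beta * pdoi(i) - alpha * T(i)) / Cp;
  end
end
```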
Spencer describes how he proceeded.
Since we don't know how to set the four [parameters] on the model to cause it to produce temperature variations like those in [the 20th century temperature record], we will use the brute force of the computer's great speed to do 100,000 runs, each of which has a unique combination of these four [parameter] settings. And because spreadsheet programs like Excel aren't made to run this many experiments, I programmed the model in Fortran.
It took only a few minutes to run the 100,000 different combinations.... Out of all these model simulations, I saved the ones that came close to the observed temperature variations between 1900 and 2000. Then, I averaged all of these thousands of temperature simulations together.... What we see is that if the computer gets to "choose" how much the clouds change with the PDO, then the PDO alone can explain 75 percent of the warming trend seen during the twentieth century. In fact, it also does a pretty good job of capturing the warming until about 1940, then the slight cooling until the 1970s, and finally the resumed warming until 2000.
If I instead use the history of anthropogenic forcings that James Hansen has compiled..., somewhat more of the warming trend can be explained, but the temperature variations in the middle of the century are not as well captured. I should note that the "warm hump" around 1940 and the slight cooling afterward have always been a thorn in the side of climate modelers. (p. 115)
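Mechanically, the procedure he describes amounts to something like the following sketch. The sampling ranges and the acceptance tolerance are my guesses, since the book doesn't give them; obs is the smoothed HadCRUT3v anomaly as a column vector, and pdo_model is the sketch above.

```matlab
% Sketch of the brute-force search: draw 100,000 random parameter
% combinations, keep the runs that come close to the observations,
% then average the survivors (as Spencer describes doing).
kept = [];
for k = 1:100000
  alpha = 6 * rand();            % feedback parameter, W/m^2/C
  beta  = 3 * rand();            % PDOI scaling factor, W/m^2
  h     = 50 + 1150 * rand();    % mixed-layer depth, m
  dT0   = -1 + 2 * rand();       % starting anomaly, deg C
  T = pdo_model(pdoi, alpha, beta, h, dT0);
  if sqrt(mean((T - obs).^2)) < 0.1    % "came close to the observed"
    kept(end+1, :) = [alpha, beta, h, dT0];  %#ok<AGROW>
  end
end
avg_params = mean(kept, 1);      % average of the retained runs
```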
I digitized the data in Spencer's figure, and have plotted it here in Figure 2.
Next, Spencer took nine years (2000-2008) of satellite radiation flux data and removed the influence of feedbacks via a method related to his work in Spencer and Braswell (2008) (which, as we saw in Part 1, has been discredited), leaving only the radiative forcing. He then plotted the average forcing vs. the average PDOI for each of those years, so that he could use the slope of the data to obtain an empirical estimate of beta, the PDOI scaling factor. The best-fit slope was 0.97 W/m^2, whereas his model fitting produced a best-fit value of 1.17 W/m^2. Pretty close! I've digitized the data in Spencer's graph (p. 119) and reproduced it in Figure 3.
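The slope estimate itself is just an ordinary least-squares regression of yearly mean forcing on yearly mean PDOI. A two-line sketch, where pdoi_yearly and forcing are hypothetical nine-element vectors standing in for the 2000-2008 data:

```matlab
% Regress yearly mean radiative forcing (feedbacks removed, W/m^2)
% on yearly mean PDOI; the slope is the empirical estimate of beta.
p = polyfit(pdoi_yearly, forcing, 1);
fprintf('empirical beta ~ %.2f W/m^2 per unit PDOI\n', p(1));
```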
Attack of the Zealots
It looks pretty impressive, doesn't it? Roy Spencer thought so, too, so he submitted a paper on these results to a reputable scientific journal. He describes the result.
In early 2009 I submitted the work I am describing for publication in Geophysical Research Letters, and the paper was quickly rejected by a single reviewer who was very displeased that I was contradicting the IPCC. Besides, this reviewer argued, because the PDO index and temperature variations... do not look the same, the PDO could not have caused the temperature changes....
This expert's comments revealed a fundamental misunderstanding of how temperature changes are caused, and as a result my paper was rejected for publication. In fact, the editor was so annoyed he warned me not to bother changing and then resubmitting it. My results... had obviously struck a nerve. This is the sorry state of scientific peer review that can develop when scientists let their preconceived notions get in the way. (pp. 111-12)
This episode (and perhaps others like it) is one of the main reasons why Spencer says he is taking his message "to the people".
The peer review process for getting research proposals funded and scientific papers published is no longer objective, but is instead short-circuited by zealots adhering to their faith that humans now control the fate of Earth's climate. (p. xvi)
I'd be the last one to claim that the peer review system in science is perfect, but is it really that broken? Is the research Roy Spencer describes so groundbreaking and brilliant that real scientists--those who are Truly Objective--would have accepted it without raising so many misguided, trivial objections? I decided to conduct my own, more thorough peer review to find out.
Adjustable Parameters
Anyone who deals with numerical modeling knows that if you start using too many adjustable parameters, you can often make your model fit the data very well, but the parameters chosen for the model might not be physically meaningful. That is, there are often a number of distinct combinations of the parameters that would give about equally good results. So when scientists like me see Roy Spencer curve-fitting with four adjustable parameters, red flags go up right away. The typical thing to do in this situation would be to see if we can constrain some of the parameters to a physically reasonable range. We can actually go out and measure the depth of the ocean mixed layer, for example.
To hear Roy tell it, he just let all four parameters ride, but no matter, because the values his computer program chose all came out to be physically reasonable! Here are the "best-fit" values he came up with.
- alpha ≈ 3.0 W/m^2/°C (p. 116; alpha is the feedback parameter)
- beta ≈ 1.17 W/m^2 (p. 119, Fig. 26; beta is the PDO scaling factor)
- h ≈ 700 m (pp. 115-116; h is the ocean mixed layer depth)
- ΔTo ≈ -0.6 °C (pp. 116-117; ΔTo is the starting temperature anomaly in the year 1900)
In the next sections, I'll look at both 1) how Spencer came up with these values, and 2) what to make of his claims that they are physically reasonable. Some readers might recognize that some of my criticisms are the same as or similar to those made by Ray Pierrehumbert about a related episode of Roy's curve-fitting. That's ok; he didn't listen the first time, either, and in some cases I'm going to go into a little more depth.
Spencer's Model in MATLAB
To explore Spencer's claims, I first programmed his model into MATLAB, and connected it to a built-in curve-fitting routine. (If you want a copy of the m-files, just e-mail me.) When I plugged in the values listed above, I got the blue curve in Figure 4. The red curve is the one I digitized from Spencer's figure (see Fig. 2), and the black curve is the HadCRUT3 temperature anomaly subjected to a 5-year running average. Since Spencer's curve is supposedly the average of thousands of individual curves, I'd say mine is quite close, and in any case it's clear I programmed my model to be identical to his.
Any Answer I Want?
Having made sure the model was correct, I used the same parameter values as the starting point when I applied the curve-fitting routine. The fitting routine changed three of the parameter values significantly (alpha = 3.7, beta = 1.55, and ΔTo = -0.66), but the ocean mixed-layer depth (h) stayed pegged right at 700 m. (The resulting model output is shown as the green curve in Figure 4.) So I asked, "What would happen if I set the starting point of h at different values from 50 to 1200 m (in 50 m increments), and then re-fit all of the parameters every time?" I did it, and had the computer spit out a graph with all the "best-fit" curves, so I could compare them. The result is in Figure 5.
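In outline, the experiment looks like this. This is a simplified sketch (using fminsearch from base MATLAB and a plain RMS cost function), not a copy of my actual m-files:

```matlab
% Re-fit (alpha, beta, h, dT0) starting from initial depths of
% 50, 100, ..., 1200 m, and record the best-fit parameters each time.
cost = @(p) sqrt(mean((pdo_model(pdoi, p(1), p(2), p(3), p(4)) - obs).^2));
fits = zeros(24, 4);
for j = 1:24
  h0 = 50 * j;                    % initial mixed-layer depth, m
  p0 = [3.0, 1.17, h0, -0.6];     % start from Spencer's preferred values
  fits(j, :) = fminsearch(cost, p0);
end
% Run pdo_model() with each row of 'fits' and plot the 24 curves together.
```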
Astute readers will be scratching their heads, wondering why there is only one model curve shown. Well, so was I. I combed through my code, beating my head on the desk every once in a while. Finally, I found out that all 24 model curves were there, but they were all exactly on top of one another. That's right, what I'm telling you is that I could generate the exact same best-fit model curve by assuming h values anywhere from 50 m to 1200 m. What happened to the parameter values during the fitting process? Again, the h values didn't budge, and ΔTo was quite stable at around -0.66. However, the alpha and beta values both varied dramatically with different depths. In Figure 6, I've plotted the alpha and beta values vs. h.
What Figure 6 shows is that the best-fit alpha, beta, and h values are all perfectly covariant with one another. That is, no matter what number you pick for h, there will always be a combination of alpha and beta values that will give you the same best-fit model curve. The exact same model curve.
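In hindsight, the reason is easy to see from the model equation itself, written here in the Part 1 form with the PDO forcing entering as beta times the index. Dividing through by the mixed-layer heat capacity shows that the solution depends on the parameters only through the ratios beta/h and alpha/h:

$$\rho c_w h\,\frac{d\,\Delta T}{dt} = \beta\,\mathrm{PDOI}(t) - \alpha\,\Delta T \quad\Longrightarrow\quad \frac{d\,\Delta T}{dt} = \frac{\beta}{\rho c_w h}\,\mathrm{PDOI}(t) - \frac{\alpha}{\rho c_w h}\,\Delta T$$

Multiply alpha, beta, and h by the same constant, and ΔT(t) is unchanged. That is exactly the linear covariance plotted in Figure 6. (Arthur Smith's proof, linked in the update at the end of this post, works this out formally.)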
Roy Spencer said he ran 100,000 different combinations of the fitting parameters, so how on Earth did he just happen to pick a set of values that agreed well with his hypothesis, when he could get the exact same curve no matter how deep he made his model ocean? Let's examine that question.
How Deep is the Ocean?
First, if a 700 m mixed layer is a physically reasonable value, then maybe my objections are moot. Here's what Spencer says about it.
By coincidence, this figure actually matches the approximate depth over which warming has been observed to occur in the last fifty years, which is something the model did not know beforehand. (p. 116)
Even if the water temperature has been measurably heating down that deep, however, Spencer's model assumes that the temperature is uniform throughout that entire 700 m, which is demonstrably false. The thermocline (i.e., the boundary between the warmer, well-mixed layer at the surface of the ocean and the colder deep ocean water) is typically in the range of 50-100 m deep (Baker and Roe, 2009). In a simple model like Spencer's that doesn't account for upwelling and diffusion of heat into the deep ocean, one needs to fudge that figure a little higher. Murphy and Forster (2010) discussed previous work on this question, and it appears mixed-layer depths of 100-200 m (probably closer to 100 m) are reasonable for models such as Spencer's. The irony, of course, is that Murphy and Forster were criticizing Spencer and Braswell (2008) for using only a 50 m mixed layer, which skewed their results. (Spencer provides the spreadsheet he used for the 2008 study here. Go ahead and plug in a 700 m mixed layer, and see what kind of nonsense comes out. You can compare it to what you're supposed to get here.)
Automagic!
The key to understanding Spencer's choice of a 700 m mixed layer depth is in Figure 6. My best-fit values for alpha and beta at h = 700 m were 3.71 W/m^2/°C and 1.55 W/m^2, respectively. My technique was somewhat different from Spencer's--for some reason he averaged together thousands of different curves that seemed to fit the data pretty well, and I assume he averaged the adjustable parameter values from these different model runs, as well. Therefore, he obtained similar, but not identical, values: alpha = 3.0 W/m^2/°C and beta = 1.17 W/m^2. Remember that for Spencer's hypothesis to work, he needed to obtain an alpha value corresponding to negative (alpha > 3.3) or weakly positive feedback. The value alpha = 3.0 corresponds to positive feedback, but it is much weaker than the range Spencer gives for the IPCC models (alpha = 0.9-1.9). So why not choose a mixed layer depth of 800 or 1000 m, and obtain an even larger alpha value? Because the graph in Figure 3 dictates that Spencer also needed a beta value close to 1 W/m^2. And guess what? His ad hoc statistical method automatically gave him answers in the right range!
Did he purposefully manipulate his method to produce just the right values? I actually don't think so. Roy's computer program may have generated just the right values simply due to luck, combined with a marked misunderstanding of his model system and a flawed statistical method. When I generated the 24 model curves in Figure 5, which all fit the data equally well using widely different parameters, I collected the averages of all the best-fit parameters and got: alpha = 3.3 W/m^2/°C, beta = 1.38 W/m^2, h = 625 m, and ΔTo = -0.66 °C. Wow, those are close to Roy's preferred parameters, right? Well, the truth is that at first I ramped the ocean depth from 50 to 1000 m, and some of my average parameter values were too low. All I had to do to get what I wanted was change the upper bound to 1200 m. But that's the point, isn't it? I could get whatever I wanted by judiciously choosing the right boundary conditions... or by dumb luck.
This discussion brings up another intriguing question. What if we were to choose a realistic mixed-layer depth? What kind of alpha and beta values would we obtain then? In Figure 6, the values for h = 100-200 m are alpha = 0.53-1.06 and beta = 0.22-0.44. In other words, the feedback would have to be just as positive as, or more positive than, that assumed by the IPCC models. And as for beta, Ray Pierrehumbert pointed out that if it were as high as Roy Spencer wants it to be, it would produce fluctuations in the net radiation flux that are much larger than actually observed via satellite. He instead suggested a more reasonable value of 0.25 W/m^2 for beta. So what do you know? By assuming a reasonable mixed layer depth, you can obtain a beta value that is consistent with satellite observations, and an alpha value that indicates feedback that is at least as positive as the IPCC asserts. But then, they wouldn't be consistent with Roy Spencer's method for estimating beta shown in Figure 3, or with his hypothesis that climate feedbacks are more negative than the IPCC estimates.
Another Adjustable Parameter?
What about Roy's favored value of ΔTo ≈ -0.6 °C? Why, that's exactly what he would expect, too!
The third parameter is the starting temperature anomaly in 1900: the model chose a temperature of about 0.6 deg. C below normal. This choice is interesting because it approximately matches what the thermometer researchers have chosen for their baseline in Fig. 23. That is, the temperature the model decided is the best transition point between "above normal" and "below normal" is the same as that chosen by the thermometer researchers.
It's difficult to put into words how strange this statement is. The HadCRUT3v temperature anomaly, which Spencer uses, is normalized to a 1961-1990 base period. That is, they calculated the average temperature from 1961-1990 and then subtracted that value from all the raw temperatures. Why did they choose 1961-1990? Because it's a "climatological standard normal", as defined by the World Meteorological Organization (WMO). The WMO explains,
WMO defines climatological standard normals as "averages of climatological data computed for the following consecutive periods of 30 years: January 1, 1901 to December 31, 1930, January 1, 1931 to December 31, 1960, etc." (WMO, 1984). The latest global standard normals period is 1961-1990. The next standard normals period is January 1, [1991] – December 31, 2020.
Therefore, 1961-1990 isn't the only "normal" period they could have chosen to be in line with WMO guidelines--it's just the latest one. Roy Spencer seems to be implying that these "normal" periods approximate some kind of equilibrium state, but the WMO explicitly says otherwise. In a document called "The Role of Climatological Normals in a Changing Climate", the WMO explains that this was the view in the early 20th century, when they first started using the concept of "normals", but that has changed.
It is now well-established (IPCC, 2001) that global mean temperatures have warmed by 0.6 ± 0.2°C over the period from 1900 to 2000, and that further warming is expected as a result of increased concentrations of anthropogenic greenhouse gases. Whilst changes in other elements have not taken place as consistently as for temperature, it cannot be assumed for any element that the possibility of long-term secular change of that element can be ruled out. The importance of such secular trends is that they reduce the representativeness of historical data as a descriptor of the current, or likely future, climate at a given location. Furthermore, the existence of climate fluctuations on a multi-year timescale (Karl, 1988), to an extent greater than can be explained by random variability, suggests that, even in the absence of long-term anthropogenic climate change, there may be no steady state towards which climate converges, but rather an agglomeration of fluctuations on a multitude of timescales.
The near-universal acceptance of the paradigm of a climate undergoing secular long-term change has not, as yet, resulted in any changes in formal WMO guidance on the appropriate period for the calculation of normals (including climatological standard normals).
If there isn't such a thing as a climate "steady state" (a concept similar to "equilibrium"), that's mighty inconvenient for people who want to fit temperature data using a simple climate model like Spencer's. Just for the sake of argument, however, let's assume there is such a thing. Now look at Figure 7, where I have plotted the entire HadCRUT3v temperature series (1850-present). If you had to pick any period in the entire series where it seems like the system might have been hovering around some kind of "equilibrium" or "steady state", what would it be? Personally, I would pick the beginning of the series (1850-1900), and certainly 1961-1990 wouldn't be near the top of my list.
Now let's play around with the model again to see how important the choice of base period and ΔTo is. Figure 8 shows the results when I left the temperature data as is, set the mixed layer depth to 700 m, and re-fit the model to the data with ΔTo values ranging from -0.6 to 0.6 °C. As you can see, in the latter half of the 20th century, it doesn't make a whole lot of difference what the starting value is, but boy, does it matter in the first half! If we compare the overall slope of the data in the first half of the century to the model curves, it's pretty clear that to match the slope of the data you have to have a ΔTo value of about -0.4 to -0.6, and again it's just dumb luck that the base period was chosen so that the actual data starts down in that range.
What would happen if we chose another base period--one that is more likely to represent something like an "equilibrium state"? In Figure 9, I adjusted the HadCRUT3v temperature anomaly to have an 1850-1900 base period. Then I fit the model to the data given 24 different h values ranging from 50-1200 m. This time, it looks like there are two different model curves, instead of 24. (Again, there are multiple curves exactly on top of one another.) One set of model curves (blue) fits the data really well, while the other set (red) fits rather badly. Unfortunately, the curve that fits really well was generated with unrealistic mixed layer depths (h ≥ 700 m) and negative alpha values, which indicate an unstable system. In other words, there is no way to fit the adjusted temperature anomaly data with Spencer's model without making assumptions even he would admit are implausible. So in effect, the base period chosen for the temperature anomaly data was a fifth adjustable parameter.
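For the record, the re-anchoring step is just a constant shift. A sketch, where yr holds the year of each sample in the HadCRUT3v series:

```matlab
% Re-anchor the anomalies to an 1850-1900 base period before re-fitting.
base = mean(obs(yr >= 1850 & yr <= 1900));
obs_rebased = obs - base;   % anomalies relative to the 1850-1900 mean
```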
The Acid Test: More Data
Now let's put the preceding discussion on the shelf, and assume for the sake of argument that Roy Spencer had done everything right in his curve-fitting adventures. The acid test of a model produced this way is to see if it can predict any data other than that used to calibrate it. It turns out that MacDonald and Case (2005) used tree rings from a certain type of hydrologically sensitive tree to reconstruct the PDOI from AD 996-1996. What if we were to use this to drive Spencer's simple climate model, with his preferred parameters? The results are in Figure 10.
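Given the model function sketched earlier, the experiment is a one-liner; pdoi_996_1996 is a hypothetical stand-in for the digitized MacDonald and Case reconstruction:

```matlab
% Drive the same simple model with the millennium-long tree-ring PDOI
% reconstruction, using Spencer's preferred parameter values.
T_millennium = pdo_model(pdoi_996_1996, 3.0, 1.17, 700, -0.6);
```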
Spencer spends several pages (pp. 2-3, 9-11) bashing the "hockey stick" reconstructions of temperature over the last 1000-2000 years, and instead prefers a particular reconstruction that has a much more prominent Medieval Warm Period (MWP), and which, incidentally, has been shown to be riddled with errors. But it seems pretty obvious from Figure 10 that if you drive Spencer's simple climate model with this longer record of the PDOI and Spencer's preferred parameters, you don't exactly produce a big MWP.
Oh, I know. Maybe the tree ring record of the PDOI isn't reliable, even though it matches the 20th century record pretty well. Fine, but the failure of Spencer's model to produce anything even remotely like anyone's reconstruction of global temperatures over the last 1000 years is truly spectacular. Is the tree ring reconstruction that far wrong? (And by the way, the value of ΔTo chosen only affects the results for the first 50 years, so there's no wiggle room there.)
Certainly Spencer might respond that maybe OTHER modes of climate variability were driving the system over this longer time period. Sure, that could be. But then, what do we make of his professed devotion to Occam's Razor?
The simple, natural explanation for most of the global warming experienced from 1900 to 2000 took only a desktop computer and a few days to put together. In contrast, hundreds of millions of dollars have been invested in explaining those same temperature variations with supercomputers using not just one but two manmade forcings: warming from manmade carbon dioxide and cooling from particulate pollution. This looks like a good place to apply Occam's razor, which states that it is usually better to go with a simpler explanation of some physical phenomenon than a more complicated one. (p. 120)
My reading of Occam's Razor tends to favor a model that uses known physical principles to pretty well explain climate changes over timescales from a hundred years to hundreds of millions, rather than a model that explains only the 20th century (sort of, and if you ignore the creative curve-fitting techniques), but has to posit all kinds of unknown climate drivers for time periods that are any longer.
What About Roy?
The take-home message here is that Spencer's curve-fitting enterprise could (and did!) give him essentially any answer he wanted, as long as he didn't mind using parameters that don't make any physical sense. And let's face it, Roy Spencer has established something of a track record in this area. In Part 1, we saw that he plugged unrealistic values (including a 50 m ocean mixed layer depth) into his simple climate model to prove that random variations in cloud cover could skew estimates of the feedback parameter, alpha. In Part 2, we saw that he glommed onto a single 2004 study that cast doubt on the standard explanation for the ice ages, but since then he has ignored the fact that the objections raised have been adequately answered. In this installment, I've shown that he once again employed unrealistic parameter values (including a 700 m deep ocean, rather than 50 m!!!) to get the answers he wanted. Finally, it turns out that years ago Roy Spencer and John Christy, who manage the UAH satellite temperature data set, made several mistakes in their data analysis that made it appear the temperature wasn't rising like all the thermometers were saying. Ray Pierrehumbert summarized,
We now know, of course, that the satellite data set confirms that the climate is warming, and indeed at very nearly the same rate as indicated by the surface temperature records. Now, there's nothing wrong with making mistakes when pursuing an innovative observational method, but Spencer and Christy sat by for most of a decade allowing — indeed encouraging — the use of their data set as an icon for global warming skeptics. They committed serial errors in the data analysis, but insisted they were right and models and thermometers were wrong. They did little or nothing to root out possible sources of errors, and left it to others to clean up the mess, as has now been done.
One of the kindest things Roy said about his scientific colleagues in the book was,
I do not believe that there is any widespread conspiracy among the scientists who are supporting the IPCC effort--just misguided good intentions combined with a lack of due diligence in scientific research. (p. 66)
You're probably expecting that now I'll go off on a tirade about how, even though he complains and complains that all his colleagues are intellectually lazy and biased, Roy Spencer is the one who isn't being Truly Objective, and the editor of Geophysical Research Letters was absolutely right to send him packing with his curve-fitting paper. He would certainly deserve it, given how he's treated his colleagues in The Great Global Warming Blunder, but that's not where I'm going.
It's true that science is about data, and science is about logic, but to a large extent it's also about creativity. Cutting-edge scientific research involves having great ideas, and then following them up to see if they work out. Since scientists are just people, sometimes they can get a little dogmatic about their hunches, which can cause them to ignore contrary evidence, or glom onto isolated bits of evidence that fit with the hunch. That's ok, however, because as the philosopher Paul Feyerabend once pointed out, the history of science has shown that sometimes a little dogmatism can be a good thing. Even if the evidence doesn't favor a brilliant scientist's hunch at the moment, maybe the idea just needs a little work. Continuing to follow a hunch because you think there's enough evidence to show "there's something there," might be just the thing needed to produce a real breakthrough. It's also ok because science is a community effort. Scientists may have their own hunches, but that doesn't mean they'll accept someone else's hunch without a good deal of evidence! This serves as an essential check to separate brilliant inspiration from plausible-sounding nonsense.
My point is that if Roy Spencer has a hunch that chaotic variations in cloud cover are controlling the climate, and that the PDO has been driving recent temperature increases, then more power to him. But let's face it, trying to play the part of the brilliant iconoclast hasn't been working out so well for Roy, lately, because he's been sloppy about lining up his evidence. Probably every scientist has had papers or proposals rejected based on reviews they thought weren't entirely fair. But if every time that happened to me I were to take my ball and go home, as Spencer did when he decided to bypass the peer review system and take his message "to the people," I would be missing out on something uncommon and valuable. That is, I would be missing the chance to develop my ideas in the face of unrelenting and, for the most part, intelligent and informed criticism.
I hope he takes some lessons from the criticisms I've given, and tones down his wild accusations of impropriety and bias against his colleagues. Scientists, including Roy Spencer, are just people, and most of them are trying to do their best.
[UPDATE: Arthur Smith has now done the full mathematical proof for what I showed by playing around with MATLAB. UPDATED UPDATE: Arthur went on to show that, given the mathematical form of Spencer's model, he would have to start the model at a ΔTo of negative a few trillion degrees in 1000 A.D. to produce a suitable anomaly in 1900 and adequately fit the 20th century data. Ok, so if you keep reading down into the comments, it turns out that there are other ways (that aren't physically impossible) to drive the model and get the proper starting point for the 20th century, but they are still wildly improbable, and there's no evidence for anything like that.]
References
Baker, M.B., and Roe, G.H. (2009) The shape of things to come: Why is climate change so predictable?, Journal of Climate, 22, 4574-4589.
MacDonald, G.M., and Case, R.A. (2005) Variations in the Pacific Decadal Oscillation over the past millennium, Geophysical Research Letters, 32, L08703.
Murphy, D.M., and Forster, P.M. (2010) On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Journal of Climate, 23, 4983-4988.
Spencer, R.W., and Braswell, W. D. (2008) Potential biases in cloud feedback diagnosis: A simple model demonstration, Journal of Climate, 21, 5624-5628.