Lindzen and Choi 2011 - Party Like It's 2009

Posted on 6 July 2012 by dana1981

We previously discussed  Lindzen and Choi 2009 (LC09) which used Earth Radiation Budget Experiment (ERBE) instrumental measurements of shortwave and longwave radiation fluxes and observed sea surface temperature (SST) variations in the tropics to evaluate the overall radiative feedback of the climate to SST changes.  The study essentially looked at how much energy escapes into space as the tropics warm, and concluded that climate sensitivity to increasing CO2 is very low (in the ballpark of 0.5°C for a doubling of atmospheric CO2).
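To put that number in context, the standard energy-balance relation links a measured net feedback parameter to equilibrium sensitivity: the warming for doubled CO2 is roughly the doubling forcing (about 3.7 W/m²) divided by the net feedback parameter. Below is a minimal sketch of that arithmetic in Python; the feedback values are illustrative round numbers chosen for the example, not figures taken from LC09.

```python
# Minimal energy-balance sketch: equilibrium climate sensitivity (ECS) from a
# net climate feedback parameter.  ECS = F_2x / lambda_net, where F_2x is the
# radiative forcing from doubled CO2 and lambda_net (W m^-2 K^-1) is how much
# extra energy the planet sheds per degree of surface warming.

F_2X = 3.7  # W m^-2, standard estimate of the forcing from a doubling of CO2

def ecs_from_feedback(lambda_net):
    """Equilibrium warming (K) for doubled CO2, given the net feedback parameter."""
    return F_2X / lambda_net

# Illustrative values only (not the numbers reported in LC09 or by the IPCC):
# a very strong negative feedback (~7.4 W/m^2/K) implies ~0.5 K of warming,
# while ~1.2 W/m^2/K implies ~3 K, in the middle of the consensus range.
for lam in (7.4, 1.2):
    print(f"lambda = {lam:.1f} W/m^2/K  ->  ECS ~ {ecs_from_feedback(lam):.1f} K")
```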

As we noted in our discussion of LC09, the paper contained a number of major flaws.  Lindzen himself has even gone as far as to admit the paper contained "some stupid mistakes...It was just embarrassing."  Lindzen and Choi attempted to address some of those issues in a new paper, Lindzen and Choi 2011 (LC11), which they submitted to the Proceedings of the National Academy of Sciences (PNAS).  LC11 was very similar to LC09, but used both ERBE and Clouds and the Earth’s Radiant Energy System (CERES) data.

PNAS editors sent LC11 out to four reviewers, who provided comments available here.  Two of the reviewers were selected by Lindzen, and two others by the PNAS Board.  All four reviewers were unanimous that while the subject matter of the paper was of sufficient general interest to warrant publication in PNAS, the paper was not of suitable quality, and its conclusions were not justified.  Only one of the four reviewers felt that the procedures in the paper were adequately described.  As a result, PNAS rejected the paper, which Lindzen and Choi subsequently got published in a rather obscure Korean journal, the Asia-Pacific Journal of Atmospheric Science.

As PNAS Reviewer 1 commented,

"The paper is based on...basic untested and fundamentally flawed assumptions about global climate sensitivity"

Here we will discuss the untested and flawed assumptions in LC11 identified by the PNAS reviewers.

Flaw #1: Comparing Tropical Apples with Global Oranges

The first unjustified assumption in LC11 noted by Reviewers 1, 3, and 4 is that correlations observed in the tropics reflect global climate feedbacks.  LC11 only examined data in the tropics (20° South to 20° North), but used that limited data to draw conclusions about global climate sensitivity without providing any real justification as to why tropical responses are representative of global changes.

This was also a problem in LC09, as we previously discussed.  A great deal of energy is exchanged between the tropics and subtropics.  For example, Murphy et al. (2010) found that small changes in the heat transport between the tropics and subtropics can swamp the tropical signal.  Murphy et al. and Chung et al. (2010) concluded that climate sensitivity must be calculated from global data.

To be fair, in the version of LC11 published in the Asia-Pacific Journal of Atmospheric Science, Lindzen and Choi did attempt to use global CERES data from 2000 to 2008, and concluded "the use of the global CERES record leads to a result that is basically similar to that from the tropical data in this study."  However, Lindzen and Choi also noted that the global CERES record contains more noise than the tropics-only data, and they do not explain why their results differ from those of Chung et al.

Flaw #2: Assuming Short-Term Local Feedbacks are Representative of Long-Term Global Feedbacks and Cherrypicking Noisy Data

Similarly, Reviewer 3 noted that LC11 focuses on short-term local changes which might not be representative of equilibrium climate sensitivity, because for example the albedo feedback from melting ice at the poles is obviously not reflected in the tropics.  LC09 and LC11 only looked at short-term local effects and then compared them to long-term global sensitivity, which as Reviewer 3 noted, is yet another "apples and oranges" comparison.

Reviewer 2 expressed two main concerns about the paper, the first of which was essentially echoed by Reviewer 4.

"The first concern is that month-to-month variability of the tropics may have nothing to do with climate feedback processes. Although the paper acknowledges this in its introductory sections, the conclusion is greatly overstated as applying directly to CO2 driven climate change."

This is similar to an issue noted by Trenberth et al. (2010), who found that the LC09 low climate sensitivity result is heavily dependent on the choice of start and end points in the periods they analyse.  Small changes in their choice of dates entirely change the result. Essentially, one could tweak the start and end points to obtain any feedback one wishes (Figure 1). 


Figure 1: Warming (red) and cooling (blue) intervals of tropical SST (20°N – 20°S) used by Lindzen et al 2009 (solid circles) and an alternative selection derived from an objective approach (open circles) (Trenberth et al 2010).

As with the Murphy et al. (2010) conclusion regarding tropical data being insufficient to draw global conclusions, Lindzen and Choi simply did not address this fundamental problem in their analysis, carrying the problem from LC09 into LC11.
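To see why the choice of start and end points matters so much, note that the apparent feedback over any single interval is simply the change in outgoing flux divided by the change in SST between the chosen endpoints, so noise in either series can swing the ratio dramatically. The sketch below uses purely synthetic data (the series, noise levels, and month indices are invented for illustration; this is not Trenberth et al.'s actual analysis).

```python
# Toy illustration of endpoint sensitivity: the 'feedback' estimated from one
# warming or cooling interval is delta(flux) / delta(SST) between two chosen
# months, which is very sensitive to exactly which months are chosen.
import numpy as np

rng = np.random.default_rng(0)
months = 240
sst = np.cumsum(rng.normal(0.0, 0.05, months))   # synthetic tropical SST anomaly (K)
flux = 1.0 * sst + rng.normal(0.0, 0.5, months)  # synthetic outgoing flux anomaly (W/m^2),
                                                 # true slope of 1 W/m^2/K plus noise

def interval_slope(start, end):
    """Apparent feedback over one interval: flux change over SST change (W m^-2 K^-1)."""
    return (flux[end] - flux[start]) / (sst[end] - sst[start])

# Shifting the endpoints by just a few months can change the apparent feedback
# dramatically, even though the underlying relationship is fixed at 1 W/m^2/K.
for start, end in [(10, 50), (12, 48), (14, 52)]:
    print(f"months {start}-{end}: apparent feedback = {interval_slope(start, end):6.2f} W/m^2/K")
```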

Flaw #3: Insufficiently Clear Methodology

Reviewer 2 also had major concerns with the methodology of the paper, which was not explained in sufficient detail for the LC11 analysis to be reproduced.  The results of the paper (that climate sensitivity is less than 1°C for doubled atmospheric CO2) are quite radical, since virtually all other research, using many different lines of evidence, finds that climate sensitivity is very likely between 2 and 4.5°C for doubled CO2.

The long and short of it is that extraordinary claims require extraordinary evidence, and LC11 does not provide anything remotely like extraordinary evidence.  Reviewer 2 specifically noted:

"I am very concerned that further analysis will show that the result is an artifact of the data or analysis procedure."

Flaw #4: Failing to Address Contradictory Research

A general problem underlying several of the previously discussed flaws is the failure of Lindzen and Choi to address the contradictory results published by the Trenberth, Murphy, and Chung et al. groups in 2010.  The entire purpose of LC11 should have been to address those subsequent results, which conflicted with LC09.  Instead, all reviewers voiced the same concern: LC11 did not address the problems identified in LC09.  Reviewer 3 said:

"I feel that the major problem with the present paper is that it does not provide a sufficiently clear and systematic response to the criticisms voiced following the publication of [LC09]"

For example, Chung et al. (2010) attempted to reproduce the LC09 analysis using near-global data and found net positive feedback.  Trenberth et al. also tried to reproduce those results and obtained the opposite result from LC09 - yet in LC11, Lindzen and Choi did not explain how they could arrive at the opposite conclusion from these two other studies while using the same data.

Flaw #5: Cloud Causality

An additional flaw noted by Reviewer 1 involved LC09 and LC11's treatment of clouds as an internal initiator of climate change, as opposed to treating cloud changes solely as a climate feedback, as most climate scientists do. 

"the authors go through convoluted arguments between forcing and feed backs. For the authors' analyses to be valid, clouds should be responding to SST and not forcing SST changes.  They do not bother to prove it or test the validity of this assumption. Again this is an assertion, without any testable justification."

Lindzen and Choi regressed changes in top-of-the-atmosphere (TOA) energy flux due to cloud cover against SST changes at a range of time lags, and found larger negative slopes when the cloud changes happen before the surface temperature changes, versus positive slopes when the temperature changes happen first.  LC09 and LC11 thus concluded that clouds must be causing global warming.
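As a rough illustration of how such a lead-lag regression works, one can shift the flux series relative to the SST series and fit a slope at each offset. This is only a sketch on synthetic monthly data, not a reproduction of either paper's analysis; the series lengths, noise levels, and the -1.5 W/m²/K cloud response below are invented for the example.

```python
# Sketch of a lead-lag regression between a cloud-related TOA flux anomaly and
# SST anomalies.  The data are synthetic: clouds here respond to last month's
# SST, so this only shows the mechanics of the diagnostic, not either paper's result.
import numpy as np

rng = np.random.default_rng(1)
n = 360
sst = np.cumsum(rng.normal(0.0, 0.05, n))  # synthetic SST anomaly (K)
cloud_flux = np.zeros(n)                   # synthetic cloud radiative flux anomaly (W/m^2)
for t in range(1, n):
    # Causality runs SST -> clouds: flux responds to the previous month's SST, plus noise.
    cloud_flux[t] = -1.5 * sst[t - 1] + rng.normal(0.0, 1.0)

def lagged_slope(lead):
    """Regression slope of flux on SST when the flux series leads SST by `lead` months
    (negative values mean the flux lags SST)."""
    if lead > 0:
        x, y = sst[lead:], cloud_flux[:-lead]
    elif lead < 0:
        x, y = sst[:lead], cloud_flux[-lead:]
    else:
        x, y = sst, cloud_flux
    return np.polyfit(x, y, 1)[0]

for lead in (-3, -1, 0, 1, 3):
    print(f"flux lead of {lead:+d} months: slope = {lagged_slope(lead):6.2f} W/m^2/K")
```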

However, Dessler (2011) plotted climate model results and found that they also simulate negative time regression slopes when cloud changes lead temperature changes.  Crucially, SSTs are specified as inputs to those models, so in these models clouds respond to SST changes, but not vice-versa.  This suggests that the lagged result first found by Lindzen and Choi actually arises from variations in atmospheric circulation driven by changes in SST.  Contrary to Lindzen's claims, it is not evidence that clouds are causing climate change, because in the models which successfully replicate the cloud-temperature lag, temperatures cannot be driven by cloud changes.

Reviewer 2 also noted a quandary for Lindzen and Choi with regard to their assumption that clouds are driving SST changes and their attempts to estimate climate sensitivity:

"If the cloud variations are driving the SST, then these data are not appropriate for computing climate feedbacks, as they are disequilibrium forced fluctuations."

LC11 - Overhyped and Under-Supported

Ultimately the main flaws in LC11 are the same as those in LC09 - Lindzen and Choi simply did not address most of the problems in their paper identified by subsequent research, and for the few issues they did address, they failed to explain why their results differ from those of researchers who attempted to reproduce their methodology.

Nevertheless, LC09 and LC11 have become extremely over-hyped.  Frequently climate contrarians (for example, Christopher Monckton and John Christy) claim that mainstream climate sensitivity estimates rely wholly on models (which is untrue), whereas lower climate sensitivity results are based on observational data.  When they make this assertion, they are referring to LC09 and LC11.

This is a key point for climate contrarians, whose arguments are effectively a house of cards balanced atop the 'low climate sensitivity' claim.  Since the body of research using multiple different approaches and lines of evidence is remarkably consistent in finding an equilibrium climate sensitivity of between 2 and 4.5°C for doubled CO2 (whereas a 'low' sensitivity would be well below 1.5°C), climate contrarians reject that body of evidence by (falsely) claiming it is based on unreliable models, and attempt to replace it with this single study by Lindzen and Choi under the assertion that it is superior because it is observationally based.

However, subsequent research identified a number of fundamental errors in LC09 which simply were not addressed in LC11, which is why the PNAS reviewers - even those chosen by Lindzen himself - unanimously agreed that the journal should not publish the paper.  While LC09 and LC11 are based on observational data, they also rely on a very short timeframe, mainly on data only from the tropics, and their methodology contains a number of problems.

Quite simply, this one paper is insufficient to overturn the vast body of evidence which contradicts the 'low climate sensitivity' argument.

The information in this post has been incorporated into the rebuttal to "Lindzen and Choi find low climate sensitivity."  Thanks to Kevin Trenberth and Andrew Dessler for their feedback on this post.


Comments

Comments 1 to 18:

  1. "some stupid mistakes...It was just embarrassing." nice one :)
  2. Dana,
    - You seem to suggest that because Lindzen conceded some 'stupid mistakes' in LC09, this is somehow a concession that it contained 'major flaws'. Saying 'stupid mistakes' is hardly the same as 'major flaws'.
    - You assert that 'Two of the reviewers were selected by Lindzen, and two others by the PNAS Board.' This is completely wrong. The two reviewers selected by Lindzen were Ming-dah Chou and Will Happer, but these are not among the four reviewers you refer to. (And of course the two reviewers Lindzen selected recommended publication, which is a relevant but omitted detail.)
    - You write, 'As a result, PNAS rejected the paper, which Lindzen and Choi subsequently got published in a rather obscure Korean journal'. This is, again, wrong. PNAS did not reject the paper; they asked Lindzen and Choi to revise and resubmit. The authors, however, believed that dealing with reviewers #1 & #2 was a waste of time, and decided to submit elsewhere.
    In general, you have mentioned all the negative comments made by the reviewers and ignored all the positive comments. There is no discussion of the fact that Lindzen and Choi, for instance, have demonstrated that the methods of Forster and Gregory, Dessler 2010 and others, using the simple regression, are almost certainly flawed. What is good about your post, however, is the important reminder that there is still no peer-reviewed response to LC11.
  3. It is worth noting that the LC11 article in Asia-Pacific Journal of Atmospheric Science is considerably longer than the LC10 PNAS submission that received the reviewer comments described above (I suspect 'skeptics' might object to this opening post on those grounds). However, to a large extent this is due to the page limits in the PNAS journal - the PNAS submission included a very large appendix that was brought into the APJAS article main body, and having read both I find the content substantially identical. The final LC11 paper includes all of these issues: poorly described methodology, tropical rather than global data, no sensitivity testing for the time periods examined, no real addressing of the multiple papers that found much higher sensitivities from the same data, and the rather astounding conclusion that clouds are a forcing rather than a feedback. That final item - clouds as a forcing - appears to be a common element in several attempts to prove climate sensitivities to be low. Dr. Spencer took much the same approach in his book The Great Global Warming Blunder and Spencer and Braswell 2008, where he believes most observed climate change is due to chaotic changes in cloud cover. From that, and an overly simple climate model, he obtained low sensitivity values. This is just foolishness - Dessler 2011 (referenced above) and others have shown that the techniques used in LC11 derive the same low values and cloud forcing from models where the causality operates the other way around - a false conclusion. And it is contradicted by the responsiveness of humidity, and thus clouds, to temperature, as a feedback. It's a bad analysis. I suspect, however (just my opinion here), that cloud forcing is attractive to skeptics because such analysis, while flawed, leads to low sensitivity values they find attractive - a confirmation bias temptation.
  4. alexharv074 - "...there is still no peer-reviewed response to LC11" That would be incorrect - Dessler 2011 as referenced above is a direct rebuttal of LC11 and of Spencer and Braswell 2011, both of which argue (incorrectly, according to D11) that clouds are a forcing. dana1981 - The Dessler 2011 link in the body of the article is broken.
  5. alexharv074 @2: 1) Dana did not suggest that Lindzen conceded that there were major flaws in LC09. Rather, he said that we (SkS) had noted the existence of major flaws, and that Lindzen had admitted the existence of stupid mistakes. Both statements are true. 2) As to your further points, I quote from the letter to Lindzen:
    "Dear Dr. Lindzen, The Board appreciates your cooperation in soliciting additional reviews on the paper you recently contributed to PNAS. We consulted the two experts you approved and two others selected by the Board. All four reviews (enclosed) were shared with two members of the Board before reaching a final decision. One of the Board members noted:
    All of the reviews are thoughtful assessments of the strengths and weaknesses of the manuscript in question by leading experts, so they provide valuable hints for (possibly) improving the paper…I sympathize with Rev. 4's comments who concludes that the new paper simply has to explain why the opposite conclusions from the same data set by Trenberth et al. are flawed. If that could be achieved through a major review of the current version (hopefully accounting also for other important referee remarks) then the article would provide a crucial contribution to a most relevant scientific debate.
    In light of these additional critiques, the Board concurs that the current paper must be declined for publication. I am sorry we cannot be more encouraging at this time and hope the additional reviews will help in revising the work."
    (My emphasis) This letter directly contradicts all of your major claims. Specifically, a) It specifies that two of the four reviewers whose reviews were enclosed were approved by Lindzen, and two were selected by the Board, contradicting your claim that Lindzen did not select two of those four reviewers; b) Reading the reviews, it is apparent that all four reviewers did not consider the paper of sufficient quality, and that all four reviewers did not think Lindzen and Choi had justified their conclusions; c) The letter explicitly states that the paper is declined for publication, i.e., that it has been rejected. It certainly does not invite L&C to resubmit, contrary to your claim. Finally, none of the reviewers conclude that L&C has shown earlier papers to be flawed, and indeed one of them explicitly criticizes the paper for failing to address the arguments of previous papers, specifically Dessler 2010. If you are going to try and spin the debate, may I suggest you do so on details which are not so easily checked and rebutted.
  6. KR, if you insist, Dessler 2011 is a response - a rushed, half-hearted response that probably shouldn't have been published. When I suggested at RealClimate that it is not a serious response, I don't recall anyone disputing this. I also don't recall anyone claiming that Dessler had settled the matter. The arguments were about justifying why there would never be a serious response to LC11.
  7. alexharv074 - Your claim, "...there is still no peer-reviewed response to LC11", is flatly incorrect. If you wish to discuss the merits of Dessler 2011, in regards to LC11 or SB08/SB11 (a different thread), then present your arguments. You have, to date, not done so. Dessler 2011 is quite short - 4 pages. I think this primarily speaks to how clear the errors in LC11 and SB11 really are. If those authors or others disagree, they should then comment on D11. However, as I and Tom Curtis have pointed out, you have yet to make a supported (or, in my opinion, supportable) claim in this thread. And just insulting D11 (with phrases such as "a rushed, half-hearted response that probably shouldn't have been published") is not making your case.
  8. Tom Curtis, not to put too fine a point on it but "approving" of editors selected by PNAS is not the same as "selecting" your editors. Moreover, Lindzen makes clear that the editors PNAS claimed he "approved" were not in fact the editors he did approve. Meanwhile, your bolded text says that the 'current' paper is rejected and hopes that the comments will assist in revising. I assume that one revises a paper only with the intent to resubmit.
  9. KR, the paper is indeed quite short and Dessler clearly had written it with a view to refuting Spencer and Braswell 2011. He seems unaware that Lindzen and Choi's argument is not the same as Spencer and Braswell's. He asserts, for instance, that Lindzen and Choi's paper claims that "clouds cause climate change". In fact, their paper says nothing about clouds or the cause at all. It relates OLR to changes in SST. It is hard to take the paper seriously when it is not even clear that Dessler has even read the paper he briefly criticises.
  10. alexharv074 @8, Lindzen himself has confirmed that one of the reviewers was Dr Patrick Minnis, a person suggested by him, and not by PNAS. The second reviewer approved by Lindzen may have been Albert Arking, who was not suggested by PNAS, or Veerabhadran Ramanathan, who was suggested by PNAS and accepted by Lindzen as an appropriate reviewer. Whichever of the two, the fact that one reviewer suggested by Lindzen and not previously suggested by PNAS was used shows that had Lindzen expressed a serious objection to all of those suggested by PNAS, then he would have had two reviewers entirely of his own choosing. I assume that a paper will be revised for resubmission even if the resubmission is not to the journal it was originally submitted to. Therefore wishing that the reviews will be helpful for the purpose of revision in no way invites resubmission to the journal. The most that can be said is that the editors did not actively forbid such resubmission.
  11. alexharv074 - "He (Dessler) asserts, for instance, that Lindzen and Choi's paper claims that "clouds cause climate change". In fact, their paper says nothing about clouds or the cause at all." It really appears that you have not read LC11 - the word "clouds" is one of the most common nouns in the paper. The sensitivities LC11 derive are with temperatures and radiation lagging cloud changes by several months (1 month for short wave radiation, 3 months for outgoing LWR, IIRC), which is a temporal causal statement (causes do not lag effects!), and they state in section 5:
    Based on our simple model (...), this ambiguity results mainly from nonfeedback internal radiative (cloud-induced) change that changes SST.
    (Emphasis added) They do acknowledge in LC11 that in LC09 they had erroneously used the same causal reversal, with clouds leading temperature changes, in analyzing a number of models where causality specifically goes the other direction - models where SST is an explicit input value and cloud cover results from it. In LC11 they limited the lag values for the models to zero, which in my opinion is equally unphysical (instantaneous response). But they keep the LC09 cloud temporal lead for analyzing the observational data. That seems pretty clear to me, and Dessler 2011 is entirely relevant. "Nonfeedback internal radiative (cloud-induced) change that changes SST" is a claim of forcing, not of feedback.
  12. alexharv074 - "He (Dessler) asserts, for instance, that Lindzen and Choi's paper claims that "clouds cause climate change". In fact, their paper says nothing about clouds or the cause at all." (emphasis added) A minor point here: LC11 lists a total of five keywords after the abstract: "Climate sensitivity, climate feedback, cloud, radiation, satellite" (emphasis added). I'm not impressed with your understanding of the paper you are supporting...
  13. alexharv074 - Tom Curtis and KR have already said everything I would have said in response to your comments. Everything in the above post is supported by documentation, be it the PNAS letter or the papers being referenced. Your contradictory comments thus far have been wholly unsubstantiated. In short, if you think something in the post is wrong and should be corrected, provide the documentation to support your position. If you are correct, I will amend the post accordingly, but thus far your criticisms are all directly contradicted by the source documentation linked in the post.
  14. May I also note that Alex's "still no peer-reviewed response" is in many ways problematic, as LC11, as noted on numerous occasions, does not address many issues already raised for LC09. That is, LC11 was in essence already rebutted by the various studies criticizing LC09!
  15. "I suspect, however (just my opinion here), that cloud forcing is attractive to skeptics because such analysis, while flawed, leads to low sensitivity values they find attractive - a confirmation bias temptation."
    There are other implications to this. It would mean that the climate has not only a low sensitivity to CO2, but a lower sensitivity to human activities that could raise the temperature generally. If it's the clouds that are the forcing (instead of a feedback) and they drive warming, then there's no way to stop it through regulation or disruptive energy technologies. Furthermore, the reflective properties of aerosol emissions would work to counteract hypothetical cloud-driven warming, so there would be a stronger argument against regulating industries that send aerosols up smokestacks.
  16. Clouds as a forcing? It's like saying that automobile airbags are a cause of collisions.
  17. "The authors, however, believed that dealing with reviewers #1 & #2 was a waste of time, and decided to submit elsewhere." (alexharv074, #2) As a young engineer I learned that review comments indicate that you need to revise your content. It is very likely that other readers will have the same reactions to your writing. (An early mentor said, "If it can be misunderstood, it will be misunderstood," in encouraging me to address disagreeable comments.) After reading the four reviewers' comments, I see no reason why they should not be addressed. Also, I do not see how reviewers #3 and #4 were any more agreeable than the first two. Even if the authors lost confidence in getting the paper into PNAS, it would be unthinkable to ignore the review comments and turn the same text over to another journal, whether more or less prestigious. And that is beyond the point made by each of the four reviewers that the LC09 critiques were barely addressed.
  18. "Even if the authors lost confidence in getting the paper into PNAS, it would be unthinkable to ignore the review comments and turn the same text over to another journal, whether more or less prestigious." Very well said, Lambda. That is indeed, one aspect of how science works, and of why peer review is useful.
