Comments 74001 to 74050:

  1. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    Anne-Marie Blackburn @38, In my post to muoncounter at #33 I link to an article that does go into the effect of global warming on the Texas drought. The author's conclusion is that it accounts for about 0.9°F of the total 5.4°F above normal for the Texas summer.
  2. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Matthew @99:
    1) You are projecting temperatures with a linear trend, so of course your projections understate the IPCC projections.
    2) Taking a 5-year mean at the start and the end of a period is not an accurate method of determining a linear trend in any event. Mathematically, therefore, you cannot do what you purport to do (find a trend) with the methods you use.
    3) If you take a five-year mean, the value arrived at is the value for the median year of the mean. Consequently, when you take a 2006-2010 mean, you determine a value for 2008, not 2010 as you suppose.
    4) Most importantly, El Niños and La Niñas do not neatly alternate. Sometimes, as for example 2002-2007, you get a string of El Niños with only neutral conditions or a weak La Niña (2006) intervening. Nor are all ENSO fluctuations of equal strength. The 1997/1998 El Niño was particularly strong, while the following 1999-2001 La Niña episodes were moderate. Consequently a simple five-year mean will not eliminate ENSO effects from the data. For example, a 5-year mean centered on 2000 would include a strong El Niño year, two moderate and one weak La Niña, and a moderate El Niño, probably resulting in a slightly positive (El Niño) average. A five-year mean ending with 2011 will include a moderate, a strong and a very strong La Niña and two moderate El Niños, probably being net negative as a result.
    As noted @23, Tamino has already produced temperature indices adjusted for ENSO, volcanism, and solar variation. The result shows a 0.17 degree per decade warming, just shy of the 0.18 projected by the AR4 A2 multi-model mean. That is certainly well within error.
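To make points 2) and 3) above concrete, here is a minimal Python sketch (synthetic data with an assumed trend of 0.018 °C/yr, not the actual GISS record) comparing an ordinary least-squares trend with the difference-of-endpoint-means shortcut:

```python
# Minimal sketch: OLS trend vs. differencing five-year endpoint means.
# The series is synthetic; the true trend of 0.018 C/yr is an assumption
# chosen for illustration, not a value from the comment above.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2011)                 # 1990..2010 inclusive
temps = 0.018 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares trend over the full record
ols_slope = np.polyfit(years, temps, 1)[0]

# Endpoint-means shortcut: mean(1990-1994) vs mean(2006-2010).
# Each five-year mean represents its window's median year (1992 and 2008),
# so the two means are 16 years apart, not 20.
m_start, m_end = temps[:5].mean(), temps[-5:].mean()
slope_naive = (m_end - m_start) / (2010 - 1990)   # mis-dated denominator
slope_dated = (m_end - m_start) / (2008 - 1992)   # median-year denominator

print(f"OLS trend:                 {ols_slope:.4f} C/yr")
print(f"endpoint means (naive):    {slope_naive:.4f} C/yr")
print(f"endpoint means (re-dated): {slope_dated:.4f} C/yr")
```

Even with correct dating, the endpoint estimator uses only ten of the twenty-one data points, so it remains noisier than OLS; and as the comment notes, no amount of re-dating removes the ENSO state baked into each five-year window.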
  3. Anne-Marie Blackburn at 22:13 PM on 25 September 2011
    Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    Norman One thing I find interesting about your approach to the possible link between climate change and extreme weather is the very narrow way in which you think climate change can affect extreme weather. You say that many extreme events are caused by blocking events. This is correct, but many are not linked to blocking events. So I really don't think this is a valid premise to start with, and I've not seen any scientist claim that this is the only way to determine the link we're trying to establish. Also, let's say, hypothetically, that a blocking event is purely natural in cause; how can you show that higher temperatures or water vapour levels brought on by climate change are not going to have an impact on a drought or flood caused by that blocking event, for example? What mechanism do you suggest nullifies the role of climate change in extreme weather when a blocking event is involved?
  4. Lessons from Past Climate Predictions: IPCC AR4 (update)
    It looks like Carrick has joined Lucia and the others on this thread who cannot criticize the conclusion of the post, so they make up unsupportable accusations of cherry picking to distract readers. The conclusion of the post is: the IPCC projections match the last 5 years of temperature reasonably well but are a little low (within the error bars). If you contest the conclusion, please present a correct graph that shows the conclusion is in error (make sure that you have not cherry picked the start time). {snip} Carrick: please use at least 30 years of data to compare temperature trend records (your data only covers 20 years), or people will think you are cherry picking. You will find your differences mostly go away. Why did you make such a transparent argument here? {snip}
    Moderator Response: [grypo] Please, take it easy on the inflammatory language and insinuations. Thanks!
  5. Lessons from Past Climate Predictions: IPCC AR4 (update)
    So I figured out a 5-year smoothing average on a 1980-1999 baseline for GISS data to remove the ENSO... Here is what I got:
    1987-1991 averages to 0.02 C; 1997-2001 to 0.164 C; 2006-2010 to 0.316 C.
    The 20-year equation for 1990-2010 is f(x) = 0.0148x + 0.02:
    f(20) = 0.0148(20) + 0.02 = 0.316 C -- check!
    f(30) = 0.0148(30) + 0.02 = 0.464 C for 2020
    f(40) = 0.0148(40) + 0.02 = 0.612 C for 2030
    The ten-year trend for 2000-2010 passes through (0, 0.164) and (10, 0.316), equation f(x) = 0.0152x + 0.164:
    f(10) = 0.0152(10) + 0.164 = 0.316 C -- checks! for 2010
    f(20) = 0.0152(20) + 0.164 = 0.468 C for 2020
    f(30) = 0.0152(30) + 0.164 = 0.62 C by 2030
    So yes, it is quite a bit off the models. Here they are (year: A1B, A2):
    2010: 0.408, 0.423
    2020: 0.684, 0.615
    2030: 0.944, 0.809
    I did a smoothing of 5 years with the new maps to get the means: 1987-1991 (0.02 C), 1997-2001 (0.164), 2006-2010 (0.316); each one of these has Niños and Niñas. I had to, as there is no map of the GISS at the 1980-1999 baseline. I put together a 20-year equation from 1990-2010 and worked it out to 2020 and 2030.
    Response:

    [DB] The link to your graphic is broken.

  6. Ocean Heat Content And The Importance Of The Deep Ocean
    Mlyle @ 11 - the average depth of the ocean is 4,300 metres according to NOAA. See the hyperlink provided in the post. CharlieA @ 12 - Figures 1 & 2 are from the model runs. Wingding @ 16 & 17 - One only has to look back over the last half-dozen decades to see that the slowdown in the 700-metre ocean surface layer is not unprecedented - in fact, greater-than-decade-long slowdowns have occurred. See Levitus (2009). In other words, what we're probably looking at is natural variability: periods of little upper ocean warming balanced against periods of large upper ocean warming. Again, this is evident in the observational record of the 20th century. Dean @ 18 - The idea is that if the upper ocean has a limited capacity for warming, and it is the principal source of atmospheric warming (covering around 70% of Earth's surface), how can the atmosphere continue to warm? All very hypothetical of course. Paul Magnus @ 21 - global warming has already seen an increase in the intensity/frequency of La Niña/El Niño in the 20th century. What lies ahead is uncertain - the models don't seem to agree. Heat (as in longwave radiation) doesn't warm the upper ocean. The surface ocean is warmed by solar radiation and loses heat to the cooler atmosphere above, thus making surface atmospheric temperatures warmer. Increased greenhouse gases change the relationship by warming the top of the ocean's 'cool skin' layer. This lowers the temperature gradient in the skin layer, resulting in less heat escaping to the atmosphere and causing the ocean to steadily accumulate heat. I've written a post on this topic, which should be published soon after the post on Meehl (2011).
  7. Galactic cosmic rays: Backing the wrong horse
    The start is much better Muon. The reason I raised it is because a lot of people are not aware of the experiment and the issues it has raised. To a 'novice' the initial paragraph may have been confusing.
  8. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Carrick @94 again suggests that Dana has been guilty of cherry picking. It is a rather bizarre accusation given that Dana started his comparison with the first year of the projection. Of course, as those who make it know very well, in short trends changing the start or finish date by a year can make a large difference. They know, in fact, that shifting the start date back a year will make a difference to the trend. That makes the accusation worth looking at closely. Carrick claims that Dana's start year is an outlier, and it is indeed cooler than any other year in the 2000-2011 period. There is a reason for that - it was a moderate La Niña year. Of course, 2011 was also an outlier in that respect. Indeed, the first months of 2011 (and hence Dana's end point) saw the strongest La Niña in over thirty years. I'm pretty sure Carrick knows that too, but you don't see him insisting that Dana finish his trend analysis in December 2010 because 2011 is an outlier. It is only cool periods early in the trend that Carrick believes should be expunged from the analysis. You can see what is going on in this detail of the Multivariate ENSO Index: delaying the start point of the analysis until one year after the start of the projection shifts us from a moderate La Niña to neutral (but slightly cool) conditions. It would make a very large difference in a plot of the MEI trend, both by shortening the interval and by raising the start point. It would also make a difference to the temperature trend. But excluding 2000 as a start date because doing so will give a flatter trend is cherry picking. In fact, the suggestion of a 2001 start date is not the only one that has been made by Lucia, Carrick and cohorts. Suggestions have even been made that 2004 (a moderate El Niño) should have been chosen as a start date. Indeed, even more bizarrely, Lucia has even suggested that running the trend to the most recent date available is also grounds for an accusation of cherry picking. Apparently, in order to be absolutely free of any taint of cherry picking according to Lucia's rabble, Dana needed to start the trend between 2001 and 2004, and finish it in a strong La Niña year (2011 if you must, but 2008 by preference). In simple terms, in order to avoid accusations of cherry picking by Lucia's rabble, Dana needed to deliberately cherry pick for a low trend.
  9. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Carrick @90, contrary to your claim, the NCDC does not include data north of approximately 80 degrees North, or south of 60 degrees South (except for a few isolated stations). This can be seen clearly on this plot of the temperature anomaly for March 2011 (chosen because it has a minimum number of 0-degree anomalies to confuse the issue of coverage). (Clicking on the image links to the NCDC so you can compare multiple graphs.) I have also downloaded the ncdc_blended_merg53v3b.dat file and confirmed directly that the relevant cells are in fact missing. So my original claim that both HadCRUT and NCDC do not show polar regions stands. NCDC is preferable to HadCRUT in that it at least shows remote locations in South America, Africa, Australia, Siberia and the Canadian Arctic, unlike HadCRUT. Nevertheless, gistemp remains superior in coverage to both its rivals. In your follow-up you suggest comparing DMI and gistemp. I have already shown one such comparison in my comment @23. Based on that comparison, gistemp is running cool in the Arctic.
  10. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    Norman#34: "stating extreme weather events are like loaded dice is an incorrect view" Norman, it's a metaphor. When the expected range is 2-12 and 13s start popping up, the dice aren't loaded; there's a die with a 7 on it. Dr. Tobis says this best: A truly bizarre season occurs in a particular place. Either these extraordinary events are connected, which is perhaps unlikely, or they are unconnected, which is extremely unlikely. That is, you are asking for a bizarre coincidence. But now we add up the number of bizarre coincidences, for each of which John [Nielsen-Gammon] can make comparable arguments. The tornado outbreak this spring. The huge blocking event in Asia last summer which did so much damage in central Russia, Pakistan, and parts of China. The fires in Australia in 2009 and the floods this year. The floods in the Midwest. Heat waves in Europe. None of these are clearly part of local trends. None of these are particularly predicted in the literature, and as far as I know the GCMs don't indicate these things happening. But, here's the thing: they are happening.
  11. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Carrick, as DB suggests, please tone it down. I don't see why we can't have a civil discussion without becoming so abrasive. As we already discussed on Lucia's blog, choosing 2000 as the start date for the analysis was not "a mistake", nor was it a cherrypick. That's the year that the IPCC AR4 began running its models. To exclude the first year of the model run, as you suggest, would be a cherrypick. And to exclude any available data when we're only looking at 11 years' worth or so would be unwise. Removing the effects of ENSO is another possible approach, but an incomplete one. What about solar effects? And AMO? And volcanoes? And anthropogenic aerosols? If you're going to start filtering out short-term effects, you need to filter them all out, and that's a major undertaking. The point of this post, as in most of the 'Lessons' series, is merely to get an idea about how well model projections and data have matched up to this point. As I noted in the post, there's really not enough data since the AR4 to make any concrete conclusions. If you disagree with my approach, you're free to do the analysis however you'd like on your own blog. But if you're going to keep posting here, please take the time to read our comments policy. This isn't Lucia's or Watts' or Bishop Hill's blog. Accusations of deception and inflammatory tones are not allowed here. Please keep it clean.
  12. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    I don't have a Kindle or Kindle SW. Has anyone used Kindle SW on a Mac or PC and found it useable and useful?
    Moderator Response:

    [DB] I use my Kindle to store PDFs of science papers from my PC.  Works well for travel and camping trips.

    [grypo] Kindle for PC works well. Kindle also offers apps for most smartphone OS (iOS, Android, etc)
  13. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Just one follow-up: here are the trends 1990-now (°C/decade):
    ecmwf   0.151
    giss    0.182
    hadcrut 0.149
    ncdc    0.153
    GISTEMP runs a bit high; the other three appear to be in complete agreement. There are legitimate issues with how GISTEMP computes the upper Arctic, and this seems to call that method further into question. For an independent test of GISTEMP, I need mean temperatures averaged over 80°-90° to do a direct comparison with DMI (which is based on instrument measurements), but my suspicion is you'll find it runs hot compared to more physics-based models (both ECMWF and NCDC are heavily physics-based; HadCRUT and GISTEMP are basically Tonka toys in comparison, IMO). ClearClimateCode provides the gridded data I would need to make a direct comparison with DMI, but they are in a binary format I haven't gotten around to decoding. If anybody here is a data sleuth, here is a link to a rasterized version of DMI, updated daily. If anybody here has a clue on how to decode the CCC grid files, I'd appreciate a bone thrown my way on that.
    Response:

    [DB] Please note WRT DMI:

    "DMI averages the data based on a regular 0.5 degree grid. That means it weights the region from 88.5N to 90N the same as the region from 80N to 80.5N despite the fact that there's a 40-fold difference in area.

    Ergo, the DMI value is very strongly weighted to the area immediately around the Pole and neglects the warming areas around the periphery."

    Essentially, the DMI "runs cold".

    H/T to Peter Ellis at Neven's.
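The weighting problem described in this response is easy to demonstrate. Below is a minimal Python sketch; the anomaly profile is invented for illustration, but the cosine-of-latitude area factor, and the roughly 40-fold area ratio between the outermost and innermost 0.5° bands, follow directly from geometry:

```python
# Minimal sketch of why an unweighted average on a regular 0.5-degree grid
# over-weights the Pole. The anomaly profile below is hypothetical.
import numpy as np

lat_centres = np.arange(80.25, 90.0, 0.5)      # 0.5-degree band centres, 80-90N
area_w = np.cos(np.radians(lat_centres))       # grid-cell area scales as cos(lat)

# ~39-fold area difference between the 80-80.5N and 89.5-90N bands:
print(f"area ratio: {area_w[0] / area_w[-1]:.1f}")

# Suppose (hypothetically) the periphery warms while the Pole does not:
anomaly = np.linspace(2.0, 0.0, lat_centres.size)  # +2C at 80N down to 0C at 90N

unweighted = anomaly.mean()                        # regular-grid, DMI-style mean
weighted = np.average(anomaly, weights=area_w)     # area-true mean
print(f"unweighted: {unweighted:.2f} C   area-weighted: {weighted:.2f} C")
# The unweighted mean comes out cooler, i.e. this averaging "runs cold".
```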

  14. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    34, Norman,
    I just wish on these threads that more mechanisms would be developed to demonstrate how global warming will create more severe weather.
    Did you consider buying and reading the book that is the subject of this post? If not, why not?
  15. Lessons from Past Climate Predictions: IPCC AR4 (update)
    I wanted to post a clarification to some comments made by Tom Curtis, who makes the following claim:
    2) In fact there is good reason to believe that two of the indices understate trends in GMST, in that they do not include data from a region known to be warming faster than the mean for the rest of the globe. In contrast, while there are unresolved issues relating to Gistemp, it is not clear that those issues have resulted in any significant inaccuracy. Indeed, comparison of Gistemp north of 80 degrees North with the DMI reanalysis over the same region shows Gistemp is more likely to have understated than overstated the trend in that region:
    I apologize for missing it earlier and any response that may have set him right - I am busy getting ready for a trip and only had a chance to skim the comments - but this comment is in error. Actually, of the three regularly updated series, NCDC probably has the best approach among them, and were I to pick one to put my money on, it would be this one. Contrary to Tom's claims, NCDC does in fact interpolate: it uses something called "optimal interpolation" combined with empirical orthogonal functions to reconstruct the global mean temperature. Here's a standard reference on it (to quote from my comment on Lucia's blog): NCDC is the only method that incorporates fluid mechanics into their reconstruction. That is, they use pressure, temperature, and wind speed to reconstruct the various fields in a self-consistent fashion, and they use an empirical orthogonal function approach to do so. (The person who is the closest to this, as I understand it, is Nick's code. Mosh can update me on that. Nick Stokes' Moyhu index uses an OLS-based approach that also (in some sense) produces an "optimal interpolation".) I also like NCDC's approach because they can estimate the uncertainties in the measurement. Those of us in physics would crucify any of you guys trying to stand up and present graphs that don't have uncertainty bounds on them (unless the uncertainty bounds are comparable to or smaller than the thickness of the plot line). Secondly, simply making a correction, without being able to set bounds on it, doesn't guarantee that the "change for change's sake" is an improvement over the simpler technique. With GISTEMP, the evidence is that it runs "hot" over the last decade compared to HadCRUT, NCDC and ECMWF. ECMWF is a meteorologically based tool. An ASCII version isn't currently available (to my knowledge); I'm trying to see if I can get that changed. In the meantime, here's a rasterized version in case anybody wants to play with it. I also think a lot has been made about GISTEMP being better. This smells purely of confirmation bias. I think people are picking GISTEMP because it runs hot (they like the answer), not because they understand in any profound way how it works. Truthfully, if you compare the series, you really have to squint to tell them apart. My personal predilection is to use all of the data series together; it's more consistent, if your goal is to compare against a mean of models, than cherry picking one data series, especially when your defense of it is utterly flawed and only demonstrates the depths of your own ignorance on this topic: (-Snip-). (What is magical about smoothing over a 1200-km radius? Why do you need two radii to compare? Do you realize how ad hoc using two different radii basically drawn from a hat and just visually comparing the products truly is?) (-Snip-). There would have been no serious criticism of using GISTEMP if Dana hadn't mistakenly started with a year that was an outlier in estimating his trend. (This is another thing you get crucified for in physics presentations.) The better thing to do is shift the period to 2001, moving away from the outlier when comparing trends. The best is to regress your data with the MEI, as Tamino does, and remove this natural fluctuation that has not been captured by the GCM models before attempting a linear regression. Then you are free to pick any start and end point you please. But otherwise, issues with OLS sensitivity to outliers near the boundaries should trump other considerations.
    Science starts by considering the merits of a technique, and not its outcome, in determining which technique to use. ECMWF and NCDC are head and shoulders above the other two methods. (-Snip-) (-Snip-). I also find it interesting that he thinks mistakes that would get him reamed at a science conference (and I'm being *really nice* compared to how some of us behave there) constitute nitpicking. If you're going to blog in this area, you need to start with the assumptions that 1) you're not smarter than everybody else, 2) other people have valid contributions and viewpoints to make that should influence your future blog postings, and 3) if you are going to disregard criticism of your post, why bother posting? It just ends up discrediting you and the viewpoint you are advocating in favor of.
    Response:

    [DB] Multiple inflammatory statements snipped.  Please rein in the inflammatory tone and rhetoric; confining your postings to the science with less inflammatory commentary is highly recommended.

    "you need to start with the assumptions 1) you're not smarter than everybody else"

    You would do well to remember this yourself.
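For readers curious about the regress-against-MEI approach recommended in the comment above, here is a hedged Python sketch. The data are synthetic (the real procedure would use an observed temperature series and the actual Multivariate ENSO Index), and the three-month lag is an assumption chosen for illustration:

```python
# Minimal sketch of ENSO removal by multiple regression: fit temperature on
# time and a lagged ENSO index together, so the fitted time coefficient is
# the ENSO-adjusted trend. All inputs below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 132                                     # 11 years of monthly data
t = np.arange(n) / 12.0                     # years since start
mei = np.sin(2 * np.pi * t / 4.5) + rng.normal(0, 0.3, n)   # stand-in ENSO index
temp = 0.017 * t + 0.08 * np.roll(mei, 3) + rng.normal(0, 0.05, n)

lag = 3                                     # assumed lag of GMST behind ENSO
X = np.column_stack([np.ones(n - lag), t[lag:], mei[:-lag]])
beta, *_ = np.linalg.lstsq(X, temp[lag:], rcond=None)
print(f"ENSO-adjusted trend: {beta[1]:.4f} C/yr (MEI coefficient {beta[2]:.3f})")
```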

  16. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    Eric (skeptic) @ 32, Thanks for the link; I have already been there and posted on this thread, and it does not explain much. A warmer atmosphere can hold more water vapor, yes. Does that mean it will increase rain? It might, but it would not have to; that it can hold more water vapor does not mean more rain will fall. "Atmospheric blocking leads to a stagnation of weather patterns. As you are well aware, atmospheric patterns tend to repeat themselves. In the case of blocking, the same pattern repeats for several days to even weeks. This can lead to flooding, drought, above normal temperatures, below normal temperatures and other weather extremes. It is important to recognize a blocking pattern in its initial development. With this awareness, you will be able to forecast out to several days in advance with a high degree of accuracy." Source of above quote. Droughts, floods and heat waves are not random fluctuations anywhere on the earth. They are created by known weather patterns. A blocking system is responsible for many of the extreme long-term weather disasters. In order to link global warming to these extremes, it would be necessary to identify a physical mechanism by which global warming will increase the number and intensity of these blocking systems. If there is no mechanism found, then stating that extreme weather events are like loaded dice is an incorrect view, because there is nothing random about them. I am still looking for such a link but have not yet found one.
  17. Lessons from Past Climate Predictions: IPCC AR4 (update)
    As Tom identifies @92, there's always a challenge in keeping the wording of a post both correct and accessible to the general public. For example, Phil Jones' BBC interview: what he said was scientifically and statistically accurate, and what most of the public got from it was "no warming since 1995" - not at all what he actually said. This post is intended for a broad audience. The language could have been more precise from a statistical standpoint, but the more technical you get, the more of the general audience you lose. My suggestion for those who are nitpicking the language is to go do the analysis for yourself on your own blog. Then you can use whatever language you think is appropriate.
  18. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Chris @90:
    "So first the accuracy of the projections is "difficult to evaluate", but then we find it's "reasonably accurate", and then it's sufficiently accurate that one can have some confidence that it will eventually add to the evidence that climate sensitivity is around 3 oC. Aren't those interpretations somewhat incompatible?"
    No! Think of yourself as trying to predict the match between the AR4 A2 20-year trend in 2020 and the trend of observed GMST between 2000 and 2020. Based on eleven years' data, that 'accuracy' is difficult to evaluate, i.e., to predict. It is difficult to evaluate because an 11-year trend is a very short trend in climate terms, with a high noise-to-signal ratio. Changing the start or end point of the trend by just one year can make a significant difference to the trend. Hence, "it's difficult to evaluate the accuracy of its projections at this point". However, though difficult to evaluate, we are not completely without information regarding the accuracy of the projection. The information we do have suggests that it is more probable than not that the GMST trend will lie within error of the AR4 A2 multi-model projection. In this it contrasts with a host of other possible projections, including some actually made by AGW 'skeptics'. For those other projections, and based on evidence to date, it is more likely than not that the GMST trend will not lie within error of those projections (where "within error" is based on model mean variance). So, unlike those other possible projections, the AR4 A2 projections are "reasonably accurate" and give some (but not a great deal of) confidence that the actual climate sensitivity is close to that of the models. We could be precise about this, but doing so would defeat the purpose of keeping the article accessible to the general reader. What the traction the argument of inconsistency has received has convinced me of is not that it is correct (it is not), but that, as stated, the conclusion can foster confusion.
  19. Lessons from Past Climate Predictions: IPCC AR4 (update)
    NewYorkJ Yes, taking a look under the bonnet of the ensemble cast makes it clear that attempting to determine climate sensitivity (essentially what everyone is talking about) from such a short period won't work. Here's a rundown of the models, their equilibrium and transient sensitivities* and their 2000-2010 trends:

    Model            Equilibrium (°C)  Transient (°C)  2000-2010 trend
    BCCR-BCM2.0           n.a.             n.a.        -0.03 K/decade
    CGCM3.1(T63)          3.4              n.a.         0.29 K/decade
    CNRM-CM3              n.a.             1.6          0.09 K/decade
    CSIRO-MK3.0           3.1              1.4          0.42 K/decade
    GFDL-CM2.0            2.9              1.6          0.08 K/decade
    GFDL-CM2.1            3.4              1.5          0.09 K/decade
    GISS-ER               2.7              1.5          0.16 K/decade
    INM-CM3.0             2.1              1.6          0.34 K/decade
    IPSL-CM4              4.4              2.1          0.28 K/decade
    MIROC3.2(med)         4.0              2.1          0.13 K/decade
    ECHO-G                3.2              1.7          0.15 K/decade
    ECHAM5/MPI-OM         3.4              2.2          0.28 K/decade
    MRI-CGCM2.3.2         3.2              2.2          0.08 K/decade
    CCSM3                 2.7              1.5          0.29 K/decade
    PCM                   2.1              1.3          0.13 K/decade
    UKMO-HadCM3           3.3              2.0          0.13 K/decade
    UKMO-HadGEM1          4.4              1.9          0.11 K/decade

    There really isn't a discernible pattern at this stage linking sensitivity to temperature trend in the model outputs. I actually think the ensemble mean probably is too "warm", in the short term anyway, mainly because a few of the model cast do not include indirect aerosol effects. If you've ever seen a chart showing recent radiative forcing factors, such as this one in the AR4 SPM, you'll know why this is a major omission. I've so far identified two models - INM-CM3.0 and CCSM3 - which don't include indirect aerosol effects, and it's no surprise that they both exhibit two of the largest trends over 2000-2010. If just those two are removed from the cast, the ensemble mean drops from 0.18 K/dec to 0.16 K/dec. Over the longer term it becomes less important: there is no difference in the 2000-2099 trend between the original 17-member cast and my cut-down 15-member version. Another interesting finding along this train of thought is that the median average is only 0.13 K/dec and 11 of the 17 members feature a lower trend than the ensemble mean. * http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2-3.html
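The aggregates quoted in that comment can be checked directly from the table. A short Python sketch, with the trend values transcribed from the table above (nothing new is assumed):

```python
# Check the quoted aggregates: 17-member mean ~0.18 K/dec, median 0.13 K/dec,
# ~0.16 K/dec after dropping the two models without indirect aerosol effects,
# and 11 of 17 members below the ensemble mean.
import statistics

trends = {                                   # 2000-2010 trends, K/decade
    "BCCR-BCM2.0": -0.03, "CGCM3.1(T63)": 0.29, "CNRM-CM3": 0.09,
    "CSIRO-MK3.0": 0.42, "GFDL-CM2.0": 0.08, "GFDL-CM2.1": 0.09,
    "GISS-ER": 0.16, "INM-CM3.0": 0.34, "IPSL-CM4": 0.28,
    "MIROC3.2(med)": 0.13, "ECHO-G": 0.15, "ECHAM5/MPI-OM": 0.28,
    "MRI-CGCM2.3.2": 0.08, "CCSM3": 0.29, "PCM": 0.13,
    "UKMO-HadCM3": 0.13, "UKMO-HadGEM1": 0.11,
}

mean = statistics.mean(trends.values())
print(f"ensemble mean:   {mean:.2f} K/dec")
print(f"ensemble median: {statistics.median(trends.values()):.2f} K/dec")

cut = {k: v for k, v in trends.items() if k not in ("INM-CM3.0", "CCSM3")}
print(f"15-member mean:  {statistics.mean(cut.values()):.2f} K/dec")

print(f"{sum(v < mean for v in trends.values())} of 17 members below the mean")
```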
  20. Galactic cosmic rays: Backing the wrong horse
    tblakeslee#16: "CERN experiment was much more abstract" Your first link is to the familiar Laken 2010, discussed here. This 'real world experiment' compared cloud fractions computed via general circulation models. And no, CERN is not duplicating/confirming Svensmark.
  21. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    muoncounter Here is an article that works out how much of the Texas drought of 2011 was a result of global warming. "I think we’re pretty close now. This record-setting summer was 5.4 F above average. The lack of precipitation accounts for 4.0 F, greenhouse gases global warming [edited 9/11/11] accounts for another 0.9 F, and the AMO accounts for another 0.3 F. Note that there’s uncertainty with all those numbers, and I have only made the crudest attempts at quantifying the uncertainty. But this will do until something better comes along. (Also note that the AMO index is not, strictly speaking, independent of the global mean temperature, but the AMO trend since 1900 is weak so any double-counting here is very small. [edits 9/11/11])" Link to the article containing the above quote.
  22. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    #30, Norman: there are threads about extreme event causation from time to time: e.g. /extreme-weather-global-warming.htm Some other promising threads devolved into debates about insurance company record keeping.
  23. Galactic cosmic rays: Backing the wrong horse
    15, Paul D, I did not read it as you did, but I can see that it could cause confusion. Perhaps: "Despite an excellent rebuttal here on SkS, supported by Jasper Kirkby's own words, the popular press is still pushing the preliminary CERN CLOUD results..."
    Moderator Response: [muoncounter] Fixed opening paragraph, thanks to Paul D and Sph.
  24. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Tom, I find it interesting that Jonathan and I seem to be the only two people who consider that the last section of the top article is problematic. Incidentally, I think all the other stuff flying around about this analysis is unproductive nit-picking. I agree that the term "accuracy" can have gradations of meaning. The problem with the use of "accuracy" in the last section of the top article is that its use in describing the AR4 projections changes throughout the section. So first the accuracy of the projections is "difficult to evaluate", but then we find it's "reasonably accurate", and then it's sufficiently accurate that one can have some confidence that it will eventually add to the evidence that climate sensitivity is around 3 °C. Aren't those interpretations somewhat incompatible? On their own the AR4 projections have little to say about climate sensitivity, and I don't think we can presuppose that they will eventually support a climate sensitivity near 3 °C. On the other hand, comparison of empirical surface temperature progression with simulations from the late 1980s does support our confidence, arising from a wealth of other data, that the likely value of the climate sensitivity is near 3 °C. I don't think this is "creative misunderstanding". It's simply how my mind interprets the text! I have quite a lot of confidence that the best value for the climate sensitivity is around 3 °C, but I don't think the AR4 projections (and their comparisons with surface temperature progression) give us much insight into climate sensitivity, nor that we can presuppose that they will in the future support a value near 3 °C.
  25. Pielke Sr. Agrees with SkS on Reducing Carbon Emissions
    No, we didn't comment on the specific plan because I don't think anyone at SkS has read it, Eric. We'd be interested in hearing your comments once you have a look.
  26. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    #29, Norman. Think of it as a notional graph- a graph of temperature "events" in multiple locations compared to the baseline for those locations. The daily average, low and high could all be considered events. The PDF of events would be normally distributed as shown given a sufficient number of locations. With "climate change" the PDF would probably shift right and flatten out as depicted by muoncounter above (note that the PDF is still in comparison to the pre-CC baseline). I disagree a bit though, I think the right tail will be truncated somewhat by physical limits imposed by the initiation of convection.
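The "shift right and flatten" picture in the comment above can be put into numbers. A minimal Python sketch (the shift and spread parameters are invented for illustration, not fitted to any dataset) showing how a modest warm shift plus a wider spread multiplies the odds of exceeding a fixed extreme threshold:

```python
# Tail probability of a normal distribution of temperature "events" before
# and after a hypothetical climate shift: mean moves right, spread widens.
from math import erf, sqrt

def exceed(threshold, mu, sigma):
    """P(X > threshold) for X ~ Normal(mu, sigma^2)."""
    return 0.5 * (1.0 - erf((threshold - mu) / (sigma * sqrt(2.0))))

threshold = 2.5          # an "extreme" event, in baseline standard deviations
base = exceed(threshold, mu=0.0, sigma=1.0)
shifted = exceed(threshold, mu=0.5, sigma=1.2)   # assumed shift and flattening

print(f"baseline: {base:.4f}  shifted: {shifted:.4f}  ratio: {shifted/base:.1f}x")
```

With these (invented) parameters, the exceedance probability rises roughly eight-fold, which is the sense in which small shifts of the distribution matter most in the tails; a physical cap on the right tail, as suggested above, would truncate that effect.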
  27. Ocean Heat Content And The Importance Of The Deep Ocean
    tblakeslee, need to be careful in taking Shaviv's analysis at face value, due, amongst other things, to his selection of data sets. For example much of his analysis is based on a set of tide gauge records of Douglas (1997), which shows a marked cyclic variation of local sea level that matches the solar cycle. However, this doesn't match the globally averaged sea level variation, especially the satellite-derived record which doesn’t show a marked variation with the solar cycle; e.g. see this paper and Figure 3 - can't find a downloadable version right now: Church JA, White NJ, Aarup T, et al. (2008) Understanding global sea levels: past, present and future. Sustainability Sci. 3, 9-22. It’s proposed that the tide gauge measures, many of which are close to continental margins, have solar forcings magnified by more rapid warming/cooling in shallow waters, and that this amplifies the amplitudes of responses to forcings by a factor of 2-3 relative to the globally averaged response. So Shaviv’s use of this data to determine a radiative forcing from sea level response may well be erroneous (greatly overestimated) by that sort of factor. Whatever the origin of the discrepancy between tide gauge measures and satellite measures with respect to amplitudes of response to solar cycles, I suspect that Shaviv’s analysis will be found to be a rather marked overestimation of the solar cycle response and his required “amplification”. There are some other problems with the paper that we could discuss. Notice that Shaviv himself points out problems with his analysis; e.g.: “Note that the relatively low correlation coefficient between the OHC and solar signals may seem somewhat suspicious” (page 10). This is a serious problem (i.e. that the OHC variation doesn't really correlate with the solar cycle). Note also that Shaviv neglects to account for the effect of volcanic eruptions, which is important for assessing the solar cycle effect on OHC, since for two of the 5 cycles (or 6; it’s not clear from Shaviv’s paper) analyzed, the volcanic forcing happens to be in phase with the solar cycle. This will produce a spurious “amplification” of any apparent solar effects that is not, in fact, related to solar effects. This has been pointed out by Lean and Rind in their recent analysis of attributions to 20th century warming (see section 4 on page 4 of their paper).
  28. Galactic cosmic rays: Backing the wrong horse
    There are many conditions that must come together to produce cloud formation; cosmic rays can't create a cloud unless temperature and moisture conditions are right. If the air isn't supersaturated, you are right that cosmic rays are not sufficient for the formation of clouds, and this is why some attempts to connect Forbush events with cloud formation have failed. Here is an experiment that found a robust (R = -0.93) connection by working backwards: starting with abrupt cloud changes and then looking at the cosmic ray changes: http://www.atmos-chem-phys.org/10/10941/2010/acp-10-10941-2010.pdf The CERN experiment was much more abstract, but this is a real-world experiment using nature itself. The CERN experiment is really a confirmation and refinement of results already obtained in 2005 by Svensmark in his SKY experiment: http://www.space.dtu.dk/English/Research/Research_divisions/Sun_Climate/Experiments_SC/SKY.aspx lowclouds%20and%20gcr).pdf
  29. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    Sphaerica @24 In post #7 you state "We can certainly say the climate is changing. The number of extreme events and the extremity of those events has certainly increased." In post #24 you state "A proper verbal interpretation of the graph would say 'the overall trend for the period prior to the impact of anthropogenic climate change is downward, but there is not yet enough data to determine if the climate change tail will be definitively upward - i.e. yet another hockey stick.'" My position is that there is not enough data to make a declaration of certainty on this topic. I think earlier data on severe weather events was not as fully reported as today. I do not think there is adequate accounting of severe weather events to take a strong position that the number and intensity of severe weather events has certainly increased. I am not stating it has not. I am making the case that there is not enough good, reliable data to make any claim of certainty on this issue at this time, and we may not know for many more years. Your contention is that if we wait to see if it is getting worse, it might just be too late. I just wish on these threads that more mechanisms would be developed to demonstrate how global warming will create more severe weather. If someone could demonstrate how global warming will create more blocking patterns, or hurricanes, or tornadoes or floods, I would maybe share the certainty you have that things are getting worse weather-wise.
  30. Lessons from Past Climate Predictions: IPCC AR4 (update)
    There has been some discussion of an apparent contradiction in Dana's summary. I say "apparent" because there is no actual contradiction in Dana's conclusion. Accuracy is not bivalent like truth. Something is either true or it is not - but things can be more or less accurate. Indeed, Dana clearly states that the AR4 results meet one (vague) standard of accuracy - they are "reasonab[ly] accura[te]" - but it is impossible to tell as yet whether they meet another, more stringent standard of accuracy. Because different levels of accuracy are being considered, there is no contradiction. To illustrate the point, we can compare the AR4 projections to predictions analyzed earlier in this series, in this case the one by Don Easterbrook in 2008. The image was formed by overlaying Zeke's version of Dana's fig 3 graph above with figure 3 from Dana's discussion of Easterbrook's prediction (link above). The heavy red line is a running mean of Gistemp; the heavy blue and green lines are two of Easterbrook's three projections (the third declines even faster). Even the best of Easterbrook's projections (heavy green line) performs poorly. From the start it rapidly declines away from the observed temperature series. Briefly in 2008 (the year of the prediction) it is closer to the observations than is the A2 multi-model mean, but then falls away further as temperatures rise, so that in the end it is further away from the observations than the A2 projections ever are. Given that 2008 was a strong La Niña year and in the middle of a very deep solar minimum, we would expect it to be below, not above, the projected trend. But regardless of that subtlety, Easterbrook's projection performs far worse than the AR4 A2 projection. It is not reasonably accurate, although it may not yet be falsified. However, despite the fact that the conclusion contains no contradiction, I would suggest rewording it, or appending a clarifying update to the bottom of the post. As it stands it is an invitation to misunderstanding by those (apparently including Lucia) who think "accuracy" is an all-or-nothing property. It is also an invitation to the creative misunderstanding some deniers attempt to foster.
  31. Pielke Sr. Agrees with SkS on Reducing Carbon Emissions
    He closes with "In terms of how to do this with respect to carbon emissions, I completely agree with my son’s perspective as he presents in The Climate Fix" Has anyone read this? I just ordered a copy ($10.40 from the Amazon bargain bin).
    Moderator Response: Clarification: The statement quoted was made by Dr. Roger Pielke Sr. The book was written by his son, Dr. Roger Pielke Jr.
  32. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    I do have a question about the graph in the article titled "Future Climate Shift". I do not know if it rests on a correct assumption (logic can be perfect, but if the assumption is not correct, perfect logic will not lead to the correct answer). The author of the graph is making the assumption that heat waves are a random fluctuation of weather patterns (in order to get a bell curve you need random sampling). The problem with this is that heat waves, monsoon rains and other weather phenomena are not random noise in the variables of heat, humidity, etc. They are not like waves in the ocean. They are organized patterns that persist over time. The current Texas heat wave and drought is caused by a high-pressure ridge aloft (similar to what happened in Russia last year). A warmer world does not necessarily lead to more extreme heat wave events. The only way global warming would create more heat waves (as implied by the graph) would be if the increase in heat content of the atmosphere were to cause more blocking highs. I have not seen this demonstrated yet on this thread or the others where this topic has been brought up. If one could conclusively prove that a warmer earth would develop more blocking highs (which cause heat waves and droughts), then I would consider this a valid argument. Showing a bell curve and forming this conclusion is not based upon the mechanisms that are responsible for heat waves and droughts. They are not random events that respond to bell curve descriptions.
  33. Ocean Heat Content And The Importance Of The Deep Ocean
    Paul Magnus, there's a couple of nice maps of thermohaline circulation on wiki. If you read the item they also refer to the meridional overturning circulation.
  34. Review of Rough Winds: Extreme Weather and Climate Change by James Powell
    muoncounter @21 "That is starting to scare people, including the same John NG, who doesn't see it ending any time soon:" I would suggest John NG look at the annual temperature graphs of Texas that were provided by Jeffrey Lindner in the link I posted at #15. If you look at the annual temp graph then people must have been really scared from 1920 to 1940. Your graph is only of summer temps, the overall annual temps in 2007 and 2010 were below the normal temp line.
  35. Ocean Heat Content And The Importance Of The Deep Ocean
    Here is an excellent article on using the Oceans as a Calorimeter to Quantify the Solar Radiative Forcing: http://www.sciencebits.com/files/articles/CalorimeterFinal.pdf Using the 11-year solar sunspot cycles, he finds that the total radiative forcing associated with solar cycle variations is about 5 to 7 times larger than would result from total solar irradiance variations alone. This is apparently due to cloud formation changes resulting from the solar cycle.
  36. Galactic cosmic rays: Backing the wrong horse
    I know what Kirkby said, I'm not sure the sentence above is clear enough though, or maybe I am reading it incorrectly: "Despite an excellent rebuttal here on SkS featuring Jasper Kirkby's own words to the contrary (PAUSE) the popular press is still pushing the preliminary CERN CLOUD results..." One could read the 'contrary' being applied to the first part of the sentence rather than the second part. I think you need another comma after 'own words'. The 'to the contrary' is a break in the flow of the sentence. Alternatives are to use dashes or parentheses depending on grammar style. eg. "Despite an excellent rebuttal here on SkS featuring Jasper Kirkby's own words, to the contrary, the popular press is still pushing the preliminary CERN CLOUD results..." or: "Despite an excellent rebuttal here on SkS featuring Jasper Kirkby's own words - to the contrary - the popular press is still pushing the preliminary CERN CLOUD results..."
  37. Ocean Heat Content And The Importance Of The Deep Ocean
    Perhaps ocean heat exchange is on a cycle of some sort. Is it possible to filter this out in data such as ocean sediments? The exchange cycle will probably be affected by GW if we start seeing, say, more and stronger El Niño/La Niña cycles. I think things may be more chaotic. The melting of the Arctic sea ice will surely affect the ocean heat exchange dynamics due to a few things like currents and local climate. Wind speed over water in the southern hemisphere is getting significantly higher and may also have a big impact on heat exchange. What exactly is the mechanism by which heat is transferred from the atmosphere to the ocean surface waters - re-radiation, convection or conduction? And what effect do storms and wind speed have on this?
  38. Dikran Marsupial at 05:39 AM on 25 September 2011
    Lessons from Past Climate Predictions: IPCC AR4 (update)
    I should just add that, even with Lucia's variance-pooling method (which I agree with in principle, even if there may be difficulties in the details), that estimate of internal climate variability is only valid if you accept that GCMs are basically a valid way to model the climate: the estimate is based on the behaviour of the models rather than the actual climate, so if you don't think the models are representative of the climate, then the variability of the model runs can't logically be representative of the variability of the climate either. So those skeptics that don't accept models as being valid need some other way to estimate internal climate variability in order to determine what would be a "reasonably accurate" projection. Best of luck with that!
  39. Dikran Marsupial at 05:20 AM on 25 September 2011
    Lessons from Past Climate Predictions: IPCC AR4 (update)
    lucia I was talking of an ensemble of perfect models, in which case the spread of (an infinite number of) model runs is exactly a characterisation of the plausible variation due to internal climate variability. Whenever discussing tests or model-data comparison it is always a useful boundary case to consider what you can expect from a perfect model. Of course, if you have imperfect models (as all models are in practice), then the spread of the ensemble will also include a component reflecting the uncertainty in the model itself. However, the overall spread of the model runs in a multi-model ensemble is still a characterisation of how close we should expect the observations to lie to the multi-model mean, given all known uncertainties. Thus if the observations lie within the spread of the models, then the ensemble is "reasonably accurate", as it would be unreasonable to expect any more than that. Having a heterogeneous ensemble does make things a bit more awkward; I think I am broadly in agreement with Lucia about estimating the effect of climate variability by averaging the variances from the runs for each model type. I am also in agreement about what the observations lying within the spread means: it is essentially a test of the consistency of the models. No big compliment if they are consistent, quite a severe criticism if they are not. Having said which, as G.E.P. Box said, "all models are wrong, but some are useful". It is not unreasonable for the model to fail tests of consistency with respect to one metric, but still be a useful predictor of another. I should add that there may be subtleties in pooling the variances due to the fact we are talking about time series, which is more Tamino's field than mine (I'm also a Bayesian, and so I don't really agree with hypothesis testing or confidence intervals anyway ;o)
  40. Lessons from Past Climate Predictions: IPCC AR4 (update)
    John Hartz at 00:29 AM on 25 September, 2011 Has Lucia left the building?
    I don't know how our time stamps line up, but there has been plenty of discussion at my blog. I tend to comment lightly on Friday nights, most of Saturday and Sunday. Moreover, I've tried to foster the habit of not responding to unimportant things if I miss the page turn on blog comments. I do, however, see something on this page worth commenting on. I disagree with this:
    To know whether the ensemble mean is "reasonably accurate" you need an estimate of the plausible effects of climate variability on the observations. Currently the spread of the models is the best estimate of this available.
    I agree that you need an estimate of the plausible effects of climate variability on the observations. However, I disagree with the notion that the spread of all runs in all models in an ensemble is the best estimate of climate variability. I don't even think it's the best model-based estimate of the contribution of climate variability to the spread in trends. Or, maybe it's better to say that, based on my guess of what DM probably means by "the spread of models", I think the information from the spread in model runs is often used in a way that can tend to over-estimate the contribution of natural variability to trends.

    First, to engage this, I admit I'm guessing what DM means by "the spread of models". I suspect he means that to estimate climate variability we just find all the trends in a multi-model ensemble, create a histogram, and that spread tells us the contribution of climate variability to the spread of the trends. (If this is not what he means, it may turn out we agree on how to get a model-based estimate.) I don't consider this sort of histogram of all runs in all models to produce the best estimate of the spread in variability due to actual, honest-to-goodness climate variability. The reason is that in addition to the spread due to natural variability in each model, this distribution includes the spread due to the mean response of each model. That is: if each model predicts a different trend on average, this broadens the spread of run results beyond what one expects from "weather". So, for example, in the graph below, the color of the trace is specific to a particular model. You can see that run trends tend to cluster around the mean trend for individual models. (Note, a long time period was selected to illustrate the point I am attempting to make; this choice of times results in particularly tight clustering of trends about the mean for individual models.)

    In my opinion, if you wanted a model-based estimate of the contribution of climate variability to the spread in trends on earth for any time period, it would be unwise to simply take all those trends, make a histogram, and suggest that the full spread was due to something like 'weather' or "variability in initial conditions" or, possibly, "climate variability". At least part of the spread in trends in the graph above is due to the difference in mean trends in different models. That's why we can see clustering of similarly colored lines for individual runs from the matched models. If someone wanted to do a model-based test of the likely spread, I would suggest that examining the variance of runs in each model gives an estimate of the variance due to 'natural variability' based on that model. (So, in the 'blue' model above, you could get an estimate of natural variability by taking the spread over the trends corresponding to the individual 'blue' traces.) We have multiple models (say N=22). If we have some confidence in each model, then the average of the variance over the N models gives an ensemble estimate of the variability in trends based on the ensemble. Using a distribution with a standard deviation equal to the square root of this average variance is likely a better estimate of the spread of natural variability. (Note, by the way, that you want to do this with variances for a number of reasons. But lest someone suspect I'm advocating averaging the variances instead of standard deviations because it results in a smaller estimate of variability, that is not so. If model A gets a variance of 0 and model B gets a variance of 4, averaging variances results in an average of (0+4)/2 = 2; the standard deviation is 1.4. In contrast, if we average the standard deviations we get (0+2)/2 = 1. There are other reasons to average variances instead of standard deviations, though.)

    The method I describe for getting a model-based estimate of the spread results in a somewhat smaller spread than one based on the spread of runs taken from an ensemble of models whose means differ. Needless to say, since it gives tighter uncertainty intervals on trends, we will detect inaccurate models more often than using the larger spread. But I think this method is more justifiable, because the difference in the mean trends is not a measure of natural variability. Having said that: I think checking whether the trend falls inside the spread of runs tells us one thing: does the trend fall inside the spread of runs in the ensemble? That's worth knowing, but I don't happen to think the spread of runs in the ensemble is the best model-based estimate of the spread of trends that we expect based on natural variability. I also don't think the full spread of all runs (including contributions from differences in the means) should be represented as an estimate of uncertainty in natural variability around the climate trend. Of course, I'm not sure that by "the spread of the models" DM means the spread of all runs in all models. It may turn out he means exactly what I meant, in which case we agree.
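The distinction Lucia draws can be illustrated in a few lines. A minimal Python sketch (the per-run trends are invented; any grouped data would do) contrasting the pooled all-runs spread with the square root of the average within-model variance:

```python
# Contrast two spread estimates: (a) standard deviation of all runs pooled,
# which absorbs between-model differences in mean trend, and (b) the square
# root of the average within-model variance, which targets 'weather' alone.
import numpy as np

runs = {                                  # hypothetical run trends, K/decade
    "modelA": [0.10, 0.12, 0.11, 0.09],
    "modelB": [0.22, 0.25, 0.23, 0.26],
    "modelC": [0.16, 0.14, 0.18, 0.17],
}

all_runs = np.concatenate([np.array(r) for r in runs.values()])
pooled_sd = all_runs.std(ddof=1)

within_sd = np.sqrt(np.mean([np.var(r, ddof=1) for r in runs.values()]))

print(f"all-runs spread:       {pooled_sd:.4f} K/dec")
print(f"within-model estimate: {within_sd:.4f} K/dec")
# The pooled figure is inflated by the models' different mean responses.
```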
  41. Lessons from Past Climate Predictions: IPCC AR4 (update)
    DM#82: Don't feel bad; an interesting list of arguments from over-extending an analogy appears here. I like this one: "The solar system reminds me of an atom, with planets orbiting the sun like electrons orbiting the nucleus. We know that electrons can jump from orbit to orbit; so we must look to ancient records for sightings of planets jumping from orbit to orbit also." Alert Doug Cotton!
  42. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Jonathon#80: "the statement is completely devoid of meaning." To be clear, we are discussing the statement you made: "Without sufficient data, any model, hypothesis, or prediction can be 'reasonably accurate'." Side question: if a statement has no meaning, why make it? In this case, the only reason that there is 'insufficient data' is because of an artificial restriction to a short time period. A more important question to ask might be: is there any reason, in data obtained between the 3rd and 4th assessment reports, to invalidate the model that increasing GHGs are influencing climate change? How would you answer that question? Here's one possible answer: none of the criticisms leveled at Dana's graphs (nor Charlie's, for that matter) suggest that there is any such reason. All else is nit-picking - and while I understand there is a place for that activity, it does not alter the basic conclusion. "that is not an excuse to do nothing, as you claim." On the contrary, there are many who make exactly that claim under the guise of 'the science is not settled.' Guvna Perry is just one high-profile example. Note to DB: a colleague of mine has a 100-sided die; makes grading very easy.
    Moderator Response:

    [Dikran Marsupial] Well I use a Mersenne twister for that, it fills in the marksheet as well! ;oP

    [DB] I found the D100 liked to roll off the table, thus destroying the class curve and my "critical hit" chances at the same time...

  43. Dikran Marsupial at 04:23 AM on 25 September 2011
    Lessons from Past Climate Predictions: IPCC AR4 (update)
    Jonathan Just to verify something. Are you trying to say that the observations should lie within some number of standard errors of the mean (SEM) from the ensemble mean, and that as the size of the ensemble grows the SEM will decrease?
  44. Dikran Marsupial at 04:21 AM on 25 September 2011
    Lessons from Past Climate Predictions: IPCC AR4 (update)
    Jonathon Sadly, we can only roll the die once, as we only have one planet Earth. It seems you have not grasped the analogy. The single roll of the die represents the value of the trend we observe on Earth; note I haven't specified the period over which this trend is calculated, because it is irrelevant. If you compute the trend over a longer period then the uncertainty of the observed trend will decrease, but the spread of the distribution of modelled trends will decrease along with it. Why is it that whenever I offer an analogy, the person I am trying to explain something to always proceeds to over-extend the analogy in a way that doesn't relate to reality?
  45. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Moderator DB, inline comment #70: "One thinks that the skeptical thing to do would be to first understand the other approach (which you say you do) and either agree that there is no meaningful difference in results (which you do) or show why the other approach is invalid (which you don't)." I agree there is no difference in trends. I did not say, and I do not agree, that there is no meaningful difference in results. If I wished to compare the mean projected temperature for 2000-2010 with observations, my graph and calculations would give a proper comparison. The Figure 3 graph by Dana1981 would give an erroneous result. Do you (and Dana) understand that statement? Do you agree or disagree? If I were to rebaseline the 2000-2010 model mean time series to match the GISS 2000-2010 mean, then the projected temperature for 2000-2010 would perfectly match the observed data, no matter what the projection was originally. Dana uses only a portion of the observed data (up through 2005, if I understand correctly) to adjust the mean of the projection, but philosophically it has the same problem as matching over the entire period for which we are comparing projections vs observations. If DB and Dana1981 don't see any problems using the hindsight of observations performed after the start date of the projections to make post hoc adjustments of the projections, then there is nothing further I can say.
  46. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Muon, If it only takes one data point, then that qualifies as sufficient. Yes, the statement is completely devoid of meaning. Based on the observations, "reasonably inaccurate" would also qualify. However, that is not an excuse to do nothing, as you claim. Dikran, if you continue to roll the dice enough, the uncertainty will decrease until you achieve a mean of 3.5 with an uncertainty such that 5 will fall outside your error. With enough data, we can get a temperature trend that will determine whether the model falls within or outside of the error bars.
  47. Lessons from Past Climate Predictions: IPCC AR4 (update)
    NYJ - yeah, there have been other cooling effects this decade too. I'll update the post to clarify that later.
  48. Ocean Heat Content And The Importance Of The Deep Ocean
    Suggested reading “Hottest Decade on Record Would Have Been Even Hotter But for Deep Oceans — Accelerated Warming May Be On Its Way” by Joe Romm, Climate Progress, Sep 23, 2011. To access this informative article, click here
  49. Lessons from Past Climate Predictions: IPCC AR4 (update)
    Nitpicking a bit: Dana: "This data falls well within the model uncertainty range (shown in Figure 2, but not Figure 3), but the observed trend over the past decade is a bit lower than projected. This is likely mainly due to the increase in human aerosol emissions, which was not expected in the IPCC SRES" Couldn't the extended solar minimum, leveling off of methane concentration, and/or potentially a negative trend in ENSO (not sure if this applies with the starting date) have had some effect? "A bit lower than projected" and the text that follows implies that there must be a likely explanation for it identified. In fact, the trend is extraordinarily close to the mean model projection, perhaps within measurement margin of error. Starting later reduces the trend quite a bit, but from the RC post we can see that there are many individual model runs over 8-year periods, and in a small percentage of cases, 20-year periods, that run flat or negative. So we're back to short time periods don't tell us much. There's also an impression perpetuated among denial realms that observations are expected to match closely with the mean model projection over a 10-year period or less, which is bogus.
  50. Dikran Marsupial at 02:27 AM on 25 September 2011
    Lessons from Past Climate Predictions: IPCC AR4 (update)
    Jonathon Perhaps an example from a more basic domain will help. Say we roll a six-sided unbiased die, and we get a value of five. This is our observation. To make our model ensemble, we get 100 six-sided dice, roll them once each, and get a mean value of 3.5 with a standard deviation of 1.7. So is our ensemble mean of 3.5 a "reasonably accurate" estimate of the observed value of 5? I'd say yes, because the observation is a random variable that is only predictable within the limits of its internal variability. In this case, we can accurately estimate this variability by the variability in the ensemble runs (because in this case our models are exactly correct). Model uncertainty is another matter. Say we didn't know what kind of dice we had (cf. uncertainty regarding climate physics). In this case, we might make an ensemble of D6s, D4s, D8s and D20s etc. (ask your local D&D player). In this case we will have an even larger standard deviation, because of the model uncertainty in addition to the inherent uncertainty of the problem. In climate modelling, that is why we have multi-model ensembles.
    Moderator Response: [DB] Not to mention the D10s, D12s and D32s some of us used. :)
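As a coda to the die-rolling analogy, a minimal Python sketch (ensemble size and random seed are arbitrary) confirming the numbers used above: a fair D6 has mean 3.5 and standard deviation sqrt(35/12) ≈ 1.71, so an observed roll of 5 sits within one ensemble spread of the ensemble mean:

```python
# Simulate the 100-die ensemble from the comment above and compare the
# single observed roll of 5 against the ensemble mean and spread.
import numpy as np

rng = np.random.default_rng(42)
ensemble = rng.integers(1, 7, size=100)    # 100 six-sided dice, one roll each

mean, sd = ensemble.mean(), ensemble.std(ddof=1)
observation = 5
print(f"ensemble mean {mean:.2f}, spread {sd:.2f}")
print(f"observation {observation} is {(observation - mean) / sd:+.2f} spreads away")
```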
