
Comments 54801 to 54850:

  1. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    (-Snip-)
    Moderator Response:

    [DB] Please note that posting comments here at SkS is a privilege, not a right. This privilege can and will be rescinded if the posting individual continues to treat adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.

    Moderating this site is a tiresome chore, particularly when commentators repeatedly submit offensive or off-topic posts. We really appreciate people's cooperation in abiding by the Comments Policy, which is largely responsible for the quality of this site.

    Finally, please understand that moderation policies are not open for discussion. If you find yourself incapable of abiding by this common set of rules that everyone else observes, then a change of venue is in the offing.

    Please take the time to review the policy and ensure future comments are in full compliance with it as no further warnings will be issued.

  2. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Tom Curtis writes:
    I was talking to my wife about the Lewandowsky paper yesterday. She noted two points in particular. First, the absence of a neutral (I don't know/I know nothing about it) option in the questions was a serious methodological flaw. This is particularly the case for the conspiracy theory questions, in which at least one of the conspiracy theories is obscure (IMO), and not inherently implausible:
    I can partially answer this question with the cited literature: Belief in Conspiracy Theories, Ted Goertzel, 1994:
    The respondents were then asked their opinions about nine other conspiracies which had been in the news lately. A four point scale was used, ranging from "definitely true" and "probably true" to "probably false" and "definitely false." "Don't know" was not offered as an alternative, but was recorded when the respondents volunteered it. This question wording encouraged respondents to give their best guess as to the truth of a conspiracy, while relying on the distinction between "probably" and "definitely" to distinguish between hunches and strong beliefs.
    Now let me be clear that it does not fully answer the question of how a respondent would be able to show that this conspiracy was an "unknown". Perhaps the answer lies in the ability of an online survey vs a phone survey (as was the case in Goertzel 94). But it's a good question for Stephan L to answer nonetheless.
  3. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    @metzomagic #59: Such already exists in Ostrichville, which, by the way, is a gated community.
  4. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    I can see Christopher Booker's Telegraph column right now. How do we head that off?
    I had an image in my mind of a representation of Santa's place, set up at the North Pole, with 360° webcam monitoring. Imagine the restive response of the planet's kids as they watch the digs of their favourite fantasy character disappear. Imagine how the adults of the world might explain to their children why they're allowing Santa's shed to not-so-slowly sink into the sea.
  5. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    There is no such thing as a 'perfect' survey. Methodology always plays some part in determining the results. Self-selecting internet surveys and small sample sizes layer several additional concerns on top of those standard problems. However, Lewandowsky seems to have acknowledged these issues in his results, stated that the results were limited to a specific sub-group of skeptics, discussed the uncertainties, et cetera. It is not a perfect survey because it cannot be. No such thing exists. That said, it seems adequate to its task. I'm surprised by the hubbub. When I first heard of these results my reaction was along the lines of, 'Yes... and the Earth revolves around the Sun and the sky is blue.' The findings of this survey fall into the category of 'blindingly obvious'. Of course there is a correlation between internet GW 'skepticism' and free-market ideology / belief in conspiracy theories. Half the stuff we see from these people is about how the evil scientist cabal is faking data and any sort of CO2 regulation would destroy the economy and usher in world communism. Go to any of the blogs complaining about the survey and I guarantee you will find plenty of examples proving it redundant.
  6. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    You can already see it coming: "Surveygate" :-)
  7. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    Hmph! Is there an 'unthwarted' version of the NSIDC 'Observations and model runs' graph somewhere? It's not that I question the data per se, but having it presented thus in 3D gives space for 'interpretation'. Is the red line in front of the blue field, or is it lower because of the 'lifted' point of view, etc.? Let's leave the graph-mangling to Monckton and his ilk.
  8. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    When, sometime in the not-too-distant future, the Arctic becomes -- to all intents and purposes -- 'ice-free', it will be important that it's worded correctly in newspapers, articles and blogs. I'm sure we can all imagine the rush amongst those in denial (and Daily Mail reporters) to find a photo of any ice still remaining, or reforming, anywhere within the Arctic circle -- at any time of year -- to prove, "it's all a hoax". I can see Christopher Booker's Telegraph column right now. How do we head that off?
  9. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    #55 Michael Sweet No, I don’t think it was a conspiracy, simply a poorly conceived survey, with much to criticise in the questionnaire, the methodology, the analysis and the conclusions. Following the statement from the moderator that I should contact John Cook directly, I received a cordial email from John Cook yesterday offering to answer my questions. I wrote back briefly, asking simply when he posted Lewandowsky’s request, when he deleted it, if there had been any comments, and whether they still existed or had been deleted too. That was just over 24 hours ago. I’ll post his response here when I receive it.
    Moderator Response:

    [DB] "I’ll post his response here when I receive it."

    Not unless it is explicitly made clear that the contents of the email are for public dissemination.

  10. Miriam O'Brien (Sou) at 20:08 PM on 4 September 2012
    AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Agree with BernardJ and Michael. I do a lot of internet surveys. Interim results can be virtually instant if you've set up your analytical software in advance. Had McIntyre posted a link when he got the first or second request and his visitors had responded, then given the numbers of responses recorded, the main difference would more than likely have been that N would have been higher.
  11. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    I'm with michael sweet on this point. Any competent researcher will have established a priori his statistical analysis methodology - in fact that's a fundamental assumption of any experimental protocol. His/her spreadsheets would have been constructed, populated with dummy data, run, examined, and refined until all s/he need do is to drop in the real data as it comes, with the results returned almost immediately after the last entry. All the more so if s/he's an old hand at the process. With progressive data entry, there should be no surprises by the end: only the establishing of the final few decimal places. geoffchambers is looking for reds under the bed.
  12. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Geoff, So Lewandowsky processed his data as he received it. When McIntyre turned him down he announced what he had collected. What do you think would be better, for Lewandowsky to sit on his data forever waiting for a response from the skeptics? Your entire premise is that there was a conspiracy to prove that skeptics believe in conspiracies. Your position proves that you believe in conspiracies. Your posts are in violation of the comments policy. You are wasting our time. Go cry somewhere else.
  13. Models are unreliable
    opd68,
    Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?
    The forcing resulting from increasing CO2 levels is very accurately known from both physics and direct measurement. By itself it accounts for about 1.2 C per doubling. The forcing from water vapour in response to warming is also quite well known from both physics and direct measurement and, together with the CO2, amounts to about 2 C per doubling. Other feedbacks are less well known, but apart from clouds, almost all seem to be worryingly positive. As for clouds, they are basically unknown, but I think a very strong case can be made that the reason they are unknown is precisely because they're neither strongly positive nor negative. As such, any attempt to claim that they are strongly negative and will therefore counteract all the positive feedbacks seems like wishful thinking that's not supported by evidence. If anything, the most recent evidence seems to suggest slightly positive.

    One way to avoid all these complications is to simply use the paleoclimate record. That already includes all feedbacks because you're looking at the end result, not trying to work it out by adding together all the little pieces. Because the changes were so large, the uncertainty in the forcings is swamped by the signal. Because the timescales are long, there is enough time for equilibrium to be reached. The most compelling piece of evidence, for me, is the fact that the best way to explain the last half billion years of Earth's climate history is with a climate sensitivity of about 2.8 C, and if you deviate too much from that figure then nothing makes sense. (Richard Alley's AGU talk from 2009 covers this very well; if you haven't seen that video yet then I strongly recommend you do so.)

    Look at what the evidence tells us the Earth was like during earlier times with similar conditions to today. This is a little bit complicated because you have to go a really long way back to get anywhere near today's CO2 levels, but if you do that then you'll find that, if anything, our current predictions are very conservative. (Which we already suspected anyway -- compare the 2007 IPCC report's prediction on Arctic sea ice with what's actually happened, for example.) No matter which way you look at it, the answer keeps coming up the same.

    Various people have attempted to argue for low climate sensitivity, but in every case they have looked at just one piece of evidence (e.g. the instrumental record), made a fundamental mistake in using that evidence (e.g. ignoring the fact that the Earth has not yet reached equilibrium, so calculating climate sensitivity by comparing the current increase in CO2 with the current increase in temperature is like predicting the final temperature of a pot of water a few seconds after turning the stove on), and ignored all of the other completely independent lines of evidence that conflict with the result they obtained. If they think that clouds will exert a strong negative feedback to save us in the near future, for example, they need to explain why clouds didn't exert a strong negative feedback during the Paleocene-Eocene Thermal Maximum, when global temperatures reached 6 C higher than today and the surface temperature of the Arctic Ocean was over 22 C.

    My view is that the default starting position should be that we assume the result will be the same as what the evidence suggests happened in the past. That's the "no models, no science, no understanding" position. If you want to move away from that position, and argue that things will be different this time, the only way to do so is with scientifically justifiable explanations for why it will be different. Some people seem to think the default position should be "things will be the same as the past thousand years" and insist on proof that things will change in unacceptable ways before agreeing to limit behaviour that basic physics and empirical evidence show must cause things to change, while at the same time ignoring all the different lines of evidence that should be that proof. I find that hard to understand.
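    A minimal sketch of the arithmetic behind those "per doubling" figures: it converts a sensitivity quoted in degrees C per doubling into the equilibrium warming implied by a given CO2 rise, using the standard simplified logarithmic forcing expression. The 280 ppm and 394 ppm values are assumed pre-industrial and approximate 2012 concentrations; the sensitivities are the ones quoted above.

    ```python
    import math

    # Simplified CO2 forcing: dF = 5.35 * ln(C/C0) W/m^2, so one doubling is ~3.7 W/m^2.
    F_2X = 5.35 * math.log(2.0)

    def equilibrium_warming(c_now_ppm, c_pre_ppm, sensitivity_per_doubling):
        """Equilibrium warming (C) implied by a CO2 change, for a given sensitivity."""
        forcing = 5.35 * math.log(c_now_ppm / c_pre_ppm)   # W/m^2
        return sensitivity_per_doubling * forcing / F_2X

    # Sensitivities quoted in the comment: CO2 alone, CO2 + water vapour, paleo estimate.
    for label, s in [("CO2 only", 1.2), ("CO2 + water vapour", 2.0), ("paleo estimate", 2.8)]:
        print(f"{label:>20}: {equilibrium_warming(394, 280, s):.2f} C for 280 -> 394 ppm")
    ```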
  14. Realistically What Might the Future Climate Look Like?
    @gws #52 The development of the abbreviation 'CAGW' -- which is used almost exclusively by the denial lobby -- is interesting. I think 'CAGW' started to appear when a significant section of the fake sceptics realised that -- in spite of years of denying that CO2 is a greenhouse gas, that the planet is warming and that humans are causing it (and so many other memes) -- they would finally have to start secretly accepting the mounting evidence supporting the idea of AGW. At that point they had to rethink their denial and re-cast the 'debate' (as they saw it) into one of whether the outcome of climate change would be serious. Opposing 'CAGW' lets them continue denying while keeping their real agenda -- that we should do nothing about the problem. Of course, as we all know, those who switch to using 'CAGW' when commenting on posts -- thus demonstrating their underlying acceptance of 'AGW' -- will never go so far as to correct the more ill-informed fake sceptics who are still denying the unquestionable basics of climate science. They're just happy that doubt is being sown.
  15. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    sout #53 McIntyre says he received the request from Lewandowsky's assistant on 6th September (a week after the survey had been posted at Tamino, Deltoid, etc.) and a follow-up request two weeks later. That brings us to 20th September. On 23rd September Lewandowsky gave a presentation at Monash University in which he announced the results of the survey, with the current sample size of 1100 (i.e. after the elimination of false data and duplicated IPs). So three days after asking for cooperation in fieldwork, he'd processed the results and written his conclusions and announced them.
  16. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    I'm afraid my caveats on Figure 2 didn't make it into the final article. Figure 2 is a comparison of the end of August 2012 to an unspecified August estimate for the 1938 data. It would be better to use the NSIDC August average, or the extent from a date in mid-August. Unfortunately NSIDC doesn't seem to archive its daily images, and the August average is not available yet. I will provide an updated figure as soon as I can. Also, the white area in the 1938 image is not observed directly: the observations are the red lines and symbols (and possibly coastal observations, which are unmarked), so some of it is speculative. However, everywhere observations are available, the limits of the ice extend far beyond this year's pack. Nonetheless, the best evidence we have found directly contradicts Christy's claims. If he has any evidence to support those claims, he should present it.
  17. Models are unreliable
    JasonB - all clear and understood, and I agree completely that the same clarity and scientific justification is required for the opposite hypothesis of increased CO2 having no significant effect on our climate. Science is the same whichever side you are on. I spend my working life having people try to discredit my models and science in court cases, and doing the same to theirs. I therefore think very clearly about what is and what is not scientifically justifiable, and am careful to state only that which I know can be demonstrated. If it can't, I am only able to describe the science and processes behind my predictions/statements, which by necessity become less certain the more I am asked to comment on conditions outside those that have been observed at some stage. My entry to this conversation is because I keep hearing that the science is settled and I want to see that science. From what I have learned here (thank you!) the key question for me (which I will start looking through at the climate sensitivity post) is: Are we confident in our understanding of the forcings that are underpinning our predictions at increasing CO2 levels?
  18. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    Thanks Dana for a nice summary on recent ice, especially the pointers to those Arctic/Greenland reconstructions that were unknown to me until now. Figs. 3 & 4 nicely represent Arctic amplification: delta T = 3 K within 64-90°N vs. 0.8 K globally. I would suggest adding John Christy's recent testimony in Congress to the Christy Crocks button. That latest crock deserves big prominence, because it's beyond my comprehension how a person of his stature could sacrifice his entire reputation by telling evident lies under oath. And he keeps doing it while the evidence keeps mounting with the 2012 melt.
  19. Models are unreliable
    opd68, The Intermediate form of this post contains six figures (including Tamino's) demonstrating the results of exactly the kinds of tests you are talking about. The first one, Figure 1, even shows what should have happened in the absence of human influence. Since the models aren't "tuned" to the actual historical temperature record, the fact that they can "predict" the 20th century temperature record using only natural and anthropogenic forcings seems to be exactly the kind of demonstration of predictive capability that you are looking for.

    The objection usually raised with regard to that is that we don't know for certain exactly what the aerosol emissions were during that time, and so there is some scope for "tuning" in that regard. But I think it's important to understand that the aerosols, while not certain, are still constrained by reality (so they can't be arbitrarily adjusted until the output "looks good"; the modellers have to take as input the range of plausible values produced by other researchers), and there are limits to how much tuning they really allow to the output anyway due to the laws of physics.

    I think that if anyone really wants to argue that there is nothing to worry about, they need to come up with a model that is based on the known laws of physics, that can take as input the range of plausible forcings during the 20th century, that can predict the temperature trend of the 20th century using those inputs at least as skillfully as the existing models, and that has a much lower climate sensitivity than the existing models do and therefore shows the 21st century will not have a problem under BAU. Simply saying that the existing models, which have passed all those tests, aren't "good enough" to justify action ignores the fact that they are the most skillful models we have and there are no models of comparable skill that give meaningfully different results. Due to the consequences of late action, those who argue there is nothing to worry about should be making sure that their predictions are absolutely, scientifically justifiable if they expect acceptance of their predictions, rather than just saying they "aren't convinced". In the absence of competing, equally skillful models, how can they not be?

    Regarding climate sensitivity, which you are correct in assuming is usually given as delta T for doubled CO2, the models aren't even the tightest constraint on the range of possible values anyway. If you look at the SkS post on climate sensitivity you'll see that the "Instrumental Period" in Figure 4 actually has quite a wide range compared to e.g. the Last Glacial Maximum. This is because the signal-to-noise ratio during the instrumental period is quite low. We know the values of the various forcings during that period more accurately than during any other period in Earth's history, but the change in those values and the resulting change in temperature is relatively small. Furthermore, the climate is not currently in equilibrium, so the full change resulting from that change in forcings is not yet evident in the temperatures. In contrast, we have less accurate figures for the change in forcings between the Last Glacial Maximum and today, but the magnitude of that change was so great, and the time so long, that we actually get a more accurate measure of climate sensitivity from that change than we do from the instrumental period.

    So it is completely unnecessary to rely on modern temperature records to come up with an estimate of climate sensitivity that is good enough to justify action. In fact, if you look at the final sensitivity estimate that results from combining all the different lines of evidence, you'll see that it is hardly any better than what we already get just by looking at the change since the Last Glacial Maximum. The contribution to our knowledge of climate sensitivity from modelling the temperature trend during the 20th century is almost negligible. (Sorry, modellers!) So again, if anyone really wants to argue that there is nothing to worry about, they also need a plausible explanation for why the climate sensitivity implied by the empirical data is much larger than what their hypothetical model indicates. And just to be clear:
    And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback).
    No. It is influenced by some of the inputs that go into the models, but those inputs must be reasonable and either measured or constrained by measurements and/or physics. And the models constrain it less precisely than the empirical observations of the change since the last glacial maximum anyway -- without using GCMs at all we get almost exactly the same estimate of climate sensitivity as what we get when adding them to the range of independent lines of evidence.
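    To make the Last Glacial Maximum argument concrete, here is a minimal sketch of that style of estimate: divide the observed temperature change by the estimated forcing change and scale to the ~3.7 W/m^2 of one CO2 doubling. The LGM numbers used are assumed, illustrative round values (roughly 5 C of cooling and roughly 6.5 W/m^2 of total forcing change), not a substitute for the published analyses.

    ```python
    F_2X = 3.7   # W/m^2 per CO2 doubling

    def sensitivity_from_paleo(delta_T, delta_F):
        """Equilibrium sensitivity (C per doubling) implied by a past climate change."""
        return delta_T * F_2X / delta_F

    # Assumed illustrative values for the Last Glacial Maximum: ~5 C cooler than
    # pre-industrial, with a total forcing change (ice sheets, greenhouse gases,
    # dust, ...) of roughly 6.5 W/m^2.
    print(sensitivity_from_paleo(delta_T=5.0, delta_F=6.5))   # roughly 2.8 C per doubling
    ```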
  20. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    Funny, I just made an image that might fit in the article, without knowing you were in the process of writing about this: http://erimaassa.blogspot.fi/2012/09/who-did-it.html
  21. Realistically What Might the Future Climate Look Like?
    John @55, Well said. I would also add to it: we (city dwellers/couch potatoes, which seems to be an increasingly large portion of us) need to change our lifestyle and take responsibility: learn to grow our own food to support ourselves, rather than relying solely on a farmer. For example, do a tour of your vegie patch each afternoon, which will also give you a prescribed dose of physical exercise, rather than going to the gym and to the supermarket afterwards. Working in the garden can be both more interesting and healthier than running on a treadmill in front of the TV. If an average couch potato does not understand the environmental benefit of such a lifestyle change, the financial and health self-benefit should be obvious.
  22. Models are unreliable
    Thanks scaddenp. That link is exactly what I was after. And not fixated, just referring to what is provided and communicated most often. Always best to start simple, I find. My point about prediction is really what the models are about - if we aren't able to have confidence in their predictions (even if it's a range) then we will struggle to gain acceptance of our science re: the underlying processes. And the question of climate sensitivity is really the key to this whole area of science - i.e. we know that CO2 is increasing and can make some scientifically robust predictions about rates of increase and potential future levels. But that isn't an issue unless it affects our climate. So, the question is then: if we (say) double CO2, what will happen to our climate and what implications does that have for us? If we have confidence in our predictive models we can then give well-founded advice to policy makers. And whilst sensitivity may be an output, my understanding is that it is determined by our input assumptions re: the component forcings such as increased atmospheric water vapour (positive feedback) and cloud cover (negative feedback). (ps. When you talk about climate sensitivity, I gather the values are referring to delta T for doubled CO2?)
  23. Models are unreliable
    Whoops, latest model/data comparison at RC is here
  24. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    It's mostly been ignored in the American media too, probably due to the political conventions leading up to the November elections. Unfortunate timing in that respect. Hopefully there will be some news coverage once we reach the minimum.
  25. Models are unreliable
    The IPCC report compares model predictions with observations to date, and a formal process is likely to be part of AR5 next year. You can get informal comparisons from the modellers here. One thing you can see from the IPCC reports, though, is that tying down climate sensitivity is still difficult. Best estimates are in the range 2-4.5, with (I think) something around 3 being the model mean. Climate sensitivity is a model output (not an input), and this is the range presented. Hansen's earlier model had a sensitivity of 4.5, which on current analysis is too high (see the Lessons series for why), whereas Broecker's 1975 estimate of 2.4 looks too low. In terms of trying to understand what the science will predict for the future, we have to live with that uncertainty for now. I still think you're too fixated on surface temperature for validation. It's one of many variables affected by anthropogenic change. How about things like the GHG-driven change to radiation leaving the planet or received at the surface? How about OHC?
  26. Potential methane reservoirs beneath Antarctica
    @Agnostic #6: We can only hope that the human race does not learn the answer to your question the hard way.
  27. Models are unreliable
    Once again, many thanks for the replies. Hopefully I'll address each of your comments to some degree, but feel free to take me to task if not. It also appears that I should take a step back into the underlying principles of our scientific 'model' (i.e. understanding) - for example how CO2 affects climate and how that has been adopted in our models.

    Sphearica - thanks for the links. I totally recognise the complexity of the system being modelled, and understand the difference between physically-based, statistical and conceptual modelling. I agree that it is difficult and complex, and as such we need to be very confident in what we are communicating, given the decisions that the outcomes are being applied to and the consequences of late action or, indeed, over-reaction. The GCMs etc. that are still our best way of assessing and communicating potential changes into the future are based on our understanding of these physical processes, and so our concepts need to be absolutely, scientifically justifiable if we expect acceptance of our predictions. Yes, we have observed rising temperatures and have a scientific model that can explain them in terms of trace gas emissions. No problem there; it is good scientific research. Once we start using that model to predict future impacts and advise policy, then we must expect to be asked to demonstrate the predictive capability of that model, especially when the predicted impacts are so significant. Possibly generalising, but my opinion is that the acceptance of science is almost always evidence-based. As such, to gain acceptance (outside those who truly understand all the complexities or those who accept them regardless) we realistically need to robustly and directly demonstrate the predictive capability of our models against data that either wasn't used or wasn't in existence when we undertook the prediction. In everyday terms this means comparing our model predictions (or range thereof) to some form of measured data, which is why I asked my original question.

    Tom C, thanks for the specifics. So, my next question is: there are models referred to in the Topic that show predictions up to 2020 from Hansen (1988 and 2006), and I was wondering if we have assessed these predictions against appropriate data from one of those four datasets up to the present?
  28. Miriam O'Brien (Sou) at 12:00 PM on 4 September 2012
    AGU Fall Meeting sessions on social media, misinformation and uncertainty
    After all the kerfuffle, 'skeptic' blogger McIntyre finally found his invitation to post a link to the survey. He said he ignored it.
  29. Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    What's worse is how little reporting of this has occurred in the Australian media. I suspect that the media in this country have quietly placed global warming in the too hard basket and decided to concentrate instead on the politics of the carbon tax and carbon pricing.
  30. Bert from Eltham at 11:05 AM on 4 September 2012
    Record Arctic Sea Ice Melt to Levels Unseen in Millennia
    I have read this three times Dana and I still do not fully understand all the nuances. That is due to my lack of full understanding not the quality of the article. What is very plain is that we are in real trouble even us Aussies. You have presented real evidence from refereed sources, not some glib hand waving argument based on fallacies or myth and following theories that have no basis in reality. How someone who purports to be an expert like Christy can argue against the overwhelming evidence is beyond any rational analysis. Bert
  31. Miriam O'Brien (Sou) at 11:04 AM on 4 September 2012
    AGU Fall Meeting sessions on social media, misinformation and uncertainty
    From what I've read, most people do not have an issue with the paper itself (except for Tom's wife). Apart from expressing 'miffness' that some blogs say they were not invited to post a link to the survey, their main concern is the title. The complaint is that, while the title reflects what the study was investigating, it does not adequately reflect what was found. I have a couple of suggestions for the title that should be more acceptable: No Market is 100% Laissez-Faire - Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science Alternatively: Many societies elect Governments - Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science Either of the above would more closely reflect the actual findings. (Variations could include: I don't want to pay tax therefore...; or similar.)
  32. Potential methane reservoirs beneath Antarctica
    Antarctic clathrates are larger than those found in the Arctic - but are they more vulnerable?
  33. Models are unreliable
    opd68: Your process of calibrate, predict, validate does not capture climate modelling at all well. It is a better description of statistical modelling, not physical modelling. Broadly speaking, if your model doesn't predict the observations, you don't fiddle with calibration parameters; you add more physics instead. That said, there are parameterizations used in the climate models to cope with sub-grid-scale phenomena (e.g. evaporation versus wind speed). However, the empirical relationship used is based on fitting measured evaporation rate to measured wind speed, not on fiddling with a parameter to match a temperature trend. In this sense the models are not calibrated to any temperature series at all. You can find more about that in the modelling FAQ at RealClimate (and ask questions there of the modellers). SkS did a series of articles on past predictions; look for the 'Lessons from past predictions' series.
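    To illustrate the distinction being drawn here, a small sketch of how a sub-grid parameterization is built: an empirical relationship such as evaporation versus wind speed is fitted to direct process measurements, not tuned against a global temperature series. The numbers below are made-up example observations, used only for illustration.

    ```python
    import numpy as np

    wind_speed = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])      # m/s (example obs)
    evaporation = np.array([0.8, 1.9, 3.7, 6.2, 7.9, 10.3])     # mm/day (example obs)

    # Fit a simple linear bulk formula E = a * U + b by least squares.
    a, b = np.polyfit(wind_speed, evaporation, 1)
    print(f"fitted bulk formula: E = {a:.2f} * U + {b:.2f}")

    # Inside a model, that fitted relationship is then applied per grid cell;
    # nothing in this fitting step ever sees the temperature record the model
    # will later be compared against.
    ```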
  34. Models are unreliable
    opd68, Your definitions of calibration and validation are pretty standard, but I'd like to make a few points that reflect my understanding of GCMs (which could be wrong):

    1. GCMs don't need to be calibrated on any portion of the global temperature record to work. Rather, they take as input historical forcings (i.e. known CO2 concentrations, solar emissions, aerosols, etc.) and are expected to reproduce historical temperature records as well as to forecast future temperatures (among other things) according to a prescribed future emissions scenario (which fundamentally cannot be predicted because we don't know what measures we will take in future to limit greenhouse gases -- so modellers just show what the consequences of a range of scenarios would be so we can do a cost-benefit analysis and decide which one is the optimal one to aim for, and which we then ignore because we like fossil fuels too much). There is some ability to "tune" the models in this sense due to the uncertainty relating to historical aerosol emissions (which some "skeptics" take advantage of, e.g. by assuming that if we don't know precisely what they were then we can safely assume with certainty that they were exactly zero), but this is actually pretty limited because the models must still obey the laws of physics; it's not an arbitrary parameter-fitting exercise like training a neural net would be.

    2. GCMs are expected to demonstrate skill on a lot more than just global temperatures. Many known phenomena are expected to be emergent behaviour from a well-functioning model, not provided as inputs.

    3. Even without sophisticated modelling you can actually get quite close using just a basic energy balance model (see the sketch below). This is because over longer time periods the Earth has to obey the law of conservation of energy, so while on short time scales the temperature may go up and down as energy is moved around the system, over longer terms these fluctuations have to cancel out. Charney's 1979 paper really is quite remarkable in that respect -- the range of climate sensitivities proposed is almost exactly the same as the modern range after 30+ years of modelling refinement. Even Arrhenius was in the ballpark over 100 years ago!
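    As a concrete version of point 3, here is a minimal zero-dimensional energy balance sketch. It is only an illustration of the idea, not the Charney model or a GCM, and the parameter values are assumed round numbers chosen so that the unperturbed state sits near the observed global mean temperature.

    ```python
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0              # solar constant, W m^-2
    ALBEDO = 0.30            # planetary albedo (assumed)

    def equilibrium_temperature(forcing=0.0, emissivity=0.61):
        """Global mean surface temperature (K) balancing absorbed sunlight plus an
        extra forcing against effective outgoing long-wave radiation."""
        absorbed = S0 * (1 - ALBEDO) / 4.0 + forcing
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    T0 = equilibrium_temperature()                 # ~288 K with these numbers
    T1 = equilibrium_temperature(forcing=3.7)      # add roughly one CO2 doubling of forcing
    print(T0, T1 - T0)                             # no-feedback response of ~1.1 K
    ```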
  35. Models are unreliable
    opd68, I think part of your difficulty is in understanding both the complexity of the inputs and the complexity of measuring those inputs in the real world. For example, dimming aerosols have a huge effect on outcomes. Actual dimming aerosols are difficult to measure, let alone to project into their overall effect on the climate. At the same time, moving forward, the amount of aerosols which will exist requires predictions of world economies and volcanic eruptions and major droughts. So you have an obfuscating factor which is very difficult to predict, very difficult to measure and somewhat difficult to apply in the model. This means that in the short run (as scaddenp said, less than 20 years) it is very, very hard to come close to the mark. You need dozens (hundreds?) of runs to come up with a "model mean" (with error bars) to show the range of likely outcomes. But even then, in the short time frame the results are unlikely to bear much resemblance to reality. You have to look beyond that. And when you compare your predictions to the outcome... you now need to also adjust for the random factors that didn't turn out the way you'd randomized them. And you can't even necessarily measure the real-world inputs properly to tease out what really happened, and so what you should input into the model.

    You may look at this and say "oh, then the models are worthless." Absolutely not. They're a tool, and you must use them for what they are meant for. They can be used to study the effects of increasing or decreasing aerosols and any number of other avenues of study. They can be used to help improve our confidence level in climate sensitivity, in concert with other means (observational, proxy, etc.). They can also be used to help us refine our understanding of the physics, and to look for gaps in our knowledge. They can also be used to some degree to determine if other factors could be having a larger effect than expected. But this statement of yours is untrue:
    This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty.
    Not at all. We have measured global temperatures and they are increasing. They continue to increase even when all other possible factors are on the decline. The reality is that without CO2 we would be in a noticeable cooling trend right now. There are also other ways (beyond models) to isolate which factors are influencing climate:
    - Huber and Knutti Quantify Man-Made Global Warming
    - The Human Fingerprint in Global Warming
    - Gleckler et al Confirm the Human Fingerprint in Global Ocean Warming
  36. Models are unreliable
    Thanks all for the feedback - much appreciated. For clarification, my use of the terms 'calibration' and 'validation' can be explained as follows (see the sketch below):
    - We calibrate our models against available data and then use these models to predict an outcome.
    - We then compare these predicted outcomes against data that was not used in the calibration. This can be data from the past (i.e. by splitting your available data into calibration and validation subsets) or data that we subsequently record over time following the predictive run.
    - So validation of our predictive models should be able to be undertaken against the data we have collected since the predictive run.

    Dikran & scaddenp - totally agree re: the importance of validation against a series of outcomes wherever possible. However, I feel that in this case the first step is to communicate with confidence and clarity that we understand the links between CO2 and GMT and can demonstrate this against real, accepted data. As such, in the first instance, whatever data was used to calibrate/develop our model(s) is what we need to use in our ongoing validation.

    Tom Curtis - thanks for that. The four you mention seem to be the most scientifically justifiable and accepted. In terms of satellite vs surface record (as per the paragraph above), whatever data type was used to calibrate/develop the specific model being used is what should be used to then assess its predictive performance.

    From my reading and understanding, a key component of the ongoing debate is:
    - Our predictive models show that with rising CO2 will (or has) come rising GMT (along with other effects such as increased sea levels, increased storm intensity, etc).
    - To have confidence in our findings we must be able to show that these predictive models have appropriately estimated GMT changes as they have now occurred (i.e. since the model runs were first undertaken).

    As an example, using the Hansen work referenced in the Intermediate tab of this Topic, the 1988 paper describes three scenarios (A, B and C) as:
    - "Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely" (increasing rate of emissions; quotes an annual growth rate of about 1.5% of current (1988) emissions).
    - "Scenario B has decreasing trace gas growth rates such that the annual increase in greenhouse forcing remains approximately constant at the present level" (constant increase in emissions).
    - "Scenario C drastically reduces trace gas growth between 1990 and 2000 such that greenhouse climate forcing ceases to increase after 2000".

    From Figure 2 in his 2006 paper, the reported predictive outcomes haven't changed (i.e. versus Fig 3(a) in the 1988 paper), which means that the 1988 models remained valid to 2006 (and presumably since?). So we should now be in a position to compare actual versus predicted GMT between 1988 and 2011/12. Again, I appreciate that this is merely one of the many potential variables/outcomes against which to validate the model(s); however it is chosen here as a direct reference to the posted Topic material.
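    The calibration/validation split described at the top of this comment can be illustrated with a small sketch using made-up numbers: fit a simple trend on the data available up to 1988, then score the "prediction" only against data recorded afterwards. The temperature series here is synthetic, generated purely for the example.

    ```python
    import numpy as np

    years = np.arange(1960, 2013)
    # Hypothetical temperature anomalies (deg C) -- a synthetic series, not real data.
    rng = np.random.default_rng(0)
    temps = 0.016 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

    calib = years <= 1988            # data available when the "prediction" was made
    valid = ~calib                   # data recorded afterwards -- the validation set

    slope, intercept = np.polyfit(years[calib], temps[calib], 1)
    predicted = slope * years[valid] + intercept

    rmse = np.sqrt(np.mean((predicted - temps[valid]) ** 2))
    print(f"validation RMSE against post-1988 data: {rmse:.3f} C")
    ```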
  37. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Eric, A brief Google of Lewandowsky shows he has over 100 publications and is an expert at surveys of this type. Your own quote says "The issue of whether or not to offer a midpoint has been disputed for decades" (my emphasis). You then conclude Lewandowsky incorrectly formatted the questions. Obviously experts dispute your conclusion; your conclusion is wrong according to your own quote. The survey was of people who responded to a link online. This is described in detail in the paper. You appear not to have read the paper you are criticizing. Lewandowsky had a number of additional questions that were dropped from the paper. Perhaps he did exactly what you describe. You have offered no evidence to support your position, only uninformed speculation. Your hand waving is not sufficient to argue with a peer-reviewed paper. You must provide specific examples of what you think is incorrect and expert opinion that it is not correct. If experts disagree, it is obviously acceptable to use the format. In this thread unsupported opinion has run rampant. Please support your assertions with peer-reviewed data. Unsupported hand waving can be dismissed with a hand wave.
  38. Realistically What Might the Future Climate Look Like?
    Estiben at 18:33 PM on 1 September, 2012, says, "...Minifarms are not as efficient, for one thing, and then there is the problem of distribution. If everyone is a farmer, who is going to deliver food to where it can't be grown?" This is a highly questionable statement. The most productive, sustainable way to grow food is manually. A small veg patch, run efficiently, can produce more weight of food per sq metre than any commercial operation, especially when the input of commercially-produced fertilisers, pesticides and use of fossil-fuel-powered machinery is taken into account. And that's also why the second half of your statement is invalid: if food is locally-produced, by as many people as possible, there is little or no transport involved. We should be doing everything to grow food as close as possible to where it's eaten. Even ignoring climate change, such a move will help avoid the negative impact of rising oil prices. A farmer without cheap oil is just a man leaning on his shovel.
  39. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Some of Tom's earlier points have a long history of research, for example in analyzing the effect of question format on answer response styles: http://feb1.ugent.be/nl/Ondz/wp/Papers/wp_10_636.pdf where they point out in section 3.2 "The issue of whether or not to offer a midpoint has been disputed for decades...". It essentially implies a possible response-style bias in the context of Lewandowsky's survey, perhaps only allowing an extreme style of response to qualify. There is also a lot of research on the effects of question ordering and question wording on responses, particularly when trying to determine causation. For example, Lewandowsky could ask a random sampling of people whether they believe in free markets before and after the climate change questions. If the response to the climate question changes based on whether it is before or after the free market question, that would imply (although not prove) causation between free-market ideology and rejection of climate science. Considering the lack of information about the selection of the survey sample, the very short section on development of the questions, and the lack of any references on survey methodologies, I can't make any favorable conclusions about the value of the results. Perhaps more information will become available, or Lewandowsky can answer some of those questions here.
  40. Models are unreliable
    Another thought on your mention of "model validation": validation of climate theory is not dependent on GCM outputs - arguably other measures are better. However, models (including hydrological models) are usually assessed in terms of model skill - their ability to make more accurate predictions than simple naive assumptions. For example, GCMs have no worthwhile skill in predicting temperatures etc. on decadal or shorter timescales. They have considerable skill in predicting 20-year-plus trends.
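    A small sketch, with synthetic numbers, of the skill idea: skill compares a model's error with that of a naive baseline such as a "no change" forecast, so a skill score above zero means the model beats the baseline and a score near one means it is far more accurate. The arrays below are made-up values for illustration only.

    ```python
    import numpy as np

    observed = np.array([0.10, 0.18, 0.22, 0.31, 0.35, 0.42])   # synthetic anomalies
    model    = np.array([0.12, 0.16, 0.25, 0.28, 0.37, 0.40])   # synthetic model output
    baseline = np.full_like(observed, observed[0])              # naive "no change" forecast

    def mse(pred, obs):
        return np.mean((pred - obs) ** 2)

    skill = 1.0 - mse(model, observed) / mse(baseline, observed)
    print(f"skill relative to the naive baseline: {skill:.2f}")
    ```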
  41. Models are unreliable
    Models do not predict just one variable - they necessarily compute a wide range of variables which can all be checked. With the Argo network in place, the predictions of ocean heat content will become more important. They also do not (without post-processing) predict global trends, but rather values for cells, so you can compare regional trends as well as the post-processed global trends. Note too that satellite MSU products like UAH and RSS measure something different from surface temperature indices like GISS and HadCRUT, and thus correspond to different model outputs.
  42. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    I read the Lewandowsky paper and it looks pretty typical to me. The conclusions are not surprising. I have not seen on this thread a single link to a scientific paper that contradicts Lewandowsky's result. Tom: is your wife an expert on surveys? You do not mention any expertise on her part, and I wondered why I should listen to anything she says. What is your expertise that allows you to determine if a survey was properly done or not? Why were you not selected to do the peer review? This paper has passed a review by peers who actually have experience performing surveys of this type. You must support your opinion with peer-reviewed data also. Your comments on this thread are below your usual standards. Your complaint about two samples is unsound. This is supposed to be a scientific blog. Can people please refer to data and not just give their opinions? The paper itself cites many peer-reviewed studies. Data is available if you want to become informed. Lewandowsky did not get very many skeptic replies. They did not want to participate. Small sample size is often a problem with surveys. Perhaps next time the skeptics will link to the survey.
  43. Arctic Sea Ice Extent: We're gonna need a bigger graph
    Digital Cuttlefish did a great job on that! Thanks for the link. One has to wonder how the deniers will spin this. But then, they are already shifting to other arguments... "we can't do anything about it." "It's too expensive to deal with." Etc.
  44. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Geoff, I'm not irritated, I'm laughing. I know I'm not supposed to-- this is supposed to be deadly serious after all-- but I can't help it. The dumpster diving part just slays me. For what it's worth I don't care what you in particular think or believe so don't sweat it. For all you and I know of each other we're just data points. Everything is information, that's all I'm saying.
  45. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    doug_bostrom (-Snip-) Why should I repeat what Tom Curtis said so well? The fact that I agree with him seems to irritate you more than anything. You don't criticise anything I say, but merely the fact that I'm supposedly not qualified to say it. I used to run opinion surveys. I certainly didn't spend a lifetime learning about survey methodology. The basics are not complicated. One thing you learn is, if you're doing a survey on dog food, you don't go back to the client and say "the dog owners we spoke to didn't want to be interviewed, so we spoke to cat owners instead". Lewandowsky's analysis in terms of correlations between latent variables simply hides his lack of data. Causations derived from the anonymous online box ticks of a dozen or fewer people - you couldn't make it up. Or could you?
    Moderator Response:

    [DB] Note that accusations of deception are in violation of the Comments Policy.

    Please note that posting comments here at SkS is a privilege, not a right. This privilege can be rescinded if the posting individual treats adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.

    Please take the time to review the policy and ensure future comments are in full compliance with it. Thanks for your understanding and compliance in this matter.

    References to moderation snipped; accusations of dishonesty struck out.

  46. Realistically What Might the Future Climate Look Like?
    @Kevin C - in place of CFLs, you might try some of the new LEDs. They're not cheap, so it makes economic sense to put them only where they'll be on many hours each day, but they provide a nicer light than the CFLs. Now if I can figure out the most economical way to get rid of our oil-fired furnace....
  47. 2012 SkS Weekly Digest #35
    Further to funglestrumpet, pictures of "adaptation" would also be useful. Centuries-old homes that have never previously been flooded now underwater, crops dying, that sort of thing. They're all in the "adaptation" bucket, comfy though that word may sound. Adaptation is driven by stress, strain and things being destroyed. Of course there's the usual attribution problem.
  48. 2012 SkS Weekly Digest #35
    This being the day for non-mainstream comments, I would like to suggest a new page for this site (and sorry for any extra workload!). How about having a section entitled 'Business as Usual'? While fairly obvious, I suppose I had better spell it out for any WUWT types who tune in for the cartoons etc. on this day. It seems to me that while we pay a lot of attention to the air temperature rise, what we really need is for business as usual to cease to be business as usual. To that end, if we let people know what will happen if we do nothing, then they might be persuaded to seek a change in attitude to climate change.

    Let's face it: talk of a six degree (C) rise can seem quite attractive, until one sees exactly what this might mean in business-as-usual terms, showing food and energy prices and/or availability, sea level rise and indeed survivability, etc. An intelligent sceptic with whom I debated climate change recently said how nice a six degree rise would be, especially with Mediterranean conditions in Northern Europe! (No mention of the death toll from the European heat wave in 2003!)

    I leave it to the blogosphere, assuming it agrees with the notion, to decide what parameters need to be included, but I would recommend that air temperature, sea temperature, ocean acidification, food volumes/tonnages and sea level rise be included, obviously together with any known upper and lower bounds for each parameter. A picture paints a thousand words, so I would recommend that the information be shown in the form of graphs/charts etc., where error bars would be most impressive. Business as usual will change as the world wakes up and acts to combat climate change. That will be reflected in the graphs and charts and thus show the benefits of any changes that might be made from time to time, especially when it comes to error bars.
  49. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    One (?) other thought on this business of parsing a paper for clues as to its validity. Necessarily Lewandowsky followed statistical clues about his hypothesis. The statistical description he claims to have derived necessarily does not map perfectly to any single individual. That said, the behavior of individuals gives us hints to whether they're a member of one general statistical class or another. For those following discussion of Lewandowsky's paper, consider whether a given individual is choosing to go down the rabbit hole of stolen database diving in hopes of uncovering some sort of plot, or if that individual is sticking to science, knows about or is getting educated on the topic of survey methodology, is carefully separating what can be definitely known versus what must remain in the realm of speculation. Paying attention to individual behavior can help us to understand the possible worth of Lewandowsky's paper as well as sort out who's worth listening to and who isn't. When I apply that standard here, I immediately see the "Tom Curtis" class and the "Geoff Chambers" class. They seem different. Tom's taken a bit of a mental swan dive in firmly ascribing "scam" to certain responses, but his analysis is a first brush at actually performing a critique of the paper that is numerically agreeable and possibly useful. Geoff Chambers appears to be in another class.
  50. Arctic Sea Ice Extent: We're gonna need a bigger graph
    Sauerj @12: One of my favorite poets (though he denies the label) is the Digital Cuttlefish. He is already on the job. And as always, he does not disappoint!
