




Comments 54851 to 54900:

  1. Models are unreliable
    opd68. Your process of calibrate, predict, validate does not capture climate modelling at all well. It is a better description of statistical modelling than of physical modelling. Broadly speaking, if your model doesn't predict the observations, you don't fiddle with calibration parameters; you add more physics instead. That said, there are parameterizations used in the climate models to cope with sub-grid-scale phenomena (e.g. evaporation versus wind speed). However, the empirical relationship used is based on fitting measured evaporation rate to measured wind speed, not on fiddling with a parameter to match a temperature trend. In this sense the models are not calibrated to any temperature series at all. You can find more about that in the modelling FAQ at RealClimate (and ask questions there of the modellers). SkS did a series of articles on past predictions; look for the "Lessons from past predictions" series.
  2. Models are unreliable
    opd68, Your definitions of calibration and validation are pretty standard, but I'd like to make a few points that reflect my understanding of GCMs (which could be wrong):
    1. GCMs don't need to be calibrated on any portion of the global temperature record to work. Rather, they take as input historical forcings (i.e. known CO2 concentrations, solar emissions, aerosols, etc.) and are expected to reproduce historical temperature records as well as forecast future temperatures (among other things) according to a prescribed future emissions scenario (which fundamentally cannot be predicted, because we don't know what measures we will take in future to limit greenhouse gases -- so modellers just show what the consequences of a range of scenarios would be, so that we can do a cost-benefit analysis and decide which one is optimal to aim for -- and which we then ignore because we like fossil fuels too much). There is some ability to "tune" the models in this sense due to the uncertainty in historical aerosol emissions (which some "skeptics" take advantage of, e.g. by assuming that if we don't know precisely what they were then we can safely assume with certainty that they were exactly zero), but this is actually pretty limited because the models must still obey the laws of physics; it's not an arbitrary parameter-fitting exercise like training a neural net would be.
    2. GCMs are expected to demonstrate skill on a lot more than just global temperatures. Many known phenomena are expected to be emergent behaviour of a well-functioning model, not provided as inputs.
    3. Even without sophisticated modelling you can actually get quite close using just a basic energy balance model. This is because over longer time periods the Earth has to obey the law of conservation of energy, so while on short time scales the temperature may go up and down as energy is moved around the system, over the longer term these fluctuations have to cancel out.
Charney's 1979 paper really is quite remarkable in that respect -- the range of climate sensitivities proposed is almost exactly the same as the modern range after 30+ years of modelling refinement. Even Arrhenius was in the ballpark over 100 years ago!
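    [The "basic energy balance model" idea mentioned in the comment above can be sketched in a few lines. This is a textbook-style zero-dimensional illustration, not any commenter's or modelling group's actual code; the parameter values and the 5.35·ln(C/C0) forcing fit are standard approximations, and the effective emissivity is simply tuned so that the baseline temperature comes out near the observed ~288 K. Note it includes no feedbacks, so the warming it gives for doubled CO2 is only the bare no-feedback response.]

```python
import math

# Zero-dimensional energy balance: absorbed solar radiation balances
# outgoing longwave radiation, with an extra radiative forcing term.
# All parameter values are illustrative textbook assumptions.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo
EPSILON = 0.612    # effective emissivity, tuned so T is roughly 288 K

def equilibrium_temperature(forcing=0.0):
    """Equilibrium surface temperature (K) given extra forcing in W m^-2."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0 + forcing
    return (absorbed / (EPSILON * SIGMA)) ** 0.25

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing (W m^-2), 5.35 * ln(C/C0) fit."""
    return 5.35 * math.log(c_ppm / c0_ppm)

t_base = equilibrium_temperature()
t_doubled = equilibrium_temperature(co2_forcing(560.0))
print(round(t_base, 1), round(t_doubled - t_base, 2))
```

    [Run as written, the baseline comes out near 288 K and doubling CO2 adds roughly 1 K; the point is only that a trivial conservation-of-energy argument already lands in the right neighbourhood, as the comment says.]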
  3. Models are unreliable
    opd68, I think part of your difficulty is in understanding both the complexity of the inputs and the complexity of measuring those inputs in the real world. For example, dimming aerosols have a huge effect on outcomes. Actual dimming aerosols are difficult to measure, let alone to project into their overall effect on the climate. At the same time, moving forward, the amount of aerosols which will exist requires predictions of world economies and volcanic eruptions and major droughts. So you have an obfuscating factor which is very difficult to predict, very difficult to measure, and somewhat difficult to apply in the model. This means that in the short run (as scaddenp said, less than 20 years) it is very, very hard to come close to the mark. You need dozens (hundreds?) of runs to come up with a "model mean" (with error bars) to show the range of likely outcomes. But even then, in the short time frame the results are unlikely to bear much resemblance to reality. You have to look beyond that. But when you compare your predictions to the outcomes... you now need to also adjust for the random factors that didn't turn out the way you'd randomized them. And you can't even necessarily measure the real-world inputs properly to tease out what really happened, and so what you should input into the model. You may look at this and say "oh, then the models are worthless." Absolutely not. They're a tool, and you must use them for what they are meant for. They can be used to study the effects of increasing or decreasing aerosols and any number of other avenues of study. They can be used to help improve our confidence level in climate sensitivity, in concert with other means (observational, proxy, etc.). They can also be used to help us refine our understanding of the physics, and to look for gaps in our knowledge. They can also be used to some degree to determine if other factors could be having a larger effect than expected. But this statement of yours is untrue:
    This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty.
    Not at all. We have measured global temperatures and they are increasing. They continue to increase even when all other possible factors are on the decline. The reality is that without CO2 we would be in a noticeable cooling trend right now. There are also other ways (beyond models) to isolate which factors are influencing climate: Huber and Knutti Quantify Man-Made Global Warming The Human Fingerprint in Global Warming Gleckler et al Confirm the Human Fingerprint in Global Ocean Warming
  4. Models are unreliable
    Thanks all for the feedback - much appreciated. For clarification, my use of the terms 'calibration' and 'validation' can be explained as:
    - We calibrate our models against available data and then use these models to predict an outcome.
    - We then compare these predicted outcomes against data that was not used in the calibration. This can be data from the past (i.e. by splitting your available data into calibration and validation subsets) or data that we subsequently record over time following the predictive run.
    - So validation of our predictive models should be able to be undertaken against the data we have collected since the predictive run.
    Dikran & scaddenp - totally agree re: importance of validation against a series of outcomes wherever possible. However, I feel that in this case the first step is to be able to communicate with confidence and clarity that we understand the links between CO2 and GMT and can demonstrate this against real, accepted data. As such, in the first instance, whatever data was used to calibrate/develop our model(s) is what we need to use in our ongoing validation.
    Tom Curtis - thanks for that. The four you mention seem to be the most scientifically justifiable and accepted. In terms of satellite vs surface record (as per the paragraph above), whatever data type was used to calibrate/develop the specific model being used is what should be used to then assess its predictive performance.
    From my reading and understanding, a key component of the ongoing debate is:
    - Our predictive models show that with rising CO2 will (or has) come rising GMT (along with other effects such as increased sea levels, increased storm intensity, etc.).
    - To have confidence in our findings we must be able to show that these predictive models have appropriately estimated GMT changes as they have now occurred (i.e. since the model runs were first undertaken).
    As an example, using the Hansen work referenced in the Intermediate tab of this Topic, the 1988 paper describes three Scenarios (A, B and C) as:
    - “Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely” (increasing rate of emissions - quotes an annual growth rate of about 1.5% of current (1988) emissions)
    - “Scenario B has decreasing trace gas growth rates such that the annual increase in greenhouse forcing remains approximately constant at the present level” (constant increase in emissions)
    - “Scenario C drastically reduces trace gas growth between 1990 and 2000 such that greenhouse climate forcing ceases to increase after 2000”
    From Figure 2 in his 2006 paper, the reported predictive outcomes haven’t changed (i.e. versus Fig. 3(a) in the 1988 paper), which means that the 1988 models remained valid to 2006 (and presumably since?). So we should now be in a position to compare actual versus predicted GMT between 1988 and 2011/12. Again, I appreciate that this is merely one of the many potential variables/outcomes against which to validate the model(s); however, it is chosen here as a direct reference to the posted Topic material.
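    [The calibrate-then-validate split described in the comment above can be made concrete with a toy example. The "model" below is just an ordinary least-squares trend fit and the anomaly series is invented; none of it corresponds to any real climate product or to Hansen's scenarios. It only illustrates the mechanics: fit on one portion of the record, then score predictions on the held-out remainder.]

```python
# Toy calibrate/validate split: fit a linear trend on early data,
# then check its predictions against held-out later data.
# All data and the split point are illustrative assumptions.

def fit_trend(years, temps):
    """Ordinary least-squares slope and intercept (the 'calibration' step)."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    slope = (sum((y - my) * (t - mt) for y, t in zip(years, temps))
             / sum((y - my) ** 2 for y in years))
    return slope, mt - slope * my

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Synthetic anomaly series: a 0.02/yr trend plus deterministic 'noise'.
years = list(range(1980, 2012))
temps = [0.02 * (y - 1980) + (0.05 if y % 3 else -0.05) for y in years]

split = 24                                  # last 8 years held out
slope, icept = fit_trend(years[:split], temps[:split])
pred = [slope * y + icept for y in years[split:]]
print(round(slope, 3), round(rmse(pred, temps[split:]), 3))
```

    [The fitted slope recovers the built-in 0.02/yr trend and the out-of-sample RMSE stays near the noise level, which is the pattern one hopes to see when a calibrated model is validated on data it never saw.]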
  5. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Eric, A brief Google of Lewandowsky shows he has over 100 publications and is an expert at surveys of this type. Your own quote says "The issue of whether or not to offer a midpoint has been disputed for decades" (my emphasis). You then conclude Lewandowsky incorrectly formatted the questions. Obviously experts dispute your conclusion; your conclusion is wrong according to your own quote. The survey was of people who responded to a link online. This is described in detail in the paper. You appear not to have read the paper you are criticizing. Lewandowsky had a number of additional questions that were dropped from the paper. Perhaps he did exactly what you describe. You have offered no evidence to support your position, only uninformed speculation. Your hand waving is not sufficient to argue with a peer-reviewed paper. You must provide specific examples of what you think is incorrect and expert opinion that it is not correct. If experts disagree, it is obviously acceptable to use the format. In this thread unsupported opinion has run rampant. Please support your assertions with peer-reviewed data. Unsupported hand waving can be dismissed with a hand wave.
  6. Realistically What Might the Future Climate Look Like?
    Estiben at 18:33 PM on 1 September, 2012, says, "...Minifarms are not as efficient, for one thing, and then there is the problem of distribution. If everyone is a farmer, who is going to deliver food to where it can't be grown?" This is a highly questionable statement. The most productive, sustainable way to grow food is manually. A small veg patch, run efficiently, can produce more weight of food per sq metre than any commercial operation, especially when the input of commercially-produced fertilisers, pesticides and use of fossil-fuel-powered machinery is taken into account. And that's also why the second half of your statement is invalid: if food is locally-produced, by as many people as possible, there is little or no transport involved. We should be doing everything to grow food as close as possible to where it's eaten. Even ignoring climate change, such a move will help avoid the negative impact of rising oil prices. A farmer without cheap oil is just a man leaning on his shovel.
  7. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Some of Tom's earlier points have a long history of research, for example in analyzing the effect of question format on answer response styles: http://feb1.ugent.be/nl/Ondz/wp/Papers/wp_10_636.pdf where they point out in section 3.2 "The issue of whether or not to offer a midpoint has been disputed for decades...". It essentially implies a possible response-style bias in the context of Lewandowsky's survey, perhaps only allowing an extreme style of response to qualify. There is also a lot of research on the effects of question ordering and question wording on responses, particularly when trying to determine causation. For example, Lewandowsky could ask a random sampling of people whether they believe in free markets before and after the climate change questions. If the response to the climate question changes based on whether it comes before or after the free market question, that would imply (although not prove) causation between free market ideology and rejection of climate science. Considering the lack of information about the selection of the survey sample, the very short section on development of the questions, and the lack of any references on survey methodologies, I can't make any favorable conclusions about the value of the results. Perhaps more information will become available, or Lewandowsky can answer some of those questions here.
  8. Models are unreliable
    Another thought on your talk of "model validation". Validation of climate theory is not dependent on GCM outputs - arguably other measures are better. However, models (including hydrological models) are usually assessed in terms of model skill - their ability to make more accurate predictions than simple naive assumptions. For example, GCMs have no worthwhile skill in predicting temperatures etc. on timescales of a decade or less. They have considerable skill in predicting 20-year+ trends.
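    [The notion of "skill against a naive assumption" mentioned above has a standard quantitative form: a skill score of 1 - MSE(model)/MSE(reference), where the reference is some trivial forecast such as "no change". The sketch below uses that common convention with invented numbers, purely to show how the definition works; it is not how any particular GCM is scored.]

```python
# MSE-based skill score relative to a naive reference forecast.
# Skill > 0 means the model beats the naive assumption; skill = 1
# would be a perfect forecast. All data here are toy values.

def mse(pred, obs):
    """Mean squared error of a forecast against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, naive_pred, obs):
    """1 - MSE(model)/MSE(naive): fraction of the naive error removed."""
    return 1.0 - mse(model_pred, obs) / mse(naive_pred, obs)

obs = [0.1 * i for i in range(1, 11)]            # warming anomalies
naive = [0.0] * len(obs)                         # "no change" baseline
model = [0.1 * i + 0.02 for i in range(1, 11)]   # trend model, small bias

print(round(skill_score(model, naive, obs), 3))
```

    [With these toy numbers the trend model removes almost all of the baseline's error, so the score is close to 1; a model no better than "no change" would score 0, and a worse one would go negative.]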
  9. Models are unreliable
    Models do not predict just one variable - they necessarily compute a wide range of variables which can all be checked. With the Argo network in place, the predictions of Ocean Heat Content will become more important. They also do not (without post-processing) predict global trends, but values for cells, so you can compare regional trends as well as the post-processed global trends. Note too that satellite MSU products like UAH and RSS measure something different from surface temperature indices like GISS and HadCRUT, and thus correspond to different model outputs.
  10. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    I read the Lewandowsky paper and it looks pretty typical to me. The conclusions are not surprising. I have not seen on this thread a single link to a scientific paper that contradicts Lewandowsky's result. Tom: is your wife an expert on surveys? You do not mention any expertise on her part and I wondered why I should listen to anything she says. What is your expertise that allows you to determine if a survey was properly done or not? Why were you not selected to do the peer review? This paper has passed a review by peers who actually have experience performing surveys of this type. You must support your opinion with peer-reviewed data also. Your comments on this thread are below your usual standards. Your complaint about two samples is unsound. This is supposed to be a scientific blog. Can people refer to data and not just give their opinions? The paper itself cites many peer-reviewed studies. Data is available if you want to become informed. Lewandowsky did not get very many skeptic replies. They did not want to participate. Small sample size is often a problem with surveys. Perhaps next time the skeptics will link the survey.
  11. Arctic Sea Ice Extent: We're gonna need a bigger graph
    Digital Cuttlefish did a great job on that! Thanks for the link. One has to wonder how the deniers will spin this. But then, they are already shifting to other arguments... "we can't do anything about it." "It's too expensive to deal with." Etc.
  12. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Geoff, I'm not irritated, I'm laughing. I know I'm not supposed to-- this is supposed to be deadly serious after all-- but I can't help it. The dumpster diving part just slays me. For what it's worth I don't care what you in particular think or believe so don't sweat it. For all you and I know of each other we're just data points. Everything is information, that's all I'm saying.
  13. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    doug_bostrom (-Snip-) Why should I repeat what Tom Curtis said so well? The fact that I agree with him seems to irritate you more than anything. You don’t criticise anything I say, but merely the fact that I’m supposedly not qualified to say it. I used to run opinion surveys. I certainly didn’t spend a lifetime learning about survey methodology. The basics are not complicated. One thing you learn is, if you’re doing a survey on dog food, you don’t go back to the client and say “the dog owners we spoke to didn’t want to be interviewed, so we spoke to cat owners instead”. Lewandowsky’s analysis in terms of correlations between latent variables simply hides his lack of data. Causations derived from the anonymous on-line box ticks of a dozen or less people - you couldn’t make it up. Or could you?
    Moderator Response:

    [DB] Note that accusations of deception are in violation of the Comments Policy.

    Please note that posting comments here at SkS is a privilege, not a right. This privilege can be rescinded if the posting individual treats adherence to the Comments Policy as optional, rather than the mandatory condition of participating in this online forum.

    Please take the time to review the policy and ensure future comments are in full compliance with it. Thanks for your understanding and compliance in this matter.

    References to moderation snipped; accusations of dishonesty struck out.

  14. Realistically What Might the Future Climate Look Like?
    @Kevin C - in place of CFLs, you might try some of the new LEDs. They're not cheap, so it makes economic sense to put them only where they'll be on many hours each day, but they provide a nicer light than the CFLs. Now if I can figure out the most economical way to get rid of our oil-fired furnace....
  15. 2012 SkS Weekly Digest #35
    Further to funglestrumpet, pictures of "adaptation" would also be useful. Centuries-old homes that have never previously been flooded now underwater, crops dying, that sort of thing. They're all in the "adaptation" bucket, comfy though that word may sound. Adaptation is driven by stress, strain and things being destroyed. Of course there's the usual attribution problem.
  16. 2012 SkS Weekly Digest #35
    This being the day for non-mainstream comments, I would like to suggest a new page for this site (and sorry for any extra workload!). How about having a section entitled ‘Business as Usual’? While fairly obvious, I suppose I had better spell it out for any WUWT types who tune in for the cartoons etc. on this day. It seems to me that while we pay a lot of attention to the air temperature rise, what we really need is for business as usual to cease to be business as usual. To that end, if we let people know what will happen if we do nothing, then they might be persuaded to seek a change in attitude to climate change. Let's face it; talk of a six degree (C) rise can seem quite attractive, until one sees exactly what this might mean in business-as-usual terms showing food and energy prices and/or availability, sea level rise and indeed survivability etc. An intelligent sceptic with whom I debated climate change recently said how nice a six degree rise would be, especially with Mediterranean conditions in Northern Europe! (No mention of the death toll from the European heat wave in 2003!) I leave it to the blogosphere, assuming it agrees with the notion, to decide what parameters need to be included, but would recommend that air temp, sea temp, ocean acidification, food volumes/tonnages and sea level rise be included, obviously together with any known upper and lower bounds for each parameter. A picture paints a thousand words, so I would recommend that the information be shown in the form of graphs/charts etc., where error bars would be most impressive. Business as usual will change as the world wakes up and acts to combat climate change. That will be reflected in the graphs and charts and thus show the benefits of any changes that might be made from time to time, especially when it comes to error bars.
  17. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    One (?) other thought on this business of parsing a paper for clues as to its validity. Necessarily Lewandowsky followed statistical clues about his hypothesis. The statistical description he claims to have derived necessarily does not map perfectly to any single individual. That said, the behavior of individuals gives us hints to whether they're a member of one general statistical class or another. For those following discussion of Lewandowsky's paper, consider whether a given individual is choosing to go down the rabbit hole of stolen database diving in hopes of uncovering some sort of plot, or if that individual is sticking to science, knows about or is getting educated on the topic of survey methodology, is carefully separating what can be definitely known versus what must remain in the realm of speculation. Paying attention to individual behavior can help us to understand the possible worth of Lewandowsky's paper as well as sort out who's worth listening to and who isn't. When I apply that standard here, I immediately see the "Tom Curtis" class and the "Geoff Chambers" class. They seem different. Tom's taken a bit of a mental swan dive in firmly ascribing "scam" to certain responses, but his analysis is a first brush at actually performing a critique of the paper that is numerically agreeable and possibly useful. Geoff Chambers appears to be in another class.
  18. Arctic Sea Ice Extent: We're gonna need a bigger graph
    Sauerj @12: One of my favorite poets (though he denies the label) is the Digital Cuttlefish. He is already on the job. And as always, he does not disappoint!
  19. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Foxgoose: It matters not whether serious scientists take the paper seriously - its conclusions are now imprinted on the public mind. The ox gored, so to speak. Horrible, isn't it? As for errors, those seem to hinge on the notion that we can say definitely that one response is a scam and another isn't. We actually can't do that. We can speculate but we can't say with any degree of formally defensible confidence. To become attached to firm conclusions by winnowing responses into "valid" and "invalid" buckets is to commit an error. There are much more subtle ways to manipulate a survey than jamming response knobs to one extreme or another. Go read about it. If any of us were to spend a career getting educated on survey methodology we might have something genuinely useful to say here.
  20. Hurricanes aren't linked to global warming
    To IanC--thank you for understanding the argument. While I'm correcting the page, there is a related but still more subtle error on it: "Recent research has shown that we are experiencing more storms with higher wind speeds, and these storms will be more destructive, last longer and make landfall more frequently than in the past. Because this phenomenon is strongly associated with sea surface temperatures, it is reasonable to suggest a strong probability that the increase in storm intensity and climate change are linked." The mistake is in the "because" phrase. Since sea temperature and air temperature are not perfectly correlated, high sea temperature will correlate with a high difference between sea and air temperature. So the evidence doesn't tell us whether it is the sea temperature or the temperature difference that is related to storm intensity. This one is interesting partly because it echoes a famous error in economics. The Phillips Curve showed an inverse relation between inflation and unemployment--suggesting that by accepting some level of inflation one could hold down unemployment. When the attempt was made, it didn't work (hence "stagflation") because the real relation was not with the inflation level but the difference between the actual and anticipated level--and once a country maintained an inflation rate of (say) 5% for a while, people came to anticipate it, and the unemployment rate went back up.
  21. 2012 SkS Weekly Digest #35
    Hello Moderators, The link for "Kashmir's melting glaciers" points to the same ThinkProgress link as the article on Russia's wildfires and dried-out peat bogs.
    Moderator Response: [JH] Link fixed. Thanks for bringing this to our attention.
  22. Miriam O'Brien (Sou) at 00:53 AM on 4 September 2012
    AGU Fall Meeting sessions on social media, misinformation and uncertainty
    I made some observations about the Lewandowsky et al paper (on my own blog). I was fascinated mainly by the reaction of skeptics, for example on WUWT and in response to an article in the UK Telegraph. Responses from 'skeptics' lent considerable support to the findings of the paper to which they so strongly objected - that right wing ideologies are a predictor of rejection of climate science. In regard to those people who accept conspiracy theories as a matter of course (rather than 'skeptics' who just seem to think climate science is a giant conspiracy), I noted that there seemed to be too few respondents to draw any conclusions. This is likely to be for two reasons. Firstly, conspiracy theorists tend to congregate on conspiracy theory sites and are less likely to visit sites like this one. Secondly, I don't imagine they make up more than a tiny percentage of the world's population. (They make a lot of noise for such a small group though, and there are probably more of them than most of us think.) In regard to the survey design, it seemed adequate for the purpose. It was a shame there was not a greater proportion of responses from 'skeptics', although it's probably not that far from the proportion in the general community. I personally don't mind the tone taken in the study. This can be put down to the fact that I hold strong views about 'skeptics' and the antics of people who've set themselves up to delay action to mitigate global warming.
  23. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Re - michael sweet at 22:40 PM on 3 September, 2012 It would indeed be nice if that was how "science is normally done". In this case, however someone managed to get the erroneous conclusions of the paper headlined in two UK national newspapers before the paper was even press released, let alone published. It matters not whether serious scientists take the paper seriously - its conclusions are now imprinted on the public mind. The interesting question is - who organised the pre-release press exposure?
  24. Bostjan Kovacec at 23:10 PM on 3 September 2012
    Realistically What Might the Future Climate Look Like?
    @Tom 51 I can agree with your argument that it's human volcanoes that keep us nice and cool, and yes, who knows when - if ever - people will stop burning coal and oil. You're right also that Schellnhuber did say that it's Ramanathan who says we've already committed to 2.4 °C, and he did say there are uncertainties about that. 2.1, 1.9, or even »just« 1.7 - that's really not that important. Now, I've listened to Ramanathan's lecture and it makes enough sense to be skeptical about the »budget«. With so many lives at stake I believe we have to take Ramanathan's message seriously until he's proven absolutely wrong, and not vice versa. What worries me is that small detail about the atmospheric lifetimes of the GHGs and aerosols. If humanity happened to achieve emission reductions as in Figure 1 (the red curve seems more »plausible« to me), then we could expect a sharp drop in aerosols and a huge acceleration of warming. We saw that in the nineties here in Europe and I experienced it in my home town. When people started heating homes with gas and the Balkan wars destroyed most of the industry, the sky became blue again and temperature skyrocketed. We would've cooked already if it wasn't for a local factory which took care of us by spewing tonnes of TiO2 into the air every day. My local summer Tm from 1851 to 2012. Horizontal line marks 1988 when local industry collapsed. The 2003 spike is clearly visible - that'll be just an average summer by the 2030s according to the UK Met Office. I really wouldn't want to be offensive to anyone, but in this light, I find talking about the "budget" - and not mentioning any uncertainties associated with it - just music to Ms Merkel's (and everyone else's) ears. In the end we'll happily accept sulfuric acid/TiO2 air-conditioning. Nobody will remember those "uncertainties" in calculations. The most important thing is that it'll be good for GDP and everyone will be happy. So I'd suggest a short disclaimer for Figure 1: "with massive GE effort".
    Moderator Response: [RH] Fixed image width that was breaking page format.
  25. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Tom, I have worked on public surveys (on tobacco use) and when we analyzed the data we only deleted responses that contradicted themselves. It is very difficult to set up an objective measure to reject samples. You might be surprised how dogmatic some deniers are. Read the comments on WUWT for examples. People who believe in conspiracy theories often believe in a lot of theories. Lewandowsky has data he has collected. If you do not like his data you are welcome to perform a better survey and publish your results. If the consensus of scientists is that the data are not supportable, then this paper will not be cited by anyone else. That is how science is normally done. I strongly doubt that mainstream scientists will continually cite this paper if the methodology is questioned. It is the deniers who cite papers that have been shown to be poor, since they have no good data to support their premises. I am interested in surveying high school students on global warming for publication. Albatross: do you do public surveys for publication?
  26. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Have completely missed this hot topic! But it is nice to note that genuine skepticism is resulting in deeper analysis, which can only be good. It's better than all the fawning that goes on in the 'other' camp.
  27. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    I would just like to second Geoff Chambers' comment and thank Tom Curtis for his honest and intellectually rigorous analysis of Stephan Lewandowsky's paper. I hope Stephan will follow his sound advice and rewrite or withdraw this seriously flawed work.
  28. Realistically What Might the Future Climate Look Like?
    I would like to remind everyone here that the terms "catastrophe", "catastrophic", etc. already contain a value judgement. While most people coming to these pages likely share similar values and are thus concerned about GW, many others are not, or not yet, as they do not share those values. While the denier community mocks the "C", the scientific community shies away from c-words as it is supposed to stay neutral on such judgements. Thus, leadership will have to come from others. As Bostjan pointed out, it will not come from politicians, as they want to be reelected, and not from the grass roots. That leaves NGOs, the media, and prominent individuals (such as Al Gore, I guess). Thanks to SkS, they can find most of what they need here.
  29. Models are unreliable
opd68 @554, no, there is not a universally accepted measure of Global Mean Surface Temperature (GMST) accepted by all sides. HadCRUT3 and now HadCRUT4, NCDC, and Gistemp are all accepted as being approximately accurate by climate scientists in general with a few very specific exceptions. In general, any theory that is not falsified by any one of these four has not been falsified within the limits of available evidence. In contrast, any theory falsified by all four has been falsified. The few exceptions (and they are very few within climate science) are all very determined AGW "skeptics". They tend to insist that the satellite record is more accurate than the surface record because adjustments are required to develop the surface record (as if no adjustments were required to develop the satellite record /sarc). So far as I can determine, the mere fact of adjustments is sufficient to prove the adjustments are invalid, in their mind. In contrast, in their mind the (particularly) UAH satellite record is always considered accurate. Even though it has gone through many revisions to correct for detected error, at any given time these skeptics are confident that the current version of UAH is entirely accurate, and proves the surface record to be fundamentally flawed. They are, as the saying goes, always certain, but often wrong.
  30. Dikran Marsupial at 21:08 PM on 3 September 2012
    Models are unreliable
    opd68 Rather than validate against a single dataset it is better to compare with a range of datasets as this helps to account for the uncertainty in estimating the actual global mean temperature (i.e. none of the products are the gold standard, the differences between them generally reflect genuine uncertainties or differences in scientific opinion in the way the direct observations should be adjusted to cater for known biases and averaged).
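Dikran's point about validating against a range of datasets can be sketched as follows. This is a minimal illustration only: the three anomaly series are synthetic stand-ins, since a real comparison would load the actual HadCRUT, NCDC and GISTEMP products, and the model trend value is assumed.

```python
import numpy as np

# Minimal sketch of validating a modelled trend against several
# observational datasets. The three anomaly series below are synthetic
# stand-ins; a real comparison would use HadCRUT, NCDC and GISTEMP data.
rng = np.random.default_rng(0)
years = np.arange(1980, 2011)
base = 0.017 * (years - years[0])  # underlying warming signal, deg C
datasets = {
    name: base + rng.normal(0.0, 0.08, years.size)
    for name in ("hadcrut", "ncdc", "gistemp")
}

def decadal_trend(y, t):
    """Least-squares trend of series y over times t, in deg C per decade."""
    return np.polyfit(t, y, 1)[0] * 10.0

obs_trends = {name: decadal_trend(y, years) for name, y in datasets.items()}
model_trend = 0.17  # assumed model value, for illustration only

# A crude consistency check: the modelled trend should fall within the
# spread of the observational estimates rather than match any single one.
lo, hi = min(obs_trends.values()), max(obs_trends.values())
consistent = lo <= model_trend <= hi
print(obs_trends, consistent)
```

None of the products is treated as a gold standard here; the spread across them stands in for the observational uncertainty Dikran mentions.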
  31. Models are unreliable
My initial comment on here and firstly thanks to the site for a well-moderated and open forum. I am a hydrologist (Engineering and Science degrees) with a corresponding professional interest in understanding the basics (in comparison to GCMs, etc) of climate and potential changes therein. My main area of work is in the strategic planning of water supply for urban centres and understanding risk in terms of security of supply, scheduled augmentation and drought response. I have also spent the past 20 years developing both my scientific understanding of the hydrologic cycle as well as modelling techniques that appropriately capture that understanding and underpinning science. Having come in late on this post I have a series of key questions that I need in order to place some boundaries and clarity on the subject. But I'll limit myself to the first and (in my mind) most important. A fundamental question in all this debate is whether global mean temperature is increasing. This has meant we need some form of predictive model in which we have sufficient confidence to simulate temperature changes over time, under changing conditions, to an appropriate level of uncertainty. So, my first question that I'd appreciate some feedback from Posters is: Q: Is there a commonly accepted (from all sides of the debate) dataset or datasets that the predictive models are being calibrated/validated against? Also happy to be corrected on any specific terminology (e.g. GMT).
  32. Arctic Sea Ice Extent: We're gonna need a bigger graph
    Thanks, Andy. Reposted it to the SkS FB page and embedded in the OP above; attribution to you.
  33. Arctic Sea Ice Extent: We're gonna need a bigger graph
    The canary finally fell off its perch. Here's an updated version of my raytraced PIOMAS visualization: http://www.youtube.com/watch?v=bNkyJ7eHHhQ I'm still working on the death spiral version. Takes hours and hours of scripting!
  34. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    In reply to geoffchambers at 16:17 PM on 1 September, 2012 the editor wrote: [DB] References to stolen intellectual property, statements about religions & ideology and general off-topic hypothesizing snipped. --------------------------- For the lords sake DB, don't step all over the lead.
  35. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Tom Curtis #37 Well said. Your point about the two odd outliers has already been made by Manicbeancounter on his blog. Nice to see some agreement about scientific and methodological questions across the great divide. Moderator DB OK. What’s his email address?
    Moderator Response: [DB] At the bottom of every SkS page is the link to the Contact Us form.
  36. Sea Level Isn't Level: This Elastic Earth
    Beautifully presented and please do keep them coming. Complementary item can be found at http://www.abc.net.au/radionational/programs/scienceshow/the-andes3b-formation-and-movement-today/4175684 investigation into big mountain ranges, like Andes, projecting downwards as well as up, like icebergs, except that, from time to time, they lose large lumps into the mantle, causing the crust to flex, similar to above.
  37. AGU Fall Meeting sessions on social media, misinformation and uncertainty
A (hopefully) final comment on Lewandowski (in press): I have been looking through the survey results and noticed that 10 of the respondents have a significant probability of being produced by people attempting to scam the survey. I base this conclusion on their having reported absurdly low (<2) consensus percentages for at least one of the three categories. An additional response (#861 on the spreadsheet) represents an almost perfect "warmist" caricature of a "skeptic", scoring 1 for all global warming questions, and 4 for all free market and conspiracy theory questions. There may be wackos out there that believe every single conspiracy theory they have heard, but they are vanishingly few in number, and are unlikely to appear in a survey with such a small sample size. A second respondent (890) almost exactly mirrored respondent 861 except for giving a 3 for the Martin Luther King Jr assassination, and lower values for the scientific consensus questions. Again this response is almost certainly a scam. Combined, these respondents account for 2 of the strongly agree results in almost every conspiracy theory question; and the other potential scammers also have a noticeable number of strong agreements to conspiracy theories. For most conspiracy theory questions, "skeptics" only had two respondents that strongly agreed, the two scammed results. Given the low number of "skeptical" respondents overall, these two scammed responses significantly affect the results regarding conspiracy theory ideation. Indeed, given the dubious interpretation of weakly agreed responses (see previous post), this paper has no data worth interpreting with regard to conspiracy theory ideation. It is my strong opinion that the paper should have its publication delayed while undergoing a substantial rewrite.
The rewrite should indicate explicitly why the responses regarding conspiracy theory ideation are in fact worthless, and concentrate solely on the result regarding free market beliefs (which has a strong enough response to be salvageable). If this is not possible, it should simply be withdrawn.
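The screening rule described above (rejecting responses that report an absurdly low consensus percentage) can be sketched as follows. The field names and the two sample rows are invented for illustration, not taken from the actual survey spreadsheet.

```python
# Sketch of the screening rule described above: flag any respondent who
# reports an absurdly low (<2%) scientific consensus for at least one of
# the three consensus questions. Field names and the two sample rows are
# invented for illustration; the real spreadsheet has its own columns.
SUSPECT_THRESHOLD = 2.0

def is_suspect(response):
    """True if any reported consensus percentage falls below the threshold."""
    answers = (
        response["consensus_co2"],
        response["consensus_smoking"],
        response["consensus_hiv"],
    )
    return any(pct < SUSPECT_THRESHOLD for pct in answers)

responses = [
    {"id": 1, "consensus_co2": 1, "consensus_smoking": 1, "consensus_hiv": 1},
    {"id": 2, "consensus_co2": 95, "consensus_smoking": 98, "consensus_hiv": 99},
]
suspects = [r["id"] for r in responses if is_suspect(r)]
print(suspects)  # [1]
```

A rule like this only flags responses for inspection; deciding whether a flagged response is a genuine scam still requires the kind of case-by-case reading done in the comment above.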
  38. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    John Hartz #33 Why on earth should a question about the contents of this blog be addressed to John Cook in a private email? (-Snip-)
    Moderator Response:

    [DB] You have already received a public response from John Cook. Should you wish more detail, please submit an email to him. This is a forum founded and administered by him. Therefore questions of the nature you have been posting should more rightly be submitted to him in private correspondence.

    Continuance in this behavior now constitutes grandstanding and sloganeering, and will be moderated accordingly. FYI.

  39. Global Warming - A Health Warning
    EliRabett - You are right. Smog is a problem in Mexico City and so is ozone, more so than I thought.
  40. Arctic Sea Ice Extent: We're gonna need a bigger graph
    jeffgreen11 @13, it will interest you to know that natice also shows the extent with 20% sea ice, which is most definitely at record low values (3.25 million square kilometers).
  41. AGU Fall Meeting sessions on social media, misinformation and uncertainty
A further comment on the methodology of the paper: I was talking to my wife about the Lewandowsky paper yesterday. She noted two points in particular. First, the absence of a neutral (I don't know/I know nothing about it) option in the questions was a serious methodological flaw. This is particularly the case for the conspiracy theory questions, in which at least one of the conspiracy theories is obscure (IMO), and not inherently implausible:
    "CYOkla: The Oklahoma City bombers Timothy McVeigh and Terry Nichols did not act alone but rather received assistance from neo-Nazi groups."
I have never before heard of this conspiracy theory, and it is not inherently implausible that terrorists should receive aid from extremist political groups. Indeed, most terrorists have received such aid. As to whether McVeigh and Nichols did? I have no relevant information. If I had taken the survey, upon coming to this question I would have left it blank. That would be a perfectly rational, and the only honest response. My doing so would have excluded my responses from the sample. The consequence of this lack of a neutral response combined with excluding all results that do not complete all questions is to: 1) Bias the sample by excluding some people who are trying to complete the survey accurately; 2) Force some of those who do complete the survey into more definite responses than they actually hold. In my wife's opinion, this flaw alone is enough to make the survey scientifically worthless; and I trust her judgement on this issue. My wife further said that she would automatically reject a paper with Lewandowsky's title as being politically motivated. In the social sciences, politically motivated papers are a major problem, and generate an excess of background noise and confusion. Part of my wife's response to that is simply to ignore as worthless clearly politically motivated papers. I can see her point, but disagree with the response. Data is data, and so long as you are clear as to how it was obtained, and the results obtained, can be interpreted without consideration of the views expressed in the paper. What is more, in this case the views expressed in the paper are sober analysis. The title makes the paper seem very much worse than it is. Nevertheless, given the title, and given the (several) methodological flaws discussed in this post and in my post @12, this has confirmed my opinion that this paper is an "own goal" for opponents of "skepticism" about AGW.
It contributes nothing of value scientifically to understanding AGW "skepticism", and its title is a disaster.
  42. AGU Fall Meeting sessions on social media, misinformation and uncertainty
I agree with Tom about the title. To me it's unnecessarily combative. But the fact that it highlights the "conspiracy" results over the "free-market" results is probably because Dr. Lewandowsky sees that result as being the "new" finding, while the free-market result is already well established. Other than that, people need to pay special attention to what the paper actually says about its target audience and how the correlation goes. Hint: it is not that skepticism about climate science is all conspiratorial. See Tom's post.
  43. Matt Ridley - Wired for Lukewarm Catastrophe
    Chris@81 I agree that the Schuur and Abbott figures can be confusing (I got them wrong myself previously.) In one sentence they talk about "tonnes of carbon" and in the next explain that by "carbon" they mean "CO2 equivalents". What they wrote was:
The estimated carbon release from this degradation is 30 billion to 63 billion tonnes of carbon by 2040, reaching 232 billion to 380 billion tonnes by 2100 and 549 billion to 865 billion tonnes by 2300. These values, expressed in CO2 equivalents, combine the effect of carbon released as both CO2 and as CH4.
    My reading of the Matthews et al paper is that the linearity applies up to cumulative emissions of 2 Trillion tonnes of carbon. They wrote:
    Even in the extreme case of instantaneous pulse emissions, the temperature change per unit carbon emitted in the UVic ESCM is found to be constant to within 10% on timescales of between 20 and 1,000 years, and for cumulative emissions of up to 2 Tt C (see Supplementary Fig. 1).
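The unit distinction behind the confusion above (tonnes of carbon versus tonnes of CO2 equivalent) comes down to the 44/12 molecular-mass ratio. A small sketch:

```python
# Sketch of the carbon vs CO2-equivalent unit distinction: one tonne of
# carbon corresponds to 44/12 (about 3.67) tonnes of CO2, the ratio of
# the molecular mass of CO2 to the atomic mass of carbon.
C_TO_CO2 = 44.0 / 12.0

def carbon_to_co2(gigatonnes_carbon):
    """Convert gigatonnes of carbon to gigatonnes of CO2."""
    return gigatonnes_carbon * C_TO_CO2

# Note: the Schuur and Abbott figures quoted above are already stated in
# CO2 equivalents, so converting them again would double-count; this
# helper only illustrates the arithmetic behind the two units.
print(round(carbon_to_co2(30.0), 1))  # 110.0
```

This is exactly why mixing up the two units matters: the same number read in the wrong unit is off by a factor of nearly four.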
  44. Arctic Sea Ice Extent: We're gonna need a bigger graph
This was posted at Neven's blog: For those wondering about the NIC estimates (as can be seen here: http://nsidc.org/data/masie/), NIC produces operational ice analyses, focused on using many data sources of varying quality and quantity to detect as much ice as possible, even small concentrations. NSIDC’s passive microwave data may miss some low concentrations (it uses a 15% concentration cutoff), particularly during melt. So it’s not unusual for NIC/MASIE to show more ice, though it’s more than in other years because the low concentration ice is scattered over a much larger area. An important point is that NIC/MASIE, while picking up more ice, is produced via manual analysis and the data quality and quantity varies. So the product is not necessarily consistent, particularly from year-to-year. NSIDC’s product is all automated and consistently processed throughout the record. So there may be some bias, but the bias is consistent throughout the timeseries. This means that comparison of different years, trend values, and interannual variability are more accurate using NSIDC. Hope this info helps. Walt Meier NSIDC
  45. Arctic Sea Ice Extent: We're gonna need a bigger graph
Jeff, The NSIDC has posted at several locations on the web, including Realclimate and WUWT, that the IMS ice product includes all ice detected in its extent analysis. The NSIDC includes only areas with at least 15% ice. Therefore the NSIDC is always lower than IMS. Because of the way the data is analyzed, IMS is not comparable from year to year. For this reason it is not useful for long term analysis. It is intended for use by Navy ships for navigation. A lot of the extent IMS currently measures is less than 5% ice and is expected to melt out soon. There is much more low concentration ice this year than there was in 2007. The deniers like IMS since it is the last measure of the ice that is not lower than 2007. Scientists use 15% extent like IJIS and NSIDC because the data is collected and analyzed for the purpose of long term comparisons. Note: Cryosphere Today uses sea ice area and DMI uses 30% extent. It is best to only compare one group's graphs with their own graphs. PIOMAS measures volume which is another animal completely.
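The effect of the concentration cutoff described above can be sketched numerically. The grid values and cell area are invented for illustration; real products use satellite-derived concentration fields on much larger grids.

```python
import numpy as np

# Sketch of how "extent" depends on the concentration cutoff: a 15%
# threshold (NSIDC-style) ignores low-concentration ice that an
# any-detected-ice product (IMS-style) still counts. The grid values and
# cell area below are invented for illustration.
CELL_AREA_KM2 = 625.0  # hypothetical 25 km x 25 km grid cell

concentration = np.array([
    [0.00, 0.04, 0.10],
    [0.20, 0.60, 0.90],
    [0.03, 0.16, 0.50],
])

def extent_km2(conc, cutoff):
    """Total area of grid cells at or above the concentration cutoff."""
    return float(np.count_nonzero(conc >= cutoff) * CELL_AREA_KM2)

ims_style = extent_km2(concentration, cutoff=0.01)    # nearly any detected ice
nsidc_style = extent_km2(concentration, cutoff=0.15)  # standard 15% cutoff
print(ims_style, nsidc_style)
```

Because the looser cutoff admits every cell the stricter one does (and more), it can only give an equal or larger extent, which is why IMS sits above NSIDC, and why the gap widens in years with lots of scattered low-concentration ice.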
  46. Arctic Sea Ice Extent: We're gonna need a bigger graph
    #13 jeffgreen I am not an expert, but that particular chart at the link seems to be updated fortnightly, so it will not be updated again until September 8th. The last date updated is 26th August. Here is another chart from the same site that is updated weekly. http://www.natice.noaa.gov/ims/ Better to wait until well into September until any message can be taken from these particular graphs. I notice Anthony Watts made great play with this second chart - until it got updated.
  47. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    @geoffchambers #31: The bulk of the many questions that you have posted on the SkS comment threads should have been posed directly to John Cook via email. Please stop cluttering this comment thread with an endless stream of queries.
  48. AGU Fall Meeting sessions on social media, misinformation and uncertainty
    Geoff: Does anyone remember what the response was in comments here? Why not ask the expert blog science dumpster divers? Apparently a copy of the entire SkS database ca. 2010 is kept somewhere as an object of obsessive and unhealthy fascination, so other than maintaining a histrionic posture why ask here? John: Is this wild-goose chase really the best and highest use of your time and energy? If laughter and fun are our highest and best purpose then we should sweep off our hats and bow low in recognition of Geoff and Crew's superior efforts.
  49. AGU Fall Meeting sessions on social media, misinformation and uncertainty
John Hartz (-Snip-) Lewandowsky gave the names of eight blogs as the source of his data. At two of them there is no evidence of the survey having been mentioned. One is totally inactive. The other is the highly active and influential SkepticalScience. John Cook says the post about the survey was deleted after the survey was completed. (Why?) He gave the wrong year, then corrected it on prompting, but still with no precision as to the month. (Why not?) A little more precision would help us to confirm his statement with the Wayback machine. Now we learn from a comment at http://rankexploits.com/musings/2012/multiple-ips-hide-my-ass-and-the-lewandowsky-survey/ that Kwicksurveys, the free service which conducted the survey, was hacked and all their data lost. This happened in June, just weeks after Lewandowsky had put up a second questionnaire aimed at deniers - which was publicised here and at Watchingthedeniers. I repeat my request. Does anyone here at SkepticalScience remember the survey in August 2010? Or not?
    Response: [DB] Inflammatory tone snipped. [John Cook] I don't remember the month, presumably August or September is the ballpark.
  50. Arctic Sea Ice Extent: We're gonna need a bigger graph
http://www.natice.noaa.gov/products/ice_extent_graphs/arctic_weekly_ice_extent.html Hoping to understand why this is different from what is in the mainstream information being shown to us. From a conversation on another site there were differences on sea ice figures. Natice sea ice extent does not have the record broken by its own graphs. I put in years 2000 to start and 2012 to end. According to NATICE 2012 has not broken the 2007 low ice extent record. How does NATICE differ in its data from PIOMAS and others, and why?

© Copyright 2024 John Cook