
Comments 1651 to 1700:

  1. A Frank Discussion About the Propagation of Measurement Uncertainty

    ICU: I mentioned the randomness issue briefly above, in the blog post and in comments. He makes the argument for non-randomness saying he finds a non-normal distribution. I point out in comment 46 that he is wrong: you can have non-normal, but still random, distributions.

    And he keeps using equations to combine uncertainties that require independence of the variances, which is contrary to his claim of non-randomness. And he uses a multiplier of 1.96 to get 2-sigma from 1-sigma, even though he says that distributions are non-normal.

    In the NoTricksZone post, he argues that covariance is not relevant (in fact, that statistical equations combining variances are not relevant at all) because uncertainty is different from statistics. So he dismisses the equations I presented in the OP - in spite of the fact that they are listed in Wikipedia's "Propagation of uncertainty" article. Yet covariance is the key concept that needs to be included when things are not random.

    In comment 21, bdgwx provided a link to JCGM 100:2008, which is basically the same as the 1995 ISO Guide to the Expression of Uncertainty in Measurement. Section 5.2 talks about Correlated input quantities. If you look at section 5.2.1 on page 33, it says (emphasis added):

    Equation (10) and those derived from it such as Equations (11a) and (12) are valid only if the input quantities X are independent or uncorrelated (the random variables, not the physical quantities that are assumed to be invariants — see 4.1.1, Note 1). If some of the Xi are significantly correlated, the correlations must be taken into account.

    The internal inconsistencies in Pat Frank's work are numerous and critical. According to him, it's not random, but you don't need to use the equations that are designed for correlated inputs. It's not normally-distributed, but you can still get to 95% confidence levels by using the proportions from a normal curve and the 1-sigma/2-sigma ratio.

    He's picking equations and terms from a buffet based on taste, having no idea how any of the dishes are made,  and claiming that he can cook better than anyone else.

    In comments 49 and 50, I show data from real-world measurements comparing three equivalent temperature sensors, and how you need to properly decompose the Root Mean Square Error into the Mean Bias Error and the standard deviation of the differences between pairs in order to evaluate the uncertainty. And how accounting for the Mean Bias Error across sensors (by using anomalies) shows that all three sensors agree on how different the current temperature is from the monthly mean.
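    To spell out the decomposition being referred to: for the set of paired differences d_i between two sensors, the Root Mean Square Error splits exactly into the Mean Bias Error and the (population) standard deviation of the differences:

    $$\mathrm{RMSE}^2=\frac{1}{N}\sum_{i=1}^{N} d_i^2=\underbrace{\Big(\frac{1}{N}\sum_{i=1}^{N} d_i\Big)^2}_{\mathrm{MBE}^2}+\underbrace{\frac{1}{N}\sum_{i=1}^{N}\big(d_i-\mathrm{MBE}\big)^2}_{s_d^2}$$

    So an RMSE dominated by the MBE term points to systematic error, while an RMSE dominated by the s_d term points to random scatter.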

    Nowhere in Pat Frank's paper does he discuss Mean Bias Error or any other specific form of systematic error. Every single one of his equations ignores the systematic error he claims is a key point.

  2. A Frank Discussion About the Propagation of Measurement Uncertainty

    OK, so systematic errors. What specific systematic errors are being claimed for temperature measurements, and what are their magnitudes?

  3. A Frank Discussion About the Propagation of Measurement Uncertainty

    Bob,


    As I see it, Frank's basic argument is that measurement errors are not random. I think it is better to address the major arguments of a position than to get bogged down in the minutiae of an argument.


    I would argue that if, in fact, measurement errors are not random, then the observational sciences are royally fucked! Meaning: show me explicit examples where the measurement errors are not random, and how far those deviate from an assumption of randomness. Actual measurements, not just some made-up, just-so math concoctions.

  4. Climate Confusion

    Markp @ 27:

    Bluntly, when you dismiss information you don't like with statements such as this one:

    I've spoken to established research scientists who laugh off the climate modelers who so cheerfully say "temperatures will just stop rising" if net zero is achieved. These are people I trust. They've had long careers doing real science, not short ones playing with computers.

    ...then it is hard to take you seriously.

    If you want me to actually believe that such "established research scientists" exist, then you will have to point to a credible source of the statements they make and their arguments against the "established research scientists" that have studied and modelled carbon cycles for many years.

    And I'll see your "working in climate science for a couple of years now" and raise you "studying and working in climate science for 45 years now". Only five of those years were spent dealing with forest carbon cycles and their relationships with climate, though. And I've only been "playing with computers" for 45 years, too.

  5. A Frank Discussion About the Propagation of Measurement Uncertainty

    ICU @52 , methinks you are asking Bob L  to act like Sisyphus.

    Sisyphus had a large stone to move to the top of the mountain ~ but Dr Frank's ideas are a much smaller stone (though obviously much denser! ) . . . and the small dense Frankenstein  [sorry, the pun was irresistible]  is determined to keep rolling itself back down into the gutter  [in this case, the gutter press, aka NoTricksZone ]  at every opportunity.

    As Gavin Cawley says: Frank's methodology is "argument-by-attrition".

    Wise psychiatrists know that sometimes you just have to walk away.

    On the other hand, Bob may simply enjoy a bit of jousting, for the fun & mental exercise.

  6. A Frank Discussion About the Propagation of Measurement Uncertainty

    Hi, ICU/Everett.

    I am aware of the PubPeer discussion of this recent Pat Frank paper. I have started participating in that discussion as Camponotus mus - a pseudonym assigned by PubPeer. I saw that link to NoTricksZone, and have a short response waiting in moderation at PubPeer. (As a new, anonymous user at PubPeer, I understand that my comments will always go through moderation, at least for a while.)

    I have only looked quickly through that NoTricksZone post, and it seems that Pat Frank is mostly just asserting he is right and the whole world is wrong. Part of my PubPeer response is:

    I see that you are arguing that averaging is not the same as weighting. That is quite an amazing claim, as the average of two numbers is algebraically equivalent to weighting each number by 1/2. You do understand that (A+B)/2 is identical to A/2 + B/2, and that this can be re-written as (1/2)*A + (1/2)*B? If you think this is not correct, then I would add basic algebra to the topics that you do not understand.

    He also seems to claim that uncertainty can't use the rules of statistics, such as the covariance term I mention in the OP. This is certainly a most bizarre idea, as the GUM makes extensive use of statistical models in demonstrating concepts of uncertainty.

    Overall, Pat Frank's response at NoTricksZone looks like he is in hagfish mode, as described in DiagramMonkey's blog post. I do not think it would be productive to try to refute it here unless Pat Frank comes here and makes his arguments directly. A read-this-blog/read-that-blog cycle will not be at all productive.

    Of course, if Pat Frank does come here, he would be expected to follow the Comments Policy, just like anyone else.

  7. A Frank Discussion About the Propagation of Measurement Uncertainty

    Hello Bob,

    Everett F Sargent or Francis F Sargent of ATTP infamy here.  I just noticed a Crank reply in the most recent PubPeer article from Frank the Crank ...

    https://notrickszone.com/2023/08/24/dr-patrick-franks-refutation-of-a-sks-critique-attempt-loblaws-24-mistakes-exposed/

    Now I know that this Crank is wrong, although I am not able to go through the necessary math right now. Your thoughts on this more recent rebuttal to your post here would be greatly appreciated. Thanks in advance for your thoughts and/or equations. I also think an abstract of sorts would be most useful, covering this crank's misunderstandings from the 2019 and 2023 (this article) papers.

    Moderator Response:

    [BL] Since this is your first post here, please note that our Comments Policy does ask that you keep the tone civil. The parts that I have flagged are getting a bit too close to the line.

  8. Climate Confusion

    Markp... "We've had the IPCC for 35 years and not much to show for it..."

    Again here, you're seemingly making the claim that nothing has happened when the opposite is true. A great deal has been accomplished. A great deal more must be accomplished. You can certainly make the claim that the changes aren't happening fast enough, but you can't rationally claim nothing has happened.

  9. Climate Confusion

    Markp... It sounds to me like you're seeing this as a problem that is solved through an individual's choices in lifestyle, and I think that's misdirecting where the problem is solved. 

    Most climate solutions are a supply side issue. The supply of energy, the supply of low carbon emitting building materials, etc. Individuals readily embrace low carbon solutions when they are available in the marketplace. But when low carbon alternatives aren't available individuals generally have no choice but to use carbon emitting products and services.

    Laying any blame at the feet of individual choice is essentially letting the fossil fuels industry off the hook for their responsibilities to humanity and a sustainable environment.

  10. Climate Confusion

    I am not a scientist, but I've been working in climate science for a couple of years now.

    I wouldn't say I dismiss models, I'm just careful with them. Perhaps "garbage in, garbage out" is more common an expression to describe models in the financial world than in the natural sciences, but even so, when it comes to climate modelers I'm merely echoing sentiments from those like James Hansen, who clearly value models but prefer using real data, real-world, whenever possible. I've spoken to established research scientists who laugh off the climate modelers who so cheerfully say "temperatures will just stop rising" if net zero is achieved. These are people I trust. They've had long careers doing real science, not short ones playing with computers.

    And Rob, believe me, I do live a low-carbon life myself but I know very few others know or care about the need for that. The problem is, the accepted wisdom for many years now has been to not alarm people with GW talk, and churn out messages with hope and optimism, so what has happened is that people pretty much think "the experts" are taking care of things and there's nothing to really worry about. The person on the street has no clue how bad things are or how soon things will get very bad. No wonder they don't change their lifestyles further than maybe switching to a new sexy Tesla and eating vegan once a week.

    I'm also involved in the renewable energy business and that's definitely been an excellent development but again, people are being misled into thinking that's all we need to do, but it's not going to happen. Have a look at Simon Michaux's work.

    We've had the IPCC for 35 years and not much to show for it, and anyone who doesn't believe that is either simply ignorant or is fooling themselves. The IPCC plays politics with science. We need to take their estimates and double or triple them to achieve results close to reality.  

    All I can say regarding decarbonizing is that the money and the power is dead set against it because they're only concerned about today's profits, but as more and more of the world burns up, as food insecurity gets worse and water scarcity as well, their hands will be forced. The question is: will there be enough time then? Will the "laser focus" that might (might) be squeezed out of people when their backs are against the wall be too late?

    What is required, at the very least since people aren't acting like adults, are laws limiting waste in all industries, in all areas of government, and in our private lives, but is that coming? Are laws restricting unnecessary consumption coming? Laws banning the worst of the world's luxury goods would put a big dent in the fattest carbon emitters and send a message to all those people idolizing such frivolous living, but is that coming? We could put tight curbs on new car sales and enforce drastic changes to allowable car specifications (reduction of size/weight/horsepower). We need governments to enforce "work at home" for all industries and jobs where that's feasible. We need public service messages telling people to stop trying to "live large," we need celebrities to publicly downsize their lifestyles, television shows to stop glamorizing the selfish life... the list is very long.

    So much needs to be done and could be done, but is it? Scientists gluing themselves to bridges is what we need; it's just about the only sensible thing to do this late in the game to make people wake up, but instead of getting the message, people scorn them and governments lock them up. We cannot blame these protesting scientists. They've been asked to do something they were never trained to do, and for which no infrastructure exists, and to make matters worse, they're being pushed into solving this emergency by adopting a profit-making model that flies in the face of the spirit of science.

    The way I see it, we've got about one decade of "somewhat normal" life left before the food insecurity hits the privileged classes hard, and at that point, the societal collapse that has already begun is going to be much more life-threatening than heat. Net zero goals for 2050 may no longer matter when everything begins to fall apart.

    As for the mirror concept, all the details are being researched, but we're not talking about traditional glass mirrors but rather "specular reflectors" such as what you get combining PET with aluminum for a cheap, thin, durable, flexible mirror-like tool. These are already in use for local heat adaptation, but going from adaptation to global heat mitigation is just a matter of scaling up, and there are plenty of resources for it, unlike so many of the other ideas floating around. Plastics with no metal at all are also being developed that could be used. The whole concept is a simple evolution of white paint, which is not feasible for many reasons, including the fact that it gets moldy and needs regular attention and re-surfacing.

  11. Climate Confusion

    Markp @23...

    "The fact is, when we talk about hypothetically achieving no more human emissions, we're talking about a time in the future that is not tomorrow or next year or next decade, but at the very least, several decades, at least going by the extremely lazy response by humanity thus far. Correct?"

    I'd say this is a faulty assumption. It is most certainly a Herculean task that is required, made even more difficult by the need to pull ever more humans out of poverty. But when you look at the changes that are occurring, particularly in how quickly renewables are now getting deployed, I think there's a decent chance we'll get to net zero around 2050 and full zero in the decades following that. 

    It must be disheartening for all the scientists and engineers who have been working on renewable energy for decades, and who have now created methods of generating electricity that beat the cost of fossil fuels, to constantly hear people make statements like "the extremely lazy response by humanity thus far."

    But perhaps they're too busy to take notice or even care what others say.

    It's worth noting, we are definitely going to see huge global challenges in the coming decades as the planet likely warms another degree Celsius. So, perhaps it's important to put yourself in the mind of someone living in 2050 with far worse climate impacts each and every year. I think all of humanity is going to be laser focused on getting the last vestiges of carbon emissions eliminated.

  12. Climate Confusion

    Markp @23 , to respond is simple.  Name the person you are addressing, and preferably add in the # post number, for greater precision. Occasionally that # goes "wrong" if the Moderator has altered some post numbers ~ but usually posts go onto the thread in the chronological order they were received.

    Again, I am not sure why you are bothered by "models and scenarios".   As you say, the social/political/technological response to CO2-derived global warming is rather tardier than ideal, in slowing and eventually halting the current rapid warming.   Yes, decades.

    Simply apply common sense, and remember the first step needed is the reducing of human-caused CO2 emissions.   Eventually, the CO2 level stops rising (and if you are curious, you can observe how much more warming occurs after that . . . so no actual need for "models").   Then you can observe the speed of subsequent CO2 level fall . . . and take further high-tech action if that seems warranted, in order to accelerate the CO2 decline.  If not going the high-tech route immediately to begin with!

    Probably best to ensure the CO2 does not drop below about 350ppm  (since eventually the natural Milankovitch-cycle cooling will start to show).   That "natural cooling" has been estimated to become problematic in roughly 15,000 years . . . so not an urgent problem!   Plus humans will then have the option of warming the planet by burning small amounts of coal (assuming we have been wise enough to keep a goodly amount of coal available for such future need . . . although by that stage presumably we will have the option of heating limestone per fusion-powered electricity).

    Markp, your idea of mirrors (ground-based, not space-based) seems reasonable in theory . . . but what about the practicalities?  Please go ahead and "show your workings" for areas needed / desirable locations / dollar cost per sq. meter / CO2-cost of building & installing mirrors / and so on.

    Remember the old axiom : Politics is the art of the possible.

    Stop worrying yourself about models ~ leave the models to the scientists.  Your personal responsibility (to yourself and others) means taking practical action with what you can do now .

  13. Climate Confusion

    Markp @ 23:

    Ah, I see. You simply dismiss models. It must be really difficult for you to do any science with no models of any sort. Since science is based pretty much entirely on models (descriptive, statistical, mathematical, etc.), dismissing models is pretty much saying you dismiss science writ large.

    ...but then, the people doing the science (with models) and presenting results, you dismiss as making "assertions".

    I'm glad you "asserted" this viewpoint.

  14. Climate Confusion

    Not sure how to respond to comments to my comment... There is no "reply" etc. featured in those comments, so I'll just say to Eclectic that I'm sorry you find my last paragraph unclear, and to Bob Loblaw and Rob Honeycutt: I'm clear on the difference between different types of "zero" CO2 scenarios, whether they imply constant concentrations or not.

    And Zeke's "explainer" is nice but is only a case in point: too many people simply assert that under a complete end to human emissions scenario, whereby natural uptake through oceans and trees continues drawing down CO2, heating will stop. Almost immediately. And they seem to base that belief purely on what has been modelled. And as everyone should know about models: garbage in, garbage out. The models don't reflect reality, though they try. Their inputs aren't complete, but merely partial. For example, ZECMIP is only CO2.

    The fact is, when we talk about hypothetically achieving no more human emissions, we're talking about a time in the future that is not tomorrow or next year or next decade, but at the very least, several decades, at least going by the extremely lazy response by humanity thus far. Correct? So by that time in the distant future, as emissions have continued, and tipping points have tipped, many things will have likely changed that our current thinking (or modeling) does not account for. So it is a bit silly to claim that temperatures will just stop IF/WHEN/? we ever manage to end human emissions, or "net" end them through the net zero concept. We place far too much reliance on models here, or rather I should say, those who are cheerleaders for net zero do.

    So to Eclectic, I'm not proposing an alternative to reducing emissions. We need to reduce emissions. But that won't be enough. We also need to try the best form of SRM we can manage, which in my view is land-based mirrors, because the tech is here now, it's low tech, non-toxic, completely scalable, does not block sunlight from reaching our flora and fauna, and has an immediate effect on warming, unlike all the downstream GHG management methods.  

  15. At a glance - How do we know more CO2 is causing warming?

    walschuler @3 - Sorry, we didn't yet have time to take a close(r) look. We'll be in touch directly once we can pick it up again.

  16. At a glance - How do we know more CO2 is causing warming?

    I am wondering if the information I supplied separately, including the pages from de Saussure's work, has been posted somewhere...

  17. A Frank Discussion About the Propagation of Measurement Uncertainty

    [ This will be post #51 , and thus a new page in this thread. ]

    Whew ~ I have just finished reading the PubPeer thread of 271 posts.

    Which features the Star (Pat Frank) engaging with Paul Pukite, Ken Rice, Joshua Halpern, and Gavin Cawley, in 2019.

    Towards the end, Cawley says to Frank : "you are impervious to criticism"  and "I have no interest in argument-by-attrition".

    And Pukite says (to Frank) "Perhaps worse than being wrong, your paper is just not that interesting and may explain why it was rejected so many times."

    Earlier, Rice says (to Frank) : "... you've done a simplistic calculation using a simple model and produced a result that doesn't represent anything at all."

    And that seems to be the heart of it ~ Frank has fired an arrow which has missed the target completely . . . and he spends years wrangling uninsightfully with almost everyone, and insists over & over that they  "do not understand physical science".   Frank contra mundum.

    I won't tax the reader by going into details.  The statistics of it all ~ are much less interesting than the personality traits of Dr Frank.  Perhaps the most appropriate statistic would be found in the Diagnostic and Statistical Manual  [ DSM, Fifth Edition ].

  18. A Frank Discussion About the Propagation of Measurement Uncertainty

    ...and, to put data where my mouth is....

    I claimed that using anomalies (expressing each temperature as a difference from its monthly mean) would largely correct for systematic error in the temperature measurements. Here, repeated from comment 49, is the graph of error statistics using the original data, as-measured.

    [Figure: Error statistics - three temperature sensors]

    ...and if we calculate monthly means for each individual sensor, subtract that monthly mean from each individual temperature in the month, and then do the statistics comparing each pair of sensors (1-2, 1-3, and 2-3), here is the equivalent graph (same scale).

    [Figure: Error statistics - three temperature anomalies]

    Lo and behold, the MBE has been reduced essentially to zero - all within the range -0.008 to +0.008C. Less than one one-hundredth of a degree. With MBE essentially zero, the RMSE and standard deviation are essentially the same. The RMSE is almost always <0.05C - considerably better than the stated accuracy of the temperature sensors, and considerably smaller than if we leave the MBE in.

    The precision of the sensors (small standard deviation) can detect changes that are smaller than the accuracy (Mean Bias Error).

    Which is one of the reasons why global temperature trends are analyzed using temperature anomalies.
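    For anyone who wants to see the mechanics, here is a minimal sketch of the calculation described above, using synthetic numbers rather than the real Stevenson-screen data (the 0.13 C bias and 0.03 C noise level are assumptions for illustration only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the real data: one year of daily readings from two
    # sensors. Sensor 2 reads a constant 0.13 C low (a systematic error); both
    # have a little random noise. The bias and noise values are made up.
    months = np.repeat(np.arange(1, 13), 30)      # twelve 30-day "months"
    true_t = 10 + 15 * np.sin(2 * np.pi * np.arange(360) / 360)
    s1 = true_t + rng.normal(0, 0.03, true_t.size)
    s2 = true_t - 0.13 + rng.normal(0, 0.03, true_t.size)

    def stats(a, b):
        d = a - b
        return d.mean(), d.std(), np.sqrt((d ** 2).mean())   # MBE, SD, RMSE

    def monthly_anomalies(t):
        out = np.empty_like(t)
        for m in np.unique(months):
            sel = months == m
            out[sel] = t[sel] - t[sel].mean()     # subtract that sensor's own monthly mean
        return out

    print("raw readings:      MBE=%+.3f  SD=%.3f  RMSE=%.3f" % stats(s1, s2))
    print("monthly anomalies: MBE=%+.3f  SD=%.3f  RMSE=%.3f"
          % stats(monthly_anomalies(s1), monthly_anomalies(s2)))
    ```

    With the per-sensor monthly means subtracted, the MBE collapses to essentially zero and the RMSE drops to the standard deviation of the noise - the same behaviour shown in the two figures above.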

  19. Ice age predicted in the 70s

    Michael... The other one that has confused me a couple of times is when a post becomes the first on a new page. Even though you hit submit, and it starts a new page, the page numbers don't update. You have to reload the page to see the new page number. I think that often leads our "contrarian" friends here to jump to the conclusion that they're being stifled in some way.

  20. A Frank Discussion About the Propagation of Measurement Uncertainty

    I will not try to say "one last point" - perhaps "one additional point".

    The figure below is based on one year's worth of one-minute temperature data taken in an operational Stevenson Screen, with three temperature sensors (same make/model).

    The graph shows the three error statistics mentioned in the OP: Root Mean Square Error (RMSE), Mean Bias Error (MBE), and the standard deviation (Std). These error statistics compare each pair of sensors: 1 to 2, 1 to 3, and 2 to 3.

    The three sensors generally compare within +/-0.1C - well within manufacturer's specifications. Sensors 2 and 3 show an almost constant offset between 0.03C and 0.05C (MBE). Sensor 1 has a more seasonal component, so comparing it to sensors 2 or 3 shows a MBE that varies roughly from +0.1C in winter (colder temperatures) to -0.1C in summer (warmer temperatures).

    The RMSE is not substantially larger than the MBE, and the standard deviation of the differences is less than 0.05C in all cases.

    This confirms that each individual sensor exhibits mostly systematic error, not random error.

    [Figure: Error statistics - three temperature sensors]

    We can also approach this by looking at how the RMSE statistic changes when we average the data over longer periods of time. The following figure shows the RMSE for these three sensor pairings, for two averaging periods: the original 1-minute average in the raw data, and an hourly average (sixty 1-minute readings).

    We see that the increased averaging has had almost no effect on the RMSE. This is exactly what we expect when the differences between two sensors have little random variation. If the two sensors disagree by 0.1C at the start of the hour, they will probably disagree by very close to 0.1C throughout the hour.

    [Figure: RMSE - three temperature sensors]

    As mentioned by bdgwx in comment 47, when you collect a large number of sensors across a network (or the globe), then these differences that are systematic on a 1:1 comparison become mostly random globally.
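    Here is a small synthetic illustration of that point (not the real data; the 0.1 C offset and noise level are assumed purely for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_minutes = 60 * 24 * 30                      # one month of 1-minute differences

    def rmse(d):
        return np.sqrt((d ** 2).mean())

    def hourly(d):
        return d.reshape(-1, 60).mean(axis=1)     # average each block of 60 minutes

    d_systematic = np.full(n_minutes, 0.1)        # constant 0.1 C offset between sensors
    d_random = rng.normal(0, 0.1, n_minutes)      # random differences with the same RMSE

    print("systematic: 1-min RMSE=%.3f  hourly RMSE=%.3f"
          % (rmse(d_systematic), rmse(hourly(d_systematic))))
    print("random:     1-min RMSE=%.3f  hourly RMSE=%.3f"
          % (rmse(d_random), rmse(hourly(d_random))))
    ```

    The systematic case mimics what the real sensors show: hourly averaging leaves the RMSE unchanged. A purely random difference would have shrunk by roughly 1/sqrt(60).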

  21. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    Davz @4 : you are wrong.  That paper does not support your claim of showing "far fewer not more fires over a long period of time".  Please read through the paper, and with particular attention to the last paragraph.

    The paper is from 2016 and includes mention of "recent" study decades of up to 2012 and up to 2015  ( eight years ago ).   The paper was very vague about "areas burned versus fire intensity" [unquote].

    The authors also said:  "We do not question that the fire season length and area burned has increased in some regions over past decades" [unquote].   Again, no quantification.   And you, Davz, have the advantage of knowing something of the past eight years of global fire activity ~ unlike the authors.

    They also mentioned (in an unquantified manner) the other factors of "increased fire prevention, detection and fire-fighting efficiency, abandonment of slash-and-burn cultivation in some areas and permanent agricultural practice in others" .

    And the authors commenced with:  "Charcoal records in sediments and isotope-ratio in ice cores suggest that global biomass burning during the past century has been lower than at any time in the past 2000 years."    Davz , this is very vague unquantified stuff  ~ indeed, the paper is little more than a discussion essay.

    The title is grand, though.  "Global trends in wildfire and its impacts: perceptions versus realities in a changing world".   But the paper itself is so vague as to be almost useless.

    It is certainly not qualifying as "Counter-Propaganda"  ~ if that was what you were intending?

    Moderator Response:

    [DB] Link added to the OP to the new rapid attribution paper in question, Barnes et al 2023.

  22. Ice age predicted in the 70s

    Don Williamson at 142:

    The software at SkS automatically logs users out after a period of time.  If you spend too long typing out a comment (for example while you are finding relevant links), you get logged out. You cannot tell that you have been logged out.  When you hit submit your comment vanishes.  Vanished posts cannot be recovered.

    At SkS all comments are posted immediately without moderation.  If your comment does not appear immediately then you posted after you were logged out.

    Long-time users copy their posts before submitting, or type their posts in Word and then copy them into SkS. It is frustrating to have a post with a lot of time-consuming links vanish.

  23. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    Davz @ 4:

    That specific paper was discussed recently on the "How human-caused global warming worsens wildfire" thread, starting with this comment.

    Short version: the paper has serious weaknesses when using it to make the claim you are making. Reducing fires in savanna and grassland in Africa does not help people living in areas where increases in forest areas burned are affecting livelihoods.

    In spite of "fewer fires" in Canada this year, the area burning - and the damage and cost to the Canadian economy and people's lives - has far exceeded historical records.

    Which would you rather have? Five grease fires in a year while cooking dinner that were easily put out, or one house fire that burns your entire house down? After all, five is worse than one, isn't it? (According to your logic).

    You only see this as "propaganda" because you don't like the message.

  24. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    To put this "science" into perspective, I would urge you to read this paper on wildfires. It will show far fewer, not more, fires over a long period of time. Yet more sensational headlines that can only be seen as propaganda.

    https://royalsocietypublishing.org/doi/10.1098/rstb.2015.0345

    Moderator Response:

    [BL] Link activated.

    The web software here does not automatically create links. You can do this when posting a comment by selecting the "insert" tab, selecting the text you want to use for the link, and clicking on the icon that looks like a chain link. Add the URL in the dialog box.

  25. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    And a typo on my line 2, too!!

  26. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    Worrying.

    Thank you for the info.

    There is a typo on line 5. 15M Ha is 37.05M acres.   

  27. A Frank Discussion About the Propagation of Measurement Uncertainty

    Yes, bdgwx, that is a good point. The "many stations make for randomness" point is very similar to the "selling many sensors makes the errors random, even though individual sensors have systematic errors" point.

    The use of anomalies does a lot to eliminate fixed errors, and for any individual sensor, the "fixed" error will probably be slightly dependent on the temperature (i.e., not the same at -20C as it is at +25C). You can see this in the MANOBS chart (figure 10) in the OP. As temperatures vary seasonally, using the monthly average over 10-30 years to get a monthly anomaly for each individual month somewhat accounts for any temperature dependence in those errors.

    ...and then looking spatially for consistency tells us more.

    One way to look to see if the data are random is to average over longer and longer time periods and see if the RMSE values scale by 1/sqrt(N). If they do, then you are primarily looking at random data. If they scale "somewhat", then there is some systematic error. If they do not change at all, then all error is in the bias (MBE).

    ...which is highly unlikely, as you state.

    In terms of air temperature measurement, you also have the question of radiation shielding (Stevenson Screen or other methods), ventilation, and such. If these factors change, then systematic error will change - which is why researchers doing this properly love to know details on station changes.

    Again, it all comes down to knowing when you are dealing with systematic error or random error, and handling the data (and propagation of uncertainty) properly.

  28. A Frank Discussion About the Propagation of Measurement Uncertainty

    Another interesting aspect of the hypothetical ±0.2 C uncertainty is that while it may primarily represent a systematic component for an individual instrument (say a -0.13 C bias for instrument A), when you switch the context to the aggregation of many instruments that systematic component now presents itself as a random component, because instruments B, C, etc. would each have different biases.

    The GUM actually has a note about this concept in section E3.6.

    Benefit c) is highly advantageous because such categorization is frequently a source of confusion; an uncertainty component is not either “random” or “systematic”. Its nature is conditioned by the use made of the corresponding quantity, or more formally, by the context in which the quantity appears in the mathematical model that describes the measurement. Thus, when its corresponding quantity is used in a different context, a “random” component may become a “systematic” component, and vice versa.

    This is why when we aggregate temperature measurements spatially we get a lot of cancellation of those individual biases, resulting in an uncertainty of the average that at least somewhat scales with 1/sqrt(N). Obviously there will still be some correlation, so you won't get the full 1/sqrt(N) scaling effect, but you will get a significant part of it. This is in direct conflict with Pat Frank's claim that there is no reduction at all in the uncertainty of an average of temperatures. The only way you would not get any reduction in uncertainty is if each and every instrument had the exact same bias. Obviously that is infinitesimally unlikely, especially given the 10,000+ stations that most traditional datasets assimilate.
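    A quick numerical sketch of that cancellation (assuming, purely for illustration, that each instrument's fixed bias is an independent draw from a normal distribution with a 0.2 C standard deviation):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 10_000   # repeat the "build a network" experiment many times

    for n_stations in (1, 10, 100, 1000):
        # Each instrument gets its own fixed bias, drawn once from the batch spread.
        biases = rng.normal(0, 0.2, size=(n_trials, n_stations))
        spread = biases.mean(axis=1).std()        # spread of the network-average bias
        print(f"N={n_stations:4d}  network-average bias spread = {spread:.4f} C"
              f"   (0.2/sqrt(N) = {0.2 / np.sqrt(n_stations):.4f} C)")
    ```

    Shared error sources (a common shield design, a common calibration procedure) would correlate the biases and erode some of that 1/sqrt(N) reduction, which is the caveat noted above, but only identical biases in every instrument would eliminate it entirely.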


  29. A Frank Discussion About the Propagation of Measurement Uncertainty

    At the risk of becoming TLDR, I am going to follow up on something I said in comment #5:

    On page 18, in the last paragraph, [Pat Frank] makes the claim that "...the ship bucket and engine-intake measurement errors displayed non-normal distributions, inconsistent with random error."

    Here is a (pseudo) random sequence of 1000 values, generated in a spreadsheet, using a mean of 0.5 and a standard deviation of 0.15. (Due to random variation, the mean of this sample is 0.508, with a standard deviation of 0.147.)

    [Figure: Normal distribution random sequence of values]

    If you calculate the serial correlation (point 1 vs 2, point 2 vs 3, etc.) you get r = -0.018.

    Here is the histogram of the data. Looks pretty "normal" to me.

    [Figure: Normal distribution sample]

    Here is another sequence of values, fitting the same distribution (and with the same mean and standard deviation) as above:

    [Figure: Normal sorted sequence]

    How do I know the distribution, mean, and standard deviation are the same? I just took the sequence from the first figure and sorted the values. The fact that this sequence is a normally-distributed collection of values has nothing to do with whether the sequence is random or not. In this second case, the serial correlation coefficient is 0.99989. The sequence is obviously not random.

    Still not convinced? Let's take another sequence of values, generated as a uniform pseudo-random sequence ranging from 0 to 1, in the same spreadsheet:

    [Figure: Uniform random sequence]

    In this case, the mean is 0.4987, and the standard deviation is 0.292, but the distribution is clearly not normal. The serial correlation R value is -0.015. Here is the histogram. Not perfectly uniform, but this is a random sequence, so we don't expect every sequence to be perfect. It certainly is not normally-distributed.

    [Figure: Uniform distribution]

    Once again, if we sort that sequence, we will get exactly the same histogram for the distribution, and exactly the same mean and standard deviation. Here is the sorted sequence, with r = 0.999994:

    [Figure: Uniform sorted sequence]

    You can't tell if things are random by looking at the distribution of values.
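    Anyone who wants to reproduce this exercise can do it in a few lines (a sketch with freshly generated pseudo-random numbers, so the exact values will differ from the spreadsheet figures above):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(0.5, 0.15, 1000)     # normally distributed sample, in random order
    x_sorted = np.sort(x)               # same values: identical histogram, mean, and sd

    def lag1_corr(v):
        return np.corrcoef(v[:-1], v[1:])[0, 1]   # serial (lag-1) correlation

    print("mean = %.3f, sd = %.3f" % (x.mean(), x.std(ddof=1)))
    print("lag-1 correlation, original order: %+.3f" % lag1_corr(x))        # near 0: random
    print("lag-1 correlation, sorted order:   %+.5f" % lag1_corr(x_sorted)) # near 1: not random
    ```

    The distribution is identical either way; only the ordering, and therefore the serial correlation, tells you whether the sequence is random.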

    Don't listen to Pat Frank.

  30. Eastern Canada wildfires: Climate change doubled likelihood of ‘extreme fire weather’

    In this post, the link to Fire Weather Index (in the Fire Weather section) points to a European site.

    Details on the Canadian Forest Fire Weather Index System can be seen on this Natural Resources Canada web page. The system combines weather, fuel moisture, and fire behaviour indices into a rating of the danger of fires developing.

  31. A Frank Discussion About the Propagation of Measurement Uncertainty

    Getting back to the temperature question, what happens when a manufacturer states that the accuracy of a sensor they are selling is +/-0.2C? Does this mean that when you buy one, and try to measure a known temperature (an ice-water bath at 0C is a good fixed point), that your readings will vary by +/-0.2C from the correct value? No, it most likely will not.

    In all likelihood, the manufacturer's specification of +/-0.2C applies to a large collection of those temperature sensors. The first one might read 0.1C too high. The second might read 0.13C too low. And the third one might read 0.01C too high. And the fourth one might have no error, etc.

    If you bought sensor #2, it will have a fixed error of -0.13C. It will not show random errors in the range +/-0.2C - it has a Mean Bias Error (as described in the OP). When you take a long sequence of readings, they will all be 0.13C too low.

    • You may not know that your sensor has an error of -0.13C, so your uncertainty in the absolute temperature falls in the +/-0.2C range, but once you bought the sensor, your selection from that +/-0.2C range is complete and fixed at the (unknown) value of -0.13C.
    • You do not propagate this fixed -0.13C error through multiple measurements by using the +/-0.2C uncertainty in the large batch of sensors. That +/-0.2C uncertainty would only vary over time if you kept buying a new sensor for each reading, so that you are taking another (different) sample out of the +/-0.2C distribution. The randomness within the +/-0.2C range falls under the "which sensor did they ship?" question, not the "did I take another reading?" question.
    • When you want to examine the trend in temperature, that fixed error becomes part of the regression constant, not the slope.
    • ...and if you use temperature anomalies (subtracting the mean value), then the fixed error subtracts out.

    Proper estimation of propagation of uncertainty requires recognizing the proper type of error, the proper source, and properly identifying when sampling results in a new value extracted from the distribution of errors.
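    The last two bullet points can be illustrated with a few lines of code (the 0.02 C/yr trend and the -0.13 C bias are invented numbers for the sketch):

    ```python
    import numpy as np

    years = np.arange(30, dtype=float)
    true_temp = 10.0 + 0.02 * years         # assumed true series: 0.02 C/yr warming trend
    reading = true_temp - 0.13              # sensor with a fixed -0.13 C bias

    slope, intercept = np.polyfit(years, reading, 1)
    print(f"fitted trend     = {slope:.4f} C/yr   (unaffected by the bias)")
    print(f"fitted intercept = {intercept:.2f} C  (the -0.13 C bias ends up here)")

    # Anomalies: subtracting each series' own mean removes the fixed bias entirely.
    anom_error = (reading - reading.mean()) - (true_temp - true_temp.mean())
    print(f"largest anomaly error = {np.abs(anom_error).max():.1e} C")
    ```

    The fixed bias shifts the intercept of the fit but leaves the slope untouched, and it vanishes completely once both series are expressed as anomalies.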

  32. A Frank Discussion About the Propagation of Measurement Uncertainty

    Eclectic # 43: you can write books on propagation of uncertainty - oh, wait. People have. The GUM is excellent. Links in previous comments.

    When I taught climatology at university, part of my exams included doing calculations of various sorts. I did not want students wasting time trying to memorize equations, though - so the exam included all the equations at the start (whether they were needed in the exam or not). No explanation of what the terms were, and no indication what each equation was for - that is what the students needed to learn. Once they reached the calculations questions, they knew they could find the correct equation form on the exam, but they needed to know enough to pick the right one.

    Pat Frank is able to look up equations and regurgitate them, but he appears to have little understanding of what they mean and how to use them. [In the sqrt(N) case in this most recent paper, he seems to have choked on his own vomit, though.]

  33. No, a cherry-picked analysis doesn’t demonstrate that we’re not in a climate crisis

    Paul @ 22:

    Good question. PubPeer can be a useful method of providing further review of a published article. It requires that someone start the discussion - you, for example, started one on an earlier Pat Frank paper, as you noted at ATTP's blog. Authors of the paper may not participate, though, and sometimes the discussions at PubPeer descend into flame wars that make a Boy Scout wiener roast look innocent (for the wiener).

    [Note: I see you posted today at ATTP's that someone has started a PubPeer review.]

    I debated starting one over the recent Pat Frank paper discussed here, but your experience with the earlier Pat Frank paper made me feel that it would likely be a waste of time.

    There have been other "contrarian" papers that have been handled by either writing to the journal or submitting an official comment to the journal, but not all journals are interested in publishing comments.

    Springer has retracted this paper, with only a short note as to why. We do not see the detailed nature of the complaints, what was said in post-publication review, or what the authors said in response. Just the opinion that "...the addendum was not suitable for publication and that the conclusions of the article were not supported by available evidence or data provided by the authors" and the conclusion that "...the Editors-in-Chief no longer have confidence in the results and conclusions reported in this article."

    A lot of speculation can be read between the lines of the Springer retraction notice. Sometimes, such reviews can end up with papers being retracted, editors being removed, or even a publisher shutting down a journal (cf. Pattern Recognition in Physics).

    Springer has not made the paper "disappear". It is still available on the web page, but marked as retracted. It's just that Springer has put a huge "caveat emptor" on the contents.

  34. Ice age predicted in the 70s

    Don... "...offering a narrow view on a narrow set of discussion points isn't helpful to those seeking answers and clarification..."

    It seems to me, reading back through the conversation, you're not actually seeking answers or clarification at all. When offered such you've merely rejected it and doubled down on your errors.

    Seeking answers requires that you are open to understanding explanations and have some capacity to move a conversation forward through adjusting and learning.

  35. No, a cherry-picked analysis doesn’t demonstrate that we’re not in a climate crisis

    Why wasn't this paper cycled through PubPeer.com for post-publication peer review? The authors disagreed with the retraction, and that may have given them a chance to air their grievances. Don't have to wait for Festivus Day.

  36. 2023 SkS Weekly Climate Change & Global Warming News Roundup #34

    The retracted study made questionable claims that food production hasn't been affected by climate change. I came across this commentary recently, following a discussion on another website; it suggests food production is already being negatively impacted by climate change:

    Climate change is affecting crop yields and reducing global food supplies
    Published: July 9, 2019 11.22pm NZST

    Farmers are used to dealing with weather, but climate change is making it harder by altering temperature and rainfall patterns, as in this year’s unusually cool and wet spring in the central U.S. In a recently published study, I worked with other scientists to see whether climate change was measurably affecting crop productivity and global food security.
    To analyze these questions, a team of researchers led by the University of Minnesota’s Institute on the Environment spent four years collecting information on crop productivity from around the world. We focused on the top 10 global crops that provide the bulk of consumable food calories: Maize (corn), rice, wheat, soybeans, oil palm, sugarcane, barley, rapeseed (canola), cassava and sorghum. Roughly 83 percent of consumable food calories come from just these 10 sources. Other than cassava and oil palm, all are important U.S. crops.

    We found that climate change has affected yields in many places. Not all of the changes are negative: Some crop yields have increased in some locations. Overall, however, climate change is reducing global production of staples such as rice and wheat. And when we translated crop yields into consumable calories – the actual food on people’s plates – we found that climate change is already shrinking food supplies, particularly in food-insecure developing countries.......


    theconversation.com/climate-change-is-affecting-crop-yields-and-reducing-global-food-supplies-118897

    I wonder if Sky News has published the fact that the paper was retracted?

  37. Don Williamson at 04:38 AM on 28 August 2023
    Ice age predicted in the 70s

    I've recently posted quotes supported by links but the comments haven't appeared on this forum. I understood that the forums on Skeptical Science were the go-to for tough questions, but I'm beginning to see the filtering - maybe the statements from well-respected climate scientists are too difficult to acknowledge or too difficult to explain? IMHO, offering a narrow view on a narrow set of discussion points isn't helpful to those seeking answers and clarification. Please rethink the silencing of those that are bringing up genuine issues for discussion  :)

    Moderator Response:

    [BL] The only comments I can see from you begin with your first post, in this thread, on August 15, 2023.

    You have made a total of 21 comments on this site, and all are still visible. None of your comments have been deleted, and the only changes to the contents of those comments have been to activate links - in which case the displayed text may have changed, but the embedded link is still the same.

    This comment is the exception. You have made accusations that are not acceptable according to the Comments Policy, and I have applied a "warning snip" to the portions that violate the policy. Please read the policy thoroughly, and make sure that future comments adhere to the policy.

    Moderation actions here are always applied post-facto. If you attempt to make a comment and it never appears, then you have done something wrong.

    The moderation sequence you can expect is

    • Warning snips, such as done here. Others will still see your text, but you are beginning to skate on thin ice and you can expect more severe moderation if you continue.
    • Full snips, where offending text is removed and not visible to others.
    • Deletion of entire comments.
    • ...and if you continue to violate the Comments Policy, then your account will be deactivated.

    Also note that moderation policies are not open to discussion, and moderation complaints are always off-topic.

     Also note:

    If you are looking at comments on the Recent Comments page (accessed from the Comments link under the masthead), then clicking on a comment that takes you to a blog post with a long comments section will take you to the wrong page of comments. There is a bug where the link in "New Comments" assumes 50 comments per page, but there are only 25, so it will try to take you to e.g., page 3 instead of page 6.

    This comment of yours, in Recent Comments, shows this incorrect link, which will show you older comments on page 3.

    https://skepticalscience.com/argument.php?a=1&p=3#141648

    If I change "p=3" to "p=6" in the link, I get to the correct comment.

    https://skepticalscience.com/argument.php?a=1&p=6#141648


  38. No, a cherry-picked analysis doesn’t demonstrate that we’re not in a climate crisis

    Noted in a couple of other threads, but worth repeating here.

    This paper has been retracted. Further details available in several places:

  39. 2023 SkS Weekly Climate Change & Global Warming News Roundup #34

    Also note that this retracted paper was examined in this Skeptical Science post, which ATTP reblogged at that time; today ATTP has a follow-up post.

  40. A Frank Discussion About the Propagation of Measurement Uncertainty

    Apart from ATTP's posts on his own website (with relatively brief comments) . . . there is a great deal more in the above-mentioned PubPeer thread (from September 2019)   ~  for those who have time to go through it.

    So far, I am only about one-third of the way through the PubPeer one.   Yet it is worth quoting Ken Rice [=ATTP]  at #59 of PubPeer :-

    "Pat, Noone disagrees that the error propagation formula you're using is indeed a valid propagation formula.  The issue is that you shouldn't just apply it blindly whenever you have some uncertainty that you think needs to be propagated.  As has been pointed out to you many, many, many, many times before, the uncertainty in the cloud forcing is a base state error, which should not be propagated in the way you've done so.  This uncertainty means that there will be an uncertainty in the state to which the system will tend; it doesn't mean that the range of possible states diverges with time."

    In all of Pat Frank's many, many, many truculent diatribes on PubPeer, he continues to show a blindness to the unphysical aspect of his assertions.

  41. CO2 is not a pollutant

    Please note: a new basic version of this rebuttal was published on August 27 which includes an "at a glance" section at the top. To learn more about these updates and how you can help with evaluating their effectiveness, please check out the accompanying blog post @ https://sks.to/at-a-glance

  42. A Frank Discussion About the Propagation of Measurement Uncertainty

    Sorry that this is the first time I have commented; I have been lurking for years.

    Figure 4, 3rd explanation

    Typo: "Person A will usually fall in between A and C, but for short distances the irregular steps can cause this to vary." Should read "between B and C".


    Moderator Response:

    [BL] Thanks for noticing that! Corrected...

  43. A Frank Discussion About the Propagation of Measurement Uncertainty

    Oh wow. That PubPeer thread is astonishing. I didn't realize this had already been hashed out.

  44. 2023 SkS Weekly Climate Change & Global Warming News Roundup #34

    Note that the lead section of this article is discussing the same retracted paper that was mentioned in the August 24 New Research post.

    Retraction Watch also has an article about it.

  45. A Frank Discussion About the Propagation of Measurement Uncertainty

    Although DiagramMonkey may think that I am a braver person than he is, I would reserve "braver" for people that have a head vice strong enough to spend any amount of time reading and rationally commenting on anything posted at WUWT. I don't think I would survive watching a 1h26m video involving Pat Frank.

    I didn't start the hagfish analogy. If you read the DiagramMonkey link I gave earlier, note that a lot of his hagfish analogy is indeed discussing defence mechanisms. Using the defence mechanism to defend one's self from reality is still a defence mechanism.

    The "per year" part of Pat Frank's insanity has a strong presence in the PubPeer thread bigoilbob linked to in comment 27 (and elsewhere in other related discussions). I learned that watts are Joules/second - so already a measure per unit time - something like 40-50 years ago. Maybe some day Pat Frank will figure this out, but I am not hopeful.


  46. A Frank Discussion About the Propagation of Measurement Uncertainty

    Eclectic, I appreciate the kind words. I think the strangest part of my conversation with Pat Frank is when he quotes Lauer & Hamilton saying 4 W m-2 and then gaslights me because I didn't arbitrarily change the units to W m-2 year-1 like he did. The sad part is that he did this in a "peer reviewed" publication. Oh wait... Frontiers in Earth Science is a predatory journal.

  47. A Frank Discussion About the Propagation of Measurement Uncertainty

    Bdgwx @36 , you have made numerous comments at WUWT  blogsite, where the YouTube Pat Frank / Tom Nelson interview was "featured" as a post on 25th August 2023.

    The video is one hour and 26 minutes long.  ( I have not viewed it myself, for I hold that anything produced by Tom Nelson is highly likely to be a waste of time . . . but I am prepared to temporarily suspend that opinion, if an SkS  reader can refer me to a worthwhile Nelson production.)

    The WUWT  comments column has the advantage that it can be skimmed through.  The first 15 or so comments are the usual rubbish,  but then things gradually pick up steam.  Particularly, see comments by AlanJ and bdgwx , which are drawing heat from the usual suspects (including Pat Frank).

    Warning : a somewhat masochistic perseverance is required by the reader.  But for those who occasionally enjoy the Three-Ring Circus of absurdity found at WUWT  blog, it might have its entertaining aspects.   Myself, I alternated between guffaws and head-explodings.  Bob Loblaw's reference to hagfish (up-thread) certainly comes to mind.   The amount of hagfish "sticky gloop" exuded by Frank & his supporters is quite spectacular.

    [ The hagfish analogy breaks down ~ because the hagfish uses sticky-gloop to defend itself . . . while the denialist uses sticky-gloop to confuse himself especially. ]

  48. A Frank Discussion About the Propagation of Measurement Uncertainty

    I'm sure this has already been discussed. But regarding Frank 2019 concerning CMIP model uncertainty, the most egregious mistake Frank makes is interpreting the 4 W/m2 calibration error of the longwave cloud flux from Lauer & Hamilton 2013 as 4 W/m2.year. He sneakily changed the units from W/m2 to W/m2.year.

    And on top of that he arbitrarily picked a year as a model timestep for the propagation of uncertainty even though many climate models operate on hourly timesteps. It's easy to see the absurdity of his method when you consider how quickly his uncertainty blows up if he had arbitrarily picked an hour as the timestep.

    Using equations 5.2 and 6, and assuming F0 = 34 W/m2 and a 100-year prediction period, we get ±16 K for yearly model timesteps and ±1526 K for hourly model timesteps. Not only is it absurd, but it's not even physically possible.
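    The scaling itself can be shown without reproducing Frank's equations in full: if a fixed per-step uncertainty u is accumulated in root-sum-square fashion, the envelope after N steps grows as u·sqrt(N), so shrinking the arbitrary timestep from a year to an hour multiplies the claimed envelope by roughly

    $$\sqrt{\frac{N_{\mathrm{hourly}}}{N_{\mathrm{yearly}}}}=\sqrt{\frac{100\times 8760}{100}}=\sqrt{8760}\approx 94 .$$

    That factor of roughly 94 is how a ±16 K envelope balloons into the four-digit range quoted above - and nothing in the physics of the climate system changes just because the modeller picked a different bookkeeping interval.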


  49. A Frank Discussion About the Propagation of Measurement Uncertainty

    Here is a lengthy interview with Pat Frank posted 2 days ago.

    https://www.youtube.com/watch?v=0-Ke9F0m_gw

    Per usual there are a lot of inaccuracies and misrepresentations about uncertainty in it.

    Moderator Response:

    [RH] Activated link.

  50. A Frank Discussion About the Propagation of Measurement Uncertainty

    In the "small world" category, a comment over at AndThenTheresPhysics has pointed people to another DiagramMonkey post that covers a semi-related topic: a really bad paper by one Stuart Harris, a retired University of Calgary geography professor. The paper argues for several climate myths.

    I have had a ROFLMAO moment reading that DiagramMonkey post. Why? Well in the second-last paragraph of my review of Pat Frank's paper (above), I said:

    I am reminded of a time many years ago when I read a book review of a particularly bad “science” book: the reviewer said “either this book did not receive adequate technical review, or the author chose to ignore it”.

    That was a book (on permafrost) that was written by none other than Stuart Harris. I remember reviewing a paper of his (for a permafrost conference) 30 years ago. His work was terrible then. It obviously has not improved with age.
