





Analysts dissect historic Pacific Northwest ‘heat dome’

Posted on 7 July 2021 by greenman3610

This is a re-post from Yale Climate Connections by Peter Sinclair

“In a climate-changed world, the things that once seemed impossible are not just possible, they are probable.”

Those words come from on-air weather and climate change expert Jeff Berardelli, wrapping up a piece for CBSN.

Berardelli had told the CBS cable network anchor that the temperature in the Pacific Northwest has increased 3 to 4°F in the past century, leading to expectations of seven to 10 days per year of temperatures over 100°F in the future. He attributed the historic high-temperature readings across the U.S. Pacific Northwest and western Canada to an "omega block": a "blocking pattern or traffic jam in the atmosphere" that impedes strong weather systems from moving rapidly from west to east. Instead, they "get stuck" in place.

“Almost unbelievable, even for meteorologists,” Berardelli says of the persistent heat that plagued the region June 26-28.

“We can certainly say that heat waves are increasing as a result of climate change,” says Steve Vavrus of the University of Wisconsin. Colorado assistant state climatologist Becky Bolinger points to “a time 30 years ago, 40 years ago, when it was uncommon to get 100-degree days every single year, and that is much more common now.”  They and other scientists in the video find a link to climate change.  

Kai Kornhuber of Columbia University says in the latest “This is Not Cool” video that the result is that “a weather pattern that was harmless in the past is now leading to very extreme conditions.”


Comments

Comments 1 to 12:

  1. Link to the study is on this page; please read the "main findings" bullets.  https://www.worldweatherattribution.org/western-north-american-extreme-heat-virtually-impossible-without-human-caused-climate-change/ The bullets are sensible.

    I'm not sure if they are intent on showing the event was mostly weather.  But they are comparing adjusted, homogenized data from prior years to unhomogenized data from this year which exaggerates the current event.  That makes it even more likely to be weather.  They are comparing ERA to the euro model in the bounded area box.  The 5C outlier extreme temperature in the bounded box is not verified by any station other than very short record stations, which I suspect is the problem with the analysis of the box.

    In any case they are showing an event with an extreme outlier temperature without showing an increase in similar extremes. They claim an increase but have no long term data to show an increase.

    Moderator Response:

    [BL] Link activated.

    The web software here does not automatically create links. You can do this when posting a comment by selecting the "insert" tab, selecting the text you want to use for the link, and clicking on the icon that looks like a chain link. Add the URL in the dialog box.

  2. Eric:

    You state "they are comparing adjusted, homogenized data from prior years to unhomogenized data from this year which exaggerates the current event. "

    Please explain your reasoning as to why recent unhomogenized data causes this problem.

    Please also explain/support your assertions regarding "very short record stations" and "no long term data".

  3. Like Bob, I would like to know more about why exactly Eric has a problem with the World Weather Attribution Group method. 

    There is a discussion at RC about this and they link to the preprint, where this can be found in section 2.1 (Observational data):

    "The main dataset used to represent the heatwave is the ERA5 reanalysis (Hersbach et al., 2020), extended to the time of the heatwave by ECMWF operational analyses produced using a later version of the same model. All fields were downloaded at 0.25º resolution from the ECMWF. Both products are the optimal combination of observations, including near-surface temperature observations from meteorological stations, and the high-resolution ECMWF weather forecast model IFS. Due to the constraints of the surface temperature observations, we expect no large biases between the main dataset and the extension, although some differences may be possible under these extreme conditions."

    It would be nice to propose a potentially better methodology before condemning this one to the Gemonies.

    https://www.worldweatherattribution.org/wp-content/uploads/NW-US-extreme-heat-2021-scientific-report-WWA.pdf

     

    Per NOAA, the period of record for Vancouver, WA starts in 1872. For Portland, OR in 1872. For Seattle, WA in 1891. For Vancouver BC in 1877. Etc, etc...

  4. Hot off the press:

    The deadly heatwave that hit north-western US and Canada in late June would have been “virtually impossible” without human-caused global warming, a new “rapid-attribution” study finds.

    The event, which saw temperature records shattered by as much as 5C, has been linked to hundreds of deaths in the Pacific north-west region.

    The heatwave was “so extreme” that the observed temperatures “lie far outside the range” of historical observations, the researchers say. Their assessment suggests that the heatwave was around a one-in-1,000-year event in today’s climate – and was made at least 150-times more likely because of climate change.

    The analysis also finds that, if global warming were to hit 2C, a heatwave as extreme as seen last month would “occur roughly every five to 10 years” in the region.

    Pacific north-west heatwave shows climate is heading into ‘uncharted territory’ by Robert McSweeney, Carbon Brief, July 7, 2021

    https://www.carbonbrief.org/pacific-north-west-heatwave-shows-climate-is-heading-into-uncharted-territory

  5. Thanks for fixing my link, and thanks for the questions.  I found one long record station that continues to the present in SW Canada:  

    Lillooet, Canada highest temperature by month

    The trend is clearly up but, due to the lack of data, lacks statistical significance.  I have the code on that website and can provide data if anyone wants to validate.  The bottom line is that the first possibility in the top line of the paper is the most likely: 

    There are two possible sources of this extreme jump in peak temperatures. The first is that this is a very low probability event, even in the current climate which already includes about 1.2°C of global warming — the statistical equivalent of really bad luck, albeit aggravated by climate change. [versus climate nonlinearity]

    They also point out the possibility of a combination.  But charts from the US PNW are clear: a mostly flat trend in the highest monthly temperature followed by an extreme outlier.  Bob, short record stations can't show the trend.  Homogenization may or may not be a factor, and only in the single Canadian example, New Westminster.  They said the 2020 homogenized and raw matched exactly.  But that doesn't guarantee that extreme temperatures were not moderated by homogenization in prior years.

    Philippe said: "For Portland, OR in 1872. For Seattle, WA in 1891."  They used Portland Int AP, which starts in 1938, and SeaTac AP, which starts in 1948.

  6. Thank you for providing additional information, Eric.

    Regarding Lillooet. What is your source of data, and are you going by name, or the Climate ID used by the Meteorological Service of Canada? The information sources I have contain 29 entries for Lillooet, starting in 1878. Each entry indicates small changes in observing programs. Although there are gaps (as suggested by your graph), I see information that suggests a station was active in the period 1948-1970, and other stations in the 1970s and 1980s.

    There are nine Climate ID values associated with those 29 station information records - a few have additional information in the names, such as "Lillooet A", which indicates an airport location. The nine Climate IDs are associated with slight variations in location, which would indicate a need for homogenization if records are joined.

    You may be looking at a very incomplete record for the Lillooet area.

    You may wish to look at the recent discussion where several of us talked about the Lytton location (record all-time Canadian high temperature) and fire:

    https://skepticalscience.com/pacific-northwest-death-valley-like.html

    In the information I have access to, the current Lytton RCS station (Climate ID 1114746) has been operating since 2006, but there are other records in Lytton going back to 1966. Lytton also has nine different Climate IDs associated with the name (including variations such as "Lytton", "Lytton RCS", "Lytton 2"). Again, homogenization would be required to join these together, but the current Lytton RCS station is within one arc-minute of the 1966 location (and 50m higher in altitude).

    A great many weather observing locations in Canada (and throughout the world) have undergone many changes over the years, and it takes a lot of work to collect all the different bits and pieces. That's why people do homogenization, and they do tend to know what they are doing.

    Although you mention "that web site", you did not actually provide a link.

    You also state "Homogenization may or may not be a factor..." and "...that doesn't guarantee that extreme temperatures were not moderated by homogenization in prior years".

    That is a very weak argument. Maybe it is? Maybe it isn't? Maybe you don't really know?

    What do you consider to be a "short record station"? How many years? On what basis do you decide that this is too short?

  7. Bob, I used this link climate.weather.gc.ca/historical_data/search_historic_data_e.html and then type a town. I noticed in most cases observation locations have changed. When I used NCDC, these are my steps: followthedata.dev/wx/temp/trends.html Although the US stations that result from that search may show as complete from 18XX to present, there may in fact be observation location changes over that stated time period.

    My impression from the Canada search results is that location changes are all explicit. Elevation changes are likely in the terrain in the parts of BC I looked at, which can create obvious discontinuities.

    The main reason I used the highest temperature for each month in the trend is to avoid having to homogenize. The highest temperature for the month has a very low chance of being double counted from the previous month (high temp on last day of previous month counted on first day of next month). I basically avoid one problem (observation time with min/max reset) that homogenization may solve.
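
    To illustrate, here is a minimal sketch of that monthly-maximum approach (this is not the actual code from the site linked above; the file and column names are hypothetical): keep the single highest daily Tmax in each calendar month, then fit a linear trend to those values.

```python
# Hypothetical sketch of a monthly-maximum trend: keep the highest daily Tmax
# in each calendar month, then fit an ordinary least-squares trend line.
import pandas as pd
from scipy import stats

daily = pd.read_csv("station_daily.csv", parse_dates=["date"])  # hypothetical file
daily = daily.dropna(subset=["tmax"])                           # hypothetical column

# Highest Tmax observed in each calendar month
monthly_max = daily.groupby(daily["date"].dt.to_period("M"))["tmax"].max()

# Linear trend of the monthly maxima against time (in fractional years)
years = monthly_max.index.year + (monthly_max.index.month - 0.5) / 12
slope, intercept, r, p, se = stats.linregress(years, monthly_max.values)
print(f"trend: {slope * 10:+.2f} degrees per decade (p = {p:.3f})")
```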

    As for Lillooet, it is actually missing the temperature data but has some precipitation, e.g.

    "-121.93","50.70","LILLOOET","1114620","1958-01-21","1958","01",

    "21","","","","","","","","","","","","0.0","","13.2","","13.2","","","","","","",""

    So I assumed the temperature was not recorded and those particular gaps (all with the same station ID) are not the result of station moves, which would have completely missing data. I believe I used 955 and 27388 but I no longer have the shell script in my shell history so I don't know for sure. Here is a line from the recent data:

    "-121.93","50.68","LILLOOET","1114619","2021-01-13","2021","01","13","","5.5","","-0.2","","2.6","","15.4","","0.0","","","\
    ","","","","M","","","","M","","M"

    On my US maps I used a 90 year minimum to include the 1930's (source code is linked). My station selection process (described at the link) is to download the first 3-4 stations with 98% or more completeness with current data and the earliest start date. Some states have limited records so I settled for as low as 96% in some cases. But I am also downloading any long-record station that achieves new all-time highs. I have not gone past mid-June so I do not have all of the PNW heat wave stations (I started this before the PNW heat wave).

    The bottom line is with the 98% criteria I can usually get 100 or more years of data. I consider that a minimum for rare events but obviously inadequate for very rare events. We will never see those.

    Moderator Response:

    [BL] Added a line break to try to fix page formatting issue.

  8. Thanks for the links, Eric.

    I will first comment on the Canadian sources of data, which I have more than a passing knowledge of. You have found the main public archive of data available from Environment and Climate Change Canada (Meteorological Service of Canada). The web site (https://climate.weather.gc.ca/historical_data/search_historic_data_e.html) has its limitations in search functions - in particular it's a bit hard to determine sources of data, whether data are available on hourly, daily, or monthly periods, etc. It takes a bit of manual labour to try all the drop-downs and see what shows up.

    In my comment #6, the source of information I was using for station listings and observation programs was the old "Station Data Catalog". For decades, it was published on paper by MSC, and for a while in the electronic age the catalog was available electronically, but they seem to have stopped providing it publicly. The last electronic copy I have is from 2017. The Catalog lists all observing programs/time periods for each station, including some that are not available electronically. MSC still has records that have not been transferred from paper to electronic form, and the public archive at the web site you used also does not cover every station and observing program that exists electronically. That web site is probably as complete as it gets for public records, though.

    The station catalog I refer to tells you when a measurement program was active at a station, but it does not tell you when there are gaps in the observations. For example, the one period I mentioned in comment #6 is based on this entry:

    1114620 LILLOOET BC | 50 42 121 56 | 0290| 1948-03-01 1970-02-01

    but the data do not appear to be available on the web. As you have discovered, even if the web page provides you with a download, you may find missing values.

    Yes, a change in station identifiers usually means a significant change in location, but it is not always so. Over many decades, it is hard to maintain a standard policy on when a new identifier should be used. The same identifier may be used when a station undergoes major changes in instrumentation, too - even though this can mean a discontinuity in methodology that may require homogenization. Station names may also change, making it hard to find other stations that might fill a gap. Once you have selected a specific station and are viewing data, there is a "Nearby Stations with Data" link.

    A useful link to additional sources of MSC data is this one: https://climate.weather.gc.ca/links/index_e.html. It includes a link to gridded data based on adjusted and homogenized data.

    The MSC web site provides data from a variety of sources, both in terms of who originally collects the data and the type of observations performed. Data can come from MSC-operated stations, or partners such as Nav Canada, DND, Parks, or provincial agencies. Observations can be done manually, or through a variety of automatic systems. Manual observations can be detailed hourly meteorological measurements by trained observers, or simple once-per-day temperature (max/min) and precipitation measurements by volunteers (although the equipment is provided by MSC).

    You can find more information on the observation and processing methods in the Technical Documentation and Glossary links on this web page: https://climate.weather.gc.ca/about_the_data_index_e.html.

    A few key points:

    • Daily mean temperature is calculated as (Max+Min)/2 regardless of the level of detail available in the original observations. This provides consistency across many different data sources.
    • When manual observations of Max/Min are done once or twice per day, there are specific rules on how these are assigned to calendar dates, to maintain consistency. A high temperature will not be used twice on two days or two months.
    • When manual or automatic readings are available on an hourly basis, all Canadian data use a "climatological day" from 0601Z to 0600Z, regardless of time zone (see the sketch below).
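
    As a hedged illustration of the first and third conventions above (this is not MSC code; the file and column names are hypothetical):

```python
# Sketch: daily mean as (Max+Min)/2, and hourly observations grouped into a
# "climatological day" running 0601Z-0600Z. Names are hypothetical.
import pandas as pd

daily = pd.read_csv("daily.csv", parse_dates=["date"])        # hypothetical file
daily["tmean"] = (daily["tmax"] + daily["tmin"]) / 2          # the (Max+Min)/2 rule

hourly = pd.read_csv("hourly.csv", parse_dates=["utc_time"])  # hypothetical file
# Shifting timestamps back by 6h01m maps the 0601Z-0600Z window onto a single
# calendar date (0601Z lands at 00:00; 0600Z lands at 23:59 of the prior day).
clim_day = (hourly["utc_time"] - pd.Timedelta(hours=6, minutes=1)).dt.floor("D")
daily_from_hourly = hourly.groupby(clim_day)["temp"].agg(["max", "min"])
```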

    For the archived data, hourly, daily, and monthly reports are stored independently, so you are not looking at daily or monthly values that are calculated on the fly when you request them. Until recently, only selected stations were being processed into daily and monthly results. For monthly results, the MSC follows WMO rules for completeness of records: monthly values are not computed if more than a few days are missing. As a result, monthly data may not be provided for a lot of stations that do report daily data.

    I will comment on your methodology in the next response.

  9. Now, regarding the brief description you give of your methodology, Eric. If I go directly to the image you provided in comment #5, it is a little easier to read the text than it is here on the narrow SkS web page.

    https://followthedata.dev/wx/temp/trends/LILLOOET.png

    You also say a few words on your use of monthly maxima in comment #7.

    The distribution of temperature involves a lot more than just the extreme values, so you are not looking at the full picture when you restrict the data set that way. Restricting it to monthly maxima is even further limiting. In examining climatological data, it is better to look at all the data and apply a frequency distribution, then use the fitted distribution to assess extreme values. That distribution may be a normal Gaussian curve in the case of temperature, but will be something else for other parameters. For instance, precipitation events are not at all described by a normal distribution, so other statistical distributions are used.
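
    A minimal sketch of that fit-then-assess approach (synthetic data standing in for observations; in practice an extreme-value distribution such as a GEV is often preferred over a plain normal for this kind of work):

```python
# Fit a distribution to ALL the data, then use the fitted distribution
# to assess how rare a given extreme is. Synthetic data for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
tmax = rng.normal(loc=30.0, scale=5.0, size=3000)  # stand-in for observed Tmax

mu, sigma = stats.norm.fit(tmax)                   # fit using all values

extreme = 45.0
p_exceed = stats.norm.sf(extreme, loc=mu, scale=sigma)  # tail probability
print(f"P(Tmax > {extreme}) ~ {p_exceed:.2e}")
```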

    In comment #7, you finish with:

    The bottom line is with the 98% criteria I can usually get 100 or more years of data. I consider that a minimum for rare events but obviously inadequate for very rare events. We will never see those.

    It is erroneous to assume that an extreme temperature with a 100-year return interval is the highest temperature in a 100-year period of record, and it is erroneous to think that a shorter record will not contain these extreme values. A 100-year record may not include a 100-year return interval event, or it may contain several, and it may contain a 1000-year event. These are probabilities, not certainties.
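
    To put numbers on that (an idealized calculation assuming independent years), the chance that an N-year record contains at least one event with a 1-in-100 annual probability is 1 - (1 - 1/100)^N:

```python
# Probability of seeing at least one "100-year" event in an N-year record,
# treating years as independent (an idealization).
for n in (30, 100, 300):
    p = 1 - (1 - 1 / 100) ** n
    print(f"{n}-year record: {p:.0%}")
# 30-year record: 26%   100-year record: 63%   300-year record: 95%
```

    So even a full century of data has only about a 63% chance of containing a 100-year event, and a short record can still contain one by chance.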

    Take the case of Lytton. Is the extreme measured in June 2021 (which is an all-time Canadian record) a 100-year event? A 1000-year event? Neither? Even if the Lytton station only had data available starting in 2021, that temperature would be an extreme event even though the period of record is short. The classification "extreme event" is essentially independent of the length of record. Our ability to assess that classification does require more data - but we need to keep a clear distinction between "is it extreme?" and "how do we know it is extreme?". If an event is extreme, it is extreme whether we know it or not.

    In order to assess the 100-year/1000-year question, what is needed is enough data to properly assess the distribution of temperatures, which can be done with a lot less than 100 years of data, and involves including more than just the extreme data. In particular, the distribution within a region can be assessed using many different stations throughout the region, and determining that the behaviour is similar across the region.

    In short, assessing the likelihood of extreme events is a little more complicated than it initially looks.

  10. ...it is better to look at all the data and apply a frequency distribution, then use the fitted distribution to assess extreme values.

    Thanks for the added feedback, Bob.  Initially I thought my question was simpler: how much of the recent event was weather and how much was global warming, using the trend of monthly maximums.  Then your frequency distribution suggestion led me to this paper:

    The changing shape of Northern Hemisphere summer temperature distributions  (the Wiley link may not work, so I included the title)

    They are doing what you suggested, a frequency distribution of all Tmax values.  Then they trend the percentiles.  That seems very sensible.  My trend of the maximum value of the month does not capture the nature of global warming because global warming affects averages.
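
    In sketch form (hypothetical file and column names, not the paper's code), trending percentiles rather than the single monthly maximum looks like this:

```python
# For each year, compute percentiles of summer daily Tmax, then fit a
# linear trend to each percentile series. Names are hypothetical.
import pandas as pd
from scipy import stats

daily = pd.read_csv("station_daily.csv", parse_dates=["date"])  # hypothetical
summer = daily[daily["date"].dt.month.isin([6, 7, 8])]

for q in (0.50, 0.90, 0.99):
    by_year = summer.groupby(summer["date"].dt.year)["tmax"].quantile(q)
    slope, *_ = stats.linregress(by_year.index, by_year.values)
    print(f"{q:.0%} percentile trend: {slope * 10:+.2f} per decade")
```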

    That's of course using my assumption from the discussion in the rapid response paper: that the weather was not affected by global warming, just the temperature.  So I have to go back and redo my work.

    It seems reasonable that a warmer Gulf of Mexico could pump out more moisture and temper Tmax in the central and eastern US.  Out west there may be a "desert amplification" effect in dry locations, but there's a lot of variability and that could be Tmax cooling from reforestation (e.g. Nevada City CA), Tmax warming from draining the delta (Sacramento around 1920), and some weather amplification.  Soden and Held said the wet get wetter and dry get drier, and I think that applies to weather and seasons more than locations.

    Drought is natural but amplification of any drought is part of global warming, and that clearly contributed to Tmax in Portland with 14% RH and Lytton, which I believe went all the way down to 9%.  I'll have to leave that for later.

    Finally, thanks for the Canada info.  I drilled into a directory and found gridded anomalies.  That could provide a global warming trend but probably for Tavg, rather than Tmax.  I could compare the trend to the raw Tmax values for the recent event.  But I'll probably stick with USA for now.

  11. Yes, Eric, distributions bring in a lot more data and make it a lot easier to evaluate the "how probable?" question. It really is a question of probabilities - asking "how much is cause A, vs. how much is cause B?" is very difficult when both cause A and cause B have built-in variation.

    As a simple thought experiment, let's take the hypothetical case of a location with a mean high temperature of 30 for July, and with a normally-distributed variation that gives us a standard deviation of 5. Based on the characteristics of a normal distribution:

    • We would expect 95.45% of the values for July high temperature to be within 2 SD (so, in the range 20 to 40).
    • We would expect 99.73% of the values for July high temperature to be within 3 SD (so, in the range 15 to 45).
    • If we only look at hot extremes, temperatures >40 would happen 2.3% of the time, and temperatures above 45 would happen 0.14% of the time.

    The latter case (>45) is roughly a 1:700 return interval. That does not mean that we'd only see one such event in 700 tries, though: random variation could mean it happens 0, 1, 2, 3, or more times (with it being increasingly unlikely as that number increases).

    Let's look at two scenarios where conditions change.

    • In the first, the average does not change, but conditions get more variable. The SD increases to 7.5. Now, the upper limit of 45 is only 2 SD away from the mean, so it will be exceeded 2.3% of the time instead of 0.14% of the time. That is much more frequent.
    • In the second, the SD does not change, but the mean increases from 30 to 35. Again, the upper limit of 45 is only 2 SD from the mean, so it will be exceeded 2.3% of the time.

    In reality, both the mean and variation can change - and need to be evaluated.

    To add to the difficulty, the 2.3% figure is a long-term statistic. Try generating a sequence of 100 random values (normally distributed) and counting how many are more than 2 SD above the mean. Repeat it several times, and watch the count change. (In my quick check, I can get it to vary from 0 to 7 over a few dozen tries.) It is that variation that makes it difficult to use just the extreme values to assess whether there is a shift in the regime. The count of how many values exceed the chosen extreme, in a random sample of the variable, is given by the binomial distribution.
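
    That quick check is easy to reproduce; here is a sketch of the same experiment, using the hypothetical July distribution from above (mean 30, SD 5):

```python
# Tail probabilities for the hypothetical July distribution, plus the
# sampling variation in how many of 100 draws exceed the mean + 2 SD.
import numpy as np
from scipy import stats

mu, sd = 30.0, 5.0
print("P(T > 40):", stats.norm.sf(40, mu, sd))  # ~0.023 (2 SD above mean)
print("P(T > 45):", stats.norm.sf(45, mu, sd))  # ~0.0014 (3 SD above mean)

rng = np.random.default_rng()
counts = [(rng.normal(mu, sd, 100) > mu + 2 * sd).sum() for _ in range(20)]
print("exceedances per 100 draws, over 20 trials:", counts)
# The long-run expectation is ~2.3 per 100, but single trials scatter widely;
# the count follows a Binomial(n=100, p~0.023) distribution.
```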

    Getting away from statistics: yes, any region will have links to other regions. And yes, as soil dries out and both soil and vegetation respond with reduced evaporation, more of the energy received from the sun will go into heating the soil and the air. So it gets hotter, and things dry out more, etc.

  12. As usual, if you want it done right, ask Tamino. He has a new blog up on the recent NW US heat wave, and the very first graph illustrates the point I was making in comment #11.

    https://tamino.wordpress.com/2021/07/16/northwest-heat-wave/

    Bringing in the graph:

    [Figure: changing temperature distribution]

