
How reliable are climate models?

What the science says...


Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

Climate Myth...

Models are unreliable

"[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere."  (Freeman Dyson)

At a glance

So, what are computer models? Computer modelling is the simulation and study of complex physical systems using mathematics and computer science. Models can be used to explore the effects of changes to any or all of the system components. Such techniques have a wide range of applications. For example, engineering makes a lot of use of computer models, from aircraft design to dam construction and everything in between. Many aspects of our modern lives depend, in one way or another, on computer modelling. If you don't trust computer models but like flying, you might want to think about that.

Computer models can be as simple or as complicated as required. It depends on what part of a system you're looking at and its complexity. A simple model might consist of a few equations on a spreadsheet. Complex models, on the other hand, can run to millions of lines of code. Designing them involves intensive collaboration between multiple specialist scientists, mathematicians and top-end coders working as a team.

Modelling of the planet's climate system dates back to the late 1960s. Climate modelling involves incorporating the equations that describe the interactions between all the components of our climate system. It is especially maths-heavy, requiring phenomenal computer power to run vast numbers of equations at the same time.

Climate models are designed to estimate trends rather than events. For example, a fairly simple climate model can readily tell you it will be colder in winter. However, it can’t tell you what the temperature will be on a specific day – that’s weather forecasting, and weather forecast models rarely extend to even a fortnight ahead. Big difference. Climate trends deal with things such as temperature or sea-level changes over multiple decades. Trends are important because they eliminate or 'smooth out' single events that may be extreme but uncommon. In other words, trends tell you which way the system's heading.

All climate models must be tested to find out if they work before they are deployed. That can be done by using the past. We know what happened back then, either from direct observations or from evidence preserved in the geological record. If a model can correctly simulate trends from a starting point somewhere in the past through to the present day, it has passed that test. We can therefore expect it to simulate what might happen in the future. And that's exactly what has happened. From early on, climate models predicted future global warming. Multiple lines of hard physical evidence now confirm the prediction was correct.

Finally, all models, weather or climate, have uncertainties associated with them. This doesn't mean scientists don't know anything - far from it. If you work in science, uncertainty is an everyday word and is to be expected. Sources of uncertainty can be identified, isolated and worked upon. As a consequence, a model's performance improves. In this way, science is a self-correcting process over time. This is quite different from climate science denial, whose practitioners speak confidently and with certainty about something they do not work on day in and day out. They don't need to fully understand the topic, since spreading confusion and doubt is their task.

Climate models are not perfect. Nothing is. But they are phenomenally useful.



Further details

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.
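
To give a feel for what such a mathematical representation looks like in its very simplest form, here is a minimal sketch of a zero-dimensional energy balance model in Python. The feedback parameter, heat capacity and CO2 scenario are rough, textbook-style assumptions chosen for illustration, not values from any published model:

```python
# A minimal zero-dimensional energy balance model (EBM):
#     C * dT/dt = F(t) - lambda * T
# where T is the global mean temperature anomaly (K), F the radiative
# forcing (W m^-2), lambda the climate feedback parameter and C an
# effective heat capacity. Real climate models solve millions of coupled
# equations on a 3-D grid; this is only the cartoon version.

import numpy as np

LAMBDA = 1.25             # feedback parameter, W m^-2 K^-1 (assumed)
HEAT_CAP = 8.0e8          # ocean mixed-layer heat capacity, J m^-2 K^-1 (assumed)
SECONDS_PER_YEAR = 3.15e7

def co2_forcing(co2_ppm, co2_ref=280.0):
    """Radiative forcing (W m^-2) from CO2, using the simplified
    logarithmic expression of Myhre et al. (1998)."""
    return 5.35 * np.log(co2_ppm / co2_ref)

def run_ebm(forcing):
    """Integrate the EBM with yearly explicit-Euler steps.
    `forcing` is an array of annual forcings (W m^-2); returns the
    temperature anomaly (K) at the end of each year."""
    temps = np.zeros(len(forcing))
    t = 0.0
    for i, f in enumerate(forcing):
        t += SECONDS_PER_YEAR * (f - LAMBDA * t) / HEAT_CAP
        temps[i] = t
    return temps

# Illustrative scenario: CO2 rising smoothly from 280 to 560 ppm over 200 years.
years = np.arange(200)
co2 = 280.0 * 2.0 ** (years / 199.0)
warming = run_ebm(co2_forcing(co2))
print(f"Simulated warming after a CO2 doubling: {warming[-1]:.2f} K")
```

Even this toy captures the distinction drawn above: it says nothing about any particular day or year, but it does produce a multi-decade warming trend in response to a forcing trend.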

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested that CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. Known natural forcings adequately explain temperature variations prior to the last thirty years, but none of them can explain the rise over the past thirty years. CO2 does explain that rise, and explains it completely, without any need for additional, as-yet-unknown forcings.
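
The hindcast-plus-attribution logic just described can be sketched with the toy model above. Note that the forcing series and the "observations" below are synthetic stand-ins invented purely to show the mechanics of the test; a real hindcast would use published forcing reconstructions and be scored against instrumental records such as GISTEMP or HadCRUT:

```python
# Sketch of a hindcast test, reusing run_ebm() and co2_forcing() from the
# previous block. Run the model twice - natural forcings only, and
# natural + CO2 - and ask which run better matches the observed record.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2011)

# Toy forcing series (assumed shapes, not real reconstructions):
natural = 0.2 * np.sin(2 * np.pi * (years - 1900) / 11.0)  # solar-like cycle
co2_ppm = 295.0 * 1.0025 ** (years - 1900)                 # ~295 -> ~390 ppm
anthro = co2_forcing(co2_ppm, co2_ref=295.0)

run_natural = run_ebm(natural)
run_all = run_ebm(natural + anthro)

# Synthetic stand-in for the instrumental record (all-forcings run + noise):
observed = run_all + rng.normal(0.0, 0.08, len(years))

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"RMSE vs observations, natural only: {rmse(run_natural, observed):.3f} K")
print(f"RMSE vs observations, natural+CO2:  {rmse(run_all, observed):.3f} K")
# Only the run that includes CO2 forcing tracks the observed warming -
# the shape of the attribution argument made in the text.
```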

Where models have been running for sufficient time, they have also been shown to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

The climate models, far from being melodramatic, may be conservative in the predictions they produce. Sea level rise is a good example (fig. 1).

Fig. 1: Observed sea level rise since 1970 from tide gauge data (red) and satellite measurements (blue) compared to model projections for 1990-2010 from the IPCC Third Assessment Report (grey band).  (Source: The Copenhagen Diagnosis, 2009)

Here, the models have understated the problem. In reality, observed sea level is tracking at the upper range of the model projections. There are other examples of models being too conservative, rather than alarmist as some portray them. All models have limits - uncertainties - for they are modelling complex systems. However, all models improve over time, and with increasing sources of real-world information such as satellites, the output of climate models can be constantly refined to increase their power and usefulness.

Climate models have already predicted many of the phenomena for which we now have empirical evidence. A 2019 study led by Zeke Hausfather (Hausfather et al. 2019) evaluated 17 global surface temperature projections from climate models in studies published between 1970 and 2007.  The authors found "14 out of the 17 model projections indistinguishable from what actually occurred."
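
The kind of comparison Hausfather and colleagues performed can be sketched as follows. All numbers here are illustrative placeholders, not values from the paper; the idea is simply to compute the observed warming trend over a projection's lifetime and test it against the projected trend and its uncertainty:

```python
# Sketch of a projection-vs-observation consistency check, in the spirit
# of (but much simpler than) Hausfather et al. (2019).

import numpy as np

def linear_trend(years, temps):
    """Least-squares warming trend, in K per decade."""
    return 10.0 * np.polyfit(years, temps, 1)[0]

# Placeholder "observed" record over the projection's lifetime:
rng = np.random.default_rng(1)
years = np.arange(1988, 2018)
observed = 0.018 * (years - 1988) + rng.normal(0.0, 0.10, len(years))

obs_trend = linear_trend(years, observed)
proj_trend, proj_sigma = 0.20, 0.05   # K/decade, placeholder projection

z = (obs_trend - proj_trend) / proj_sigma
print(f"Observed trend: {obs_trend:.2f} K/decade; "
      f"projected: {proj_trend:.2f} +/- {proj_sigma:.2f}")
print("Consistent at 2 sigma" if abs(z) < 2.0 else "Inconsistent at 2 sigma")
```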

Talking of empirical evidence, you may be surprised to know that the fossil fuel giant Exxon's own scientists knew all about climate change, all along. A recent study of the company's own modelling (Supran et al. 2023 - open access) found it to be just as skillful as that developed within academia (fig. 2). We published a blog post about this important study around the time of its publication. However, the way the corporate world's PR machine subsequently handled this information left a great deal to be desired, to put it mildly. The paper's damning final paragraph is worth quoting in part:

"Here, it has enabled us to conclude with precision that, decades ago, ExxonMobil understood as much about climate change as did academic and government scientists. Our analysis shows that, in private and academic circles since the late 1970s and early 1980s, ExxonMobil scientists:

(i) accurately projected and skillfully modelled global warming due to fossil fuel burning;

(ii) correctly dismissed the possibility of a coming ice age;

(iii) accurately predicted when human-caused global warming would first be detected;

(iv) reasonably estimated how much CO2 would lead to dangerous warming.

Yet, whereas academic and government scientists worked to communicate what they knew to the public, ExxonMobil worked to deny it."



Fig. 2: Historically observed temperature change (red) and atmospheric carbon dioxide concentration (blue) over time, compared against global warming projections reported by ExxonMobil scientists. (A) “Proprietary” 1982 Exxon-modeled projections. (B) Summary of projections in seven internal company memos and five peer-reviewed publications between 1977 and 2003 (gray lines). (C) A 1977 internally reported graph of the global warming “effect of CO2 on an interglacial scale.” (A) and (B) display averaged historical temperature observations, whereas the historical temperature record in (C) is a smoothed Earth system model simulation of the last 150,000 years. From Supran et al. 2023.

 Updated 30th May 2024 to include Supran et al extract.

Various global temperature projections by mainstream climate scientists and models, and by climate contrarians, compared to observations by NASA GISS. Created by Dana Nuccitelli.

Last updated on 30 May 2024 by John Mason.



Further reading

Carbon Brief on Models

In January 2018, Carbon Brief published a series about climate models which includes the following articles:

Q&A: How do climate models work?
This in-depth article explains in detail how scientists use computers to understand our changing climate.

Timeline: The history of climate modelling
Scroll through 50 key moments in the development of climate models over nearly 100 years.

In-depth: Scientists discuss how to improve climate models
Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.

Guest post: Why clouds hold the key to better climate models
The never-ending, continuously changing nature of clouds has given rise to beautiful poetry, hours of cloud-spotting fun and decades of challenges to climate modellers, as Prof Ellie Highwood explains in this article.

Explainer: What climate models tell us about future rainfall
Much of the public discussion around climate change has focused on how much the Earth will warm over the coming century. But climate change is not limited just to temperature; how precipitation – both rain and snow – changes will also have an impact on the global population.

Update

On 21 January 2012, 'the skeptic argument' was revised to correct for some small formatting errors.

Denial101x videos

Here are related lecture videos from Denial101x - Making Sense of Climate Science Denial:

Additional video from the MOOC

Dana Nuccitelli: Principles that models are built on.

Myth Deconstruction

Related resource: Myth Deconstruction as animated GIF


Please check the related blog post for background information about this graphics resource.

Fact brief

A concise fact brief version of this rebuttal was created in collaboration with Gigafact.


Comments


Comments 226 to 250 out of 469:

  1. Pete Ridley - Regarding temperature data, I must apologize; apparently there are three independent data sets, not four. The two satellite sets are derived from the same sensors, albeit with very different data processing. So, the independent data sets are: satellite data (two major statistical analyses), the GHCN database data (currently 1500-2000 stations, many, many analyses), and the Global Summary of the Day (GSOD) database (9000 stations, fewer analyses). You can add to that the increasing Ocean Heat Content, sea level rises, longer growing seasons, and a ton of other data, as per the recent NOAA State of the Climate 2009 report. All raw data indicates rising temperatures, including the last 10 years. All analyses except short-term runs with start dates chosen to be 2-sigma events, like the 1998 spike, indicate rising temperatures, including the last 10 years. I will stand by my statements on the surface temps, and the lack of a decline in recent years.
  2. I can recommend Paul N. Edwards (2010) "A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming". I'm just starting on the final chapter and it's by far the best text on the origins and applications of modeling in climate science. Highly recommended!
  3. Mats (Frick), thanks for advising about "A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming". I responded on 31st July and 1st August but my comment was removed so I’ve modified it a little and hope it is now acceptable to the moderator. I note that Professor Edwards is not a scientist involved in any of the numerous disciplines contributing to improving our poor understanding of global climate processes and drivers, so I wonder what has convinced him that “I think climate change is real, and I think it is the biggest threat the world faces now and will face for generations to come. ... Climate change is not a matter of opinion, belief, or ideology. This book is about how we came to know what we know about climate — how we make climate knowledge” (Intro xiv). It’s that “climate change is .. the biggest threat the world faces .. “ bit that I disagree with. He makes no mention of “uncertainty” anywhere in that introduction, which makes me suspicious about the extent of his understanding of those processes and drivers and of his environmental activism. I will be getting a copy, but as you’ve read most of the book can you tell me if he touches on any of that, or on the subject of validation? The manner in which Edwards presents what Professor Freeman Dyson said about the reliance on models gives a somewhat different impression to how I interpret them. I think that Dyson was referring specifically to computer models rather than models in general. Chapter 13 is the one that I’m most interested in reading – any comments on that, taking into consideration my previous comment here? I’ll have a more careful read after my holiday and get back to you. Best regards, Pete Ridley.
  4. Jo Nova’s blog has an interesting new article “The models are wrong (but only by 400%) ” (Note 1) which you should have a look at, along with the comments. It covers the recent paper “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series” (Note 2) co-authored by those well-known and respected expert statisticians, McIntyre and McKitrick, along with Chad Herman. David Stockwell sums up the importance of this new paper with “This represents a basic validation test of climate models over a 30 year period, a validation test which SHOULD be fundamental to any belief in the models, and their usefulness for projections of global warming in the future”. David provides a more detailed comment on his Niche Modeling blog “How Bad are Climate Models? Temperature” thread (Note 3) in which he concludes “But you can rest assured. The models, in important ways that were once claimed to be proof of “… a discernible human influence on global climate”, are now shown to be FUBAR. Wouldn’t it have been better if they had just done the validation tests and rejected the models before trying to rule the world with them?”. Come on you model worshipers, let’s have your refutation of the McIntyre et al. paper. NOTES: 1) see http://joannenova.com.au/2010/08/the-models-are-wrong-but-only-by-400/#more-9813 2) see http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/mmh_asl2010.pdf 3) see http://landshape.org/enm/how-bad-are-climate-models/ Best regards, Pete Ridley
  5. James Annan comments on M&M's comment as published in ASL: "A commenter pointed me towards this, which has apparently been accepted for publication in ASL. It's the same sorry old tale of someone comparing an ensemble of models to data, but doing so by checking whether the observations match the ensemble mean. Well, duh. Of course the obs don't match the ensemble mean. Even the models don't match the ensemble mean - and this difference will frequently be statistically significant (depending on how much data you use). Is anyone seriously going to argue on the basis of this that the models don't predict their own behaviour? If not, why on Earth should it be considered a meaningful test of how well the models simulate reality? Of course the IPCC Experts did effectively endorse this type of analysis in their recent 'expert guidance' note, where they remark (entirely uncritically) that statistical methods may assume that 'each ensemble member is sampled from a distribution centered around the truth'. But it's utterly bogus nevertheless, as there is no plausible situation in which that can occur, for any ensemble prediction system, ever. Having said that, IMO a correct comparison of the models with these obs does show the consistency to be somewhat tenuous, as we demonstrated in that (in)famous Heartland presentation. It is quite possible that they will diverge more conclusively in the future. Or they may not. They haven't yet." (Annan) Should be quite a stir out of this, papers of this sort being few and far between. Worth noting that Annan is an unflinching critic of whatever he sees wrong w/IPCC, etc. Probably a useful snapshot metric of the significance of M&M's output here. [A small numerical illustration of this ensemble-mean point appears after the comment thread.]
  6. I might add as a gratuitous fling, the amount of back-slapping and rejoicing around M&M's first accepted comment in years is indicative of the general poverty of their camp. Looking at the comment threads erupting around this I'm reminded of meat being thrown into a kennel full of emaciated dogs. Folks outside the kennel have more to eat than they care to look at; frankly, they are amply fed with dismal facts. Less gratuitously, this publication immediately moves me to point out that not everybody can feed from the meal on offer. Those who've committed themselves to trying to show the temperature observations under discussion are meaningless will have to go hungry unless they disagree w/M&M. Those saying there's no trend will also have to continue listening to their stomachs rumbling, because again M&M's results depend on observing a trend.
  7. Pete, worth noting also that you won't find a refutation to M&M 2010 coming from here; you'll find it reported if and when such a refutation appears. The sites you mention are in full celebration, but of course they're not adding any information of their own - the actual information is all in the paper itself. Hopefully for M&M the party won't be over before their work actually appears in print. :-)
  8. Pete Ridley at 07:12 AM on 11 August, 2010 I'm guessing you read the comments and not the details of the paper? The paper itself is interesting, as M,M&H confirm that tropical Lower Troposphere temperature trends from 1979 to end of 2009 are significantly positive, and to an extent reflect the earlier views in Santer 2008. See my comment on the tropospheric hot-spot for some background on this. At the time of writing that comment I suggested that the inclusion of the 2010 data would allow the trends to more closely approach statistical “robustness”, so confirmation is a useful step. They also confirm the known issues with the earlier models used by Santer, and also confirm that the differences between the UAH and RSS MSU datasets are now statistically significant. For the Tropical Lower Troposphere temperature data they quote “In this case the 1979-2009 interval is a 31-year span during which the upward trend in surface data strongly suggests a climate-scale warming process”. That the original model was flawed in this case is old news, and this has been discussed here previously. I note once again that some of your sources (and the comments on this new paper) lack context and scientific objectivity.
  9. There is a trade-off between concern for the most vulnerable and mistrust of governments. I am not a confirmed believer in the network of socialists doctoring results for their Trotskyite masters. That said, inevitably there will be instances where the responsibility of stewardship weighs heavy on scientific rigour. The code should be available so we can move on. We all agree models will be better in the future. Not to heed what they are currently delivering is an imprudence beyond recall.
  10. Fun! Schmidt and Knappenberger are found at Annan's blog, discussing M&M 2010. Minor celebrities! For extra credit in "Climate Science Arcana" coursework, follow the "old dark smear" links at the top of Annan's post. Those have a bit of useful background material on the M&M 2010 treatment of Santer 2008, to do with RPjr. If you have a clue what that's all about, you spend too much time on climate blogs.
  11. Do any climate models have substantial agreement with the last century of precipitation data?
  12. rcglinski. Not precipitation and not a century, but this item gives a really neat alignment of humidity over the last 40 years. I've not followed the references through, but you might find some leads to what you're after if you do. http://tamino.wordpress.com/2010/08/08/urban-wet-island/#comments
  13. Well that comment of mine on 11th August @ 07:12 did elicit some interesting responses but, as Doug acknowledged @ 08:03 “you won't find a refutation to M&M 2010 coming from here”. I think that Doug’s contribution @ 14:10 offered the best read, at friend James’s blog (Note 1). There are lots of interesting comments there, the one that I found most appropriate being from Ron Cram on 12th August @ 01:10 QUOTE: Gavin writes "It is also perhaps time for people to stop trying to reject 'models' in general, and instead try and be specific." People are not trying to reject models in general. It has already been done. Generally speaking commenters are bringing up points already published in Orrin Pilkey's book "Useless Arithmetic: Why Environmental Scientists Can't Predict the Future." Nature is simply too chaotic to be predicted by mathematical formulas, no matter how sophisticated the software or powerful the hardware. None of the models relied on by the IPCC have been validated. It is fair to say the models are non-validated, non-physical and non-sensical. Perhaps it is time to quit pretending otherwise UNQUOTE. NOTE: 1) see http://julesandjames.blogspot.com/2010/08/how-not-to-compare-models-to-data-part.html#comments Best regards, Pete Ridley
  14. Pete, regarding validation you ought to take a look at Hargreaves' remarks here. Concerning that item, be sure also to read Annan's remarks here where as you can see he leads us to the conclusion that making broad condemnatory statements about purported lack of model utility is not circumspect.
  15. Doug, thanks for that link to Julia Hargreaves’s paper. I wholeheartedly agree with her conclusion that “Uncertainty analysis is a powerful, and under utilized, tool which can place bounds on the state of current knowledge and point the way for future research, but it is only by better understanding the processes and inclusion of these processes in the models that the best models can provide predictions that are both more credible and closer to the truth”. There’s a lot more research to be done into obtaining a proper understanding of those horrendously complicated and poorly understood global climate processes and drivers before any reliable models can be constructed and used for predictions. Best regards, Pete Ridley
  16. Yeah, Pete: circumspect, conservative. Hargreaves notes that Hansen's 1988 model passes the "null hypothesis" test but does not leap to any conclusions about "all the models are really great."
  17. KR, on 30th July at 02:41 (#228) you said that “Regarding temperature data .. there are three independent data sets .. ”. NASA appears to think otherwise according to its 3rd August draft of paper "Global surface temperature change". It says “Analyses of global surface temperature change are routinely carried out by several groups, including the NASA Goddard Institute for Space Studies, the NOAA National Climatic Data Center (NCDC), and a joint effort of the UK Met Office Hadley Centre and the University of East Anglia Climatic Research Unit (HadCRUT). These analyses are not independent, as they must use much the same input observations.” (See http://data.giss.nasa.gov/gistemp/paper/gistemp2010_draft0803.pdf). Any comment? Best regards, Pete Ridley
  18. Pete Ridley - I believe that those various analyses you list are based on the GHCN database, which as I had noted has had a lot of analysis applied to it. The independent GSOD and satellite data sets match trends with the GHCN database (in pretty much any analysis whatsoever). This is shown in the Assessing global surface temperature reconstructions thread. That's an excellent support for the data, and indicates (in the absence of any contradictory data) that these trends are real. The lowest estimate on warming is from the UAH analysis of satellite data (~0.13 C/decade?), which has had some known issues. Averaging the various estimates of land/sea increase gives a number closer to ~0.16 C/decade.
  19. New (model) model comes online: BOULDER—Scientists can now study climate change in far more detail with powerful new computer software released by the National Center for Atmospheric Research (NCAR). ... The CESM builds on the Community Climate System Model, which NCAR scientists and collaborators have regularly updated since first developing it more than a decade ago. The new model enables scientists to gain a broader picture of Earth’s climate system by incorporating more influences. Using the CESM, researchers can now simulate the interaction of marine ecosystems with greenhouse gases; the climatic influence of ozone, dust, and other atmospheric chemicals; the cycling of carbon through the atmosphere, oceans, and land surfaces; and the influence of greenhouse gases on the upper atmosphere. In addition, an entirely new representation of atmospheric processes in the CESM will allow researchers to pursue a much wider variety of applications, including studies of air quality and biogeochemical feedback mechanisms. Press release Release includes this remarkable picture (click for full resolution): "Modeling climate’s complexity. This image, taken from a larger simulation of 20th century climate, depicts several aspects of Earth’s climate system. Sea surface temperatures and sea ice concentrations are shown by the two color scales. The figure also captures sea level pressure and low-level winds, including warmer air moving north on the eastern side of low-pressure regions and colder air moving south on the western side of the lows."
  20. A good, short essay on the role of computer models in science is in the journal Communications of the ACM, the September 2010 issue, page 5. I can see it online for free, but I don't know if that's because I'm an ACM member: Science Has Only Two Legs.
  21. Tom, thanks, and that link does appear to work for us in the Great Unwashed Masses. Vardi makes an excellent point.
  22. johnd wrote (on another thread) : "With regards to the previous original reference to the JAMSTEC discussion, I provided the full link so that anyone interested could have full access to the entire discussion, as you so obviously had done, I therefore could not have been accused of being selective or cherry picking parts of the discussion to suit." Unless the particular comment of yours, to which you are referring, has been deleted, I cannot see where you have provided that link. All I can see from you is this from your post which contained the actual quote : His details are at http://cawcr.gov.au/bmrc/clfor/cfstaff/hhendon.htm As far as I am aware, the link to the actual email discussion was provided by doug_bostrom. That is why I replied in the ways (here and here) I did.
  23. Continuing from here. "The algebra of probabilistic distributions is extremely complex ... Extrapolating complex environmental data described by complex statistical relationships into the future is indeed a difficult process" What you've described (in the linked comment) sounds like a fairly routine problem in particle physics. And yet we build devices that rely on the motion of electrons through semiconductors; we manage to collide protons and anti-protons with statistical certainty (and avoid hysterical claims that we'd be creating mini black-holes in the process). We can even make sense of the results and produce a very competent model of the sub-atomic world. Your argument suggests that if a problem is too complex, we can't put any faith in a model solution. The implication is that it's a waste of time and money to begin that process. Yet that is a challenge that has been met successfully in other disciplines. Some of the comments above suggest that it can work here as well.
  24. Also, there are some things we can say with a lot of certainty:
- CO2 is increasing (want to guess the uncertainty there?)
- CO2 increase is anthropogenic.
- CO2 is a greenhouse gas.
The major uncertainty to quantify then is sensitivity. Models estimate it, and I agree there are issues, but it's not the only way to estimate sensitivity.
  25. Suppose they took the world's best computers, and the best modellers, and worked on the last twenty years Kentucky Derby results and form. Eventually they produce a model that predicted them all. Would you sell your house, and put the money on the same model's prediction for the next Kentucky Derby?
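
Picking up the statistical point quoted in comment 5: with a large ensemble, the standard error of the ensemble mean shrinks toward zero, so almost any single realisation - an individual model run, or reality itself - will differ "significantly" from the mean while still sitting comfortably inside the ensemble spread. A minimal synthetic sketch of that point, with all numbers made up for illustration:

```python
# Synthetic illustration of the ensemble-mean fallacy: testing
# observations against the ensemble MEAN (with its tiny standard error)
# rejects almost everything, including the models themselves; testing
# against the ensemble SPREAD is the meaningful check.

import numpy as np

rng = np.random.default_rng(42)
n_models = 50
member_trends = rng.normal(0.20, 0.06, n_models)   # K/decade (made up)

mean = member_trends.mean()
sem = member_trends.std(ddof=1) / np.sqrt(n_models)  # std. error of the mean
spread = member_trends.std(ddof=1)                   # ensemble spread

obs_trend = 0.17   # hypothetical observed trend (made up)

near_mean = abs(obs_trend - mean) < 2.0 * sem
in_spread = abs(obs_trend - mean) < 2.0 * spread
print(f"Against mean +/- 2*SEM:  {'consistent' if near_mean else 'reject'}")
print(f"Against spread +/- 2*SD: {'consistent' if in_spread else 'reject'}")

# Even individual ensemble members fail the mean-based test:
fails = np.abs(member_trends - mean) > 2.0 * sem
print(f"{fails.sum()} of {n_models} members 'differ significantly' from "
      "their own ensemble mean by the mean-based test")
```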


