Recent Comments
Comments 125251 to 125300:
Gordon1368 at 07:26 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
jpark, I doubt that Watts et al photographed all of the stations, including well sited ones. This study shows that both well sited and poorly sited stations show the same basic trend, and that the bias in poorly sited stations is to cooler temperatures. What could be clearer than that? You haven't seen a bad week in AGW yet. You may well see many in your lifetime. I hope we don't. -
batsvensson at 07:20 AM on 24 January 2010 | The chaos of confusing the concepts
Errata for comments #13 (and #20): "... less to weather being non-chaotic and more to weather being affected by a ..." The above is an editing confusion of mine. I usually edit text a lot before I make a post. In this case I was considering using the word "chaotic" OR "non-linear", and apparently it all got mixed up in the final edit. :( There is some editing confusion in post #20 as well. For instance: "because of the presence of a forcing from linearity in that make Ned able to do the predict as he did." was intended to be: "because of the presence of a forcing from linearity in the system that one is able to do a prediction as Ned did". -
Gordon1368 at 07:07 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
I'm not a scientist, but this topic fascinates me. I do have 30 years of professional experience to help guide me. When a colleague speaks in abruptly dismissive terms, claiming something is "useless," "trash," or "not really worth serious discussion" I pay attention, but my guard goes up. My years of experience have taught me to listen, but be skeptical. I have rarely found that such a tone is warranted. Here again, I appreciate the careful explanations by people who have responded. I am not a blind believer, but I do have confidence that serious professionals are sincere and careful in their effort, and are correct more often than not. I think that the argument that temperatures are rising is well backed by the loss of sea ice extent, and especially the rapid loss of multi-year ice in the past couple years. The next few years may be telling. -
jpark at 07:05 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
Hi Doug! Many thanks, nice explanation. - I do understand the paper but still feel it does not, like a lot of posts here, answer the quite basic Watts question of how accurate the stations are. http://wattsupwiththat.com/2010/01/23/quote-if-the-week-27/#more-15561 And a picture tells a thousand words - how do you convince people of global, or even just US, warming when you see a weather station next to an air conditioner? It has been a bad week for AGW. -
Doug Bostrom at 06:53 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
jpark at 06:37 AM on 24 January, 2010 It's not actually a debate, rather repeated attempts at explanation. Try this one, at home if you like, but it's so simple you'll probably find words do the job: Set up a thermometer in a large room where the temperature is steady. Let the thermometer stabilize at ambient temperature. Now turn on a small lamp next to the thermometer, close enough to warm it a bit. You'll see an immediate bias in the reading given by the thermometer; the reading will be higher than the ambient temperature in the room. Let the thermometer stabilize again. Now slowly raise the temperature of the room. The thermometer will still register the increase in the temperature of the room. We've learned that bias does not make it impossible to extract a trend in temperature. It's really -that- simple. Not so hard, really, but easy to lose in a detailed technical explanation. To me it seems what we have here, after all the heat and light is stripped away, is the famous "failure to communicate". -
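Doug's lamp-and-thermometer experiment can also be sketched numerically. A minimal illustration, with a made-up warming rate and bias (none of these numbers come from the comment):

```python
import numpy as np

days = np.arange(100)
room_temp = 20.0 + 0.05 * days      # room warming at 0.05 C/day
lamp_bias = 3.0                     # constant warm bias from the nearby lamp
reading = room_temp + lamp_bias     # what the biased thermometer reports

true_slope = np.polyfit(days, room_temp, 1)[0]
biased_slope = np.polyfit(days, reading, 1)[0]
print(round(true_slope, 3), round(biased_slope, 3))  # both 0.05: the bias shifts the level, not the trend
```

The same cancellation is why anomaly-based trend analyses tolerate a constant siting bias. -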
michaelkourlas at 06:38 AM on 24 January 2010 | It's the sun
@Tom Dayton Thanks, I hadn't seen that. -
jpark at 06:37 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
Interesting debate. So we are talking about measuring trends vs actual data, yes? But this does not really answer the Watts paper http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf To me Kforestcat, along with the Watts paper, makes sense. I can't see the point of the Menne exercise - why bother with a trend when you could measure how good the station was at measuring temperature? Why not put the army of volunteers to good use - how long would it take? -
Tom Dayton at 06:35 AM on 24 January 2010 | It's the sun
michaelkourlas, that 2007 paper by Friis-Christensen and Svensmark is old news. See Svensmark and Friis-Christensen rebut Lockwood’s solar paper. -
michaelkourlas at 06:31 AM on 24 January 2010 | Climategate CRU emails suggest conspiracy
Regarding "hide the decline": If it is true that tree rings are definitely inaccurate after 1960 (having compared them with the instrumental temperature record), shouldn't we question the entire data set, as that might be flawed too?
Response: This is a good question and is explored in Tree-ring proxies and the divergence problem. In short, tree-ring proxies show good agreement with other proxies before 1960 and also show good agreement with tree-ring proxies that don't show divergence (eg - at lower latitudes). This indicates divergence is a purely recent phenomenon (and hints that there's a good chance it's anthropogenic in cause). -
michaelkourlas at 06:28 AM on 24 January 2010 | It's the sun
This paper (http://www.spacecenter.dk/publications/scientific-report-series/Scient_No._3.pdf/view), published in 2007 by Eigil Friis-Christensen and Henrik Svensmark at the Danish National Space Center, is a response to Lockwood and Fröhlich's paper disputing the correlation between solar activity and land surface temperature. This new paper discusses the correlation between cosmic rays (solar activity) and sea surface temperature/atmospheric temperature. In both cases there is a clear correlation. While Lockwood and Fröhlich are correct in saying there is a divergence between solar activity and land surface temperature, the correlation remains true for two other temperature data sets (sea surface temp. and tropospheric temp.) Thus, one must question the validity of the land surface measurements, and admit the possibility that the sun may be playing a major role in current global warming. -
batsvensson at 06:16 AM on 24 January 2010 | The chaos of confusing the concepts
@chris, #15. "Surely Ned is basing his prediction on "the system itself"." Sure, I agree with that, and I see I was a bit unclear with my point, my apologies for that. My point was to try to make a distinction, and to separate, linear and non-linear elements in a system, and to clarify that it is because of the presence of a forcing from linearity in the system that one is able to do a prediction as Ned did. I didn't mean to say this is not part of the system, but to say it can be seen as separate from the non-linearity of the system. I sometimes notice that some people seem to believe that there is a proportional, linear relation between CO2 levels and global mean temperature. This relation is, though, not so trivial, as the greenhouse effect from CO2 is said to be a non-linear function of the concentration. In other words, the contributing effect from a linear increase of CO2 will not change as rapidly as temperature; therefore, unless we are working with a system that locally can be said to be linear, if both CO2 and temperature increase linearly with respect to each other then I would suspect there to be yet another factor in the equation. -
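The non-linearity batsvensson refers to is often summarized with the simplified logarithmic forcing expression dF = 5.35 * ln(C/C0) W/m^2; a sketch under that assumption (the formula choice and concentrations are mine, not the comment's):

```python
import math

C0 = 280.0  # assumed pre-industrial CO2 concentration, ppm
for C in (280.0, 390.0, 560.0, 1120.0):
    dF = 5.35 * math.log(C / C0)   # simplified logarithmic forcing, W/m^2
    print(int(C), round(dF, 2))
# Each doubling of CO2 adds the same ~3.7 W/m^2, so equal linear
# increments of concentration contribute progressively less forcing.
```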
Jacob Bock Axelsen at 06:01 AM on 24 January 2010 | The chaos of confusing the concepts
@stevecarsonr and Marcel Bökstedt Thanks for your questions, which I will try to comment on. Please bear with me for trying to answer by arguing from the variables of the Rayleigh number. Consider two plates (hot and cold) enclosing a convecting fluid: http://en.wikipedia.org/wiki/File:Convection-snapshot.gif http://en.wikipedia.org/wiki/B%C3%A9nard_cells This is the Rayleigh number: Ra = gravity * expansion coefficient * system size * temperature gradient / (viscosity * conductivity * diffusivity) = g*b*D^3*dT/(v*a*k). In the Lorenz attractor Ra must be above the threshold Ra = 13.926 to exhibit any chaos, and below it the dynamics are predictable. For instance, my plots are for Lorenz's own choice of Ra = 28. The idea that chaos is prevented by boundedness can then be understood: just decrease D or dT sufficiently to end up below the threshold. I was using the 'leash' analogy differently: the mean global temperature is determined as a steady state of huge energy fluxes. It is suspended by the Sun pulling up and the heat loss to space pulling down. To exhibit chaos you need to be able to delay heat transport (advection) through fluid dynamics, and with El Niño being the largest phenomenon of relevance we are still far away from fully developed climate chaos. Notice that sea levels increase on the order of centimeters during an El Niño - this is the small expansion coefficient of water. Make b small and you move away from chaos. The thermohaline circulation (THC) is a true convection roll resulting from density change. However, the engine of the THC is surface cooling in the Arctic, which global warming might turn off. If dT cannot drive even laminar currents, then we have a smaller dT and a lesser probability of chaos. I mentioned aerosols in the post, but they are much more transient than CO2. Much like airborne water, aerosols are argued to be fighting a negative feedback: cloud seeding, gravity and precipitation. My understanding is that clouds more or less cancel out in climate models. If aerosols cool, they lessen dT for possible oceanic chaos. Interestingly, dust deposition on glaciers is hypothesized to be part of the ice age trigger: http://forecast.uoa.gr/conferences/iamas/10july/4b/69_smn_dst_dam_iamas_200707.pdf I hope you find these comments useful. -
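The threshold behaviour Jacob describes can be sketched with the Lorenz equations themselves. A rough forward-Euler integration, where r plays the role of the reduced Rayleigh number (the step size, initial condition and sub-threshold r value are my choices, not from the comment):

```python
import math

def lorenz_step(x, y, z, r, sigma=10.0, b=8.0 / 3.0, dt=0.01):
    # one forward-Euler step of the Lorenz system
    dx = sigma * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(r, steps=20000):
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z, r)
    return x, y, z

print(run(28.0))  # Lorenz's chaotic choice: irregular, bounded wandering
print(run(1.5))   # well below threshold: settles onto a fixed point near (1.15, 1.15, 0.5)
```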
Riccardo at 04:29 AM on 24 January 2010 | The IPCC's 2035 prediction about Himalayan glaciers
Charlie A, I didn't claim it was right or wrong; I just pointed out that, plugging in the correct starting time, the calculation of the rate is correct. -
dhogaza at 04:04 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
Or maybe even restrain myself :) -
dhogaza at 04:04 AM on 24 January 2010 | On the reliability of the U.S. Surface Temperature Record
"A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degree C - no matter what the variation occurring due to instrumentation type." We're interested in trends, so a constant bias has no effect, nor does the choice of baseline from which to compute the anomaly. For any bias B, and any two temperature readings at points in time N0 and N1, (N0-B) - (N1-B) = N0 - N1. And you can extend that into any statistical trend analysis taken over a time series N0 ... Nn. "I'm a chemical engineer with U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my phD's came to me with this nonsense, I'd fire him on the spot." I could make a snarky statement about 9th grade algebra students but I'll withstrain myself. -
nofreewind at 00:12 AM on 24 January 2010 | The IPCC's 2035 prediction about Himalayan glaciers
I live in the East USA but have skied annually in California/the West, and talking about glaciers as a water supply seems awfully SILLY. Whatever glaciers there are in California are teeny (most all of the snow melts by September) and their contribution to melt has to be very small compared to the general melting snowpack. Worrying about glaciers, without considering an overall predicted rise in precipitation from AGW "theory", does not seem right to me. And even with the worst case AGW scenarios coming true, say a 4F rise in temp, is that going to stop a snowpack from forming? I don't think so. Here in the Eastern US snowfall is extremely variable, with some years having very little snowpack, yet our rivers flow all year long, except in periods of a true drought, when they just flow low. I conclude the glacier scare is nothing but that, another scare. I don't have the deep scientific knowledge of AGW "theory" that many of you have, but common sense appears once again to rule the day on the Himalayan glacier/melt issue! -
Ned at 22:44 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
kforestcat writes: "I'm fully aware of how anomaly data is used ( having used it in my own research)" I'm not sure you actually do understand this, because your comments still show the same kinds of errors and confusion. "NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested)." NASA doesn't take temperature station readings, and nobody has suggested that the temperature sensors measure anomalies directly. "Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location." That is very confused. The temperature anomaly is the current daily (or monthly) temperature minus the mean temperature on the same day (or month) during a given reference period. You don't "divide" any temperatures. And Menne et al. are not measuring "instrument error". They are analyzing measurements of temperature as a function of site quality in order to determine the difference in temperature trends between well-sited and poorly-sited stations. "Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events." You still completely fail to understand what's going on here. Menne et al. are taking the temperature data and grouping them into categories based on the site quality. They then determine the difference in long-term trends between well-sited and poorly-sited stations. In the raw, unadjusted data, poorly-sited stations tend to have a slightly lower trend than well-sited stations. 
The network homogenization and adjustment process brings poorly-sited stations into closer agreement with well-sited stations. "The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher." That is just bluster. What the analysis shows quite clearly is that if anything, poorly-sited stations on average underestimate the warming trend, but that the network adjustment process is able to successfully compensate for this effect. And even if you were reluctant to accept that, the close agreement between in-situ surface temperature and satellite microwave temperature retrievals from the lower troposphere suggests that the surface temperature record is realistic. "Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower; " Huh? Again, this makes no sense. If a sensor always reads 5C too high, its anomaly will be exactly the same as if it were perfectly sited. If a sensor's environment changes such that the current temperature is biased high relative to the period of record, then it will have a positive anomaly, not a negative one. "and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect)." That is both confused and irrelevant to the paper at hand. "Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different." (1) Menne et al. 
work for NOAA, not NASA, and the paper being discussed here is about NOAA's temperature data. (2) You still seem confused about the relationship between measured temperature data and calculated temperature anomaly. (3) The entire point of this paper is to compare poorly-sited and well-sited stations. (4) By doing this comparison using trends in the anomaly rather than using the absolute temperatures, there's no need to compare stations within "a couple of hundred feet" of each other. "As it stands Menne's data is useless nonsense and not really worth serious discussion." Again, that is just bluster. It sounds to me like you don't understand the subject but are deeply invested in casting doubt on it. -
chris at 22:13 PM on 23 January 2010 | Skeptical Science now an iPhone app
re #19 "The numbers don't seem to add up" The numbers add up pretty well if one considers the system in its entirety (all the forcings and a realistic assessment of climate response times). So, for example, the 20th century global temperature evolution can be reproduced rather well by incorporating all of the contributions and climate response times [*] (see Figure 1): [*] http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf It's possible to illustrate part of the difficulty with your analysis by considering the global temperature from the late 19th century to the mid 20th century [**]. The global warming during this period wasn't more than around 0.2-0.3 oC overall. It's just that the surface temperature was knocked back quite a bit for a while (see post just above) by volcanic activity. So the net warming in response to your net forcing of 0.5 W/m2 1910-1940 likely wasn't more than 0.2-0.3 oC (perhaps even a bit less, if there was a significant contribution from ocean current effects of the sort that Tsonis and Swanson have discussed). But the bottom line is that the net effect can only be assessed by a realistic incorporation of all of the contributions and the earth's responses to these.... [**] http://www.cru.uea.ac.uk/cru/data/temperature/nhshgl.gif -
chris at 21:44 PM on 23 January 2010 | Skeptical Science now an iPhone app
You're confusing "lag" with "time constant"/"response time", HumanityRules (see my post #20). It's pretty straightforward: make a step change in a forcing to a new value. The earth starts to warm essentially immediately (no lag!). The time taken for the earth to come to equilibrium with the new forcing is a function of the time constants/response times of the system (rapid time response of a few years in the atmosphere; slower time constants for penetration of heat into the "deeper" elements of the climate system, with a very slow response time indeed for the vast oceans to come towards equilibrium with the forcing). It's the latter that gives the "heat in the pipeline" that you remarked upon. That's all very straightforward, I think. The mistake is to think that the response of the surface temperature can be encapsulated within individual simple hived-off pieces of the whole. For example we could look at the temperature rise during the early 20th century. There was some very dramatic volcanic activity in the late 19th century/early 20th century, and inspection of the global temperature record [*] shows that this knocked back the surface temperature by quite a large amount (0.2-0.3 oC) during a period of 20-odd years. However volcanic forcings are temporary; they have a significant short-term effect on the surface temperature, which can be prolonged if there is a period of sustained volcanic activity [as in the period 1883 (Krakatoa) through to Soufrière, Santa María and Mt Pelée in 1902], and so their effects don't penetrate "deeply" into the climate system. So much of the earth's surface temperature suppression due to volcanic forcing was recovered relatively quickly through the period 1910-1930s. There was also a small solar contribution and an enhanced greenhouse effect contribution to the early 20th century temperature rise. The earth responds to these again without lag, but the full response to these persistent, long-term forcings will take a long time to saturate the elements of the climate system that have a high inertia to change (i.e. the oceans). The earth still hasn't come fully to equilibrium with the enhanced forcing as it stood in 1940 (say), let alone with the forcing as it stands at this particular point in time. Obviously, though, if we want to attribute the contributions to the 20th century temperature evolution, we have to consider all of these (including the negative forcing contributions like anthropogenic aerosols), and the manner that the earth responds to these. It's not that complex. However it does require thinking (modelling) of the system in its entirety. One can't insist on cutting everything right back to individual components and simplistic responses and then complain that reality doesn't conform to a grossly oversimplified view - that's essentially to use straw-man argumentation! -
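The distinction chris draws (an immediate response, but a long equilibration) is the behaviour of a simple one-box energy-balance model, C dT/dt = F - lambda*T. A hedged sketch; the forcing, feedback parameter and response time below are illustrative assumptions, not values from the comment:

```python
import math

F = 3.7     # step forcing, W/m^2 (illustrative)
lam = 1.2   # feedback parameter, W/m^2 per K (illustrative)
tau = 8.0   # effective response time, years (illustrative)

def temp(t):
    """Warming t years after the step forcing switches on."""
    return (F / lam) * (1.0 - math.exp(-t / tau))

print(round(temp(0.1), 3))    # non-zero right away: no lag
print(round(temp(5.0), 2))    # still well short of equilibrium
print(round(temp(100.0), 2))  # ~F/lam: the remainder was "in the pipeline"
```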
angliss at 16:27 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
Kforestcat, I'm sorry, but you're off in the weeds on this one. What you describe with your pavement example is an example of signal + bias + noise. Because the instrument's location is constant, we can eventually come up with a correction mechanism to remove the bias from the data. That leaves us with signal + noise. Removing the noise is simple filtering, of which averaging is one variety. Mathematically, averaging a signal removes noise (increases the signal-to-noise ratio) at the rate of the square root of the number of samples. Averaging daily samples over the course of a week increases the SNR by nearly 3 over any single sample. So if we picked up thermal noise from a car one day, then we merely have to average that data point with others from the same instrument in order to dramatically reduce the impact of that noisy sample on the overall data. I'll grant you that, if you only have a single data point, biases and noise on that data point will be a major problem. But that's not the case with the temperature record. -
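The square-root-of-N claim in the comment above can be checked directly. A sketch using synthetic noise (the noise level and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                          # per-reading noise, deg C (arbitrary)
n_trials, n_days = 20000, 7

noise = rng.normal(0.0, sigma, size=(n_trials, n_days))
single = noise[:, 0].std()           # spread of a single daily reading
weekly = noise.mean(axis=1).std()    # spread of the 7-day average

print(round(single / weekly, 2))     # close to sqrt(7) ~ 2.65
```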
chris1204 at 16:22 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
Regardless of site, we see an obvious rising temperature gradient from 1980 to 2009 based on a line of best fit. However, ‘eyeballing’ the unadjusted data suggests a striking fall based on a line of best-fit beginning with an anomalously high 1998 to 2009. Now we certainly don't want to cherry pick. The 1998 data was attributable to a very large El Nino. However, following on from the preceding post (‘The chaos of confusing the concepts’) with its discussion of the Lorenz attractor, I find myself wondering whether we may indeed be seeing evidence of greater inherent unpredictability than we commonly suppose. Eleven years after all seems a long period, especially when we consider the preceding data set covers eighteen years. Should we be considering the two periods as one segment? Alternatively, should we be considering these periods as two distinct segments and asking why 1998 produced such a high El Nino (followed by a relatively warm period) and why 2007 – 2009 are producing a much lower gradient? Moreover, is this gradient likely to continue? I think the question of site location is clearly a furphy given the broad consistency between better and not so well located sites. However, deciding which periods we select to measure trends is of much more fundamental importance given the arbitrary nature of lines of best fit. Otherwise, we risk failing to ask obvious questions. -
Tom Dayton at 16:01 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
Kforestcat, in your example you wrote "Say the mean 1971-2000 temperature well away from the parking lot...." But that's not of interest. Instead, the temperature on that given day, from that parking-lot-situated instrument, is differenced from the average temperature across 1971-2000 of that same instrument. -
Tom Dayton at 15:55 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
Kforestcat, of course the temperature stations produce absolute temperatures as their "raw" data rather than as anomalies from a baseline. I have never seen anyone claim otherwise. You are misreading quite drastically. The baseline against which the anomalies are computed, is the average temperature for that specific locality across whatever time range has been chosen as the baseline. Each station has its own, local, baseline computed. Then each individual temperature reading from that one given station is differenced from that baseline for that one given station. The result is a difference of that one reading, from that tailored baseline. That procedure is done separately for each individual temperature reading, each against its own individual, tailored, baseline. It is a simple mathematical transformation that has nothing to do with instrument error and nothing to do with instrument calibration. It is a simple re-expression of each individual temperature reading that preserves all changes from the baseline temperature. The resulting collection of individually transformed temperatures is the collection of "raw" anomalies. Those are the "raw" data that you see being discussed. -
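Tom's description of the per-station transformation can be sketched in a few lines; the station names, readings and baseline window below are made up for illustration:

```python
temps = {
    "station_A": [14.1, 14.5, 15.0, 15.4],  # absolute readings, deg C
    "station_B": [21.2, 21.5, 22.1, 22.4],  # warmer site, similar trend
}

def anomalies(readings, baseline=slice(0, 2)):
    # each station is differenced against its OWN baseline mean
    base = sum(readings[baseline]) / len(readings[baseline])
    return [round(t - base, 2) for t in readings]

for name, readings in temps.items():
    print(name, anomalies(readings))
# The two stations sit ~7 C apart in absolute terms, yet their anomaly
# series are nearly identical: the constant level difference drops out.
```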
From Peru at 15:37 PM on 23 January 2010 | Why is Greenland's ice loss accelerating?
The paper states: "Our results show that both mass balance components, SMB and D (eq. S1), contributed equally to the post-1996 cumulative GrIS mass loss (Fig. 2A)." But then, Fig. 3 shows: Ice Discharge: -94 Gt/yr Surface Mass Balance: -144 Gt/yr Isn't this a contradiction? Then comes this statement: "A quadratic decrease (r^2 = 0.97) explains the 2000–2008 cumulative mass anomaly better than a linear fit (r^2 = 0.90). Equation S1 implies that when SMB-D is negative but constant in time, ice sheet mass will decrease linearly in time. If, however, SMB-D decreases linearly in time, as has been approximately the case since 2000 (fig. S3), ice sheet mass is indeed expected to decrease quadratically in time" What is this "r^2 = 0.97" and how is it related to the equations: MB = ∂M/∂t = SMB – D (S1) δM = ∫dt (SMB-D) = t (SMB0–D0) + ∫dt (δSMB–δD) (S4) Any idea? -
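On the r^2 question: r^2 is the coefficient of determination of a fit, and the quadratic-vs-linear point follows from integrating S1. A sketch with made-up numbers (not the paper's data): if SMB - D falls linearly, its time-integral, the cumulative mass anomaly, is exactly quadratic, so a quadratic fit reaches r^2 = 1 while a linear fit falls short:

```python
import numpy as np

t = np.arange(9.0)                  # years since 2000 (illustrative)
smb_minus_d = -100.0 - 20.0 * t     # Gt/yr, decreasing linearly (made up)

# cumulative mass anomaly: trapezoidal integral of SMB - D over time
dM = np.concatenate(([0.0], np.cumsum((smb_minus_d[:-1] + smb_minus_d[1:]) / 2.0)))

r2 = {}
for deg in (1, 2):                  # linear vs quadratic fit
    fit = np.polyval(np.polyfit(t, dM, deg), t)
    r2[deg] = 1.0 - np.sum((dM - fit) ** 2) / np.sum((dM - dM.mean()) ** 2)
    print(deg, round(r2[deg], 4))
```

With real, noisy data neither fit is perfect, which is how you end up with values like the quoted 0.97 vs 0.90. -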
Kforestcat at 15:23 PM on 23 January 2010 | On the reliability of the U.S. Surface Temperature Record
Gentlemen I'm fully aware of how anomaly data is used ( having used it in my own research) and I know full well what can go awry in the field experiments. We are talking about everyday instrument calibration and QA/QC - this is not rocket science. I firmly maintain the Menne 2010 paper is fundamentally flawed and entirely useless. NASA's individual station temperature readings are taken in absolute temperature (not as an anomaly as you have suggested). The temperature data is reduced to anomaly after the absolute temperature readings for a site are obtained. For example, see the station data for Orland (39.8 N, 122.2 W) obtained directly from NASA's GISS web site. The temperatures are recorded in Annual Mean Temperature in degrees C - not as an anomaly as you have suggested. (Tried to attach a NASA GIF as visual aid - but did not succeed). Bottom line. Menne has to have (and use) absolute temperature data to get the 1971-2000 mean temperature and then divide the current temp with the mean to get the anomaly. We are back to the same problem - Menne is measuring instrument error - he is not measuring error resulting from improper instrument location. The Menne paper is absolutely useless for the stated purpose. Anyone who actually collects field data (I have) knows they are going to immediately run into two fundamental problems when an instrument is improperly located: 1) they are not reading ambient air temperature and 2) neither temperature readings nor the anomaly can be corrected back to a true ambient because other factors are influencing the readings. For example: Suppose we have placed our instrument in a parking lot. Say the mean 1971-2000 temperature well away from the parking lot is 85F; but the instrument is improperly reading a mean of 90F. Now on a given day, say the ambient temp is 93F but your instrument is reading 105F (picked up some radiant heat from a car). Ok our: Actual anomaly is 93F - 85F = 8F; Instrument anomaly is 105F - 90F = 15F. The data is trash. There is simply no way to recover either the actual ambient temperatures nor an accurate anomaly reading. What you are missing is that an improperly placed instrument is reading air temperatures & anomalies influenced by unnatural events. The readings bear no relationship to either the actual temperature nor the actual anomaly - the data's no good, can't be corrected, and will not be used by a reputable researcher. Finally, it's not entirely surprising that Menne finds a downward bias in his individual anomaly readings at poorly situated sites. Because: 1) a poorly located instrument produces a higher mean temperature; hence, the anomaly will appear lower; and 2) generally there's a limit to how hot an improperly placed instrument will get (i.e. mixing of unnaturally heated air with ambient air will tend to cool the instrument - so the apparent temperature rise is lower than one might expect). Had Mennen (NASA) actually measured both absolute temperature and calculated anomaly data using instrumentation at properly setup sites, within say a couple of hundred feet of the poor sites, as a proper standard to measure the bias against - our conversation would be different. As it stands Menne's data is useless nonsense and not really worth serious discussion. Dave -
Charlie A at 15:15 PM on 23 January 2010 | The IPCC's 2035 prediction about Himalayan glaciers
nofreewind -- As Tom Dayton points out, the absence of glaciers just affects the timing of the water flow. Assuming constant annual precipitation, then the total annual water flow in the rivers will stay the same, but there will be a bigger seasonal variation. Without glaciers, melting snowpack would be the source of summer water flow in most Himalayan/Indian rivers. I've seen some non peer reviewed articles that said that the loss of glaciers would cause most rivers in India to go dry during the summer, but have not seen any peer reviewed articles that had any such drastic predictions. The alarmist articles seem to ignore the snowpack, which is the primary storage in many areas, such as California as mentioned above by Tom Dayton. -
Tom Dayton at 14:58 PM on 23 January 2010 | The IPCC's 2035 prediction about Himalayan glaciers
nofreewind, glaciers and snowpack are natural reservoirs of water not only in the Himalayas, but in California and many places around the world. They hang on to precipitation during the winter and dole it out gradually as meltwater as the weather warms into Spring and Summer, and even into Fall. Huge numbers of people, agriculture, and industry rely on the resulting somewhat steady and predictable supply of water around the year. It is impossible to build enough artificial reservoirs to compensate for the loss of those natural reservoirs. Also, excessive supply of water (because it is not being held long enough and doled out in measured quantities) causes flooding by exceeding the short-term capacities of the human infrastructure. -
nofreewind at 14:36 PM on 23 January 2010 | The IPCC's 2035 prediction about Himalayan glaciers
I don't understand why the glaciers are so important to water supply. We don't have glaciers here in Pennsylvania USA, yet the rivers flow year round. In the Himalayas, the water comes down the mountains, not because of glaciers, but because the monsoons bring snow to the mountains. If there is precipitation, the rivers will flow, right? Why is it so important that the water is hundreds or thousands of years old glacier water? Let's get rid of the glaciers and get some fresh water down to drink! -
angliss at 14:25 PM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Kforestcat, I think you may be confusing two things here - bias error and probabilistic noise. The paper makes it clear that the unadjusted curves represent the measurements before known bias errors are removed, while the "adjusted" curves are after the bias errors have been corrected. Conversion to an anomaly is effectively the same as normalization, and the purpose is the same. Both serve to accentuate the part of the data that you care about. I do both regularly in my professional field of electrical engineering, especially when I'm interested in understanding the nature of noise plaguing my circuitry. Finally, what Watts et al are essentially saying is that heat islands, in this case caused by electrical transformers, waste treatment plants, air conditioners, or pavement, have made the global temperature record unusable. This paper points out that Watts is incorrect, but it's not the first paper to do so by any means. The following paper showed that well-established urban areas had the exact same trends as rural areas, but with a removable warm temperature bias: http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml To use an analogy, if a trampoline can get you 10 feet into the air out on a farm, there's every reason to believe that it'll still get you 10 feet into the air if you move it into a city. -
HumanityRules at 13:38 PM on 23 January 2010Skeptical Science now an iPhone app
Chris I guess my point was made in #19. There is no evidence of lag in the early-mid century. The radiative forcing increase 1910-1940 coincides with a delta T which leaves nothing "in the pipeline". "the earth should somehow miraculously come instantaneously to the new forced surface temperature" it seems miracles did happen 1910-1940. For the proposed system to work, lag would have to be a late-20th-century phenomenon only. -
Charlie A at 13:37 PM on 23 January 2010The IPCC's 2035 prediction about Himalayan glaciers
Apparently, there are other errors in this section. The erroneous 2035 date has been acknowledged, but the IPCC has not acknowledged the error I've pointed out above in table 10.9. Another error is the statement, referring to Himalayan glaciers, that "Its total area will likely shrink from the present 500,000 to 100,000 square kilometers by the year 2035." The statement appears to have its original source in a 1996 article which states that the total worldwide extrapolar glacial area of 500k sq km is expected to go down to 100k sq km by 2350. I don't have a peer-reviewed source handy for total Himalayan glacial area, but the UNEP/WGMS report Global Glacier Changes says the total area of Himalayan glaciers is 33,040 sq kilometers, so this appears to be yet another clue that the statements in this section should have been reviewed more thoroughly. Georg Kaser, a lead author of a WG1 chapter, has said that he told others in the IPCC of the errors, but they chose not to correct them. The entire section on Himalayan glaciers is not of that much importance. What is more important is that this is yet another example of problems in the IPCC review process. Nominations for reviewers of AR5 are now being taken, but only from selected organizations. The IPCC would be well served to include some reviewers that don't have strong confirmation bias in favor of AGW, and to put procedures in place that don't allow the lead authors to rely almost exclusively on their own publications, to the exclusion of other peer-reviewed papers that conflict with the lead authors' opinions. -
Doug Bostrom at 13:29 PM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Swerving off topic so possibly may never see the light of day, but further to remarks on skepticism versus denial, etc., English is a rich language and there's no need to use a single word to describe a plethora of approaches. Doubter, contrarian, skeptic, denier, they're all different in meaning and need to be applied individually. "Faithful" would be a better word for some, for that matter, seemingly detached from the material world. My limited experience w/participating in discussions on this topic tells me I'm generally far too hasty in categorizing, to the point where I've already had to resort to apology too often, enough to make me more cautious about committing accidental slurs. As is said, discretion is the better part of valor. -
Charlie A at 12:48 PM on 23 January 2010The IPCC's 2035 prediction about Himalayan glaciers
19 Ricardo says "it's a typo, the starting year is 1947 not 1847." Ricardo, what is your data source for this statement? 2840 meters of retreat from 1845 to 1966 is consistent with other reports of 1600 meters of retreat from 1847 to 1906 (27 meters/year) and 1040 meters of retreat from 1906 to 1958 (20 meters per year). What is your source for saying that the starting year is 1947? My figures come from http://iahs.info/redbooks/a058/05828.pdf and several other sources with similar numbers. The current retreat rate of 10 meters/year comes from the 9th volume of Fluctuations of Glaciers, issued by the World Glacier Monitoring Service. If AR4 is incorrect and the other sources correct, then the snout of the Pindari has slowed from 27 m/yr up to 1906, to 20 meters per year to 1958, to 10 meters per year to 2006. If the AR4 is correct, then there has been an even more dramatic reduction in the retreat rate, from 135.2 meters/year down to 10 meters per year. -
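As a quick back-of-envelope check (using only the figures quoted in this comment, not any independent data), a few lines of Python show why the printed 1845 start date and AR4's 135.2 m/yr rate cannot both be right:

```python
# Back-of-envelope check of the Pindari retreat figures quoted above.
retreat_m = 2840.0

# Rate if the retreat really spans 1845-1966, as the table's dates read:
rate_long = retreat_m / (1966 - 1845)

# Time span implied by AR4's printed rate of 135.2 m/yr:
implied_span = retreat_m / 135.2

print(round(rate_long, 1))  # 23.5 m/yr, in line with the 27 and 20 m/yr reports
print(round(implied_span))  # 21 years, i.e. a start in the mid-1940s, not 1845
```

So the 1845-1966 reading gives a rate consistent with the other sources, while the 135.2 m/yr figure only works if the span is roughly two decades, supporting the typo hypothesis.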
Marcus at 12:29 PM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Steve Carson. I'm no expert in climatology, but if I read a paper which claimed that surface temperatures had been falling (not rising) for the last 30 years, then I'd seek independent verification from other sources before I accepted or dismissed the claim-that's what makes me a Skeptic (& a scientist). A denialist, by contrast, will automatically dismiss any evidence that doesn't fit their ideology-without independent verification-no matter how strong the evidence is (yet they still demand ever more evidence-even though they'll dismiss that too). If it helps, the other side contains what I call the "True Believers"-they accept the theory of global warming because someone they admire &/or want to believe tells them so-without independent verification. Personally, I have no time for denialists or true believers, but instead seek independent verification of every claim & counter-claim being made. It's always important to think for yourself rather than blindly accept the claims of people who might have a vested interest. Hope that makes more sense. -
Kforestcat at 12:20 PM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Gentlemen You really ought to read the methods used before you gloat. The individual station anomaly measurements were based on each station's "1971-2000 station mean". See where the document states: "Specifically, the unadjusted and adjusted monthly station values were converted to anomalies relative to the 1971–2000 station mean." In other words, the only thing this study measures is the difference in instrument error at each station. The absolute error occurring at individual stations because the station had not been properly located is not measured. A poor station with an absolute temperature error of +5 degrees C still has a bias error of +5 degrees C - no matter what the variation occurring due to instrumentation type. I'm a chemical engineer with the U.S. government and 20 years of research experience in various areas including environmental mitigation. If one of my PhDs came to me with this nonsense, I'd fire him on the spot. Sorry boys, you are going to have to do better than this. Dave

Response: Whenever you look at a graph of global temperature, invariably you're looking at "temperature anomaly" (the change in temperature), not absolute temperature. As NASA puts it, "the reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region". It's the change in temperature (eg - the trend) that is of interest, and the analysis in Menne 2010 determines if there is any bias in the trend due to poor siting of weather stations. -
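The point in the Response can be shown with a toy sketch (made-up numbers, not the actual Menne et al. procedure): converting to anomalies removes any *constant* station bias, so only a bias that changes over time could affect the trend - and that is precisely what Menne et al. test for.

```python
# Toy illustration (not the actual Menne et al. method): a constant +5 C
# siting bias shifts a station's absolute readings but leaves the anomaly
# series - and hence the trend - untouched, because the same bias sits in
# the baseline mean that gets subtracted out.
true_temps = [15.0 + 0.02 * year for year in range(40)]  # 0.02 C/yr warming
biased_temps = [t + 5.0 for t in true_temps]             # +5 C constant bias

def anomalies(series):
    base = sum(series[:30]) / 30       # baseline mean (cf. 1971-2000 normals)
    return [t - base for t in series]

a_true = anomalies(true_temps)
a_biased = anomalies(biased_temps)

# The +5 C offset cancels exactly: both anomaly series are identical.
assert all(abs(x - y) < 1e-9 for x, y in zip(a_true, a_biased))
print(round(a_true[-1] - a_true[0], 2))  # 0.78, the warming over 39 years
```

A bias that drifted over the record, by contrast, would survive into the anomalies, which is why the paper compares trends between well-sited and poorly-sited stations rather than absolute readings.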
From Peru at 11:57 AM on 23 January 2010Why is Greenland's ice loss accelerating?
As shown in the other Greenland post on this site, the best-fit curve of total ice mass loss from GRACE shows that Greenland ice loss is accelerating at a rate of 30 Gigatonnes/yr^2. But now it turns out that we have the contributions: Ice Discharge: -94 Gt/yr (39.5%) Surface Mass Balance: -144 Gt/yr (60.5%) So most of the ice loss comes from surface melting! This is surprising, because surface melt minus surface precipitation is something that is very weather-sensitive. Now I ask: 1. How could a weather-sensitive melting follow a quadratic function so closely (i.e. how could the acceleration be so close to a constant value of 30 Gigatonnes/yr^2)? 2. Can we expect this trend to persist, or will weather-climate variability "break" the smooth curve shown here at any time? -
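For readers wondering how an acceleration figure like 30 Gt/yr^2 is read off such a curve: analyses typically least-squares fit a quadratic to the mass series and double the leading coefficient. A minimal sketch with invented numbers (not the actual GRACE data) uses second differences, which for an exactly quadratic series recover the same value:

```python
# Invented mass series M(t) = 100 - 150 t - 15 t^2 (so a = -30 Gt/yr^2),
# sampled quarterly. For a quadratic, the second difference equals a*dt^2,
# which recovers the acceleration without any fitting library.
dt = 0.25                                              # years between samples
t = [i * dt for i in range(32)]
mass = [100.0 - 150.0 * x - 15.0 * x ** 2 for x in t]  # Gt

second_diffs = [mass[i + 1] - 2 * mass[i] + mass[i - 1] for i in range(1, 31)]
acceleration = sum(d / dt ** 2 for d in second_diffs) / len(second_diffs)

print(round(acceleration, 6))  # -30.0
```

With real, noisy GRACE data one would least-squares fit the quadratic instead; twice the fitted leading coefficient then plays the role of `acceleration` here, and the scatter about the fit is one way to gauge how much weather variability could "break" the curve.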
Marcel Bökstedt at 11:22 AM on 23 January 2010The chaos of confusing the concepts
This is a great site, because it takes the discussion to a high level, but still not so technical that you can't follow it. This particular posting is also quite interesting. I'm wondering about the correct definition and nature of the term "climate". I imagine that "today's weather" is characterized by a large number of parameters, varying in a way which might very well be (weakly) chaotic in the technical sense that the weather of next month is deterministically determined by the weather of today, but this dependence is very sensitive to small variations of the initial conditions. Climate on the other hand could be defined not as the weather at a particular point in time, but as a subset of the parameter space. The weather can vary wildly, but only inside the bounds prescribed by the "climate". Or maybe differently, in analogy to the example of the Lorenz attractor, climate is a subset of the parameter space where "weather" spends "most of its time". This would be a slightly different but possibly more flexible definition of "climate". It reminds me of the situation in celestial mechanics. There the laws of motion of the planets in the solar system are quite simple, certainly immensely simpler than the laws governing weather. But you can run into chaos. I believe that at least in some situations you can compute the orbits of the planets at a time in the far future (climate), but because of chaos you cannot calculate where in the orbit the planet will be at this specific time (weather). In this situation, the orbits of the planets change, so that there is a certain "dynamics of orbits". But it is far easier to predict the development of the orbit of a certain planet at a certain time than to predict exactly where the planet will be at this time. One thing which is unclear to me is whether the "climate" of meteorological models can vary on its own. 
Is there some sort of "dynamics of climate" - long-term non-forced natural variation - or is the climate supposed to be completely determined by the various types of "forcing"? -
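The "where weather spends most of its time" picture can be illustrated numerically. In this sketch (simple Euler stepping of the classic Lorenz-63 system with the standard parameters; run lengths and thresholds are invented for illustration), two runs started a hair apart diverge completely point-by-point ("weather"), yet their long-run averages agree ("climate"):

```python
# Two Lorenz-63 runs started 1e-8 apart (simple Euler stepping; standard
# parameters s=10, r=28, beta=8/3). Pointwise they end up completely
# different, but their long-run mean of z agrees closely.
def lorenz_z_series(x, y, z, steps=500000, dt=0.002, s=10.0, r=28.0, beta=8.0 / 3.0):
    zs = []
    for _ in range(steps):
        dx = s * (y - x)
        dy = x * (r - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        zs.append(z)
    return zs

zs1 = lorenz_z_series(1.0, 1.0, 1.0)
zs2 = lorenz_z_series(1.0 + 1e-8, 1.0, 1.0)

# "Weather": the largest pointwise gap between the two runs is huge.
max_gap = max(abs(p - q) for p, q in zip(zs1, zs2))

# "Climate": long-run averages (discarding the spin-up) nearly coincide.
mean1 = sum(zs1[50000:]) / len(zs1[50000:])
mean2 = sum(zs2[50000:]) / len(zs2[50000:])

print(max_gap > 10.0)            # True: trajectories diverged
print(abs(mean1 - mean2) < 3.0)  # True: statistics agree
```

Whether the real climate system behaves this way on all relevant scales is exactly the open question in this thread; the sketch only shows that chaotic trajectories and stable statistics can coexist in one system.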
stevecarsonr at 11:15 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Marcus, it's not my blog, added to which I'm new here - so apologies if my comment is out of order - but I don't understand the attraction of labeling others. (And I've seen it a lot at other blogs). You might characterize their opinion; that's certainly helpful. But "skeptic"/"denialist" seems like a classification of character - or maybe assassination of character. Now you might say the 2 people you are talking about are a distinguished physics professor and a meteorologist, so take the following as a general comment that doesn't apply to them in the specifics, but maybe the general concept does. Some people don't understand the radiative transfer equation (in fact, I've just realised that maybe I don't understand it properly, maybe someone can help with my totally off-topic question) because they don't have a physics background. So they don't understand how CO2 can impact temperature. Does this make them "a denialist"? Or someone who doesn't understand radiative physics? People are free to call them whatever they like, but one comment I would make is that the more personal attacks are thrown, the less likely people with questions are to sit down and try to understand a complex subject. And it is complex. And the scientific method is not natural and instinctive. -
Marcus at 10:45 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
HumanityRules-also my apologies. I wasn't actually suggesting that the denialists *invented* the urban heat island (even if that's how it comes across), but they have exploited it *mercilessly* to try & undermine the credibility of the surface temperature record-even long after study after study had shown that (a) the bias wasn't as strong as suggested (b) the bias often gave cooler results than for nearby rural areas (c) researchers always adjusted for the bias & (d) that the surface record was closely correlated to the record from satellites. I doubt that this latest paper will silence their misuse of UHI for ideological purposes-even when it's based on the work of one of their own! -
Marcus at 10:35 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
stevecarsonr, my apologies-I misspoke before. I implied that the Urban Heat Island was a myth-that wasn't my intent. What I meant was that Urban Heat Islands being the primary cause of global warming (rather than CO2 emissions) was an Urban Legend. This paper seems to give added weight to the Urban Legend status of the view that poorly sited measuring stations, alone, are capable of producing a +0.16 degree change per decade in average global temperatures! -
Marcus at 10:32 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
CoalGeologist, people who raise doubts about the science, without seeking to prove that those doubts are valid, are not true skeptics-they're denialists-an important distinction. Lindzen, for example, is a skeptic-he doubted that global warming would be as extreme as predicted (due to the Iris Effect) & sought to prove it-so he's a skeptic. On the matter of bias in US temperature records, Watts has behaved as a true skeptic-up to a point-but if he now refuses to publish the results of this study, then he proves himself to be just another denialist. His role as a denialist is already proven, however, in the way he behaves on other matters related to Anthropogenic Global Warming. -
chris at 09:19 AM on 23 January 2010The chaos of confusing the concepts
I hope you don’t mind me responding to your message stevecarsonr (I happen to be on line, it’s a Friday night, and I’ve drunk a good part of a bottle of wine!). I think it comes down to the meaning of “chaos” as I suggested in post #12 just above. I'm not sure that the phenomenon you describe (periodic ice-melt-induced cessation of the Atlantic conveyor, which I also used in my post above), is an example of chaotic behaviour. I would say that it is an example of stochastic behaviour. Weather has inherent chaotic elements since (i) its evolution is critically dependent on the starting conditions (that’s why weather models, but not climate models, are continuously “reset” to current atmospheric conditions), (ii) it progresses on, and is influenced by events on, a very small spatial scale, and (iii) there are an almost infinite set of influences that determine the temporal evolution of local atmospheric conditions. That’s not the case with the example of ice-melt-induced cessation of the Atlantic conveyor. There’s no question that this phenomenon (see e.g. http://en.wikipedia.org/wiki/File:Ice-core-isotope.png ) has a stochastic element. But it is a phenomenon that is bounded within a particular climate regime (glacial period of an ice age with a particular continental arrangement that gives a strong thermohaline heat transport to high Northern latitudes), and is predictable in principle. Presumably if one knew something about the relationship between Arctic ice buildup and its evolution towards instability, then one could make a reasonable prediction (if one was around during the last glacial period say!) of when the next ice-collapse-induced cessation event would occur. I think we also have to be careful not to assign the label “chaotic” to behaviour that we happen to lack the knowledge-base to understand predictively. Focussing on the thermohaline circulation (THC) and the effects of melt water, it does seem a possibility (see e.g. 
ftp://rock.geosociety.org/pub/GSAToday/gt9901.pdf, for an interesting read) that the THC could slow down or stop if sufficient freshwater from Arctic ice melt were to flood the Arctic ocean. But I don’t think this would be an example of chaotic behaviour even if we might not know, at the moment, what specific conditions (how hot does it have to be; how much ice melt and how fast, would be required…). At some point we might well understand this process well enough that it might cease even to be considered “stochastic”. -
CoalGeologist at 08:59 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Marcus (Post #1) is already aware that he would be ill-advised to hold his breath waiting for an acknowledgment from Anthony Watts that the U.S. surface temperature data set apparently does not show a systematic bias toward warming. In fact, in the overall scheme of things, I’d wager it’s more likely that Watts would attempt to stop publication of the paper than to make such an admission. This issue has much broader implications, however, that extend well beyond the reliability of this particular data set, and bears on the entire debate over anthropogenic climate change (ACC). Skeptics such as Watts, Joseph D’Aleo and others are well within their rights to question the validity of the data, particularly considering the poor condition and non-ideal location of many of the measurement stations. More than that, it’s their duty (duty…duty…duty!) to raise skeptical criticisms, just as it is the duty of climate scientists to address reasonable concerns. The advancement of science depends on it. The success of this approach demands, however, that skeptical hypotheses be: a) testable, and b) potentially refutable. If not, then they fall into the domain of ideology, not science, and can never be considered anything more than unsubstantiated conjecture. Skeptics feel they’ve done their job merely by raising questions and doubts, while forgetting the essential next step of hypothesis testing. Sadly, past experience shows that in the current "debate", most arguments against anthropogenic climate change are effectively irrefutable, no matter how much evidence is brought to bear. Worse, the premise that ACC is wrong provides the touchstone by which all evidence is measured. Evidence that appears to support ACC is inferred to be wrong; evidence that appears to refute ACC is inferred to be valid. At the same time, the new re-assessment of the data by Menne et al. gives all of us a greater level of confidence in its reliability. 
(Readers may also be interested to read this analysis of the NOAA & NASA data: http://www.yaleclimatemediaforum.org/2010/01/kusi-noaa-nasa/ ) In fairness, we have to acknowledge the important role that skeptics have played in this process. It’s a shame if Watts and others are unable to derive any satisfaction from their efforts, even if the rest of us can. Most likely they’ll just keep plugging away, trying to prove what they already ardently believe. The surface temperature data set does, indeed, pose a “challenge”, to put it mildly. While the temptation is there to just chuck the entire lot, we can’t afford to do that, as the results are too important. The only other option is to try to make the best use of the available information, by removing as much of the error as possible without introducing bias. This may entail eliminating some stations, and making some adjustments to some data, where warranted. This is what the researchers at the National Climate Data Center have been trying to do in good faith. They deserve our appreciation, and that of Mr. Watts as well! -
stevecarsonr at 08:45 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Philippe, thanks for the clarification - I should have made it clear my comment was directed at some of the comments, not at the post itself. Everyone knows that the UHI exists, the IPCC has it at 0.006'C per decade. Perhaps that's correct, but the Fujibe paper and the Ren paper do question that number. Anyway, I'm off topic, as this post is about the effect of microsite issues on measurement. -
stevecarsonr at 08:35 AM on 23 January 2010The chaos of confusing the concepts
It's great to see the chaos subject raised, especially by someone with a PhD in complexity studies. Hopefully you will indulge some questions from someone who knows less. First of all, I don't see what you have actually demonstrated. Why does showing a graph of one temperature parameter of climate over 120 years demonstrate that climate is not chaotic? And how do you know that that particular graph is not actually chaotic? For example, I could plot a graph of the motions (the 2 angles) of the double pendulum for a period and say "see, it's not chaotic". What's the difference between these 2 cases? Second, while of course the climate is bounded in many ways, that doesn't mean it's not chaotic. The fact that there will always be 4 seasons, or the poles always colder than the equator, really isn't relevant. Or the fact that mid-latitudes might be between 0'C and 30'C on any given day. I know you didn't put forward these points, but I see them a lot with a kind of "QED, climate is not chaotic" and I'm scratching my head.. In your opinion is this correct? I.e., the fact that the above few points are true doesn't disprove the possibility of chaotic behavior? Third, an example. The well-known "Atlantic conveyor belt" of ocean heat is driven by the thermohaline currents. Sufficient melting ice from Greenland/the Arctic would disrupt the thermohaline circulation, the conveyor belt stops, northern Europe gets very cold and the Arctic re-freezes. But at what point does this occur? Prof FW Taylor in his book "Elementary Climate Physics" (2005 OUP) shows the 2-box model of the oceans, apparently originally proposed by Henry Stommel. It's fairly simple but shows unstable behaviour against perturbations in either direction. He comments that right now (2005) none of the GCMs show the reversal of the circulation; instead they vary between no real change and a 50% drop over the next 100 years. 
However, my point finally arrived at: if this model (extended to a more realistic one) is correct, then surely the thermohaline circulation will provide chaotic behavior at some point. (Perhaps strictly speaking it might not be chaotic, perhaps just unknown and complex at this stage). Comments? Fourth, without actually knowing the formulae for many important aspects of climate, how can "the climate community" (or a subsection thereof?) be so confident that climate is not chaotic? E.g. the aerosol effect, a negative feedback, but with error bars stretching between zero and the effect of CO2 at 380ppm (according to IPCC AR4). I could happily theorize about changes in ocean temperature increasing the production of sulphides, forming more clouds and providing more negative feedback.. or higher winds from higher temperature differentials picking up more dust from every drought-ridden desert and therefore seeding more clouds.. or not. I have the full extent of the error bars, and given that we don't really know the formulae, they might exhibit strong negative feedback - or they might actually exhibit positive feedback under some circumstances. How to confirm "no chaos" when the equations are somewhere between "cloudy" and "unknown"? Sorry for writing such a lengthy set of questions, but it's a subject that really needs discussion, and thanks for posting on the subject. -
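A heavily reduced toy version of a Stommel-type two-box model can show the instability described above (this is my own dimensionless form for illustration, not Taylor's or Stommel's actual equations, and every parameter value is invented): two nearly identical starting states settle into different circulation regimes.

```python
# Toy two-box model: flow strength q = 1 - S (fixed thermal forcing), and the
# salinity difference S driven by freshwater input F against transport |q|:
#   dS/dt = F - |q| * S
# With F = 0.2 this has two stable equilibria (S ~ 0.28 and S ~ 1.17)
# separated by an unstable one near S ~ 0.72.
def run_box_model(S, F=0.2, dt=0.01, steps=20000):
    for _ in range(steps):
        q = 1.0 - S                   # thermohaline flow strength
        S += dt * (F - abs(q) * S)    # freshwater forcing vs. salt transport
    return S, 1.0 - S

S_on, q_on = run_box_model(0.70)    # settles near S ~ 0.28: strong circulation
S_off, q_off = run_box_model(0.75)  # settles near S ~ 1.17: reversed ("off") state

print(round(q_on, 2), round(q_off, 2))  # 0.72 -0.17
```

Strictly speaking this bistability is not chaos, but it is the threshold behaviour the 2-box model exhibits: near the unstable branch, a small perturbation picks which regime the system ends up in.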
chris at 08:30 AM on 23 January 2010The chaos of confusing the concepts
re your comment #13 batsvensson; i.e. "When you say you can predict that the weather in the next 6 month will be 20 to 30 C degree, then you are NOT basing this on the system itself but on a pulse respond to a ramp signal you know exist. However, still there is very tiny and small possibility your prediction may turn out wrong, but it is so unlikely to happened that you are not regarding it, which you probably do perfectly right in. But never less, it less to weather being non-chaotic and more to weather being affected by a ramp signal that you are able to do such prediction. " Surely Ned is basing his prediction on "the system itself". It's got nothing to do with "weather being non-chaotic" (we know weather has significant chaotic elements), it's to do with the likely range of weather events being bounded by a rather well-defined climate regime, and a highly predictable seasonal variation essentially based on Newtonian physics. As you say, there is a tiny and small ("tiny" and "small"?!) possibility that Ned's prediction may turn out to be wrong. There are two main reasons why this might be the case: (i) The variability in the weather encompasses the possibility of rare extreme excursions out of the expected range within a particular climate regime; (ii) a contingent event (volcanic eruption; extraterrestrial impact) might intervene. -
Philippe Chantreau at 08:22 AM on 23 January 2010The chaos of confusing the concepts
Very interesting stuff, thanks Jacob. -
Philippe Chantreau at 08:15 AM on 23 January 2010On the reliability of the U.S. Surface Temperature Record
Stevecarsonr, you're confusing UHI and microsite issues. Nobody believes that the UHI is a myth, and there is an abundant literature on the subject. GISTEMP corrects for UHI. Many papers about it were mentioned on this very blog. Watts' basic argument, insofar as it remains consistent (which is not always the case), was that siting issues affected readings so that thermometers read too high. Neither Watts nor anyone among his cheerleading crowd ever attempted to do a real data analysis to verify the hypothesis. One of his readers, however, tackled the problem as soon as enough stations were sampled (John V). He evidently found that the hypothesis was not verified by data analysis, and endured so much malice at Watts's site that he didn't post there any more. Further analysis was done by NOAA once enough stations were sampled that no regional bias could possibly affect the results, and the results were exactly the same. The very premise for the existence of Watts' blog has been invalidated numerous times. -
blouis79 at 08:13 AM on 23 January 2010Models are unreliable
We have no idea how reliable climate models are: IPCC AR4 8.6.4 How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models? [quote]A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.[/quote] Any person on earth knows that clouds can warm and cool. IPCC knows that too. Cloud feedbacks are not well modelled. IPCC AR4 8.6.3.2 Clouds [quote]In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity (e.g., Senior and Mitchell, 1993; Le Treut et al., 1994; Yao and Del Genio, 2002; Zhang, 2004; Stainforth et al., 2005; Yokohata et al., 2005). Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks (Colman, 2003a; Soden and Held, 2006; Webb et al., 2006; Section 8.6.2, Figure 8.14). Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates.[/quote] -
batsvensson at 07:51 AM on 23 January 2010The chaos of confusing the concepts
Ned, One can roughly say there are two classes of chaotic system, deterministic and non-deterministic. The behavior of the latter is the same as a random system. However, a deterministic chaotic system isn't the same as a random system. All deterministic systems can in principle be predicted. Therefore, saying weather or climate is chaotic is not the same thing as claiming it cannot be predicted to some degree of certainty; the claim is only that it may be hard to predict - how hard is another matter though. Climate is affected by regular cyclic phenomena and random events; these can be seen as ramps and step pulses to the system. Any system - linear, chaotic or random - will respond to such pulses, and such a response can in principle be measured or filtered out from a time series of measurements. In a linear system this filtering is trivial, but for a chaotic system the process is non-trivial: a detected pulse may very well be a false positive in such a system, and this is very hard to rule out without knowing anything about the history of the system itself. A linear system's response to a pulse is easy to predict, but for a chaotic system one cannot in general do this, except within small time scales. With a time scale chosen short enough, even the behavior of a random system can be predicted, but the error will soon grow too large to get any meaningful prediction out of it. The difference in predicting a linear system vs. a nonlinear one lies in the rate of error growth. A linear system has a much smaller growth rate, and therefore we can make the prediction over longer time series with high confidence, while this is not the case with a non-linear system. That errors in prediction grow over time is an inherent property of any simulation. The task in making a good prediction is to try to make a system in which the growth of error is as small as possible. 
When you say you can predict that the weather in the next 6 months will be 20 to 30 degrees C, then you are NOT basing this on the system itself but on a pulse response to a ramp signal you know exists. However, there is still a very small possibility your prediction may turn out wrong, but it is so unlikely to happen that you are not regarding it, which you are probably perfectly right to do. But nevertheless, it is less because weather is non-chaotic and more because weather is affected by a ramp signal that you are able to make such a prediction.
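The error-growth contrast described here is easy to demonstrate with the logistic map at r=4, a standard chaotic example (illustrative only; step counts and thresholds are chosen for the demo): a 1e-12 initial error stays negligible for a few steps, then grows until it is as large as the signal itself.

```python
# Logistic map x -> 4x(1-x), the textbook chaotic system. A 1e-12 error in
# the initial condition roughly doubles each step, so a "forecast" is fine
# for a handful of steps and worthless after about forty.
def logistic_orbit(x, steps):
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-12, 60)

errors = [abs(p - q) for p, q in zip(a, b)]

print(errors[3] < 1e-9)   # True: short-range prediction still accurate
print(max(errors) > 0.1)  # True: eventually the error is as big as the signal
```

In a stable linear system, by contrast, the error stays proportional to the initial perturbation, which is exactly the growth-rate difference the comment describes.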