Aug 15 2010
When attempting to model complex physical processes, one rapidly runs into the fact that measurements become so noisy you cannot detect any reliable signal in the data. This ‘noise’ comes from any number of sources, but usually it is caused by the limited accuracy of the measurement device, or by the fact that even an accurate measurement can decay in usefulness over time or distance.
In the world of satellites, let me give an example of each noise source. When we need to confirm a satellite’s orbit path, we use the RF signal between the satellite and the Earth to measure the distance the signal travels. We do something similar when we bounce a laser off the moon’s surface to measure its distance to incredible precision. But the problem is that the satellite is whipping by us, and it is impossible to derive its velocity in three dimensions from a single distance measurement. We actually have to ping the satellite many times over a long period to drive out the noise and derive a fairly accurate position and velocity vector. That over-sampling removes the measurement noise by repeatedly measuring the distance and computing velocity in three dimensions. It works even better if you can measure from two different points on the Earth with divergent views of the orbit path (much as a GPS receiver uses range measurements from several satellites to fix a point on the Earth).
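To see why over-sampling drives down measurement noise, here is a minimal sketch in Python. All the numbers are made up for illustration, not real orbit data: averaging N independent noisy range pings shrinks the error roughly like 1/√N.

```python
import random
import statistics

# Illustrative numbers only (not real orbit data): each "ping" is the
# true range plus zero-mean Gaussian measurement noise.
random.seed(42)
true_range_km = 20_200.0   # assumed orbit altitude, roughly GPS-like
noise_sigma_km = 0.5       # assumed per-ping measurement noise

def estimate_range(n_pings):
    """Average n_pings noisy range measurements."""
    pings = [random.gauss(true_range_km, noise_sigma_km)
             for _ in range(n_pings)]
    return statistics.mean(pings)

# The standard error of the mean falls like sigma / sqrt(N),
# so 100 pings cut the noise by roughly a factor of 10.
for n in (1, 10, 100, 1000):
    err = abs(estimate_range(n) - true_range_km)
    print(f"{n:5d} pings -> error {err:.4f} km")
```

The same averaging logic is why a long tracking pass beats a single snapshot.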
The second source of noise in satellite orbits is the fact that even orbits in space are not simple physical processes. The Newtonian equations of gravity are just fine, but there are other, more subtle forces at play. So the orbit ‘decays’ off the Newtonian solution within days, and within a week most satellites need to be resampled to correct for all these other forces. This decay builds up error on top of the initial solution, which itself had to be over-sampled to remove measurement error. Understanding the sources of error over time and distance is what we call an error budget. Something completely foreign to the ‘scientists’ at CRU/IPCC/GISS, et al.
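An error budget can be sketched the same way. Here is a hedged toy model (the coefficients are assumptions of mine, not real orbital mechanics): total uncertainty is the root-sum-square of the residual fit error and a drift term that grows with time since the last orbit determination.

```python
import math

# Toy error budget with illustrative values only:
initial_fit_error_km = 0.05  # assumed residual after over-sampling
drift_km_per_day = 0.3       # assumed growth from un-modeled forces

def position_uncertainty(days_since_fit):
    """Root-sum-square of the fit error and the accumulated drift."""
    drift = drift_km_per_day * days_since_fit
    return math.hypot(initial_fit_error_km, drift)

# Within a week the drift term dominates the budget, which is why
# the orbit has to be resampled to pull the error back down.
for d in (0, 1, 3, 7):
    print(f"day {d}: +/- {position_uncertainty(d):.2f} km")
```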
In the poor science of Climate Change, the most strident voices of Doom & Gloom seem to be the ones least able to grasp these factors of noise (or error). These people ran statistical models and saw, or derived, what they wanted to see out of pure noise (of both kinds of error). In the temperature records from 1880 to the present we have ever-increasing measurement accuracy and sampling density. But you cannot treat measurements from 1880–1940 as being of equal quality to measurements taken from 1950–2010. The fact is the older data is riddled with measurement error and sparse sampling. Even the modern data, while accurate, has been thinned to the point that the sampling is far too sparse to be considered accurate (which is how you can have one thermometer represent a cell of 1200 km, or 700 miles, in the NASA GISS data sets).
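The sparse-sampling problem can be sketched with another toy simulation (made-up numbers, not actual GISS data): how well do a handful of stations represent the mean anomaly of one huge grid cell?

```python
import random
import statistics

# Made-up numbers for illustration, not actual GISS data.
random.seed(1)
cell_mean_c = 0.2        # assumed true mean anomaly of the cell
station_scatter_c = 1.0  # assumed station-to-station scatter

def cell_estimate(n_stations):
    """Average the readings of n_stations within one grid cell."""
    readings = [random.gauss(cell_mean_c, station_scatter_c)
                for _ in range(n_stations)]
    return statistics.mean(readings)

# A single station can miss the cell mean by a degree or more;
# the sampling error only shrinks as stations are added.
for n in (1, 4, 16, 64):
    print(f"{n:3d} stations -> estimate {cell_estimate(n):+.2f} C")
```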
Temperature measurements simply decay rapidly in accuracy with distance and time. And I mean time in the sense of historic measurements, and distance in terms of all measurements over all times. Even worse, prior to 1880 the accuracy just collapses, as there are no direct measurements at all. You have proxies from local regions, which are inaccurate in both measurement and spatial coverage. For example, trees that exist on the edge of the tree line are supposed to be good temperature proxies (while they are probably, at best, mediocre CLIMATE proxies). A recent study of Russian tree rings, a subset used by CRU and IPCC to claim recent global warming, shows the vast majority of tree-line regions do not show recent warming and conflict with the one data set on which IPCC and CRU have placed all their claims.
Except for the Yamal reconstruction, all tree-ring and non-tree ring reconstructions appear to agree, and so indicate no correlation between temperature and atmospheric CO2 concentration.
Proxies are not thermometers, and anyone who treats them as such is not performing science. Proxies can only point to a wide range of temperatures OR climate conditions.
So the noise level gets really high once we go back in history prior to 1880, and these regional proxies can represent a wide range of possible climates, even when they are within 1200 km of each other. Conclusion: they are meaningless benchmarks.
Over at WUWT there is now another earth-shattering study coming out proving how bad the science of CRU and IPCC has been in reconstructing the global climate for the last 1000 years. The study is an independent statistical assessment of the infamous hockey stick, and it has concluded there is no way the IPCC-Mann graph is valid. It uses a different method to assess the proxy data and compare it to the modern record. What is most damning in the results is the conclusion that the proxy data is so noisy, with respect to extracting any temperature signal, that any number of possible climate histories could be artificially pulled from it:
We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago.
Let me put these three conclusions into simple English:
- The proxy data is no better at predicting past or current temperatures than a set of random numbers. In other words, so many other factors influence the proxies that you could throw dice and get as good or better a guess at temperature.
- When they tweaked their models to align the proxies with the modern temperature record (1880–2010), these models produced a wide range of historic patterns, proving it is impossible to use proxies to look back in time with any accuracy.
- The proxies completely missed the temperature rise of the last 20-30 years, which indicates they cannot detect major temperature swings now or in the past.
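The study’s headline check, proxies versus random series, can be sketched like this. This is my own toy version with synthetic data and a weak, noisy ‘proxy’ I made up, not the study’s actual method or data: if the proxy’s correlation with temperature falls inside the spread achieved by purely random series, it carries no usable temperature signal.

```python
import math
import random
import statistics

random.seed(7)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n = 130  # a synthetic annual record, 1880-2010 (assumption)
temperature = [0.005 * t + random.gauss(0, 0.2) for t in range(n)]

# A made-up noisy proxy: weakly coupled to temperature.
proxy = [0.3 * T + random.gauss(0, 1.0) for T in temperature]

# Many random series that know nothing about temperature at all.
random_rs = sorted(
    abs(pearson_r([random.gauss(0, 1) for _ in range(n)], temperature))
    for _ in range(200))

r_proxy = abs(pearson_r(proxy, temperature))
print(f"proxy |r| = {r_proxy:.3f}")
print(f"95th percentile of random |r| = {random_rs[189]:.3f}")
```

If the proxy’s |r| does not clear the random-series spread, a dice roll really is as good a thermometer.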
This pretty much throws out all the IPCC-CRU conclusions, because they were based on faulty premises and ignorance of how to model complex systems. I have said from day 1 that the CRU data was too noisy to detect sub-degree changes, and would be lucky to accurately make claims about regional temperatures to within 2–5°C. This report confirms my back-of-the-envelope conclusions.
The report goes on to show why these limited proxies (so few in number going back in time, and so limited in geographic distribution) simply fail in their claim to represent historic temperatures:
This is disturbing: if a model cannot predict the occurrence of a sharp run-up in an out-of-sample block which is contiguous with the in-sample training set, then it seems highly unlikely that it has power to detect such levels or run-ups in the more distant past. It is even more discouraging when one recalls Figure 15: the model cannot capture the sharp run-up even in-sample. In sum, these results suggest that the ninety-three sequences that comprise the 1,000 year old proxy record simply lack power to detect a sharp increase in temperature. See Footnote 12
Why? Because the true historic temperature is lost in the noise. There is no way, with current scientific understanding, to pull out the historic temperature record to within 1°C. None. Can’t be done. Therefore, we don’t confidently know if today is warmer, cooler or the same as the Roman and Medieval Warm Periods.
QED: The cult of Global Warming now has no scientific basis behind it.
Here is the new temperature record, with its massive error bars and its indication that things are still normal on planet Earth. All we can say for sure from this new look back in time is that it was not much different 1,000 years ago from now. Then a Little Ice Age hit and the Earth cooled, before springing back in the last 200 years. Which is what we knew already.
BTW, I was going to post on this study showing why grid size does matter, and why a smaller grid (with more samples per grid cell) produces more accurate (less noisy) results. And here is another example, from Nepal, of why one thermometer produces only widespread noise, not accuracy, in the temperature record. In fact, this second post hints that the local record was completely fictionalized by NASA GISS to turn raw data showing cooling into ‘adjusted’ data showing warming.
Both just emphasize the basic conclusions from the bombshell that just hit the Church of Al Gore/IPCC – there is no scientific foundation for all the Chicken Little cries. I found this conclusion to sum it up nicely:
Research on multi-proxy temperature reconstructions of the earth’s temperature is now entering its second decade. While the literature is large, there has been very little collaboration with university-level, professional statisticians (Wegman et al., 2006; Wegman, 2006). Our paper is an effort to apply some modern statistical methods to these problems. While our results agree with the climate scientists’ findings in some respects, our methods of estimating model uncertainty and accuracy are in sharp disagreement.
Climate scientists have greatly underestimated the uncertainty of proxy based reconstructions and hence have been overconfident in their models.
Emphasis added. Yeah, if you ignore the lack of accuracy and all the error noise and squint just right – everything looks good.