Cooling the past one hundred years later?
In what is probably the worst systematic error, the past is rewritten in an attempt to correct for site moves. While some corrections are necessary, these adjustments are brutally sweeping. Thermometers do need to move, but corrections don’t have to treat old sites as if they were always surrounded by concrete and bricks.
New sites are usually placed in good open locations. As a site “ages”, buildings and roads appear nearby, and sometimes air conditioners, all artificially warming the site. So a replacement thermometer is opened in an open location nearby. Usually the national meteorology centre runs both sites in parallel for a while and works out the temperature difference between them. Then it adjusts the readings from the old location down to match the new one. The problem is that the algorithms also slice right back through the decades, cooling all the older original readings, even readings that were probably taken when the site was just a paddock. In this way the historic past is rewritten to be colder than it really was, making recent warming look faster than it really was. Thousands of men and women trudged through snow, rain and mud to take temperatures that a computer “corrected” a century later.
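To make the mechanics concrete, here is a minimal sketch of an overlap-based step adjustment in Python. This is not the BOM’s or Hadley’s actual homogenisation code; the station records, the overlap years and the 0.5C offset are all invented for illustration.

```python
# Minimal sketch of an overlap-based site-move adjustment (illustrative only,
# not the actual BOM/Hadley homogenisation code). All numbers are invented.

def overlap_offset(old_site, new_site, overlap_years):
    """Mean difference (old minus new) over the years both sites reported."""
    diffs = [old_site[y] - new_site[y] for y in overlap_years]
    return sum(diffs) / len(diffs)

def adjust_history(old_site, offset):
    """Shift EVERY year of the old record by the overlap offset,
    including decades before the buildings and roads appeared."""
    return {year: temp - offset for year, temp in old_site.items()}

# Invented example: the old site reads 0.5 C warmer during the overlap,
# so the entire record back to 1860 is cooled by 0.5 C.
old_site = {1860: 15.0, 1900: 15.1, 1990: 15.8, 1991: 15.9}
new_site = {1990: 15.3, 1991: 15.4}

offset = overlap_offset(old_site, new_site, [1990, 1991])  # about 0.5
adjusted = adjust_history(old_site, offset)
print({y: round(t, 1) for y, t in adjusted.items()})
# {1860: 14.5, 1900: 14.6, 1990: 15.3, 1991: 15.4}
```

Note that in the sketch the 1860 paddock-era reading receives exactly the same 0.5C cut as the 1990 concrete-era readings.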
We’ve seen the effect of site moves in Australia at Canberra, Bourke, Melbourne and Sydney. After being hammered in the Australian press (thanks to Graham Lloyd), the BOM finally named a “site move” as the major reason a cooling trend had been adjusted into a warming one. In Australia, adjustments to the data increase the trend by as much as 40%.
In theory, a thermometer in a paddock in 1860 should be comparable to a thermometer in a paddock in 1980. But the experts deem that the older one must have been reading too high because someone may have built concrete tarmac next to it forty or eighty years later. This systematic error, just by itself, creates a warming trend from nothing, step-change by step-change.
Worse, the adjustments are cumulative. The oldest data may be reduced with every step correction for site moves. Ken Stewart found that some adjustments to old historic data in Australia wipe as much as 2C off the earliest temperatures. We’ve only had a “theoretical” 0.9C of warming in the past century.
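To see how the steps compound, here is back-of-envelope arithmetic in the same spirit, assuming, purely for illustration, three site moves that each trigger a 0.7C step correction:

```python
# Illustrative arithmetic only: three hypothetical site moves, each applying
# its own step correction, and every correction reaches back to the oldest data.
step_corrections = [-0.7, -0.7, -0.7]    # invented offsets, in degrees C

earliest_reading = 15.0                  # invented 1860s reading
total_shift = sum(step_corrections)
print(f"total shift: {total_shift:.1f} C")                                # -2.1 C
print(f"adjusted 1860s reading: {earliest_reading + total_shift:.1f} C")  # 12.9 C
```

Three modest corrections of 0.7C each are enough to produce a cut of the order Stewart describes, without any single adjustment looking outrageous.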
Each national bureau supplies the “pre-adjusted” data, and the Hadley Centre accepts it. Does it check? Does it care?
No audits, no checks, who cares?
As far as we can tell this key data has never been audited before. (What kind of audit would leave in these blatant errors?) Company finances get audited regularly, but when global projections and billions of dollars are on the table, climate scientists don’t care whether the data has undergone basic quality-control checks, is consistent, or even makes sense.
Vast areas of non-existent measurements
In May 1861 the global coverage, according to the grid-system method that HadCRUT4 uses, was 12%. That means that no data was reported from almost 90% of the Earth’s surface. Despite this it’s said to be a “global average”. That makes no sense at all. “The global average temperature anomaly is calculated from data that at times covers as little as 12.2% of the Earth’s surface,” he says. “Until 1906 global coverage was less than 50% and coverage didn’t hit 75% until 1956. That’s a lot of the Earth’s surface for which we have no data.” – John McLean
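For readers who want to see what “coverage” means here: HadCRUT4 works on a 5° × 5° latitude–longitude grid, and coverage can be computed as the area-weighted fraction of grid cells containing at least one observation. A minimal sketch follows; the 5-degree grid is real, but the set of reporting cells below is invented.

```python
import math

def coverage(cells_with_data):
    """Area-weighted share of a 36 x 72 grid of 5-degree cells with data."""
    total = covered = 0.0
    for i in range(36):                 # latitude bands from 90S to 90N
        lat_centre = -87.5 + 5.0 * i
        w = math.cos(math.radians(lat_centre))  # cells shrink toward the poles
        for j in range(72):             # longitude bands
            total += w
            if (i, j) in cells_with_data:
                covered += w
    return covered / total

# Invented sparse month: a band of tropical cells plus two mid-latitude ones.
sparse = {(20, j) for j in range(10)} | {(25, 40), (26, 41)}
print(f"{coverage(sparse):.1%}")        # well under 1% of the Earth's surface
```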
Real thermometer data is ignored
In 1850 and 1851 the official data for the Southern Hemisphere includes just one lone thermometer in Indonesia and some random boats. (At the time, the ship data covered about 15% of the oceans in the southern half of the globe, and even the word “covered” may mean as little as one measurement in a month in a grid cell, though it is usually more.) Sometimes there is data that could be used, but isn’t. This is partly the choice of the separate national meteorology organisations, which may not send any data to Hadley. But neither do the Hadley staff appear bothered that the data is so sparse, or that there might be thermometer measurements that would be better than nothing.
How many heatwaves did they miss? For example, on the 6th of February, 1851, newspaper archives show temperatures in the shade hit 117F in Melbourne (that’s 47C), 115 in Warrnambool, and 114 in Geelong. That was the day of the Black Thursday MegaFire. The Australian BOM argues that these were not standard, officially sited thermometers, but compared to inland boats, frozen Caribbean islands and 80-degree months in Colombia, surely actual data is more useful than estimates from thermometers 5,000 to 10,000 km away? It seems to me that multiple corroborated unofficial thermometers in Melbourne might be more useful than one lone official thermometer in Indonesia.
While the Hadley dataset does not explicitly estimate the temperature in Melbourne in 1850, it does estimate “the Southern Hemisphere” and “the globe”, and Melbourne is a part of that. By default, there must be assumptions and guesstimates to fill in what is missing.
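As a toy sketch of what those guesstimates amount to: if only a couple of Southern Hemisphere cells report in a month, the “hemispheric average” is just an area-weighted mean of those cells, and every empty cell is implicitly assumed to behave like them. The two cell values below are invented.

```python
import math

# Toy illustration: a "hemispheric" anomaly computed only from reporting cells.
# With two invented cells, the hemisphere IS those two readings.
reports = [(-6.0, 0.3), (-35.0, -0.1)]  # invented (cell latitude, anomaly) pairs

weights = [math.cos(math.radians(lat)) for lat, _ in reports]
hemi = sum(w * a for w, (_, a) in zip(weights, reports)) / sum(weights)
print(f"SH 'average' anomaly: {hemi:+.2f} C")
```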
How well would the Indonesian thermometer and some ship data correlate with temperatures in Tasmania, Peru, or Botswana? Would an actual thermometer, albeit one in the shade rather than in a Stevenson screen, be more accurate than those remote estimates? You and I might think so, but we’re not “the experts”.
Time the experts answered some hard questions.