A friend e-mailed me an article stating that “Statisticians Reject Global Cooling.”
Here was my response:
As far as global warming goes, any way you slice it, global surface temperatures have still not exceeded the high reached in 1998. Basically the article seems to be saying that this could easily be the result of chance, i.e. natural fluctuations in the temperature. That's true as far as it goes, except that I don't think enough is known about the natural causes of temperature variations to say anything meaningful one way or another.
To me, asking whether cooling since 1998 is statistically significant is the wrong question. Instead, one needs to ask what, if anything, the warmists predicted and whether that prediction came true.
One can ask hypothetically what would have happened if we had set a new temperature record in 2008. You can bet that the warmists would have been screaming about it from the rooftops and presenting it as strong evidence in favor of their hypothesis. That leads me to ask: what would need to have happened to undermine or falsify the warmist position?
The way science normally works is that you test a hypothesis by making a prediction and seeing if reality matches that prediction. If so, it’s evidence that your hypothesis is correct. If not, it’s evidence that your hypothesis is wrong. But global warming science doesn’t seem to work this way. Any time a warming event happens, such as a hot year or a melting glacier, it’s presented as evidence in favor of the warmist hypothesis. If a warming event does not happen (or a cooling event happens), it is explained away as natural variation.
As far as I know, the warmists’ computer models predicted fairly steady warming, year after year, with an average of 2 to 3 years between each new temperature record. So 11 years without a temperature record is a big problem for them, in my opinion. Even if it’s the result of “natural variation,” it shows that there is some unknown factor which is important and which is not properly accounted for in the models.
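The record-gap argument can at least be sanity-checked with a toy simulation. The sketch below models annual temperature as a simple linear trend plus Gaussian noise and counts how often a gap of 11 or more years between new records occurs by chance. The trend rate, noise level, and the function name `simulate_record_gaps` are all my own illustrative assumptions, not numbers taken from any actual climate model; the point is only that the answer depends heavily on the assumed ratio of trend to noise, which is exactly the quantity the argument above turns on.

```python
import random

def simulate_record_gaps(years=100, trend=0.02, noise_sd=0.1,
                         runs=10_000, seed=1):
    """Return the longest gap (in years) between new temperature
    records in each simulated run of a trend-plus-noise series.

    All parameter values are illustrative assumptions, not
    estimates from real climate data or models.
    """
    random.seed(seed)
    longest_gaps = []
    for _ in range(runs):
        record = float("-inf")   # highest temperature seen so far
        last_record_year = 0     # year the current record was set
        longest = 0              # longest record-free stretch so far
        for y in range(years):
            temp = trend * y + random.gauss(0, noise_sd)
            if temp > record:
                record = temp
                longest = max(longest, y - last_record_year)
                last_record_year = y
        # account for a record-free stretch at the end of the series
        longest = max(longest, years - 1 - last_record_year)
        longest_gaps.append(longest)
    return longest_gaps

gaps = simulate_record_gaps()
frac_11_plus = sum(g >= 11 for g in gaps) / len(gaps)
print(f"Fraction of runs with an 11+ year record gap: {frac_11_plus:.2f}")
```

Raising `noise_sd` relative to `trend` makes long record-free stretches common; lowering it makes them rare. So whether an 11-year gap is damning or unremarkable comes down to how large natural variation is assumed to be, which is the open question either way.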