
While reasonable skepticism is healthy, global warming is such a well studied phenomenon by now that an unreasonable number of independent codebases would have to contain identical bugs in order for it to be false.

There's also the fact that we have had a pretty solid grasp of the radiative properties of greenhouse gases since long before computers, both theoretically and empirically, and we know roughly how much of them is put into the atmosphere.

Where the models diverge is on far finer points than what is needed to justify the basic policy changes, which seem to be where we are stuck right now.



> an unreasonable number of independent codebases must have identical bugs

Entertaining[0] badpun’s skepticism: it is not necessary that they have identical bugs, only that their bugs yield similarly biased results.

For example, if a significant number of bugs are identified by their effect on the results, then bugs contributing to “wrong”-looking results might be more likely to be identified and fixed, while bugs that push results toward expectations quietly survive.

[0] In the Aristotelian sense.
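The selection effect described above can be illustrated with a toy simulation. Everything here is hypothetical (the constants, the detection rule, the bug distribution are all made up for illustration): each simulated model has one bug that shifts its result by a random amount, and a bug is only caught when it pushes the result farther from the expected value than the truth already is.

```python
import random

random.seed(0)

TRUE_VALUE = 1.0   # the quantity every model tries to estimate (hypothetical)
EXPECTED = 1.5     # what researchers expect to see (hypothetical biased prior)
N_MODELS = 10_000

surviving_biases = []
for _ in range(N_MODELS):
    bias = random.gauss(0, 0.5)  # this model's bug shifts its result by `bias`
    result = TRUE_VALUE + bias
    # A bug is noticed (and fixed) only when it makes the result look
    # "wrong", i.e. farther from the expectation than the true value is.
    if abs(result - EXPECTED) > abs(TRUE_VALUE - EXPECTED):
        continue  # bug flagged and fixed -> this model's bug does not survive
    surviving_biases.append(bias)

mean_bias = sum(surviving_biases) / len(surviving_biases)
print(f"mean surviving bias: {mean_bias:+.2f}")
```

Although the bugs start out unbiased (zero-mean), the surviving ones have a positive mean bias: the filter removed only the bugs that disagreed with expectations, which is the point of the parent comment.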


Not if the bugs are actually just bad/corrupted data. For example, the main dataset the IPCC is based on appears to contain all sorts of bad data that definitely make me skeptical of the conclusions the IPCC comes to (https://researchonline.jcu.edu.au/52041/).

If the conclusions about global warming are actually wrong, it's most likely because of bad or corrupted data in the datasets used.


Quite right. In fact, those statisticians in the posted article are surely mistaken. We know that using tricks to hide things is sound science.

http://www.realclimate.org/index.php/archives/2009/11/the-cr... http://www.realclimate.org/index.php/archives/2009/11/the-cr...

Since it was sound science in 2009, why would it not be now?



