
Bayesianism is a 'grand unified theory of reasoning' which holds that all of science should be based on assigning (and updating) probabilities for a list of possible outcomes; the probabilities are supposed to indicate your subjective degree of confidence that a given outcome will occur.

Yudkowsky's 'Less Wrong' group of 'rationality' followers tried to force-fit all of science into the Bayesian framework. And of course it doesn't work at all.

I think Andrew Gelman's criticisms are right on the mark.

Probability theory was designed for reasoning about external observations, i.e. sensory data (for example, "a coin has a 50% chance of coming up heads"). For predicting things in the external world, it works very well.
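
To make that concrete, here's a minimal sketch of a textbook Bayesian update on a coin's bias (a Beta-Bernoulli model in Python; the prior and the flips are just illustrative):

    # Bayesian update for a coin's bias with a Beta-Bernoulli model.
    # Start from a uniform Beta(1, 1) prior over P(heads) and update
    # the parameters as flips are observed (1 = heads, 0 = tails).

    def update(alpha, beta, flips):
        heads = sum(flips)
        tails = len(flips) - heads
        return alpha + heads, beta + tails

    alpha, beta = 1, 1  # uniform prior
    alpha, beta = update(alpha, beta, [1, 1, 0, 1, 0, 1, 1])
    print(alpha / (alpha + beta))  # posterior mean for P(heads): ~0.667

For external, repeatable observations like this, the machinery is uncontroversial and works exactly as advertised.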

Where it breaks down is when you try to apply it to reasoning about your own internal thought processes. It was never intended to do this. As Gelman correctly points out, it is simply invalid to try to assign probabilities to mathematical statements or theories, for instance.

You see 'Less Wrong' followers wasting years of their lives engaging in the most unbelievable and ludicrous intellectual contortions to try to force-fit all of science into Bayesianism.

Go to the 'Less Wrong' blog and you can read reams and reams of these massively complicated and contorted ideas, including such hilarious nonsense as 'Updateless decision theory' and 'Timeless decision theory'.

---

David Deutsch, in his superb books 'The Fabric of Reality' and 'The Beginning of Infinity', argued for a different theory of reasoning from Bayesianism. Deutsch (correctly, in my view) pointed out that real science is not based on probabilistic predictions, but on explanations. So real science is better thought of as the growth or integration of knowledge, rather than as probability calculations.

When dealing with internal models or hypotheses, I think the correct solution is not to assign probabilities but rather a 'conceptual coherence' value: for instance, rather than saying 'outcome x has probability y' (where x is a hypothesis), you should say 'concept x has conceptual coherence value y'.

Conceptual coherence is the degree to which a hypothesis is integrated with the rest of your world-model, and I think it captures in mathematical terms the ideas Deutsch was trying to express.

Probabilities should be viewed as just special cases of conceptual coherence (in the cases of outcomes where you are dealing with external observations or sensory data, Bayesianism is perfectly valid).
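
I don't have a full formalization, but here's a toy sketch (in Python) of the shape I have in mind; the graph-of-beliefs representation and the averaging rule are both made up purely for illustration:

    # Toy 'conceptual coherence' score: rate a hypothesis by how well it
    # agrees with the beliefs it connects to in an existing world-model.
    # The world-model maps pairs of beliefs to an agreement in [-1, 1];
    # both the representation and the scoring rule are hypothetical.

    def coherence(hypothesis, world_model):
        links = [agreement for (a, b), agreement in world_model.items()
                 if hypothesis in (a, b)]
        if not links:
            return 0.0  # an isolated hypothesis coheres with nothing
        return sum(links) / len(links)

    world_model = {
        ("evolution", "common descent"): 0.9,
        ("evolution", "young earth creationism"): -0.8,
        ("common descent", "fossil record"): 0.7,
    }
    print(coherence("evolution", world_model))  # (0.9 - 0.8) / 2 = 0.05

A real version would obviously need much more structure, but the point is that the score measures integration with the rest of the model, not the long-run frequency of an outcome.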

Then all of the problems with probability go away, and none of the massively complicated theories expounded on 'Less Wrong' are necessary ;)


