
Read through most of the article just for "people can abuse priors"? Come on. Anything, used wrongly, can promote superstition and pseudoscience.


I think you're missing the broader argument, which is about using 'mathy' concepts to dress up poor reasoning. Obviously priors matter, but what matters most of all is how good and complete your evidence is. Using a mathematical formula to lend credence to weak evidence (through liberal use of assumptions) is a hallmark of pseudoscience. The same could be said of many abuses of statistics; Bayes' theorem is merely one good example of this.
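Bayes' theorem makes this failure mode easy to demonstrate. Here's a minimal sketch (all numbers invented for illustration): the same weak piece of evidence yields wildly different posteriors depending on an assumed prior, so the formula's apparent rigor is only as good as the assumptions fed into it.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Weak evidence: only twice as likely under H as under not-H.
# The assumed prior, not the evidence, drives the conclusion.
for prior in (0.01, 0.5, 0.9):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, 0.6, 0.3):.3f}")
```

With a likelihood ratio of only 2, the posterior lands wherever the prior was already pointing - which is exactly how liberal assumptions smuggle a conclusion into a "mathematical" argument.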


Is using mathy concepts to dress up poor reasoning worse than not using anything to back up your reasoning? At least you can point out exactly what's wrong with the mathy reasoning.

A colleague of mine says 'Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse'


> 'Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse'

Agreed! Leaving the pseudoscience example aside - since there are strong emotions involved - we can clearly see that it is indeed useful and necessary to make decisions under uncertain or incomplete information. This is advantageous whenever the cost of inaction is expected to exceed the cost of backtracking a less-than-perfect decision, which is often the case.

Let's say... project management. If you take the time to find out that your project requires 100 tasks, 30 of which lie on your critical path, you can argue about whether each task will take one day or one week to complete, and you can debate whether adding a 3rd or 4th member to the team will significantly speed up the completion date. But you will definitely be in better shape than if your PM just cooks up some 5-page spec overnight and commits to having it running in beta test by the end of the month before even announcing it to the team...
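For a rough sense of what even crude per-task guesses buy you, here's a minimal Monte Carlo sketch (the 30 critical-path tasks come from the example above; the 1-to-5-day uniform durations are invented): summing sampled durations many times gives a usable distribution for the whole schedule, numbers-out-of-arse and all.

```python
import random

random.seed(0)

def simulate_critical_path(n_tasks=30, trials=10_000):
    """Sample total duration of the critical path many times.

    Each task's duration is a guess: uniform between 1 and 5 days.
    """
    totals = sorted(
        sum(random.uniform(1, 5) for _ in range(n_tasks))
        for _ in range(trials)
    )
    return totals

totals = simulate_critical_path()
p50 = totals[len(totals) // 2]       # median completion time
p90 = totals[int(len(totals) * 0.9)] # 90th-percentile completion time
print(f"median ~{p50:.0f} days, 90th percentile ~{p90:.0f} days")
```

Even this toy model gives the PM something the overnight spec doesn't: a spread of plausible dates to argue about, rather than a single committed one.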

Which itself will be better than having all your potential contracts snatched by competitors that never do any estimation at all but are very good at pulling themselves out of tarpits of their own making.


"Is using mathy concepts to dress up poor reasoning worse than not using anything to back up your reasoning?"

I believe so. If your belief is baseless, or based on flimsy evidence or simple bias, it's best if that's obvious. Dressing up weak reasoning to seem stronger is a form of lying. It's what we call sophistry. A big part of the problem is that a lot of people don't understand the math well enough to point out what's wrong with it, or have a bias towards explanations that seem complex or sophisticated but really aren't.

It's true that sometimes we have to make a decision based on poor or no evidence, but when that's the case, it should be clear that it is. Dressing up the argument only obfuscates that.


Honesty is the ultimate issue here. If my reasoning is shoddy, but I plug it into some math apparatus, then it'll likely make my errors obvious. If my reasoning is very inaccurate and the data uncertain, being explicit about that uncertainty can at least make the results salvageable. Scott Alexander argues for this position quite well in [0].

Humans can lie with statistics well. But they can lie with plain language even better.

[0] - http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-...


"If my reasoning is shoddy, but I plug it into some math apparatus, then it'll likely make my errors obvious."

That's pretty clearly untrue. I remember reading a study recently where the p value was less than .01 or something like that, but where the experimental design was clearly flawed. The correlation wasn't the correlation they thought they had. But because the math looked good and it was easier than actually reviewing the experiment, it was tempting to take the study at face value.

I've read Scott's essay before and I understand his argument, but I don't think it works. While you might be able to avoid some bad reasoning simply by being more systematic, you can also strengthen bad arguments with a faulty application of statistics. What Scott doesn't do is provide an analysis of how often each of these things happens. I'd argue that for each time a quick application of statistics saves someone from a bad intuitive judgment, a misapplication of statistics encourages a bad judgment at least once, if not more often.

Understand that my argument here is not that one should never use statistics or even Bayes theorem, but that a naive or lazy application can be worse than no application.


I see your point and I agree.

For myself, I try to limit myself to the mathematical apparatus I feel comfortable with. I know that if I were to open a statistics textbook, I could find something to plug in my estimates and reach a conclusion, and I'm pretty sure the conclusion would be bullshit. I learned it the hard way in high school - I remember the poor results of trying to solve math and physics homework assignments on topics I didn't understand yet. The mistakes were often subtle, but devastating.


This is a general argument against statistics. Or math, in general. Yes, dressing your bullshit in math can make people believe you more, but it doesn't change the fact that you're lying. Are we supposed to stop using math for good because evil people are using it for evil?


No, it's an argument against using statistics without first considering the strength of your data.


Then you should take the Bayesian side, because Bayesians look at the data first, and they take their data as given rather than taking a null hypothesis as given. They don't just blindly go off and run a test (which implicitly assumes a particular prior that may be wildly inappropriate) and see what it says about the likelihood of their already-observed data being generated by the test's assumed data generator.


But being a good Bayesian makes you do exactly this. The process of describing priors makes it obvious that you need to do a sensitivity analysis to check how much the prior is influencing the conclusions...
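As a minimal sketch of such a sensitivity analysis (priors and data invented), a conjugate Beta-Binomial model makes the prior's influence explicit: with only ten observations, sweeping over a few priors shows how far each one drags the posterior mean.

```python
successes, failures = 7, 3  # invented data: 7 hits in 10 trials

def posterior_mean(a, b, s, f):
    """Conjugate Beta-Binomial update: Beta(a, b) prior -> Beta(a+s, b+f)."""
    return (a + s) / (a + b + s + f)

# Sweep over several (invented) priors to see how much each moves the result.
priors = {
    "uniform Beta(1, 1)":     (1, 1),
    "skeptical Beta(2, 18)":  (2, 18),
    "optimistic Beta(18, 2)": (18, 2),
}
for name, (a, b) in priors.items():
    mean = posterior_mean(a, b, successes, failures)
    print(f"{name}: posterior mean = {mean:.2f}")
```

With so little data the posterior mean ranges from 0.30 to 0.83 across these priors - exactly the prior dominance a sensitivity analysis is meant to surface, and a signal that you need more data before the conclusion means much.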


> being a good bayesian

This is exactly what weirds people out about LessWrong folk. They talk about a tool as if it's a religion.


It's a running joke there.

People need to get over it. The LW crowd is a group of people studying a pretty specific set of subjects, focused around a single website. It's typical for such a group to develop its own jargon and insider jokes, which may look weird from the outside. It's normal.


"Good Bayesian" in that context just means being an able user of Bayesian statistics, not necessarily holding any particular philosophical belief about what they mean.


How can you evaluate the strength of your data without using statistics? You've created a catch-22.

I'll speculate you have some sort of meta-heuristic and only apply this catch-22 under those circumstances? E.g. this catch-22 only applies to weird and socially disapproved topics?


On the other hand, one could argue that whenever the Church of Scientism sees someone using one of their favorite tools to argue in favor of a subject considered taboo, said church declares the use of said tool to be "invalid" or "out of scope".



