
If, say, you find that your results are not statistically significant at a reasonable p-level, you don't get to claim your conclusion is right in "broad strokes" (maybe with some "significant errors" and a bit of "sloppy statistics", but broadly true), right?

Nope, instead it's not statistically significant, so there's no reason to think it wasn't just random chance that made it 'close' to statistically significant.
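To make the "just random chance" point concrete, here's a minimal simulation sketch (the group sizes, thresholds, and z-approximation are my own illustrative choices, not anything from the thread): when there is genuinely no effect, a predictable fraction of experiments will still land "close" to significance purely by chance.

```python
import random, statistics, math

random.seed(0)

def two_sided_p(a, b):
    # Welch-style z approximation for the difference in means;
    # adequate for n=50 per group in this illustration.
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # two-sided p via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Both groups drawn from the SAME distribution: any "near significant"
# p-value here is pure chance.
trials = 2000
near_misses = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if 0.05 <= two_sided_p(a, b) < 0.10:
        near_misses += 1

print(f"fraction of null experiments 'close' to significant "
      f"(0.05 <= p < 0.10): {near_misses / trials:.2%}")
```

Since p-values are roughly uniform under the null, about 5% of no-effect experiments land in that "so close" band by definition, which is exactly why proximity to the threshold isn't evidence of anything.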

But if your results are statistically validated, but only because you used statistics incorrectly, you've crossed the line to broadly-true-but-with-sloppiness?

Nope.

There is room for qualitative research. I like qualitative research. But qualitative research has, rightly, a different sort of impact.

If you are claiming your research is quantitative, then you need to live and die by good statistics. That's the implicit promise of providing the statistics in the first place, otherwise why do statistical calculations at all?



What if someone claims their p=0.05 but their control group wasn't quite as representative as they assumed and their p is really something more like 0.1?
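This scenario is easy to demonstrate by simulation. A rough sketch (the 0.2-sd bias, group sizes, and z-test are assumptions for illustration only): if the control group is systematically unrepresentative, a test run at a nominal alpha of 5% produces false positives far more often than 5%.

```python
import random, statistics, math

random.seed(1)

def two_sided_p(a, b):
    # Welch-style z approximation for the difference in means.
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The treatment has NO real effect, but the control sample is mildly
# unrepresentative: its mean is shifted by 0.2 sd (an assumed bias).
trials = 2000
false_pos = 0
for _ in range(trials):
    treatment = [random.gauss(0.0, 1) for _ in range(50)]
    control = [random.gauss(-0.2, 1) for _ in range(50)]  # biased sample
    if two_sided_p(treatment, control) < 0.05:
        false_pos += 1

print(f"nominal alpha: 5%, actual false-positive rate: "
      f"{false_pos / trials:.1%}")
```

Even a modest sampling bias like this pushes the real error rate well above the reported one, so "p = 0.05" on paper can behave like p = 0.1 or worse in practice.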

Or what if one of the experiments in a paper is well-designed and well-executed and supports a hypothesis with very high certainty but some of the other experiments were sloppy or botched? Should the conclusions of the entire paper be labeled as "wrong"?

I find it pretty funny when critiques of statistical rigor in science end up leaning on fuzzy language like "mostly" and "wrong".



