
There is also a big divide in statistical rigor between "science" research and "medical" research. In their defense, I think it's just extremely difficult for most medical research studies to get the kind of randomization or sample size (N) needed for reliable statistics.
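As a rough illustration of why the N is hard to come by, here's a minimal sketch (my own numbers, not from the comment) using the standard normal-approximation formula for a two-sample comparison; the effect sizes, alpha, and power are assumptions:

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison,
    using n ~= 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# A "small" standardized effect (Cohen's d = 0.2) already needs roughly
# 390+ patients per arm; many clinical populations aren't that large.
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: ~{n_per_group(d):.0f} per group")
```

Recruiting hundreds of eligible patients per arm, with proper randomization and blinding, is a very different problem from collecting a few thousand more data points in a lab or an observational dataset.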

Also, regarding John Ioannidis's essay (not paper):

First, he uses the blanket term "research", but his analysis (or at least his examples) seems focused primarily on medical research studies. Second, I'm not sure he clearly defines what it means for a finding to be "false", or for "most" published research to be "false".

Let's say there is clearly a right and a wrong answer to a question, and up until yesterday, publications A, B and C had concluded the wrong answer. Then someone releases a newer, more rigorous finding D that refutes A, B and C conclusively and arrives at the correct answer. I wouldn't consider this particular field to be 75% wrong after the publication of D (though it could accurately have been described as close to 0% conclusive before D). For any particular line of inquiry, the quality of the body of research seems like it should be judged by its strongest exemplar, not its average.
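A toy sketch of the distinction (hypothetical rigor scores, my own labels): paper-counting says 3 of 4 publications are wrong, while the state of the field is whatever the most rigorous study concluded.

```python
# Four hypothetical publications on the same yes/no question; only D
# used a rigorous design and reached the correct answer.
publications = [
    {"name": "A", "conclusion": "wrong", "rigor": 0.3},
    {"name": "B", "conclusion": "wrong", "rigor": 0.4},
    {"name": "C", "conclusion": "wrong", "rigor": 0.4},
    {"name": "D", "conclusion": "right", "rigor": 0.9},
]

# Naive paper-counting: 3 of 4 papers are wrong -> "75% of research is false".
wrong_fraction = sum(p["conclusion"] == "wrong" for p in publications) / len(publications)

# Best-exemplar view: the field's current answer is the conclusion of the
# most rigorous study, so the line of inquiry now has the right answer.
best = max(publications, key=lambda p: p["rigor"])

print(f"Fraction of papers that are wrong: {wrong_fraction:.0%}")  # 75%
print(f"State of the field (best study):   {best['conclusion']}")  # right
```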




