Peer review is boosting with three weak learners. If you think that has much credibility (after STAP, arsenic life, LaCour, etc.), you clearly haven't been paying attention.
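
For the non-ML crowd, the analogy: boosting combines a few weak classifiers into one (hopefully) stronger ensemble. A minimal sketch using scikit-learn, assuming it's installed (AdaBoost's default weak learner is a depth-1 decision stump):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    # Toy data; three weak "reviewers" whose weighted votes form the ensemble.
    X, y = make_classification(n_samples=200, random_state=0)
    clf = AdaBoostClassifier(n_estimators=3, random_state=0)  # three weak learners
    print(clf.fit(X, y).score(X, y))  # better than one stump, still quite fallible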

N.B. I review for various journals, but the process is far from foolproof. I do the best I can, but editors are free to override us in the interest of "impact". At least with preprints and PubMed Commons, anyone with a cogent rebuttal can present it. Cell Press must hate that...

Curious question: when you or someone else does a peer review, is it standard practice to actually double-check the math behind the paper or audit any of the data?


No. At least not in neuroscience/psychology.

I've never seen a review request that was accompanied by raw data--you typically get an unformatted version of the manuscript, along with the tables and figures. That's it.

The reviewers can comment on anything, but the comments tend to be pretty conceptual. For example, one might say that the manuscript claims X, but the authors need to rule out competing hypothesis Y. Good reviewers will suggest ways to do that (e.g., by running a specific control experiment or citing a paper that rules Y out). They might also ask you to explain why your data indicate A when some previously published work indicates !A.

To the extent that statistics get reviewed (not enough), reviewers typically comment on whether the methods are suitable for the specific application or whether their output is being interpreted correctly. However, it's exceedingly unlikely that someone will actually check whether a t-test on the data in Table 2 gives the p-value printed at the bottom (or whatever).
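
To make the missing check concrete, here's roughly all it would take, with scipy and hypothetical numbers standing in for the table:

    from scipy import stats

    # Hypothetical values standing in for "the data in Table 2".
    group_a = [5.1, 4.9, 5.4, 5.0, 5.2]
    group_b = [4.6, 4.8, 4.5, 4.9, 4.4]

    t, p = stats.ttest_ind(group_a, group_b)
    print(f"t = {t:.3f}, p = {p:.4f}")  # compare against the printed p-value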


Agreed.

I see it as both a cultural thing and a practical one (perhaps the practical part is what initially led to the culture). The practical reason is that most analyses are not easily reproducible: they exist as a smattering of data files and analysis scripts that coalesce into a magical, undocumented pipeline that spits out statistics and figures. Reproducibility has received more attention lately, but it's still an uphill battle against academics' frantic publishing schedules, their lack of familiarity with their own software, and the general lack of incentive for others to replicate results (for some, a reverse incentive exists).
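
The bar doesn't even have to be high: a single deterministic entry point that regenerates every number from raw data would already help reviewers enormously. A sketch of what that could look like (paths and column names are hypothetical, pandas assumed):

    import pandas as pd

    def main():
        # Hypothetical layout: raw data in, summary statistics out.
        df = pd.read_csv("data/raw/trials.csv")
        summary = df.groupby("condition")["rt"].agg(["mean", "sem"])
        summary.to_csv("results/summary_stats.csv")  # regenerated, never hand-edited
        print(summary)

    if __name__ == "__main__":
        main()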


Thanks -- that is interesting to hear.


For me, yes. For others, no idea.

I run code, I check derivations, and I usually sign my name.
