
How do you determine quality? Right now, publishing below Q2, or even below Q1 in some cases, is the same as not publishing at all. I've seen grants that only accept D1 papers. As a curiosity, Gregor Mendel's original work was published in a small, newly created local Brno journal. It was cited three times in the following 35 years. By all metrics, it was low-quality work. Only 40 years after publication was it rediscovered as fundamental work.

That's the clean part. I've also seen papers published well above their merit just because the authors know the editors, or because the paper comes from a reputable lab so it must be good. The opposite is also true: if your work is from a small or unknown lab, or goes against the grain, you'll be lucky to get published at all.



> How do you determine quality?

Certainly not by citation counts. Citations have more to do with the authors' social or scientific network than with the worth of the papers.

> It was cited three times in the following 35 years. By all metrics, it was a low quality work.

It was not low-quality work (although it did spark quite a bit of controversy due to its perceived or actual issues). It was just an article written by an unknown author in an unknown journal.


FYI: Some have suggested that Mendel's data might be too perfect, indicating possible manipulation or fraud. (1) Mendel's results are unusually close to the expected ratios of 3:1 for dominant and recessive traits. Some argue that real-world data often show more variation. (2) In 1936, statistician Ronald A. Fisher analyzed Mendel's data and suggested it was "too good to be true." Fisher believed the results might have been altered or selectively reported to better match Mendel's hypothesis. (3) Despite these concerns, many of Mendel's experiments have been replicated, and his fundamental findings remain valid. Most scientists believe any discrepancies in Mendel's data were not due to intentional fraud but possibly unconscious bias or error.
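To make the statistics concrete, here's a sketch of the kind of check Fisher ran, using Mendel's published F2 seed-shape counts (5,474 round vs. 1,850 wrinkled): a chi-square goodness-of-fit test against the expected 3:1 ratio. This is only an illustration; Fisher's actual 1936 analysis pooled many experiments. His point was that fits this close, on test after test, are themselves statistically unlikely.

    # Chi-square goodness-of-fit test of Mendel's F2 seed-shape counts
    # against the expected 3:1 dominant:recessive ratio. Illustrative
    # sketch; Fisher's 1936 paper aggregated many such experiments.
    from scipy.stats import chisquare

    observed = [5474, 1850]                    # round, wrinkled (Mendel's counts)
    total = sum(observed)
    expected = [total * 3 / 4, total * 1 / 4]  # 3:1 Mendelian ratio

    chi2, p = chisquare(observed, f_exp=expected)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
    # chi2 ~ 0.263, p ~ 0.61: the observed counts sit extremely close
    # to expectation. One such result is unremarkable; a whole career
    # of them is what looked "too good to be true".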


> Also, the opposite is true: your work is from a small or unknown lab, or goes against the grain, and you'll be lucky if you get published at all.

Unfortunately, many papers that come from obscure labs and go against the grain are both bad and wrong. It's a hard problem.


If money is the problem, maybe money is the solution?

For example, open a betting market for study replication. If no methodological errors are found and the study replicates successfully, the authors get a percentage of the pool. The replication effort is run by a red team that gets paid the same percentage of the pool regardless of outcome, so their incentive is to pick studies that attract a lot of bets.

This would incentivize scrutinizing big findings like the one in the OP, where a failed replication would pay out big, but it would also act as a force for unearthing dark-horse findings in journals off the beaten path, where a successful replication would pay out big.
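A minimal sketch of the payout mechanism described above. All specifics here (the 10% red-team cut, the 10% author bonus, the function and variable names) are made-up assumptions to make the incentives legible, not part of the original proposal:

    # Hypothetical payout logic for a replication betting market.
    # The 10% red-team cut and 10% author bonus are illustrative
    # assumptions, not parameters from the comment above.

    def settle(bets_replicates, bets_fails, replicated,
               red_team_cut=0.10, author_cut=0.10):
        """Return payouts given stakes on each side and the result.

        bets_replicates / bets_fails: {bettor_name: stake} per side.
        replicated: True if the red team found no methodological
                    errors and the study replicated.
        """
        pool = sum(bets_replicates.values()) + sum(bets_fails.values())
        payouts = {"red_team": pool * red_team_cut}  # paid win or lose
        remaining = pool - payouts["red_team"]

        if replicated:
            payouts["authors"] = pool * author_cut   # authors paid only on success
            remaining -= payouts["authors"]
            winners = bets_replicates
        else:
            winners = bets_fails

        # The winning side splits what's left, pro rata by stake.
        winning_total = sum(winners.values())
        for bettor, stake in winners.items():
            payouts[bettor] = remaining * stake / winning_total
        return payouts

    # Example: a big finding attracts heavy "replicates" money, so a
    # lone skeptic profits disproportionately when it fails.
    print(settle({"optimist": 900.0}, {"skeptic": 100.0}, replicated=False))
    # {'red_team': 100.0, 'skeptic': 900.0}

Note how the red team's fee scales with the pool, not the outcome, which is exactly the incentive the comment describes: chase studies that draw lots of bets, not studies that are easy to confirm or debunk.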



