
So they shouldn't run it on the old papers, so as to not offend anyone's sensibilities?

Do we just declare said papers completely useless then? Or do they keep getting cited by new papers, and used to guide policy? If the latter, then not vetting them using the newer and better methodology would be unethical.



That would be jumping to a conclusion I didn't state. I think I already addressed this point above: I don't think what they did was wrong, and I don't care if people were offended.

But, since you brought it up: old papers are already dead; they cannot be fixed. They can only be referenced as prior work, or retracted in extreme cases. We're not talking about extreme cases here.

Statcheck can only help new papers; it cannot help old ones. Running it on old papers was done as a publicity stunt for statcheck, and nothing more. The authors said so.


> If the latter, then not vetting them using the newer and better methodology would be unethical.

On the contrary, it would be unethical to hold people to new standards that didn't exist when the work was done. If you have a beer and next week the law raises the drinking age to 65, should you go to jail?

This is simply not how things are done, in science or in society. Laws and standards change all the time; you are only subject to the ones in force when the work or action is performed and evaluated. For the purposes of scientific publication, we do not and will never revisit all prior work and formally re-judge it whenever standards or policies change.

You may be conflating publication policy with general scientific understanding. Old papers will always be informally evaluated under the thinking of the day. But that doesn't help the old papers; nothing can be done about them, since they are part of a fixed record that can't change. It only allows us to publish new papers. What will and does happen is that new papers are published refuting the old ones, and those new papers are subject to the new methodologies.


I don't get this argument.

There is no new standard here. This is just a tool that asks, "is there anything here that looks like a mathematical or statistical mistake?"

The expectation that a paper's calculations are correct and error-free is one that has always existed.

There is no "thinking of the day", there is just mathematical correctness or not. At most you could argue that we might learn that some thing we thought mathematically true is no longer so, and warrants us reviewing papers where it had previously been used, but that is not the case here.

It is not unreasonable to hold published research to the standard of correctness, and if a paper contains errors in its calculations, those should be fixed, regardless of when it was published or by whom.
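For concreteness, here's a minimal sketch of the kind of consistency check a tool like statcheck performs: recompute the p-value from a reported test statistic and its degrees of freedom, then flag a mismatch with the p-value printed in the paper. This is purely illustrative Python (statcheck itself is an R package); the tolerance and the example numbers are my own assumptions, not the tool's actual rules.

    # Illustrative sketch only -- not statcheck's actual implementation.
    # Recompute a two-tailed p-value from a reported t statistic and its
    # degrees of freedom, then compare it against the p-value reported
    # in the paper.
    from scipy import stats

    def check_t_report(t_value, df, reported_p, tolerance=0.005):
        """Return True if the reported p-value matches the recomputed one."""
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p
        return abs(recomputed_p - reported_p) <= tolerance

    # "t(28) = 2.20, p = .04": the recomputed p is roughly .036, so this is
    # consistent within rounding; a reported p of .01 would be flagged.
    print(check_t_report(t_value=2.20, df=28, reported_p=0.04))   # True
    print(check_t_report(t_value=2.20, df=28, reported_p=0.01))   # False

The real tool, as I understand it, also has to parse those numbers out of the paper's text and covers other test types, but the core check is just arithmetic recomputation, not a new methodological standard.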


You're not holding people to new standards. You're holding their research to new standards. Which is the only sensible course of action.

And no one suggested that those papers should be rewritten. But insofar as there's some known problem with them, why is it a bad thing to have a public record stating that much?


We may be agreeing and mis-communicating, or agreeing violently, as I've heard it called, so let's get specific. What are you suggesting should happen to a paper when it's shown to have errors?

Nothing at all is wrong with adding new information to the public record stating the issues; that's what I mentioned is already happening -- new papers reference and demonstrate the weaknesses of old papers. In my field, as I suspect in most, it's a time-honored tradition to do a small literature review in the introduction of a paper that mainly dismisses all related previous work as not good enough to solve the problem you're about to blow everyone's minds with.

In my mind, nothing is wrong with what the statcheck authors did either. My one and only point at the top was that it's not surprising it ruffled some feathers, and that it didn't have to ruffle any feathers. That only happened because the results were made public without solicitation. @wyager was trying to paint the situation as a dichotomy between rude and unscientific, as if rude were the only option. Rude is not the only option.

If the statcheck authors hadn't published the review of old papers and contacted all the old authors, then I'm pretty sure two things would have happened: 1) this wouldn't have ruffled any feathers, and 2) it wouldn't have gotten much attention, and we wouldn't be talking about it.


> new standards that didn't exist when the work was done

Being correct is not a new standard.

> This is simply not how things are done, in science or in society.

Oh, I say! Positively indecent! A moral outrage! We can't have our morals compromised by scientific objectivity!

> we do not and will never revisit all prior work and formally re-judge whenever standards or policies change.

Have you ever heard of Principia Mathematica?

> nothing can be done about the old papers

Except to try and figure out when they're wrong, as this bot is doing.


Please try to judge comments in their context and not look for excuses to try and humiliate someone with sarcasm. I was discussing standards with @int_19h, who brought up the issue of new methodologies. Of course being correct isn't a new standard, I agree with you, but that's not what was being discussed.

Of course morals shouldn't be compromised by scientific objectivity. Again, you're arguing a straw man - that's not the issue I was talking about.

I have stated multiple times, including in my first reply to you, that I think the bot is fine. My argument in context is that a paper cannot change once it has been published. Do you disagree with that? That doesn't have any bearing on whether bots or people find and publish errors later. It does have a bearing on how people will respond to PR campaigns to publish errors when the authors can do nothing about them. Statcheck will do good things for authors who get to use it before they publish rather than after.

Maybe you're not reading all of what I wrote? Maybe I hurt your feelings?


> My argument in context is that a paper cannot change once it has been published.

I think this statement is, if not completely untrue, then a gross misrepresentation of how existing papers are actually treated.

First of all, papers, like all publications, have errata published all the time. These errata may be incorporated into future printings, or published in a separate location that can be looked up by people using the paper. Publishing errata is not a new occurrence, and although the original paper perhaps technically remains published unchanged, it is disingenuous to claim that this means the paper cannot change.

Modern publishing methods, such as arXiv, allow new versions of a paper to be uploaded, literally changing the published version of the paper.

As you point out yourself, literature reviews should point out issues with existing papers. Do you think the original authors throw their hands in the air, thinking to themselves "oh well, it's published, nothing can be done"? Of course not! If they are still engaged with the subject, they defend the paper, correct obvious mistakes, or continue experimentation or investigation in response.

To claim that errors should not be pointed out simply because the original authors can do nothing about the errors is diversionary at best. Of course errors in published results should be made public. How else can we trust any of the works?

If errors in existing research are always hidden, squelched, swept under the rug, we have no reason to trust it. It is the openness of research - publishing in the open, criticising in the open, discussing in the open - that allows us to trust research in the first place. Indeed, that trust has already been eroded by revelations of systemic issues like p-hacking within published research.

You may be suggesting that posting these analyses to the individual papers was the wrong way to do it, that it would be better done in a literature review or paper.

I completely disagree.

It is essential that anyone looking to reference a paper with a glaring mistake in it (which many of the flagged papers contain) is able to see that mistake and correct for it. Leaving the old research be just ensures that incorrect ideas propagate and have more of an impact than they ever should.




