
Well, scientists are people too, so they have feelings and are not perfect. I think there are polite ways to provide criticism and corrections, hopefully without humiliating people with good intentions. If it's a simple mistake, they can email the author and suggest an erratum. If it's more serious, then it's common to write a response in the journal. I think more public forms of criticism are perceived as bullying because a general audience can't typically judge the magnitude of the error, since they're not scientists working in the field, so it can be unjustly damaging to someone's reputation.


If someone objectively screwed up a published study, there's no room for feelings or social consideration. People might be wasting their precious time and money pursuing impossible goals based on the incorrect study.

Let's say someone finds a mistake in a math paper. Do they politely ask the author to fix it? What happens if the author ignores them? The scientific community doesn't have time to coddle and pursue every mistaken author. Everyone makes mistakes, and the scientific community knows that, but those mistakes need to be brought into the open, not (maybe) resolved behind closed doors.


I think there is some nuance here; not all screw ups are equal, nor are all ways of correcting mistakes equal.


It's not coddling as much as professional courtesy. Mistakes are often brought up at conferences and in peer review, but most scientists within a specific research area try to be on good terms with one another and don't see value in publicly shaming colleagues for their mistakes.


There are two options here:

Use a bot to find tons of mistakes automatically and risk coming across as rude.

Or

Let social trivialities dominate scientific discourse and let most of those mistakes go unchecked forever because there's no feasible way to "politely" address hundreds of authors who made mistakes and keep checking back to make sure they actually fixed the mistake.

The former is clearly the preferable choice. Some individual scientists will suffer for it, but the scientific community as a whole will benefit greatly.


There is a third option that's better than either one you proposed.

Get people writing new papers to use statcheck before they publish.

The only rudeness here was an avoidable choice - publishing statcheck's results on a huge set of already published papers. The statcheck authors chose to do that for exposure (they even said so); it was not for posterity or the scientific well-being of the community.

I don't personally think what they did was wrong, and I don't particularly care that some people felt it was rude. But the fact of the matter is that the rude part was completely avoidable.


And what happens when a new fact-checking algorithm comes out? What I said in my comment.


Only if they're also rude about it and run it on a huge set of old papers, and do a big PR campaign to get attention.

Otherwise, the only thing that happens is papers quietly get better and everyone on all sides is happy.


So they shouldn't run it on the old papers, so as to not offend anyone's sensibilities?

Do we just declare said papers completely useless then? Or do they keep getting cited by new papers, and used to guide policy? If the latter, then not vetting them using the newer and better methodology would be unethical.


That would be jumping to a conclusion I didn't state. I think I already addressed this above: I don't think what they did was wrong, and I don't care if people were offended.

But, since you brought it up - old papers are already dead; they cannot be fixed. They can only be referenced as prior work, or retracted in extreme cases. We're not talking about extreme cases here.

Statcheck can only help new papers; it cannot help old papers. Running it on old papers was done as a publicity stunt for statcheck, and nothing more. The authors said so.


> If the latter, then not vetting them using the newer and better methodology would be unethical.

On the contrary, it would be unethical to hold people to new standards that didn't exist when the work was done. If you have a beer and then next week the law changes the drinking age to 65, should you go to jail?

This is simply not how things are done, in science or in society. Laws and standards change all the time; you are only subject to the laws or standards that exist when the work or action is performed and evaluated. For the purposes of scientific publication, we do not and will never revisit all prior work and formally re-judge it whenever standards or policies change.

You may be conflating publication policy with general scientific understanding. Old papers will always be informally evaluated under the thinking of the day. But that doesn't help the old papers; nothing can be done about them. They are part of a fixed record that can't change; re-evaluation only allows us to publish new papers. What will and does happen is that new papers get published refuting old papers, and those new papers are subject to the new methodologies.


I don't get this argument.

There is no new standard here. This is just a tool that says, "is there anything that looks like a mathematical/statistical mistake".

The expectation that a paper's calculations are correct and error free is one that has always existed.

There is no "thinking of the day", there is just mathematical correctness or not. At most you could argue that we might learn that some thing we thought mathematically true is no longer so, and warrants us reviewing papers where it had previously been used, but that is not the case here.

It is not unreasonable to hold published research to the standard of correctness, and if a paper contains errors in its calculations these should be fixed - regardless of when it was published or by whom.
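To make concrete what kind of check is meant here, below is a minimal sketch of a statcheck-style consistency test: recompute the p-value from a reported t statistic and its degrees of freedom, and flag it if it disagrees with the reported p-value beyond a small tolerance. This is only an illustration in Python/SciPy, not statcheck's actual implementation; the function name and tolerance are made up for the example.

    # Illustrative statcheck-style check (not the real statcheck code):
    # recompute a two-tailed p-value from a reported t statistic and its
    # degrees of freedom, then compare it with the p-value the paper reports.
    from scipy import stats

    def check_t_test(t_value, df, reported_p, tolerance=0.005):
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p for t(df)
        consistent = abs(recomputed_p - reported_p) <= tolerance
        return recomputed_p, consistent

    # Example: a paper reports "t(28) = 2.10, p = .03"
    recomputed, ok = check_t_test(t_value=2.10, df=28, reported_p=0.03)
    print(f"recomputed p = {recomputed:.4f}, consistent with reported: {ok}")
    # recomputed p comes out around 0.045, so the reported p = .03 gets flagged

The real tool also has to parse results out of a paper's text, handle other test types (F, chi-square, r), and allow for rounding of the reported statistic itself; the sketch only shows the core arithmetic being verified.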


You're not holding people to new standards. You're holding their research to new standards. Which is the only sensible course of action.

And no-one suggested that those papers should be rewritten. But insofar as there's some known problem with them, why is it a bad thing to have a public record stating that much?


We may be agreeing and mis-communicating, or agreeing violently, as I've heard it called, so let's get specific. What are you suggesting should happen to a paper when it's shown to have errors?

Nothing at all is wrong with adding new information to the public record stating the issues; that's what I mentioned is already happening -- new papers reference and demonstrate the weaknesses of old papers. In my field, as I suspect in most, it's a time-honored tradition to do a small literature review in the introduction section of a paper that mainly dismisses all related previous work as not good enough to solve the problem you're about to blow everyone's minds with.

In my mind, nothing is wrong with what the statcheck authors did either. My one and only point at the top was that it's not surprising it ruffled some feathers, and that it didn't have to ruffle any feathers. That only happened because the results were made public without solicitation. @wyager was trying to paint the situation as a dichotomy between rude and unscientific, as if rude were the only option. Rude is not the only option.

If the statcheck authors hadn't published the review of old papers and contacted all the original authors, then I'm pretty sure two things would have happened: 1) this wouldn't have ruffled any feathers, and 2) it wouldn't have gotten much attention, and we wouldn't be talking about it.


> new standards that didn't exist when the work was done

Being correct is not a new standard.

> This is simply not how things are done, in science or in society.

Oh, I say! Positively indecent! A moral outrage! We can't have our morals compromised by scientific objectivity!

> we do not and will never revisit all prior work and formally re-judge whenever standards or policies change.

Have you ever heard of Principia Mathematica?

> nothing can be done about the old papers

Except to try and figure out when they're wrong, as this bot is doing.


Please try to judge comments in their context and not look for excuses to try and humiliate someone with sarcasm. I was discussing standards with @int_19h, who brought up the issue of new methodologies. Of course being correct isn't a new standard, I agree with you, but that's not what was being discussed.

Of course morals shouldn't be compromised by scientific objectivity. Again, you're arguing a straw man - that's not the issue I was talking about.

I have stated multiple times, including my first reply to you, that I think the bot is fine. My argument in context is that a paper cannot change because it has been published. Do you disagree with that? That doesn't have any bearing on whether bots or people find & publish errors later. It does have a bearing on how people will respond to PR campaigns to publish errors when nothing can be done about it on the part of the author. Statcheck will do good things for authors who get to use it before they publish rather than after.

Maybe you're not reading all of what I wrote? Maybe I hurt your feelings?


> My argument in context is that a paper cannot change because it has been published.

I think this statement is, if not completely untrue, grossly misrepresenting how existing papers are interacted with.

First of all, papers, as with all publications, have errata published all the time. These errata may be included in future printings, or published in a separate location that can be looked up by people using the paper. Publishing errata is not a new occurrence, and although perhaps technically the original paper remains published unchanged, it is disingenuous to claim that this means the paper cannot change.

Modern publishing methods, such as the arXiv, allow for new versions of the paper to be uploaded, literally changing the published version of the paper.

As you point out yourself, literature reviews should point out issues with existing papers. Do you think that the original authors throw their hands in the air, thinking to themselves "oh well, it's published, nothing can be done"?? Of course not! If they are still engaged with the subject they either defend the paper, correct obvious mistakes, or continue experimentation or investigation in response.

To claim that errors should not be pointed out simply because the original authors can do nothing about the errors is diversionary at best. Of course errors in published results should be made public. How else can we trust any of the works?

If errors in existing research are always hidden, squelched, swept under the rug, we have no reason to trust it. It is the openness of research - publishing in the open, criticising in the open, discussing in the open - that allows us to trust research in the first place. Indeed, that trust is already eroded by revelations of systemic issues like p-hacking within published research.

You may be suggesting that posting these analyses to the individual papers was the wrong way to do it, that it would be better done in a literature review or paper.

I completely disagree.

It is essential that anyone looking to reference a paper with a glaring mistake in it (which many of those affected are) is able to see that mistake and correct for it. Leaving the old research be is just ensuring that incorrect ideas are allowed to propagate, and have more of an impact than they ever should.


Science being a social activity, I think you have to justify the presumption that we are talking about social trivialities here.


>Science being a social activity

Hoo boy. Research might be a social activity, but science is the application of probabilistic reasoning to evidence collection. Science isn't a "social activity" any more than topology is. Any social complications are entirely incidental.


Yes, but that probabilistic reasoning is applied by humans with feelings which sometimes get in the way of their reason. Wanting to have those feelings simply go away is unrealistic.


What does that have to do with calling out papers for being incorrect?


Sorry for replying so late.

Your findings have to be accepted by others. If they're not accepted, it's like they don't exist. Think of all the great theories that didn't take off at first because nobody accepted them (for various reasons).

When you attack someone, they are less likely to listen to you, even if you are offering valuable feedback. They care about saving face, so they will focus on defending themselves.

I realize this can be frustrating, because it means truth doesn't always prevail. However, it's what we have to work with: our emotional brains.


I find it hard to understand how something that is clearly an automated bot, posting comments that are 100% factual ("I checked this paper; N things looked wrong"), could be perceived as humiliating.

I guess it would be the case if you assumed that all people are perfect and never make mistakes. I would hope that psychologists, of all people, would know better than that.

So if it's not that, where's the humiliation part in pointing out math mistakes?


I don't necessarily think this is bullying or humiliating, but it's silly to think that being done by a "bot" and being "factual" has anything to do with it. If a malware "bot" secretly posted people's porn viewing history to their facebook page, would that not be humiliating for those people? Or would it not because it was factual and done by a "bot"? Clearly it would be.

Saying it was done by a bot is no excuse for anything. The bot didn't spontaneously pop into existence - somebody created it and decided what behavior it would have.

In this particular case, whoever created the bot could easily have made it email the authors of the mistaken papers and given them a chance to correct the mistakes before outing everybody in public.


Being done by a bot is not an excuse in general. However, when pointing out objective factual mistakes, I think there is a difference between your colleague pointing it out and an automated tool pointing it out. Even if the result is the same, the former can be embarrassing, hence why we learn to phrase negative responses in a roundabout way. But politeness cannot be expected of a brainless machine, and so it can deliver simple facts.

So it seems that it boils down to public disclosure before private?

Out of curiosity, how would you imagine the "correct the mistakes" procedure after private disclosure? The author cannot just edit the paper; it's already published. They would have to publish errata, which draws just as much attention. And, from an ethical perspective, if an author is notified of a mistake found by an autonomous tool, wouldn't they be required to disclose the methodology when publishing the errata? So I'm not sure how that whole situation is fundamentally different from just dumping it in public.


I don't care about their reputations. It's not a school mascot contest. These are scientific papers.


Yes, and scientists don't care. Science, as it has been explained to me by scientists, is a penis length contest, where impact factor and prestige serve as ersatz penises.


The best part is that any fool can see that impact factors are a terrible proxy for real impact, and in fact the editors of several glam journals have pointed this out (disclosure: I have plenty of glam papers on my CV*).

But since it props up a myth that senior faculty like to believe (i.e. their shitty old Cell paper is great because Cell is/was great) that's the yardstick. As the director of a CCC once told me, it takes too long to see the impact of papers (citations piling up) so the JIF is used as a proxy.

Sort of like how it takes too long to do good science, so some people just publish whatever garbage they can sneak past the editors (ha fucking ha only serious). There is a LOT of crap in the literature as a result.



