
Grad students / postdocs / human lab rats aren't scum; the incentives just aren't in place to promote good behavior (such as calling other researchers out on their bullshit). If you're trying to land a coveted tenure-track job, you can't afford to piss off $senior_tenured_researcher_at_prestigious_institution, since $senior could blacklist you from the already small set of universities that might hire you. Sometimes things work out despite pissing off major powers (Carl Sagan technically had to "settle" for Cornell after being denied tenure at Harvard, in no small part because of a bad recommendation letter from Harold Urey [0]), but not often.

Even if you do manage to get a tenure-track job, you pretty much have to keep your head down for seven years before your position is secure.

And once you have tenure, you can still be attacked vociferously. Look at what happened when Andrew Gelman rightly pointed out that Susan Fiske (and other social psychologists) had been abusing statistics for years. Rather than a hearty "congratulations", he was called a "methodological terrorist" and a great hubbub ensued [1].

When framed against these circumstances, it should be evident that there is literally nothing to gain and everything to lose from sending out a short e-mail pointing out that someone's model doesn't work.

[0] https://www.reddit.com/r/todayilearned/comments/15m8om/til_c...

[1] https://www.businessinsider.com/susan-fiske-methodological-t...



I'm a researcher myself, and I guess this is one of those "do the ends justify the means?" scenarios... Call out bad research and its perpetrators, and science loses a scientist who actually wants to do good work. Or stay quiet, then watch yourself rationalize worse decisions later for the sake of your own research, slowly becoming as corrupt as they were and realizing that a lot of the work you cite could be as bad as (or worse than) the papers you helped get published.

I really believe we need a better way. Privately funded / bootstrapped open research comes to mind as one way to bring some healthy competition to this potentially corrupt system. Mathematicians are starting to do this; I think computational researchers could be next.


> Grad students / postdocs / human lab rats aren't scum, the incentives just aren't in place to promote good behavior

The question is whether additional incentives would promote good behavior or just lead to more measurement dysfunction. Some people think that putting the "right" incentives in place is all that's needed, but actual research suggests otherwise.

https://hbr.org/1993/09/why-incentive-plans-cannot-work


Without having read through that very long text: claiming that incentives don't influence human behavior is a wildly exotic claim.

There is near-infinite evidence to the contrary. That said, constructing a system with "the right incentives" can of course be devilishly hard, or even impossible.


The claim is that incentives do change behavior, but only temporarily; they don't change the culture in a positive way or genuinely motivate people, and they end up feeling like manipulation. According to the article, the entire incentive system would need to be dismantled: simply adding more incentives wouldn't necessarily produce higher quality, at least not in the long run. So the primary issue is the process of incentivizing amazing new research in exchange for funding, and adding incentives for pointing out problems would just be a band-aid.


This sounds like a good critique of naive incentive schemes.

I don't think there is any doubt that humans follow incentives.

But working out what the core incentive problems are, and actually changing them, might be both (1) intellectually difficult and (2) a challenge to some sacred beliefs and strong power structures, making it practically impossible.


The HBR article's discussion of incentives is not quite what I was thinking of when I wrote my comment. Specifically, the article you cite refers to the well-known finding that introducing extrinsic rewards via positive reinforcement is counterproductive in the long run. I've often noticed this form of "incentive" / reward being offered in the gamification of open science, for example the Open Science Badges [0], which in my opinion are a waste of time, effort, and money that do little to address systemic problems with scientific publishing.

With regard to the issue of grad students being unwilling to come forward and report mistakes, incentives wouldn't be added; rather, the positive punishment [1] would be removed, which would then leave room for intrinsically motivated [2] actions to be their own reward.

[0] https://cos.io/our-services/open-science-badges/

[1] https://en.wikipedia.org/wiki/Punishment_(psychology)

[2] https://msu.edu/~dwong/StudentWorkArchive/CEP900F01-RIP/Webb...




