The lay public thinks "peer reviewed" means that others have tried it and validated the results. What it really tends to mean is that a peer looked at the procedures and results and that it passes the "sniff test" and generally doesn't have any glaring errors.
The more subtle problem is that in some circles, it isn't even that. Since fewer and fewer people want to be the person who damaged someone else's work and/or career, it's a blanket pass.
We're drifting away from scientific study and critical thinking to "reasonable" approaches and not upsetting doctrine and/or your superiors. That looks less and less like science and more like religion.
Here's a leading scientist's description of peer review:
Peer review works superbly to separate valid science from nonsense, or, in [Thomas] Kuhnian terms, to ensure that the current paradigm has been respected. It works less well as a means of choosing between competing valid ideas, in part because the peer doing the reviewing is often a competitor for the same resources ... sought by the authors. It works very poorly in catching cheating or fraud, because all scientists are socialized to believe that even their toughest competitor is rigorously honest in the reporting of scientific results ... It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.
From: Reference Manual on Scientific Evidence [for U.S. federal judges], Third Edition; How Science Works section by David Goodstein, Caltech Physics Professor and former Provost; published by National Academies Press (2011)
Peer review is usually quite a good way to identify valid science. Of course, a referee will occasionally fail to appreciate a truly visionary or revolutionary idea, but by and large, peer review works pretty well so long as scientific validity is the only issue at stake. However, it is not at all suited to arbitrate an intense competition for research funds or for editorial space in prestigious journals. There are many reasons for this, not the least being the fact that the referees have an obvious conflict of interest, since they are themselves competitors for the same resources. This point seems to be another one of those relativistic anomalies, obvious to any outside observer, but invisible to those of us who are falling into the black hole. It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors. Peer review is thus one among many examples of practices that were well suited to the time of exponential expansion, but will become increasingly dysfunctional in the difficult future we face.
This has not been even close to true in my experience. Reviews are usually unsigned, so there's very little social pressure to "let things slide". On the other hand, a non-trivial number of reviewers seem to think the review process is an opportunity to "rough up the competition" instead of an opportunity to offer constructive feedback.
That doesn't rule out that it's a blanket pass for some papers. How about if we phrase it this way:
In some cases, peer review works as intended, with constructive feedback. In other cases, it is abused to "rough up the competition" and slow the progress of science. In yet other cases, it is a blanket pass because the right actors all align behind the paper. There is no way to tell which of these cases applies to any particular paper.
Or put another way, all of these experiences can be true and correct at the same time.
I'm willing to believe that the skids could be greased for some papers, based on the trendiness of the topic or the authors' reputations. I'm also willing to believe that this is hard for people without the relevant expertise to detect.
However, I think describing peer review, generally, as "a blanket pass" is going much too far. If anything, I wish it were harder on actual methodological errors while being much, much more permissive of (openly-disclosed) ambiguities in the data or gaps in the theories. Right now, people tend to 'write around' issues in their data, lest a reviewer argue that this invalidates the entire experiment. Looking at papers from the 1980s and 1990s, it's amazing to me how much more frank the authors were.
Sorry! I'm not trying to, but I'm also not sure how else to interpret them. As I said, I'm willing to believe that papers occasionally slip through the cracks (the arsenic life thing from 2011 could have been caught in review, for example), but the idea that a decent number of papers just slide through the peer review process is totally unlike the experience I've had with my own and my friends' and colleagues' papers.
> What it really tends to mean is that a peer looked at the procedures and results and that it passes the "sniff test" and generally doesn't have any glaring errors.
It sometimes means that.
But there are studies that fail the smell test like a refuse heap and still somehow pass a "rigorous" peer-review process.
Remember when George Ricaurte, who by the way was already pretty obviously a charlatan ONDCP whore at the time, injected baboons with what he said was a normal dose of MDMA (2mg /kg) and found severe neurotoxicity? [0]
Yeah, well, two of the five baboons died. I remember literally the day that study was published - in effing Science. It was all over the news, including the front page of the NYT.
But plenty of us in the drug policy reform movement (and, for that matter, those of us who had used MDMA a few times) knew immediately (and said so) that this study was obviously flawed because, well, people don't die from a normal dose of MDMA. Sure enough, it later turned out that Ricaurte had injected those poor baboons with a 2mg/kg dose of methamphetamine, not MDMA. He said that there had been a "labeling error," which his supplier denied.
There are examples like this every day.
The peer review process is only as good as the political will toward righteous honesty - the state has muscled out-and-out deceit through this system often enough to make any thinking person doubt its capacity even as an effective "sniff test."
> The lay public thinks "peer reviewed" means that others have tried it and validated the results. What it really tends to mean is that a peer looked at the procedures and results and that it passes the "sniff test" and generally doesn't have any glaring errors.
> The more subtle problem is that in some circles, it isn't even that. Since fewer and fewer people want to be the person who damaged someone else's work and/or career, it's a blanket pass.
From my experience in the biomedical review process, I would characterize the process as brutal, at least for top venues and federal grants.
> We're drifting away from scientific study and critical thinking to "reasonable" approaches and not upsetting doctrine and/or your superiors. That looks less and less like science and more like religion.
I mostly agree that there is friction with established doctrine/superiors, but hasn't this always been there? It seems hard to find a major scientific discovery that didn't have some established concept (and proponents) to push against.
Conveniently enough, that topic was discussed here a while back:
> They show that the premature deaths of elite scientists affect the dynamics of scientific discovery. Following such deaths, scientists who were not collaborators with the deceased stars become more visible, and they advance novel ideas through increased publications within the field of the deceased star.
I reject the idea of a religion-like science. I would say it has become what it is now because of the economic view society has adopted to manage it, rather than because of irrational thinking.
Apparently, science production doesn't scale well, because scientists, when forced to compete for their livelihoods, find it easier to fool their managers than to produce legit science.
Said one Chinese scientist under Mao or soon after:
The Academy of Sciences is the Academy of Sciences ... It is not the Academy of Production. It is a place where one studies, not a place where one plants cabbages. It is not a potato patch, it is a place where one does science ...
You challenge my point of "not upsetting doctrine and/or your superiors" by saying some scientists find it "easier to fool their managers than to produce legit science."
In my reading, pyrale was agreeing with you, but instead of attributing the problem to some vague social effect, putting the blame specifically on our ways of funding science.
Isn't the approach different in different fields? I remember reading about recent advanced papers in mathematics that were published but then left "in the void" for a while because it took such a long time for peers to actually read, understand, and try to challenge the proofs.
And when the proofs become terabytes of data produced by a program, and the reviewer has to write another program just to verify that the proof is sound, it's going to become intractable.
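One direction that shifts this burden is machine-checked proofs: instead of a human (or an ad hoc program) re-reading the whole argument, the proof is written for a proof assistant, and the reviewer only has to trust the assistant's small verification kernel. As a minimal sketch (this toy theorem is my own illustration, not from any of the papers discussed), here is what a machine-verified statement looks like in Lean 4:

```lean
-- A machine-checked proof: Lean's kernel verifies this claim mechanically,
-- so a reviewer need only trust the kernel, not re-read the argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- appeal to the standard library's commutativity lemma
```

The same kernel that checks this two-line example also checks proofs assembled from gigabytes of machine-generated steps, which is why formalization efforts (e.g. for large case-analysis proofs) aim to reduce "write another program to verify the proof" to "run the one verifier everyone already trusts."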
Maybe we should encourage it financially. Like bug bounties.