> the studies themselves have already been peer reviewed and as such don't need to be questioned individually
Just to add a bit to your comment: Peer review is a community's way of saying a paper stands up to their best understanding of methods at the time. The author could have accidentally left something out, misrepresented their approach or fabricated things. Peer review would not catch that, but replication or failure to replicate would. They are two different things.
For example, I remember one experiment that could never be replicated by anyone except the author and his students, so things got a bit heated. Ultimately, they found it was a detail of the experimental setup, left out of the paper because the authors didn't think it was important, that allowed the experiment to work. So it passed peer review and failed replication, but ultimately, because of the failed replications and the academic process, further information came to light and knowledge was created.
The academic process can be slow and frustrating. But, when applied properly, it is our best approach to expanding knowledge. That isn't to say it can't be improved, especially regarding failures to replicate and null-result papers. Academics know this and are working on methods to address it, methods often called out in these very discussions.
Some of the most innovative people I've encountered seeking ways to improve this process have been NSF and NIH officials. So it's promising, but slow.
I think you're putting a little too much weight on peer review here. The way it is presented to non-academics often makes it sound more like a trial or inquest, where a large group of people carefully weigh the evidence.
In practice, peer review means that 1-3 people each spent 1-3 hours thinking about the paper and didn't find anything horribly wrong with it. It is more like a code review--it's good if it catches something, but not terribly surprising if some bugs slip through.
You're definitely right about the importance of continuing to discuss and replicate work after it has been published.
Sure. Peer reviews in pure math are often exceedingly careful, or so I am told (not in the field).
Still, I think it's a little much to say that it's "a community's way of saying a paper stands up." At best, it's the opinion of a handful of people from that community.
I have also moved to this view of peer review (my area is quantitative biology). Given the complexity of the papers, the ambiguity of the environment in which experiments are done, the inability of most scientists to write a clear description of their work, and the tendency to pad the significance of a paper's findings, peer reviewers at best act to find "invalidating errors" that should prevent a paper from being published because it is "obviously wrong".
That's a pretty good description of peer review. It varies, but generally the people chosen know the topic and are good representatives of the community, even if only 1-3 people.
For good venues (journals and even conferences), I think the biggest limitation is not the people, but the information contained in the paper. You can't answer all questions and show every last detail.
People are frustrated with academics, but academics are frustrated with how journalists and readers interpret their results.