
Psychology (and social science in general) has a serious problem.

Standards for publication are far too low, incentives for replication almost nonexistent, negative results rarely reported.

How many of the published results in social science are trustworthy?

https://www.xkcd.com/882/
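
That strip is about multiple comparisons: run enough tests at p < 0.05 and something will look "significant" by chance alone. A quick back-of-the-envelope sketch, assuming 20 independent tests as in the comic:

    alpha = 0.05        # conventional significance threshold
    n_tests = 20        # independent hypotheses tested (as in the comic)
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{p_any_false_positive:.0%}")  # ~64% chance of at least one spurious "finding"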



>Psychology (and social science in general) has a serious problem.

Weren't the same (and worse) problems found in hard sciences?

For biology, e.g.: http://journals.plos.org/plosmedicine/article?id=10.1371/jou...


Check out http://retractionwatch.com for a convenient compendium of scandals with published papers.

I think that in the technical community we have a habit of treating scientific publications as absolute truth, ignoring the many susceptibilities in the publication process. I suspect people do that because it's the most reliable way they know to converge on the truth, which is fine as far as it goes.

I've found that these same people are quick to nitpick studies that don't reach conclusions they like and quick to weaponize studies that do, branding people "science deniers" for holding a non-compliant opinion (even if it's not necessarily a minority opinion).

Political, personal, and commercial agendas all seriously influence our output, even the output that gets published in peer-reviewed journals. Let's all agree that as humans, all of our work, developments, and opinions are subject to bias and error. Considering this, we shouldn't be too hostile to anyone who may have a different perspective.


> Considering this, we shouldn't be too hostile to anyone who may have a different

I can conclude some things with greater confidence than others. When an expected benefit can be established with greater confidence, and with sufficient magnitude relative to its expected costs, a decision can - and should - be made.


Sure, I agree that decisions should be made and that the decision process can rightfully incorporate scientific data and consensus. What I'm saying is that if someone refuses to accept our position despite what we consider an abundance of authoritative data, we should sympathize and be kind despite our disagreement, instead of labeling them as ignorant science-deniers. Assuming the person holding the opposing position is well-informed, we should accept that they simply don't recognize the same publications as authoritative, and that there is legitimate room for doubt not only of specific papers but of "scientific consensus" as a whole, especially when that "consensus" is weaponized for political use.

Academia draws people with specific backgrounds and biases. Groupthink is a real and substantial risk, not only because people quite frequently simply copy each other's output, but also because large-scale ostracization is a real risk if one publishes something that goes against the grain. Organizations and institutions pull funding if a finding is too controversial. Studies are often backed by large donors who, whether the pressure is obvious or not, are trying to get a specific result. Graduate students are under a great deal of personal pressure to perform and justify their loans. There are many non-scientific social factors that affect scientific rigor, even in peer-reviewed journals.

Like I said, the convention is that studies that support the speaker's preferred social or political views are usually considered credible, whereas studies that don't are nitpicked and labeled "questionable", for any of a myriad of reasons: the author(s) come from an institution the speaker dislikes, the sample wasn't representative, and so on.

The only common thread is that people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone. So I'm suggesting that instead of mistreating someone because we think the "science" bears out our point of view, we recognize that "science" itself is a fallible process susceptible to all kinds of externalities, and that it's often reasonable to mistrust a purported "consensus". Therefore, we should politely accept the difference in opinion instead of getting into a zero-sum rhetorical exercise of "Study X proves my POV" / "Study X was done by clowns! Look at Study Y, which proves my POV", which ends up with both sides detesting each other all the more.


>people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone.

Speak for yourself. There are plenty of us who are willing to consider our positions falsifiable and actively seek objective answers in good faith.

Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.


The investment required to truly consider the evidence is so high that none of us can manage that investment for more than a few questions.

We all have to rely on expert opinion in most cases. Even leading experts on one question have to rely on other experts for other questions.


>Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.

And yet, if history has taught us anything, it's exactly that.


No field is immune from this, but social sciences are just much harder to do experiments in.

Primarily because, as opposed to atoms and antibodies, your test subjects have intelligent minds of their own.


Do we have to rehash this conversation every time a psychology article is posted?


Another article about a study failing replication?

I too wish we didn't have to have this conversation so often, but I think the problem is that poor studies are published, not that we "rehash this conversation" in response.


What do you think can be done here? These people obviously know that their studies can't be replicated; they are not ignorant, so they are obviously malicious.


> These people obviously know that their studies can't be replicated

I don't think that's fair to say at all. I can come up with several ways a study can non-maliciously arrive at a false conclusion:

1. The researchers may accidentally leak information to the participants regarding the hypothesis being studied. When using human subjects, it's very difficult to avoid biasing them toward results they may think you're looking for.

2. Researchers may not sufficiently blind themselves during the experiment, causing them to have undue influence on the outcome, even if they're not consciously trying to exert it.

3. Some unknown confounding effect may be at play during the experiment that wasn't properly accounted and controlled for.

Those are just three I could think of. When dealing with behavioral sciences, I can't imagine how difficult it must be to design a test protocol that eliminates all the messiness of the meatbags being studied.


I helped run a clinical trial for an antidepressant one time. It was a double-blind randomised within-subjects-replicated crossover design. All that control was complicated, time consuming and very expensive. Only the hospital pharmacy knew whether participants were in treatment or placebo for a given session, and we never met the pharmacist, just received an orange vial. But we're pretty sure information still leaked, because drugs have side-effects. There's nothing to be done about this. Use an active placebo, you might say, but now you've added "get ethics approval to administer a nauseating placebo" to your 9-month-long ethics todo list. And then your placebo is worse than your active and the whole study is screwed because everyone spewed up in the MRI. Meatbags are tricky indeed.


4. Bad luck

A proportion of studies will produce false positives just by chance even if they're conducted perfectly.
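
A minimal simulation sketch of that point (the sample size and number of studies below are arbitrary illustrations, not from any real data): at the conventional threshold of p < 0.05, roughly 5% of perfectly conducted studies of a nonexistent effect will still come back "significant".

    # Simulate many "perfect" studies of an effect that does not exist.
    # Both groups are drawn from the same distribution, so any significant
    # result is a false positive by construction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_studies = 10_000     # simulated studies (arbitrary)
    n_per_group = 30       # participants per group (arbitrary)

    false_positives = 0
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            false_positives += 1

    print(f"false positive rate: {false_positives / n_studies:.1%}")  # ~5%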


How exactly do you jump to the first conclusion:

"These people obviously know that their studies can't be replicated"

and what's so "obvious" about it?

Or, for that matter, to the second conclusion: "they are not ignorant, so they are obviously malicious."

Perhaps they fell prey to the same thinking that everything is "obvious" instead, taking for granted a lot of things that they should have checked?


I don't mean to sound rash, but have you ever been trained in an experimental science?

If I had a dime for every time a physics undergraduate introduced a subtle, unintentional flaw into an experiment that, were it not a mistake, would overturn a century of science that we know Just Works, I'd be rich.


Just treat everything you read in social science with a grain of salt. The studies will never be able to alleviate problems of reproducibility with the sample sizes we have.
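
To make the sample-size point concrete, here is a rough power simulation (the effect size and group sizes are illustrative assumptions, not from any particular study): with a modest true effect and small groups, most honest studies miss it, so an original "hit" followed by a failed replication is the expected outcome.

    # Rough sketch: how often does a study detect a modest true effect
    # (Cohen's d = 0.3, assumed) at p < 0.05, for small vs. large samples?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect = 0.3      # standardized mean difference (assumed)
    n_studies = 10_000

    for n_per_group in (20, 200):
        hits = 0
        for _ in range(n_studies):
            treated = rng.normal(true_effect, 1.0, n_per_group)
            control = rng.normal(0.0, 1.0, n_per_group)
            _, p = stats.ttest_ind(treated, control)
            if p < 0.05:
                hits += 1
        print(f"n = {n_per_group:>3} per group: effect detected in "
              f"{hits / n_studies:.0%} of studies")
    # Roughly ~15% detection at n = 20 vs. ~85% at n = 200.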



