If you don't have the time to read the whole paper, jump to page 11 and check out the graph at the bottom of the page. The top confidence interval is from the original study; the rest are confidence intervals from the studies that tried to replicate it.
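For the curious, the kind of interval comparison that graph shows can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: `mean_ci` and `overlaps` are made-up helper names, and the normal approximation (z = 1.96 for ~95%) is an assumption that only holds for reasonably large samples.

```python
import math

def mean_ci(values, z=1.96):
    """Approximate 95% confidence interval for the mean,
    using a normal approximation (reasonable for moderate n)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return (mean - half, mean + half)

def overlaps(ci_a, ci_b):
    """Rough check: do two intervals overlap at all?"""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]
```

If the replication intervals cluster around zero while the original interval sits far from them, that is exactly the picture the graph paints.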
> How does a study with a non-significant result which can't be replicated become "seminal"?
How does anything become "seminal"? People we trust told us so. Whenever something like this happens, you should be asking yourself who you're trusting that you probably shouldn't be.
The same way people write "citation needed" and then put their brains to sleep.
It's not as if they will then go replicate and evaluate those citations (if they were going to, they could just as well evaluate what the other person told them directly, or at least look up citations for or against it themselves).
They just need some comforting assurances -- which they will then repeat freely, with few (or nobody) bothering to examine those "citations" at any step of the chain.
For me, unless the science is old enough to be included in University 101 courses, I don't much care about it. Unless you work in the field (and will actually use and evaluate the results mentioned in it), a peer-reviewed paper is worth almost nothing to the casual HN reader.
By the way, "holding a pen in your mouth makes you feel smiley" is definitely in psych 101 courses, and social 201, and also showed up in cognition 301 if memory serves.
I'm pretty sure the first picture on page 4 shows a misunderstanding of how the pen is supposed to be used. You're supposed to put it in your teeth cross-wise, so your mouth is forced into a kind of smiling position.
I thought I knew the basic premise of "Thinking, Fast and Slow" even though I haven't read it... but I don't understand why this would be a key study for the book. Can anyone provide a quick explanation?
Biting a pen with your teeth or your lips should induce one of two different "Type I Thinking" facial expressions, associated with Kahneman's "fast thinking" as I understand it.
This study appears to discuss the link between forcing a type I physical response, while attempting to prompt a type II response (laughter/humor). As I'm reading it, this study is talking about a previous study in which fuzzing the "fast thinking" system affected the "slow thinking" system. This new paper calls those results into question.
I wonder if it'd make a difference if the pen was horizontal in the participant's mouth, touching both corners of the lips. That's the mental image I walked away with when I read "Thinking, Fast and Slow".
The more I see these, the more I think these psychological conclusions are just placebos. You pick the one that appeals to you most and ignore the rest. They work only because you want them to be true.
I think that in the technical community we have a habit of treating scientific publications as absolute truth, ignoring the many weaknesses in the publication process. I guess people do that because it's the firmest way they know to converge on the truth, which is fine as far as it goes.
I've found that these same people are quick to nitpick studies that don't reach conclusions they like and quick to weaponize studies that do, beating people over the head as "science deniers" for holding a non-compliant opinion (even if it's not necessarily a minority opinion).
Political, personal, and commercial agendas all seriously influence our output, even the output that gets published in peer-reviewed journals. Let's all agree that as humans, all of our work, developments, and opinions are subject to bias and error. Considering this, we shouldn't be too hostile to anyone who may have a different perspective.
> Considering this, we shouldn't be too hostile to anyone who may have a different perspective.

I can conclude some things with greater confidence than others. When an expected benefit can be concluded with greater confidence, and its magnitude sufficiently exceeds the expected costs, a decision can - and should - be made.
Sure, I agree that decisions should be made and that the decision process can rightfully incorporate scientific data and consensus. What I'm saying is that if someone refuses to accept our position despite what we consider an abundance of authoritative data, we should sympathize and be kind despite our disagreement, instead of labeling them as ignorant science-deniers. Assuming the person holding the opposing position is well-informed, we should accept that they simply don't recognize the same publications as authoritative, and that there is legitimate room for doubt not only of specific papers but of "scientific consensus" as a whole, especially when that "consensus" is weaponized for political use.
Academia draws people with specific backgrounds and biases. Groupthink is a real and substantial risk, not only because people quite frequently simply copy each others' output, but also because large-scale ostracization is a real risk if one publishes something that goes against the grain. Organizations and institutions pull funding if a finding is too controversial. Studies are often backed by large donors who, whether the pressure is obvious or not, are trying to get a specific result. Graduate students are under a great deal of personal pressure to perform and justify their loans. There are many non-scientific social factors that affect scientific rigor, even in peer-reviewed journals.
Like I said, the convention is that studies that support the speaker's preferred social or political views are usually considered credible, whereas studies that don't are nitpicked and labeled "questionable", for any of a myriad of reasons: the author(s) come from an institution the speaker dislikes, the sample wasn't representative, and so on.
The only common thread is that people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone. So I'm suggesting that instead of mistreating someone because we think the "science" bears out our point of view, we recognize that "science" itself is a fallible process susceptible to all kinds of externalities, and that it's often reasonable to mistrust a purported "consensus". Therefore, we should politely accept the difference in opinion instead of getting into a zero-sum rhetorical exercise of "Study X proves my POV" / "Study X was done by clowns! Look at Study Y, which proves my POV", which ends up with both sides detesting each other all the more.
> people won't be swayed in their political or social positions by academic papers -- they'll only use them to justify their pre-existing set of beliefs. This is true for virtually everyone.
Speak for yourself. There are plenty of us who are willing to consider our positions falsifiable and actively seek objective answers in good faith.
Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.
>Your point that academia is fallible is well taken, but this doesn't mean "virtually everyone" is so biased/personally invested as to lack critical thinking skills.
And yet, if history has taught us anything, it's exactly that.
Another article about a study failing replication?
I too wish we didn't have to have this conversation so often, but I think the problem is that poor studies are published, not that we "rehash this conversation" in response.
What do you think can be done here? These people obviously know that their studies can't be replicated; they are not ignorant, so they are obviously malicious.
> These people obviously know that their studies can't be replicated
I don't think that's fair to say at all. I can come up with several ways a study can non-maliciously arrive at a false conclusion:
1. The researchers may accidentally leak information to the participants regarding the hypothesis being studied. When using human subjects, it's very difficult to avoid biasing them toward results they may think you're looking for.
2. Researchers may not sufficiently blind themselves during the experiment, causing them to have undue influence on the outcome, even if they're not consciously trying to exert it.
3. Some unknown confounding effect may be at play during the experiment that wasn't properly accounted for and controlled.
Those are just three I could think of. When dealing with behavioral sciences, I can't imagine how difficult it must be to design a test protocol that eliminates all the messiness of the meatbags being studied.
I helped run a clinical trial for an antidepressant one time. It was a double-blind randomised within-subjects-replicated crossover design. All that control was complicated, time consuming and very expensive. Only the hospital pharmacy knew whether participants were in treatment or placebo for a given session, and we never met the pharmacist, just received an orange vial. But we're pretty sure information still leaked, because drugs have side-effects. There's nothing to be done about this. Use an active placebo, you might say, but now you've added "get ethics approval to administer a nauseating placebo" to your 9-month-long ethics todo list. And then your placebo is worse than your active and the whole study is screwed because everyone spewed up in the MRI. Meatbags are tricky indeed.
I don't mean to sound rash, but have you ever been trained in an experimental science?
If I had a dime for every time a physics undergraduate made a subtle and unintentional error during an experiment that, were it not a mistake, would overturn a century of physics we know Just Works, I'd be rich.
Just treat everything you read in social science with a grain of salt. The studies will never be able to alleviate problems of reproducibility with the sample sizes we have.
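The sample-size point has a simple arithmetic core: the width of a normal-approximation confidence interval shrinks only with the square root of n, so small samples give very imprecise estimates. A quick sketch (`ci_halfwidth` is a hypothetical helper, not from any paper):

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Half-width of a normal-approximation ~95% CI for a mean,
    given the standard deviation sd and sample size n."""
    return z * sd / math.sqrt(n)

# Quadrupling the sample size only halves the interval width:
w_small = ci_halfwidth(sd=1.0, n=25)   # n = 25
w_large = ci_halfwidth(sd=1.0, n=100)  # n = 100, interval half as wide
```

With the dozens of participants typical of these studies, intervals are wide enough that a noise blip can look like an effect.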
Both valid points, but this failure-to-replicate does erode some confidence in the general notion of embodied cognition (and I believe deservedly so because these phenomena have always seemed rather incredible). Much of Kahneman's other work has a more solid empirical foundation.
The rise of social 'science' has coincided with the demise of intellectual rigour and common sense. The sad part is that social science could be done right, but few do it right, because most people who don't suck at statistics go into fields that are more worthwhile.