A Meta-Scientific Perspective on Thinking: Fast and Slow (2020) (replicationindex.com)
130 points by signa11 on Oct 21, 2021 | 39 comments



From the conclusion of the article -

>> In conclusion, Daniel Kahneman is a distinguished psychologist who has made valuable contributions to the study of human decision making. His work with Amos Tversky was recognized with a Nobel Memorial Prize in Economics (APA). It is surely interesting to read what he has to say about psychological topics that range from cognition to well-being. However, his thoughts are based on a scientific literature with shaky foundations. Like everybody else in 2011, Kahneman trusted individual studies to be robust and replicable because they presented a statistically significant result. In hindsight it is clear that this is not the case. Narrative literature reviews of individual studies reflect scientists’ intuitions (Fast Thinking, System 1) as much or more than empirical findings. Readers of “Thinking: Fast and Slow” should read the book as a subjective account by an eminent psychologist, rather than an objective summary of scientific evidence. Moreover, ten years have passed and if Kahneman wrote a second edition, it would be very different from the first one. Chapters 3 and 4 would probably just be scrubbed from the book. But that is science. It does make progress, even if progress is often painfully slow in the softer sciences.


Thanks for the copy/paste, it summarises exactly what I was hoping to get from the article.

> Readers of “Thinking: Fast and Slow” should read the book as a subjective account by an eminent psychologist, rather than an objective summary of scientific evidence.

Like with most pop science, they may give you useful models that can help you analyse and reflect on your personal experience, but shouldn't be seen as ground truth. Unless you read the science (and the science is good!) you're probably not enough of an expert to differentiate distilled truth from diluted truth.


> Narrative literature reviews of individual studies reflect scientists’ intuitions (Fast Thinking, System 1) as much or more than empirical findings.

This seems to be the crux of the issue. Could someone explain it better? Does Daniel K promote Type 2 more than he should?


The asymmetrical warfare of bullshit "science" is demoralizing. It seems to take so much effort to get the bad stuff taken down, compared with the apparent ease with which made-up stuff gets published. This one is pretty crazy: a paper alleging paranormal time-reversed causality, still not retracted 10 years later: https://replicationindex.com/2018/01/05/bem-retraction/


This article makes a great case for why parapsychology is worthwhile. A researcher follows common practices and comes up with an absurd result. In figuring out how that happened, researchers discover problems with the entire field. That's a use for parapsychology I'd never imagined.


Scott Alexander makes that case in one of my favourite posts - "The control group is out of control". The control group for science being parapsychology.

https://slatestarcodex.com/2014/04/28/the-control-group-is-o...


"It could be that the purpose of your study is only to serve a a warning to others."


> It seems to take so much effort to get the bad stuff taken down, compared with the apparent ease of made-up stuff to be published.

More than that, flawed studies with false results get cited more often, which is a main incentive for scientific prominence.


I’ve got a neuroscience degree and did a fair amount of psychology as well. As I read the book I knew that type 1 and 2 were oversimplifications and that some of the studies he cites were weird or suspect, but I also enjoyed the book because the overall message that humans sometimes struggle to think about things logically is largely true about many topics. So I guess if you’re looking to read it, read this article and some other critiques first so you’re not carried away by incorrect info, then enjoy it for what it is. My two cents anyway. Probably not worth much but there you go.


>As I read the book I knew that type 1 and 2 were oversimplifications

They announce as much at the beginning of the book: there is overlap between the different systems, and the split is an abstraction exercise grouping together various capabilities that do not necessarily fit neatly together. In spite of that, they think it makes sense, broadly speaking, to consider two different categories.


I read Thinking: Fast and Slow a few years ago when it was all the hype and was not convinced. I remember many experiments where it was clear to me that the subjects just answered a different question than the one the researcher asked them. And I do not mean that they "anchored" their answers on something else or they replaced the question with a different question because they had no answer to the original question - they just had a completely different understanding of the question itself, and their answers to this question were correct, even in "System 2."

For example, I always had the suspicion (which may be wrong) that in questions like "which is a more likely outcome of 6 coin tosses, HHHHHH or HTHHTT", people tend to understand this question as: "which is a more likely outcome of 6 coin tosses, a sequence of 6 heads, or another sequence without any obvious regularity in it", in which case the latter is clearly the correct answer.
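
For what it's worth, the two readings give very different numbers. Here's a minimal Python sketch of the distinction (the trial count and my definition of the "class of outcomes" reading are illustrative assumptions, not anything from the book):

    import random

    TRIALS = 1_000_000
    counts = {"HHHHHH": 0, "HTHHTT": 0, "mixed": 0}

    for _ in range(TRIALS):
        seq = "".join(random.choice("HT") for _ in range(6))
        if seq in ("HHHHHH", "HTHHTT"):
            counts[seq] += 1
        if "H" in seq and "T" in seq:  # the "class" reading: some mix without obvious regularity
            counts["mixed"] += 1

    for name, n in counts.items():
        print(name, n / TRIALS)
    # HHHHHH ~0.0156 (1/64), HTHHTT ~0.0156 (1/64), mixed ~0.969 (62/64)

Read as exact sequences, the two are equally likely; read as "all heads" versus "some mix", the mix wins by a factor of about 62.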

Now, in my experience, people with slight autistic tendencies often have problems with sorting their impressions into broad, general categories, but instead always analyze (even remember) the exact impression (or "special case") they are confronted with. For people like this, the question "HHHHHH or HTHHTT" is a completely different question than for the majority of the population, for which HTHHTT is a class of outcomes, not a specific one.

So, the problem to me does not seem to be a problem with human intuition for probability (or with "System 1", in Kahneman's words), but just one of semantics. If you go further and assume that scientists have a higher percentage of people with autistic tendencies than the general population, then the problem simply becomes one of communication between the researcher and the subject.


That’s an interesting conjecture, and I wonder if you have any evidence for it. In teaching probability and asking students this exact question about the results of coin tosses, most do say the second sequence is more likely (early in the class), and they are in fact answering the question that we intend them to answer. But, after further discussion, most accept that the sequences are equally likely. It’s my observation of that specific transition in understanding that convinces me that they are, in fact, considering the particular sequences that we want them to consider, and not, as you conjecture, substituting a general notion of some class of sequences. Further, there are always those several students who just never buy it, no matter how much evidence and calculation I show them. They can’t overcome their faulty intuitions.


Imo a very good "alternative" perspective to Thinking: Fast and Slow on how (evolved, human) reason might work is the book The Enigma of Reason by Hugo Mercier and Dan Sperber. It is very well researched and very well argued, although the basic premise was quite unintuitive (to me at least) and only started to "click" towards the end of the book. It was suggested to me by an American forensics professor and was probably one of the most important books I read in 2019. I don't want to hype it too much, but I sort of also do. I highly recommend it even (or especially) if you won't agree with everything they have to say. It's quite long, but totally worth the time and effort.


>For people like this, the question "HHHHHH or HTHHTT" is a completely different question than for the majority of the population, for which HTHHTT is a class of outcomes, not a specific one.

Could you clarify which group experiences "HTHHTT [as] a class of outcomes"? I'm pretty confident in my reading that this applies to the general population, but wanted to check.


I find it refreshing to see that some pop psychology books now get scored on how trustworthy they are.

But since I'm battered with distrust thanks to the events in 2011 and later, I can't help but wonder if the replication index should be trusted.


Ioannidis is Wrong Most of the Time https://replicationindex.com/2020/12/24/ioannidis-is-wrong/

> The main feature that distinguishes science and fiction is not that science is always right. Rather, science is superior because proper use of the scientific method allows for science to correct itself, when better data become available. In 2005, Ioannidis had no data and no statistical method to prove his claim. Fifteen years later, we have good data and a scientific method to test his claim. It is time for science to correct itself and to stop making unfounded claims that science is more often wrong than right.


Science is always wrong; it's just about knowing that. That's what makes science great: it knows it is constantly being corrected, sometimes in big ways and sometimes in little ways, as we slowly get closer to "the truth." Reaching for that truth is the impossible objective. If we ever think that we have found the truth, that is when we have stopped doing science, because we have stopped questioning. Put another way, science is about finding the next less wrong view of the world. It will still be wrong, but it will be less wrong.


may you please elaborate on ‘…events of 2011 and later…’ ?


He's talking about the replication crisis.

https://en.m.wikipedia.org/wiki/Replication_crisis


ah i see, thank you !


Oh sorry, that's how it was phrased in the article. And I distinctly remember having a "your life is a lie, science is the new religion" moment, or year rather.

Except for physics and tech of course. Those prove to be real! :D


I think we need to separate the usefulness of pop-science literature from its accuracy. In the great words of George Box "All models are wrong but some are useful".

The science is still emerging, so of course it's impossible to write an accurate book of this breadth, but does that make writing a book wrong? I don't think so. As the conclusion says, it just needs to be "read as a subjective account".


>but does that make writing a book wrong?

No, it means you should be more careful about what you include in your book. The problem isn't that authors write books on emerging science; the problem is that they overly trust small studies and present them as stronger evidence than they are.

As the author of the article said, if Kahneman were writing the book today it wouldn't be nonexistent, just different, with some chapters missing.


There was a discussion on ego depletion (also in the book) multiple times here in the past. I'll just reference my favourite: "Break out the marshmallows, friends: Ego depletion is due to change sign!" https://news.ycombinator.com/item?id=21589825


I wasn't aware "ego depletion" was studied in such a direct and simplistic/naive manner.

People with ADHD can attest that "executive functioning" can be depleted, or even borrowed from and overexhausted, like a resource. E.g. an extended period of hyperfixation may lead to days or even weeks of near-complete inability to exert executive control, or "lethargy"/"depression". There certainly is some kind of budget/debt, and some tasks are more taxing than others.

I assumed it's some deficit in homeostatic balancing stemming from a partially deficient brain preventing "burning out".

'Ego depletion' seemed a fitting term, but now I cringe at having used it in the presence of people who know psychology/psychiatry, if it's actually a more narrowly defined concept.

Maybe the situation in ADHD is different, as reward/expectation of reward/"dopamine" is very much different from "ego" in neurotypical folks. Guess with NTs it could be more related to "exhaustion", prioritizing activities over rest/sleep.

Is there a better hypothesis/theory related?


huh. Yesterday I was thinking of re-reading Kahneman's bestseller; now I am wondering if it's worth it. Thanks HN!


The value of pop-science books is not in their scientific accuracy but their carry-over to everyday life.

The overall points of Thinking Fast and Slow still stand whether backed by science or not.


It's also a great insight into how serious psychological research was done in the late 90s and early 2000s and how that field built many of its models and theories.


Definitely worth it! If your first read-through tended toward believing every statement as factual, make your second read a test of how well the evidence has held up.


You are reading HN. Do you think it is intellectually stimulating, then a first step to further research, or do you think it is «worth it» for scientific accuracy?


This is the issue I (and no doubt many others) have when reading pop science books from fields I'm not an expert in: the difficulty of discerning whether their claims are based on solid foundations.

I read "Thinking: Fast and Slow" something like 7 years ago now and while I wouldn't be able to give you a direct account of the ideas I've taken away from it at this point, I wonder how much partially or completely incorrect ideas on thinking I internalized from it.

Obviously I can now adjust my mental model, but pop sci still feels a bit like a minefield for the quality of thought I may be taking away from it. I just started "The Selfish Gene" and this post has me wary of its quality even though, just like Fast and Slow, it's fairly critically acclaimed.

I think, if anything, in the last few years I've found myself being highly skeptical of most claims and ideas (which is also a balancing act, but largely good). That wasn't necessarily the case when I was reading Fast and Slow; there I committed a common sin and took the author's standing as a well-regarded practitioner in the field at face value.

I guess the point of my comment is this: without taking (or really having) the time to follow every idea to its foundations, and given the incredible number of ideas being thrown around openly and with little barrier to entry, you're forced to rely on some kind of imperfect filter, shaped by your own biases, just to narrow things down to a smaller set of ideas that at least seem plausible. The issue, as we see on a day-to-day basis, is that most people's filters (including my own, given the critique of this book) suck, and we wind up with a bunch of plausible but incorrect ideas being treated as fact and propagated throughout the world. It would be nice if finding some kind of objective truth didn't require turning the search for it into a life mission.

There's also something to be said for the fact that most people operate on these faulty models throughout life, to great success by conventional standards. So it's not like knowing some set of objective truths, if they're there in the first place (I'm not entirely convinced everything has an objective truth), is a prerequisite to a happy, successful or fulfilling life -- it may even be the opposite. I've just always found that unpalatable, and I hope we find ways to think better.


"The problem with selecting significant results from a broader literature is that significance alone, p < .05, does not provide sufficient information about true versus false discoveries. It also does not tell us how replicable a result is."

Yet, in other fields the standard for discovery is p = 0.0000003 (the five-sigma threshold used in particle physics).
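
A quick way to see why significance alone can mislead is to simulate a literature where most tested effects are null and check how many p < .05 results are false. A minimal sketch, assuming a made-up base rate, effect size, and sample size (none of these numbers come from the article):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    BASE_RATE = 0.1      # assumption: only 10% of tested hypotheses are true
    N, EFFECT = 30, 0.5  # assumption: per-group n and true effect size (Cohen's d)

    true_pos = false_pos = 0
    for _ in range(20_000):
        is_real = rng.random() < BASE_RATE
        a = rng.normal(EFFECT if is_real else 0.0, 1.0, N)
        b = rng.normal(0.0, 1.0, N)
        if stats.ttest_ind(a, b).pvalue < 0.05:  # one two-sample t-test per "study"
            true_pos += is_real
            false_pos += not is_real

    print(false_pos / (false_pos + true_pos))  # roughly half the "discoveries" are false
    print(stats.norm.sf(5))                    # one-sided five-sigma tail: ~2.9e-7

Under these (hypothetical) conditions about half the p < .05 results are false positives; a five-sigma cutoff would admit essentially none of them, though at these sample sizes it would also miss most true effects.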


It will be interesting when they realize that it's not a "crisis", but that it is what it is and that psychological phenomena are just too variable (in time and space) to be reproduced.


The whole point of statistics is dealing with those variable phenomena in psychology. Variability is the starting assumption. IMO, we need to find new methods of discovering non-variable and reproducible phenomena. I like the theory that goals are relatively constant while means (behavior) are variable. The problem is to discover the goals.


I think that the goals you describe are already discovered, they're just used in a different field of "study": marketing.


Statistics has plenty of tools for describing variable phenomena.


Yes, for periodic or well-behaved phenomena. But for most cases you can't summarize human behaviour into one or two variables.


> But for most cases you can't summarize human behaviour into one or two variables.

And statistics is a useful tool to separate those cases from the few cases where you actually can.


This seems like a strawman--I don't think any part of behavioural science/psychology is claiming to be able to do this.



