Doesn't science require replication? He wrote books based on un-replicated studies.
Further, the confidence with which he promoted his now-debunked ideas makes him a charlatan. This person was a bad scientist. If we esteem people who don't check their data and who influence millions of people with falsehoods, we are going to create a society with low trust.
Just look at this thread: the man lost respect among the people in the know. There are a few people clinging to "well, just because it's not true doesn't mean I didn't find it interesting." I'm not sure what we get out of promoting anti-science scientists.
I’m in the know, and the replication crisis actually boosted my confidence in him: it wiped out half the field while merely discrediting a few of the chapters and studies in Thinking, Fast and Slow, and most of what was discredited he had cited from other researchers.
I did see a lot of charlatans in this thread fail to appreciate the broader context of the replication crisis, and how relatively unscathed Kahneman was by it, because he was being careful when his peers were not, long before people started judging him with the wisdom of perfect hindsight. Of course, if they had written such a book, they would have expressed their ideas only with timidity and never made a mistake.
I read his book alongside a guide to what in it could be ignored. I knew every damning word people had said about the man before I read a word he wrote, and I came away impressed.
>Doesn't science require replication? He wrote books based on un-replicated studies.
People even publish studies built on un-replicated research! There might be a lot to be said about his research, but I disagree that publishing it is the worst thing you can do. Maybe there wouldn't have been any replication attempts on his studies at all if it hadn't been for his books.
Can you be specific about which of his ideas aren't scientific?
It's true that science requires replication, but he deals in models, and perhaps uses bad studies to support them.
It's like saying he should replicate the theory of evolution.
> "Table 1 shows the number of results that were available and the R-Index for chapters that mentioned empirical results. The chapters vary dramatically in terms of the number of studies that are presented (Table 1). The number of results ranges from 2 for chapters 14 and 16 to 55 for Chapter 5. For small sets of studies, the R-Index may not be very reliable, but it is all we have unless we do a careful analysis of each effect and replication studies.
> Chapter 4 is the priming chapter that we carefully analyzed (Schimmack, Heene, & Kesavan, 2017). Table 1 shows that Chapter 4 is the worst chapter with an R-Index of 19. An R-Index below 50 implies that there is a less than 50% chance that a result will replicate. Tversky and Kahneman (1971) themselves warned against studies that provide so little evidence for a hypothesis. A 50% probability of answering multiple choice questions correctly is also used to fail students. So, we decided to give chapters with an R-Index below 50 a failing grade. Other chapters with failing grades are Chapters 3, 6, 7, 11, 14, 16. Chapter 24 has the highest score (80, which is an A- in the Canadian grading scheme), but there are only 8 results."
Which is to say, most of the book probably replicates, particularly the parts based on Kahneman’s own work, and for the parts that don’t, you can just skip the chapters or take them with a grain of salt.
Kahneman always struck me as the one-eyed king of the replication crisis. Yes, he fucked up, but he fucked up notably less than his contemporaries, and most of his work is still readable.
Without a copy of the book, I don't remember which parts were based on Kahneman’s own work, and I don't see that we can/should just skip the other chapters.
R-index guys said [0]: "Table 1 shows the number of results that were available and the R-Index for chapters that mentioned empirical results."
Chapters where estimated R-index < 50: Ch 3,4,6,7,11,14,16
Chapters where estimated R-index > 50: Ch 5,8,9,12,17,24
Chapters that don't cite empirical results (by Kahneman, or who?): 1,2,10,13,15,18-23, all of 25-38
As to the chapters that had empirical results and an estimated R-Index > 50: scores of 55, 57, 60, and 62 are really scraping by; saying that those "probably replicate" is setting the bar really low, even quoting Tversky and Kahneman (1971) back at themselves. (The R-Index guys say, "Even some of the studies with a high R-Index seem questionable with the hindsight of 2020.")
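For readers unfamiliar with the metric: as I understand Schimmack's definition, the R-Index is mean observed power minus the "inflation rate" (the success rate minus mean observed power), so it punishes literatures that report more successes than their power warrants. A minimal sketch of that arithmetic and the blog's pass/fail cut at 50, assuming percentage units throughout:

```python
def r_index(mean_observed_power: float, success_rate: float) -> float:
    """R-Index = mean observed power - inflation,
    where inflation = success rate - mean observed power.
    Equivalent to 2 * mean observed power - success rate."""
    inflation = success_rate - mean_observed_power
    return mean_observed_power - inflation


def grade(r: float) -> str:
    """Failing grade below 50, per the rule quoted from the blog."""
    return "fail" if r < 50 else "pass"


# The chapter scores quoted above: Chapter 4's R-Index of 19 fails,
# Chapter 24's 80 passes.
print(grade(19), grade(80))  # fail pass
```

So a chapter where every reported study "worked" (100% success rate) but the studies only had 60% average power gets an R-Index of 20, a failing grade, which is roughly the priming chapter's situation.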
As to whether he was the one-eyed king of the replication crisis: he certainly started speaking out in 2012 [3] after the social priming scandal broke, but did insiders have suspicions about non-replicability before that, and should people have pushed back more, earlier? The fallout from the Francesca Gino and Ariely scandals continues.
That's a fair point. Frankly, I don't know what to say. Should we only promote studies that have been replicated? My first thought is that the answer should be "yes". At the same time, that would mean we would never talk about certain studies, because I don't think we can ever reach 100% replication.