
Studying logical fallacies and behavioral economics biases has been the best way for me to become more rational. I'm constantly calling myself out for confirmation bias, home country bias, and the recency effect in my internal investment thought process.

Learning about logical fallacies and identifying them in conversations is great. Just don't point out the counterparty's fallacies to them, because that's off-putting. Note them internally for a more rational inner dialogue.

Learning other languages and cultures is another way to see how different societies interact with objective truth. Living in other places taught me a lot about how denial works in different cultures.

Thinking rationally is quite hard and I've learned how to abandon it in a lot of situations in favor of human emotions. How someone feels is more important than how they should feel.



> Learning other languages and cultures is another way to see how different societies interact with objective truth. Living in other places taught me a lot about how denial works in different cultures.

This also has a rather frustrating side effect. It's true that not just traveling intensively (not in the sightseeing way), but actually living in several different countries and cultures, broadened my horizons a lot. It definitely had the effect you talk about.

But then what? You cannot tell your discussion partner "you would not think like that if you had traveled/lived outside of your culture", and it's also impossible to send everyone off to travel in order to have the same experience. Much less in the US, where for most of the country you cannot just hop on a train for a few hours to encounter a completely different language and culture. (I grew up in Europe and moved to the US as an adult, but I've also lived in several different European countries before, and traveled to far-away places like Asia.)


The sunk cost fallacy is particularly important to learn about and teach your children about.

I see it everywhere, from my own decision making process to international politics. Just this morning I was thinking about it as I read the news about the US leaving Afghanistan, and last week talking with a friend who is staying at a bad job.


Here's a question for you: what is the difference between the sunk cost fallacy and persistence?

And here's the answer: Persistence is good when it is successful. If the activity is unsuccessful, it's an example of the irrational sunk cost fallacy. (Making decisions without knowledge of future events is quite hard.)

And the important lesson: If you bail at the first sign of adversity, no one can ever accuse you of being irrational. Of course, as the old saying goes, all progress is made due to the irrational.


that's not irrationality, that's decision-making under uncertainty, which is the norm, not the exception. probabilities are dynamic, information is imperfect, and so decision-making must incorporate that uncertainty.

the sunk cost fallacy is simply considering existing loss when deciding on continued investment (in time, money and other resources), when you should only consider future cost for future benefit. it's thinking erroneously that existing loss is not already locked in, that it's salvageable somehow. but no, it's already lost.

in a project with continuously updating probabilities of success, and under imperfect information, the go-or-no-go decision should only be based on the likelihood of future gains exceeding future losses, not future+existing losses.
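
a minimal sketch of that rule in python (all numbers invented; the only point is that sunk cost never enters the decision):

    # sunk-cost-free go/no-go rule; numbers are hypothetical.
    def should_continue(p_success, future_gain, future_cost):
        # continue iff expected future gain exceeds expected future cost
        return p_success * future_gain > future_cost

    sunk = 50_000  # already spent; locked in whether we continue or not
    print(should_continue(p_success=0.4, future_gain=200_000, future_cost=60_000))
    # True: 0.4 * 200_000 = 80_000 > 60_000. note `sunk` is never consulted.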

in this framework, persistence would be having credible evidence (e.g., non-public information), not just belief, of the likelihood of future net gain relative to opportunity cost. it'd be irrational to be persistent simply on belief rather than credible information and probability estimation.


There is a real difference. Take the example of the “money pit” house, where the cost of repairs vastly outweighs the gain you get at the end (either in resale or in livability), vs. a fixer-upper house.

After starting work on repairs, and after a significant investment of time and money, you are not as far along as you thought you would be. You update your calculations of time and cost based on your progress so far and any new problems you have uncovered.

A persistent person looks at the costs of a fixer-upper, sees that it is still likely worth doing, and is willing to put in the additional effort they had not originally planned for to see the project through. But they can also look at the future costs, recognize that the house is likely a money pit and that continued work is unlikely to ever yield a return on their investment, accept that the time and money they have already spent are gone no matter what, and walk away to prevent additional loss.

Someone biased by the sunk cost fallacy sees both projects the same. They look at the money pit and see that it is unlikely to show a return, but they fixate on the time and money they have already spent, which they feel would be "lost" if they walked away, and that pushes them to continue.

To look at it another way, a persistent person would make the same calculation of the likely success of a project regardless of whether they came into it at the first point or the second. They are persistent, so they won’t give up on something because it is more difficult than they originally thought, and they won’t give up on worthwhile work just because it is hard. Someone biased by the sunk cost fallacy will not make the same calculation after they have invested effort, as they will hold on to already-invested effort as a reason in and of itself to continue.
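
To make that invariance concrete, here is a toy sketch (all numbers made up): the unbiased evaluation depends only on future cost and future value, so it gives the same answer no matter how much has already been invested, while the biased one lets invested effort leak into the decision.

    # Toy comparison with hypothetical numbers.
    def unbiased_continue(future_cost, future_value):
        # Depends only on the future; same answer at any entry point.
        return future_value > future_cost

    def biased_continue(future_cost, future_value, already_spent):
        # Erroneously treats sunk spending as still recoverable.
        return future_value + already_spent > future_cost

    # Money pit: 120k of remaining work for 100k of value at the end.
    for spent in (0, 80_000):
        print(unbiased_continue(120_000, 100_000),      # False both times
              biased_continue(120_000, 100_000, spent)) # False, then True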


The difference between sunk-cost fallacy and persistence is that of motivation. If you keep doing something because "you've worked so hard already" then that's sunk-cost fallacy. If you keep doing something because "success is just around the corner" then that's persistence.

You can't go back in time and not work hard on something, so whether or not you should continue is purely a function of whether or not you think you will succeed, not a function of how much effort you've already put into it.


In an attempt to catch myself in the act of committing logical fallacies, I have a flash card app on my phone. One of the sets I have is of logical fallacies. Educating myself has helped make me more aware of them and of when I fall victim to them.

It's not an easy task. But 10 minutes a day can add up and reinforce that information.

A related idea is cognitive distortion. It's basically an irrational thought pattern that perpetuates negative emotions and a distorted view of reality. One example many here can relate to is imposter syndrome. But to feel like an imposter you have to overlook your achievements and assets and cherry-pick negative data points.


“Logical fallacies” are mostly Boolean/Aristotelian and identifying them is completely useless and/or counterproductive in 99% of real world scenarios. Most of your reasoning should be Bayesian, not Boolean, and under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.


> under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.

Can you elaborate on that?

This really piqued my interest. I feel like logic is easy to apply retrospectively (especially so for spotting fallacies), but trying to catch myself in a fallacy in the present feels like excessive second-guessing and overanalyzing, the sort that prevents forward momentum and learning.

Would you by any chance have any recommendations for reading on the topic?


Sure. Fallacies, as usually stated, tell you when something that feels like a logical entailment isn’t actually a logical entailment.

Intuitively, people find “Bob is an idiot, so he’s wrong” a reasonable statement.

Technically, the implication does not hold (stupid people can be correct) and this is an ad hominem fallacy.

However, if we analyze this statement from a Bayesian standpoint (which we should), the rules of entailment are different, and Bob being stupid actually is evidence that he’s wrong. So maybe this is a pretty reasonable thing to say! Certainly reasonable people should weigh a speaker’s intelligence when deciding how much to trust the speaker’s claims, even though this is narrowly “fallacious” in an Aristotelian sense.
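
With made-up numbers, the update is easy to see: treat Bob's assertion as a noisy signal whose accuracy reflects how often his claims match reality.

    # Bayes' rule applied to "Bob asserts the claim". `acc` is an assumed,
    # purely illustrative probability that Bob's assertions are correct.
    def posterior(prior, acc):
        num = acc * prior
        return num / (num + (1 - acc) * (1 - prior))

    print(posterior(prior=0.5, acc=0.7))  # reliable Bob: claim rises to 0.7
    print(posterior(prior=0.5, acc=0.3))  # "idiot" Bob: claim falls to 0.3
    # His assertion is genuine evidence against the claim, even though
    # "Bob is an idiot so he's wrong" fails as a strict entailment.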

I’m not aware of any reading on this topic. It seems under-explored in my circles. However, I know some other people have been having similar thoughts recently.


No disagreement with the main thrust of your comment, it's a very good one and imo goes to the heart of the seemingly intractable divide between the logician's approach to truth and that of damn near everyone else - which tends to leave the logician reasoning into the void, doing not a bit of good for anyone.

However I myself would probably label the statement "Bob is an idiot" (or perhaps less abrasively, "Bob has often been wrong in the past in easily verifiable ways") not as evidence that he's wrong per se, but as a signal, possibly a rather strong signal, that he is likely also incorrect in the current matter.

A minor semantic quibble, but in my own experience I've found that conceiving of it as such helps frame the situation as a "sensor fusion of individually unreliable data sources" type of problem, as opposed to one of "collecting experimental results in a logbook and deriving conclusions from them."

The latter of which can lead pretty seamlessly to a towering edifice of belief built upon some ultimately pretty shaky foundations. Ask me how I know ;)
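
In that fusion framing, one standard move is to treat each unreliable source as contributing log-odds and simply add them up; here is a sketch with invented accuracies:

    import math

    # Fuse independent, individually unreliable signals that all assert
    # the same claim, by summing log likelihood ratios. Accuracies are
    # invented, and independence is assumed purely for illustration.
    def fused_posterior(prior, accuracies):
        log_odds = math.log(prior / (1 - prior))
        for acc in accuracies:
            log_odds += math.log(acc / (1 - acc))
        return 1 / (1 + math.exp(-log_odds))

    # Three weak sources, each only 60% reliable, all agreeing:
    print(fused_posterior(0.5, [0.6, 0.6, 0.6]))  # ~0.77: weak signals
    # compound, but no single one is treated as decisive.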


I just use the term “evidence” from probability theory. “Signal” feels pretty synonymous.


Yes, 100% agreed! Your post reflects my feelings on this.

It's important to understand that something being a "logical fallacy" just means that you can't conclusively justify conclusion X using reasoning Y alone.

But that does not mean that reasoning Y is not valid or helpful in understanding conclusion X.

Ultimately it's important to justify your views with sound reasoning, but life is full of heuristics, so the use of heuristics to reach a conclusion can often be reasonable. It just means the conclusion is not definitive from a logical point of view.

Ideally you use a combination of logically sound and heuristic based statements to support an argument.

Following your Bob example... It's important that the person making the argument uses stronger reasoning than just calling Bob an idiot. But agreed, it's a totally valid piece of supporting evidence, assuming "Bob is an idiot" is a fairly agreed-upon statement.


Some are grateful to have them pointed out, after a bit of initial discomfort and resistance. Didn’t work out so well for Socrates of course, but we’re more enlightened now.


If you want to be like Socrates, it'd be better not to simply point out the fallacy but to lead people to recognize the fallacy themselves through the Socratic method.

As far as arguments go "That's an XXX fallacy" is one of the weaker ones, if not fallacious in and of itself.


> but we’re more enlightened now

We hope.



