Hacker News
Fallacies (nizkor.org)
94 points by bildung on Jan 31, 2014 | 94 comments



(I'm a philosophy professor.)

Some training in logic -- reflecting on what makes good reasons and justified conclusions -- does have value. It has the same sort of value that learning grammar does. It enables you to make certain things explicit, to put them under conscious scrutiny and control, when needed.

But I think this sort of enumeration of logical fallacies is of very limited value. In the abstract there are only a few ways that arguments fail, and informal fallacies are all iterations on an extremely similar theme.

Moreover, the kind of argument for which it is easiest to characterize fallacies is deductive, and many instances of reasoning that appear deductive are in fact statistical, inductive, inferences to the best explanation, or some combination thereof. Even for deductive arguments, what fallacies, if any, are being committed will depend upon how one interprets suppressed premises.

The best thing is to learn about different sorts of reasoning and how they work in good cases, and pick up a feel for some fallacies along the way. At the end of the day, reasoning is an art.


Just so. (My late dad was an industrial engineer whose recreational reading was all about the philosophy of science, so I grew up in a home with logic textbooks on the bookshelf.) Plenty of incorrect arguments encountered here on Hacker News or elsewhere in cyberspace are incorrect not so much because they fit on a checklist of classical logical fallacies, but rather because they are based on false factual premises. One has to keep seeking knowledge of the world to recognize mistaken premises of arguments. One thing I enjoy about Hacker News is that it includes people from all over the world, including people who have seen places I have never seen, and done things I have never done, so that I can learn new facts here as I read discussion threads. That is often every bit as valuable as knowing lists of logical fallacies, although I did do a bit of teaching of formal deductive logic back in the days when I was a high school debate coach.

A previous comment of mine

https://news.ycombinator.com/item?id=5835453

from a thread recommended in another comment on this thread goes into more detail about why logical fallacies are not the only problems to look for when evaluating claims for truth or falsehood.


> But I think this sort of enumeration of logical fallacies is of very limited value. In the abstract there are only a few ways that arguments fail, and informal fallacies are all iterations on an extremely similar theme.

I think the value of enumerations -- in addition to creating a vocabulary which is often useful in discussing problems with an argument, even if, as you correctly note, exactly which fallacy is most applicable is a subject of interpretation -- is that it helps recognize instances of potentially problematic arguments. Even if, on a certain level of abstraction, a lot of the named fallacies are just different views of a smaller number of real problems, understanding the different manifestations of those common problems helps to recognize them in practice.


The limit case of this is lesswrong.com, which is a kind of cultish/Vulcan purge of all fallacies. (Ask around about Roko's basilisk)


I read both HN and LW, and enjoy both. I've noticed people on HN describing LW as "cultish" a couple of times now, and am constantly surprised by this. Can you shed any light on why you feel this way?


I think there are two main things about LW that strike some people as cultish. (There are others, less important.) Both are less true than they were, say, a year ago.

1. Its distinctive brand of rationalism grew out of this huge long series of blog posts by Eliezer Yudkowsky, conventionally referred to on LW as "The Sequences". So: we have a group of people united by their adherence to a set of writings by a single person -- a mixture of generally uncontroversial principles and more unusual ideas. It's not a big surprise if this reminds some people of religious scriptures and the prophets who write them.

2. The LW culture takes seriously some ideas that (a) aren't commonly taken very seriously in the world at large, and (b) share some features with some cults' doctrines. Most notably, following Yudkowsky, a lot of LW people think it very likely that in the not too distant future the following will happen: someone will make an AI that's a little bit smarter than us and able to improve itself (or make new AIs); being smarter than us, it can make the next generation better still; this iteration may continue faster and faster as the AIs get smarter; and, perhaps on a timescale of days or less, this process will produce something as much smarter than us as we are smarter than bacteria, which will rapidly take over the world. If we are not careful and lucky, there are many ways in which this might wipe out humanity or replace us with something we would prefer not to be replaced by. -- So we have a near-omnipotent, incomprehensible-to-us Intelligence, not so far from the gods of various religions, and we have The End Of The World (at least as we know it), not so far from the doomsdays of various religions.

Oh, and LW is somewhat associated with Yudkowsky's outfit, MIRI (formerly the Singularity Institute), and Yudkowsky is on record as saying that the Right Thing to do is to give every cent one can afford to them in order to reduce the probability of a disastrous AI explosion. Again, kinda reminiscent of (e.g.) a televangelist telling you to send him all your money because God is going to wrap things up soon. On the other hand, I do not believe that's his current position.

For the avoidance of doubt, I do not myself think LW is very cult-like.


Couple of links on Roko's Basilisk, which is Forbidden Knowledge in LW:

An explanation: http://rationalwiki.org/wiki/Roko%27s_basilisk

Yudkowsky going gaga on reddit: http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncenso...

> The fact that you disagree and think you understand the theory much better than I do and can confidently say the Babyfucker will not hurt any innocent bystanders, is not sufficient to exempt you from the polite requirement that potential information hazards shouldn't be posted without being wrapped up in warning envelopes that require a deliberate action to look through. Likewise, they shouldn't be referred-to if the reference is likely to cause some innocently curious bystander to look up the material without having seen any proper warning labels. Basically, the same obvious precautions you'd use if Lovecraft's Necronomicon was online and could be found using simple Google keywords - you wouldn't post anything which would cause anyone to enter those Google keywords, unless they'd been warned about the potential consequences. A comment containing such a reference would, of course, be deleted by moderators; people innocently reading a forum have a reasonable expectation that Googling a mysterious-sounding discussion will not suddenly expose them to an information hazard. You can act as if your personal confidence exempts you from this point of netiquette, and the moderator will continue not to live in your personal mental world and will go on deleting such comments.

Uh...


Roko's Basilisk itself is fine. An interesting idea that is fun to consider. The notion that the idea is too dangerous to discuss publicly is rather cultish.

If I understand correctly, such violent reactions to Roko's Basilisk only come from a minority of LW people, but a prominent minority...


Yudkowsky pushed a virgin into a volcano once.


I want an "Ask me about Roko's basilisk!" button.


It's not quite clear whether you cite the Roko basilisk incident as (1) an instance of a "purge of fallacies" or (2) something generally cultish.

It probably does count as #2, but not #1. Roko's basilisk wasn't purged on account of being a fallacy.

(For the avoidance of doubt, I am not defending the purging, which I think was a really stupid move.)


What about looking at these lists of fallacies as vocabulary building tools? Perhaps fallacies are all variations on a handful of errors, but it certainly expedites conversation to be able to say, "No, that's just begging the question." Otherwise I would have to waste my breath disagreeing in a more clumsy way.


I agree, there's some value in that. But two things: (1) For begging the question, I think it's more perspicuous to go up a level of abstraction and talk about circular reasoning. And something analogous is true of most of these fallacies. (2) For any real argument you're going to have to spend some breath anyway. It's very rare when you can just stop at, "Isn't that question-begging/circular?", or, "But aren't you equivocating on the word 'freedom'?" This is part of what I mean by saying that reasoning is an art: the classification of an error in reasoning in any particular case will involve nuance and controversy.


Your last sentence feels unfinished.

At the end of the day, reasoning is an art, and an art worth your learning. The question is rather, whether you be capable of learning it? [0]

[0] http://www.gutenberg.org/cache/epub/683/pg683.html


I'm not sure exactly what you mean, but I meant "art" to imply a contrast to something more algorithmic or top-down.


Fwiw, in my experience, simply knowing as many logical fallacies as possible better enables one to recognize and discard invalid arguments more quickly. It facilitates a more systematic process of elimination.

It also helps one realize and appreciate just how difficult it is to construct a correct argument, when you can quickly spot the fallacies in your own assertions. As a result, on a personal level, this has caused me to tend to read, listen, and question more, and assert less.


I think I get what you're saying, but I also think it more usefully applies to the conversations you might have with your philosophy friends. Meanwhile, there seem to be a lot of people on my Facebook wall who could benefit from a list of basic fallacies.


Not really. I'd bet the people on your Facebook wall could benefit from learning how to construct a cogent argument. A list of "DO NOT"s rarely benefits anyone.


> The best thing is to learn about different sorts of reasoning and how they work in good cases, and pick up a feel for some fallacies along the way. At the end of the day, reasoning is an art.

Do you have good suggestions for learning about different sorts of reasoning?


I do but if you're a senior U of C student (as your profile says), I'm sure you've already been very well-served in this regard by your curriculum. :)


I'm sure others in the community would like to know. I'm also always looking for good books full of ideas and modes of explanation and argumentation. :)


Logical fallacy #1: that actual argumentation in real life can be reduced to some axiomatic system where you can just discard arguments as logical fallacies.

It's easy as a nerd to fall into that trap, but real life has much more nuance than those fallacy lists capture.

Case in point: "the no true scotchman fallacy". In real life, groups CAN be argued to have certain characteristics that distinguish members from non-members, hypocritical members, non-practicing members, posers and "fakes". So the "no true scotchman" fallacy breaks down when you're dealing with such nuances.

Or take "appeal to tradition": 1) X is old or traditional 2) Therefore X is correct or better.

Well, it depends on how you define correct or better. If you value tradition and see conformance to it as the most important metric, then X is indeed better for you.

Who is to say what metric you should use for "correct", in issues like ethical ones, that are not clear cut and measurable as things are in the hard sciences and mathematics?


I am curious: have you ever taken any philosophy classes?

> Well, it depends on how you define correct or better. If you value tradition and see conformance to it as the most important metric, then X is indeed better for you.

This is a common response when people first hear/read about "appeals to tradition." So common that the linked web page deals with this response:

Obviously, age does have a bearing in some contexts. For example, if a person concluded that aged wine would be better than brand new wine, he would not be committing an Appeal to Tradition. This is because, in such cases the age of the thing is relevant to its quality. Thus, the fallacy is committed only when the age is not, in and of itself, relevant to the claim.

Just for completeness, rationalwiki has the following to say about your beloved "scotchman":

"Broadly speaking, the fallacy does not apply if there is a clear and well understood definition of what membership in a group requires and it is that definition which is broken (e.g., "no honest man would lie like that!", "no Christian would worship Satan!" and so on). "[1]

[1] http://rationalwiki.org/wiki/No_True_Scotsman


Actually it's "no true Scotsman".

Man, now I just don't know about everything else you've written there.

(kidding)


I know you're doing it in jest, but it's interesting how often the tendency to 'well-actually' minor, irrelevant factoids and the tendency to reduce conversations to axiomatic systems appear together. It's part of the hacker desire to be Less Wrong (TM).

But in doing so, they are More Wrong (TM). Both situations involve missed social cues. Correcting a minor point is another way of saying you place a higher priority on correctness for its own sake - even if doing so prevents you from reaching higher goals - than you do on understanding.

These things can make it hard for non-hackers to converse with hackers, and vice versa.


> These things can make it hard for non-hackers to converse with hackers, and vice versa.

Discussion should be about connecting and learning, not one-upping.

This distinction is why the Internet 'communities' almost never are. Nerds love to be pedantic while not realizing it doesn't add anything to the discussion. Actually, it potentially poisons the discussion, steering it towards who is more right, rather than actually, y'know discussing things.

"But, someone was WRONG!" It doesn't matter, and nobody cares.


Yes, and the ironic thing is that the 'well-actually' person is often /less/ correct in that they miss the main point in pursuit of compliance with arbitrary rules.

Being pedantic is akin to writing beautiful code that never gets used.


It's not necessarily correctness for its own sake. Sometimes it's the linguistic equivalent to pointing out a spot of mustard on someone's shirt, which is a courtesy in my book.

That being said, an audience member pointing this out to someone on stage is a net negative. But I'm not convinced that I should ignore irrelevant errors altogether, especially for errors that people use to signal levels of education or intelligence (nuclear and nucular, you're and your, etc.).


To be more specific - the notion that it's a courtesy is mistaken. It's considered rude at best, and the general public may (rightly) interpret a well-actually as a mark of low social or emotional intelligence.

So, we must carefully consider what really signals intelligence.

Linguistic errors pale in comparison to errors of understanding like missing the point.


> it's interesting how often the tendency to 'well-actually' minor, irrelevant factoids and the tendency to reduce conversations to axiomatic systems appear together.

Simplicity is seductive, especially when you're convinced that the minor factoid you're harping on is something you're absolutely sure is correct. If you take a situation and reduce it down to the abstract axiom that you know is true, you can assure yourself that your interpretation of the situation is correct regardless of any nuances in the reality. Because "facts".

This is the definition of fundamentalism. Fundamentalism is named from the effort to get the fundamentals right. In doing so, they completely isolated and cut themselves off from reality.


You're completely misrepresenting what the No True Scotsman fallacy is. The key characteristic of a No True Scotsman fallacy is that the arguer's original claim is revised to handle a counterexample. In the titular example, the arguer originally says "no Scotsman does x," and the obvious implied definition of "Scotsman" is simply a man from Scotland. But when faced with a counterexample (a man from Scotland who does do x), the arguer revises the original claim by adding the word "true." Under this revision, the arguer's original claim is not an actual claim about what men from Scotland do, but rather a proposed definition for the term "true Scotsman."


Most of the time I see No True Scotsman called, it's because they were using ordinary imprecise language even though their group is real and the word they used for it is reasonable. Not because they were proposing a 'true' definition.

Such as 'no vegan eats meat' 'what if they had fries at mcdonalds with secret meat in the oil' 'okay no vegan purposefully eats meat' 'ha! you narrowed the group! no true scotsman! you lose!'


>You're completely misrepresenting what the No True Scotsman fallacy is. The key characteristic of a No True Scotsman fallacy is that the arguer's original claim is revised to handle a counterexample.

Yes. I don't think I'm misrepresenting it. Revising an original claim to handle a counterexample is something that is essential in actual conversations. It can just mean you forgot an important distinction in your original claim.

>Under this revision, the arguer's original claim is not an actual claim about what men from Scotland do, but rather a proposed definition for the term "true Scotsman."

That's beside the point. In real life conversations, we often use the term X to mean the essence of X (the true X) and not just the bare notion of X. That is, there's nothing fallacious about the following exchange:

- A metal fan would never listen to Bieber.

- Well, I'm a metal fan and I listen to Bieber.

- Well, you're not a true metal fan then.


The example you give about metal fans and Bieber is a perfect example of No True Scotsman. The term "metal fan" will be widely understood to mean "someone who likes metal music," not "someone who never listens to anything other than metal music." The initial claim is clearly false, so the person revises the claim to contain the word "true," which reduces the argument to nothing more than "well I refuse to consider you a metal fan if you listen to Bieber."


Once you learn to recognize them, a surprising number of fights are, at root, about competing definitions of terms.


Understanding that "life can't be reduced to some axiomatic system" is a really important part of being a well-rounded human.

But I quibble on calling it a logical fallacy. It is emphatically not a logical fallacy. That's the whole point you're trying to make - which is that logic itself is limited in its ability to capture the totality of human experience.

Other philosophers call this the limits of Reason (capital R) or the failure of the Enlightenment. But calling it 'Logical Fallacy #1' undermines the very lesson you are trying to impart.


I think the OP used 'logical fallacy' ironically.

But maybe not: Consider this. Many, many formal systems describe behavior and not some underlying truth. The question of whether reality can even be reduced to a formal system is still open.

From the above axioms, we can reasonably conclude that what formal systems prove is provably true only insofar as the axioms reflect underlying reality exactly.

In 'social' formal systems like the one that describes logical fallacies, it is clear that context is inherently lacking, that the axioms are approximations, and that the system cannot be expected to accurately describe reality.

Thus, we can formally conclude that logical fallacies are, well, approximations.


Well, I might be too dense to grasp what you're saying, but I don't really understand it. I'm not sure, for starters, which axioms you're referring to.

It seems worth noting, however, that there is, and there should be, a distinction between formal logical fallacies and informal logical fallacies. The latter are what I take you to have in mind with the term "social formal systems".

Mistaking ad hominem, or argument from authority, etc., for a formal fallacy is a huge mistake - the one OP seems to be referring to.

All I'm trying to say here to you is that there is no need to resort to formal systems theory, because the taxonomy of informal fallacies never was meant to be considered illogical in the same way that Affirming the Consequent (a formal fallacy) is.
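To make the contrast concrete, here's a quick illustrative sketch (mine, not OP's) of why Affirming the Consequent is a *formal* fallacy: the argument form "P implies Q; Q; therefore P" has a truth assignment where both premises hold and the conclusion fails, and one counterexample is all it takes to invalidate a form.

```python
# Sketch: enumerate truth assignments for the form
#   P -> Q, Q |- P   (Affirming the Consequent)
# and collect assignments where the premises are true but the
# conclusion is false. Any hit proves the form invalid.
from itertools import product

def implies(p, q):
    # Material conditional: P -> Q is false only when P is true and Q is false.
    return (not p) or q

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p  # premises true, conclusion false
]
# The assignment P=False, Q=True satisfies both premises but not P.
```

No informal fallacy admits this kind of mechanical check, which is exactly the distinction being drawn above.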


Well, OP is saying that fallacies in logic are really only fallacies if the assumptions that constitute the definition of that fallacy actually apply to a particular situation. Because situations differ so much, context is critical, and OP is saying that people often incorrectly define real-world social situations as instances of logical fallacies.


> "In real life groups CAN be argued to have certain characteristics to recognize members from non-members, hypocritical members, non-practicing members, posers and "fakes". So the "no true scotchman" fallacy breaks down when you're dealing with such nuances."

On the other hand, where this most frequently pops up^, there is frequently no authoritative definition of what constitutes membership. Where there is, there is frequently no authoritative interpretation of the definition.

^ Any internet argument involving religion. The best you can typically manage is membership of particular sects which happen to have an agreed upon defining body. If the Pope says that you are not a Roman Catholic, you are pretty much by definition not a Roman Catholic. However if I define "Christian" in clear unambiguous terms, then point to my definition when asserting that somebody who claims to be a Christian is not a true Christian, then really I have said little of interest. That person may very well be a Christian by a definition other than my own (for instance, according to their definition), and I have no justification for asserting that my definition is the true definition and theirs is false.

The issue here is not quite that there is a true Scotsman fallacy going on. Rather it is a failure to agree to the definition of terms before engaging in the discussion.


The way I look at it is that arguments are either factual or normative. Facts can be easy to test even if they are counter-intuitive. Normative, since you can't derive "ought" from "is" alone, means you are ultimately resting the conclusion on moral or value-based axioms. While people can reasonably disagree on the value axioms themselves, there might still be logical errors in the implications that flow from them, so it can still be useful to engage in a dialectic to test for that.


They aren't intended as some magical argument-ending tool; they are designed as reminders of common flaws in logic. They are more for personal use than use against others, because it is easy to fall into bad logic, even more so when emotions (notoriously immune to logic) are involved. Being aware of them will help you be self-aware.

RE: "No True Scotsman" -- you missed the point a bit. The point is that it is a shell game. "No Hacker News reader would EVER do X" ... I link to a Hacker News reader doing EXACTLY X ... "Well, no REAL Hacker News reader" -- it is an informal fallacy about goalpost moving, basically. You can't add "no REAL abc" to excuse counterexamples, because it means that ANYTHING I say / prove / point out, you will always claim "no TRUE Hacker News reader would do that", making the argument pointless.

Re: Appeal to tradition. The point is that tradition has no inherent value, and doesn't relate to correctness. If my mother thought 2+2=17... and she taught that to me, and I taught that to my children, that would be a tradition in our family. The point is that just cause something is traditional doesn't mean it is correct.


"Who is to say what metric you should use for "correct", in issues like ethical ones, that are not clear cut and measurable as things are in the hard sciences and mathematics?"

See this talk[1] by Sam Harris for an argument as to why that is an illusion. In essence he argues that we make axiomatic claims all the time in areas for which there aren't specifically quantifiable goals. In terms of morality the metric is the well-being of conscious creatures. If you argue that there's something else that's more important than the well being of conscious creatures then you probably don't know what you mean.

Likewise with the idea of 'health'. 'Health' is a squishy idea. But if you were to argue that "My idea of health is involuntarily vomiting 6 times a day" we'd have nothing to discuss.

[1]http://www.ted.com/talks/sam_harris_science_can_show_what_s_...


Thanks coldtea, I think you're onto something important. I'll try to expand on it, though it's hard to do this briefly.

People reason (and discuss) in several modes, which are ordinarily mixed freely. Deductive, a.k.a. logical (in the proper sense) reasoning is one, and the fallacy-or-not analysis is strictly applicable.

Another mode is that of empirical science, which is basically inductive. You can't prove logically that, for example, substance X causes condition C, but you can establish a good probability by isolating the case, ruling out other factors, showing that the relation is reliably reproducible, etc..

A third mode is humanistic or intuitive. It is an error to suppose that this is unreliable or categorically invalid. Say you have thefts at a workplace, and question employees, and most answer straightforwardly, but one is evasive, furtive, avoids eye contact, can't explain simple events - despite the inapplicability of the logical or scientific modes, this is a rational basis for directing suspicion.

Errors of a meta variety enter when one mode is evaluated by the standards for another. E.g., this writer depicts "slippery slope" as a fallacy of logic by inserting "inevitably" in his interpretation of claims - attributing to the argument-maker the claim that A inevitably leads to B. But in fact "slippery slope" belongs to the third mode, and as such it can be valid - censors do tend to expand their categories, and the actual claim may be, not that the progression is inevitable, but instead that it's unacceptable to go further in that direction - and on this kind of interpretation there may be no fallacy at all.

It's a good list of logical fallacies, but we just have to make sure we don't try to apply the refutations in the context of other modes of reasoning.


Exactly. These are fallacies that are constrained to arguments that are logical in nature. That is, the agreed upon rules of argument and discussion is to use logic. And most arguments even of your second or third category use elements of logic to help strengthen the argument.

However, not all arguments are meant to be or can be solved with pure logic. Otherwise we'd all be Kantians. People will intentionally argue using Appeal to Emotion or Slippery Slope and, within certain contexts, this can be perfectly fine. Just throwing around "That's a logical fallacy, therefore your argument is invalid" is the perfect example of Fallacy Fallacy.

Even worse, just relying on having a codex of fallacies to throw around into any context reduces most arguments into petty squabbles and making it too easy to dismiss your opposition. But for discussions of the second and third mode you need to engage the person and get to the root of their ideas and argue using the proper rules and context.

My personal revelation of all of this came from reading this piece titled "Your Baloney Detection Kit Sucks": http://plover.net/~bonds/bdksucks.html


> ...but we just have to make sure we don't try to apply the refutations in the context of other modes of reasoning.

Why not? I'm not sold that it is an error to disagree with another's (often unconscious) criteria for making a conclusion.


It is absolutely correct to question the underlying premises. All the modes have them. It's just that the evaluations are not amenable to the same standards.

E.g., you can cast doubt on a scientific hypothesis of "A causes B" by pointing out a previously overlooked possible common cause. Or e.g., the polygraph is fairly bogus, not because of the mere logical possibility of false positives and false negatives, but rather because of their relative magnitudes and the poor support of the premises (about physiological correlates of lying or truth-telling). The flaws are strictly defined in one mode, approximate or probabilistic in another.


I thought Stephen Toulmin had the most useful formulation of argumentation.

"Toulmin's practical argument is intended to focus on the justificatory function of argumentation, as opposed to the inferential function of theoretical arguments. Whereas theoretical arguments make inferences based on a set of principles to arrive at a claim, practical arguments first find a claim of interest, and then provide justification for it. Toulmin believed that reasoning is less an activity of inference, involving the discovering of new ideas, and more a process of testing and sifting already existing ideas—an act achievable through the process of justification."

http://en.wikipedia.org/wiki/Stephen_Toulmin


You may find this article titled "Your baloney detection kit sucks" interesting:

http://plover.net/~bonds/bdksucks.html

Which was discussed on HN under the title "Logical Fallacies Are Usually Irrelevant or Cited Incorrectly":

https://news.ycombinator.com/item?id=5832320


Fallacies are invoked too often in internet arguments. Take ad hominem, for example. People invoke ad hominem as a counter to people who attack the credentials/funding sources of scientists who have an interest in promulgating a particular point of view. However, it is totally valid to argue that the minority of scientific studies that find, say, chemical X not to be harmful were funded by chemical companies. It's not a valid counter for a purely logical argument, but debates on the internet are rarely based on pure logic. Rather, they are usually based on evaluating the credibility of experts, evaluating the relevance of evidence, and distinguishing what kind of conclusions can and cannot be determined by particular evidence.

To me, a better source for internet debate is the Federal Rules of Evidence: http://www.law.cornell.edu/rules/fre. Particularly the modern statistical approach to evaluating the relevancy of evidence based on Bayes' rule: http://en.wikipedia.org/wiki/Evidence_under_Bayes_theorem.
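The Bayesian approach to evidence mentioned above can be sketched in a few lines. This is my own illustrative toy, not drawn from the linked pages, and the numbers are made up: the point is that a study's funding source affects the *likelihood ratio* we assign to its finding, which is how "consider the source" enters legitimately without any ad hominem.

```python
# Sketch of weighing evidence via Bayes' rule. All probabilities here
# are hypothetical, chosen only to illustrate the mechanics.

def update(prior, p_e_given_true, p_e_given_false):
    """Posterior P(claim | evidence) by Bayes' rule.

    prior            -- P(claim) before seeing the evidence
    p_e_given_true   -- P(evidence | claim is true)
    p_e_given_false  -- P(evidence | claim is false)
    """
    numerator = p_e_given_true * prior
    return numerator / (numerator + p_e_given_false * (1 - prior))

# Claim: "chemical X is harmful". Evidence: an industry-funded study
# finds X harmless. If we judge such a finding only somewhat more
# likely when X really is harmless (0.6) than when it is harmful (0.4),
# the evidence moves a 50/50 prior only modestly, to P(harmful) = 0.4.
posterior = update(prior=0.5, p_e_given_true=0.4, p_e_given_false=0.6)
```

Note that an uninformative study -- one equally likely under either hypothesis -- leaves the prior untouched, which is the formal version of "this evidence carries no weight."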


Attacks against credentials are valid against an argument to authority (in which case they are, in fact, pointing out a fallacy), but you only have an argument to authority when a conclusion is offered on the sole basis that the conclusion was offered by an authority. When the actual argument by the alleged authority is presented (including by reference), an attack against credentials is not a valid rebuttal, because the credentials are not the support offered for the conclusion. It is appropriate and correct to label such an attack, offered as a rebuttal, as invalid by way of the ad hominem fallacy.


Your point is correct, but irrelevant. Almost any internet debate that you can imagine is ultimately rooted in arguments to authority. It's simply not useful to look at internet arguments as logical ones. It's the wrong level of abstraction. They involve some basic logical reasoning, but generally, they are all about weighing conflicting evidence and expert opinions. Should we regulate fracking? It all depends on what the evidence is, and which evidence we believe and which evidence we don't believe. Each of those expert opinions might be characterized as logical arguments themselves (generalizing from particular empirical observations to conclusions), and could possibly be tackled at that level (well you can't reach this particular inference from that particular data point), but that would make internet arguments completely intractable and not very illuminating.


Weighing conflicting evidence and expert opinions is a domain in which a correct understanding of fallacies -- particularly those of argument to authority and ad hominem -- is critical, because it is intimately tied to the vital distinction between expert opinion, on the one hand, and reference to evidence which happens to have been previously presented by an expert, on the other.

So, no, the distinction between the valid use of an attack against credentials to rebut expert opinion, and the fallacious, ad hominem, nature of such an attack when used to “rebut” evidence that happens to have been previously presented by an alleged expert is not, at all, irrelevant to internet debate.


> However, it is totally valid to argue that the minority of scientific studies that find, say, chemical X not to be harmful were funded by chemical companies.

I completely disagree with this common attempt to debunk the ad hominem fallacy. I truly believe that the arguer is completely separate from the argument. It's true that we shouldn't trust unsubstantiated claims from biased chemical companies. But the key is that we shouldn't trust unsubstantiated claims from unbiased sources either, even historically reliable unbiased sources.


I'm trying to find the name of a suspected logical fallacy that I think some founders make... wonder if anyone can spot it?

It goes something like:

1. Established product A has traits X, Y and Z and is a billion dollar business.

2. Our new product B also has traits X, Y and Z, and so it will be a billion dollar business.

Example I saw the other week from "Why Bitcoin Matters"

> A mysterious new technology emerges, seemingly out of nowhere, but actually the result of two decades of intense research and development by nearly anonymous researchers. Political idealists project visions of liberation and revolution onto it; establishment elites heap contempt and scorn on it. On the other hand, technologists – nerds – are transfixed by it. They see within it enormous potential and spend their nights and weekends tinkering with it. What technology am I talking about? Personal computers in 1975, the Internet in 1993, and – I believe – Bitcoin in 2014.

Please note, I'm not knocking bitcoin here. I'm just wondering if this is indeed a fallacy and has a name.


The fallacy you're describing, and it's something that far too many startups do, falls under the banner of "cargo cult behaviour". It was first described by Richard Feynman, while talking about science, but the principle applies to people who merely mimic behaviour and expect the same outcome regardless of what they're doing.

"In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land." [1]

Believing you're building a billion dollar business because you're nominally doing the same things as someone who has built a billion dollar business before you is all too common.

[1] http://en.wikipedia.org/wiki/Cargo_cult_science


I think this person is mostly off the hook here. To begin with, "I believe" makes it clear that this is an opinion, not a statement of fact. They aren't saying "bitcoin will be popular because personal computers were popular"; they are saying "I can see similarities between the two".

They aren't even making any specific statement about bitcoin beyond it being controversial and interesting to nerds; both of which are already true.


It is highly related to the Questionable Cause fallacy: namely, the belief that if two events are highly correlated then there is a causal link between them.

The business has traits X, Y, Z and is successful, so we conclude that the traits X, Y, Z cause it to be successful.


The fallacy of the form "A has trait X, A has trait Y, B has trait X, therefore B has trait Y" is an association fallacy (the particular subtype you refer to is sometimes called "honor by association"; it's exactly the same as the somewhat more popular "guilt by association", but where the faulty conclusion is positive rather than negative).

http://en.wikipedia.org/wiki/Association_fallacy


A lot of this is just confusion of necessary and sufficient conditions. Assuming that X, Y, and Z are necessary, it doesn't mean they are sufficient. (Although in #1 they might not even be arguing they are necessary.)


This is an inductive argument, I'd say, which is also listed as a fallacy on that site -- see the "Examples of Fallacies" section below the numbered list.


"Fallacy of the undistributed middle"

all A are X

this B is X

therefore this B is A

then: for A: successful business. for X: has certain traits. for B: our new product.


Not really. The case is not being made that "all successful businesses have this specific trait".

I think the most relevant fallacy here is "post hoc ergo propter hoc", which says that if A came before B, then A must have caused B.

I.e., this business had traits A, and then they went on to become successful, so it must have been these traits, since they were there first.


True, there isn't even a claim of "all" (which makes it even weaker of course).

But there is likely a plausible explanation why some property of the product improves its sales. And indeed, that property may really be necessary for a product's success. But it may be an insufficient cause.

E.g. Facebook started among college kids, so my product focused on college kids will succeed.

So I'm thinking it's a flavor of overgeneralization or selection bias, and I opine that you can most easily show how ridiculous that kind of reasoning is with the "A is X, B is X, therefore B is A" fallacy.


It's always nice to see the fallacies of debate promoted so we can hopefully all improve in our exchanging of ideas.

Ones to really watch out for IMO are Appeal to Emotion and Ad Hominem; these two are the most used fallacies in internet debates. The more you familiarize yourself with them, the better you'll become at recognizing a poor argument.


It's important to really know what those fallacies look like, though. Just insulting someone isn't an ad hominem, and just using emotionally charged language isn't necessarily an argument that's relying on an appeal to emotion.

And if you're going to call someone out on a fallacy, don't throw it down like a trump card. SHOW how their argument is fallacious.


Personally, I would guess that biases such as confirmation bias interfere with our reasoning much more than any fallacies do. They are also harder to get rid of.


Agreed. The fallacies are simply a toolset for confirmation bias IMO.

Personally, I think cognitive dissonance is a major problem. People in general have opinion set A at any given moment, and when a direct contradiction to one opinion in set A occurs, they either fight it, or flee.

There's a lot of science behind the idea that most people use their reptilian brain in arguments, emotionally defend their already ingrained views, and then try to logically defend them after the fact.

The real challenge, while defending your own views, is to try to discern whether you are simply responding emotionally to cognitive dissonance, or rationally defending a worldview built on foundations of knowledge and insight.

I'd like to say I do the latter, but I'm sure everybody would like to agree with that...


Adding to that is the strange fascination with slippery slope on the internet. It's so bad that people will start by calling their own argument a slippery slope and implying that that makes it logically sound. Removing that from discussions would go a long way to promoting reasonable debate.


If you believe a slippery slope argument, you'll soon find yourself believing any old rubbish.


A lot of people also mix up ad hominem for poisoning the well.




Sadly missing is the "Fallacy Fallacy", which seems to crop up in internet discussions rather frequently.


Good point. Though the fallacy fallacy is just a special case of Denying the Antecedent:

  1. If [argument is sound], then [conclusion is true].
  2. [argument] is not sound.
  3. Therefore, [conclusion] is not true.
Similar to:

  1. If it has rained, then the street is wet.
  2. It hasn't rained.
  3. Therefore, the street isn't wet.
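The invalidity of this form can even be checked by brute force over truth assignments (a small illustrative script, not from the source; variable names are my own) -- the form is invalid exactly because there is an assignment where both premises hold but the conclusion fails:

```python
from itertools import product

def implies(p, q):
    # Material conditional: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# Denying the antecedent: from (P -> Q) and (not P), conclude (not Q).
# Search for a countermodel: both premises true, yet the conclusion false (Q true).
countermodels = [(p, q) for p, q in product([True, False], repeat=2)
                 if implies(p, q) and not p and q]
print(countermodels)  # [(False, True)]
```

The single countermodel (P false, Q true) is exactly the rain example: it hasn't rained, but the street is wet anyway (say, a burst pipe).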


Indeed, but see the date of the source: the content of these pages is a few decades old. And it is indeed quite handy in real-life situations, e.g. panel discussions.


Fallacies, fallacies, one for you, and two for me...


Interesting. Just keep in mind that pointing fallacies out in friendly conversation is very annoying.

Relevant previous discussion on "YOUR BALONEY DETECTION KIT SUCKS": https://news.ycombinator.com/item?id=5832320


"For example, a moderate amount of exercise is better than too much exercise or too little exercise."

Of course, "too much" and "too little" already implies a negative. "Too much" of anything is bad, that's why it's called "too much".


I'd love to see all the fallacies reframed as "persuasion tactics".


Yeah, I'm not crazy about the overly simplistic way some of these "fallacies" are constructed. I've always thought people who go on about fallacies in argument are incredibly smug, and the fallacies themselves are just names given to things we already recognized and understood.

In the "Appeal to Pity" fallacy the example given is:

Jill: "He'd be a terrible coach for the team."

Bill: "He had his heart set on the job, and it would break if he didn't get it."

Jill: "I guess he'll do an adequate job."

The argument that Bill cares immensely about the job IS evidence that he would likely do it well. A coach who cares is more likely to work hard, be more invested in the outcomes of games, etc.

But the real problem, as you mention, is that people don't argue to logically prove something. People argue to convince someone else (often not even the person they're arguing with). If you can convince people you won the argument, you won the argument. It doesn't matter whether you appealed to base human emotion or made a logically/factually empty argument.



"Science fallacy": Anything coming from a scientist or a science/research group/organization becomes "scientifically proven" and therefore "the truth".


Sounds like "appeal to authority" to me, but the way you stated it, it looks more like a straw man.

The reason to trust scientists is their method, which gives a good approximation to truth. If you are capable of evaluating their results yourself, then you don't need to trust them, but if you aren't, you have a good reason to trust them, even if technically committing the fallacy, because what else are you gonna do?


Actually, I think "Ad Hominem" is a rather good filter. Although it's not proof of a statement being true or false, it's still good to ask yourself the question "why is this person saying this?"

For example: a stranger in a truck rolls up to a kid on their walk home from school and says, "Get in my truck; your parents were in an accident, and I need to drive you to the hospital." Now, everything the person said could be true -- but I would strongly advise against getting into the truck.


Ad hominem is when you dismiss what someone says because of who they are, not what they say.

There's no fallacy with simply not bothering to listen to them in the first place, if they have a history of spouting bullshit.


Ad Hominem is grossly 'over-diagnosed' on the internet. I would say that the vast majority of the time that somebody claims somebody else is using ad hominems, what they are actually doing is merely insulting the other person.

In other words:

  "You are wrong because of X, Y, and Z.  You fucking idiot."
not:

  "You are wrong, because you are a fucking idiot."
The first will typically get cries of "ad hominem", but it isn't. The second sentence is not being used to logically support the assertion in the first sentence, it is just a tacked on plain-old insult.


What's wrong with example one, the fact that it doesn't explicitly say 'probably'? Just because something is a conclusion doesn't mean it's an absolute. When talking about real-life situations, there are no absolutes. Your premises can change behind your back.

Sometimes you have to go with the most likely option to not get stuck in a quagmire. Being imprecise for the purposes of focus is not an error.


The Adventures of Fallacy Man (Comic):

http://existentialcomics.com/comic/9


Interesting, but I want to nitpick on the appeal to authority:

> This fallacy is committed when the person in question is not a legitimate authority on the subject. More formally, if person A is not qualified to make reliable claims in subject S, then the argument will be fallacious.

Such arguments are fallacious even when person A is a well-informed individual who is sufficiently well qualified to understand the subject. Not recognising this misses the spirit of the fallacy -- that all claims must be substantiated, and that prior performance is /completely/ irrelevant to whether any individual claim is valid -- even when that experience makes person A the foremost expert in the field.


Yes, and it's amazing and ironic how many otherwise authoritative sources (i.e. sources worth reading and listening to) get this totally wrong -- they still think that the source of an idea can have a bearing on whether it is true. I think they are variously conflating three issues: (1) whether an idea is true, (2) whether one can (rationally) rely upon it in some way, (3) whether one understands it.


It's always interesting to see these (in)formal fallacies on the internet. They're really only fallacies if they're wrong.

Usually I just want to say "fallacy of not knowing mathematical logic."


Fallacies are favorites of Internet forums because you don't need a bit of domain knowledge to attack any argument. They're sort of like grammar or spelling corrections.


The biggest logical fallacy is not realizing that Homo sapiens evolved to NOT use logic in certain areas: tribal allegiances, bonding and politics. You can see this play out in the GOP/conservative vs Dem/liberal farces in various forums and on TV every day. Same thing goes for what passes for the term 'political correctness.'

Also, religion--logic does not come into play there.

Also, death and the afterlife--logic is precluded there as well.

Homo sapiens evolved to use logic only in certain limited areas.



