> The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion ...
You immediately self-contradicted here. If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).
Of course, given unlimited time to think about it, we would never use ad hominem reasoning and would consider each and every argument fully. But there are tens of thousands of cults across the world, each insisting that they possess the Ultimate Truth, and that you can have it if you spend years studying their doctrine. Are you carefully evaluating each and every one to give them a fair shake? Of course not. Even if you wanted to, there is not enough time in a human lifespan. You must apply pattern-matching. The argument being made here isn't really an ad hominem; it's more like "the reason AI risk-ers strongly resemble cults is because they functionally are one, with the same problems, and so your pattern-matching algorithm is correct". Note that the remainder of the talk is spent backing up this assertion.
There's a good discussion of this in the linked article about "learned epistemic helplessness" (and cheers to idlewords for the cheeky rhetorical judo of using Scott Alexander and LW-y phrases like "memetic hazard" in an argument against AI risk), but what it boils down to is that our cognitive shortcuts evolved for a reason. Sometimes the reason is an artifact of our ancestral environment and no longer applies, but that is not always true. When you focus solely on their failure cases, you lose sight of how often they get things right... like protecting you from cults with the funny robes.
> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
A lot of people did ignore Einstein until the precession of Mercury's perihelion, and later the 1919 eclipse observations, provided empirical evidence for relativity, and they were arguably right to do so.
A good heuristic leads to a reduction in the overall cost of a decision (combining the cost of making the decision with the cost of the consequences if you get it wrong).
A heuristic like "it's risky to rent a car to a male under 25" saves a lot of cost in terms of making the decision (background checks, accurately assessing the potential renter's driving skills and attitude towards safety, etc.) and has minimal downside (you only lose a small fraction of potential customers) and so it's a good heuristic.
A heuristic like "a 26-year-old working a clerical job who makes novel statements about the fundamental nature of reality is probably wrong" does reduce the decision cost (you don't have to analyze their statements) but it has a huge downside if you're wrong (you miss out on important insights which allow a wide range of new technologies to be developed). So even though it's a generally accurate heuristic, the cost of false negatives means that it's not a good one.
I agree with you in principle, but the combination of the base rate for "26-year-olds redefining reality" being so low and the consequences not being nearly as dire as you make out means I stand by my claim, at least for the case of heuristics on how to identify dangerous cults.
With regard to the Einstein bit, per my comment above, I still think that skepticism of GR was perfectly rational right up until it got an empirical demonstration. And it's not like the consequences of disbelieving Einstein prior to 1919 were that dire: the people who embraced relativity before then didn't see any major benefit from doing so, nor did it hurt society all that much (there was no technology between 1915 and 1919 that could've taken advantage of it).
Pascal's Wager (https://en.wikipedia.org/wiki/Pascal's_Wager) is also about a small but likely downside with a potentially large but unlikely upside. Do you think it's analogous to your 2nd case? If not, how is it different?
The difference is that in Pascal's Wager, the proposition is not a priori falsifiable, and so you cannot assign a reasonable expected cost (i.e., taking probability into account) to either decision.
In the case of a 26-year-old making testable assertions about the nature of spacetime (right down to the assertion that space and time are interconnected), there's a known (if potentially large) cost to testing the assertions.
> If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).
So, is it a good heuristic to conclude that since crime is related to poverty and minorities tend to be poor, minorities qua minorities ought to be shunned?
No. The base rate of having a crime committed against you is extremely low, and the posterior probability of having a crime committed against you by a poor minority, even if higher, is still extremely low. My point refers to the absolute level of the posterior probability of one's heuristic being correct, not the probability gain resulting from some piece of evidence (like being a poor minority).
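Here's a rough numeric illustration of that distinction, with invented numbers (the point is the shape of the arithmetic, not the specific values):

```python
# Invented numbers for illustration only: even if some attribute doubles
# your risk, the absolute posterior can remain so small that acting on
# it is a bad trade.

base_rate = 0.001       # assumed P(crime committed against you) in an encounter
relative_risk = 2.0     # assumed risk multiplier for the stereotyped group

posterior = base_rate * relative_risk
print(base_rate)   # 0.001 -> 0.1% baseline
print(posterior)   # 0.002 -> still only 0.2% after the "evidence"
```

A doubling of a tiny probability is still a tiny probability, so a shunning heuristic buys you almost nothing for its cost.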
Seconding the point. If you want to accept "ad hominem" or stereotypes as a useful heuristic, you'll quickly hit things that will get you labeled as ${thing}ist. This is an utterly hypocritical part of our culture.
I've been thinking about this a lot lately, and am coming to the conclusion that it's similar to deadweight loss in a taxation scenario. As a society we've accepted the "lower efficiency" and deadweight loss of rejecting {thing}ism because we don't want any one {thing} to be wrongly persecuted solely on the basis of being such a {thing}.
If you think of it that way, it's a rephrasing of the old and quite universally accepted "I would rather 100 guilty men go free than one innocent man go to prison."
"I would rather 100 deadbeat {class} get hired than one deserving {class} not be hired due to being a {class}."
Let's suppose we want to solutionize our criminal problem. There are 1000 people in the population: 90% white, of whom 5% are criminals, and 10% black, of whom 10% are criminals. (I rather doubt the difference in criminality is 2x, but....)
So, there are 900 white people and 100 black people; if we finalize the black people, we'll have put a big dent in the criminal issue, right?
Well, we reduce our criminals from 55 to 45 while injuring 90 innocent people, 9% of the total population.
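For anyone who wants to check the arithmetic, here's the toy calculation spelled out (the 5%/10% rates are the invented numbers from above, not real statistics):

```python
# The toy population from above: 1000 people, 90% white with a 5% crime
# rate, 10% black with a 10% crime rate. All numbers are invented.

population = 1000
white, black = 900, 100
white_criminals = round(white * 0.05)   # 45
black_criminals = round(black * 0.10)   # 10

print(white_criminals + black_criminals)        # 55 criminals before
print(white_criminals)                          # 45 criminals after removing all black people
print((black - black_criminals) / population)   # 0.09 -> 9% of everyone harmed, all innocent
```

A 2x difference in rates still means the overwhelming majority of the targeted group is innocent, which is why the heuristic fails.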