> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
I think this is a valid point. But to some degree both can be true. I often felt when reading some of these types of texts: wait a second, there is a wealth of thinking on these topics out there; you are not at all situating your elaborate thinking in that broader context. And there absolutely is a willingness to be challenged, and (maybe less so) a willingness to be wrong. But there is also an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever. And importantly, maybe there also isn't as much taste for understanding the limits of vigorous discussion and rational deduction. Adorno and Horkheimer posit a dialectic of rationality and enlightenment; Habermas tries to rebuild rational discourse by analyzing its preconditions. Yet for all the vigorous intellectualism of the rationalists, none of that ever seems to feature even in passing (maybe I have simply missed it...).
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, though, I'd rather have some arrogance plus a willingness to be debated and to be wrong than a timid need to defer to centuries of established thought. The people I've met in person I've always been happy to hang out with and talk to.
That's a fair point. Speaking only for myself, I think I fail to understand why it's important to situate philosophical discussions in the context of all the previous philosophers who have expressed related ideas, rather than simply discussing the ideas in isolation.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
It may be easier to imagine someone trying to derive mathematics all by themselves, since it's less abstract. It's not that they won't come up with anything, it's that everything that even a genius can come up with in their lifetime will be something that the whole of humanity has long since come up with, chewed over, simplified, had a rebellion against, had a counter-rebellion against the rebellion, and ultimately packaged up in a highly efficient manner into a textbook with cross-references to all sorts of angles on it and dozens of elaborations. You can't possibly get through all this stuff all on your own.
The problem is less clear in philosophy than mathematics, but it's still there. It's really easy, on your own, to come up with some idea that the collective intelligence has already revealed to be fatally flawed in some undeniable manner, or that at the very least has very powerful arguments against it that an individual may never consider. The ideas that have survived decades, centuries, and even millennia against the collective weight of humanity assaulting them are going to have a certain character that "something someone came up with last week" will lack.
(That said I am quite heterodox in one way, which is that I'm not a big believer in reading primary sources, at least routinely. Personally I think that a lot of the primary sources noticeably lack the refinement and polish added as humanity chews it over and processes it and I prefer mostly pulling from the result of the process, and not from the one person who happened to introduce a particular idea. Such a source may be interesting for other reasons, but not in my opinion for philosophy.)
Well, sure, but mathematics is the domain for which this holds maybe the most true out of any. It's less true for fields which are not as old.
I'm not sure if this counterpoint generalizes entirely to the original critique, since certainly LessWrongers aren't usually posting about or discussing math as if they've discovered it-- usually substantially more niche topics.
I suppose you're right about that, so I can't make the argument go through by saying "mathematics" vs "philosophy". Maybe what I should say instead is that as some dialectics advance/technologies develop, subfields of both such things sprout up and have a lot of low-hanging fruit to pick, and in these cases, the new work will be descended from but not too-essentially informed by the prior work.
Mathematical logic (at the intersection of math and philosophy) didn't have that many true predecessors and was developed very far by maybe only 5-10 individuals cumulatively; information theory was basically established by Claude Shannon and maybe two other people; various aspects of convex optimization and Fourier analysis were only developed in the 80s or so. So it stands to reason that the AI-related applications of various aspects of philosophy are ripe to be developed now. (By contrast, we don't see, as much, people on LW trying to redo linear algebra from the ground up, nor more "mature" aspects of philosophy.)
(If anything, I think it's more feasible than ever before for a bunch of relative amateurs to make real intellectual contributions non-professionally, noticeably more so than 100 or even 20 years ago. That's what increasing the baseline levels of education/wealth/exposure to information was intended to achieve, on some level, isn't it?)
Generally conservatives only want to conserve the way the world is as they grow up. Whether that involves respect for prior ideas and thought depends on their current ideas - hence the book burning.
Did you discover it from first principles by yourself because it's a natural conclusion anyone would reach if they ponder the subject?
Or because western culture reflects this theme continuously through all the culture and media you've immersed in since you were a child?
Also, the idea is definitely not new to Descartes; you can find echoes of it going back to Plato, so your idea isn't wrong per se. But I think it underrates the extent to which our philosophical preconceptions are culturally constructed.
Because it's a natural conclusion anyone would reach if they ponder the subject. Sorry, I thought I expressed that opinion clearly. I don't think I was exposed to that idea through media by that age.
Odds are good that the millions of people who have also read and considered these ideas have added to what you came up with at 6. Odds are also high that people who have any interest in the topic will probably learn more by reading Descartes and Kant and the vast range of well written educational materials explaining their thoughts at every level. So if you find yourself telling people about these ideas frequently enough to have opinions on how they respond, you are doing both yourself and them a disservice by not bothering to learn how the ideas have already been criticized and extended.
There’s definitely a tension between having a low tolerance for crankery and being open to fresh perspectives. If I’m being charitable to the critics of Rationalism (big “r”), I suppose that they have encountered arguments from Rationalists that struck them as wrong specifically in a way that would have been avoided if the person making the argument had read any of the relevant literature.
How could you say that your views are aligned with those of Descartes and Kant if you have not seriously engaged with their works and what others have written about them?
All serious works in philosophy (Kant especially) are subject to interpretation. Whole research programmes exist around the works of major philosophers, interpreting and building on their works.
One cannot really do justice to e.g. the Critique of Pure Reason by discussing it based on a high level summary of the “main ideas” contained in it. These works have had a major impact on the history of Western philosophy and were groundbreaking at the time (and still are).
I think they basically agree with your point here -- they mention Descartes and Kant to say roughly "I hold basically these ideas, but I don't mention the philosophers' names when I talk about them because 1) I came to them independently, and 2) the people I'm talking to are not familiar with the context so situating our conversation there isn't helpful." Their argument is that you can have philosophical conversations without relying on the context of the canon, and that in a first-level discussion they wouldn't bring up Descartes or Kant.
Here's a very simple explanation as to why it's helpful from a "first principles" style analogy.
Suppose a foot race. Choose two runners of equal aptitude and finite existence. Start one at mile 1 and one at mile 100. Who do you think will get farther?
Not to mention, engaging in human community and discourse is a big part of what it means to be human. Knowledge isn't personal or isolated, we build it together. The "first principles people" understand this to the extent that they have even built their own community of like minded explorers, problem is, a big part of this bond is their choice to be willfully ignorant of large swaths of human intellectual development. Not only is this stupid, it also is a great disservice to your forebears, who worked just as hard to come to their conclusions and who have been building up the edifice of science bit by bit. It's completely antithetical to the spirit of scientific endeavor.
It really depends on why you are having a philosophical discussion. If you are talking among friends, or just because you want to throw interesting ideas around, sure! Be free, have fun.
I come from a physics background. We used to (and still) have a ton of physicists who decide to dabble in a new field, secure in their knowledge that they are smarter than the people doing it, and that anything worthwhile that has already been thought of they can just rederive ad hoc when needed (economists are the only other group that seems to have this tendency...) [1]. It turned out every time that the people who had spent decades working on, studying, discussing and debating the field in question had actually figured important shit out along the way. They might not have come with the mathematical toolbox that physicists had, and outside perspectives that challenge established thinking to prove itself again can be valuable, but when your goal is to actually understand what's happening in the real world, you can't ignore what's been done.
> But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "SlimeTimeMoldTime - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
I don't read HN or LW all that often, but FWIW I actually learned about SlimeMoldTimeMold's "A Chemical Hunger" series from HN and then read its most famous takedown from LessWrong: https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba... (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
> I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
That feels like revisionist history to me. It rose to fame on LessWrong and SlateStarCodex, was promoted by Yudkowsky, and proliferated for about a year and a half before the takedowns finally got traction.
While it was the topic du jour in the rationalist spaces it was very difficult to argue against. I vividly remember how hard it was to convince anyone that SMTM wasn't a good source at the time, because so many people saw Yudkowsky endorse it, saw Scott Alexander give it a shout out, and so on.
Now Yudkowsky has gone back and edited his old endorsement, it has disappeared from the discourse, and many want to pretend the whole episode never happened.
> (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
Exactly my point. It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded. It finally took someone writing it up in the form of rationalist rhetoric and seeding it into LessWrong to break the spell.
This is the trend with rationalist-centric contrarianism: You have to code your articles with the correct prose, structure, and signs to get uptake in the rationalist community. Once you see it, it's hard to miss.
> It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded.
Do you have any examples of this that predate that LW article? Ideally both the critique and its dismissal but just the critique would be great. The original HN submission had a few comments critiquing it but I didn't see anything in depth (or for that matter as strident).
Is this "evidence" one of those silly things that rationalists so much love to talk about?
Don't worry, HN commenters can figure out the truth about Yudkowsky's articles from the first principles. They have already figured out that EAs no longer care about curing malaria, despite https://www.givewell.org/charities/top-charities only being a Google search away.
In the end, they will give you a lecture about how everyone hates people who are smug and talk about things they have no clue about. The lecture will then get a lot of upvotes.
Did you end up presenting the evidence? I'm following the discussion a few days too late, so my apologies if you've already linked to the evidence and ended up deleting it after.
It was Aurornis who made the claim "Yudkowski has gone back and edited his old endorsement" that EnPissant asked evidence for. And nope, he didn't receive it.
Similarly, Aurornis made a claim that "Scott Alexander predicted at least $250 million in damages from Black Lives Matter protests", when in fact (as the very link provided by Aurornis shows) Scott predicted that the probability of such a thing happening was 30%, i.e. it's more likely not to happen.
Elsewhere in this thread, another user, tptacek, claims that "Scott Alexander published some of his best-known posts under his own name". When I asked him for evidence, he said "I know more about this than you, and I'm not invested in this discussion enough to educate you adversarially". Translated: no evidence provided.
From my perspective, this all kinda proves my point.
Is the rationality community the only place where people care about evidence? Of course not.
But is the rationality community a rare place where people can ask for evidence in an informal debate and reasonably expect to actually get it? Unfortunately, I think the evidence we got here points towards yes.
Hacker News is a website mostly visited by smart people who are curious about many things. They are even smart enough to notice that some claims are suspicious, and ask for evidence. But will they receive it? No, they usually won't.
And in the next debate on the same topic, most likely the same false claims will be made again, maybe by people who have learned them in this thread. And the claims will be upvoted again.
This is an aspect where the rationality community strives to do better. It is not about some people being smarter than others, or whatever accusations are typically made. It is about establishing social norms where, for example, people who make unsubstantiated negative claims about someone they don't like are asked to back them up or get downvoted, rather than simply upvoted.
You're spot on here, and I think this is probably also why they appeal to programmers and people in software.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
Yeah, though I think you may be exaggerating how often the "genius insights" rise to the level of correct restatements of basic facts. That happens, but it's not the rule.
I grew up with some friends who were deep into the early roots of online rationalism, even slightly before LessWrong came online. I've been around long enough to recognize the rhetorical devices used in rationalist writings:
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario that predicts a rapid escalation of AI superpowers followed by disaster in only a couple years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right, with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
Yes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
> Yes, I do think that these hedging statements make them immune from the specific criticism that I quoted.
That's my point: Their rhetorical style is interpreted by the in-group as a sort of weird infallibility. Like they've covered both sides and therefore the work is technically correct in all cases. Once they go through the hedging dance, they can put forth the opinion-based point they're trying to make in a very persuasive way, falling back to the hedging in the future if it turns out to be completely wrong.
The writing style looks different depending on where you stand: Reading it in the forward direction makes it feel like the main point is very likely. Reading it in the backward direction you notice the hedging and decide they were also correct. Yet at the time, the rationalist community attaches themselves to the position being pushed.
> But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
edit: my apologies, that was someone else in the thread. I do feel like between the two comments though there is a "damned if you do, damned if you don't". (The original quote above I found absurd upon reading it.)
Haha my thoughts exactly. This HN thread is simultaneously criticizing them for being too assured, not considering other possibilities, and hedging that they may not be right and other plausibilities exist.
This is right, but doesn't actually cover all the options. It's damned if you [write confidently about something and] do or don't [hedge with a probability or "epistemic status"].
But the other option, which is the one the vast majority of people choose, is to not write confidently about everything.
It's fine, there are far worse sins than writing persuasively about tons of stuff and inevitably getting lots of it wrong. But it's absolutely reasonable to criticize this choice, regardless of the level of hedging.
Well, on a meta level, I think their community has decided that in general it's better to post (and subsequently be able to discuss) ideas that one is not yet very confident about, and ideally that's what the "epistemic status" markers are supposed to indicate to the reader.
They can't really be blamed for the fact that others go on to take the ideas more seriously than they intended.
(If anything, I think that at least in person, most rationalists are far less confident and far less persuasive than the typical person in proportion to the amount of knowledge/expertise/effort they have on a given topic, particularly in a professional setting, and they would all be well-served to do at least a normal human amount of "write and explain persuasively rather than as a mechanical report of the facts as you see them".)
(Also, with all communities there will be the more serious and dedicated core of the people, and then those who sort of cargo-cult or who defer much, or at least some, of their thinking to members with more status. This is sort of unavoidable on multiple levels-- for one, it's quite a reasonable thing to do with the amount of information out there, and for another, communities are always comprised of people with varying levels of seriousness, sincere people and grifters, careful thinkers and less careful thinkers, etc. (see mobs-geeks-sociopaths))
(Obviously even with these caveats there are exceptions to this statement, because society is complex and something about propaganda and consequentialism.)
Alternately, I wonder if you think there might be a better way of "writing unconfidently", like, other than not writing at all.
Yeah I think you're getting at what my skepticism stems from: The article with the 55% certain epistemic status and the article with the 95% certain epistemic status are both written with equal persuasive oomph.
In most writing, people write less persuasively on topics they have less conviction in.
> That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
Ok, let's scroll up the thread. When I refer to "the specific criticism that I quoted", and when you say "implying that because they say and do these things, they must be immune from the criticism delivered above": what do you think was the "criticism delivered above"? Because I thought we were talking about contrarian1234's claim, which is exactly this "strawman", and so far you have not appeared to disagree with me that this criticism was invalid.
If putting up evidence about how people were wrong in their predictions, I suggest actually pointing at predictions that were wrong, rather than at recent predictions about the future whose resolution you merely disagree over. If putting up evidence about how people make excuses for failed predictions, I suggest actually showing them doing so, rather than projecting that they will do so and blaming them for your projection.
It's been a while since I've engaged in rationalist debates, so I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads. :) You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
My point wasn't to nit-pick individual predictions, it was a general explanation of how the game is played.
Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
- He predicted at least $250 million in damages from Black Lives Matter protests.
- He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
- He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
It's also noteworthy that a lot of his predictions are about his personal life, his own blogging actions, or [redacted] things. These all get mixed in with a small number of geopolitical, economic, and medical predictions, with the net result of bringing his overall accuracy up.
> He predicted at least $250 million in damages from Black Lives Matter protests.
He says
> 5. At least $250 million in damage from BLM protests this year: 30%
which, by my reading, means he assigns greater-than-even odds that _less_ than $250 million in damages would occur (I have no understanding of whether or not this result is the case, but my reading of your post suggests that you believe that this was indeed the outcome).
You say
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
while he says
> Vitamin D is _not_ generally recognized (eg NICE, UpToDate) as effective COVID treatment: 70%
(emphasis mine)
For what it's worth, your comments in this thread have been very good descriptions of things I became frustrated with after once being quite interested / enthralled with this community / movement!
(I feel like you're probably getting upvotes from people who feel similarly, but sometimes I feel like nobody ever writes "I agree with you" comments, so the impression is that there's only disagreement with some point being made.)
Thanks for sharing. You summed it up well: The community feels like a hidden gem when you first discover it. It feels like there's an energy of intelligence buzzing about interesting topics and sharing new findings.
Then you start encountering the weirder parts. For me, it was the groupthink and hero worship. I just wanted to read interesting takes on new topics, but if you deviated from the popular narrative associated with the heroes (Scott Alexander, Yudkowsky, Cowen, Aaronson, etc.) it felt like the community's immune system identified you as an intruder and started attacking.
I think a lot of people get drawn into the idea of it being a community where they finally belong. Especially on Twitter (where the latest iteration is "TPOT") it's extraordinarily clique-ish and defensive. It feels like high school level social dynamics at play, except the players are equipped with deep reserves of rhetoric and seemingly endless free time to dunk on people and send their followers after people who disagree. It's a very weird contrast to the ideals claimed by the community.
Well nobody sent me; instead I had the strange experience of waking up this morning, seeing an interesting post about Scott Aaronson identifying as a rationalist, and when I check the discussion it's like half of HN has decided it's a good opportunity to espouse everything they dislike about this group of people.
Since when is that what we do here? If he'd written that he'd decided to become vegetarian, would we all be out here talking about how vegetarians are so annoying and one of them even spat on my hamburger one time?
And then of these uncalled-for takedowns, several -- including yours -- don't even seem to be engaging in good-faith discourse, and seem happy to pile on to attacks even when they're completely at odds with their own arguments.
I'm sorry to say it but the one who decided to use their free time to leer at people un-provoked over the internet seems to be you.
Seems like a perfectly reasonable thing to discuss in the comments on this article, and I don't know which other "takedowns" you're referring to, but this person's comments on it have not been in bad faith at all.
(Indeed, I think it's in worse faith to try to guilt trip people who are just expressing critical opinions. It's fine - good, even! - to disagree with those people, but this particular comment has a very "how dare you criticize something!" tone that I don't think is constructive.)
That sounds an awful lot like other people making stuff up to oppress me by sticking a "condescending" label on me without me having any way to contest it.
That sounds an awful lot like a victimhood complex.
"Are you being condescending" is a subjective judgement that other people will make up their own minds about. You can't control what people think about things you say and do, and they aren't "oppressing" you by making up their own minds about that.
> it was a general explanation of how the game is played.
You seem to be trying to insinuate that Alexander et al. are pretending to know how things will turn out and then hiding behind probabilities when they don't turn out that way. This is missing the point completely. The point is that when Alexander assigns an 80% probability to many different outcomes, about 80% of them should occur, and it should not be clear to anyone (including Alexander) ahead of time which 80%.
> He predicted at least $250 million in damages from Black Lives Matter protests.
Edit: I see that the prediction relates to 2021 specifically. In the wake of 2020, I think it was perfectly reasonable to make such a prediction at that confidence level, even if it didn't actually turn out that way.
> He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
If you make many predictions at 70-80% confidence, as he does, you should expect 20-30% of them not to come true. It would in fact be a failure (underconfidence) if they all came true. You are in fact citing a blog post that is exactly about a self-assessment of those confidence levels.
Also, he gave a 70% chance to Vitamin D not being generally recognized as a good COVID treatment.
> These all get mixed in with a small number of geopolitical, economic, and medical predictions with the net result of bringing his overall accuracy up.
The point is not "overall accuracy", but overall calibration - i.e., whether his assigned probabilities end up making sense and being statistically validated.
You have done nothing to establish any correlation between the category of prediction and his accuracy on them.
I genuinely don't understand how you can point to someone's calibration curve where they've broadly done well, and cherry pick the failed predictions they made, and use this not just to claim that they're making bad predictions but that they're slimy about admitting error. What more could you possibly want from someone than a tally of their prediction record graded against the probability they explicitly assigned to it?
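To make the calibration point concrete, here's a toy sketch in Python (the numbers are invented for illustration, not anyone's actual prediction record): group graded predictions by their stated probability and check whether each bucket's hit rate roughly matches it.

    # Toy example with made-up numbers: grading a prediction record by
    # calibration rather than by how many predictions "came true".
    from collections import defaultdict

    predictions = [
        # (stated probability, whether the event actually happened)
        (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
        (0.7, True), (0.7, False), (0.7, True), (0.7, True), (0.7, False),
        (0.3, False), (0.3, False), (0.3, True), (0.3, False), (0.3, False),
    ]

    by_prob = defaultdict(list)
    for p, happened in predictions:
        by_prob[p].append(happened)

    for p in sorted(by_prob):
        outcomes = by_prob[p]
        rate = sum(outcomes) / len(outcomes)
        # Good calibration means the observed rate tracks the stated probability,
        # which necessarily means some 70-80% predictions fail.
        print(f"stated {p:.0%}: happened {rate:.0%} of the time ({len(outcomes)} predictions)")

The point of the exercise is that individual misses at 70-80% aren't evidence of a bad forecaster; a systematic gap between stated probabilities and observed frequencies would be.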
lol, what? That was a civil comment. This seems like an excellent example of the point being made. Replying to a perfectly reasonable but critical comment with "please be civil" is super condescending.
So is stuff like "one man's modus ponens".
Look, we get it, you're talking to people who found this stuff smart and interesting in the past. But we got tired of it. For me, I realized after awhile that the people I most admired in real life were pretty much the opposite of the people I was reading the most on the internet. None of the smartest people I know talk in this snooty online iamsosmart style.
> This isn’t about me being an expert on these topics and getting them exactly right, it’s about me calibrating my ability to tell how much I know about things and how certain I am.
> At least $250 million in damage from BLM protests this year: 30%
Aurornis:
> I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads.
> Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
> He predicted at least $250 million in damages from Black Lives Matter protests.
Is this a "perfectly reasonable but critical comment"?
Am I condescending if I say that predicting a 30% chance that something happens means predicting a 70% chance that it won't happen... so the fact that it didn't happen probably shouldn't be used as "gotcha!"?
(I did waffle upon re-reading my comment and thinking it could have been more civil. But then decided that this person is also being very thin skinned. So I think you're right that we're both right.)
Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, hiring exceptional talent at a time where there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest) and started caring a lot about shrimp welfare (no one else does).
I don't think this validates the criticism that "they don't really ever show a sense of [...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
> caring a lot about shrimp welfare (no one else does).
Ah. They are working out ecology from first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
>I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I have no belief any particular individual can do anything about shrimp welfare more than they can about the intractable problems we do face.
> I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I think it's a result of its complete denial of and ignorance of politics. Because the rationalist and effective altruist movements make a whole lot more sense if you realize they are talking about deeply social and political issues with all politics removed from them. It's technocrat-ism, the poster child of the kind of "there is no alternative" neoliberalism that everyone in the western world has been indoctrinated into since the 80s.
It's a fundamental contradiction: we don't need to talk about politics because we already know liberal democracies and free-market capitalism are the best we are ever going to achieve, while being faced with numerous intractable problems that cannot possibly be related to liberal democracies and free-market capitalism.
The problem is: How do we talk about any issue the world is facing today without ever challenging or even talking about any of the many assumptions the western liberal democracies are based upon? In other words: the problems we face are structural/systemic, but we are not allowed to talk about the structures/systems. That's how you end up with space flight and shrimp welfare and AGI/ASI catastrophizing taking up 99% of everything these people talk about. It's infantile, impotent liberal escapism more than anything else.
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
As far as I know it's never hosted an impress-the-oligarch fundraiser, which as you say would at least have a logic behind it[1] even if it might seem distasteful.
For a philosophy which started out from the point of view that much of mainstream aid was spent with little thought, it was a bit of an end of Animal Farm moment.
(to their credit, a lot of people who identified as EAs were unhappy. If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
[1] a pretty shaky one considering how easy it is to impress American billionaires with Oxford architecture without going to the expense of operating a nearby mansion as a venue, particularly if you happen to be a charitable movement with strong links to the university...
[2] obviously people are only objecting to it for PR purposes because they're not smart enough to realise that capital appreciates and that venues cost money, and definitely not because they've got a pretty good idea how expensive upkeep on little-used medieval venues is and how many alternatives exist if you really care about the cost effectiveness of your retreat, especially to charitable movements affiliated with a university...
> If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
I’m a bit confused by this one.
Are you saying that no-one who identifies as rationalist sneered at the objections? Because I don’t think that’s true.
Nope, I'm implying the people sneering at the objections were the self proclaimed rationalists. Other, less contrarian thinkers were more inclined to spot that a $15m heritage building might not be the epitome of cost-effective venues...
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
Because they were doing so many workshops that buying a building was cheaper than renting one all the time.
You may argue that organizing workshops is wrong (and you might be right about that), but once you choose to do them, it makes sense to choose the cheaper option rather than the more expensive one. That's not rationalist rhetoric, that's just basic economics.
I read rationalist writing for a very long time, and eventually concluded that this part of it was, not universally but predominantly, performative. After you read enough articles from someone, it is clear what they have conviction in, even when they are putting up disclaimers saying they don't.
>Are you sure you're not painting this group with an overly-broad brush?
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it doesn't appear to be seriously applied to their own ideas? What you described is otherwise just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people".
They're behind Anthropic and were behind OpenAI being a nonprofit. They're behind the friendly AI movement and effective altruism.
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
I think GP is saying that their epistemic humility is a pretense, a pose. They do a lot of throat clearing about quantifying their certainty and error checking themselves, and then proceed to bring about very consequential outcomes anyway for absurd reasons with predictable side effects that they should have considered but didn't.
Yeah. It's not that they never express uncertainty so much as they like to express uncertainty as arbitrarily precise and convenient-looking expected value calculations which often look like far more of a rhetorical tool to justify their preferences (I've accounted for the uncertainty and even given a credence as low as 14.2% I'm still right!) than a decision making heuristic...