It's easy to say in retrospect, but it's not like all the movement ever did in its entire existence was buy a single castle. Do you really think they were expecting EA to explode in the public consciousness after the SBF thing and to have all of their line-items exhaustively audited for what could make the best outrage-bait article? That's quite a black swan event.
>and to have all of their line-items exhaustively audited [...]?
That part they should have expected (and I'm sure they did). Scott claims they expected negative social reactions, just maybe not to this extent.
Even in the absence of journalist attention, EAs love to exhaustively audit line-items. Something as big as Wytham Abbey was clearly not going to escape commentary.
I don't follow why SBF somehow excuses this. It seems you're suggesting buying the abbey would have been fine if only it could have been kept quiet, but because of this unforeseeable black swan event people heard about the Secret EA Castle and now we have all this outrage.
I heard about it from the EA Adjacent forum, and I don't see what SBF has to do with the argument that this burned a lot of social capital and reputation within the community for very nebulous gains (compared to buying a counterfactual property). The abbey might be relatively cheap for what it is, but not absolutely cheap. And it turned out to be very expensive in other ways.
My point is that you could take an action which seems reasonable, justified, and thought out within the community you're in (e.g. EA), because you know everyone's working off the same set of axioms and can follow your reasoning. But when you suddenly get the entire world watching, that's no longer true, and you realize you can't explain the axioms to them, because it's a lot easier to say "haha, castle, stupid EA people" than to actually think it through, and besides, they're already scrolling past on their feed to the next bit of outrage-bait.
Thanks. I didn't experience that part, but I can definitely see that this would happen. My position is still that it was a very questionable idea even without outside attention, but the attention definitely didn't help.
William MacAskill wrote and promoted a book in August 2022, in an effort to popularize and justify the sorts of “longtermist” utilitarian views held by (some, increasingly dominant) EA folks. Allegedly this was backed by a multi-million-dollar PR budget, which is why it got so much press at the time, and why so many people were talking about EA philosophies last summer, even before FTX.
I think your response is strange. The EA/longtermist folks have been making a very deliberate and considered effort to promote and popularize their movement. This wasn’t something that “just happened to them.” And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.
This piece, in turn, feels like more of the same promotional effort. Nominally it’s about the author’s kidney donation, but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers and how unfair the world is to criticize them over castles, which incidentally were the best possible use of money. It’s not subtle or incidental at all, and it felt like I was reading a piece of promotional religious material or something.
> And FTX blowing up was a major event in the course of that debate, since it starkly illustrated the weaknesses of a moral philosophy that centers numbers and dollars over the kind of traditional ethical judgement practiced by other charitable movements.
Honestly, I think this just doesn't make sense (and I have no ties with EA whatsoever). You've written it nicely, but it doesn't follow. It makes no sense to judge a movement by its worst possible member, or to say that the overall philosophy doesn't work when one guy obviously didn't follow the philosophy and then had it explode in his face.
The argument, to me, feels akin to "well, vegetarians think they're morally superior, because they don't kill animals, but just look at PETA! PETA does this horrible stuff where they kill animals in shelters[1]." And then perhaps follow it up with "this shows the weaknesses of a moral philosophy that centers around saving animals lives..." but it doesn't. PETA doing shady stuff doesn't illustrate any philosophical failures any more than SBF doing shady stuff. If you want to address the philosophical failures of EA, you may, but I don't see any of that in your comment.
> but it immediately and prominently shifts to arguing about how EAs are great people who will donate their kidneys to strangers
Is this so surprising? Look at the immediate and negative response that EA receives today. Of course anyone mentioning EA would want to add the caveat that "hey, EAs do some good things too, you know - we're not all SBF!"
But I question whether your interpretation of the article is even correct. Simply Ctrl-F "effectiv" gives a bunch of hits in section IV, and then no more hits for the rest of the article, except for a stray one in section VII, which was essentially my impression when reading the first time. He talked about it enough to address the controversy, then moved on. Like a reasonable person, not an author of "promotional religious material".
I like the general idea of EA. But it's a human movement, and thus vulnerable to the mismanagement and corruption that's characteristic of distributed organizations that manage large sums of money. On that note, I've observed four worrying trends in the EA movement over the past couple of years. They are (in no particular order):
1. To focus EA efforts on donations from high-net-worth individuals, often at the cost of giving these individuals massive influence over organizational priorities.
2. To shift (more) towards a longtermist philosophy, wherein "effectiveness" is determined by the wellness of hypothetical future beings, rather than measurable near-term impacts like "lives saved, bed nets distributed." This measurability was supposed to be the bedrock of EA, what kept it from becoming like other wasteful organizations.
3. As a consequence of (1) and (2), to shift the balance of internal priorities away from practical and measurable efforts, towards work like "AI alignment"; spending millions on book tours to promote EA/longtermist ideas; and spending on charities that provide facilities to help EA organizations "come up with ideas about the future."
4. To close ranks against outside criticism of EA's priorities, to refuse any effort at a community-wide re-evaluation of these new priorities, and to avoid posing tough internal questions about donors or spending.
In this new regime, spending on luxurious meeting facilities ("castles") sits on equal footing with malaria nets. Because perhaps the ideas developed therein will save billions of future lives, or the facilities will encourage big new donations, and that's an organizational priority now. In any case, there's no way you can prove this won't happen, because nothing is empirically measurable. Also: castles are awesome.
None of these priorities represent the entirety of EA, but it's obvious that the opinionated people who control these (huge!) purse-strings are gradually gaining organizational control of the movement, to the (I suspect) long-term detriment of the "let's spend effectively on Malaria nets" wing. It's quite sad, but it's also exactly what I'd expect of an organization that is insufficiently defended against this kind of drift.
Far from "everything is great but SBF couldn't have been predicted," you see evidence of all this mismanagement in the events that I mention. First, there's the well-funded MacAskill book, which attempted to mainstream EA/longtermist priorities. This would not have been possible without (1) [and quite possibly, would not have happened without stolen FTX deposits]. Then there's the presence of obvious grifters like SBF within the inner circles of the community, and the fact that nobody with power was asking the obvious question of whether these people should be such an important part of the movement. (It did not take a lot of looking, apparently.)
And finally, you see it in the orgy of undisciplined, poorly-justified spending by EA organizations that occurred right at a time when they were deliberately courting increased prominence, including two different castles. All of this would be perfectly normal in a cult like Scientology, but has no place whatsoever in a mass-scale effective altruist movement.
As someone in the "malaria nets wing", I think you're directionally correct, but overstating things.
> it's obvious that the opinionated people who control these (huge!) purse-strings are gradually gaining organizational control of the movement
This is to some extent true of Open Philanthropy, although the effect looks larger than it is because they're consciously committed to not throwing all of their resources behind whatever they think is the best option. I'm not a fan in principle, but it's not insane. See https://www.openphilanthropy.org/research/worldview-diversif... for their take.
GiveWell remains firmly on the global health side, and I don't see that changing. Here's the first screenful of organizations they've funded in the last year, with approximate numbers:
- $87 million to the Malaria Consortium
- $77 million to Helen Keller International (Vitamin A supplementation)
- $42 million to New Incentives (infant vaccination)
- $17 million to Sightsavers (parasitic worms)
- $8 million to PATH (malaria)
- $6 million to Nutrition International (Vitamin A)
- $5 million to IRD Global (healthcare infrastructure)
Like I said, I don’t think EA is bad or that the situation is irreparable. I just think there are people within “the movement” who are taking it in a worrying direction. And by “taking it” I don’t mean they’ll brainwash everyone in the org, but I do believe they might succeed in capturing the EA brand and redirecting a lot of its organizational capacity toward their priorities.
I think in the medium/long term, the GiveWell wing of the EA movement will either need to (1) develop better organizational strategies to defend against this kind of takeover and keep priorities balanced, or (2) consider breaking off from the rest of the EA movement and recognizing that the brand now means something different. But that new movement will also need to develop some defenses to prevent the same thing from happening.
To use an excellent rationalist phrase: there’s a “Chesterton’s fence” that I think a lot of EA folks have torn down in their attempt to refactor charity and make it more efficient: namely, that the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.
> consider breaking off from the rest of the EA movement and recognizing that the brand now means something different.
Yes, this seems fairly likely to happen.
> the intentions of leadership really matter. And in any human organization that manages large sums of money and power, you have to have sharply-enforced defenses against charismatic leaders who say they’re your friends and share your priorities, but actually want to take the movement in a very different direction.
I don't think this is exactly the issue. Open Philanthropy's leadership is, as far as I can tell, sincerely not trying to dominate the movement. The problem is that they're such an important funding source for charities that they can't help but do so: even if they would never actually withdraw funding to punish people, the mere fact that they could creates a chilling effect.
In principle the same dynamic could apply to GiveWell and global health charities, it's just that there aren't the same sorts of deep ideological differences there: e.g. maybe Alice thinks parasitic worms are the most important problem, and Bob thinks it's malaria, but they're always going to agree that both are extremely bad and should get significant funding.
I think the notion of “sincerity” should be viewed very skeptically here. Not because I believe you’re wrong about anyone’s intentions, but because intentions don’t matter. If I sincerely believe issue A is the most important issue in the field, and I sincerely believe my “obtain donor funds at all costs” strategy is the best way to pursue it, then I can end up dominating the movement without ever intending to. It takes a strong and explicit effort to prevent this from happening, and that defense won’t materialize if everyone is more concerned with being amicable than with vociferously defending the mission.
And of course, once one branch dominates the movement, you’re at the mercy of its continued sincerity. This means you have to assume it will always behave sincerely, and that its organization won’t be captured by opportunists in the future.
I am an EA of long standing, though not very active on the movement side of things. I think SBF is a big deal, at least insofar as it prominently exposed a moral weakness in the movement. A heavy use of Bayes' rule coupled with a focus on the best uses of dollars led, in my eyes, to SBF being seen less as a successful EA celebrity and more as a moral exemplar. We're trying to effectively improve the world! SBF has developed a magic money printing machine which will make the world awesomely better by generating dollars for EA causes! Earn-to-give proven the best strategy because of unlimited upside risk! We should be more like SBF!
I had two problems with this: firstly, that a movement focused on extracting money from the megarich, rather than on 10% tithes from the public more generally, may well generate more cash (good!) but will probably do so most effectively with thought leadership and fundraising teams and donor care and castles for conferences of important people. This loses a distinctive simplicity and non-hierarchy that feels important.
The second is that I had much less sympathy with the 'longtermism' sub-sect than SBF and many of the richer and increasingly more prominent Californian types do[0]. And that the Good Old-Fashioned EA focus on cheap but unglamorous interventions (malaria, cash transfers, etc.) was being increasingly overshadowed in the public eye.
So I don't think it reflects badly on EAs that SBF turned out to be shady (as you say, all groups have a worst member). But it should prompt some awkward questions about the extent that the movement was taken in by the smoke-and-mirrors act. And ideally a reconsideration about whether chasing the money and interests of SBF-types is the right direction for the movement.
[0]: I find it suspicious that the equations demonstrating the infinite importance of fairly recherché concerns on the specifics of AI safety, for example, just happen to line up with the research interests of some EA-adjacent people. That suggests people aren't discounting sufficiently for their own group biases.
>to have all of their line-items exhaustively audited
Historically, yes, that's the entire point of the movement: audit everything, with the goal of doing the most good and not wasting money on frivolities. The auditing approach faded as the movement grew and became more longtermist, though I (somewhat) expect it to come back now that we're post-ZIRP and post-SBF.